Patents by Inventor Cyril C. Marrion Jr.
Cyril C. Marrion Jr. has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11544874
Abstract: This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space that uses at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generates calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
Type: Grant
Filed: May 22, 2020
Date of Patent: January 3, 2023
Assignee: Cognex Corporation
Inventors: Gang Liu, Guruprasad Shivaram, Cyril C. Marrion, Jr.
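The interpolation/extrapolation step described in this abstract can be sketched as per-element linear interpolation between two per-plane calibration transforms measured at known heights. This is only an illustrative simplification, not the patented method: the function name, the use of 3x3 homogeneous plane-to-image matrices, and the purely linear model are all assumptions.

```python
import numpy as np

def interpolate_calibration(cal_z0, cal_z1, z0, z1, z):
    """Estimate a calibration transform at height z by linearly
    interpolating (or extrapolating, when z lies outside [z0, z1])
    between transforms measured at heights z0 and z1.
    cal_z0, cal_z1: 3x3 plane-to-image homogeneous transforms."""
    t = (z - z0) / (z1 - z0)  # t < 0 or t > 1 extrapolates
    return (1.0 - t) * cal_z0 + t * cal_z1

# Hypothetical calibrations measured at z = 0 mm and z = 10 mm
A0 = np.array([[2.0, 0.0, 100.0],
               [0.0, 2.0,  50.0],
               [0.0, 0.0,   1.0]])
A1 = np.array([[2.2, 0.0, 104.0],
               [0.0, 2.2,  52.0],
               [0.0, 0.0,   1.0]])
A5 = interpolate_calibration(A0, A1, 0.0, 10.0, 5.0)  # mid-height estimate
```

In this sketch a third, physically inaccessible plane never needs a calibration object placed on it; its transform is derived entirely from the two measured heights.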
-
Patent number: 11488322
Abstract: This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected locations in those images.
Type: Grant
Filed: December 8, 2010
Date of Patent: November 1, 2022
Assignee: Cognex Corporation
Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr.
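The "expected location" step above can be illustrated by applying a 2x3 affine transform to a feature found in the first camera's image to predict where to search in a second camera's image. This is a minimal sketch: the function name is hypothetical, and the transform is simply given here rather than computed from camera intrinsics/extrinsics as the abstract describes.

```python
import numpy as np

def expected_location(feature_xy, affine):
    """Map a feature coordinate from camera 1's image into the expected
    search location in camera 2's image via a 2x3 affine transform.
    feature_xy: (x, y) pixel coordinate; affine: 2x3 matrix."""
    x, y = feature_xy
    v = affine @ np.array([x, y, 1.0])  # homogeneous point, affine map
    return float(v[0]), float(v[1])

# Hypothetical inter-camera affine: identity rotation/scale plus a shift
T = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0]])
```

A search window centered on the returned point (rather than the whole second image) is what makes the refinement step in the abstract cheap.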
-
Patent number: 11049280
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Grant
Filed: May 13, 2019
Date of Patent: June 29, 2021
Assignee: Cognex Corporation
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
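For the first scenario above (the same workpiece features identified at both locations), tying the two coordinate spaces together amounts to estimating a transform that maps one set of feature coordinates onto the other. A standard least-squares rigid-transform (Kabsch/SVD) fit is sketched below; this is an illustrative stand-in under the assumption of a 2D rigid relation, not the patented procedure, and the function name is invented.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation R and translation t mapping feature points
    seen at location 1 (src) onto the same features at location 2 (dst).
    src, dst: (N, 2) arrays of corresponding feature coordinates."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Accumulating correspondences from multiple workpieces at different poses, as the abstract suggests, simply grows `src`/`dst` and tightens this same least-squares fit.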
-
Publication number: 20210012532
Abstract: This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space that uses at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generates calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
Type: Application
Filed: May 22, 2020
Publication date: January 14, 2021
Inventors: Gang Liu, Guruprasad Shivaram, Cyril C. Marrion, Jr.
-
Patent number: 10664994
Abstract: This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space that uses at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generates calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
Type: Grant
Filed: February 25, 2013
Date of Patent: May 26, 2020
Assignee: Cognex Corporation
Inventors: Gang Liu, Guruprasad Shivaram, Cyril C. Marrion, Jr.
-
Publication number: 20200065995
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Application
Filed: May 13, 2019
Publication date: February 27, 2020
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
-
Patent number: 10290118
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Grant
Filed: July 29, 2016
Date of Patent: May 14, 2019
Assignee: Cognex Corporation
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
-
Publication number: 20170132807
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Application
Filed: July 29, 2016
Publication date: May 11, 2017
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
-
Patent number: 9124873
Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
Type: Grant
Filed: October 24, 2013
Date of Patent: September 1, 2015
Assignee: Cognex Corporation
Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
-
Publication number: 20140118500
Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
Type: Application
Filed: October 24, 2013
Publication date: May 1, 2014
Applicant: Cognex Corporation
Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
-
Patent number: 8326084
Abstract: A system and method of auto-exposure control is provided for image acquisition hardware, using three-dimensional information to identify a region(s) of interest within an acquired 2D image or images upon which to apply traditional auto-exposure techniques. By performing auto-exposure analysis over the region of interest, the acquisition property settings can be assigned such that the light levels within the region of interest fall within the linear range, producing sufficient grayscale information for identifying particular objects and profiles in subsequently acquired images. For example, in a machine vision application that detects people passing through a doorway, the region of interest can be the portion of the 2D image that generated 3D features of a head-and-shoulders profile within a 3D model of the doorway scene. With higher-quality images, detection of people candidates within the monitored scene becomes more accurate.
Type: Grant
Filed: December 21, 2004
Date of Patent: December 4, 2012
Assignee: Cognex Technology and Investment Corporation
Inventors: Cyril C. Marrion, Jr., Sanjay Nichani
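The core idea of restricting auto-exposure analysis to a region of interest can be sketched as follows: measure the mean gray level inside the ROI only, then scale the exposure toward a target level under a rough linear-response assumption. This is a hedged sketch, not the patented implementation; the function name, the `(x, y, w, h)` ROI convention, and the target of 128 are all assumptions.

```python
def roi_autoexposure(image, roi, current_exposure, target_mean=128.0):
    """Adjust exposure so the mean gray level inside the region of
    interest approaches target_mean, assuming sensor response is
    roughly linear in exposure time.
    image: 2D sequence of 8-bit gray values; roi: (x, y, w, h)."""
    x, y, w, h = roi
    pixels = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    mean = sum(pixels) / len(pixels)
    gain = target_mean / max(mean, 1.0)   # avoid divide-by-zero on dark ROIs
    return current_exposure * gain

# Uniformly under-exposed ROI (gray level 64): exposure should double
image = [[64] * 8 for _ in range(8)]
new_exposure = roi_autoexposure(image, (2, 2, 4, 4), current_exposure=10.0)
```

Because only the ROI pixels enter the statistic, a bright floor or dark background outside the ROI cannot skew the exposure, which is the point of the abstract's ROI-based approach.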
-
Publication number: 20120147149
Abstract: This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected locations in those images.
Type: Application
Filed: December 8, 2010
Publication date: June 14, 2012
Applicant: Cognex Corporation
Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr.
-
Publication number: 20110317907
Abstract: A system and method is provided for remotely analyzing machine vision data. An indication of a choice of vision software is sent from a first computer to a remote second computer. The second computer, using the selected vision software, processes image data to provide a result that is transmitted from the second computer to a designated location.
Type: Application
Filed: June 13, 2011
Publication date: December 29, 2011
Applicant: Cognex Corporation
Inventors: John Petry, Cyril C. Marrion, Jr., Andrew Eames
-
Patent number: 7962898
Abstract: A system and method is provided for remotely analyzing machine vision data. An indication of a choice of vision software is sent from a first computer to a remote second computer. The second computer, using the selected vision software, processes image data to provide a result that is transmitted from the second computer to a designated location.
Type: Grant
Filed: April 27, 2001
Date of Patent: June 14, 2011
Assignee: Cognex Corporation
Inventors: John Petry, Cyril C. Marrion, Jr., Andrew Eames
-
Patent number: 7623674
Abstract: Enhanced portal security is provided through stereoscopy, including a stereo door sensor for detecting and optionally preventing access violations, such as piggybacking and tailgating. A portal security system can include a 3D imaging system that generates a target volume from plural 2D images of a field of view about a portal, and a processor that detects and tracks people candidates moving through the target volume to detect a portal access event.
Type: Grant
Filed: November 5, 2003
Date of Patent: November 24, 2009
Assignee: Cognex Technology and Investment Corporation
Inventors: Sanjay Nichani, Cyril C. Marrion, Jr., Robert Wolff, David Shatz, Raymond A. Fix, Gene Halbrooks
-
Patent number: 7383536
Abstract: A machine vision system located at a user site is programmed from a remote site using a program development system connected via a LAN, WAN, or the Internet. A user application program is developed and tested from the remote location and then downloaded through the network to the machine vision system. Libraries of common software module objects are stored at both locations and used during user program development and implementation in the machine vision system.
Type: Grant
Filed: August 13, 2003
Date of Patent: June 3, 2008
Inventors: John P. Petry, III, Cyril C. Marrion, Jr.
-
Patent number: 6963338
Abstract: The creation of accurate geometric description-based models of objects in a machine vision system can be performed in a computer-assisted process. A rough model of an image of an object forms the basis of the model. The model is refined through the application of machine vision tools and techniques so as to provide a model including a geometric description. The model can be used in machine vision applications that may be used, for example, to compare images of objects under inspection to the model.
Type: Grant
Filed: July 31, 1998
Date of Patent: November 8, 2005
Assignee: Cognex Corporation
Inventors: Ivan A. Bachelder, Cyril C. Marrion, Jr.
-
Patent number: 6571006
Abstract: Methods are disclosed that measure the extent of a group of objects within a digital image by comparing a signature, representative of the relationship of the objects to one another, against instances of a measured signature at varying positions within the image. The position(s) where the signatures vary by a predetermined comparison criterion indicate the extent of the group of objects. It is disclosed that the comparison to a reference signature allows proper identification of measured signatures despite noise in the digital image. A preferred embodiment uses the CALIPER TOOL to generate signatures of edges, where the window of the CALIPER TOOL has a projection axis substantially parallel to the extent being measured. A preferred application is described wherein the method measures the length of leads in a lead set.
Type: Grant
Filed: November 30, 1998
Date of Patent: May 27, 2003
Assignee: Cognex Corporation
Inventors: Albert A. Montillo, Ivan A. Bachelder, Cyril C. Marrion, Jr.
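The comparison-against-reference step above can be illustrated with a 1D sketch: slide a reference signature along a measured signature and report the offset where the two agree within a tolerance. This is a toy stand-in (the function name and the sum-of-absolute-differences criterion are assumptions); the patent's signatures come from caliper-tool edge projections, not raw samples.

```python
def match_signature(measured, reference, max_diff=1.0):
    """Slide `reference` over `measured` and return the first offset
    where the summed absolute difference falls within max_diff,
    i.e. where the signatures agree per the comparison criterion.
    Returns -1 if no position satisfies the criterion."""
    n, m = len(measured), len(reference)
    for offset in range(n - m + 1):
        diff = sum(abs(measured[offset + i] - reference[i]) for i in range(m))
        if diff <= max_diff:
            return offset
    return -1
```

The returned offset marks one end of the group of objects; repeating the match from the other direction would bound its full extent, which is the quantity the abstract's methods measure.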
-
Patent number: 6526165
Abstract: A method and apparatus are disclosed for refining a rough geometric description (GD) and a rough pose of an object having extensions. The invention locates anchor points within an image of the object and uses the anchor points to align the rough GD in at least one dimension. In one embodiment, the anchor points are the tips of the extensions of the object; the rough GD of the object is then aligned along the angular orientation indicated by the tips. Thereafter, other dimensions of the rough GD and the rough pose are measured, measuring the dimensions having fewer unknowns first. The rough GD and the rough pose are then updated to provide the refined GD and refined pose. For one measurement, the extent of a region is measured using the expected area of the region to threshold the region and segment it from the remainder of the image before measuring its extent. An application of refining a GD of a leaded object is disclosed.
Type: Grant
Filed: November 30, 1998
Date of Patent: February 25, 2003
Assignee: Cognex Corporation
Inventors: Albert A. Montillo, Ivan A. Bachelder, Cyril C. Marrion, Jr.
-
Patent number: 6408429
Abstract: An improved vision system is provided for identifying and assessing features of an article. Systems are provided for developing feature assessment programs, which, when deployed, may inspect parts and/or provide position information for guiding automated manipulation of such parts. The improved system is easy to use and facilitates the development of versatile and flexible article assessment programs. In one aspect, the system comprises a set of step tools from which a set of step objects is instantiated. The set of step tools may comprise machine vision step objects that comprise routines for processing an image of the article to provide article feature information. A control flow data structure and a data flow data structure may each be provided. The control flow data structure charts a flow of control among the step objects.
Type: Grant
Filed: March 10, 2000
Date of Patent: June 18, 2002
Assignee: Cognex Corporation
Inventors: Cyril C. Marrion, Jr., Ivan A. Bachelder, Edward A. Collins, Jr., Masayoki Kawata, Sateesh G. Nadabar
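The step-tool/step-object architecture in this last abstract can be sketched as a small pipeline: step objects wrap image-processing routines, and an ordered list plays the role of the control flow data structure that charts control among them. All names here (`Step`, `run_pipeline`, the threshold/count steps) are hypothetical illustrations, not the patented design.

```python
class Step:
    """A step object instantiated from a 'step tool': it wraps one
    routine that processes image data and returns feature information."""
    def __init__(self, name, routine):
        self.name = name
        self.routine = routine

    def run(self, data):
        return self.routine(data)

def run_pipeline(steps, data):
    """Execute steps in the order given by the control-flow structure,
    threading each step's output into the next (the data flow)."""
    for step in steps:
        data = step.run(data)
    return data

# A hypothetical two-step assessment program: binarize, then count
pipeline = [
    Step("threshold", lambda img: [[1 if p > 128 else 0 for p in row]
                                   for row in img]),
    Step("count", lambda img: sum(sum(row) for row in img)),
]
```

Separating the control-flow chart (the list) from the steps themselves is what lets an assessment program be rearranged without rewriting any individual routine.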