Patents by Inventor Cyril C. Marrion

Cyril C. Marrion has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10706528
    Abstract: Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and the second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image with image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: July 7, 2020
    Assignee: Cognex Corporation
    Inventors: Cyril C. Marrion, Nickolas James Mullan
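
    The comparison step described in the entry above lends itself to a short illustration. The Python sketch below assumes the two images are already rectified and that the disparity field is supplied as per-pixel row/column offsets mapping the first image into the second; the function name, the mean-difference decision rule, and the threshold are illustrative assumptions, not details taken from the patent.

      import numpy as np

      def detect_object_on_surface(rect1, rect2, disp_rows, disp_cols, threshold=25.0):
          """Compare two rectified images through a disparity-field mapping and
          decide whether the difference data indicates an object on the surface.

          rect1, rect2         : 2D grayscale arrays, already rectified
          disp_rows, disp_cols : per-pixel offsets mapping rect1 pixels into rect2
          threshold            : illustrative decision threshold on the mean difference
          """
          h, w = rect1.shape
          rows, cols = np.indices((h, w))

          # Apply the disparity-field mapping: where each rect1 element lands in rect2.
          r2 = np.clip(rows + disp_rows, 0, h - 1).astype(int)
          c2 = np.clip(cols + disp_cols, 0, w - 1).astype(int)

          # Difference data: absolute intensity difference between mapped image elements.
          difference = np.abs(rect1.astype(float) - rect2[r2, c2].astype(float))

          # An empty target surface should leave only small residuals; large ones
          # suggest something is present on the surface.
          return difference.mean() > threshold, difference
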
  • Patent number: 10664994
    Abstract: This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space, using at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generate calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: May 26, 2020
    Assignee: Cognex Corporation
    Inventors: Gang Liu, Guruprasad Shivaram, Cyril C. Marrion, Jr.
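
    The interpolation/extrapolation step in the entry above has a simple geometric reading: for a pinhole camera, the ray through an image point crosses two parallel calibrated planes at known physical points, and its crossing point with any other parallel plane varies linearly with height. The Python sketch below assumes each per-plane calibration is available as a callable mapping image points to physical (x, y) points; the names and structure are assumptions for illustration, not the patented procedure.

      import numpy as np

      def interpolate_plane_calibration(map_z0, z0, map_z1, z1, z_query):
          """Build an image-to-physical mapping for a parallel plane at height
          z_query by interpolating/extrapolating between two calibrated planes.

          map_z0, map_z1 : callables taking image points (N, 2) and returning the
                           physical (x, y) points (N, 2) in the planes at z0 and z1
          """
          if z1 == z0:
              raise ValueError("the two calibrated planes must be at different heights")
          t = (z_query - z0) / (z1 - z0)      # interpolation factor; may extrapolate

          def map_zq(image_pts):
              pts = np.asarray(image_pts, dtype=float)
              p0 = np.asarray(map_z0(pts))    # where each image ray crosses plane z0
              p1 = np.asarray(map_z1(pts))    # where the same ray crosses plane z1
              # The crossing point with the plane at z_query lies on the line
              # through p0 and p1, linearly parameterized by height.
              return (1.0 - t) * p0 + t * p1

          return map_zq
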
  • Publication number: 20200065995
    Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
    Type: Application
    Filed: May 13, 2019
    Publication date: February 27, 2020
    Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
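
    For the first scenario in the entry above (the same workpiece features imaged and identified at both locations), the tie between the two coordinate spaces can be estimated as the rigid transform that best maps the accumulated feature positions measured at the first location onto those measured at the second. The Python sketch below uses a standard least-squares (Kabsch) fit on 2D points; it is an illustration under those assumptions, not the patented method.

      import numpy as np

      def tie_coordinate_spaces(pts_loc1, pts_loc2):
          """Estimate R, t such that pts_loc2 ~= R @ p + t for each point p in
          pts_loc1, from corresponding features accumulated at the two locations.

          pts_loc1, pts_loc2 : (N, 2) arrays of corresponding feature coordinates
          """
          p = np.asarray(pts_loc1, dtype=float)
          q = np.asarray(pts_loc2, dtype=float)
          pc, qc = p.mean(axis=0), q.mean(axis=0)

          # Kabsch/Procrustes: SVD of the cross-covariance of the centered point sets.
          H = (p - pc).T @ (q - qc)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
          R = Vt.T @ np.diag([1.0, d]) @ U.T
          t = qc - R @ pc
          return R, t
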
  • Publication number: 20190308326
    Abstract: Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot and a 3D sensor in communication with a control system. The control system is configured to move the robot to poses and, for each pose, capture a 3D image of the calibration target features and record the robot joint angles. The control system is configured to obtain initial values for robot calibration parameters, and determine initial values for hand-eye calibration parameters based on the initial values for the robot calibration parameters, the 3D image, and joint angles. The control system is configured to determine final values for the hand-eye calibration parameters and robot calibration parameters by refining the hand-eye calibration parameters and robot calibration parameters to minimize a cost function.
    Type: Application
    Filed: June 21, 2019
    Publication date: October 10, 2019
    Inventors: Lifeng Liu, Cyril C. Marrion, Tian Gan, David Michael, Han Xiao
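
    The refinement described in the entry above is, in structure, a joint nonlinear least-squares problem: the robot (kinematic) parameters and the hand-eye parameters are stacked into one vector and optimized together against the observed calibration-target features. The Python sketch below shows only that structure, using scipy; the forward model predict_features is an assumed placeholder, and nothing here reproduces the patented cost function. The same outline applies to the robot or motion stage variants of this calibration listed further down.

      import numpy as np
      from scipy.optimize import least_squares

      def calibrate_simultaneously(robot_params0, handeye_params0,
                                   joint_angles, observed_features, predict_features):
          """Jointly refine robot and hand-eye parameters by minimizing a cost built
          from feature residuals over all captured poses.

          predict_features(robot_params, handeye_params, joints) -> (N, 2) array is
          an assumed, user-supplied forward model; observed_features is a list of
          matching (N, 2) arrays, one per pose, and joint_angles a list of joint
          readings, one per pose.
          """
          n_robot = len(robot_params0)

          def residuals(x):
              robot_params, handeye_params = x[:n_robot], x[n_robot:]
              res = []
              for joints, observed in zip(joint_angles, observed_features):
                  predicted = predict_features(robot_params, handeye_params, joints)
                  res.append((predicted - np.asarray(observed)).ravel())
              return np.concatenate(res)

          x0 = np.concatenate([np.asarray(robot_params0, float),
                               np.asarray(handeye_params0, float)])
          solution = least_squares(residuals, x0)     # refine both parameter sets at once
          return solution.x[:n_robot], solution.x[n_robot:]
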
  • Patent number: 10290118
    Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: May 14, 2019
    Assignee: Cognex Corporation
    Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
  • Publication number: 20190026887
    Abstract: Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and the second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image with image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
    Type: Application
    Filed: May 21, 2018
    Publication date: January 24, 2019
    Inventors: Cyril C. Marrion, Nickolas James Mullan
  • Publication number: 20190015991
    Abstract: Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot or motion stage and a camera in communication with a control system. The control system is configured to move the robot or motion stage to poses and, for each pose, capture an image of the calibration target features and record the robot joint angles or motion stage encoder counts. The control system is configured to obtain initial values for robot or motion stage calibration parameters, and determine initial values for hand-eye calibration parameters based on the initial values for the robot or motion stage calibration parameters, the image, and joint angles or encoder counts. The control system is configured to determine final values for the hand-eye calibration parameters and robot or motion stage calibration parameters by refining the hand-eye calibration parameters and robot or motion stage calibration parameters to minimize a cost function.
    Type: Application
    Filed: September 17, 2018
    Publication date: January 17, 2019
    Inventors: Lifeng Liu, Cyril C. Marrion, Tian Gan
  • Patent number: 10097811
    Abstract: Described are methods, systems, and apparatus, including computer program products, for finding correspondences of one or more parts across the camera images of two or more cameras. For a first part in a first camera image of a first camera, a first 3D ray that is a first back-projection of a first feature coordinate of the first part in the first camera image to a 3D physical space is calculated. For a second part in a second camera image of a second camera, a second 3D ray that is a second back-projection of a second feature coordinate of the second part in the second camera image to the 3D physical space is calculated, wherein the first feature coordinate and the second feature coordinate correspond to a first feature as identified in a model. A first distance between the first 3D ray and the second 3D ray is calculated.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: October 9, 2018
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Cyril C. Marrion
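
    The key quantity in the entry above is the distance between the two back-projected 3D rays: rays from truly corresponding parts in the two cameras should (nearly) intersect, so a small distance indicates a correspondence. The Python sketch below computes the standard closest distance between two rays, treating them as infinite lines; it is an illustration, not the patented matching procedure.

      import numpy as np

      def ray_distance(o1, d1, o2, d2, eps=1e-12):
          """Smallest distance between two 3D rays (origin o, direction d),
          treating them as infinite lines for simplicity."""
          o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
          o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
          n = np.cross(d1, d2)                        # common normal of the two lines
          if np.linalg.norm(n) < eps:                 # (nearly) parallel lines
              return np.linalg.norm(np.cross(o2 - o1, d1)) / np.linalg.norm(d1)
          return abs(np.dot(o2 - o1, n)) / np.linalg.norm(n)

      # A true correspondence should give a distance near zero (up to calibration
      # and feature-extraction error); a large distance rules the pairing out.
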
  • Patent number: 10076842
    Abstract: Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot or motion stage and a camera in communication with a control system. The control system is configured to move the robot or motion stage to poses and, for each pose, capture an image of the calibration target features and record the robot joint angles or motion stage encoder counts. The control system is configured to obtain initial values for robot or motion stage calibration parameters, and determine initial values for hand-eye calibration parameters based on the initial values for the robot or motion stage calibration parameters, the image, and joint angles or encoder counts. The control system is configured to determine final values for the hand-eye calibration parameters and robot or motion stage calibration parameters by refining the hand-eye calibration parameters and robot or motion stage calibration parameters to minimize a cost function.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: September 18, 2018
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Cyril C. Marrion, Tian Gan
  • Patent number: 9978135
    Abstract: Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and the second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image with image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: May 22, 2018
    Assignee: Cognex Corporation
    Inventors: Cyril C. Marrion, Nickolas James Mullan
  • Publication number: 20180089831
    Abstract: Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot or motion stage and a camera in communication with a control system. The control system is configured to move the robot or motion stage to poses and, for each pose, capture an image of the calibration target features and record the robot joint angles or motion stage encoder counts. The control system is configured to obtain initial values for robot or motion stage calibration parameters, and determine initial values for hand-eye calibration parameters based on the initial values for the robot or motion stage calibration parameters, the image, and joint angles or encoder counts. The control system is configured to determine final values for the hand-eye calibration parameters and robot or motion stage calibration parameters by refining the hand-eye calibration parameters and robot or motion stage calibration parameters to minimize a cost function.
    Type: Application
    Filed: September 28, 2016
    Publication date: March 29, 2018
    Inventors: Lifeng Liu, Cyril C. Marrion, Tian Gan
  • Patent number: 9734419
    Abstract: This invention provides a system and method to validate the accuracy of camera calibration in a single or multiple-camera embodiment, utilizing either 2D cameras or 3D imaging sensors. It relies upon an initial calibration process that generates and stores camera calibration parameters and residual statistics based upon images of a first calibration object. A subsequent validation process (a) acquires images of the first calibration object or a second calibration object having a known pattern and dimensions; (b) extracts features of the images of the first calibration object or the second calibration object; (c) predicts the expected positions of features of the first calibration object or the second calibration object using the camera calibration parameters; and (d) computes a set of discrepancies between positions of the extracted features and the predicted positions of the features.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: August 15, 2017
    Assignee: Cognex Corporation
    Inventors: Xiangyun Ye, Aaron S. Wallack, Guruprasad Shivaram, Cyril C. Marrion, David Y. Li
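
    Steps (c) and (d) in the entry above amount to a residual check: predict where the calibration-object features should appear using the stored calibration parameters, compare against the features actually extracted from the validation image, and flag the camera when the discrepancies exceed the residual statistics recorded at calibration time. The Python sketch below assumes a user-supplied projection callable and a simple multiple-of-stored-RMS threshold; both are illustrative choices, not details from the patent.

      import numpy as np

      def validate_calibration(extracted_pts, object_pts, project, stored_rms, factor=3.0):
          """Compare extracted feature positions against positions predicted from the
          stored camera calibration parameters.

          extracted_pts : (N, 2) feature positions found in the validation image
          object_pts    : (N, k) known calibration-object feature coordinates
          project       : assumed callable applying the stored calibration, mapping
                          object_pts to predicted image positions (N, 2)
          stored_rms    : RMS residual recorded during the initial calibration
          """
          predicted = project(np.asarray(object_pts, dtype=float))
          discrepancies = np.linalg.norm(predicted - np.asarray(extracted_pts), axis=1)
          rms = float(np.sqrt(np.mean(discrepancies ** 2)))
          # Calibration is treated as still valid while the validation residual stays
          # close to the residual statistics stored by the initial calibration.
          return rms <= factor * stored_rms, discrepancies
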
  • Publication number: 20170132807
    Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
    Type: Application
    Filed: July 29, 2016
    Publication date: May 11, 2017
    Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
  • Publication number: 20170032537
    Abstract: Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
    Type: Application
    Filed: July 27, 2016
    Publication date: February 2, 2017
    Inventors: Tuotuo Li, Lifeng Liu, Cyril C. Marrion
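
    Once the two transformations in the entry above are known, measurements from either image sensor can be expressed in the motion rendering device's coordinate system, or chained from one sensor to the other through it. The Python sketch below composes 2D rigid transforms represented as homogeneous 3x3 matrices; the numeric values and naming conventions are assumptions made only for illustration.

      import numpy as np

      def rigid_2d(theta, tx, ty):
          """Homogeneous 3x3 transform for a 2D rotation by theta plus a translation."""
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, tx],
                           [s,  c, ty],
                           [0.0, 0.0, 1.0]])

      # Assumed, illustrative values for the two transformations:
      # motion coordinates <- sensor 1, and motion coordinates <- sensor 2.
      T_motion_from_cam1 = rigid_2d(np.deg2rad(10.0), 100.0, 40.0)
      T_motion_from_cam2 = rigid_2d(np.deg2rad(-5.0), 250.0, 42.0)

      # Chain them to map a point seen by the second sensor into the first sensor's
      # coordinate system via the shared motion coordinate system.
      T_cam1_from_cam2 = np.linalg.inv(T_motion_from_cam1) @ T_motion_from_cam2
      p_cam2 = np.array([12.0, 7.5, 1.0])             # homogeneous point in sensor 2
      p_cam1 = T_cam1_from_cam2 @ p_cam2
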
  • Publication number: 20160253793
    Abstract: Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and the second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image with image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
    Type: Application
    Filed: February 27, 2015
    Publication date: September 1, 2016
    Inventors: Cyril C. Marrion, Nickolas James Mullan
  • Patent number: 9305231
    Abstract: Described are machine vision systems, methods, and apparatus, including computer program products for associating codes with objects. In an embodiment, a machine vision system includes an area-scan camera having a field of view (FOV), the area-scan camera disposed relative to a first workspace such that the FOV covers at least a portion of the first workspace, and a dimensioner disposed relative to a second workspace. The machine vision system includes a machine vision processor configured to: determine an image location of a code in an image; determine a 3D ray in a shared coordinate space that is a back-projection of the image location of the code; determine one or more surfaces of one or more objects based on dimensioning data; determine a first surface of the one or more surfaces that intersects the 3D ray; and associate the code with an object associated with the first surface.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: April 5, 2016
    Assignee: Cognex Corporation
    Inventors: Cyril C. Marrion, James Negro, Matthew Engle
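
    A simplified reading of the association logic in the entry above: back-project the code's image location to a 3D ray in the shared coordinate space, then find the first object surface that the ray intersects, using surfaces built from the dimensioning data. The Python sketch below approximates each object by the axis-aligned top face of its bounding box; the data layout and helper names are assumptions, not the patented method.

      import numpy as np

      def associate_code_with_object(ray_origin, ray_dir, boxes, eps=1e-9):
          """Return the index of the object whose top surface the back-projected ray
          hits first, or None if the ray hits nothing.

          boxes : list of (xmin, xmax, ymin, ymax, z_top) tuples built from the
                  dimensioning data, one per object
          """
          o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
          best, best_t = None, np.inf
          for i, (xmin, xmax, ymin, ymax, z_top) in enumerate(boxes):
              if abs(d[2]) < eps:
                  continue                            # ray parallel to the top plane
              t = (z_top - o[2]) / d[2]               # ray/plane intersection parameter
              if t <= 0 or t >= best_t:
                  continue                            # behind the camera, or farther away
              x, y = o[0] + t * d[0], o[1] + t * d[1]
              if xmin <= x <= xmax and ymin <= y <= ymax:
                  best, best_t = i, t                 # closest surface hit so far
          return best
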
  • Patent number: 9124873
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture that are moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: September 1, 2015
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
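
    The practical difference between the camera assemblies in the entry above is how a 2D feature back-projects to a 3D ray: a perspective camera's rays fan out from its optical center, while a non-perspective (e.g. telecentric) lens yields parallel rays along the optical axis. Rays from both kinds of assemblies can then be intersected (for example with the ray-distance test sketched after the patent 10097811 entry above) to triangulate 3D features and a pose. The Python sketch below uses deliberately simplified camera models; the function names and conventions are assumptions, not details from the patent.

      import numpy as np

      def backproject_perspective(u, v, K, cam_to_world):
          """Ray for a pinhole camera: origin at the optical center, direction through
          pixel (u, v). K is the 3x3 intrinsic matrix; cam_to_world is a 4x4 pose."""
          d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
          R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
          return t, R @ (d_cam / np.linalg.norm(d_cam))

      def backproject_telecentric(u, v, scale, cam_to_world):
          """Ray for a telecentric (non-perspective) camera: every ray is parallel to
          the optical axis, and the pixel only fixes the ray's lateral offset
          (scale = physical units per pixel)."""
          R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
          origin = t + R @ np.array([u * scale, v * scale, 0.0])
          direction = R @ np.array([0.0, 0.0, 1.0])   # optical axis in world coordinates
          return origin, direction
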
  • Publication number: 20150036876
    Abstract: Described are machine vision systems, methods, and apparatus, including computer program products for associating codes with objects. In an embodiment, a machine vision system includes an area-scan camera having a field of view (FOV), the area-scan camera disposed relative to a first workspace such that the FOV covers at least a portion of the first workspace, and a dimensioner disposed relative to a second workspace. The machine vision system includes a machine vision processor configured to: determine an image location of a code in an image; determine a 3D ray in a shared coordinate space that is a back-projection of the image location of the code; determine one or more surfaces of one or more objects based on dimensioning data; determine a first surface of the one or more surfaces that intersects the 3D ray; and associate the code with an object associated with the first surface.
    Type: Application
    Filed: August 1, 2013
    Publication date: February 5, 2015
    Applicant: Cognex Corporation
    Inventors: Cyril C. Marrion, James Negro, Matthew Engle
  • Publication number: 20140118500
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture that are moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
    Type: Application
    Filed: October 24, 2013
    Publication date: May 1, 2014
    Applicant: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
  • Patent number: 8600192
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner, the speed and accuracy of the overall pose determination process are improved. The non-perspective lens can be a telecentric lens.
    Type: Grant
    Filed: December 8, 2010
    Date of Patent: December 3, 2013
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion