Patents by Inventor Guruprasad Shivaram

Guruprasad Shivaram has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190176335
    Abstract: A vision-based manipulation system can be configured for, and operated with, methods including performing hand-eye calibrations at multiple workstations of the system, performing a cross-station calibration for the system, and determining relationships between the hand-eye calibrations and the cross-station calibration. In some embodiments, the system can be used to move a work object between the workstations based on the cross-station calibration.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 13, 2019
    Inventors: Guruprasad Shivaram, Gang Liu
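As a rough illustration (not from the patent; all transforms and numbers below are hypothetical), hand-eye calibration at each workstation yields a camera-to-motion-space transform, and composing those transforms through the shared motion space gives a cross-station relationship:

```python
import numpy as np

def rigid2d(theta, tx, ty):
    """Homogeneous 3x3 matrix for a 2D rotation plus translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Hypothetical hand-eye results: camera -> motion-space transform at each station.
cam1_to_robot = rigid2d(np.deg2rad(10), 100.0, 50.0)
cam2_to_robot = rigid2d(np.deg2rad(-5), 400.0, 60.0)

# Cross-station relationship derived through the shared motion space:
# camera-1 coordinates expressed in camera-2 coordinates.
cam1_to_cam2 = np.linalg.inv(cam2_to_robot) @ cam1_to_robot

# A feature located at (12.0, 3.0) in camera 1 maps into camera 2's frame.
p_cam1 = np.array([12.0, 3.0, 1.0])
p_cam2 = cam1_to_cam2 @ p_cam1
```

A point observed at the first station can then be handed off to the second station's coordinate frame without re-imaging it, which is the essence of moving a work object between workstations based on a cross-station calibration.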
  • Patent number: 10290118
    Abstract: This invention provides a system and method that tie the coordinate spaces at two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three scenarios are contemplated: the same workpiece features are imaged and identified at both locations; the imaged features of the runtime workpiece differ at each location (with a CAD or measured rendition of the workpiece available); and the first location, containing a motion stage, has been hand-eye calibrated to the motion stage, and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between the locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: May 14, 2019
    Assignee: Cognex Corporation
    Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
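For intuition, tying two coordinate spaces together with corresponding features of a runtime workpiece amounts to estimating a rigid transform between the two feature sets. A minimal sketch (hypothetical feature positions; a least-squares 2D Kabsch fit standing in for the patented method):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    src points onto dst points (2D Kabsch / Procrustes)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

# Hypothetical workpiece features as seen at location 1 and again at location 2.
feats_loc1 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
feats_loc2 = feats_loc1 @ R_true.T + np.array([25.0, -4.0])

R, t = fit_rigid_2d(feats_loc1, feats_loc2)
```

Accumulating features from multiple runtime workpieces, as the abstract suggests, simply adds more correspondences to the same least-squares fit, which improves the estimate.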
  • Patent number: 10223589
    Abstract: This invention provides a system and method for guiding workpieces to optimal positions to train an assembly system, generally without the use of a CMM or similar metrology device. The system and method express the image features of the workpieces, while they are in their respective stations, in a common coordinate system. This allows a user to visualize the result of assembling the workpieces without actually assembling them, in a “virtual assembly.” The virtual assembly helps guide the placement of workpieces in their respective stations into a desired relative alignment. The system and method illustratively generate a composite image, using the images from the cameras that guide the workpieces, which helps the user visualize how the part would appear following assembly. The user can reposition the images of the workpieces in their respective stations until the composite image has a desired appearance.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: March 5, 2019
    Assignee: Cognex Corporation
    Inventors: Guruprasad Shivaram, Willard Foster
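The composite-image idea can be sketched in a few lines: overlay the two workpiece images and let the user try offsets until the preview looks assembled. (Toy images and a simple max blend; the patent does not specify this rendering.)

```python
import numpy as np

def composite(img_a, img_b, shift):
    """Overlay img_b onto img_a after an integer (dy, dx) repositioning,
    keeping the brighter pixel at each location (a simple max blend)."""
    moved = np.roll(img_b, shift, axis=(0, 1))
    return np.maximum(img_a, moved)

# Hypothetical images of two workpieces at their respective stations.
a = np.zeros((8, 8)); a[2:4, 2:6] = 1.0      # workpiece 1
b = np.zeros((8, 8)); b[0:2, 0:4] = 0.5      # workpiece 2, not yet aligned

# The user nudges workpiece 2 until the composite looks assembled.
view = composite(a, b, (4, 2))               # preview with a trial offset
```

Each trial offset produces a new preview without touching the physical parts, which is the point of the "virtual assembly."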
  • Patent number: 9734419
    Abstract: This invention provides a system and method to validate the accuracy of camera calibration in a single or multiple-camera embodiment, utilizing either 2D cameras or 3D imaging sensors. It relies upon an initial calibration process that generates and stores camera calibration parameters and residual statistics based upon images of a first calibration object. A subsequent validation process (a) acquires images of the first calibration object or a second calibration object having a known pattern and dimensions; (b) extracts features of the images of the first calibration object or the second calibration object; (c) predicts positions expected of features of the first calibration object or the second calibration object using the camera calibration parameters; and (d) computes a set of discrepancies between positions of the extracted features and the predicted positions of the features.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: August 15, 2017
    Assignee: Cognex Corporation
    Inventors: Xiangyun Ye, Aaron S. Wallack, Guruprasad Shivaram, Cyril C. Marrion, David Y. Li
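Steps (a) through (d) of the validation process can be sketched as follows, with a toy affine camera model and made-up residual statistics standing in for real calibration parameters:

```python
import numpy as np

# Stored at calibration time: camera parameters (here a toy 2D affine
# projection) and the residual statistics from the initial calibration.
A = np.array([[1.02, 0.01, 5.0],
              [-0.02, 0.99, 3.0]])       # hypothetical calibration result
residual_rms = 0.15                      # pixels, from initial calibration

# Known calibration-object pattern: a 3x3 grid in homogeneous coordinates.
grid = np.array([[x, y, 1.0] for x in range(3) for y in range(3)], float)
predicted = grid @ A.T                   # (c) predicted feature positions

# (a)/(b) features extracted from a fresh image of the calibration object;
# here we simulate a slight uniform drift since calibration.
extracted = predicted + 0.05

# (d) discrepancies between extracted and predicted positions, judged
# against the stored residual statistics (threshold chosen for illustration).
disc = np.linalg.norm(extracted - predicted, axis=1)
still_valid = disc.max() <= 3.0 * residual_rms
```

If the discrepancies exceed the band implied by the stored residuals, the calibration is flagged as no longer valid and can be redone.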
  • Publication number: 20170132807
    Abstract: This invention provides a system and method that tie the coordinate spaces at two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three scenarios are contemplated: the same workpiece features are imaged and identified at both locations; the imaged features of the runtime workpiece differ at each location (with a CAD or measured rendition of the workpiece available); and the first location, containing a motion stage, has been hand-eye calibrated to the motion stage, and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between the locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
    Type: Application
    Filed: July 29, 2016
    Publication date: May 11, 2017
    Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
  • Patent number: 9569850
    Abstract: This invention provides a system and method for determining the pose of shapes known to a vision system that undergo both affine transformation and deformation. An image of the object with its fiducial is acquired. The fiducial has affine parameters, including degrees of freedom (DOFs), search ranges, and search step sizes, and control points with associated DOFs and step sizes. Each 2D affine parameter's search range and the distortion control points' DOFs are sampled, and all combinations are obtained. The coarsely specified fiducial is transformed for each combination, and a match metric is computed for the transformed fiducial, generating a score surface. Peaks on this surface are computed as potential candidates, which are refined until the match metric is maximized. Refined representations exceeding a predetermined score are returned as potential shapes in the scene. Alternatively, the candidate with the best score can be used as a training fiducial.
    Type: Grant
    Filed: October 15, 2014
    Date of Patent: February 14, 2017
    Assignee: Cognex Corporation
    Inventors: Guruprasad Shivaram, Lowell D. Jacobson, David Y. Li
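Sampling a search range for each DOF, scoring every combination, and taking the peak of the resulting score surface can be sketched with translation-only DOFs (hypothetical image and fiducial; a real search would also sample rotation, scale, and the distortion control points):

```python
import numpy as np

img = np.zeros((16, 16)); img[5:9, 6:12] = 1.0      # scene with one shape
tmpl = np.ones((4, 6))                               # coarse fiducial model

# Sample each DOF's search range (here just x/y translation at step 1)
# and score every combination, producing a score surface.
H, W = img.shape
h, w = tmpl.shape
scores = np.zeros((H - h + 1, W - w + 1))
for y in range(scores.shape[0]):
    for x in range(scores.shape[1]):
        scores[y, x] = (img[y:y + h, x:x + w] * tmpl).sum()

# The peak of the score surface is the best candidate pose, which the
# patented method would then refine until the match metric is maximized.
best = np.unravel_index(scores.argmax(), scores.shape)
```

With more DOFs the score surface simply gains dimensions; the peak-finding and refinement steps are unchanged in principle.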
  • Publication number: 20170024613
    Abstract: This invention provides a system and method for guiding workpieces to optimal positions to train an assembly system, generally without the use of a CMM or similar metrology device. The system and method express the image features of the workpieces, while they are in their respective stations, in a common coordinate system. This allows a user to visualize the result of assembling the workpieces without actually assembling them, in a “virtual assembly.” The virtual assembly helps guide the placement of workpieces in their respective stations into a desired relative alignment. The system and method illustratively generate a composite image, using the images from the cameras that guide the workpieces, which helps the user visualize how the part would appear following assembly. The user can reposition the images of the workpieces in their respective stations until the composite image has a desired appearance.
    Type: Application
    Filed: March 1, 2016
    Publication date: January 26, 2017
    Inventors: Guruprasad Shivaram, Willard Foster
  • Publication number: 20150104068
    Abstract: This invention provides a system and method for determining the pose of shapes known to a vision system that undergo both affine transformation and deformation. An image of the object with its fiducial is acquired. The fiducial has affine parameters, including degrees of freedom (DOFs), search ranges, and search step sizes, and control points with associated DOFs and step sizes. Each 2D affine parameter's search range and the distortion control points' DOFs are sampled, and all combinations are obtained. The coarsely specified fiducial is transformed for each combination, and a match metric is computed for the transformed fiducial, generating a score surface. Peaks on this surface are computed as potential candidates, which are refined until the match metric is maximized. Refined representations exceeding a predetermined score are returned as potential shapes in the scene. Alternatively, the candidate with the best score can be used as a training fiducial.
    Type: Application
    Filed: October 15, 2014
    Publication date: April 16, 2015
    Inventors: Guruprasad Shivaram, Lowell D. Jacobson, David Y. Li
  • Patent number: 8442304
    Abstract: This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. A 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. 3D points are computed for each camera pair to derive a 3D point cloud. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses, and the remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified, whereby the closest match is the best refined three-dimensional pose.
    Type: Grant
    Filed: December 29, 2008
    Date of Patent: May 14, 2013
    Assignee: Cognex Corporation
    Inventors: Cyril C. Marrion, Nigel J. Foster, Lifeng Liu, David Y. Li, Guruprasad Shivaram, Aaron S. Wallack, Xiangyun Ye
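The HLGS extraction step, which collapses many 3D points into a line segment, can be sketched with a PCA fit (synthetic edge points; not the patent's implementation):

```python
import numpy as np

def fit_line_segment(points):
    """Reduce a cluster of 3D points to a line segment (a higher-level
    geometric shape): centroid + dominant PCA direction + extent."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    d = Vt[0]                             # dominant direction
    t = (pts - c) @ d                     # projections along the line
    return c + t.min() * d, c + t.max() * d

# Hypothetical points sampled along one straight edge of a workpiece.
ts = np.linspace(0.0, 10.0, 50)
edge = np.stack([ts, 2.0 * np.ones_like(ts), np.zeros_like(ts)], axis=1)

p0, p1 = fit_line_segment(edge)
```

Replacing thousands of raw 3D points with a handful of segments like this is what makes the subsequent model-to-runtime correspondence and pose scoring tractable.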
  • Publication number: 20110157373
    Abstract: This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids the need to cease runtime operation of the vision system, or to stop the production line it serves, in order to diagnose whether the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as an indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras.
    Type: Application
    Filed: December 24, 2009
    Publication date: June 30, 2011
    Applicant: Cognex Corporation
    Inventors: Xiangyun Ye, David Y. Li, Guruprasad Shivaram, David J. Michael
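The accumulate-and-compare idea can be sketched as a simple monitor that keeps long-run alignment-score statistics and flags when a recent window drifts below them (scores and thresholds are invented for illustration):

```python
from collections import deque

class CalibrationMonitor:
    """Accumulates runtime alignment scores and flags a likely camera
    miscalibration when recent scores drift below the historical norm.
    The window size and drop ratio are illustrative, not from the patent."""
    def __init__(self, window=100, drop=0.9):
        self.history = []                 # long-run scores
        self.recent = deque(maxlen=window)
        self.drop = drop

    def add(self, score):
        self.history.append(score)
        self.recent.append(score)

    def suspect_miscalibration(self):
        if len(self.history) < 2 * self.recent.maxlen:
            return False                  # not enough statistics yet
        baseline = sum(self.history) / len(self.history)
        current = sum(self.recent) / len(self.recent)
        return current < self.drop * baseline

mon = CalibrationMonitor(window=5)
for s in [0.95, 0.96, 0.94, 0.95, 0.96, 0.95, 0.94, 0.96, 0.95, 0.95]:
    mon.add(s)                            # healthy alignment scores
ok_phase = mon.suspect_miscalibration()
for s in [0.60, 0.58, 0.61, 0.59, 0.60]:
    mon.add(s)                            # scores degrade after a camera bump
alert_phase = mon.suspect_miscalibration()
```

Because the check runs on scores the system is already producing, the production line never has to stop for a dedicated calibration test, which matches the abstract's motivation.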
  • Publication number: 20100166294
    Abstract: This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature in a first image and then locating the same feature in the other image. 3D points are computed for each camera pair to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into world 3D space using the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses.
    Type: Application
    Filed: December 29, 2008
    Publication date: July 1, 2010
    Applicant: Cognex Corporation
    Inventors: Cyril C. Marrion, Nigel J. Foster, Lifeng Liu, David Y. Li, Guruprasad Shivaram, Aaron S. Wallack, Xiangyun Ye