Patents by Inventor Aaron S. Wallack

Aaron S. Wallack has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160039096
    Abstract: A system and method for robustly calibrating a vision system and a robot is provided. The system and method enables a plurality of cameras to be calibrated into a robot base coordinate system to enable a machine vision/robot control system to accurately identify the location of objects of interest within robot base coordinates.
    Type: Application
    Filed: October 23, 2015
    Publication date: February 11, 2016
    Inventors: Aaron S. Wallack, Lifeng Liu, Xiangyun Ye
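The calibration described above ultimately yields a rigid transform that maps points found by the cameras into robot base coordinates. As a minimal sketch of that idea (not the patent's method), the example below uses a simplified 2D rigid transform with hypothetical calibration numbers to map a camera-frame feature into the robot base frame:

```python
import math

def make_pose(theta_deg, tx, ty):
    """2D rigid transform (rotation + translation) as a 3x3 homogeneous
    matrix -- a simplified stand-in for a full 3D camera-to-robot-base
    calibration result."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(T, p):
    """Map a point (x, y) through homogeneous transform T."""
    x, y = p
    return (T[0][0]*x + T[0][1]*y + T[0][2],
            T[1][0]*x + T[1][1]*y + T[1][2])

# Hypothetical calibration result: camera frame rotated 90 degrees and
# offset (100, 50) mm relative to the robot base frame.
cam_to_base = make_pose(90.0, 100.0, 50.0)

# A feature detected at (10, 0) in camera coordinates, expressed in
# robot base coordinates:
x, y = apply(cam_to_base, (10.0, 0.0))
```

With the hypothetical pose above, the camera-frame point (10, 0) lands at (100, 60) in robot base coordinates.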
  • Patent number: 9124873
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: September 1, 2015
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
  • Publication number: 20140347473
    Abstract: This invention provides a system for measuring displacement of an object surface having a displacement sensor that projects a line on the object surface and receives light from the projected line at an imager in a manner that defines a plurality of displacement values in a height direction. A vision system processor operates on rows of imager pixels to determine a laser line center in columns of imager pixels in each of a plurality of regions of interest. Each region of interest defines a plurality of rows that correspond with expected locations of the projected line on the object surface. A GUI can be used to establish the regions. In further embodiments, the system generates grayscale images with the imager. These grayscale images can be compared to a generated height image to compensate for contrast-induced false height readings. Imager pixels can be compared to a reference voltage to locate the line.
    Type: Application
    Filed: January 7, 2014
    Publication date: November 27, 2014
    Applicant: Cognex Corporation
    Inventors: Robert A. Wolff, Michael C. Moed, Mikhail Akopyan, Robert Tremblay, Willard Foster, Aaron S. Wallack
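Finding a laser line center within a column's region of interest is commonly done with an intensity-weighted centroid. The sketch below illustrates that general technique on a synthetic column; it is not necessarily the specific method claimed in the patent:

```python
def line_center(column, row_start, row_end):
    """Intensity-weighted centroid of the laser line within a region of
    interest (rows row_start..row_end-1) of one imager column.
    Returns a sub-pixel row position, or None if no light was seen."""
    rows = range(row_start, row_end)
    total = sum(column[r] for r in rows)
    if total == 0:
        return None  # no line found in this column's ROI
    return sum(r * column[r] for r in rows) / total

# Synthetic column of pixel intensities: a laser line peaking at row 5.
col = [0, 0, 0, 10, 50, 100, 50, 10, 0, 0]
center = line_center(col, 3, 8)   # search only rows 3..7 (the ROI)
```

Here the centroid lands exactly on row 5.0; with an asymmetric intensity profile it would fall between integer rows, which is the point of computing a sub-pixel center.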
  • Patent number: 8872911
    Abstract: A method and apparatus for assessing at least one of motion linearity of a motion stage, stage motion straightness of a motion stage, image capture repeatability of a motion stage and camera and accuracy of a calibration plate used to assess motion stage characteristics, the method including using a line scan camera to generate two dimensional images of a calibration plate having a plurality of imageable features thereon, examining the images to identify actual coordinates of the imageable features and using the actual coordinates to assess linearity, straightness, repeatability and/or plate accuracy.
    Type: Grant
    Filed: January 5, 2010
    Date of Patent: October 28, 2014
    Assignee: Cognex Corporation
    Inventors: Aaron S. Wallack, David J. Michael
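Assessing straightness from imaged calibration-plate features generally means fitting an ideal line to the measured coordinates and examining the residuals. The following is a minimal sketch of that idea (ordinary least squares on synthetic measurements), not the patent's procedure:

```python
def straightness_residuals(xs, ys):
    """Fit a best-fit line y = m*x + b to measured feature centers and
    return each point's deviation from it -- one simple way to assess
    motion-stage straightness from imaged features."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - m * mx
    return [y - (m * x + b) for x, y in zip(xs, ys)]

# Hypothetical measured y positions as the stage steps along x:
res = straightness_residuals([0.0, 1.0, 2.0, 3.0],
                             [0.00, 0.01, 0.00, 0.01])
max_dev = max(abs(r) for r in res)   # worst deviation from straight travel
```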
  • Publication number: 20140118500
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene involving microscopic and near-microscopic objects under manufacture moved by a manipulator, so as to acquire contemporaneous images of a runtime object and determine the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
    Type: Application
    Filed: October 24, 2013
    Publication date: May 1, 2014
    Applicant: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr., David J. Michael
  • Patent number: 8600192
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.
    Type: Grant
    Filed: December 8, 2010
    Date of Patent: December 3, 2013
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion
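The distinguishing property of the telecentric (non-perspective) lens named in the abstract is that image scale does not change with depth, unlike a pinhole camera. The toy comparison below illustrates that property; the projection functions and numbers are illustrative, not from the patent:

```python
def perspective_project(X, Y, Z, f):
    """Pinhole (perspective) projection: image coordinates shrink as the
    point moves farther away (larger Z)."""
    return (f * X / Z, f * Y / Z)

def telecentric_project(X, Y, Z, m):
    """Telecentric (non-perspective) projection: constant magnification m,
    independent of depth Z -- the property the patent exploits."""
    return (m * X, m * Y)

# The same world point observed at two different depths:
near      = perspective_project(10.0, 0.0, 100.0, 50.0)
far       = perspective_project(10.0, 0.0, 200.0, 50.0)
tele_near = telecentric_project(10.0, 0.0, 100.0, 0.5)
tele_far  = telecentric_project(10.0, 0.0, 200.0, 0.5)
```

The perspective image point moves from (5.0, 0.0) to (2.5, 0.0) as the object recedes, while the telecentric image point stays at (5.0, 0.0) at both depths.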
  • Patent number: 8588511
    Abstract: An image of a semiconductor interconnection pad is analyzed to determine a geometric description of the zone regions of a multiple zone semiconductor interconnection pad. Edge detection machine vision tools are used to extract features in the image. The extracted features are analyzed to derive geometric descriptions of the zone regions of the pad, which are applied in semiconductor device inspection, fabrication, and assembly operations.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: November 19, 2013
    Assignee: Cognex Corporation
    Inventors: Gang Liu, Aaron S. Wallack, David J. Michael
  • Patent number: 8442304
    Abstract: This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. A 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. 3D points are computed for each pair of cameras to derive a 3D point cloud. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified whereby the closest match is the best refined three-dimensional pose.
    Type: Grant
    Filed: December 29, 2008
    Date of Patent: May 14, 2013
    Assignee: Cognex Corporation
    Inventors: Cyril C. Marrion, Nigel J. Foster, Lifeng Liu, David Y. Li, Guruprasad Shivaram, Aaron S. Wallack, Xiangyun Ye
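Extracting a "higher-level geometric shape" such as a line segment from a 3D point cloud amounts to fitting a line to a cluster of 3D points. The sketch below does a least-squares 3D line fit (centroid plus principal direction via power iteration on the covariance matrix); it is one standard approach, not necessarily the patent's:

```python
def fit_line(points, iters=50):
    """Fit a 3D line to points: returns (centroid, unit direction).
    The direction is the dominant eigenvector of the covariance matrix,
    found by power iteration -- a minimal 'line segment from point
    cloud' extraction."""
    n = len(points)
    cx, cy, cz = (sum(p[i] for p in points) / n for i in range(3))
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    # 3x3 covariance (unnormalized -- scale does not affect the eigenvector)
    C = [[sum(p[i] * p[j] for p in centered) for j in range(3)]
         for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return (cx, cy, cz), v

# Noise-free points along the x-axis:
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
centroid, direction = fit_line(pts)
```

For these points the centroid is (1.5, 0, 0) and the recovered direction is the x-axis; with noisy scan data the same fit gives the best-fit segment axis.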
  • Patent number: 8315457
    Abstract: A system and method for performing multi-image training for pattern recognition and registration is provided. A machine vision system first obtains N training images of the scene. Each of the N images is used as a baseline image and the N-1 images are registered to the baseline. Features that represent a set of corresponding image features are added to the model. The feature to be added to the model may comprise an average of the features from each of the images in which the feature appears. The process continues until every feature that meets a threshold requirement is accounted for. The model that results from the present invention represents those stable features that are found in at least the threshold number of the N training images. The model may then be used to train an alignment/inspection tool with the set of features.
    Type: Grant
    Filed: November 12, 2008
    Date of Patent: November 20, 2012
    Assignee: Cognex Corporation
    Inventors: Nathaniel Bogan, Xiaoguang Wang, Aaron S. Wallack
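The multi-image training idea above (keep only features seen in at least a threshold number of the N images, averaging their positions) can be sketched as follows. The feature-id/dict representation is purely illustrative, not the patent's data model:

```python
from collections import defaultdict

def build_model(feature_sets, threshold):
    """Given per-image feature dicts {feature_id: (x, y)}, keep only
    features that appear in at least `threshold` images, and average
    their positions -- the 'stable features' idea from the abstract."""
    seen = defaultdict(list)
    for feats in feature_sets:
        for fid, (x, y) in feats.items():
            seen[fid].append((x, y))
    model = {}
    for fid, pts in seen.items():
        if len(pts) >= threshold:
            model[fid] = (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))
    return model

# Three training images; feature "a" recurs (with jitter), "b" is a
# one-off artifact that should not enter the model.
imgs = [{"a": (10.0, 20.0), "b": (5.0, 5.0)},
        {"a": (10.2, 19.8)},
        {"a": (9.8, 20.2)}]
model = build_model(imgs, threshold=2)
```

Only "a" survives, at its averaged position near (10, 20); the spurious feature "b" is filtered out by the threshold.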
  • Publication number: 20120147149
    Abstract: This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective image can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected location of those images.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 14, 2012
    Applicant: COGNEX CORPORATION
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion, Jr.
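The core step above is predicting where a feature found in one non-perspective image should appear in another via an affine transform. A minimal sketch, with a hypothetical 2x2 matrix and offset standing in for a transform computed from the cameras' intrinsics and extrinsics:

```python
def predict_location(A, t, p):
    """Map a feature found in camera 1's image into the expected search
    location in camera 2's image via a 2x2 affine matrix A and offset t."""
    x, y = p
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

# Hypothetical inter-camera affine: near-identity rotation/scale plus
# a pixel offset (in a real system this is derived from calibration).
A = [[0.98, 0.02], [-0.02, 0.98]]
t = (12.0, -7.5)

# A feature at (100, 50) in image 1 is expected near this point in
# image 2, so the search there can be restricted to a small window:
expected = predict_location(A, t, (100.0, 50.0))
```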
  • Publication number: 20120148145
    Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 14, 2012
    Applicant: COGNEX CORPORATION
    Inventors: Lifeng Liu, Aaron S. Wallack, Cyril C. Marrion
  • Patent number: 8189904
    Abstract: Digital image processing methods are applied to an image of a semiconductor interconnection pad to preprocess the image prior to an inspection or registration. An image of semiconductor pads exhibiting spatial patterns from structure, texture or features is filtered without affecting features in the image not associated with structure or texture. The filtered image is inspected in a probe mark inspection operation.
    Type: Grant
    Filed: November 17, 2010
    Date of Patent: May 29, 2012
    Assignee: Cognex Technology and Investment Corporation
    Inventors: Aaron S. Wallack, Juha Koljonen, David J. Michael
  • Patent number: 8126260
    Abstract: This invention provides a system and method for determining position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation and employing an iterative weighted least squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets.
    Type: Grant
    Filed: May 29, 2007
    Date of Patent: February 28, 2012
    Assignee: Cognex Corporation
    Inventors: Aaron S. Wallack, David J. Michael
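The abstract parameterizes rotation as a quaternion (a, b, c, d). Below is the standard conversion from a unit quaternion to a rotation matrix, the building block such a refinement loop needs; the iterative weighted least-squares refinement itself is not reproduced here:

```python
import math

def quat_to_matrix(a, b, c, d):
    """Rotation matrix from a quaternion (a, b, c, d), a = scalar part.
    Standard formula, normalizing first so non-unit quaternions are safe."""
    n = (a*a + b*b + c*c + d*d) ** 0.5
    a, b, c, d = a/n, b/n, c/n, d/n
    return [
        [1 - 2*(c*c + d*d), 2*(b*c - a*d),     2*(b*d + a*c)],
        [2*(b*c + a*d),     1 - 2*(b*b + d*d), 2*(c*d - a*b)],
        [2*(b*d - a*c),     2*(c*d + a*b),     1 - 2*(b*b + c*c)],
    ]

# Quaternion for a 90-degree rotation about Z: (cos 45°, 0, 0, sin 45°).
R = quat_to_matrix(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
# R maps the x-axis onto the y-axis, as expected for a 90° Z rotation.
```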
  • Patent number: 8111904
    Abstract: The invention provides inter alia methods and apparatus for determining the pose, e.g., position along x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of that pose) of an object in three dimensions by triangulation of data gleaned from multiple images of the object. Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object.
    Type: Grant
    Filed: October 7, 2005
    Date of Patent: February 7, 2012
    Assignee: Cognex Technology and Investment Corp.
    Inventors: Aaron S. Wallack, David Michael
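Triangulating a 3D position from rays emanating from multiple calibrated cameras is the heart of the abstract above. As a minimal sketch, the two-ray case below finds the midpoint of closest approach between two rays; the patent describes the more general multi-camera setting:

```python
import math

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between two rays, each given as
    (origin, direction) -- a basic two-camera triangulation."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sub(u, v): return [a - b for a, b in zip(u, v)]
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o + s * u for o, u in zip(o1, d1)]  # closest point on ray 1
    p2 = [o + t * u for o, u in zip(o2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras a unit apart on x, both sighting a point at (0.5, 0, 2):
norm = math.hypot(0.5, 2.0)
d1 = [0.5 / norm, 0.0, 2.0 / norm]
d2 = [-0.5 / norm, 0.0, 2.0 / norm]
p = triangulate([0.0, 0.0, 0.0], d1, [1.0, 0.0, 0.0], d2)
```

With exact rays the midpoint recovers (0.5, 0, 2); with noisy image measurements the rays skew slightly and the midpoint is the natural compromise.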
  • Publication number: 20110314385
    Abstract: A method and system is provided for viewing machine vision information. The machine vision information includes machine vision data representing a sequence of machine vision processing steps. The machine vision information pertaining to a machine vision process on a given machine vision processor is produced. The machine vision information is displayed at a device remotely located from the given machine vision processor. A selection interface is provided on the device to allow a user to view the machine vision data corresponding to at least one stage of the machine vision processing.
    Type: Application
    Filed: June 13, 2011
    Publication date: December 22, 2011
    Applicant: COGNEX CORPORATION
    Inventors: Raymond A. Fix, Aaron S. Wallack
  • Publication number: 20110280472
    Abstract: A system and method for robustly calibrating a vision system and a robot is provided. The system and method enables a plurality of cameras to be calibrated into a robot base coordinate system to enable a machine vision/robot control system to accurately identify the location of objects of interest within robot base coordinates.
    Type: Application
    Filed: May 14, 2010
    Publication date: November 17, 2011
    Inventors: Aaron S. Wallack, Lifeng Liu, Xiangyun Ye
  • Patent number: 7965887
    Abstract: Machine vision tools are applied to color images using methods that utilize an optimized spectrum of the color information. Such methods include full color normalized correlation techniques and methods to convert color images to greyscale using weighting factors that maximize color contrast in a corresponding greyscale image.
    Type: Grant
    Filed: December 1, 2005
    Date of Patent: June 21, 2011
    Assignee: Cognex Technology and Investment Corp.
    Inventors: Aaron S. Wallack, David Michael
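Converting color to greyscale with contrast-maximizing weights, as the abstract describes, means choosing channel weights that spread the colors of interest apart rather than using fixed luminance weights. The weights below are hypothetical, chosen only to illustrate the effect:

```python
def to_gray(pixels, weights):
    """Convert (r, g, b) pixels to greyscale with per-channel weights."""
    r_w, g_w, b_w = weights
    return [r_w * r + g_w * g + b_w * b for (r, g, b) in pixels]

# A red object and a green object that standard luminance weights
# nearly merge into the same grey level:
red, green = (200, 40, 0), (40, 160, 0)

luma   = to_gray([red, green], (0.299, 0.587, 0.114))  # standard weights
custom = to_gray([red, green], (1.0, -1.0, 0.0))       # red-vs-green axis

contrast_luma   = abs(luma[0] - luma[1])
contrast_custom = abs(custom[0] - custom[1])
```

The custom weighting yields far higher separation between the two colors than the standard luminance conversion, which is the point of optimizing the weights per application.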
  • Patent number: 7961201
    Abstract: A method and system is provided for viewing machine vision information. The machine vision information includes machine vision data representing a sequence of machine vision processing steps. The machine vision information pertaining to a machine vision process on a given machine vision processor is produced. The machine vision information is displayed at a device remotely located from the given machine vision processor. A selection interface is provided on the device to allow a user to view the machine vision data corresponding to at least one stage of the machine vision processing.
    Type: Grant
    Filed: December 21, 2000
    Date of Patent: June 14, 2011
    Assignee: Cognex Corporation
    Inventors: Raymond A. Fix, Aaron S. Wallack
  • Patent number: 7885453
    Abstract: Digital image processing methods are applied to an image of a semiconductor interconnection pad to preprocess the image prior to an inspection or registration. An image of semiconductor pads exhibiting spatial patterns from structure, texture or features is filtered without affecting features in the image not associated with structure or texture. The filtered image is inspected in a probe mark inspection operation.
    Type: Grant
    Filed: June 7, 2007
    Date of Patent: February 8, 2011
    Assignee: Cognex Technology and Investment Corporation
    Inventors: Aaron S. Wallack, Juha Koljonen, David J. Michael
  • Publication number: 20100166294
    Abstract: This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses.
    Type: Application
    Filed: December 29, 2008
    Publication date: July 1, 2010
    Applicant: COGNEX CORPORATION
    Inventors: Cyril C. Marrion, Nigel J. Foster, Lifeng Liu, David Y. Li, Guruprasad Shivaram, Aaron S. Wallack, Xiangyun Ye