Patents by Inventor David J. Michael
David J. Michael has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210350595
Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud, and adding, for each histogram entry, the distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
Type: Application
Filed: May 10, 2021
Publication date: November 11, 2021
Applicant: Cognex Corporation
Inventors: Hongwei Zhu, David J. Michael, Nitin M. Vaidya
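The binning the abstract describes can be illustrated with a short numpy sketch. This is a hypothetical reading of the abstract, not the patented implementation; all function names and parameters below are illustrative assumptions.

```python
import numpy as np

def distance_histogram(points, reference, n_bins=16, max_dist=None):
    """1D histogram of each 3D point's distance to a reference point.

    points: (N, 3) array; reference: (3,) array. Each histogram entry
    counts the distances falling within its range, per the abstract.
    """
    dists = np.linalg.norm(points - reference, axis=1)
    if max_dist is None:
        max_dist = dists.max()
    counts, edges = np.histogram(dists, bins=n_bins, range=(0.0, max_dist))
    return counts, edges

def orientation_histogram(normals, n_bins=(8, 8)):
    """2D histogram over two orientation components (here: azimuth
    and elevation of unit surface normals, as the two components).

    normals: (N, 3) array of unit normals. Each bin covers a range of
    the first component and a range of the second component.
    """
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])        # [-pi, pi]
    elevation = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))  # [-pi/2, pi/2]
    counts, _, _ = np.histogram2d(
        azimuth, elevation, bins=n_bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return counts
```

A histogram like this gives a compact, pose-tolerant signature of a point cloud's shape that can be compared cheaply across acquisitions.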
-
Patent number: 11158039
Abstract: A system and method for three dimensional (3D) vision inspection using a 3D vision system. The system and method comprise: acquiring at least one 3D image of a 3D object using the 3D vision system; using the 3D vision system, extracting a 3D runtime visible mask from the 3D image; using the 3D vision system, comparing the 3D runtime visible mask to a 3D reference visible mask; and, using the 3D vision system, determining if a difference of pixels exists between the 3D runtime visible mask and the 3D reference visible mask.
Type: Grant
Filed: June 24, 2016
Date of Patent: October 26, 2021
Assignee: COGNEX CORPORATION
Inventors: David J Michael, Gang Liu, Ali Zadeh
-
Patent number: 11132552
Abstract: This invention provides a system and method that employs reduction in bandwidth and the amount of data stored, along with queuing of data, so that the significant and/or relevant shipboard visual information is detected and communicated continuously from the ship (or other remote vehicle/location) to shore without loss of useful/high-priority information and within the available bandwidth of a typical, available satellite link. The system and method supports remote configuration and management from shore to the ship over the same communications channel but in the reverse direction.
Type: Grant
Filed: February 12, 2021
Date of Patent: September 28, 2021
Assignee: ShipIn Systems Inc.
Inventors: Ilan Naslavsky, Osher Perry, Moran Cohen, David J. Michael
-
Patent number: 10957072
Abstract: This invention applies dynamic weighting between a point-to-plane and point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. A system and method herein provides an energy function that is minimized to generate candidate 3D poses for use in alignment of runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in directions that the normals do not. Hence the system and method defines a "normal information matrix", which represents the directions in which sufficient information is present. Performing (e.g.) a principal component analysis (PCA) on this matrix provides a basis for the available information.
Type: Grant
Filed: February 21, 2018
Date of Patent: March 23, 2021
Assignee: Cognex Corporation
Inventors: Andrew Hoelscher, Simon Barker, Adam Wagman, David J. Michael
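One plausible reading of the "normal information matrix" idea can be sketched in a few lines of numpy: accumulate the outer products of the surface normals and eigen-decompose the result; eigenvectors with small eigenvalues mark directions the point-to-plane metric constrains poorly, where edge information must fill in. This interpretation and all names below are assumptions, not the patent's actual code.

```python
import numpy as np

def normal_information_matrix(normals):
    """Sum of outer products n n^T over unit surface normals (N, 3).

    Its eigenvectors span the directions constrained by point-to-plane
    terms; small eigenvalues flag directions with little information.
    """
    normals = np.asarray(normals, dtype=float)
    return normals.T @ normals

def degenerate_directions(normals, rel_threshold=0.1):
    """Return eigenvectors whose eigenvalue is small relative to the
    largest, i.e. directions poorly constrained by the normals alone."""
    m = normal_information_matrix(normals)
    evals, evecs = np.linalg.eigh(m)  # eigenvalues in ascending order
    weak = evals < rel_threshold * evals[-1]
    return evecs[:, weak]
```

For a flat plane (all normals parallel), two of the three eigenvalues are near zero, signaling that in-plane translation is unconstrained by normals and edge terms should be weighted up.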
-
Publication number: 20200394812
Abstract: A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object acquired by an image sensor of the vision system processor is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
Type: Application
Filed: June 11, 2019
Publication date: December 17, 2020
Inventors: Ben R. Carey, Nickolas J. Mullan, Gilbert Chiang, Yukang Liu, Nitin M. Vaidya, Hongwei Zhu, Daniel Moreno, David J. Michael
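A much-simplified stand-in for the statistical dimension refinement described above: estimate per-axis extents of an axis-aligned cuboidal cloud from robust percentiles rather than raw min/max, so isolated outlier points do not inflate the dimensions. This assumes an axis-aligned box and is only a sketch of the general idea, not the publication's method.

```python
import numpy as np

def cuboid_dimensions(points, lo=1.0, hi=99.0):
    """Estimate (length, width, height) of an axis-aligned cuboidal
    point cloud, sorted in descending order.

    points: (N, 3). Percentile trimming discards outliers, in the
    spirit of choosing statistics for improved dimensions.
    """
    points = np.asarray(points, dtype=float)
    p_lo = np.percentile(points, lo, axis=0)
    p_hi = np.percentile(points, hi, axis=0)
    dims = p_hi - p_lo
    return tuple(sorted(dims, reverse=True))  # length >= width >= height
```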
-
Publication number: 20200388053
Abstract: This invention provides an easy-to-manufacture, easy-to-analyze calibration object which combines measurable and repeatable, but not necessarily accurate, 3D features—such as a two-sided calibration object/target in (e.g.) the form of a frustum, with a pair of accurate and measurable features, more particularly parallel faces separated by a precise specified thickness, so as to provide for simple field calibration of opposite-facing DS sensors. Illustratively, a composite calibration object can be constructed, which includes the two-sided frustum that has been sandblasted and anodized (to provide measurable, repeatable features), with a flange whose above/below parallel surfaces have been ground to a precise specified thickness. The 3D corner positions of the two-sided frustum are used to calibrate the two sensors in X and Y, but cannot establish absolute Z without accurate information about the thickness of the two-sided frustum; the flange provides the absolute Z information.
Type: Application
Filed: May 18, 2020
Publication date: December 10, 2020
Inventors: Aaron S. Wallack, Gang Liu, Robert A. Wolff, David J. Michael, Ruibing Wang, Hongwei Zhu
-
Patent number: 10812778
Abstract: This invention provides a system and method for concurrently (i.e. non-serially) calibrating a plurality of 3D sensors to provide therefrom a single FOV in a vision system that allows for straightforward setup using a series of relatively straightforward steps that are supported by an intuitive graphical user interface (GUI). The system and method require minimal data input about the scene or calibration object used to calibrate the sensors. 3D features of a stable object, typically employing one or more subobjects, are first measured by one of the image sensors, and then the feature measurements are used in a calibration in which each of the 3D sensors images a discrete one of the subobjects, resolves features thereon and computes a common coordinate space between the plurality of 3D sensors. Sensor(s) can be mounted on the arm of an encoderless robot or other conveyance and motion speed can be measured in setup.
Type: Grant
Filed: August 9, 2016
Date of Patent: October 20, 2020
Assignee: Cognex Corporation
Inventors: Ruibing Wang, Aaron S. Wallack, David J. Michael, Hongwei Zhu
-
Patent number: 10757394
Abstract: This invention provides a system and method for concurrently (i.e. non-serially) calibrating a plurality of 3D sensors to provide therefrom a single FOV in a vision system that allows for straightforward setup using a series of relatively straightforward steps that are supported by an intuitive graphical user interface (GUI). The system and method require minimal data input about the imaged scene or calibration object used to calibrate the sensors, thereby effecting a substantially "automatic" calibration procedure. 3D features of a stable object, typically employing a plurality of 3D subobjects, are first measured by one of the plurality of image sensors, and then the feature measurements are used in a calibration in which each of the 3D sensors images a discrete one of the subobjects, resolves features thereon and computes a common coordinate space between the plurality of 3D sensors. Laser displacement sensors and a conveyor/motion stage can be employed.
Type: Grant
Filed: November 9, 2015
Date of Patent: August 25, 2020
Assignee: Cognex Corporation
Inventors: Ruibing Wang, Aaron S. Wallack, David J. Michael, Hongwei Zhu
-
Patent number: 10482621
Abstract: This invention provides a system and method for estimating match of a 3D alignment pose of a runtime 3D point cloud relative to a trained model 3D point cloud. It includes scoring a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including a visibility check that comprises (a) receiving a 3D camera optical center; (b) receiving the trained model 3D point cloud; (c) receiving the runtime 3D point cloud; and (d) constructing a plurality of line segments from the optical center to a plurality of 3D points in the 3D point cloud at the runtime candidate pose. A system and method for determining an accurate representation of a 3D imaged object by omitting spurious points from a composite point cloud based on the presence or absence of such points in a given number of point clouds is also provided.
Type: Grant
Filed: July 20, 2017
Date of Patent: November 19, 2019
Assignee: COGNEX CORPORATION
Inventors: Andrew Hoelscher, Aaron S. Wallack, Adam Wagman, David J. Michael, Hongjun Jia
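The visibility check described above can be illustrated with a naive O(N²) numpy sketch: a point is treated as occluded when another point lies along (nearly) the same line segment from the optical center but closer to it. The angular tolerance and all names are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def visible_mask(points, optical_center, angle_tol=0.02):
    """Flag which 3D points are visible from the camera optical center.

    points: (N, 3); optical_center: (3,). Returns a boolean mask;
    False means some other point blocks the segment to the camera.
    """
    rays = points - optical_center
    dists = np.linalg.norm(rays, axis=1)
    units = rays / dists[:, None]
    visible = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        # points nearly collinear with ray i and strictly closer occlude it
        cos_sim = units @ units[i]
        blockers = (cos_sim > 1.0 - angle_tol) & (dists < dists[i] - 1e-9)
        if blockers.any():
            visible[i] = False
    return visible
```

In a scoring context, only the visible subset of the model at the candidate pose would be compared against the runtime cloud, so self-occluded regions do not unfairly penalize the match.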
-
Patent number: 10452949
Abstract: This invention provides a system and method for aligning a first three-dimensional (3D) point cloud image representing a model with a second 3D point cloud image representing a target, using a vision system processor. A passing overall score is established for possible alignments of the first 3D point cloud image with the second 3D point cloud image. A coverage score for at least one alignment of the first 3D point cloud image with the second 3D point cloud image is estimated so that the coverage score describes an amount of desired features in the first 3D point cloud image present in the second 3D point cloud image. A clutter score is estimated so that the clutter score describes extraneous features in the second 3D point cloud image. An overall score is computed as a difference between the coverage score and the clutter score.
Type: Grant
Filed: November 12, 2015
Date of Patent: October 22, 2019
Assignee: Cognex Corporation
Inventors: Hongjun Jia, David J. Michael, Adam Wagman, Andrew Hoelscher
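The coverage-minus-clutter scoring can be sketched with a brute-force numpy nearest-neighbor check. Here coverage is the fraction of model points matched by a target point within tolerance, and clutter is the fraction of target points unmatched by any model point; both definitions are simplifying assumptions consistent with, but not identical to, the abstract.

```python
import numpy as np

def alignment_score(model_pts, target_pts, dist_tol=0.05):
    """Overall score = coverage - clutter for already-aligned clouds.

    model_pts: (M, 3); target_pts: (N, 3). Brute-force O(M*N)
    distance matrix; fine for small clouds, a KD-tree would be used
    in practice.
    """
    d = np.linalg.norm(model_pts[:, None, :] - target_pts[None, :, :], axis=2)
    coverage = (d.min(axis=1) <= dist_tol).mean()  # model features found
    clutter = (d.min(axis=0) > dist_tol).mean()    # extraneous target points
    return coverage - clutter
```

A perfect alignment of identical clouds scores 1.0; extra unexplained target points pull the score down even when coverage is complete, which is the point of subtracting clutter.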
-
Patent number: 10438036
Abstract: This invention provides a system and method for reading and decoding ID features located on a surface of a curved, sloped and/or annular object, such as a tire moving on a conveyor. A plurality of 3D sensors are operatively connected to a vision system processor. The sensors are calibrated by calibration parameters to generate a stitched-together 3D image of a field of view in a common coordinate space. A motion conveyance (e.g. a conveyor) causes the object and the 3D sensors to move in relative motion, and the conveyance provides motion information to the vision system processor. An ID finder locates ID features within a version of the 3D image and a decoder (e.g. an OCR reader) generates data from the ID features. The ID finder can locate a trained portion of the ID and then search for variable code elements at a known orientation relative to the trained portion.
Type: Grant
Filed: November 9, 2016
Date of Patent: October 8, 2019
Assignee: Cognex Corporation
Inventors: Matthew R. Reome, Ali M. Zadeh, Robert A. Wolff, Ruibing Wang, Aaron S. Wallack, David J. Michael, Hongwei Zhu, Benjamin D. Klass
-
Publication number: 20190259177
Abstract: This invention applies dynamic weighting between a point-to-plane and point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. A system and method herein provides an energy function that is minimized to generate candidate 3D poses for use in alignment of runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in directions that the normals do not. Hence the system and method defines a "normal information matrix", which represents the directions in which sufficient information is present. Performing (e.g.) a principal component analysis (PCA) on this matrix provides a basis for the available information.
Type: Application
Filed: February 21, 2018
Publication date: August 22, 2019
Inventors: Andrew Hoelscher, Simon Barker, Adam Wagman, David J. Michael
-
Patent number: 10380767
Abstract: A system and method for selecting among 3D alignment algorithms in a 3D vision system is provided. The system and method includes a 3D camera assembly to acquire at least a runtime image defined by a 3D point cloud or runtime 3D range image having features of a runtime object and a vision system processor. A training image is provided. It is defined by a 3D point cloud or 3D range image having features of a model. A selection process is operated by the vision processor. It analyzes at least one training region of the training image having the features of the model and determines a distribution of surface normals in the at least one training region. It also selects, based upon a characteristic of the distribution, at least one 3D alignment algorithm from a plurality of available 3D alignment algorithms to align the features of the model with respect to the features of the runtime object.
Type: Grant
Filed: June 27, 2017
Date of Patent: August 13, 2019
Assignee: Cognex Corporation
Inventors: Simon Barker, David J. Michael
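One simple characteristic of a normal distribution that could drive such a selection is the length of the mean normal vector: near 1 when all normals agree (a flat region, weakly constraining for surface matching), smaller when normals vary (a richly curved region). The selection rule, threshold, and algorithm names below are hypothetical illustrations, not the patented criterion.

```python
import numpy as np

def select_alignment_algorithm(normals, spread_threshold=0.1):
    """Pick a 3D alignment algorithm from the spread of surface
    normals in a training region.

    normals: (N, 3) unit normals. Low spread (nearly parallel
    normals, e.g. a flat face) suggests an edge-based algorithm;
    otherwise a surface-based ICP-style algorithm.
    """
    n = np.asarray(normals, dtype=float)
    resultant = np.linalg.norm(n.mean(axis=0))  # ~1 when normals agree
    spread = 1.0 - resultant
    return "edge_based" if spread < spread_threshold else "surface_icp"
```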
-
Patent number: 10192283
Abstract: This invention provides a system and method for determining the level of clutter in an image in a manner that is rapid, and that allows a scoring process to quickly determine whether an image is above or below an acceptable level of clutter—for example to determine if the underlying imaged runtime object surface is defective without need to perform a more in-depth analysis of the features of the image. The system and method employs clutter test points that are associated with regions on the image that should contain a low gradient magnitude, indicative of emptiness. This enables the runtime image to be analyzed quickly by mapping trained clutter test points at locations in the coordinate space in which lack of emptiness indicates clutter, and if detected, can rapidly indicate differences and/or defects that allow for the subject of the image to be accepted or rejected without further image analysis.
Type: Grant
Filed: December 22, 2014
Date of Patent: January 29, 2019
Assignee: COGNEX CORPORATION
Inventors: Jason Davis, David J. Michael, Nathaniel R. Bogan
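The core of this test is cheap to sketch: sample the gradient magnitude of the runtime image at trained test points that should be empty, and report the fraction that are not. This numpy version omits the pose mapping step and uses an illustrative threshold; it is an assumption-laden sketch, not the patented method.

```python
import numpy as np

def clutter_score(image, test_points, grad_threshold=10.0):
    """Fraction of trained clutter test points where the runtime image
    has high gradient magnitude, i.e. is not empty as expected.

    image: 2D grayscale array; test_points: (N, 2) integer (row, col)
    locations trained to lie in low-gradient (empty) regions.
    """
    gy, gx = np.gradient(image.astype(float))  # per-axis derivatives
    mag = np.hypot(gx, gy)
    hits = mag[test_points[:, 0], test_points[:, 1]] > grad_threshold
    return hits.mean()
```

A score above some acceptance level would reject the part immediately, skipping the full feature analysis, which is the speed advantage the abstract emphasizes.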
-
Publication number: 20180374239
Abstract: This invention provides an easy-to-manufacture, easy-to-analyze calibration object which combines measurable and repeatable, but not necessarily accurate, 3D features—such as a two-sided calibration object/target in (e.g.) the form of a frustum, with a pair of accurate and measurable features, more particularly parallel faces separated by a precise specified thickness, so as to provide for simple field calibration of opposite-facing DS sensors. Illustratively, a composite calibration object can be constructed, which includes the two-sided frustum that has been sandblasted and anodized (to provide measurable, repeatable features), with a flange whose above/below parallel surfaces have been ground to a precise specified thickness. The 3D corner positions of the two-sided frustum are used to calibrate the two sensors in X and Y, but cannot establish absolute Z without accurate information about the thickness of the two-sided frustum; the flange provides the absolute Z information.
Type: Application
Filed: October 13, 2017
Publication date: December 27, 2018
Inventors: Aaron S. Wallack, Robert A. Wolff, David J. Michael, Ruibing Wang, Hongwei Zhu
-
Patent number: 10057498
Abstract: This invention provides a vision system camera assembly and method for using the same that employs a light-field camera with an associated vision system image sensor and overlying microlens optics to acquire images of a scene. The camera generates a light field allowing object features at varying depths of field to be clearly imaged in a concurrent manner. In an illustrative embodiment a vision system, and associated method of use thereof, which images an object or other subject in a scene includes a vision system camera with an optics assembly and a light field sensor assembly. The camera is constructed and arranged to generate light field image data from light received through the optics assembly. A light field process analyzes the light field image data and generates selected image information. A vision system processor then operates a vision system process on the selected image information to generate results therefrom.
Type: Grant
Filed: March 15, 2013
Date of Patent: August 21, 2018
Assignee: Cognex Corporation
Inventors: Laurens Nunnink, William Equitz, David J. Michael
-
Publication number: 20180225799
Abstract: This invention provides a system and method for scoring a candidate pose in a geometric-pattern matching tool of a vision system by using trained color, grayscale and/or range (height) information ("color/grayscale/range") in association with edge-aligned candidate poses. A trained pattern includes associated color/grayscale/range information in a set of test points. At runtime, a color, grayscale and/or range image is acquired and/or provided. A runtime pose is established with a coordinate space for the color/grayscale/range image with respect to the trained pattern, where the runtime pose is generated by an alignment tool. The color/grayscale/range test points are mapped onto the coordinate space for the image. The match is then determined at the respective mapped test points. Based on the test point match, a score is determined. The score is used in conjunction with the alignment result in runtime to accept or reject candidate poses from (e.g.) acquired images of runtime objects.
Type: Application
Filed: February 3, 2017
Publication date: August 9, 2018
Inventors: Jason Davis, David J. Michael
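The map-then-compare step can be sketched for the grayscale case: map trained test points through a candidate pose (here a 2x3 affine, an assumption), sample the runtime image at the mapped locations, and score the fraction of values matching the trained values within a tolerance. Names and conventions below are illustrative only.

```python
import numpy as np

def pose_test_point_score(image, trained_points, trained_values, pose, tol=10.0):
    """Score a candidate pose by mapping trained test points into the
    runtime image and comparing grayscale values.

    image: 2D grayscale array; trained_points: (N, 2) as (x, y);
    trained_values: (N,) expected grayscale values; pose: (2, 3)
    affine mapping trained (x, y, 1) to runtime (col, row).
    """
    pts_h = np.hstack([trained_points, np.ones((len(trained_points), 1))])
    mapped = pts_h @ pose.T  # (N, 2) runtime (col, row)
    cols = np.clip(np.round(mapped[:, 0]).astype(int), 0, image.shape[1] - 1)
    rows = np.clip(np.round(mapped[:, 1]).astype(int), 0, image.shape[0] - 1)
    match = np.abs(image[rows, cols] - trained_values) <= tol
    return match.mean()
```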
-
Patent number: 9995573
Abstract: The present application discloses a probe placement module for placing probes on a virtual object depicted in an image. The probe placement module is configured to place probes on interest points of an image so that the probes can accurately represent a pattern depicted in the image. The probe placement module can be configured to place the probes so that the probes can extract balanced information on all degrees of freedom associated with the pattern's movement, which improves the accuracy of the model generated from the probes.
Type: Grant
Filed: January 23, 2015
Date of Patent: June 12, 2018
Assignee: Cognex Corporation
Inventors: Simon Barker, David J. Michael, William M. Silver
-
Publication number: 20180130224
Abstract: This invention provides a system and method for estimating match of a 3D alignment pose of a runtime 3D point cloud relative to a trained model 3D point cloud. It includes scoring a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including a visibility check that comprises (a) receiving a 3D camera optical center; (b) receiving the trained model 3D point cloud; (c) receiving the runtime 3D point cloud; and (d) constructing a plurality of line segments from the optical center to a plurality of 3D points in the 3D point cloud at the runtime candidate pose. A system and method for determining an accurate representation of a 3D imaged object by omitting spurious points from a composite point cloud based on the presence or absence of such points in a given number of point clouds is also provided.
Type: Application
Filed: July 20, 2017
Publication date: May 10, 2018
Inventors: Andrew Hoelscher, Aaron S. Wallack, Adam Wagman, David J. Michael, Hongjun Jia
-
Publication number: 20180130234
Abstract: A system and method for selecting among 3D alignment algorithms in a 3D vision system is provided. The system and method includes a 3D camera assembly to acquire at least a runtime image defined by a 3D point cloud or runtime 3D range image having features of a runtime object and a vision system processor. A training image is provided. It is defined by a 3D point cloud or 3D range image having features of a model. A selection process is operated by the vision processor. It analyzes at least one training region of the training image having the features of the model and determines a distribution of surface normals in the at least one training region. It also selects, based upon a characteristic of the distribution, at least one 3D alignment algorithm from a plurality of available 3D alignment algorithms to align the features of the model with respect to the features of the runtime object.
Type: Application
Filed: June 27, 2017
Publication date: May 10, 2018
Inventors: Simon Barker, David J. Michael