Abstract: Methods for classifying and measuring orientations of objects (as nonlimiting examples, implants) utilizing two-dimensional radiographs. One such method determines a three-dimensional orientation of an object based on its area projected onto a two-dimensional image and known or measured geometry. Another such method provides an automated solution to computationally determine the orientation and characterizing features of an implant based on two-dimensional radiographs. Orientations and characteristics of one or more objects in the vicinity of an object of interest may also be determined.
Type:
Grant
Filed:
August 8, 2019
Date of Patent:
January 18, 2022
Assignee:
Loyola University Chicago
Inventors:
Michael Patrick Murphy, Cameron James Killen, Karen Wu
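The geometric idea behind the first method above can be sketched simply: a circular feature of known radius (for example, the face of a circular implant component) projects onto the radiograph as an ellipse whose area shrinks with the cosine of the tilt angle. The following is an illustrative sketch of that relationship only, not the patented method; the function name and the clamping step are assumptions.

```python
import math

def tilt_from_projected_area(projected_area: float, radius: float) -> float:
    """Estimate the tilt angle (degrees) of a circular face of known radius
    from its area as projected onto a two-dimensional radiograph.

    A circle of area pi*r^2 projects to an ellipse of area pi*r^2*cos(theta),
    so theta = arccos(projected_area / (pi*r^2)).
    """
    true_area = math.pi * radius ** 2
    ratio = max(0.0, min(1.0, projected_area / true_area))  # clamp measurement noise
    return math.degrees(math.acos(ratio))

# A 25 mm radius component projected at half its true area is tilted 60 degrees.
angle = tilt_from_projected_area(math.pi * 25 ** 2 * 0.5, 25.0)
```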
Abstract: Identifying words to accurately describe, with a range of specificity, an image is provided. A vector space corresponding to the image is generated using a convolutional neural network to extract a hierarchy of features, ranging from broad to specific, from the image. The closest vocabulary, ranging from broad to specific, is identified for the image using Huffman coding on the vector space. Accurate words ranging from broad to specific are identified that describe the image based on the vocabulary output of the Huffman coding on the vector space. The accurate words ranging from broad to specific describing the image are output.
Type:
Grant
Filed:
August 2, 2018
Date of Patent:
January 18, 2022
Assignee:
International Business Machines Corporation
Inventors:
Craig M. Trim, Aaron K. Baughman, Barry Michael Graham, Todd R. Whitman
Abstract: A method for monitoring a work process in which a workpiece is worked on by a tool, the method comprising: providing an irregular pattern of indicia remote from the workpiece and the tool; imaging the indicia by an imaging device carried by the workpiece and thereby estimating the location of the workpiece with respect to the indicia; imaging the indicia by an imaging device carried by the tool and thereby estimating the location of the tool with respect to the indicia; and correlating the location of the workpiece and the tool to estimate an operation performed by the tool on the workpiece.
Type:
Grant
Filed:
November 7, 2018
Date of Patent:
January 11, 2022
Assignee:
Mo-Sys Engineering Limited
Inventors:
Michael Paul Alexander Geissler, Martin Peter Parsley
Abstract: Systems and methods for optimal electron beam metrology guidance are disclosed. According to certain embodiments, the method may include receiving an acquired image of a sample, determining a set of image parameters based on an analysis of the acquired image, determining a set of model parameters based on the set of image parameters, and generating a set of simulated images based on the set of model parameters. The method may further comprise performing measurement of critical dimensions on the set of simulated images and comparing the critical dimension measurements with the set of model parameters to provide a set of guidance parameters based on a comparison of information from the set of simulated images and the set of model parameters. The method may further comprise receiving auxiliary information associated with target parameters, including critical dimension uniformity.
Type:
Grant
Filed:
August 28, 2019
Date of Patent:
January 4, 2022
Assignee:
ASML Netherlands B.V.
Inventors:
Lingling Pu, Wei Fang, Nan Zhao, Wentian Zhou, Teng Wang, Ming Xu
Abstract: A method for constructing a data processing model includes: acquiring a model description parameter and sample data of a target data processing model; determining a base model according to the model description parameter and the sample data; and training the base model according to the sample data to obtain the target data processing model.
Abstract: The present disclosure relates to a computer-implemented system and method for finding matching occurrences of an item of interest (or image or sub-image) within a document (or larger image) via cross correlation and setting a dynamic threshold for each document (or larger image). The described system and method are capable of matching and locating the one or more items of interest within each specific document (or larger image).
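The matching idea above can be sketched as sliding-window normalized cross-correlation followed by a per-image (dynamic) cutoff. The mean-plus-k-standard-deviations threshold below is an assumed stand-in for whatever dynamic threshold the disclosure actually claims, and all names are illustrative.

```python
import numpy as np

def ncc_matches(image: np.ndarray, template: np.ndarray, k: float = 3.0):
    """Slide `template` over `image`, score each offset by normalized
    cross-correlation, and keep offsets whose score exceeds a dynamic,
    per-image threshold (mean + k standard deviations of all scores)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    H, W = image.shape
    scores = np.full((H - th + 1, W - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom > 0:
                scores[y, x] = float((p * t).sum() / denom)
    threshold = scores.mean() + k * scores.std()  # dynamic per-image cutoff
    ys, xs = np.where(scores >= threshold)
    return list(zip(ys.tolist(), xs.tolist())), scores

# Tiny demo: a cross-shaped template embedded in a blank image.
img = np.zeros((10, 10))
tmpl = np.array([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]])
img[4:7, 4:7] = tmpl
matches, scores = ncc_matches(img, tmpl)
```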
Abstract: An information processing apparatus includes a first setting unit configured to set a plurality of observation necessity degrees of positions in a real space and times, a first display unit configured to display the plurality of observation necessity degrees mapped based on a position in the real space and a time, a second display unit configured to display a plurality of targets detected from a captured image, based on observation necessity degrees each corresponding to a different one of the plurality of targets, and a receiving unit configured to receive an input of information corresponding to at least one target of the plurality of targets. The first setting unit resets at least one observation necessity degree of the plurality of observation necessity degrees based on the input information.
Abstract: A computer-implemented method for determining whether a first image contains at least a portion of a second image includes: determining a first set of feature points associated with the first image; removing from said first set of feature points at least some feature points in the first set that correspond to one or more textures in the first image; and then attempting to match feature points in said first set of feature points with feature points in a second set of feature points associated with said second image to determine whether said first image contains at least a portion of said second image.
Abstract: An object identification device includes: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: compare a captured image against a plurality of identification images for identifying objects; and determine, after the comparison result indicates that a plurality of objects are included in the captured image, whether or not the plurality of objects are the same objects, based on a first parameter indicating a geometric relation between the identification images and a second parameter indicating a geometric relation between the identification image related to each identified object and the captured image.
Abstract: An image processing apparatus includes a provisional model generation portion, a matching score obtaining portion, an evaluation portion, and a determination portion. The provisional model generation portion is configured to generate a plurality of provisional models. The matching score obtaining portion is configured to perform pattern matching between each of the plurality of provisional models and each of a plurality of evaluation images, and obtain a first matching score group, which is a set of first matching scores indicating a highest degree of similarity, and a second matching score group. The evaluation portion is configured to calculate an evaluation value from the first matching score group and the second matching score group. The determination portion is configured to determine a matching model from among the plurality of provisional models on a basis of the calculated evaluation value.
Abstract: An image processing apparatus extracts a first region and a second region from a medical image, identifies a third region that is included in the second region and that is at a distance greater than or equal to a threshold from the first region, and acquires a feature value that is a value indicating a feature of the second region on the basis of the third region.
Abstract: An image generation apparatus includes a processing circuit and a memory storing at least one computational image. The at least one computational image is a light-field image, a compressive sensing image, or a coded image. The processing circuit (a1) identifies a position of an object in the at least one computational image using a classification device, (a2) generates, using the at least one computational image, a display image in which an indication for highlighting the position of the object is superimposed, and (a3) outputs the display image.
Abstract: Disclosed is a method for planning a screw locking path using an ant colony algorithm, which includes: obtaining designated positions of screw holes to be locked; using a distance between the screw holes to be locked as pheromone; obtaining a set of initial paths for all lockings; determining whether a condition for ending an iteration is met, the condition being whether all the locking paths have passed through all the designated positions; if the condition for ending the iteration is not met, obtaining a supplementary path for each locking path to form an entire path of each locking path until the condition for ending the iteration is met; taking a set of entire paths of all the locking paths as a set of final paths; and obtaining a shortest path from the set of final paths.
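The iteration described above follows the standard ant colony optimization recipe for ordering a set of positions so that total travel is short. The following is a compact sketch under the usual ACO conventions (pheromone and inverse-distance weighting with evaporation); the parameter values and names are illustrative, not taken from the patent.

```python
import math
import random

def plan_locking_path(holes, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                      rho=0.5, seed=0):
    """Ant-colony sketch: order screw-hole positions `holes` (a list of
    (x, y) tuples) so total travel distance is short. Returns (path, length)."""
    rng = random.Random(seed)
    n = len(holes)
    dist = [[math.dist(a, b) for b in holes] for a in holes]
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails between holes

    def tour_length(path):
        return sum(dist[path[i]][path[i + 1]] for i in range(n - 1))

    best_path = list(range(n))
    best_len = tour_length(best_path)
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            cur = rng.randrange(n)
            path, unvisited = [cur], set(range(n)) - {cur}
            while unvisited:  # probabilistic next-hole choice
                cand = list(unvisited)
                weights = [tau[cur][j] ** alpha * (1.0 / (dist[cur][j] + 1e-9)) ** beta
                           for j in cand]
                cur = rng.choices(cand, weights)[0]
                path.append(cur)
                unvisited.remove(cur)
            tours.append((tour_length(path), path))
        # evaporate, then deposit pheromone proportional to tour quality
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for length, path in tours:
            for i in range(n - 1):
                tau[path[i]][path[i + 1]] += 1.0 / (length + 1e-9)
        it_len, it_path = min(tours)
        if it_len < best_len:
            best_len, best_path = it_len, it_path
    return best_path, best_len
```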
Abstract: A system and method for semantic segmentation with a soft cross-entropy loss is provided. The system inputs a first color image to an input layer of a semantic segmentation network for a multi-class classification task. The semantic segmentation network generates, at an auxiliary stride, a first feature map as an output of an auxiliary layer of the semantic segmentation network based on the input first color image. The system extracts the generated first feature map from the auxiliary layer and computes a probability map as a set of soft labels over a set of classes of the multi-class classification task, based on the extracted first feature map. The system further computes an auxiliary cross-entropy loss between the computed probability map and a ground truth probability map for the auxiliary stride and trains the semantic segmentation network for the multi-class classification task based on the computed auxiliary cross-entropy loss.
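The auxiliary loss described above is an ordinary cross-entropy whose targets are soft probability maps rather than one-hot labels. A small framework-agnostic NumPy sketch of that loss follows; in the actual system this would be computed inside the training framework, and the shapes assumed here are illustrative.

```python
import numpy as np

def soft_cross_entropy(pred_logits: np.ndarray, target_probs: np.ndarray) -> float:
    """Cross-entropy between a predicted class distribution and a *soft*
    ground-truth distribution, averaged over all pixels.

    pred_logits:  (H, W, C) raw scores, e.g. from an auxiliary layer.
    target_probs: (H, W, C) soft labels; each pixel's C values sum to 1."""
    # numerically stable log-softmax over the class axis
    z = pred_logits - pred_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float(-(target_probs * log_probs).sum(axis=-1).mean())
```

With uniform soft targets over C classes and zero logits, the loss reduces to log C, which is a quick sanity check on the implementation.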
Abstract: A system and method for star tracking includes: capturing an image of stars; detecting and selecting visible stars from the captured image; extracting features from the selected stars by forming a convex hull from the selected stars to generate a spherical polygon; computing the area and higher order moments of the spherical polygon; and pattern matching the extracted features against a star catalog database. The pattern matching includes matching the area of the spherical polygon to a plurality of polygon areas stored in the database and, when the number of matching candidates is more than one, matching the next extracted higher order moment with a respective higher order moment in the database, repeating said matching of the next extracted higher order moment until the number of matching candidates is equal to one.
Type:
Grant
Filed:
January 17, 2020
Date of Patent:
November 30, 2021
Assignee:
Raytheon Company
Inventors:
Huy P. Nguyen, Dieter G. Krausser, Pradyumna Kannan
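The spherical polygon area feature described above can be computed with Girard's theorem: on the unit sphere, the area of a convex polygon equals the sum of its interior angles minus (n-2)*pi. The sketch below assumes ordered vertices of a convex polygon given as unit direction vectors; the higher order moments are not shown, and the function name is an assumption.

```python
import numpy as np

def spherical_polygon_area(vertices: np.ndarray) -> float:
    """Area (steradians) of a convex spherical polygon on the unit sphere,
    via Girard's theorem: area = sum(interior angles) - (n - 2) * pi.
    `vertices` is an (n, 3) array of unit direction vectors in order."""
    n = len(vertices)
    angles = 0.0
    for i in range(n):
        a, b, c = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        ta = a - np.dot(a, b) * b  # tangent at b toward previous vertex
        tc = c - np.dot(c, b) * b  # tangent at b toward next vertex
        cosang = np.dot(ta, tc) / (np.linalg.norm(ta) * np.linalg.norm(tc))
        angles += np.arccos(np.clip(cosang, -1.0, 1.0))
    return angles - (n - 2) * np.pi
```

As a check, the triangle spanned by the three coordinate axes covers one octant of the sphere, an area of pi/2 steradians.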
Abstract: First and second confidence values are determined. The first confidence value corresponds to a likelihood that, from a position and an orientation of a mobile computing device, the mobile computing device is pointing to a visual code. The second confidence value corresponds to a likelihood that, from an image captured by the mobile computing device, the mobile computing device is pointing to the visual code. Whether the mobile computing device is pointing to the visual code is determined based on the first confidence value and the second confidence value.
Type:
Grant
Filed:
June 29, 2017
Date of Patent:
November 16, 2021
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Lucio Polese Cossio, Renato Oliveira da Silva
Abstract: A method and apparatus for detecting a moving target, electronic equipment, and a storage medium are provided. The method includes: obtaining a first frame image and a second frame image which are adjacent, and a rotation matrix and a translation matrix between the first and second frame images, where the first and second frame images include the same moving target; extracting first feature points from the first frame image; determining second feature points corresponding to the first feature points from the second frame image based on the second frame image and the first feature points; determining distances between the second feature points and corresponding epipolar lines based on the rotation matrix and the translation matrix; determining third feature points located on the moving target based on the distances; and detecting the moving target based on the third feature points.
Type:
Grant
Filed:
December 30, 2020
Date of Patent:
November 16, 2021
Assignee:
BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD
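The distance test above can be sketched with standard epipolar geometry: the essential matrix E = [t]x R maps a point in the first frame to its epipolar line in the second frame, and matched points on static scenery lie near that line, while large distances flag candidate moving-target points. The sketch below assumes normalized camera coordinates, and the names are illustrative.

```python
import numpy as np

def epipolar_distances(pts1, pts2, R, t):
    """For matched points between two frames, return each second-frame
    point's distance to the epipolar line induced by the camera motion.

    pts1, pts2: (N, 2) matched points in normalized camera coordinates.
    R: 3x3 rotation; t: length-3 translation between the frames."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # skew-symmetric matrix of t
    E = tx @ R                           # essential matrix
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])         # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = (E @ x1.T).T                 # epipolar line of each x1 in frame 2
    num = np.abs((lines * x2).sum(axis=1))
    return num / np.linalg.norm(lines[:, :2], axis=1)
```

For a pure sideways translation, each epipolar line is horizontal through the first point's height, so a match at the same height has distance zero and a vertically displaced match is flagged.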
Abstract: A method of identifying contiguities in images is disclosed. The contiguities are indicative of features and various qualities of an image, and may be used for identifying objects and/or relationships in images. Alternatively, the contiguities may be helpful in ensuring that an image has a desired switching factor, so as to create a desired effect when combined with other images in a composite image. A contiguity may be a group of picture elements that are adjacent to one another and form a continuous image element extending generally horizontally (e.g., diagonally or horizontally) across the image.
Abstract: A method for monitoring a gauge uses a machine learning model that was previously trained based on a set of training data to classify relative indicator positions between any pair of gauge images. The method receives an input indicative of a first reference gauge value and a first gauge image. The method classifies, using the machine learning model, the indicator position on the first gauge image relative to the indicator position on a first reference gauge image associated with the first reference gauge value and provides an output having a first value indicative of the indicator position of the first gauge image being on a first side of the indicator position of the first reference gauge image and a second value indicative of the indicator position of the first gauge image being on a second side of the indicator position of the first reference gauge image.
Abstract: System and techniques for calibrating a crop row computer vision system are described herein. An image set that includes crop rows and furrows is obtained. Models of the field are searched to find a model that best fits the field. A calibration parameter is extracted from the model and communicated to a receiver.
Type:
Grant
Filed:
July 11, 2019
Date of Patent:
November 9, 2021
Assignee:
Raven Industries, Inc.
Inventors:
Yuri Sneyders, John D. Preheim, Jeffrey Allen Van Roekel