Abstract: An image matching technique locates feature points in a template image such as a logo and then does the same in a test image. Feature points from the template image are then matched to the feature points in the test image. An additional matching technique boosts the number of points that match each other. The additional points improve the match quality and help discriminate true from false positive matches.
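The matching step described above can be sketched with a nearest-neighbor ratio test, a standard way to discriminate true from false matches; the toy 2-D descriptors and the 0.75 ratio below are illustrative assumptions, not details from the abstract.

```python
import math

def match_features(template_desc, test_desc, ratio=0.75):
    """Match template descriptors to test descriptors using a
    nearest-neighbor ratio test: a match is kept only when the best
    candidate is clearly closer than the second-best candidate."""
    matches = []
    for i, d in enumerate(template_desc):
        dists = sorted((math.dist(d, t), j) for j, t in enumerate(test_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Toy 2-D descriptors: template points 0 and 1 have clear matches in the
# test image; point 2 is ambiguous and is rejected by the ratio test.
template = [(0.0, 0.0), (10.0, 10.0), (5.0, 5.0)]
test     = [(0.1, 0.0), (10.0, 9.9), (4.0, 5.0), (6.0, 5.0)]
print(match_features(template, test))
```

Rejecting ambiguous candidates this way trades match count for precision, which is why a second, complementary matching pass (as the abstract describes) can usefully boost the number of surviving matches.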
Abstract: In an example, a method and apparatus for obtaining an image mask are provided. After a magnitude image and a phase image of a to-be-processed image are obtained, magnitude coherent data of each pixel point in the magnitude image and phase coherent data of each pixel point in the phase image may be calculated. Then, a binarization threshold processing may be performed on the magnitude coherent data of each pixel point in the magnitude image to obtain a magnitude image mask. A binarization threshold processing may be performed on the phase coherent data of each pixel point in the phase image to obtain a phase image mask. In this way, an image mask of the to-be-processed image may be obtained by using the magnitude image mask and the phase image mask.
Filed: August 5, 2016
Date of Patent: January 29, 2019
Assignee: Shenyang Neusoft Medical Systems Co., Ltd.
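The masking scheme in the abstract above can be sketched as follows; the per-pixel "coherence" measure (local similarity to 4-connected neighbors), the threshold value, and the AND-combination of the two masks are illustrative assumptions, since the abstract does not define them.

```python
def coherence(img, x, y):
    """Toy per-pixel coherence: negative mean absolute difference to the
    4-connected neighbors (higher = more locally consistent)."""
    h, w = len(img), len(img[0])
    diffs = [abs(img[y][x] - img[ny][nx])
             for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
             if 0 <= nx < w and 0 <= ny < h]
    return -sum(diffs) / len(diffs)

def binarize(img, threshold):
    """Threshold the per-pixel coherence of an image into a 0/1 mask."""
    return [[1 if coherence(img, x, y) >= threshold else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def combine(mask_a, mask_b):
    """Final mask: keep a pixel only if both masks keep it."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]

magnitude = [[5, 5, 5], [5, 9, 5], [5, 5, 5]]   # one incoherent pixel
phase     = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # fully coherent
mask = combine(binarize(magnitude, -1.0), binarize(phase, -1.0))
print(mask)
```

The incoherent magnitude pixel (and its neighbors, whose local statistics it disturbs) drops out of the combined mask even though the phase image is uniformly coherent.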
Abstract: A tracker is described which comprises an input configured to receive captured sensor data depicting an object. The tracker has a processor configured to access a rigged, smooth-surface model of the object and to compute values of pose parameters of the model by calculating an optimization to fit the model to data related to the captured sensor data. Variables representing correspondences between the data and the model are included in the optimization jointly with the pose parameters.
Filed: December 29, 2015
Date of Patent: January 22, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathan James Taylor, Thomas Joseph Cashman, Andrew William Fitzgibbon, Toby Sharp, Jamie Daniel Joseph Shotton
Abstract: There are provided an apparatus and method for diagnosis using a medical image. The apparatus includes: an analyzing unit configured to detect a lesion area, and generate a group of candidate lesion areas with respect to the detected lesion area; and an interface unit configured to arrange one or more candidate lesion areas selected among the group of candidate lesion areas with information about each of the one or more selected candidate lesion areas in a first region of an interface.
Abstract: A novel disparity computation technique is presented which comprises multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generating a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes are also presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines whose intersections define such points. It then defines a unique box from these two intersections as well as the associated lines. Orthographic projection is then attempted, to recenter the box perspective. This is followed by the extraction of the three-dimensional information that is associated with the box, and finally, the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object.
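The compositing step can be sketched with a per-pixel median over the individual disparity maps; the abstract does not specify the fusion rule, so the median (chosen here because it suppresses outliers present in any single map) is an illustrative assumption.

```python
import statistics

def composite_disparity(maps):
    """Fuse several disparity maps (one per feature space) into a
    composite map by taking the per-pixel median across the maps."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[statistics.median(m[y][x] for m in maps)
             for x in range(w)] for y in range(h)]

# Three toy 2x2 disparity maps; the third has an outlier at (0, 0)
# that the median removes from the composite.
d1 = [[4, 4], [2, 2]]
d2 = [[4, 5], [2, 3]]
d3 = [[9, 4], [2, 2]]
print(composite_disparity([d1, d2, d3]))
```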
Abstract: A data set may be compressed by predicting values for the points of the data set. A comparative value may then be determined between the predicted value and the actual value for each particular point of the data set. The comparative values for the particular points of the data set may then be encoded.
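The predict-and-encode-residuals scheme can be sketched with the simplest possible predictor, "each value equals the previous one"; the predictor choice is an illustrative assumption, not a detail from the abstract.

```python
def encode(values):
    """Predict each value as the previous one and store only the
    residual (actual - predicted); the first value is predicted as 0."""
    residuals, prev = [], 0
    for v in values:
        residuals.append(v - prev)
        prev = v
    return residuals

def decode(residuals):
    """Invert the encoding by accumulating the residuals."""
    values, prev = [], 0
    for r in residuals:
        prev += r
        values.append(prev)
    return values

data = [100, 101, 103, 103, 104]
enc = encode(data)
print(enc)   # small residuals entropy-code far better than the raw values
```

Good predictions make the residuals cluster near zero, which is exactly what makes a downstream entropy coder effective.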
Abstract: A sample set of images is received. Each image in the sample set may be associated with one or more social cues. Correlation of each image in the sample set with an image class is scored based on the one or more social cues associated with the image. Based on the scoring, a training set of images to train a classifier is determined from the sample set. In an embodiment, an extent to which an evaluation set of images correlates with the image class is determined. The determination may comprise ranking a top scoring subset of the evaluation set of images.
Abstract: A two-dimensional code is attached to a location of a reagent storage unit which is visually recognizable from the outside, and a coordinate position of the two-dimensional code in a coordinate system of the two-dimensional code and coordinate information of an installation position of a reagent bottle are held. After that, an image of the two-dimensional code is captured by a portable terminal so that a coordinate system of an image capture unit of the portable terminal is converted into the coordinate system of the two-dimensional code using AR technology. The coordinate information of the installation position of the reagent bottle in the coordinate system of the two-dimensional code is regarded as positional coordinates in the captured image on the basis of the conversion, thereby ascertaining the position of the reagent bottle on the captured image and displaying the ascertained position on a display unit.
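The coordinate conversion at the heart of the abstract above, mapping a position in the 2-D code's coordinate system into captured-image pixels, can be sketched as an affine transform; the transform values, function name, and bottle position below are illustrative (in a real system the transform would be estimated from the detected code via the AR toolkit).

```python
def to_image_coords(point, transform):
    """Map a point in the 2-D code's coordinate system into captured-image
    pixel coordinates using a 2x3 affine transform [[a, b, tx], [c, d, ty]]."""
    x, y = point
    (a, b, tx), (c, d, ty) = transform
    return (a * x + b * y + tx, c * x + d * y + ty)

# Illustrative transform: the code appears scaled by 2 in the captured
# image and offset by (100, 50) pixels.
T = [[2, 0, 100], [0, 2, 50]]
bottle_pos = (10, 20)   # reagent bottle position in code coordinates
print(to_image_coords(bottle_pos, T))
```

With the transform known, every stored installation position can be projected into the captured frame and overlaid on the display unit.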
Abstract: Methods and systems for creating an image-based measurement model based only on measured, image-based training data are presented. The trained, image-based measurement model is then used to calculate values of one or more parameters of interest directly from measured image data collected from other wafers. The image-based measurement models receive image data directly as input and provide values of parameters of interest as output. In some embodiments, the image-based measurement model enables the direct measurement of overlay error. In some embodiments, overlay error is determined from images of on-device structures. In some other embodiments, overlay error is determined from images of specialized target structures. In some embodiments, image data from multiple targets, image data collected by multiple metrologies, or both, is used for model building, training, and measurement. In some embodiments, an optimization algorithm automates the image-based measurement model building and training process.
Abstract: An object of the present invention is to provide a fitting support device and method which make it possible to reliably select apparel such as clothes that match user's appearance. The fitting support device includes: a color-characteristic processing unit 100 that acquires color characteristic data relating to user's skin color on the basis of captured image data; a body processing unit 101 that colors, on the basis of the color characteristic data, three-dimensional body shape data corresponding to body shape data on a user to thereby create body image data; a color-pattern processing unit 102 that acquires color pattern data corresponding to the color characteristic data, on the basis of clothing data; a wearing processing unit 103 that creates wearing image data on the basis of the body image data and the color pattern data; and a fitting processing unit 104 that creates fitting image data by synthesizing head portion image data on the user and the wearing image data.
Abstract: An image processing apparatus circuitry receives first image data from a first image capture device of an area adjacent to an automobile and also receives second image data from a second image capture device of the adjacent area. The circuitry combines the first image data with the second image data to form composite image data of a junction region of at least a portion of the adjacent area. The circuitry changes over time the respective image areas taken from the first image capture device and second image capture device to form the composite image data of the junction region.
Abstract: The present principles relate to a method and a device for reducing noise in a component of a picture. The method includes obtaining a low-pass filtered component by low-pass filtering the component of the picture. For a current pixel in the component of the picture and for at least one current neighboring pixel of the current pixel, a distance is computed, relative to the current neighboring pixel, between the value of the current pixel in the low-pass filtered component and the value of the current neighboring pixel in the low-pass filtered component. When the distances relative to the at least one neighboring pixel of the current pixel are lower than a first threshold, the value of the current pixel in the component of the picture is modified according to a linear combination of the value of the current pixel in the component of the picture and the value of the current pixel in the low-pass filtered component.
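A 1-D sketch of the scheme, assuming a 3-tap mean as the low-pass filter, a fixed distance threshold, and an equal-weight linear combination (all illustrative, since the abstract leaves them open):

```python
def denoise(pixels, threshold=2.0, alpha=0.5):
    """Low-pass filter the signal, then blend a pixel toward its filtered
    value only where the filtered signal is locally flat (all neighbor
    distances below the threshold), so edges are left untouched."""
    n = len(pixels)
    low = []
    for i in range(n):
        window = pixels[max(0, i - 1):i + 2]
        low.append(sum(window) / len(window))
    out = list(pixels)
    for i in range(1, n - 1):
        dists = [abs(low[i] - low[i - 1]), abs(low[i] - low[i + 1])]
        if all(d < threshold for d in dists):
            out[i] = (1 - alpha) * pixels[i] + alpha * low[i]
    return out

noisy = [10, 12, 10, 11, 50, 50, 51]   # noisy flat area, then an edge
print(denoise(noisy))
```

Only the pixel inside the flat region is smoothed; pixels near the 11-to-50 edge fail the distance test and keep their original values, which is the point of gating the blend on the low-pass distances.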
Abstract: An image alignment method and apparatus, where the method and apparatus include obtaining image information of two to-be-aligned images; determining, using a cross-correlation measurement model, a first coordinate offset according to the image information of the two images, where the first coordinate offset is used to indicate position deviations of to-be-aligned pixels between the two images in a coordinate system; and aligning the two images according to coordinates of pixels in the first image in the coordinate system and the first coordinate offset.
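The cross-correlation step can be sketched in 1-D: the estimated offset is the shift that maximizes the overlap correlation between the two signals. The brute-force search and the toy signals are illustrative stand-ins for the patent's measurement model.

```python
def best_offset(a, b, max_shift=3):
    """Estimate the coordinate offset between two 1-D signals as the
    shift of `b` that maximizes their overlap cross-correlation."""
    best, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = sum(a[i] * b[i - shift]
                    for i in range(len(a))
                    if 0 <= i - shift < len(b))
        if score > best_score:
            best, best_score = shift, score
    return best

a = [0, 0, 1, 5, 1, 0, 0]
b = [1, 5, 1, 0, 0, 0, 0]   # same signal, shifted left by 2 samples
print(best_offset(a, b))
```

In 2-D the same search runs over (dx, dy) pairs (or is done in the frequency domain); the winning offset is then applied to the pixel coordinates to align the images.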
Abstract: Disclosed are systems and methods for improving interactions with and between computers in content communicating, rendering, generating, hosting and/or providing systems supported by or configured with computing devices, servers and/or platforms. The systems interact to improve the quality of data used in processing interactions between or among processors in such systems for determining obscured portions of displayed digital content. The disclosed method and apparatus involve acquiring and recording coordinates of each pixel in a digital image, and marking the pixels located at a boundary of the image as boundary pixels. The pixels located at a first region block are extracted and marked as obstruction pixels. An obstructed cutting space area corresponding to each pixel is determined based on positional relations of each pixel in the image. An image obstruction score is calculated based on the cutting space areas and utilized for rendering the pixels of the image.
Abstract: A tire inspection line includes first and second inspection posts and a transfer apparatus. The first post is for macroscopic inspection and includes a driver for rotating a tire, a macro-image acquisition device for acquiring a macroscopic image of the tire, and a first processor for analyzing the macroscopic image by digital image processing, comparing the macroscopic image with a reference image, and detecting deviations in shape. The second post is for microscopic inspection and includes a driver for rotating the tire, a micro-image acquisition device for acquiring a microscopic image of the tire, and a second processor for analyzing the microscopic image by digital image processing, comparing the microscopic image with a reference image representing a desired surface condition of the tire, and detecting local surface deviations. The transfer apparatus is for transferring the tire from the first post to a discharge point or to the second post.
Filed: September 24, 2014
Date of Patent: November 6, 2018
Assignee: Compagnie Generale des Etablissements Michelin
Abstract: A device and method for three-dimensional reconstruction of a scene by image analysis is provided. This device comprises an image acquisition device to capture images of the scene, an analysis device to calculate a three-dimensional reconstruction of the scene from at least one image of the scene taken by the image acquisition device, and a projection device to project a first light pattern and a second light pattern, which are complementary, on the examined scene, the first light pattern and the second light pattern being projected along separate projection axes forming a non-zero angle between them, so as to be superimposed while forming a uniform image with homogenous intensity in a projection plane.
Abstract: The present application discloses an image interpolation method for interpolating a pixel and enhancing an edge in an image, comprising detecting an edge position in an image; obtaining edge characteristics associated with the edge position; determining whether an interpolation point is located within an edge region based on the edge characteristics of an array of p×q pixels surrounding the interpolation point, wherein p and q are integers larger than 1; determining the edge direction of an interpolation point located within the edge region, wherein the edge direction is normal to the gradient direction; classifying the edge direction into m angle subclasses and n angle classes, wherein each angle class comprises one or more subclasses, m and n are integers, and n≤m; selecting a one-dimensional horizontal interpolation kernel based on the angle class; performing a horizontal interpolation using the selected one-dimensional horizontal interpolation kernel; and performing a vertical interpolation using a one-dimensional vertical interpolation kernel.
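The separable horizontal-then-vertical interpolation can be sketched with a simple 2-tap linear kernel; a real implementation of the method above would switch kernels per edge-angle class, so the fixed kernel here is an illustrative assumption.

```python
def interp_row(row, frac, kernel=lambda t: (1 - t, t)):
    """Insert one interpolated sample between each pair of neighbors in a
    row, using a 2-tap kernel evaluated at fractional position `frac`."""
    w = kernel(frac)
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append(w[0] * a + w[1] * b)
    out.append(row[-1])
    return out

def upscale(image, frac=0.5):
    """Separable ~2x upscaling: interpolate horizontally along each row,
    then vertically along each column of the intermediate result."""
    rows = [interp_row(r, frac) for r in image]
    cols = [interp_row(list(c), frac) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

img = [[0, 4], [8, 12]]
print(upscale(img))
```

Swapping the lambda for a longer directional kernel (chosen from the angle class) is what lets the method interpolate along an edge rather than across it.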
Abstract: Identifying a masked suspect is one of the toughest challenges in biometrics. It is an important problem faced in many law-enforcement applications on almost a daily basis. In such situations, investigators often only have access to the periocular region of a suspect's face and, unfortunately, conventional commercial matchers are unable to process these images in such a way that the suspect can be identified. Herein, a practical method to hallucinate a full frontal face given only a periocular region of a face is presented. This approach reconstructs the entire frontal face based on an image of an individual's periocular region. By using an approach based on a modified sparsifying dictionary learning algorithm, faces can be effectively reconstructed more accurately than with conventional methods. Further, various methods presented herein are open set, and thus can reconstruct faces even if the algorithms are not specifically trained using those faces.
Filed: June 17, 2015
Date of Patent: October 30, 2018
Assignee: Carnegie Mellon University
Inventors: Felix Juefei-Xu, Dipan K. Pal, Marios Savvides
Abstract: One or more contemporaneous signature images are captured while a user generates an electronic signature for a document. When one or more contemporaneous signature images maps to a verification image, signature data representative of an electronic signature is associated with the document.