Abstract: In the fingerprint identification system according to the present disclosure, a fingerprint sensor collects multiple frames of fingerprint images input by a user with a sliding motion. A judging unit determines whether, among the multiple frames, there is a first overlap region between a current frame and a previous frame; if so, the judging unit removes the first overlap region from the current frame and superposes the previous frame with the current frame to form a superposed fingerprint image. Once the judging unit completes judgment of all the frames, a template fingerprint image is obtained, and a processing unit saves characteristic points of the complete template fingerprint image.
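The frame-stitching loop this abstract describes can be sketched as follows. This is a minimal sketch in NumPy: the overlap region is assumed here to be a known number of rows, whereas the judging unit in the patent determines it by comparing the current and previous frames.

```python
import numpy as np

def stitch_frames(frames, overlap_rows):
    """Stitch sequential fingerprint frames by removing the overlap
    each new frame shares with its predecessor and appending the rest.

    frames: list of 2-D arrays (rows x cols), captured top-to-bottom.
    overlap_rows: number of leading rows of each new frame that repeat
    the tail of the previous frame (assumed known for this sketch).
    """
    template = frames[0]
    for frame in frames[1:]:
        # Remove the first overlap region from the current frame ...
        novel = frame[overlap_rows:]
        # ... and superpose (append) the remainder onto the template.
        template = np.vstack([template, novel])
    return template
```

For two 4-row frames that share their last/first two rows, the stitched template has 4 + (4 - 2) = 6 rows.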
Abstract: Methods and systems for performing one or more functions for a specimen using output simulated for the specimen are provided. One system includes one or more computer subsystems configured for acquiring output generated for a specimen by one or more detectors included in a tool configured to perform a process on the specimen. The system also includes one or more components executed by the one or more computer subsystems. The one or more components include a learning based model configured for performing one or more first functions using the acquired output as input to thereby generate simulated output for the specimen. The one or more computer subsystems are also configured for performing one or more second functions for the specimen using the simulated output.
January 9, 2017
Date of Patent:
July 23, 2019
Kris Bhaskar, Scott Young, Mark Roulo, Jing Zhang, Laurent Karsenti, Mohan Mahadevan, Bjorn Brauer
Abstract: A method of matching stereo images includes: determining first information indicating which orientations are dominant at each pixel location of a right image and a left image among the stereo images; determining second information indicating, for each pixel location in the right image and the left image, whether the pixel lies on a purely horizontal edge; detecting points in the left image and the right image based on the first information and the second information; and performing a matching on the left image and the right image using the detected points to estimate a sparse disparity map.
Abstract: Disclosed are an OCR-based system and method for recognizing a map image. The system for map recognition comprises a recognition unit for recognizing text on an inputted image by means of a text recognition technology; a search unit for searching, in a database comprising toponym data, for a toponym corresponding to the text; and a provision unit for providing map information comprising the toponym as a result of the recognition of the inputted image.
Abstract: A biometric authentication device includes: a memory; and a processor coupled to the memory and configured to execute a process, the process comprising: acquiring a plurality of sets of palm images taken under illumination conditions of an illumination source that differ from one another, the images of the palm being obtained with light emitted from the illumination source; adjusting a time interval at which an imaging device acquires the image sets, in accordance with a condition for imaging the palm; extracting a biological feature from each of the image sets; and comparing each extracted biological feature with a biological feature registered in advance.
Abstract: Method and system for managing multiple impressions of a patient's jaw for an orthodontic treatment is provided. The method includes scanning at least a first impression and a second impression of the same jaw for the orthodontic treatment; determining if the first jaw impression and the second jaw impression have distortion in different areas; selecting the first jaw impression or the second jaw impression as a base impression; and replacing distorted tooth data from the base impression with data for the same tooth from a non-base impression. The method also includes scanning at least a first jaw impression for the orthodontic treatment; scanning a bite impression for the orthodontic treatment; matching the scanned first jaw impression with the scanned bite impression; comparing bite information with a tooth occlusal surface; and determining if reconstruction is to be performed on the tooth occlusal surface.
Abstract: Facial expressions are recognized using relations determined by class-to-class comparisons. In one example, descriptors are determined for each of a plurality of facial expression classes. Pair-wise facial expression class-to-class tasks are defined. A set of discriminative image patches is learned for each task using labelled training images. Each image patch is a portion of an image. Differences in the learned image patches in each training image are determined for each task. A relation graph is defined for each image for each task using the differences. A final descriptor is determined for each image by stacking and concatenating the relation graphs for each task. Finally, the final descriptors of the training images are fed into a training algorithm to learn a final facial expression model.
Abstract: The present invention relates to a method for the three-dimensional detection of objects, in which an image is detected and is compared to a known reference image and mutually corresponding objects in the images are identified by means of a correlation process. A binarization scheme is used in the correlation process and compares randomly selected point pairs to one another. The point pairs are fixed by means of a plurality of iterations.
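The binarization scheme that compares randomly selected point pairs resembles a BRIEF-style binary descriptor. A minimal sketch under that assumption, with the point pairs drawn once and then held fixed (the patent fixes them over a plurality of iterations) so that descriptors from different patches remain comparable:

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """Binarize an image patch by intensity comparisons at point pairs:
    bit i is 1 if the intensity at the first point of pair i is less
    than at the second. `pairs` has shape (n_bits, 2, 2), each entry
    two (row, col) points inside the patch."""
    bits = [int(patch[r1, c1] < patch[r2, c2])
            for (r1, c1), (r2, c2) in pairs]
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Correlation score between two binary descriptors: the number of
    differing bits (lower means a better match)."""
    return int(np.count_nonzero(d1 != d2))
```

Matching then amounts to finding, for each patch in one image, the patch in the reference image with the smallest Hamming distance.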
Abstract: A method for rules based data extraction includes extracting content from a document by executing a set of rules on the document. Executing the set of rules includes, for a rule in the set: obtaining, from a runtime context and based on a rule definition, an input for the rule; executing, using the input for the rule, rule code for the rule to obtain output, where the rule code for the rule is distinct from the rule definition; and storing the output in the runtime context. The method further includes extracting content from the runtime context, and storing the extracted content.
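The separation of rule definition, rule code, and runtime context might be sketched as follows. All names here are hypothetical; the patent does not specify how rules are encoded.

```python
# Hypothetical sketch: each rule definition names its input; the rule
# code is a separate callable; outputs flow back into a shared
# runtime context, from which content is finally extracted.
rules = [
    {"name": "words", "input": "document",
     "code": lambda doc: doc.split()},
    {"name": "word_count", "input": "words",
     "code": lambda ws: len(ws)},
]

def run_rules(document, rules):
    context = {"document": document}            # runtime context
    for rule in rules:
        arg = context[rule["input"]]            # input obtained from context
        context[rule["name"]] = rule["code"](arg)  # store rule output
    return context                              # content extracted from here
```

Because later rules read earlier rules' outputs from the context, the rule code never needs to know where its input came from, only the rule definition does.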
Abstract: A media capture device (MCD) that provides a multi-sensor, free flight camera platform with advanced learning technology to replicate the desires and skills of the purchaser/owner is provided. Advanced algorithms may uniquely enable many functions for autonomous and revolutionary photography. The device may learn about the user, the environment, and/or how to optimize a photographic experience so that compelling events may be captured and composed into efficient and emotional sharing. The device may capture better photos and videos as perceived by one's social circle of friends, and/or may greatly simplify the process of using a camera to the ultimate convenience of full autonomous operation.
Abstract: A method for fine-grained object recognition in a robotic system is disclosed that includes obtaining an image of an object from an imaging device. Based on the image, a deep category-level detection neural network is used to detect pre-defined categories of objects. A feature map is generated for each pre-defined category of object detected by the deep category-level detection neural network. Embedded features are generated, based on the feature map, using a deep instance-level detection neural network corresponding to the pre-defined category of the object, wherein each pre-defined category of an object comprises a corresponding different instance-level detection neural network. An instance-level of the object is determined based on classification of the embedded features.
Abstract: A spatio-temporal awareness engine combines a low-resolution tracking process and a high-resolution tracking process to employ an array of imaging sensors to track an object within the visual field. The system utilizes a low-resolution conversion through noise filtering and feature consolidation to load-balance the more computationally-intensive aspects of object tracking, allowing for a more robust system while utilizing fewer computer resources. A process for target recovery and object path prediction in a robotic drone may include tracking targets using a combination of visual and acoustic multimodal sensors, operating a camera as a main tracking sensor of the multimodal sensors and feeding output of the camera to a spatio-temporal engine, complementing the main tracking sensor with non-visual, fast secondary sensors to assign rough directionality to a target tracking signal, and applying the rough directionality to prioritize visual scanning by the main tracking sensor.
October 10, 2017
Date of Patent:
June 18, 2019
Airspace Systems, Inc.
Guy Bar-Nahum, Hoang Anh Nguyen, Jasminder S. Banga
Abstract: A control system configured to use a database and a data storage area includes a control device for controlling a process executed on an object; and an image processing device arranged in association with the control device for taking an image of the object and processing image data acquired from taking the image of the object. The control device and the image processing device may work independently or collaboratively to send the database at least one of an attribute value or results information, and designation information in association with each other for the same object, the attribute value managed by the control device and corresponding to any attribute defined in the database, the results information representing a process result from the image processing device, and the designation information specifying a storage destination in the data storage area for the image data acquired from taking an image of the object.
Abstract: A method of providing a handwriting style correction function and an electronic device adapted to the method are provided. The electronic device includes: a touch screen; a processor electrically connected to the touch screen; and a memory electrically connected to the processor.
Abstract: Disclosed is a method of anonymization of digital images through elimination of the Photo-Response Non-Uniformity (PRNU) noise pattern, which is unique to the imaging sensor and latent in all digital images taken by digital cameras or devices with imaging sensors.
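PRNU is a multiplicative sensor artifact: each pixel's sensitivity deviates slightly from nominal, so an observed image is roughly the clean image times (1 + K) for a sensor-specific pattern K. A minimal sketch of suppressing that pattern, assuming an estimate of K is already available (in practice K is estimated from many images taken by the same sensor; the patent's estimation procedure is not reproduced here):

```python
import numpy as np

def anonymize(image, prnu_k, strength=1.0):
    """Suppress a sensor's PRNU pattern in `image`.

    Model: observed = clean * (1 + K) + other noise, where K is the
    per-pixel PRNU factor. Dividing the observed image by (1 + K)
    removes the pattern that links the photo to a specific camera.
    """
    img = image.astype(np.float64)
    cleaned = img / (1.0 + strength * prnu_k)
    return np.clip(cleaned, 0, 255)
```

With `strength` below 1.0 the pattern is only attenuated, trading anonymity for less alteration of the image.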
Abstract: Disclosed are an apparatus and a method for determining lesion similarity of a medical image. The apparatus for determining lesion similarity according to one aspect of the present invention may comprise: an image input unit for receiving a reference image comprising a reference lesion area, and a target image comprising a target lesion area; and a similarity determination unit for determining similarity of the reference lesion area and the target lesion area by applying an advantage weight, which increases toward the center of the reference lesion area, to pixels included in a first area of the reference lesion area, and a penalty weight, which increases with distance from the reference lesion area, to pixels included in a second area of the target lesion area.
Abstract: A biometrics authentication apparatus and a biometrics authentication method are disclosed. The biometrics authentication apparatus includes: a light source configured to emit a light; a modulator configured to change a spatial distribution of the light that is scattered and reflected from a region of interest of a user; a detector configured to detect an integral power of the light that is scattered from the region of interest; and a processor configured to obtain a measurement signal based on the integral power of the light, compare the measurement signal with a reference signal stored in a memory, and determine whether to authenticate the user based on a degree of match between the measurement signal and the reference signal.
February 28, 2017
Date of Patent:
May 21, 2019
SAMSUNG ELECTRONICS CO., LTD.
Alexey Dmitrievich Lantsov, Alexey Andreevich Shchekin, Maksim Vladimirovich Riabko, Anton Sergeevich Medvedev, Sergey Nikolaevich Koptyaev
Abstract: An apparatus for enrollment and verification of a user comprising one-touch, two-factor biometric sensors. An enrollment process creates the baseline abstract identity information for the user. Subsequent verification processes capture new abstract identity information to be matched to the baseline on an encrypted server. A first camera takes a first surface image of a portion of a user's finger to capture the pattern of friction ridges and valleys and intersection points. A second camera takes a second subsurface image of a vein map below the surface of the user's finger. These are then fused into a binary format that cannot be reversed to reacquire either the fingerprint or the vein map. The enrollment data and the verification data are then compared to each other in order to determine if they match for authentication of the user.
Abstract: A method of generating and communicating lane information from a host vehicle to a vehicle-to-vehicle (V2V) network includes collecting visual data from a camera, detecting a lane within the visual data, generating a lane classification for the lane based on the visual data, assigning a confidence level to the lane classification, generating a lane distance estimate from the visual data, generating a lane model from the lane classification and the lane distance estimate, and transmitting the lane model and the confidence level to the V2V network.
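The payload this method transmits bundles the lane classification, its confidence level, and the distance estimate. A sketch of such a payload and its serialization; the field names and the JSON encoding are assumptions, since the patent does not specify a V2V message format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class LaneModel:
    """Hypothetical V2V payload combining the quantities the method
    generates from the camera's visual data."""
    classification: str    # e.g. "dashed-white", "solid-yellow"
    confidence: float      # confidence level assigned to the classification
    distance_m: float      # lane distance estimate, in meters

def to_v2v_message(model: LaneModel) -> str:
    """Serialize the lane model and its confidence for transmission
    on the V2V network."""
    return json.dumps(asdict(model))
```

A receiving vehicle can weight the transmitted lane model by its confidence level before fusing it with its own sensing.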
June 30, 2016
Date of Patent:
May 7, 2019
DURA OPERATING, LLC
Donald Raymond Gignac, Aaron Evans Thompson, Danish Uzair Siddiqui, Rajashekhar Patil, Gordon M. Thomas
Abstract: A face detecting method and a face detecting system are provided. The face detecting method includes the following steps: At least one original image block is received. The original image block is transformed by a transforming unit to obtain a plurality of different transformed image blocks. Whether each of the transformed image blocks contains a face is detected by a detecting unit according to a single shared face database, and a detecting result value is outputted accordingly. The transformed image blocks are detected by a plurality of parallel processing cores. Whether a maximum of the detecting result values is larger than a threshold value is determined by a determiner. If the maximum of the detecting result values is larger than the threshold value, the determiner deems that the original image block contains a face.
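The decision logic of this method, stripped to its essentials, is a max over per-transform detector scores compared against a threshold. A minimal sketch, where `transforms` and `detect` stand in for the patent's transforming and detecting units (a real system would run the detections on parallel cores, as the abstract notes):

```python
def contains_face(original_block, transforms, detect, threshold):
    """Decide whether an image block contains a face: apply each
    transform to the block, score every transformed block with one
    detector (backed by a single face database), and deem the block
    a face if the best score exceeds the threshold."""
    scores = [detect(t(original_block)) for t in transforms]
    return max(scores) > threshold
```

Transforming the block (rather than training per-pose detectors) is what lets a single face database cover all the variants.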