Abstract: A method, performed in an electronic device, for connecting to a target device is disclosed. The method includes capturing an image including a face of a target person associated with the target device and recognizing an indication of the target person. The indication of the target person may be a pointing object, a speech command, and/or any suitable input command. The face of the target person in the image is detected based on the indication and at least one facial feature of the face in the image is extracted. Based on the at least one facial feature, the electronic device is connected to the target device.
Type:
Grant
Filed:
March 19, 2014
Date of Patent:
March 1, 2016
Assignee:
QUALCOMM Incorporated
Inventors:
Kang Kim, Min-Kyu Park, Yongwoo Cho, Kyu Woong Hwang, Duck-Hoon Kim
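The flow claimed above (indication → face detection → feature extraction → connection) can be sketched as follows. This is a minimal illustration, not the patented implementation; all helper names, the pointing-coordinate input, and the feature-distance matching are assumptions.

```python
# Hypothetical sketch: pick the face nearest an indicated location, then
# match its facial feature against features registered for nearby devices.

def select_face_by_indication(faces, pointing_x):
    # Choose the detected face whose horizontal center is closest to the
    # location indicated by the pointing object (or other input command).
    return min(faces, key=lambda f: abs(f["center_x"] - pointing_x))

def connect_to_target(faces, pointing_x, device_registry):
    face = select_face_by_indication(faces, pointing_x)
    feature = face["feature"]  # facial feature extracted from the image
    # Match the feature against the feature registered for each device
    # using a squared-distance comparison (illustrative choice).
    best = min(device_registry,
               key=lambda d: sum((a - b) ** 2
                                 for a, b in zip(d["feature"], feature)))
    return best["device_id"]

faces = [{"center_x": 120, "feature": [0.1, 0.9]},
         {"center_x": 400, "feature": [0.8, 0.2]}]
registry = [{"device_id": "phone-A", "feature": [0.1, 0.85]},
            {"device_id": "phone-B", "feature": [0.75, 0.3]}]
print(connect_to_target(faces, pointing_x=390, device_registry=registry))
# → phone-B
```

In practice the faces, features, and registry would come from a camera pipeline and a device-discovery step; here they are stubbed so the selection logic is visible.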
Abstract: There is provided an image processing apparatus including: a prediction tap selection unit which selects, from a first image, a pixel to serve as a prediction tap used in a prediction operation for acquiring the pixel value of a target pixel in a second image obtained by converting the first image; a classification unit which classifies the target pixel into one of a plurality of classes; a tap coefficient output unit which outputs the tap coefficient of the class of the target pixel from among tap coefficients acquired by learning so as to minimize the error between a result of the prediction operation performed using a student image corresponding to the first image and a teacher image corresponding to the second image; and an operation unit which acquires the pixel value of the target pixel by performing the prediction operation using the tap coefficient and the prediction tap.
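The classify-then-predict structure described above can be sketched compactly. This is an illustrative toy, assuming a 1-bit ADRC-style class code and a dictionary of learned per-class coefficients; the actual tap shapes and learning procedure are not reproduced here.

```python
def classify(tap):
    # Form a class code by comparing each tap pixel to the tap mean
    # (a simple 1-bit ADRC-style classification).
    mean = sum(tap) / len(tap)
    code = 0
    for p in tap:
        code = (code << 1) | (1 if p >= mean else 0)
    return code

def predict_pixel(tap, coeffs_by_class):
    # Look up the learned tap coefficients for this pixel's class and form
    # the predicted target-pixel value as their inner product with the tap.
    w = coeffs_by_class[classify(tap)]
    return sum(wi * pi for wi, pi in zip(w, tap))

tap = [10, 20, 30, 40]                    # pixels selected from the first image
coeffs = {classify(tap): [0.25, 0.25, 0.25, 0.25]}  # stand-in learned coefficients
print(predict_pixel(tap, coeffs))         # → 25.0
```

In the apparatus as claimed, `coeffs_by_class` would be produced offline by least-squares learning on student/teacher image pairs; the uniform weights above are a placeholder.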
Abstract: A biometric information processing apparatus includes an aligning unit that aligns a hand image in a first vein image, which includes a vein pattern of one hand (the right hand or the left hand) as seen from one side of the hand (the palm side or the back side), with a hand image in a second vein image, which includes a vein pattern of the other hand as seen from the other side of the hand; and a match determining unit that determines a line element, among a plurality of line elements in the first vein image, that matches any of a plurality of line elements in the second vein image.
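Because one image is taken from the palm side and the other from the back side, the two views are related by a mirror flip; the match determination can then be a tolerance-based comparison of line elements. The sketch below assumes line elements are endpoint pairs and uses a simple endpoint-distance criterion, which is an illustrative stand-in for the apparatus's actual alignment and matching.

```python
def mirror(line):
    # A palm-side view of one hand corresponds to a horizontally mirrored
    # back-side view of the other, so flip x before comparing.
    (x1, y1), (x2, y2) = line
    return ((-x1, y1), (-x2, y2))

def endpoint_distance(a, b):
    # Sum of Euclidean distances between corresponding endpoints.
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b))

def matching_elements(first, second, tol=2.0):
    # Return the line elements of the first vein image that match any
    # mirrored line element of the second image within tolerance `tol`.
    mirrored = [mirror(l) for l in second]
    return [l for l in first
            if any(endpoint_distance(l, m) <= tol for m in mirrored)]

first = [((1, 0), (1, 5)), ((3, 0), (3, 5))]   # line elements, first image
second = [((-1, 0), (-1, 5))]                  # line elements, second image
print(matching_elements(first, second))        # → [((1, 0), (1, 5))]
```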
Abstract: Provided is a technology which enables further improvement of the accuracy of determination in pattern matching processing. A dictionary learning device 1 includes a score calculation unit 2 and a learning unit 3. The score calculation unit 2 calculates a matching score representing the degree of similarity between a sample pattern, which is a sample of a pattern likely to be subjected to pattern matching processing, and a degradation pattern resulting from a degrading processing applied to the sample pattern. The learning unit 3 learns a quality dictionary based on the calculated matching score and the degradation pattern. The quality dictionary is used in a processing to evaluate the degradation degree (quality) of a matching target pattern, that is, the pattern of an object on which the pattern matching processing is carried out.
Abstract: An electronic device and method identify a block of text in a portion of an image of the real world captured by a camera of a mobile device, slice sub-blocks from the block, identify characters in the sub-blocks that form a first sequence, and compare the first sequence to a predetermined set of sequences to identify a second sequence therein. The second sequence may be identified as recognized (as a modifier-absent word) when not associated with additional information. When the second sequence is associated with additional information, a check is made on pixels in the image, based on a test specified in the additional information. When the test is satisfied, a copy of the second sequence in combination with the modifier is identified as recognized (as a modifier-present word). Storage and use of modifier information in addition to a set of sequences of characters enables recognition of words with or without modifiers.
Abstract: Methods to select and extract tabular data from among strings returned by optical character recognition, to automatically process documents, including documents containing academic transcripts.
Type:
Grant
Filed:
June 13, 2014
Date of Patent:
February 2, 2016
Assignee:
Lexmark International Technology, SA
Inventors:
Ralph Meier, Harry Urbschat, Thorsten Wanschura, Johannes Hausmann
Abstract: A sensor for detecting fingerprints is provided having first and second substrates, a two-dimensional array of sensing elements formed on the first substrate, and a plurality of thin-film transistors or TFTs for controlling the sensing elements at pixel locations along the array. Each of the sensing elements detects one of electrical signals (e.g., capacitance, resistance, or impedance), temperature, or light via one of the first or second substrates representative of one or more fingerprints. The top of the second substrate or the bottom of the first substrate may provide a platen upon which one or more fingers can be disposed. The sensor may be utilized in a fingerprint scanner having one or more processors driving sensing elements or reading from sensing elements analog signals representative of one or more fingerprints, and generating an image representative of the one or more fingerprints from the analog signals.
Abstract: An image processing device comprises: a part-point specifying unit configured to specify a part point of an object; a feature-quantity extracting unit configured to extract one or a plurality of feature quantities from the pixel at a sampling point, or from a pixel group including that pixel, for each of a plurality of sampling points comprising the part point specified by the part-point specifying unit and at least one point on the image other than the part point, and to extract candidate feature quantities corresponding to the part point, constituted by the extracted feature quantities of the respective sampling points; and a feature-quantity generating unit configured to generate one or a plurality of comparison feature quantities corresponding to the part point, based on a predetermined standard, by using the candidate feature quantities extracted by the feature-quantity extracting unit.
Abstract: According to one embodiment, a person image processing apparatus includes: an input processor configured to input a plurality of pieces of image data captured at different times by an image capture module; an extraction module configured to extract a person display area showing a same person from each of the pieces of image data captured at the different times; a feature detector configured to detect a feature point showing a feature of a part of a person from the person display area extracted from each of the pieces of image data and acquire reliability of the part shown in the feature point; and a correction module configured to, when correcting the person display area subjected to input processing by the input processor, perform weighting based on the reliability of the feature point included in the person display area.
Abstract: Provided is a technique to generate a high resolution image of higher quality depending on a target area. An image processing apparatus includes a blur amount estimating unit configured to estimate a blur amount in a set target area in each of the images indicated by a plurality of low resolution image data, and a reference image selecting unit configured to select, depending on the estimated blur amount, a low resolution image to serve as a reference for generating the high resolution image.
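The selection step described above (estimate blur in the target area of each low-resolution candidate, then pick the least-blurred one as the reference) can be sketched as follows. The gradient-energy blur estimate is an illustrative assumption; the patent does not prescribe a particular blur metric.

```python
def blur_amount(img, area):
    # Estimate blur inside the target area as the inverse of local contrast:
    # low gradient energy between horizontal neighbours means more blur.
    x0, y0, x1, y1 = area
    energy = sum((img[y][x + 1] - img[y][x]) ** 2
                 for y in range(y0, y1) for x in range(x0, x1 - 1))
    return 1.0 / (1.0 + energy)

def select_reference(images, area):
    # The low-resolution image with the smallest estimated blur in the
    # target area becomes the reference for generating the high-resolution
    # image.
    return min(range(len(images)), key=lambda i: blur_amount(images[i], area))

blurred = [[100, 120, 110, 115], [105, 118, 112, 116]]  # smooth: little detail
sharp = [[0, 255, 0, 255], [255, 0, 255, 0]]            # strong local contrast
print(select_reference([blurred, sharp], area=(0, 0, 4, 2)))  # → 1
```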
Abstract: A computer-readable recording medium storing a program for causing a computer to execute an image accumulating procedure, the procedure includes: specifying a second image similar to a first image that is associated with text information; displaying the second image in an identifiable manner, and the text information on a display device; and storing the text information associated with image information that is related to the second image, based on instruction information that instructs a use of the text information with respect to the second image.
Abstract: Methods, and apparatus for performing methods, for selecting a classifier engine. Methods include: for two or more portions of a set of items of known classification, classifying members of each portion using a particular classifier engine; selecting a portion of the set of items whose classifications satisfy a first criterion; classifying members of the selected portion of the set of items using two or more classifier engines; and selecting a classifier engine whose classification of the selected portion of the set of items satisfies a second criterion.
Type:
Grant
Filed:
April 30, 2012
Date of Patent:
December 22, 2015
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Steven J Simske, Malgorzata M Sturgill, Jason S Aronoff, Paul S Everest
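The two-stage selection above can be sketched as follows. The accuracy thresholds and "best accuracy" reading of the two criteria are illustrative assumptions; the claims only require that each stage's results satisfy a criterion.

```python
def accuracy(engine, items):
    # Fraction of (value, known_label) items the engine classifies correctly.
    return sum(engine(x) == y for x, y in items) / len(items)

def select_engine(portions, engines, probe_engine, first_thresh):
    # Stage 1: classify each portion with one particular engine and keep a
    # portion whose accuracy satisfies the first criterion (a threshold here).
    selected = next(p for p in portions
                    if accuracy(probe_engine, p) >= first_thresh)
    # Stage 2: run all candidate engines on the selected portion and keep
    # the engine satisfying the second criterion (highest accuracy here).
    return max(engines, key=lambda e: accuracy(e, selected))

portions = [[(1, 1), (2, 0), (3, 1)], [(4, 0), (5, 1), (6, 0)]]
parity_engine = lambda x: x % 2      # classifies by parity (matches labels)
zero_engine = lambda x: 0            # always predicts class 0

best = select_engine(portions, [zero_engine, parity_engine],
                     probe_engine=parity_engine, first_thresh=0.9)
print(best(5))  # → 1
```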
Abstract: A method, non-transitory computer readable medium, and video analyzing computing device that generate a scene averaged frame value for each scene of an original video and each scene of a resembling video. Each of a subset of the scenes of the resembling video is mapped to a corresponding scene of the original video based on a comparison of the scene averaged frame values. A singular value score is generated for each frame of each of the subset of scenes of the resembling video and for each frame of the corresponding scene of the original video. Matching frames of each of the subset of scenes of the resembling video are identified based on a comparison of at least a subset of the singular value scores. A first watermark is extracted from the identified matching frames.
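The first stage of this pipeline, computing a scene averaged frame value and mapping resembling scenes to original scenes by nearest value, can be sketched as below. Frames are flattened pixel lists for brevity, and nearest-value mapping is an illustrative reading of "based on a comparison"; the singular-value and watermark stages are not reproduced.

```python
def scene_averaged_value(scene):
    # A scene is a list of frames; each frame is reduced to its mean pixel
    # value, and those per-frame means are averaged over the scene.
    frame_means = [sum(f) / len(f) for f in scene]
    return sum(frame_means) / len(frame_means)

def map_scenes(resembling, original):
    # Map each scene of the resembling video to the original scene with the
    # closest scene averaged frame value.
    originals = [scene_averaged_value(s) for s in original]
    return [min(range(len(originals)),
                key=lambda i: abs(originals[i] - scene_averaged_value(s)))
            for s in resembling]

original = [[[10, 10], [10, 10]], [[100, 100], [100, 100]]]  # two dark/bright scenes
resembling = [[[12, 12]], [[98, 98]]]                        # re-encoded copy
print(map_scenes(resembling, original))  # → [0, 1]
```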
Abstract: A method of host-directed illumination for verifying the validity of biometric data of a user is provided that includes capturing biometric data from a user with an authentication device during authentication and directing illumination of the biometric data from a host authentication system during the capturing operation. Moreover, the method includes comparing illumination characteristics of the captured biometric data against illumination characteristics expected to result from the directing operation, and determining that the user is a live user when the illumination characteristics of the captured biometric data match the illumination characteristics expected to result from the directing operation.
Type:
Grant
Filed:
May 27, 2014
Date of Patent:
December 1, 2015
Assignee:
DAON HOLDINGS LIMITED
Inventors:
Richard Jay Langley, Michael Peirce, Nicolas Jacques Jean Sezille
Abstract: The present disclosure concerns a method of verifying the presence of a living face in front of a camera (112), the method including: capturing by said camera a sequence of images of a face; detecting a plurality of features of said face in each of said images; measuring parameters associated with said detected features to determine whether each of a plurality of liveness indicators is present in said images; determining whether or not said face is a living face based on the presence in said images of a combination of at least two of said liveness indicators.
Abstract: A sensing circuit and method are disclosed. The circuit may comprise a plurality of transmitting or receiving elements, each defining a pixel location defined by a gap between the respective one of the plurality of transmitting or receiving elements and a single element of the opposing type, and a controller configured to provide or receive a probing signal to or from a group of at least two of the plurality of transmitting or receiving elements at the same time, thereby increasing the effective area providing the transmitting or the receiving of the probing signal for each pixel location imaged. The group of transmitting or receiving elements may form a symmetric pattern, which may be centered on the pixel location. The plurality of transmitting or receiving elements may form at least one linear pixel array with the respective single receiving or transmitting element.
Type:
Grant
Filed:
December 19, 2012
Date of Patent:
November 24, 2015
Assignee:
Synaptics Incorporated
Inventors:
Richard Alexander Erhart, Paul Wickboldt
Abstract: An image transformation apparatus includes a detection unit which is configured to detect, from each of a user image and a reference image, feature points of a face and angle information of the face, a feature points adjusting unit which is configured to adjust the feature points of the user image or the reference image by using the detected angle information, a face analysis unit which is configured to compare facial features contained in the user image and the reference image by using the adjusted feature points, and an image transformation unit which is configured to transform the user image by using a result of the comparison of the facial features from the face analysis unit.
Type:
Grant
Filed:
September 11, 2013
Date of Patent:
November 24, 2015
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Do-wan Kim, Alexander Limonov, Jin-sung Lee, Kil-soo Jung
Abstract: Provided is an image processing device including a corresponding pixel computation unit configured to, with respect to image data containing an image data area and an ignored area, replace a pixel in the ignored area having influence on a spatial analysis process with a pixel having no influence on the spatial analysis process.
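The replacement step described above amounts to overwriting ignored-area pixels with a value that cannot bias the spatial analysis. A minimal sketch, assuming a binary mask and a neutral constant (the patent leaves the choice of non-influencing value open):

```python
def neutralize_ignored_area(image, mask, neutral=0):
    # Replace every pixel flagged in the ignored-area mask with a neutral
    # value so it has no influence on the subsequent spatial analysis.
    return [[neutral if mask[y][x] else image[y][x]
             for x in range(len(row))]
            for y, row in enumerate(image)]

image = [[5, 7], [9, 11]]
mask = [[0, 1], [0, 0]]   # top-right pixel belongs to the ignored area
print(neutralize_ignored_area(image, mask))  # → [[5, 0], [9, 11]]
```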
Abstract: A backward-compatible stereo image processing system and a method of generating a backward-compatible stereo image. One embodiment of the backward-compatible stereo image processing system includes: (1) first and second viewpoints for an image, (2) an intermediate viewpoint for the image, and (3) first and second output channels configured to provide respective images composed of high spatial frequency content of the intermediate viewpoint and respective low spatial frequency content of the first and second viewpoints.
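The channel composition above (high spatial frequencies from the intermediate viewpoint, low frequencies from each outer viewpoint) can be sketched on 1-D scanlines. The moving-average low-pass filter is an illustrative assumption; the patent does not fix a particular filter.

```python
def low_pass(signal, k=3):
    # Simple moving-average low-pass over a 1-D scanline.
    half = k // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def output_channel(view, intermediate):
    # Each output channel combines the high spatial frequency content of
    # the intermediate viewpoint with the low spatial frequency content of
    # one of the two outer viewpoints.
    high = [p - lp for p, lp in zip(intermediate, low_pass(intermediate))]
    return [h + lp for h, lp in zip(high, low_pass(view))]

intermediate = [10, 10, 10, 10, 10]  # flat: contributes no high frequencies
left_view = [0, 0, 10, 0, 0]
print(output_channel(left_view, intermediate))
```

With a flat intermediate signal the high-frequency term vanishes, so the channel reduces to the low-passed outer view, which is what makes the stream viewable as an ordinary (backward-compatible) image.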
Abstract: An implementation of SSP using variable depth for the vectors extending normal to the surface voxels of the brain so as to avoid white matter uptake extraction is provided. The implementation also provides the possibility to compare SSP for an individual amyloid imaging agent image to a SSP normal database and allows for 3D visualization of SSP information.
Type:
Grant
Filed:
September 28, 2012
Date of Patent:
November 3, 2015
Assignee:
GE Healthcare Limited
Inventors:
Johan Axel Lilja, Nils Lennart Thurfjell, Roger Lundqvist