Abstract: An image processing apparatus includes an image input unit for inputting color image data, a first color conversion unit for converting the input color image data into first image data of the L*a*b* color space, which has independent brightness and chromaticity components, a background detection unit for detecting a background color component from the converted first image data of the L*a*b* color space, a background removing unit for converting the first image data into second image data in which the background color is white, based on a white value conversion parameter derived from a ratio of the background color corresponding to the background component to a prescribed reference white color in the L*a*b* color space, a second color conversion unit for converting the second image data of the L*a*b* color space into third image data of a prescribed color space, and an output unit for outputting the converted third image data of the prescribed color space.
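A minimal sketch of the background-removal step described above, under loudly stated assumptions: the background is detected as the most frequent quantized L*a*b* value (a heuristic of mine, not the patent's), and the white value conversion parameter is taken to be the ratio of the reference white L* to the background L*. All names are illustrative.

```python
# Hypothetical sketch, not the patented implementation: scale each L*a*b*
# pixel so the detected background color maps onto a reference white.
from collections import Counter

REFERENCE_WHITE = (100.0, 0.0, 0.0)  # L* = 100, a* = b* = 0

def detect_background(lab_pixels):
    """Assumed heuristic: the most frequent quantized L*a*b* value is the background."""
    quantized = [tuple(round(c) for c in p) for p in lab_pixels]
    return Counter(quantized).most_common(1)[0][0]

def remove_background(lab_pixels):
    bg = detect_background(lab_pixels)
    # White value conversion parameter: ratio of reference white L* to background L*.
    scale = REFERENCE_WHITE[0] / bg[0] if bg[0] else 1.0
    # Brightness is scaled; chromaticity components pass through unchanged.
    return [(min(L * scale, 100.0), a, b) for L, a, b in lab_pixels]
```

Because brightness and chromaticity are independent in L*a*b*, only L* needs rescaling to push the background to white, which is presumably why the patent converts out of RGB first.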
Abstract: A method on a computing device for categorizing one or more blocks of an image is disclosed. The method includes computing a membership value of each of the one or more blocks for each of one or more categories based on a set of parameters associated with each of the one or more blocks. The one or more categories comprise at least an image category. Each of the one or more blocks is categorized in the one or more categories based on the membership value. A category of at least one block is modified to the image category based on a reference signal and the membership value such that the number of blocks categorized under the image category increases.
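The membership-and-promotion flow above can be sketched as follows. The membership function, the "variance" parameter, the two categories, and the promotion threshold are all illustrative assumptions; the abstract does not specify them.

```python
# Toy sketch of block categorization with a promotion step: each block gets a
# membership value per category, is assigned the highest-scoring category,
# and borderline blocks are recategorized as "image" when a reference signal
# is present -- so the image-category count can only increase.

def membership(block, category):
    # Assumed toy membership: high-variance blocks lean "image".
    var = block["variance"]
    return var / 100.0 if category == "image" else 1.0 - var / 100.0

def categorize(blocks, reference_signal, promote_threshold=0.3):
    categories = []
    for blk in blocks:
        scores = {c: membership(blk, c) for c in ("image", "text")}
        cat = max(scores, key=scores.get)
        # Promotion: with the reference signal, blocks whose image membership
        # is non-trivial are moved into the image category.
        if cat != "image" and reference_signal and scores["image"] >= promote_threshold:
            cat = "image"
        categories.append(cat)
    return categories
```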
Abstract: A method for identifying a person using a mobile communication device having a camera unit adapted for recording three-dimensional (3D) images, by recording a 3D image of the person's face using the camera unit, performing face recognition on the 2D image data in the recorded 3D image to determine at least two facial points on the 3D image of the person's face, determining a first distance between the at least two facial points in the 2D image data, determining a second distance between the at least two facial points using the depth data of the recorded 3D image, determining a third distance between the at least two facial points using the first distance and the second distance, and identifying the person by comparing the determined third distance to stored distances in a database, wherein each of the stored distances is associated with a person.
Type:
Grant
Filed:
May 23, 2013
Date of Patent:
July 21, 2015
Assignees:
Sony Corporation, Sony Mobile Communications AB
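One plausible reading of the distance combination in the abstract above, offered purely as an assumption: the first distance lies in the image plane and the second is the depth separation, so the third (true 3D) distance follows from the Pythagorean theorem. The matching logic and tolerance are likewise illustrative.

```python
# Hypothetical sketch: combine the in-plane (2D) and depth separations into
# a 3D distance, then match it against stored per-person distances.
import math

def third_distance(first_2d, second_depth):
    """Assumed orthogonal combination: sqrt(first^2 + second^2)."""
    return math.hypot(first_2d, second_depth)

def identify(third, database, tolerance=1.0):
    """Return the person whose stored distance is nearest, within a tolerance."""
    best = min(database, key=lambda name: abs(database[name] - third))
    return best if abs(database[best] - third) <= tolerance else None
```

Using the depth channel this way makes the measured distance invariant to head rotation toward or away from the camera, which a purely 2D distance is not.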
Abstract: A computer implemented method and apparatus for automatically identifying a representative image for an image group. The method comprises dividing an image group into one or more clusters based on an average time gap of the image group, wherein the images in the image group are in sequential timestamp order, and wherein the average time gap is calculated using a time span from the timestamp of the first image in the image group to the timestamp of the last image in the image group; recursively dividing a largest cluster in the one or more clusters to determine a resultant cluster, wherein the resultant cluster comprises no time gaps larger than the average time gap of the resultant cluster; and identifying a representative image from the resultant cluster as the representative image for the image group.
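The clustering recursion above can be sketched directly on timestamps. The helper names and the choice of the middle image as representative are my assumptions; the abstract only requires some image from the resultant cluster.

```python
# Sketch of recursive time-gap clustering: split a timestamp-sorted group
# wherever a gap exceeds the average gap, then keep splitting the largest
# cluster until it contains no gap larger than its own average gap.

def average_gap(timestamps):
    # Time span from first to last image, divided by the number of gaps.
    return (timestamps[-1] - timestamps[0]) / (len(timestamps) - 1)

def split_on_gaps(timestamps, threshold):
    clusters, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > threshold:
            clusters.append(current)
            current = []
        current.append(t)
    clusters.append(current)
    return clusters

def representative(timestamps):
    clusters = split_on_gaps(timestamps, average_gap(timestamps))
    largest = max(clusters, key=len)
    # Recurse while the largest cluster still has an over-average gap.
    while len(largest) > 1:
        avg = average_gap(largest)
        if max(b - a for a, b in zip(largest, largest[1:])) <= avg:
            break
        largest = max(split_on_gaps(largest, avg), key=len)
    # Assumed choice: the middle image of the resultant cluster.
    return largest[len(largest) // 2]
```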
Abstract: Disclosed is a computer implemented method for fully automated tissue diagnosis that trains a region of interest (ROI) classifier in a supervised manner, wherein labels are given only at a tissue level, the training using a multiple-instance learning variant of backpropagation, and trains a tissue classifier that uses the output of the ROI classifier. For a given tissue, the method finds ROIs, extracts feature vectors in each ROI, applies the ROI classifier to each feature vector thereby obtaining a set of probabilities, provides the probabilities to the tissue classifier and outputs a final diagnosis for the whole tissue.
Type:
Grant
Filed:
March 26, 2013
Date of Patent:
June 23, 2015
Assignee:
NEC Laboratories America, Inc.
Inventors:
Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf
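The two-stage inference path in the abstract above can be sketched as follows. The classifiers here are toy stand-ins: the patent trains them with a multiple-instance variant of backpropagation, which is not reproduced. The "any confident ROI makes the tissue positive" rule is an assumption consistent with multiple-instance learning, not the patent's stated tissue classifier.

```python
# Hypothetical sketch of ROI-then-tissue inference for whole-tissue diagnosis.

def roi_classifier(feature_vector):
    # Toy stand-in: probability grows with the mean feature value.
    m = sum(feature_vector) / len(feature_vector)
    return max(0.0, min(1.0, m))

def tissue_classifier(roi_probabilities):
    # Toy multiple-instance rule: tissue is positive if any ROI is confident.
    return max(roi_probabilities) >= 0.5

def diagnose(rois):
    """Find ROI probabilities, then output one diagnosis for the whole tissue."""
    probs = [roi_classifier(fv) for fv in rois]
    return tissue_classifier(probs)
```

The appeal of this structure is that labels are needed only per tissue, never per ROI, even though the first-stage classifier operates on ROIs.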
Abstract: A character recognition apparatus includes an extracting unit extracting a feature point for a line in a handwritten character, first and second generation units, a learning unit, and a determination unit. The first generation unit generates first feature data from feature points for lines including an in-same-character line (first line) and being selected from lines in character-code-specified handwritten characters (known lines). The second generation unit generates second feature data from feature points for lines including an after-character-transition line (second line) and being selected from known lines. The learning unit causes a discriminator to learn classifications for first and second lines based on the first and second feature data. The determination unit determines whether each line in character-code-unknown handwritten characters is a first or second line, based on which classification is determined by the discriminator for feature data for the line.
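A heavily simplified sketch of the determination step above, with everything assumed: a real discriminator is learned from the first and second feature data, whereas this stand-in classifies a stroke as in-same-character or after-character-transition from a single hand-picked feature, the horizontal jump from the previous stroke's end point.

```python
# Toy stand-in discriminator, not the learned one from the abstract:
# a large horizontal jump between strokes suggests a character transition.

def stroke_feature(prev_end, start):
    return start[0] - prev_end[0]  # horizontal displacement between strokes

def classify_stroke(prev_end, start, jump_threshold=20.0):
    if stroke_feature(prev_end, start) > jump_threshold:
        return "after-transition"   # the "second line" classification
    return "same-character"         # the "first line" classification
```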
Abstract: A method for detecting biometric characteristics in a captured biometric data image is provided that includes determining, by a processor, an approximate location for a biometric characteristic in a frame included in captured biometric data, and determining region of interest positions over the frame. Moreover, the method includes calculating a set of feature values for each position, generating a displacement for each set of feature values and generating a median displacement, and adjusting the biometric characteristic location by the median displacement.
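The final adjustment step above can be sketched as below. Feature extraction and displacement generation are elided; the displacements are given directly, and the coordinate convention is an assumption.

```python
# Sketch of the median-displacement adjustment: taking the median over the
# per-position displacements is robust to outlier feature matches.
import statistics

def adjust_location(location, displacements):
    dx = statistics.median(d[0] for d in displacements)
    dy = statistics.median(d[1] for d in displacements)
    return (location[0] + dx, location[1] + dy)
```

A mean would be dragged far off by a single bad match (e.g. the 100-pixel outlier in the test case), while the median stays near the consensus displacement.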
Abstract: Provided are systems and methods for detecting blood alcohol level. The system for detecting blood alcohol level comprises a receiver configured to receive an input video of an eye of a user and a processor configured to: stabilize the input video; analyze the input video; based on the analysis, detect a horizontal gaze nystagmus level; and based on the horizontal gaze nystagmus level, determine an equivalent blood alcohol level of the user. The system outputs data associated with the equivalent blood alcohol level via an interface. Additionally, the system comprises a screen configured to display a moving object. The input video captures eye movements of the user following the moving object. To illuminate the eye of the user, the system may generate red light.
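The final mapping step can be sketched as below, with a strong caveat: the linear model relating nystagmus onset angle to blood alcohol level is illustrative only, not the patent's model and not a validated medical formula.

```python
# Illustrative toy model only (assumption): earlier horizontal gaze
# nystagmus onset (a smaller angle from center) maps to a higher
# equivalent blood alcohol level; no nystagmus by 50 degrees maps to 0.0.

def estimate_bac(onset_angle_deg):
    return max(0.0, (50.0 - onset_angle_deg) / 100.0)
```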