Abstract: The present invention enables permanent biometric authentication without risk of forgery or the like, and enables living-tissue discrimination as well as biometric authentication. The roughness distribution pattern of deep-layer skin tissue covered with epidermal tissue is detected, thereby extracting a pattern unique to the living tissue, and biometric authentication is performed based upon the detected pattern. The roughness distribution pattern of the deep-layer tissue is optically detected using the difference in optical properties between the epidermal tissue and the deep-layer tissue of the skin. In this case, long-wavelength light, e.g., near-infrared light, is used as the illumination light cast onto the skin tissue. A fork structure of a subcutaneous blood vessel is used as the portion to be detected, for example, and that portion is determined based upon the geometry of the fork structure.
Abstract: A method performed by a mobile terminal may include receiving an image that includes text, translating the text into another language and superimposing and displaying the translated text over the received image.
Abstract: A method is disclosed for noise reduction in images with locally different and directional noise, in particular for noise reduction in image data records of computed tomography. In at least one embodiment of the method, two image data records of an identical object region that have mutually independent noise are provided. The two image data records are decomposed by a discrete wavelet transformation into a number of frequency bands, detailed images having high frequency structures being obtained in at least two different directions. Noise images in the respective frequency bands and directions are obtained by subtracting a wavelet coefficient of the two input images. These noise images are used to estimate noise locally and as a function of direction, and on the basis of this estimate local threshold values are calculated and applied to the averaged wavelet coefficients of the input images. A result image with reduced noise is obtained after an inverse wavelet transformation.
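The two-image wavelet denoising described in this abstract can be illustrated with a minimal one-dimensional sketch. The real method uses a 2-D discrete wavelet transform with directional detail bands; here a single-level Haar transform stands in for it, and the function names and the threshold factor `k` are my own assumptions, not the patented algorithm.

```python
# 1-D Haar analogue of the described method: two noisy measurements of the
# same signal are decomposed, the difference of their detail coefficients
# estimates the local (independent) noise, and that estimate sets a
# soft threshold applied to the averaged coefficients.

def haar_forward(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x

def denoise_pair(img1, img2, k=1.5):
    a1, d1 = haar_forward(img1)
    a2, d2 = haar_forward(img2)
    # Subtracting coefficients of the two inputs isolates the noise.
    noise = [abs(c1 - c2) / 2 for c1, c2 in zip(d1, d2)]
    thresh = [k * n for n in noise]
    avg_a = [(x + y) / 2 for x, y in zip(a1, a2)]
    avg_d = [(x + y) / 2 for x, y in zip(d1, d2)]
    # Soft-threshold the averaged detail coefficients with the local estimate.
    kept = [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c, t in zip(avg_d, thresh)]
    return haar_inverse(avg_a, kept)
```

Where the two inputs disagree, the local threshold grows and the averaged detail coefficient is suppressed, which is the core of the locally adaptive behaviour the abstract describes.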
Abstract: An image compression and decompression method compresses data based upon the data states, and decompresses the compressed data based upon the codes generated during the compression.
Abstract: The present invention relates to a method and system for a multimodal biometric system that generates hand shape and palmprint features from a single image. The invention utilizes a digital camera and incorporates feature subset selection algorithms to eliminate redundant data. Through the use of a feature selection algorithm, the invention fuses the hand shape features and palmprint features at the feature level.
Abstract: Non-adjacent rows (i) in the visibility matrix (Msr) that have a large number of common elements are automatically detected (E7), and any such detected rows are permuted to place them in sequence, forming a modified visibility matrix (M′sr). Digital image coding is then applied (E8) to the Boolean elements of the modified visibility matrix (M′sr).
Type:
Grant
Filed:
August 19, 2005
Date of Patent:
March 6, 2012
Assignee:
France Telecom
Inventors:
Isabelle Marchal, Christian Bouville, Loïc Bouget
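The row-permutation idea in the abstract above can be sketched with a simple greedy ordering: starting from the first row, repeatedly append the remaining row that shares the most common elements (equal Boolean entries) with the current one, so that similar rows become adjacent before coding. This greedy heuristic is an illustrative assumption, not the patented detection step, and the function names are my own.

```python
def common_elements(r1, r2):
    """Count positions where two Boolean rows agree."""
    return sum(1 for a, b in zip(r1, r2) if a == b)

def reorder_rows(matrix):
    """Greedily order rows so consecutive rows share many elements."""
    remaining = list(range(len(matrix)))
    order = [remaining.pop(0)]
    while remaining:
        last = matrix[order[-1]]
        best = max(remaining, key=lambda i: common_elements(last, matrix[i]))
        remaining.remove(best)
        order.append(best)
    return [matrix[i] for i in order]
```

Placing near-identical rows next to each other makes the subsequent digital image coding of the Boolean matrix far more compressible, which is the motivation stated in the abstract.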
Abstract: A compact authentication device that prevents the user from feeling pressure and is robust against external light when capturing an image of a finger blood vessel pattern with transmitted light. The device includes a guidance part for determining the finger position, a light source disposed on at least one side of the guidance part to emit light to be transmitted through the finger, an image capture part for capturing the transmitted light, a shading unit for limiting an irradiation region of the light, a finger thickness measuring unit, a unit for controlling a light amount of the light source based on a result of the measurement, a unit for recording registered image patterns of the finger, a unit for collating a captured image pattern from the image capture part with the registered patterns, and a unit for controlling different processing according to the collation result.
Abstract: In embodiments consistent with the subject matter of this disclosure, a user may input one or more strokes as digital ink to a processing device. The processing device may produce and present a recognition result, which may include a misrecognized portion. A user may indicate a desire to correct the misrecognized portion and may further select one or more strokes of the misrecognized portion. The processing device may then present one or more recognition alternates corresponding to the selected one or more strokes of the misrecognized portion. In some embodiments, the processing device may permit a user to rewrite the selected one or more strokes of the misrecognized portion with newly entered digital ink. Features such as rewriting and correction of the input digital ink may be discoverable in some embodiments.
Type:
Grant
Filed:
April 19, 2007
Date of Patent:
February 14, 2012
Assignee:
Microsoft Corporation
Inventors:
Milan Vukosavljevic, Bodin Dresevic, Dejan Ivkovic, Goran Predovic
Abstract: Color is edited using a color representation including digital values B (brightness), e and f such that B=√(D²+E²+F²), e=E/B, f=F/B, where DEF is a linear color coordinate system. Alternatively, color is represented using digital values B, C (chroma) and H (hue), where cos C=D/B and tan H=E/F. Brightness can be changed without a color shift by changing the B coordinate and leaving the other coordinates (e and f, or C and H) unchanged. Other features are also provided. Brightness coding methods are provided to reduce the size of image data for storage and/or network transmission. The coding methods include logarithmic coding. Some embodiments use logarithmic or linear coding depending on the brightness at a particular pixel.
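The Bef representation above can be sketched directly from its defining formulas. This is a minimal illustration of the forward and inverse conversion only; the function names are my own, and the inverse assumes a valid color with D ≥ 0.

```python
import math

# Bef representation: B carries brightness, e and f carry the chromatic
# direction. Scaling B alone rescales D, E, F uniformly, so brightness
# changes without a color shift, as the abstract states.

def to_bef(d, e_coord, f_coord):
    """DEF -> (B, e, f) with B = sqrt(D^2+E^2+F^2), e = E/B, f = F/B."""
    b = math.sqrt(d * d + e_coord * e_coord + f_coord * f_coord)
    return b, e_coord / b, f_coord / b

def from_bef(b, e, f):
    """(B, e, f) -> DEF: E = e*B, F = f*B, D = sqrt(B^2 - E^2 - F^2)."""
    e_coord = e * b
    f_coord = f * b
    d = math.sqrt(max(b * b - e_coord ** 2 - f_coord ** 2, 0.0))
    return d, e_coord, f_coord
```

For example, editing brightness is then just `(2 * b, e, f)`: the e and f coordinates are untouched, so hue and saturation are preserved.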
Abstract: An input pattern feature amount is decomposed into element vectors. For each element vector, a discriminant matrix obtained by discriminant analysis is prepared in advance. Each element vector is projected into the discriminant space defined by its discriminant matrix, and its dimensions are compressed. The feature vector so obtained is projected again by the discriminant matrix to calculate the final feature vector, thereby suppressing loss of the feature amount effective for discrimination and performing effective feature extraction.
Abstract: An image processing apparatus includes display means for displaying a zoom image resulting from enlargement of a certain area in an original image to a zoom area; important-object determining means for determining whether an important object crosses the boundary line (the outermost circumference of the zoom area) by checking whether the absolute difference in pixel value between a pixel on the boundary line and the adjoining pixel lying perpendicular to the boundary line and outward from the zoom area is lower than or equal to a predetermined threshold value; and resetting means for resetting a zoom parameter used for determining the zoom area if the important-object determining means determines that an important object crosses the boundary line.
Abstract: Methods and systems for creating three-dimensional models from two-dimensional images are provided. According to one embodiment, a computer-implemented method of creating a polygon-based three-dimensional (3D) model from a two-dimensional (2D) pixel-based image involves creating an inflatable polygon-based 3D image and extruding the inflatable polygon-based 3D image. The inflatable polygon-based 3D image is created based on a 2D pixel-based input image by representing pixels making up the 2D pixel-based input image as polygons. The inflatable polygon-based 3D image is extruded by generating z-coordinate values for reference points associated with the polygons based upon a biased diffusion process.
Abstract: Natural images which are similar to each other contained in a page represented by page description data are corrected so that they have natural appearance to the eye. To achieve this object, an image recognizing unit recognizes images in a page represented by page description data, and a natural image determining unit determines whether or not each recognized image is a natural image. An image analyzing unit calculates a setup condition for image correction for each natural image. A second correction condition calculating unit calculates, for the similar natural images being similar to each other, a correction condition for making image qualities of the similar natural images substantially uniform. An image correcting unit applies image correction based on the setup condition and the correction condition to the similar natural images.
Abstract: Pixels of a binary image obtained by binarizing an image are scanned in a predetermined direction, and labels are assigned to the pixels according to binarization information about the respective pixels. Information about the assigned labels is stored sequentially for each of a plurality of lines along the predetermined direction, and information about the coordinate values in the binary image of pixels assigned the same label is stored. A determination is made as to whether, in a current line among the plurality of lines, there is a pixel assigned the same label as a label assigned to a pixel contained in the line scanned immediately before the current line. When it is determined that there is no such pixel, a feature point in a connected component formed by connecting together the pixels specified by the coordinate values is calculated based on the stored information about the coordinate values, and a feature vector representing a feature of the image is calculated based on the calculated feature point.
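The scanline labeling and feature-point extraction described above can be sketched as follows. This is a simplified stand-in: the patent streams label and coordinate bookkeeping line by line, whereas this sketch assumes the whole binary image is available at once, uses 4-connectivity with a small union-find, and takes the centroid as the feature point; all of those choices are my own assumptions.

```python
def label_components(image):
    """Return {label: [(row, col), ...]} for 4-connected foreground pixels."""
    labels = {}   # pixel coordinate -> provisional label
    parent = {}   # union-find over labels

    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]  # path halving
            l = parent[l]
        return l

    next_label = 0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if not v:
                continue
            up = labels.get((r - 1, c))
            left = labels.get((r, c - 1))
            if up is None and left is None:
                parent[next_label] = next_label
                labels[(r, c)] = next_label
                next_label += 1
            elif up is None or left is None:
                labels[(r, c)] = up if left is None else left
            else:
                labels[(r, c)] = find(up)
                parent[find(left)] = find(up)  # merge the two runs

    comps = {}
    for (r, c), l in labels.items():
        comps.setdefault(find(l), []).append((r, c))
    return comps

def centroids(image):
    """One feature point per connected component: the centroid of its pixels."""
    return sorted(
        (sum(r for r, _ in px) / len(px), sum(c for _, c in px) / len(px))
        for px in label_components(image).values()
    )
```

When a label no longer appears on the current line, its component is complete and its feature point can be emitted, which is the streaming trigger the abstract describes.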
Abstract: Disclosed is an image processing apparatus including a tone converting section to convert a tone level of a target pixel in multi-level image data based on a threshold value of the tone level so that the number of tone levels is reduced; a resolution converting section to output a pixel block according to the tone level, to generate image data with higher resolution; and an error diffusing section to diffuse an error; wherein, when the converted tone level is a predetermined value or lower, the resolution converting section refers to an output sequence category of a black pixel in a pixel block of a surrounding pixel, selects an output sequence pattern belonging to an output sequence category which allows black pixels in the pixel blocks of the target pixel and the surrounding pixel to be concentrated, and outputs a pixel block corresponding to the selected pattern.
Type:
Grant
Filed:
August 15, 2007
Date of Patent:
January 31, 2012
Assignee:
Konica Minolta Business Technologies, Inc.
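The tone conversion with error diffusion in the abstract above can be shown in a one-dimensional sketch: each 8-bit pixel is thresholded to 0 or 255 and the quantization error is carried to the next pixel. The resolution-converting step (emitting a high-resolution pixel block per tone level) is omitted for brevity, and the function name and threshold default are my own assumptions.

```python
def diffuse_row(pixels, threshold=128):
    """Binarize a row of 8-bit values, diffusing the error to the next pixel."""
    out = []
    error = 0.0
    for p in pixels:
        value = p + error                  # incoming value plus carried error
        q = 255 if value >= threshold else 0
        error = value - q                  # quantization error to carry forward
        out.append(q)
    return out
```

Because the error is carried forward, mid-gray input produces an alternating black/white pattern whose average matches the input tone, which is the purpose of the error diffusing section.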
Abstract: A multi-dimensional data enhancement system uses large kernel filtering, decimation, and interpolation, in multi-dimensions to enhance the multi-dimensional data in real-time. The multi-dimensional data enhancement system is capable of performing large kernel processing in real-time because the required processing overhead is significantly reduced. The reduction in processing overhead is achieved through the use of low pass filtering and decimation that reduces the amount of data that needs to be processed in order to generate an unsharp mask comprising low spatial frequencies that can be used to process the data in a more natural way.
Type:
Grant
Filed:
February 15, 2010
Date of Patent:
January 31, 2012
Assignee:
Z Microsystems Visualization Technologies, LLC
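The decimated unsharp-mask idea in the abstract above can be sketched in one dimension: a large-kernel low-pass result is approximated cheaply by smoothing, decimating, and linearly interpolating back up, and the enhanced signal is `input + gain * (input - low_pass)`. The patent operates on multi-dimensional data in real time; this pure-Python 1-D version, with its small box filter and function names, is purely illustrative.

```python
def box_smooth(x):
    """3-tap box filter with edge clamping."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def decimate(x, step=2):
    """Keep every step-th sample; this is where the cost saving comes from."""
    return x[::step]

def interpolate(x, n):
    """Linear interpolation of len(x) samples back up to n samples."""
    out = []
    for i in range(n):
        pos = i * (len(x) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(x) - 1)
        frac = pos - lo
        out.append(x[lo] * (1 - frac) + x[hi] * frac)
    return out

def enhance(x, gain=1.0):
    """Unsharp masking against a decimated-and-restored low-pass estimate."""
    low = interpolate(decimate(box_smooth(x)), len(x))
    return [v + gain * (v - l) for v, l in zip(x, low)]
```

Flat regions pass through unchanged while transitions are amplified; because the low-pass estimate is computed on decimated data, the effective kernel is large relative to the work performed, which is the processing-overhead reduction the abstract claims.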
Abstract: Disclosed is a technique that eliminates problems arising when a face image fails to be detected while continuously obtained images of a subject are subjected to face-image detection processing. A face-image portion is detected in the image of the subject. If an evaluation value for evaluating the degree of face likeliness of the face-image portion is equal to or greater than a threshold value, the result of face detection is updated and a timer is set. If the evaluation value of the face image in the next frame is less than the threshold value but the timer has not yet timed out, the face-image portion of the preceding frame is regarded as the face-image portion of the next frame and processing regarding this face-image portion is executed. Thus, even if a face-image portion is no longer detected, processing regarding a face-image portion can be executed using the face-image portion of the preceding frame.
Abstract: An iris recognition system having pupil and iris border conditioning prior to iris mapping and analysis. The system may obtain and filter an image of an eye. A pupil of the image may be selected and segmented. Portions of the pupil border can be evaluated and pruned. A curve may be fitted on at least the invalid portions of the pupil border. The iris of the eye with an acceptable border of the pupil as an inside border of the iris may be selected from the image. The iris outside border having sclera and eyelash/lid boundaries may be grouped using a cluster angular range based on eye symmetry. The sclera boundaries may be fitted with a curve. The eyelash/lid boundaries may be extracted or masked. The iris may be segmented, mapped and analyzed.
Abstract: Provided is an improved apparatus and method for recognizing pattern data. The method includes extracting a high frequency component together with Y data from pattern data sensed through a camera equipped in a mobile station, to more clearly recognize edge portions. The high frequency component and the Y data are weighted with predetermined weights, and input data is generated using the weighted high frequency component and Y data. Accordingly, edge portions of the input data are more clearly defined, thereby increasing the recognition rate of the pattern data.
Abstract: A system and method for labeling feature clusters in frames of image data for optical navigation uses distances between feature clusters in a current frame of image data and feature clusters in a previous frame of image data. The feature clusters in the current frame are labeled using the identifiers associated with the feature clusters in the previous frame that have been correlated with the feature clusters in the current frame.
Type:
Grant
Filed:
April 17, 2007
Date of Patent:
January 10, 2012
Assignee:
Avago Technologies ECBU IP (Singapore) Pte. Ltd.
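The cross-frame labeling in the abstract above amounts to distance-based correlation, which can be sketched as a nearest-neighbour assignment: each cluster in the current frame inherits the identifier of the closest cluster in the previous frame. The tie-breaking, the lack of handling for newly appearing clusters, and the function name are all simplifying assumptions of this sketch.

```python
def label_clusters(prev, current):
    """prev: {identifier: (x, y)}; current: list of (x, y) cluster centers.

    Returns {identifier: (x, y)} where each current cluster takes the
    identifier of the nearest previous-frame cluster.
    """
    def dist2(a, b):
        # Squared Euclidean distance; no sqrt needed for comparison.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    labeled = {}
    for c in current:
        best_id = min(prev, key=lambda i: dist2(prev[i], c))
        labeled[best_id] = c
    return labeled
```

Keeping identifiers stable across frames is what lets the optical navigation system track cluster motion over time rather than re-identifying features from scratch each frame.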