HAND-BASED BIOMETRIC ANALYSIS

Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses common terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

DESCRIPTION
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 60/814,163, filed Jun. 16, 2006, the disclosure of which is hereby incorporated by reference.

GOVERNMENT SUPPORT

The invention described in this patent application was made in part by government support under NASA Grant #NCC5-583. The United States Government may have rights in this invention.

BACKGROUND

Existing techniques for biometrics-based authentication systems include methods that employ recognition of various biometric tokens including, for example, fingerprint, face, hand, and iris recognition. Various biometric choices have strengths and weaknesses depending on their systems' applications and requirements. The present discussion focuses on hand-based biometric analysis. The geometry of the hand contains relatively invariant features of an individual. In existing systems, hand-based authentication is sometimes employed in small-scale person authentication applications due to the fact that geometric features of the hand (e.g., finger length/width, area/size of the palm) are not as distinctive as fingerprint or iris features.

However, existing techniques and systems rely on outmoded or inconvenient processes in order to increase identification accuracy. Among these are strict requirements about hand orientation. Existing systems go as far as to require the use of physical pegs or guides to direct hand orientation during image capture in order to allow assumptions to be made during analysis which simplify computational requirements. Such requirements are undesirable from a user's perspective, however, because they make use of such a system cumbersome and potentially uncomfortable. Additionally, adding physical restrictions to a system increases the likelihood that the system will require special, costly equipment.

Another limitation of existing techniques is the method by which they perform recognition of an image of a hand. Many existing systems and techniques focus on extraction of several landmark points on the surface or silhouette of the hand in order to identify the shape. This point extraction is not easily performed and is frequently prone to localization errors. Such errors can, by altering the very shape and borders of the segments created, substantially increase the difficulty of performing verification or identification. Additionally, existing systems and techniques require the recognition of lines or prints on the hand or fingers in order to perform analysis. Such identification is more prone to error, both from acquisition mistakes and from inconsistencies in hand appearance from day to day.

What is needed is a system that can perform biometric analysis for identification and/or verification which does not require restrictions on hand placement to the extensive degree used in existing systems. Additionally, what is needed are techniques for performing such biometric analysis that are robust with regard to changes in placement and changes in points and lines on the hand itself.

SUMMARY

Techniques and systems for performing hand-based biometric analysis are described. In various implementations the techniques and systems will comprise one or more of the following features, either separately or in combination.

The applicants have invented systems and methods for performing hand-based biometric analysis. The systems and methods have a variety of different aspects and these aspects are exhibited in various implementations. In one aspect this biometric analysis is performed for identification of a person based on analysis of the person's hand. In another aspect this biometric analysis is performed for verification of a person's identity based on analysis of the person's hand.

In some embodiments, an orientation-independent analysis of an image of a hand is described. In another aspect, this orientation-independent analysis includes computation of Zernike moments.

Certain embodiments acquire an image of a hand without need to extract landmark points, use pegs or require other orientation restrictions. In one aspect, the use of Zernike moments for analysis provides for rotation-invariant analysis, lessening the need for restrictions on hand placement and orientation.

In some embodiments, hand images are acquired through the use of a lighting table and a camera. In one aspect, images are acquired without the use of equipment which is particular to hand verification and identification. In another aspect, images are made into silhouettes before analysis.

In some embodiments, an order value for Zernike moments is chosen to increase accuracy while allowing for efficient computation. In one aspect, this order is chosen through experimental analysis of known images.

In some embodiments, efficient computation of Zernike moments may re-use stored common terms during computation, which can reduce computation time in certain implementations. In another aspect, efficient computation of Zernike moments may employ the use of a pre-determined lookup table of computed terms used in Zernike moment calculations.

In some embodiments, the use of arbitrary-precision arithmetic for computation of Zernike moments can increase analysis accuracy. Hybrid computations may be used by, for example, combining arbitrary-precision arithmetic with arithmetic of another precision, such as double-precision. Doing so can increase computational efficiencies in some applications.

In some embodiments, segmentation of a hand image can increase computational efficiency and analysis accuracy. In some implementations, an image of a forearm can be segmented and removed from an image of a hand. In certain embodiments, an image of a hand may be segmented into separate palm and finger images. Each of the palm and finger images may be separately analyzed using Zernike moments to increase recognition accuracy. In some applications, finger segments are cleaned before analysis to avoid artifacts from segmentation.

In some embodiments, feature parameters, including Zernike descriptors, of different parts of the hand are algebraically fused into a feature vector for storage and comparison (a process also known as feature-level fusion).

In certain implementations, a metric can be chosen to be used in comparing feature vectors. In some implementations, a simple Euclidean distance may be utilized as such a metric.

In some embodiments, matching scores can be obtained by comparing corresponding feature parameters, including Zernike descriptors, of different parts of a hand to stored feature parameters for known hands. In some embodiments, these scores can be algebraically fused for comparison (a process also known as score-level fusion). In some embodiments, such a fusion can be performed through the use of a weighted summation. In some embodiments, a statistical classifier may be used to fuse the scores. In yet another aspect, a support vector machine is used to map scores into positive or negative identifiers.

In some embodiments, the outputs of several comparisons between different segments can be considered as votes in a majority-vote score generation process.

In certain applications a system for biometric analysis comprises modules for image acquisition and segmentation, for image analysis, for storing feature parameters for hand images which have previously been received by the system, and for comparing stored feature parameters to parameters gained from newly-entered hand images. In some applications feature parameters of hand images which are submitted for identification or verification can be stored if, for example, they are identified as belonging to a person who had feature parameters already stored in the system.

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

In addition, the foregoing is a brief explanation of background and examples or features of the invention or certain embodiments of the invention. It is to be understood that all embodiments of the invention do not necessarily address all issues noted in the examples above or include all features or advantages noted in the summary and detailed description.

Additional features and advantages will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating an overview of the hand-based biometric analysis techniques described herein.

FIG. 2 is a flowchart illustrating an example process for performing hand-based biometric analysis.

FIG. 3 is a block diagram illustrating an example system for performing hand-based biometric analysis.

FIG. 4 is a flowchart illustrating an example process for acquiring and processing an image in the hand-based biometric analysis techniques described herein.

FIGS. 5(a)-(c) are pictorial examples of image acquisition for the hand-based biometric analysis techniques described herein.

FIG. 6 is a flowchart illustrating an example process for segmenting an image of a hand and forearm according to the hand-based biometric analysis techniques described herein.

FIGS. 7(a)-(c) are examples of the segmentation process of FIG. 6.

FIGS. 8(a)-(d) are examples of finger movement observed in acquired hand images.

FIG. 9 is a flowchart illustrating an example process for segmenting an image of fingers and a palm according to the hand-based biometric analysis techniques described herein.

FIGS. 10(a)-(d) are examples of the segmentation process of FIG. 9.

FIG. 11 is an example of points on a hand where smoothing of finger image segments is desirable.

FIGS. 12(a)-(b) are examples of finger image segments before and after being smoothed.

FIG. 13 is an example of common terms in a calculation of Zernike moments.

FIG. 14 is a flowchart illustrating an example process for calculating Zernike moments according to the hand-based biometric analysis techniques described herein.

FIG. 15 is an example of reconstruction of an image using Zernike moments of different orders.

FIG. 16(a) is a graph of example errors for reconstructing finger images using Zernike moments of different orders.

FIG. 16(b) is an example of reconstruction of a finger image using Zernike moments of different orders.

FIG. 17(a) is a graph of example errors for reconstructing entire hand images using Zernike moments of different orders.

FIG. 17(b) is an example of reconstruction of an entire hand image using Zernike moments of different orders.

FIG. 18 is a flowchart illustrating an example process for fusing feature parameters using feature-level fusion according to the hand-based biometric analysis techniques described herein.

FIG. 19 is a flowchart illustrating an example process for fusing feature parameters using score-level fusion according to the hand-based biometric analysis techniques described herein.

FIG. 20 is a flowchart illustrating an example process for fusing feature parameters using decision-level fusion according to the hand-based biometric analysis techniques described herein.

FIG. 21 is a block diagram illustrating an example computing environment for performing the biometric analysis techniques described herein.

DETAILED DESCRIPTION

The following description relates to examples of systems and methods for performing hand-based biometric analysis. However, while much of the description herein is directed to performing image processing and analysis for images of hands, this should not be read as a limitation on the techniques and systems described herein. Similarly, although many of the descriptions herein are directed toward verification and/or identification of a person based on an acquired image of a hand, this should not be read as a requirement of all implementations of the systems and techniques described herein. Many of the processes and modules described herein may operate on other types of images, or even arbitrary images, as well as being used for purposes other than biometric analysis.

1. Examples of Hand-Based Biometric Analysis Techniques

The present application presents design and implementation examples for a hand-based biometric analysis system using high-order Zernike moments. In some implementations, the biometric analysis takes the form of hand-based verification or identification. FIG. 1 shows a block diagram illustrating, at a general level, the steps of one implementation of a biometric analysis system according to the techniques described here, with specificity to hand-based verification. In the example system, at block 110 an image of a hand is acquired. Next, the image undergoes preprocessing before feature analysis is performed. Thus, at block 120, the image is binarized to produce a black and white silhouette, followed by a segmentation to separate the hand and arm segments of the image at block 130. Then, at block 140, a further segmentation occurs to segment fingers from the palm of the image. Next, at blocks 150, each of these segments undergoes feature extraction to create Zernike descriptors for each image segment. Then, at block 160, the extracted feature data is fused according to techniques described herein, and a verification decision is made at block 170, utilizing input data from a database of feature data 180.

In various implementations, the analysis and decision-making can be performed by computing Zernike descriptors for each segmented image, and then fusing the descriptors into a feature vector which can then be compared to a database of known trusted feature vectors. Alternatively, each part of the hand can be compared separately to one or more known hand segments to obtain a matching score for that part of the hand; after comparison the matching scores can be fused together to obtain an overall matching score. In another alternative, a majority-vote process can be used to make comparisons of different image segments for verification or identification.

As used herein, the term "verification" for an image of a hand generally refers to the determination, given a subject identifying him or herself as a particular identity and supplying an image of their hand, that the subject is believed by the system to be the particular identity. By contrast, "identification" for an image of a hand generally refers to the system itself choosing a likely identity for a person given an image of their hand. Because no concrete claim of identity is made, identification frequently means comparing an image of a hand to multiple identity records. As such, processes of identification are generally more complex and take more time than verification, and can be thought of as specializations of verification processes. In various implementations, the degree of belief or trust required to achieve a verification or identification may change or be modified by an administrator. Thus, the systems and techniques provided herein allow for arbitrary strengthening or weakening of the biometric analysis techniques; in various implementations modification of the relative strength of these techniques may provide for greater or fewer positive or negative identifications. Additionally, it should be noted that, while the term "biometric" is used frequently herein, the term refers only to the usage of parameters that represent a biological specimen. The term is not intended to be limited to specific measurements, such as length or width of physical hand features. Instead, the term incorporates parameters, such as Zernike moment parameters representing hand shape, which indirectly represent the shape of biological features.

In one example of improvement over existing techniques, certain of the systems and techniques described herein can operate on 2D images acquired without reference to a particular orientation. Thus, in one example, hand images are obtained by placing a hand on a planar lighting table without requiring guidance pegs, which have traditionally been used to orient a hand during image collection to increase identification accuracy. This can improve convenience for the user by allowing a greater degree of freedom in hand orientation during image capture, both saving acquisition time and providing the user with a more comfortable experience. Moreover, certain of the described systems and techniques are able to perform biometric analysis without requiring direct measurement of the hand image. Thus, these systems and techniques can operate without extracting landmarks on the fingers (e.g., finding finger joints or tips or looking for lines on a palm), a process which can be prone to error.

In another example, the use of Zernike moments has been improved in certain implementations. Zernike moments have been employed in a wide range of applications in image analysis and object recognition. They can be, on the surface, quite attractive for representing hand shape information due to having minimal redundancy (i.e., orthogonal basis functions), and being relatively invariant with respect to translation, rotation, and scale, as well as robust to noise. In many existing applications, however, their use in biometric analysis has been limited to low orders only or to small, low-resolution images. This is because high-order Zernike moments traditionally come with high computational requirements and can lack accuracy due to numerical errors. Unfortunately, these low-order moments are frequently insufficient to capture shape details accurately. Although there exist some techniques that rely on approximate polar coordinate transformations, it is difficult to obtain satisfactory results in the context of hand-based verification using these techniques because the approximations involved can negatively affect accuracy.

These difficulties are addressed in certain implementations of the present systems and techniques by a modified computation of Zernike descriptors that recognizes terms which show up repeatedly during computation and evaluates these repeated terms separately. Additionally, certain terms, which can be precomputed, are stored in a lookup table before analysis to save computations. Through these implementations, the present systems and techniques can, if desired, reduce computational complexity while avoiding the error introduction common to existing Zernike computation techniques. Additionally, these techniques can preserve accuracy by avoiding any form of coordinate transformation and by using arbitrary-precision arithmetic. In some implementations, certain of these techniques can provide the ability to utilize various shape descriptors to provide a more powerful representation of hand shape, replacing conventional hand-crafted geometric features.

2. Further Examples of Hand-Based Biometric Analysis Techniques

FIG. 2 shows an exemplary block diagram of a procedure for performing hand-based biometric analysis according to the systems and techniques described herein. In particular, FIG. 2 is a flowchart of an example process 200 for acquiring and analyzing an image of a hand. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. In one implementation, the processes of FIG. 2 are performed by the Hand-Based Biometric Analysis Module 300 of FIG. 3. Various processes of FIG. 2 may be performed by software or hardware modules; examples of such modules are found as sub-modules of module 300. Additionally, various processes described herein may be shared by modules illustrated in FIG. 3 or may be performed by modules which are not illustrated. The process begins at block 210, at which an image of a hand is acquired and processed. In one implementation, this process is performed by the image acquisition system 310 of FIG. 3. Additional detail about the processes of block 210 can be found below.

The process continues to blocks 220 and 230, where the input image goes through a segmentation process. During this process, the image of the combined arm and hand is segmented to isolate the hand image at block 220, while the hand image is further segmented at block 230 into separate finger and palm segments. In one implementation, the arm image is discarded after the process of block 220. In one implementation, the processes of blocks 210-230 are performed by the image segmentation module 320 of FIG. 3. Further detail about the processes of blocks 220 and 230 can be found below.

In an alternative implementation, the finger-palm segmentation procedure of block 230 is not performed. However, although Zernike moments can tolerate certain finger movement (e.g., 6 degrees of rotation about the axis perpendicular to the joint of the finger with the palm), Zernike moments become more sensitive when the fingers move close to each other. Moreover, Zernike moments generally cannot tolerate very well situations where the hand is bent at the wrist. Thus, the finger segmentation process of block 230 can aid in improving both the accuracy and the processing speed of the systems shown here.

Next, at block 240, the system performs feature extraction of the images by computing the Zernike moments of each image segment independently to obtain feature parameters. In one implementation, the processes of block 240 are performed by the image analysis module 340 of FIG. 3. Finally, at block 250, feature parameters, typically in the form of Zernike descriptors obtained by computation of Zernike moments at block 240, are compared to known hand image data to determine if the currently-analyzed hand image is similar to previously-analyzed images. Further detail about the processes of blocks 240 and 250 can be found below. In one implementation, the processes of block 250 are performed by the decision module 360 of FIG. 3 and utilize stored feature parameters kept in the data storage 380. In various implementations, the data storage may also contain additional biometric or identity data, such that identity records can be kept, and/or may contain repeated terms used in Zernike moment computation, as will be explained in detail below. In various implementations, the data storage 380 comprises more-structured storage implementations, such as a database (as in the earlier example), or alternatively may comprise less-structured storage implementations, such as a drive or a disk array.

3. Examples of Image Acquisition

FIG. 4 shows an exemplary block diagram of a procedure for performing hand-based biometric analysis according to the systems and techniques described herein, and in particular, the processes performed by the image acquisition system 310 when performing the processes of block 210. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 410, where the image of a hand is captured. One example of an image acquisition system according to the techniques described herein consists of a VGA resolution CCD camera and a planar lighting table, which provides the surface on which a hand may be placed. An example of such a system can be found in FIG. 5a, in which the direction of the camera is perpendicular to the lighting table. In the example, the camera has been calibrated to remove lens distortion. In the illustrated implementation, the image obtained by the camera is of the back of the hand being analyzed. In some applications, biometric analysis can be performed on such an image without the need to acquire an image of the front of the hand, or of any particular lines or features of the palm or fingers. However, in alternative implementations, the biometric analysis may be augmented through additional image (or other biometric) analysis techniques, such as the analysis of finger or hand prints.

In alternative implementations, both the camera and the lighting table can be placed inside a box to more effectively eliminate light interference from the surrounding environment. However, the depicted implementation, especially when utilized alongside the biometric analysis techniques described herein, provides images of high enough quality to be used in biometric analysis without much effort to control the environment. When the user places his/her hand on the surface of the lighting table, an almost binary, shadow- and noise-free silhouette of the hand is obtained (e.g., the examples shown in FIGS. 5b and 5c). Another alternative implementation uses a flatbed scanner tuned to capture the hand silhouette. Yet another implementation processes the image through a threshold or filter to create a silhouette with a starker contrast to facilitate later analysis.

In one implementation, during the acquisition process users are asked to stretch their hand and place it inside a large rectangular region marked on the surface of the table. This facilitates visibility of the whole hand and avoids perspective distortions. However, while various implementations may utilize broad directions of this kind in order to facilitate analysis, in the illustrated implementation there are no limitations on the orientation of the hand. This can provide an advantage over previous implementations, which typically require the use of pegs or other strict orientation guides.

The images can be captured using a grayscale camera; in another implementation, however, a color CCD camera can be used if available. Thus, in block 420, if the image is taken in color, it is modified to create a grayscale image. One implementation of such a process uses the luminance values of pixels to obtain a grayscale image. For instance, the luminance Yi,j of a pixel (i,j) can be given by Yi,j=0.299Ri,j+0.587Gi,j+0.114Bi,j, where Ri,j, Gi,j, and Bi,j denote the RGB values of the pixel. Next, at block 430, the grayscale image is binarized to create a binary image (e.g., an image containing only black and white pixels), in order to facilitate later analysis. The binary value Bi,j of a pixel can be calculated as

$$B_{i,j} = \begin{cases} 1 & \text{if } Y_{i,j} < T \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

where T is a constant threshold. In one implementation, this threshold is determined experimentally; one exemplary value for the threshold is T=0.5. The resulting silhouette is accurate and consistent due to the design of the image acquisition system. This is useful for the use of high order Zernike moments as Zernike moments can be sensitive to small changes in silhouette shape.
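By way of illustration and not limitation, the following is a minimal sketch of the luminance conversion and thresholding just described, assuming a NumPy RGB image with channel values in [0, 1]; the function name and the default threshold are illustrative.

```python
import numpy as np

def binarize_hand_image(rgb, threshold=0.5):
    """Convert an RGB image (float values in [0, 1]) to a binary hand silhouette.

    Uses the luminance formula given above and the fixed-threshold rule of
    Equation 1; the default of 0.5 matches the exemplary value T = 0.5.
    """
    # Luminance: Y = 0.299 R + 0.587 G + 0.114 B
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # The hand blocks the backlit table, so dark pixels (Y < T) form the silhouette.
    return (y < threshold).astype(np.uint8)
```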

4. Examples of Image Segmentation

As discussed above, after processing of an image, the image segmentation module performs the segmentation of the hand, forearm and fingers. One example segmentation process is summarized as follows. In one implementation, for separating the forearm from the hand, first the palm is detected by finding the largest circle inside the hand/arm silhouette. Then the intersection of the forearm with the circle's boundary and the image boundary is found to segment the hand. In one implementation, in order to segment the fingers and the palm, the fingers are filtered out first using morphological closing; next, the palm is subtracted from the whole silhouette to segment the fingers. The finger image segments are then processed to remove artifacts of the previous segmentation which could affect analysis. Details of these processes follow.

In the examples discussed above, the binary silhouette provided by the acquisition module is the union of the hand and the forearm. The forearm, however, does not have as many distinctive features, and its silhouette at different acquisition sessions is not expected to be the same due to clothing and freedom in hand placement. Thus, the present embodiment removes the forearm segment before continuing image processing.

To segment the forearm, one implementation utilizes an assumption that a user providing a hand image is not wearing very loose clothing on the arm. Under this assumption, the palm appears as a thicker region of the silhouette, which enables the palm's detection through the finding of the largest circle inside the silhouette.

FIG. 6 is a flowchart of an example process 600 for segmenting an acquired arm/hand image to remove the forearm segment. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 610, where the image segmentation module 320 initializes a circular structuring element D with a very large radius R. Next, at block 620, the module 320 applies a closing operator on the image using D. Then, at decision block 625, the module determines if the output is an empty image. If the output is empty, then at block 630 the size of the circle element is reduced (e.g., by setting R:=R−1) and the process returns to block 620. If the output is not empty, then the resulting image from the closing operator should be the largest circle inside the silhouette. One example of an output of this procedure on a sample image is illustrated in FIG. 7b. Thus, at block 640, once the largest circle is found, the arm is segmented by detecting its intersection with the circle and the boundary of the image; the arm portion of the image can then be removed at block 650. FIG. 7c shows the resulting silhouette after discarding the arm region.
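As a non-limiting illustration, the following sketch locates the largest circle inside the silhouette with a Euclidean distance transform, an alternative formulation to the iterative structuring-element search of process 600; the function name is illustrative and SciPy is assumed to be available.

```python
import numpy as np
from scipy import ndimage

def find_palm_circle(silhouette):
    """Find the largest circle that fits inside a binary hand/arm silhouette.

    The distance-transform value of a pixel is its distance to the nearest
    background pixel, so the maximum gives the radius (and center) of the
    largest inscribed circle, i.e., the palm region used to separate the forearm.
    """
    dist = ndimage.distance_transform_edt(silhouette)
    radius = dist.max()
    center = np.unravel_index(np.argmax(dist), dist.shape)   # (row, column) of the center
    return center, radius
```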

Segmentation of finger and palm portions can also be useful to obtain an accurate hand analysis. In one implementation, to support accurate image capture and analysis, users are instructed to stretch their hands in order to avoid touching fingers. However, finger motion is often unavoidable. An example of samples collected from the same user, shown in FIGS. 8(a)-8(d), illustrates this. As can be seen, the angles between fingers can change significantly between different samples. While Zernike moments can tolerate finger motion to some degree, in order to deal with the potential for large finger motion the present embodiment segments the fingers and palm to be processed separately. This segmentation, by providing multiple groups of feature parameters, can allow for more focused optimization (such as by selecting different orders for Zernike moments for different segments), thus increasing potential accuracy.

One example of such segmentation processing is shown in FIG. 9. To perform this segmentation, first, a morphological closing operation based on a circular disk is applied on the input image, as shown in FIG. 10(a). In one implementation of the operation, the radius of the structuring element was experimentally set to 25 pixels (i.e., making it wider than the widest finger typically found in an exemplary database of examples). This closing operation filters out the fingers from the silhouette, as shown in FIGS. 10(b) and 10(c). The remaining part of the silhouette (e.g., FIG. 10(c)) corresponds to the palm, which is then subtracted from the input image to obtain the finger segments, as shown in FIG. 10(d). In another technique, fingers are segmented from a palm by detecting landmark points on the hand, such as fingertips and valleys, as is performed in some traditional hand-based verification techniques. However, these techniques tend to be prone to errors due to inaccuracies in landmark detection and thus are not used in the exemplary implementations. In one implementation, finger regions are individually identified using connected components analysis.
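By way of example only, the following sketch realizes the filtering step described above as an erosion followed by a dilation with a disk wider than a typical finger (the 25-pixel radius mentioned above), then subtracts the palm from the silhouette and labels the remaining finger regions with a connected-components pass; the names and the exact morphological formulation are assumptions of this sketch, not a definitive implementation.

```python
import numpy as np
from scipy import ndimage

def segment_fingers_and_palm(hand, finger_radius=25):
    """Split a binary hand silhouette (forearm already removed) into palm and fingers."""
    hand = hand.astype(bool)

    # Circular structuring element wider than any finger.
    y, x = np.ogrid[-finger_radius:finger_radius + 1, -finger_radius:finger_radius + 1]
    disk = (x * x + y * y) <= finger_radius * finger_radius

    # Eroding with the disk removes the fingers; dilating restores the palm's extent.
    palm = ndimage.binary_dilation(ndimage.binary_erosion(hand, structure=disk),
                                   structure=disk)

    fingers = hand & ~palm                         # silhouette minus palm
    labels, count = ndimage.label(fingers)         # one connected component per finger
    return palm, labels, count
```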

As FIG. 10(d) illustrates, however, due to the segmentation process the segmented fingers often have sharp tails at the locations where they meet the palm. The curvature of the hand contour at these locations is less severe for the little, point, and thumb fingers, as shown in FIG. 11. As a result, because these fingers are not as easily cut off by finger segmentation, for a given segmentation there may be significant differences in the length of the tails corresponding to these fingers. Examples of these differences are shown in FIG. 12(a), where different samples from the same subject are shown. In some cases, especially when the hand is small, there are significant differences in the length of the tails, which can make accurate and efficient computation of Zernike moments difficult.

To remove these tails, thus facilitating later analysis, the process of FIG. 9 continues at block 930, where finger segments are smoothed out by applying an extra morphological closing step. In one implementation, this closing step is performed with a simple 4 by 4 square with values set to one. The benefit of such processing can be observed by comparing the finger image segments of FIG. 12(a) to those of FIG. 12(b). The circles in the figures illustrate the circle enclosing each finger; comparing them makes the amount of tail reduction easy to see. Additional benefits of this smoothing step can be seen in the following table, which illustrates the effect of this process by showing the normalized distances between the circles surrounding pairs of corresponding fingers shown in FIGS. 12(a) and 12(b):

Pair of Fingers    d_before    d_after
Little             0.5904      0.0901
Point              0.7881      0.1135
Thumb              0.7424      0.1253

As the table shows, the application of a smoothing processing step has the potential to improve matching scores considerably by reducing the difference between successive scans of the same finger.
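The following is a hypothetical sketch of the tail-trimming step, using the 4 by 4 square element mentioned above; formulating it as an erosion followed by a dilation, which removes protrusions narrower than the element, is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def trim_finger_tails(finger_mask):
    """Remove the thin tails left on a segmented finger where it met the palm."""
    square = np.ones((4, 4), dtype=bool)
    eroded = ndimage.binary_erosion(finger_mask.astype(bool), structure=square)
    return ndimage.binary_dilation(eroded, structure=square)
```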

5. Examples of Zernike Moment Computation

In various implementations, once various segments have been identified for a hand silhouette, Zernike moments are computed for each of the various segments in order to arrive at a set of Zernike descriptors (such as in block 240 of FIG. 2). It is these descriptors that are used, either as they are created or in a fused form, as feature parameters, which can be stored for future hand-based biometric analysis or compared against previously-stored feature parameters to determine if the hand is the same as a known hand.

Generally, Zernike moments are based on a set of complex polynomials that form a complete orthogonal set over the interior of the unit circle. A Zernike moment for an image is defined as the projection of the image on these orthogonal basis functions. Specifically, the basis functions Vn,m(x, y) are given by:


$$V_{n,m}(x,y) = V_{n,m}(\rho,\theta) = R_{n,m}(\rho)\, e^{jm\theta} \qquad (2)$$

where n is a nonnegative integer known as the "order" of the Zernike moment resulting from these functions. Additionally, in the implementation given as Equation 2, j=√(−1), m is an integer subject to the constraints that n−|m| is even and |m|≤n, ρ is the length of the vector from the origin to (x,y), θ is the angle between the vector and the x axis in a counterclockwise direction, and Rn,m(ρ) is what is known as a Zernike radial polynomial. Rn,m(ρ) is defined as follows:

$$R_{n,m}(\rho) = \sum_{\substack{k=|m| \\ n-k\ \mathrm{even}}}^{n} (-1)^{\frac{n-k}{2}}\, \frac{\left(\frac{n+k}{2}\right)!}{\left(\frac{n-k}{2}\right)!\,\left(\frac{k+|m|}{2}\right)!\,\left(\frac{k-|m|}{2}\right)!}\, \rho^{k} \qquad (3)$$

which is denoted, for the sake of simplicity of terminology, as:

$$R_{n,m}(\rho) = \sum_{\substack{k=|m| \\ n-k\ \mathrm{even}}}^{n} \beta_{n,m,k}\, \rho^{k} \qquad (4)$$

From this definition, it follows that Rn,m(ρ)=Rn,−m(ρ), and from the orthogonality of the basis functions Vn,m(x, y), the following holds:

$$\frac{n+1}{\pi} \iint_{x^2+y^2 \le 1} V_{n,m}(x,y)\, V^{*}_{p,q}(x,y)\, dx\, dy = \delta_{n,p}\, \delta_{m,q} \qquad (5)$$

where

$$\delta_{a,b} = \begin{cases} 1 & \text{if } a = b \\ 0 & \text{otherwise} \end{cases} \qquad (6)$$

It is this orthogonality that, in part, allows the Zernike functions to provide a useful basis for an image function.

For a digital image defined by a digital image function f(x, y), then, the Zernike moment of order n with repetition m is given by:

$$Z_{n,m} = \frac{n+1}{\pi} \sum_{x^2+y^2 \le 1} f(x,y)\, V^{*}_{n,m}(x,y) \qquad (7)$$

where V*n,m(x, y) is the complex conjugate of Vn,m(x, y). In some of the examples described herein, the digital image function f(x, y) need only describe, for each (x, y) pair, whether the pixel at that point in the binary image is on or off. In alternative implementations, more complex digital image functions may be used.

To compute the Zernike moments of a given image, in one implementation the center of mass of the object is taken to be the origin. As Equation 7 shows, because the radial polynomial is symmetric, the magnitudes of the Zernike moments are rotation invariant. By taking the center of mass to be the origin before computing a Zernike moment, the moments are, barring subtle changes in images, essentially translation-invariant as well. Thus, for substantially-similar images, their Zernike moments will be substantially similar, even if one is rotated or moved around. Similarly, in some implementations the systems and techniques scale images inside a unit circle to provide scale invariance.
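For illustration and not limitation, the following is a minimal sketch of computing a single Zernike moment of a binary silhouette directly from Equation 7, with the center of mass as the origin and coordinates scaled into the unit circle as discussed above. Function names are illustrative, double precision stands in for the arbitrary-precision arithmetic discussed below, and no term re-use is attempted here (that refinement is sketched later).

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_{n,m}(rho), following Equations 3 and 4."""
    m = abs(m)
    result = np.zeros_like(rho)
    for k in range(m, n + 1, 2):                     # k = |m|, |m|+2, ..., n
        beta = ((-1) ** ((n - k) // 2) * factorial((n + k) // 2) /
                (factorial((n - k) // 2) * factorial((k + m) // 2) * factorial((k - m) // 2)))
        result += beta * rho ** k
    return result

def zernike_moment(silhouette, n, m):
    """Zernike moment Z_{n,m} of a binary silhouette per Equation 7.

    The center of mass is taken as the origin and pixel radii are scaled into
    the unit circle, giving translation- and scale-normalized moments.
    """
    ys, xs = np.nonzero(silhouette)                  # pixels where f(x, y) = 1
    x = xs - xs.mean()
    y = ys - ys.mean()
    rho = np.sqrt(x ** 2 + y ** 2)
    rho = rho / rho.max()                            # scale the shape into the unit circle
    theta = np.arctan2(y, x)
    v_conj = radial_poly(rho, n, m) * np.exp(-1j * m * theta)   # V*_{n,m}(x, y)
    return (n + 1) / np.pi * v_conj.sum()
```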

In some implementations, once Zernike moments have been determined for an image (such as that of a hand), the image can be reconstructed; such reconstruction is not necessary for every implementation of creating and comparing a database of hand-based verification data, however. Reconstruction can be done using the following truncated expansion:

$$f(x,y) = \sum_{n=0}^{N} \frac{C_{n,0}}{2} R_{n,0}(\rho) + \sum_{n=1}^{N} \sum_{m>0} \left( C_{n,m}\cos(m\theta) + S_{n,m}\sin(m\theta) \right) R_{n,m}(\rho) \qquad (8)$$

where N is the maximum order of Zernike moments being used, and Cn,m and Sn,m denote, respectively, the real and imaginary parts of the Zernike moment terms Zn,m. This reconstruction may be used, for example, to illustrate a hand image chosen from a database of images upon analysis; this could provide additional feedback to a user or operator of a biometric analysis apparatus.

As mentioned above, one method used in existing systems to improve the speed of Zernike moments computation involves using a quantized polar coordinate system. In one such technique, a square to a circle transformation was employed for this purpose. In another, for an M×M image, angles were quantized to 4M levels and radii were quantized to M levels. Quantization techniques such as these suffer from a side effect, however, as errors are introduced in the computation of high order Zernike moments.

The described procedures that follow employ improved techniques that avoid using quantization, providing computation of the moments with comparable accuracy to traditional approaches (e.g., no approximations). To save computation time, these techniques find terms which occur repeatedly in various orders. Once these terms are computed, they are stored to avoid re-computing the terms later, and are available to be linearly combined with other pre-computed terms. These other terms are stored in a lookup table (such as in the data storage 380) and do not depend on any underlying image for which Zernike moments are being computed. Additionally, in one implementation, arbitrary precision arithmetic is used to increase accuracy.

The terms that can be isolated for repeat usage can be found through substitution of Equations 4 and 2 into Equation 7, which results in the following equation:

$$Z_{n,m} = \frac{n+1}{\pi} \sum_{x^2+y^2 \le 1} \left( \sum_{k=|m|}^{n} \beta_{n,m,k}\, \rho^{k} \right) e^{-jm\theta}\, f(x,y) = \frac{n+1}{\pi} \sum_{k=|m|}^{n} \beta_{n,m,k} \left( \sum_{x^2+y^2 \le 1} e^{-jm\theta}\, \rho^{k}\, f(x,y) \right) \qquad (9)$$

It is this final summation (shown in parentheses at the end) that can be isolated to determine repeating terms. For the sake of simplicity of terminology, then, Equation 9 can be rewritten to clarify the repeating term:

$$Z_{n,m} = \frac{n+1}{\pi} \sum_{k=|m|}^{n} \beta_{n,m,k}\, \chi_{m,k}, \qquad \text{where } \chi_{m,k} = \sum_{x^2+y^2 \le 1} e^{-jm\theta}\, \rho^{k}\, f(x,y) \qquad (10)$$

Because these χm,k terms do not rely on the order number for their computation, once an image function is defined, the χm,k terms defined in Equation 10 can be re-used as common terms in future computation of moments. In some implementations, it would be possible, while computing Zernike moments up to order N, for a process to compute χm,k for each repetition. However, as FIG. 13 shows, computing χm,k once and recording these values for future use is enough for computing Zernike moments of any order and any repetition by simply taking linear combinations as shown in Equation 10. FIG. 13 illustrates one example of common terms for Zernike moments up to order 10 for repetition m=0. Moreover, the coefficients βn,m,k (detailed in Equations 3 and 4) do not depend on an image function or coordinates; therefore, they can be stored ahead of time in a small lookup table to save computation.

FIG. 14 is a flowchart of an example process 1400 performed by the Image Analysis Module 340 for computing Zernike moments for an image using stored and re-used terms. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 1410, where the βn,m,k terms, as defined above, are computed and stored in a lookup table for later use. In various implementations, this process may be performed before any image acquisition is performed, as the βn,m,k terms do not rely on an image; alternatively the computation may be performed during image acquisition or analysis. Next, at block 1420, the various terms which are needed for computation of the Zernike moment for the image being analyzed are determined. As discussed above, this will change to some degree based on the chosen order of the Zernike moments and the repetition. The process then continues to a loop at block 1430, where a sub-process is performed for each term in the linear combination of Equation 10 used to compute a Zernike moment for the image. Thus, at decision block 1435, the module 340 determines if the necessary χm,k term in question at this point in the loop has been computed already. If not, at block 1440 the module computes the term using the image function and stores the term for later use. If, instead the χm,k term has been computed, then at block 1450 the term is recovered from storage. Next, at block 1460 the χm,k and βn,m,k are combined in the linear combination of equation 10, and the loop continues at block 1470.
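For illustration, a minimal sketch of the re-use scheme of FIG. 14 is given below. It assumes the normalized polar coordinates rho and theta of the silhouette pixels have been prepared as in the earlier sketch; the β table mirrors Equations 3 and 4, the cached χ terms mirror Equation 10, and ordinary double precision stands in for arbitrary-precision arithmetic.

```python
import numpy as np
from math import factorial

def beta_table(max_order):
    """Image-independent coefficients beta_{n,m,k} (Equations 3-4), computed once."""
    table = {}
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:
                continue                             # n - |m| must be even
            for k in range(m, n + 1, 2):
                table[(n, m, k)] = ((-1) ** ((n - k) // 2) * factorial((n + k) // 2) /
                                    (factorial((n - k) // 2) *
                                     factorial((k + m) // 2) * factorial((k - m) // 2)))
    return table

def zernike_moments_cached(rho, theta, max_order, betas):
    """All Z_{n,m} up to max_order for m >= 0, re-using the common chi_{m,k} sums of Equation 10.

    Moments for negative repetitions follow by conjugation: Z_{n,-m} = conj(Z_{n,m}).
    """
    chi = {}                                         # chi_{m,k}: computed once, shared across orders
    moments = {}
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:
                continue
            z = 0j
            for k in range(m, n + 1, 2):
                if (m, k) not in chi:
                    chi[(m, k)] = np.sum(np.exp(-1j * m * theta) * rho ** k)
                z += betas[(n, m, k)] * chi[(m, k)]
            moments[(n, m)] = (n + 1) / np.pi * z
    return moments
```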

Some implementations of the systems and methods described herein also may take advantage of adjustments in numerical precision in calculating Zernike moments to increase accuracy and/or efficiency. Depending on image size and maximum order chosen, double precision arithmetic may not provide enough precision; serious numerical errors can be introduced in the computation of moments under these conditions. The use of arbitrary precision arithmetic can overcome some of these limitations of double precision arithmetic and avoid undesired errors.

The choice of the order of the Zernike moments affects both reconstruction accuracy and computational efficiency. This effect is demonstrated in FIG. 15, where the 300×300 binary input image at the top-left corner is reconstructed using different orders of Zernike moments. Traditionally, capturing the details of the input image usually requires high orders. Using high orders is often not practical, however, due to information redundancy and computational complexity issues. Additionally, there is an inherent limitation in the precision of arbitrary high-order Zernike moments due to the circular geometry of their domain. Thus, in some implementations, the minimum order that still provides high verification accuracy is determined.

To determine this minimum order, one implementation uses the average reconstruction error on a large number of hand images to decide the maximum moment order that would be useful in the context of the biometric analysis described herein. FIG. 16(a) shows the reconstruction error of fingers for different orders. As can be observed, the error almost saturates for orders higher than 40. In FIG. 16(b), the reconstructions of a finger for different orders are shown. In FIG. 16(b), the first image is the original image, while, from left to right and top to bottom, reconstructions of the original image are shown using moments up to orders 2, 5, 10, 20, 30, 40, 50, 60 and 70, respectively. The saturation observed in FIG. 16(a) is visually evident in FIG. 16(b). By contrast, FIGS. 17(a) and 17(b) show a similar reconstruction error graph and reconstructed images for an image of an entire hand. The reconstructed images of FIG. 17(b) are for the same orders as in the images of FIG. 16(b). As FIGS. 17(a) and 17(b) show, a higher order is necessary for good precision when attempting to precisely analyze an entire hand.

The cost of higher-order Zernike moment computation is very high, especially when precision is a requirement. Using one implementation for computing high order Zernike moments, it typically takes six minutes to compute Zernike moments up to order 70, while it takes only 35 seconds to compute moments up to order 30. One reason for the low execution speed is the use of arbitrary precision arithmetic. However, experimentation has found that moments of up to order 30 can be computed with relatively high accuracy even without the use of arbitrary-precision arithmetic. Thus, in an alternative implementation, a hybrid implementation is used, where the use of arbitrary precision arithmetic is restricted to high orders only, increasing system speed. In one such implementation, it was experimentally found that using double precision instead of arbitrary precision arithmetic to compute moments up to order 36 yielded an error of less than 0.5%. Additional alternative hardware implementations using FPGAs can speed up the process as well.

This great increase in speed and reduction in complexity for lower orders supports the segmentation of the hand into finger and palm segments, as described above. As for the chosen order for the image segments, the experimentally-obtained order chosen to represent fingers in one implementation of the system is 20, while the order chosen to represent a palm is 30. In various implementations, a maximum order depends on the resolution of the image. Experimental results justify this implementation decision. To decrease the size of feature parameters, one implementation uses dimensionality reduction based on Principal Components Analysis (PCA).
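As a non-limiting illustration of the dimensionality-reduction step, the following sketch reduces a matrix of Zernike descriptor vectors (one row per sample) with PCA computed from a singular value decomposition; the function name and the number of retained components are assumptions of this sketch.

```python
import numpy as np

def pca_reduce(descriptors, n_components):
    """Project Zernike descriptor vectors onto their first principal components.

    descriptors: array of shape (n_samples, n_features). Returns the reduced
    data along with the mean and components needed to project new samples.
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # Principal axes are the right singular vectors of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T, mean, components
```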

6. Examples of Fusion and Identity Decision-Making

Following feature extraction, various implementations utilize some form of data fusion and comparison to determine if the hand image being analyzed matches any known hands. Various implementations may employ score-level fusion, feature-level fusion, or decision-level fusion (or some combination thereof) to make comparisons between the hand images which are being analyzed and known images. The methods differ in the order and manner in which they fuse and compare data. In a score-level fusion implementation, Zernike descriptors are compared segment-by-segment with known descriptors to obtain scores for each segment. These scores are then fused to arrive at an overall score for the hand image. In a feature-level fusion implementation, Zernike descriptors are fused together into a single descriptor for the hand, possibly along with dimensionality reduction or feature selection. This descriptor is then compared to previously-stored hand descriptors to obtain a matching score. In alternative implementations, other methods of comparing data obtained by computing Zernike moments may be employed.

The fusion, comparison, and decision processes described below make use of stored feature parameter records, in particular comparisons to them. While the processes described below are not made with reference to a particular number of records, in various implementations, the processes described herein may utilize one or more feature parameter records per person or per segment. Thus, in one implementation, a comparison between parameters for a just-acquired image may involve comparison with only a single stored set of parameters. In an alternative implementation, multiple sets of enrollment templates, each a set of feature parameters, may be kept for each image or image segment. In such an implementation, comparison can take the form of comparing parameters for an acquired segment with multiple enrollment templates. In one such implementation, if a score, or distance, is being calculated between the just-acquired parameter set and the recorded sets (such as is described herein), the score can be calculated as the smallest such score found from the comparisons. Thus, if feature parameters for a thumb are compared to five thumb enrollment templates for a given identity, the score is taken as the lowest out of the five comparisons. In another implementation, a mathematical manipulation may be performed on the various scores to arrive at a combined score for that segment.

FIG. 18 is a flowchart of an example process 1800 performed by the decision module 360 for fusing data associated with Zernike descriptors using feature-level fusion and comparing data to known hand samples. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. In one implementation, Principal Components Analysis is used in the procedure. The process begins at block 1820, where the module 360 receives Zernike descriptors obtained by the hand-image analysis for comparison to known hands. Next, at block 1830, Zernike descriptors of the fingers and the palm are fused into a feature vector which represents the overall geometry of the hand. The resulting representation, in certain implementations, is invariant to various transformations, including translation, rotation and scaling transformations.

Next, at block 1840, the fused data is used to compare the present hand which is being analyzed to previously-collected data. During one implementation of the biometric analysis process, the Euclidean distance between the query and the templates provides a similarity score for verification purposes. In one implementation, multiple enrollment templates are employed per subject and the smallest distance between the query and a subject's templates indicates the similarity of the query to that subject. In various implementations, the comparison may be performed with reference to a single stored feature vector (for example if a particular identity is being verified) or multiple comparisons may be made with a plurality of stored feature vectors, in order to identify the owner of a hand.

Next, at decision block 1855, the difference, or differences, in the scores is compared to an identification threshold. The value chosen for the threshold can affect the level of security provided by the biometric analysis. Thus, too high a threshold value could result in false positive identifications (or verifications), while too low a value could result in false negatives. In one implementation, such a threshold is pre-determined experimentally or according to operational requirements, such as security level, or the identity of the person being compared. Finally, depending on the decision made at decision block 1855, the system either reports a positive decision (for verification or identification) at block 1870 or a negative decision at block 1880. The process then ends.
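By way of example only, the following sketch captures the feature-level decision just described: per-segment descriptors are concatenated into a single feature vector, compared against one or more enrollment templates using Euclidean distance, and accepted if the smallest distance falls below the identification threshold. All names and the threshold are illustrative.

```python
import numpy as np

def feature_level_decision(query_segments, enrollment_templates, threshold):
    """Feature-level fusion followed by a thresholded Euclidean-distance decision.

    query_segments: list of per-segment Zernike descriptor vectors for the query hand.
    enrollment_templates: list of previously fused feature vectors for the claimed identity.
    """
    query = np.concatenate(query_segments)                   # fused feature vector
    distances = [np.linalg.norm(query - t) for t in enrollment_templates]
    score = min(distances)              # smallest distance over the stored templates
    return score < threshold, score     # True = positive verification decision
```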

By comparison, FIG. 19 is a flowchart of an example process 1900 performed by the decision module 360 for fusing data associated with Zernike descriptors using score-level fusion and comparing data to known hand samples. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 1920, where the module 360 receives Zernike descriptors obtained by the hand-image analysis for comparison to known hands. Next, at block 1930, the feature parameter descriptors for each segment (e.g. the palm and each finger) are compared individually to stored descriptors. Similarly to the implementations discussed above, this may be done with reference to a particular stored set of descriptors, or with a plurality of stored descriptors.

Next, depending on the implementation, the score-level fusion procedure may make a decision based on a weighted-sum analysis or using support vector machines. If the implementation is using a weighted sum, then at block 1940, these scores are fused into an overall score using weighted summation. To verify (or identify) a user, the module compares the input image with the templates stored in the database and picks the template with the minimum distance from the input. Specifically, given the matching scores si, for i=1 . . . 6, the overall score S is obtained as follows:

$$S(Q,T) = \sum_{i=1}^{6} \alpha_i\, S(Q_i, T_i) \qquad (11)$$

where S denotes the similarity measure (e.g., Euclidean distance) between the query Q and the template T, and Qi and Ti represent the i-th part of the hand. In one implementation, the first five parts correspond to the little, ring, middle, point, and thumb fingers while the sixth part corresponds to the back of the palm. The parameters αi are the weights associated with the i-th part of the hand. In one implementation, they satisfy the following constraint:

$$\sum_{i=1}^{6} \alpha_i = 1 \qquad (12)$$

In a feature-level fusion implementation, a similar weighted scheme may be used to determine the fused vectors.

Determining the proper weights to be used in the summation is important to obtaining good accuracy. In one implementation, weights are determined experimentally through a search over an empirically determined set of weights to maximize accuracy over a small database of 80 samples from 40 subjects. Feature vectors are fully invariant to translations and rotations. As a result, generally any type of distance metric can be used for computing similarities by the decision module 360. Next, at decision block 1945, the score is compared to an identification threshold, similarly to the feature-level fusion described above, to determine if it is below the threshold. Finally, depending on the decision made at decision block 1945, the system either reports a positive decision (for verification or identification) at block 1970 or a negative decision at block 1980. The process then ends.
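A non-limiting sketch of the weighted-sum fusion of Equations 11 and 12 follows; the segment ordering (five fingers then the palm), the weights, and the threshold are illustrative values, not those determined experimentally above.

```python
import numpy as np

def score_level_decision(query_segments, template_segments, weights, threshold):
    """Score-level fusion: per-segment distances combined by a weighted sum (Equation 11).

    query_segments / template_segments: six descriptor vectors each; weights: the
    alpha_i values, which must sum to one (Equation 12).
    """
    assert abs(sum(weights) - 1.0) < 1e-9            # enforce the Equation 12 constraint
    segment_scores = [np.linalg.norm(q - t)
                      for q, t in zip(query_segments, template_segments)]
    overall = sum(a * s for a, s in zip(weights, segment_scores))
    return overall < threshold, overall, segment_scores
```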

In an alternative implementation, support vector machines are used. A support vector machine ("SVM") is a binary classifier that maps input patterns X to output labels y∈{−1, 1}. In general, an SVM has the following form:

$$f(X) = \sum_{i \in \Omega} \alpha_i\, y_i\, K(X, X_i) + b \qquad (13)$$

where αi are Lagrange multipliers, Ω corresponds to the indices of the support vectors (for which αi≠0), b is a bias term, X is an input vector, and K(X,Xi) is a kernel function. Classification decisions are based on whether the value f(X) is above or below a threshold, and thus can be adjusted for greater or lesser security similarly to the processes above by adjusting the threshold. Given a pair of hands to be verified, the input vector X is composed of the scores between corresponding parts of the hand. Assigning the input vector to the class "1" implies that both hands come from the same subject, while assigning it to the class "−1" implies that they come from different subjects.

Thus, in implementations utilizing a support vector machine, at block 1950 the previously-obtained scores are mapped to either a positive or negative decision according to the output of the SVM. At this point, either a positive or negative decision is then reported in either block 1970 or 1980 depending on the result.
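For illustration only, the following sketch trains and applies an SVM over segment-score vectors as described above, using the scikit-learn library; the kernel and its parameters are assumptions of this sketch rather than the configuration used in any experiment reported here.

```python
import numpy as np
from sklearn.svm import SVC

def train_score_svm(score_vectors, labels):
    """Train an SVM mapping six-element score vectors to genuine (+1) / impostor (-1) labels.

    score_vectors: array of shape (n_pairs, 6), one row of segment-matching scores per
    known genuine or impostor hand pair; labels: +1 for same subject, -1 otherwise.
    """
    svm = SVC(kernel="rbf")          # kernel choice is illustrative
    svm.fit(score_vectors, labels)
    return svm

def svm_decision(svm, segment_scores, threshold=0.0):
    """Accept when the decision value f(X) (Equation 13) exceeds the chosen threshold."""
    value = svm.decision_function(np.asarray(segment_scores).reshape(1, -1))[0]
    return value > threshold, value
```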

FIG. 20 is a flowchart of an example process 2000 performed by the decision module 360 for fusing data associated with Zernike descriptors using a decision-level fusion process and comparing data to known hand samples. The illustrated implementation depicts a process for “majority voting.” Majority voting is among the most straightforward decision-level fusion strategies. In various implementations, the illustrated process blocks may be merged, divided into sub-blocks, or omitted. The process begins at block 2020, where the module 360 receives Zernike descriptors obtained by the hand-image analysis for comparison to known hands. Next, at block 2030, the feature parameter descriptors for each segment (e.g. the palm and each finger) are compared individually to stored descriptors to obtain scores for each segment. Similarly to the implementations discussed above, this may be done with reference to a particular stored set of descriptors, or with a plurality of stored descriptors.

Next, at decision block 2045, the decision module 360 determines whether a majority of the scores are below one or more preset thresholds. Different thresholds may be set for each type of segment, although in some implementations the same threshold may be used for more than one segment. If a majority of the segments have scores below their thresholds, a positive decision is reported at block 2070; if not, a negative decision is reported at block 2080. The process then ends.
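A minimal sketch of the majority-voting decision of process 2000 follows. The per-segment thresholds and the Euclidean distance used to score each segment are illustrative assumptions, not values specified by this disclosure.

```python
import numpy as np

# Hypothetical per-segment thresholds (five fingers followed by the palm).
THRESHOLDS = [0.30, 0.30, 0.25, 0.30, 0.35, 0.20]

def majority_vote(query_parts, template_parts, thresholds=THRESHOLDS):
    """Score each segment independently and report a positive decision
    (block 2070) when a majority of segment scores fall below their
    thresholds; otherwise report a negative decision (block 2080)."""
    votes = 0
    for q, t, thr in zip(query_parts, template_parts, thresholds):
        score = np.linalg.norm(np.asarray(q) - np.asarray(t))  # per-segment distance
        if score < thr:
            votes += 1
    return votes > len(thresholds) // 2
```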

7. Computing Environment

The above hand-based biometric analysis techniques and systems can be performed on any of a variety of computing devices. The techniques can be implemented in hardware circuitry, as well as in software executing within a computer or other computing environment, such as shown in FIG. 21.

FIG. 21 illustrates a generalized example of a suitable computing environment 2100 in which described embodiments may be implemented. The computing environment 2100 is not intended to suggest any limitation as to scope of use or functionality of the invention, as the present invention may be implemented in diverse general-purpose or special-purpose computing environments.

With reference to FIG. 21, the computing environment 2100 includes at least one processing unit 2110 and memory 2120. In FIG. 21, this most basic configuration 2130 is included within a dashed line. The processing unit 2110 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 2120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 2120 stores software 2180 implementing the described techniques.

A computing environment may have additional features. For example, the computing environment 2100 includes storage 2140, one or more input devices 2150, one or more output devices 2160, and one or more communication connections 2170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 2100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 2100, and coordinates activities of the components of the computing environment 2100.

The storage 2140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 2100. The storage 2140 stores instructions for the software 2180 implementing the described techniques.

The input device(s) 2150 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 2100. For audio, the input device(s) 2150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 2160 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 2100.

The communication connection(s) 2170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.

The techniques described herein can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 2100, computer-readable media include memory 2120, storage 2140, communication media, and combinations of any of the above.

The techniques herein can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.

For the sake of presentation, the detailed description uses terms like “determine,” “calculate,” and “compute,” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.

In view of the many possible variations of the subject matter described herein, we claim as our invention all such embodiments as may come within the scope of the following claims and equivalents thereto.

Claims

1.-2. (canceled)

3. A method of determining identity using a computing system that implements a biometric analysis tool, the method comprising:

with the computing system that implements the biometric analysis tool, performing biometric analysis on an image of a hand to produce plural feature parameters without requiring a particular orientation of the hand during image capture and without requiring a particular orientation of the hand during the biometric analysis, wherein the biometric analysis includes: determining a silhouette representation from the image of the hand; and using the silhouette representation to determine the plural feature parameters, each of the plural feature parameters indicating geometry of a different part among palm and plural fingers of the hand; and
making an identity decision based at least in part on the plural feature parameters and stored descriptors representing one or more previously-analyzed hand images.

4. The method of claim 3, wherein a different weight is associated with each of the plural feature parameters, respectively.

5. The method of claim 3, wherein the plural feature parameters resulting from the biometric analysis are invariant to rotation, scale and translation.

6. The method of claim 3, wherein the image represents a back view of the hand, and wherein determining the silhouette representation comprises, for each pixel value of plural pixel values of the image:

comparing the pixel value to a threshold;
if the pixel value satisfies the threshold, assigning the pixel value a first binary value; and
otherwise, assigning the pixel value a second binary value.

7. The method of claim 3, wherein the biometric analysis is performed without reference to direct physical measurements of the image of the hand or the silhouette representation, and wherein the biometric analysis is performed without reference to landmark points within the image of the hand or the silhouette representation.

8. The method of claim 3, wherein the biometric analysis further comprises segmenting the silhouette representation, including:

filtering out the plural fingers from the silhouette representation to identify a segment image for the palm;
removing the palm from the silhouette representation to identify segment images for the plural fingers, respectively.

9. The method of claim 3, wherein the identity decision is a verification decision, and wherein the making the identity decision comprises:

selecting a particular descriptor among the stored descriptors; and
verifying identity using the plural feature parameters and the particular descriptor.

10. The method of claim 3, wherein the identity decision is an identification decision, and wherein the making the identity decision comprises:

identifying a matching descriptor among the stored descriptors using the plural feature parameters and each of the stored descriptors, respectively, until the matching descriptor is identified.

11. The method of claim 3, wherein the making the identity decision uses a threshold, the method further comprising setting the threshold depending on security level.

12. The method of claim 3, wherein the making the identity decision is further based at least in part on separate fingerprint data, other biometric data and/or other identity data.

13. The method of claim 3, wherein the making the identity decision uses feature-level fusion and includes:

combining the plural feature parameters into a feature vector, including using a different weight associated with each of the plural feature parameters as part of the combining the plural feature parameters into the feature vector; and
for a given stored descriptor of the stored descriptors, comparing the feature vector with a stored vector for the given stored descriptor, wherein the identity decision is based at least in part on the comparing the feature vector with the stored vector for the given stored descriptor.

14. The method of claim 3, wherein the making the identity decision uses score-level fusion and includes, for a given stored descriptor of the stored descriptors:

for each feature parameter of the plural feature parameters: comparing the feature parameter to a corresponding parameter for the given stored descriptor; and calculating a score value for the feature parameter based at least in part on a different weight associated with the feature parameter and the comparing the feature parameter to the corresponding parameter for the given stored descriptor;
calculating an overall score value based at least in part on the score values for the respective feature parameters; and
comparing the overall score value to a threshold, wherein the identity decision is based at least in part on the comparing the overall score value to the threshold.

15. The method of claim 3, wherein the making the identity decision uses score-level fusion and includes, for a given stored descriptor of the stored descriptors:

for each feature parameter of the plural feature parameters, comparing the feature parameter to a corresponding parameter for the given stored descriptor; and
using support vector machine classification to map results of the comparing for the plural feature parameters, respectively, to the identity decision.

16. The method of claim 3, wherein the making the identity decision uses decision-level fusion and includes, for a given stored descriptor of the stored descriptors:

for each feature parameter of the plural feature parameters: comparing the feature parameter to a corresponding parameter for the given stored descriptor to produce a feature score value; and determining whether the feature score value satisfies one or more thresholds;
wherein the identity decision is based at least in part on the determination of whether the feature score values for the plural feature parameters satisfy the one or more thresholds.

17. The method of claim 3, further comprising adjusting a different weight associated with each of the plural feature parameters, a threshold and/or other setting of the biometric analysis tool based at least in part on the identity decision.

18. A computing system that implements a biometric analysis tool, the computing system including:

a processing unit; and
memory storing computer-executable instructions for causing the computing system to perform biometric analysis operations comprising: performing biometric analysis on an image of a hand to produce plural feature parameters without requiring a particular orientation of the hand during image capture and without requiring a particular orientation of the hand during the biometric analysis, wherein the biometric analysis includes: determining a silhouette representation from the image of the hand; and using the silhouette representation to determine the plural feature parameters, each of the plural feature parameters indicating geometry of a different part among palm and plural fingers of the hand; and making an identity decision based at least in part on the plural feature parameters and stored descriptors representing one or more previously-analyzed hand images.

19. The computing system of claim 18, wherein the image of the hand is received from an image acquisition device that lacks pegs to guide the orientation of the hand.

20. A memory device storing computer-executable instructions for causing a processor, when programmed thereby, to perform biometric analysis operations comprising:

performing biometric analysis on an image of a hand to produce plural feature parameters without requiring a particular orientation of the hand during image capture and without requiring a particular orientation of the hand during the biometric analysis, wherein the biometric analysis includes: determining a silhouette representation from the image of the hand; and using the silhouette representation to determine the plural feature parameters, each of the plural feature parameters indicating geometry of a different part among palm and plural fingers of the hand; and
making an identity decision based at least in part on the plural feature parameters and stored descriptors representing one or more previously-analyzed hand images.

21. The memory device of claim 20, wherein a different weight is associated with each of the plural feature parameters, respectively.

22. The memory device of claim 20, wherein the plural feature parameters resulting from the biometric analysis are invariant to rotation, scale and translation.

Patent History
Publication number: 20150261992
Type: Application
Filed: Mar 30, 2015
Publication Date: Sep 17, 2015
Applicant: Board of Regents of the Nevada System of Higher Education, on Behalf of the University of Nevada (Reno, NV)
Inventors: George Bebis (Reno, NV), Gholamreza Amayeh (Reno, NV)
Application Number: 14/673,593
Classifications
International Classification: G06K 9/00 (20060101);