IRIS IMAGE DATA PROCESSING FOR USE WITH IRIS RECOGNITION SYSTEM

- SENGA ADVISORS, LLC.

Disclosed is an iris recognition method, one of the biometric technologies. According to a non-contact-type human iris recognition method with correction of a rotated iris image, the iris image is acquired by image acquisition equipment using an infrared illuminator. Inner and outer boundaries of the iris are detected from the input iris image by using a Canny edge detector and by analyzing differences between neighboring pixel values, so that the boundaries of the iris are detected more accurately from the eye image of a user. Thus, an iris image with a variety of deformations can be processed into a correct iris image, with the advantage that the false acceptance rate and the false rejection rate can be markedly reduced.

Description
RELATED APPLICATIONS

This application is a continuation of application Ser. No. 10/656,921, which is a continuation application under 35 U.S.C. §365(c) claiming the benefit of the filing date of PCT Application No. PCT/KR01/01302 designating the United States, filed Jul. 1, 2001. The PCT Application was published in English as WO 02/071316 A1 on Sep. 12, 2002, and claims the benefit of the earlier filing date of Korean Patent Application No. 2001/11441, filed Mar. 6, 2001. The contents of Korean Patent Application No. 2001/11441, international application No. PCT/KR01/01302 including the publication WO 02/071316 A1, and application Ser. No. 10/656,921 are incorporated herein by reference in their entirety.

BACKGROUND

1. Field

The present disclosure relates to processing iris image data, and more particularly, to a method of identifying an outer boundary of an iris image.

2. Discussion of the Related Art

An iris recognition system is an apparatus for verifying a person's identity by distinguishing his or her unique iris pattern. The iris recognition system is more accurate for personal identification than other biometric methods, such as voice or fingerprint recognition, and offers a high degree of security. The iris is the region of the eye lying between the pupil and the white sclera. The iris recognition method identifies individuals based on information obtained by analyzing their iris patterns, each of which differs from every other.

Generally, the core technique of an iris recognition system is to acquire an accurate eye image using image acquisition equipment and to efficiently extract the unique characteristic information of the iris from the input eye image.

However, in a non-contact-type human iris recognition system, which acquires an iris image taken at a certain distance, iris images with a variety of deformations may be acquired in practice. That is, a complete eye image is unlikely to be acquired, since the eye is not necessarily directed toward the front face of the camera but may be positioned at a slight angle with respect to it. Thus, there may be a case where the acquired eye image is rotated at an arbitrary angle with respect to a centerline of the iris.

The foregoing discussion in the background section is to provide general background information and does not constitute an admission of prior art.

SUMMARY

One aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center; computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel; computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.

In the foregoing method, computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels. The method may further comprise computing a first distance between the first inner boundary pixel and the first outer boundary pixel. The method may further comprise obtaining a distance between the first inner boundary pixel and the second inner boundary pixel. Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel. Computing the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel. The center of the outer boundary may be offset from the center of the inner boundary.

Still in the foregoing method, the method may further comprise: providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center; computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel, wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel. The second imaginary line may be substantially perpendicular to the first imaginary line. Computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels. Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel. Computing to locate the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel.

Another aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary; computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel; computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel; computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel; computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and using the first, second, third and fourth outer boundary pixels for further processing.

In the foregoing method, the method may further comprise computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels. The first imaginary line may be substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line may be substantially perpendicular to the third and fourth imaginary lines. A pixel located on the first imaginary line may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart explaining the procedures of a normalization process of an iris image according to one embodiment of the present invention.

FIG. 2a is a view showing a result of detection of a pupillary boundary using a Canny edge detector.

FIG. 2b is a view showing center coordinates and diameter of a pupil.

FIG. 2c shows an iris image upon obtainment of a radius and center of an outer boundary of an iris according to one embodiment of the present invention.

Further in the foregoing method, the first imaginary line may comprise a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, and wherein a pixel located on the second line segment may be determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment. Providing information indicative of a center of the inner boundary may comprise performing a Canny edge detection method using the eye image data. The method may further comprise: extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and processing the data of the portion to obtain a characteristic vector of an iris pattern. Processing the data may further comprise obtaining a plurality of characteristic vectors for the same eye image.

FIGS. 3(a) to (d) show the procedures of the normalization process of a slanted iris image.

FIGS. 4(a) and (b) show a rotated iris image resulting from the tilting of the user's head.

FIGS. 5(a) and (b) show procedures of a correction process of the rotated iris image shown in FIGS. 4(a) and (b).

DETAILED DESCRIPTION OF EMBODIMENTS

Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart explaining the procedures of a normalization process of an iris image according to one embodiment of the present invention. Referring to FIG. 1, at step 110, an eye image is acquired by image acquisition equipment using an infrared illuminator and a visible light rejection filter. At this time, the reflected light is made to gather within the pupil of the eye so that information on the iris image is not lost. At step 120, the inner and outer boundaries of the iris are detected in order to extract only the iris region from the acquired eye image, and the centers of the detected inner and outer boundaries are set. Step 120 is performed by a method for detecting the inner and outer boundaries of the iris using a Canny edge detector and differences in pixel values of the image according to one embodiment of the present invention, which is explained in detail below.

FIG. 2a is a view showing a result of detection of the pupillary boundary, i.e. the inner boundary of the iris, using the Canny edge detector. Referring to FIG. 2a, it is noted that only the pupillary boundary is detected by employing the Canny edge detector. That is, as shown in FIG. 2a, the inner boundary of the iris is detected by using the Canny edge detector, which is a kind of boundary-detecting filter. The Canny edge detector smooths the acquired image by Gaussian filtering and then detects a boundary by using a Sobel operator. The Gaussian filtering process can be expressed as the following Equation 1, and the Sobel operator used can be expressed as the following Equation 2.


$$I_G(x, y) = G(x, y) * I(x, y) \qquad \text{[Equation 1]}$$

where $*$ denotes two-dimensional convolution with the Gaussian kernel $G(x, y)$.

$$
\begin{aligned}
S_x &= I[i-1][j+1] + 2I[i][j+1] + I[i+1][j+1] - I[i-1][j-1] - 2I[i][j-1] - I[i+1][j-1] \\
S_y &= I[i+1][j+1] + 2I[i+1][j] + I[i+1][j-1] - I[i-1][j+1] - 2I[i-1][j] - I[i-1][j-1]
\end{aligned}
\qquad \text{[Equation 2]}
$$

In a case where the boundary detecting method employing the Canny edge detector is used, the inner boundary of the iris, i.e. the pupillary boundary, can be correctly detected, and the center coordinates and radius of the pupil can be easily obtained, even though a normal eye image is not acquired because the eye of the user is not directed toward the front face of the camera but is positioned at a slight angle with respect to it. FIG. 2b shows the center coordinates and diameter of the pupil. Referring to FIG. 2b, the pupil's radius is d/2, and the pupil's center coordinates are (x+d/2, y+d/2).
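For illustration, the smoothing-plus-gradient pipeline of Equations 1 and 2, together with the bounding-box geometry of FIG. 2b, can be sketched in Python as below. This is a minimal sketch, not the patent's specified implementation: the grayscale [row, column] array layout, the use of SciPy's gaussian_filter, and the helper names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(image: np.ndarray, sigma: float = 1.4) -> np.ndarray:
    """Equation 1 (Gaussian smoothing) followed by Equation 2 (Sobel)."""
    g = gaussian_filter(image.astype(float), sigma)  # I_G = G * I
    # Explicit Sobel responses of Equation 2, evaluated at interior pixels.
    sx = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[1:-1, :-2] - g[2:, :-2])
    sy = (g[2:, 2:] + 2 * g[2:, 1:-1] + g[2:, :-2]
          - g[:-2, 2:] - 2 * g[:-2, 1:-1] - g[:-2, :-2])
    return np.pad(np.hypot(sx, sy), 1)  # zero-pad back to the input shape

def pupil_center_and_radius(pupil_edge_mask: np.ndarray):
    """Bounding box of the detected pupillary edge, per FIG. 2b:
    radius = d/2, center = (x + d/2, y + d/2)."""
    ys, xs = np.nonzero(pupil_edge_mask)
    x, y = xs.min(), ys.min()
    d = xs.max() - x  # pupil diameter along x
    return (x + d / 2, y + d / 2), d / 2
```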

On the other hand, the outer boundary of the iris in the image can be detected by comparing pixel values while proceeding upward, downward, leftward and rightward from the pupillary boundary, i.e. the inner boundary of the iris, and finding the maximum differences in the pixel values. The detected maxima are Max{I(x, y)−I(x−1, y)}, Max{I(x, y)−I(x+1, y)}, Max{I(x, y)−I(x, y−1)}, and Max{I(x, y)−I(x, y+1)}, where I(x, y) is the pixel value of the image at the point (x, y). The reason the differences in pixel values are obtained while proceeding upward, downward, leftward and rightward from the inner boundary of the iris upon detection of the outer boundary is to allow the inner and outer centers to be set independently of each other. That is, in a case where a slanted iris image is acquired, since the pupil is located slightly upward, downward, leftward or rightward of center in the image, the inner and outer centers can be set differently from each other.
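A minimal sketch of one such directional scan follows; the helper name, the [row, column] indexing, and the max_steps bound are illustrative assumptions, while the criterion itself, the maximum difference between neighboring pixel values along the scan, is the one described above.

```python
import numpy as np

def outer_boundary_pixel(image, start_xy, step_xy, max_steps=200):
    """Walk outward from an inner-boundary pixel in one direction and
    return the position where the difference between neighboring pixel
    values, |I(x, y) - I(x+dx, y+dy)|, is maximal."""
    (x, y), (dx, dy) = start_xy, step_xy
    h, w = image.shape
    best_diff, best_point = -np.inf, (x, y)
    for _ in range(max_steps):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < w and 0 <= ny < h):
            break  # ran off the image
        diff = abs(float(image[y, x]) - float(image[ny, nx]))
        if diff > best_diff:
            best_diff, best_point = diff, (nx, ny)
        x, y = nx, ny
    return best_point

# Four scans from the pupillary boundary: left, right, up, down.
# directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
```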

FIG. 2c shows an iris image upon obtainment of the radius and center of the outer boundary of the iris according to one embodiment of the present invention. In a case where an incomplete eye image is acquired because the eye is not directed toward the front face of the camera but positioned at a slight angle with respect to it, a process of setting the centers of the inner and outer boundaries of the iris is required. First, the distances RL, RR, RU and RD from the inner boundary to the left, right, upper and lower portions of the outer boundary, respectively, and the radius RI of the inner boundary, i.e. the pupillary boundary, are calculated. Then, the center of the outer boundary is obtained by finding the bisection points of the horizontal and vertical extents of the outer boundary using the calculated values.
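Under the same assumptions, the bisection step can be written compactly; the parameter names r_i, r_l, r_r, r_u and r_d are hypothetical stand-ins for RI, RL, RR, RU and RD.

```python
def outer_center(inner_center, r_i, r_l, r_r, r_u, r_d):
    """Bisect the outer boundary's horizontal and vertical extents.
    inner_center: pupil center (cx, cy); r_i: pupil radius RI;
    r_l, r_r, r_u, r_d: distances RL, RR, RU, RD from the inner
    boundary to the left/right/upper/lower outer-boundary points."""
    cx, cy = inner_center
    left, right = cx - r_i - r_l, cx + r_i + r_r
    upper, lower = cy - r_i - r_u, cy + r_i + r_d
    # Bisection points of the left-right and up-down extents.
    return ((left + right) / 2.0, (upper + lower) / 2.0)
```

Because the horizontal and vertical extents are bisected independently, the resulting outer center may differ from the pupil center, which is exactly what the slanted-image case requires.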

At step 130, iris patterns are detected only within a predetermined portion of the distance from the inner boundary to the outer boundary. At step 140, the detected iris patterns are converted into an iris image in polar coordinates. At step 150, the converted iris image in polar coordinates is normalized to obtain an image having predetermined width and height.

The conversion of the extracted iris patterns into the iris image in the polar coordinates can be expressed as the following Equation 3.


$$I\bigl(x(r, \theta),\, y(r, \theta)\bigr) \rightarrow I(r, \theta) \qquad \text{[Equation 3]}$$

where θ is increased in steps of 0.8 degrees, and r is calculated by using the law of cosines from the distance between the outer center CO and the inner center CI of the iris, the radius RO of the outer boundary, and the value of θ. The iris patterns between the inner and outer boundaries of the iris are extracted using r and θ. In order to avoid changes in the features of the iris caused by variations in the size of the pupil, the region between the inner and outer boundaries is divided into 60 radial segments while θ is varied by 0.8 degrees to yield 450 angular samples, so that the iris image is finally normalized into 27,000 data points (θ×r = 450×60).
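A sketch of this unwrapping is given below, assuming a grayscale NumPy image with all samples falling inside it; the nearest-neighbor sampling and the mid-bin offset are assumptions, while the law-of-cosines solve for the outer radius along each ray follows from the triangle formed by the two centers and the outer-boundary point.

```python
import numpy as np

def iris_to_polar(image, c_i, c_o, r_i, r_o, n_theta=450, n_r=60, portion=0.6):
    """Unwrap the iris band into an (n_theta x n_r) polar grid per
    Equation 3; theta advances in steps of 360/450 = 0.8 degrees.
    The outer radius along each ray is solved with the law of cosines,
    since the inner and outer centers may differ; only the inner
    `portion` of each ray is sampled to avoid illuminator reflections."""
    (cxi, cyi), (cxo, cyo) = c_i, c_o
    d = np.hypot(cxo - cxi, cyo - cyi)        # distance between centers
    base = np.arctan2(cyo - cyi, cxo - cxi)   # direction toward outer center
    polar = np.zeros((n_theta, n_r))
    for t in range(n_theta):
        theta = 2.0 * np.pi * t / n_theta
        phi = theta - base
        # Law of cosines: R_O^2 = d^2 + r^2 - 2*d*r*cos(phi); solve for r.
        r_out = d * np.cos(phi) + np.sqrt(r_o**2 - (d * np.sin(phi))**2)
        for k in range(n_r):
            rr = r_i + (r_out - r_i) * portion * (k + 0.5) / n_r
            x = int(round(cxi + rr * np.cos(theta)))
            y = int(round(cyi + rr * np.sin(theta)))
            polar[t, k] = image[y, x]
    return polar  # 450 x 60 = 27,000 samples
```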

FIG. 3(a) shows the slanted iris image, and FIG. 3(b) is the iris image in polar coordinates converted from the slanted iris image. It can be seen from FIG. 3(b) that a lower portion of the converted iris image in polar coordinates is curved with an irregular shape. In addition, FIG. 3(c) shows an iris image having dimensions of M pixels in width and N pixels in height, normalized from the irregular image of the iris patterns. Hereinafter, the normalization process of the slanted iris image will be described with reference to FIGS. 3(a) to (c). Of the region between the inner and outer boundaries of the iris in FIG. 3(a), only the portion corresponding to X% of the distance between the inner and outer boundaries is taken, in order to eliminate interference from the illuminator while still acquiring a large amount of iris pattern. That is, when the inner and outer boundaries of the iris are detected, the iris patterns are taken and then converted into polar coordinates. However, in a case where reflected light from the illuminator gathers on the iris, only the iris patterns within 60% of the distance from the inner boundary (pupillary boundary) toward the outer boundary may be picked up and converted into polar coordinates. The value of 60% selected in this embodiment of the present invention was experimentally determined as the range in which the greatest amount of iris pattern can be picked up while excluding the reflected light gathered on the iris.

In FIG. 3(b), the slanted iris image has been converted into the iris image in polar coordinates. As shown in FIG. 3(b), when the iris patterns are converted into polar coordinates, the lower portion of the converted iris pattern image is curved with an irregular shape. Thus, it is necessary to normalize the irregular iris pattern image. In FIG. 3(c), the irregular image of the iris patterns is normalized to obtain an iris image with dimensions of M pixels in width and N pixels in height.

For reference, the performance of an iris recognition system is evaluated by two factors: the false acceptance rate (FAR) and the false rejection rate (FRR). The FAR is the probability that the iris recognition system incorrectly identifies an impostor as an enrollee and thus admits the impostor, and the FRR is the probability that the system incorrectly identifies an enrollee as an impostor and thus rejects the enrollee. According to one embodiment of the present invention, when pre-processing was performed using the described method for detecting the boundaries of the iris and the normalization of the slanted iris image, the FAR was reduced from 5.5% to 2.83% and the FRR was reduced from 5.0% to 2.0%, as compared with an iris recognition system employing a conventional method for detecting the boundaries of the iris.
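As a minimal illustration of how these two rates are measured from similarity scores (the threshold-based decision rule is an assumption of this sketch; the patent does not specify a matching threshold):

```python
import numpy as np

def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR: fraction of impostor comparisons accepted (similarity >= threshold).
    FRR: fraction of genuine comparisons rejected (similarity < threshold)."""
    far = float(np.mean(np.asarray(impostor_scores) >= threshold))
    frr = float(np.mean(np.asarray(genuine_scores) < threshold))
    return far, frr
```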

Finally, at step 160, if the iris in the acquired eye image has been rotated at an arbitrary angle with respect to a centerline of the iris, the arrays of pixels of the iris image information are moved and compared in order to correct the rotated iris image.

FIGS. 4(a) and (b) show a rotated iris image resulting from the tilting of the user's head. Upon acquisition of an iris image, the user's head may be tilted slightly to the left or right; if the iris image is acquired in this state, a rotated iris image is obtained as shown in FIG. 4(a). That is, if the eye image acquired at step 110 has been rotated at an arbitrary angle with respect to a centerline of the eye, a process of correcting the rotated image is required. FIG. 4(a) shows the iris image rotated by about 15 degrees in a clockwise or counterclockwise direction with respect to the centerline of the eye. When the rotated iris image is converted into an image in polar coordinates, the iris patterns in the converted image are shifted leftward or rightward, as shown in FIG. 4(b), compared with the normal iris pattern.

FIGS. 5(a) and (b) show procedures of the process of correcting the rotated iris image shown in FIGS. 4(a) and (b). The process of correcting the rotated iris image, which has resulted from the tilting of the user's head, by comparing and moving the arrays of the iris image information will be described below with reference to FIGS. 5(a) and (b).

Referring to FIG. 5(a), from the rotated iris image resulting from the tilting of the user's head, a plurality of arrays Array(n) of the iris image are temporarily generated by means of shifts by an arbitrary angle with respect to Array(0), the converted iris image in polar coordinates. That is, by shifting the columns of Array(0) leftward or rightward, 20 arrays of image information, from Array(−1) down to Array(−10) and from Array(1) up to Array(10), are temporarily generated.
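A sketch of that temporary array generation follows, assuming the polar image stores the angular coordinate along axis 0 at 0.8 degrees per step; the circular wrap-around of np.roll matches the periodicity of the angular coordinate.

```python
import numpy as np

def shifted_arrays(polar_image, max_shift=10):
    """Temporarily generate Array(-max_shift) ... Array(+max_shift) by
    circularly shifting the angular axis of the polar iris image,
    emulating small rotations of the eye about its center."""
    return {n: np.roll(polar_image, n, axis=0)
            for n in range(-max_shift, max_shift + 1)}
```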

In order to generate characteristic vectors of the iris corresponding to the plurality of temporarily generated arrays of the iris image, a wavelet transform is performed. The characteristic vectors generated by the wavelet transform are compared with previously registered characteristic vectors to obtain similarities. The characteristic vector corresponding to the maximum similarity among the obtained similarities is accepted as the characteristic vector of the user.

In other words, by generating the arrays Array(n) of image information on the rotated image as mentioned above and performing the wavelet transform on each array of image information as shown in FIG. 5(b), the characteristic vectors fT(n) of the iris corresponding to the temporarily generated arrays Array(n) of the iris image are then generated. The characteristic vectors fT(n) are generated from fT(0) to fT(10) and from fT(0) to fT(−10). Each generated characteristic vector fT(n) is compared with each characteristic vector fR of the enrollees, and similarities Sn are thereby obtained. The characteristic vector fT(n) corresponding to the maximum similarity among the obtained similarities Sn is considered the result in which the rotation effect has been corrected, and is accepted as the characteristic vector of the user's iris.
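A sketch of this matching stage is shown below. The patent specifies only a wavelet transform and a similarity measure, so the Haar wavelet, the use of level-3 approximation coefficients as fT(n), and normalized correlation as Sn are all assumptions of this sketch.

```python
import numpy as np
import pywt  # PyWavelets

def characteristic_vector(polar_array, level=3):
    """Wavelet-transform one (shifted) polar iris array and flatten the
    low-frequency approximation coefficients into a vector f_T(n)."""
    coeffs = pywt.wavedec2(polar_array, "haar", level=level)
    return coeffs[0].ravel()

def rotation_corrected_similarity(shifted, f_r):
    """Compare every f_T(n) with an enrolled vector f_R by normalized
    correlation and keep the maximum similarity S_n."""
    def sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sim(characteristic_vector(arr), f_r) for arr in shifted.values())
```

The enrolled vector f_R would be produced by the same characteristic_vector pipeline at enrollment time, so that the comparison is between like quantities.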

As described above, according to the non-contact-type human iris recognition method by correction of the rotated iris image of one embodiment of the present invention, there is an advantage in that, by detecting the inner and outer boundaries of the iris using the Canny edge detector and the differences in pixel values of the image, the boundaries of the iris can be detected more accurately from the eye image of the user.

Furthermore, according to the non-contact-type human iris recognition method of one embodiment of the present invention, if the iris in the eye image acquired by the image acquisition equipment has been rotated at an arbitrary angle with respect to the centerline of the iris, the rotated image is corrected into a normal iris image. Likewise, if a lower portion of the converted iris image in polar coordinates is curved and thus has an irregular shape due to the acquisition of a slanted iris image, the iris image is normalized to predetermined dimensions. Thus, there is the further advantage that an iris image with a variety of deformations is processed into data of a correct iris image, so as to markedly reduce the false acceptance rate and the false rejection rate.

It should be noted that the above description exemplifies the non-contact-type human iris recognition method by correction of the rotated iris image according to embodiments of the present invention, and the present invention is not limited thereto. A person skilled in the art can make various modifications and changes to the embodiments of the present invention without departing from the technical spirit and scope of the present invention defined by the appended claims.

Claims

1. A method of processing iris image data, comprising:

providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center;
computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel;
computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and
computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.

2. The method of claim 1, wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels.

3. The method of claim 1, further comprising computing a first distance between the first inner boundary pixel and the first outer boundary pixel.

4. The method of claim 1, further comprising obtaining a distance between the first inner boundary pixel and the second inner boundary pixel.

5. The method of claim 1, wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel.

6. The method of claim 5, wherein computing the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.

7. The method of claim 1, wherein the center of the outer boundary is off the center of the inner boundary.

8. The method of claim 1, further comprising:

providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center;
computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and
computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel,
wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel.

9. The method of claim 8, wherein the second imaginary line is substantially perpendicular to the first imaginary line.

10. The method of claim 8, wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels.

11. The method of claim 8, wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel.

12. The method of claim 11, wherein computing to locate the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.

13. The method of claim 1, wherein the first imaginary line comprises a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, wherein a pixel located on the second line segment is determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment.

14. The method of claim 1, wherein providing information indicative of a center of the inner boundary comprises performing a Canny edge detection method using the eye image data.

15. The method of claim 1, further comprising:

extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and
processing the data of the portion to obtain a characteristic vector of an iris pattern.

16. The method of claim 15, wherein processing the data further comprises obtaining a plurality of characteristic vectors for the same eye image.

17. A method of processing iris image data, comprising:

providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary;
computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel;
computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel;
computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel;
computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and
using the first, second, third and fourth outer boundary pixels for further processing.

18. The method of claim 17, further comprising computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels.

19. The method of claim 17, wherein the first imaginary line is substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line is substantially perpendicular to the third and fourth imaginary lines.

20. The method of claim 17, wherein a pixel located on the first imaginary line is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.

Patent History
Publication number: 20080159600
Type: Application
Filed: Nov 1, 2007
Publication Date: Jul 3, 2008
Applicant: SENGA ADVISORS, LLC. (Boston, MA)
Inventor: Seong-Won Cho (Seoul)
Application Number: 11/933,752
Classifications
Current U.S. Class: Using A Characteristic Of The Eye (382/117); Using A Facial Characteristic (382/118)
International Classification: G06K 9/00 (20060101);