IRIS IMAGE DATA PROCESSING FOR USE WITH IRIS RECOGNITION SYSTEM
Disclosed is an iris recognition method, one of the biometric technologies. According to a non-contact-type human iris recognition method with correction of a rotated iris image, the iris image is acquired by image acquisition equipment using an infrared illuminator. Inner and outer boundaries of the iris are detected from the input eye image by applying a Canny edge detector and analyzing differences in pixel values of the image, so that the boundaries of the iris can be detected more accurately from the eye image of a user. Thus, an iris image with a variety of deformations can be processed into a correct iris image, so that the false acceptance rate and the false rejection rate can be markedly reduced.
This application is a continuation application of Application No. 10/656,921, which is a continuation application under 35 U.S.C. §365 (c) claiming the benefit of the filing date of PCT Application No. PCT/KR01/01302 designating the United States, filed Jul. 1, 2001. The PCT Application was published in English as WO 02/071316 A1 on Sep. 12, 2002, and claims the benefit of the earlier filing date of Korean Patent Application No. 2001/11441, filed Mar. 6, 2001. The contents of the Korean Patent Application No. 2001/11441, the international application No. PCT/KR01/01302 including the publication WO 02/071316 A1 and application Ser. No. 10/656,921 are incorporated herein by reference in their entirety.
BACKGROUND
1. Field
The present disclosure relates to processing iris image data, and more particularly, to a method of identifying an outer boundary of an iris image.
2. Discussion of the Related Art
An iris recognition system is an apparatus for verifying personal identity by distinguishing an individual's unique iris pattern. The iris recognition system is more accurate for personal identification than other biometric methods, such as voice or fingerprint recognition, and provides a high degree of security. The iris is the region of the eye between the pupil and the white sclera. The iris recognition method identifies individuals based on information obtained by analyzing their iris patterns, which differ from person to person.
Generally, the core technique of an iris recognition system is to acquire an accurate eye image using image acquisition equipment and to efficiently extract unique characteristic information on the iris from the input eye image.
However, in a non-contact type human iris recognition system, which acquires an iris image taken at a certain distance, iris images with a variety of deformations may be acquired in practice. That is, a complete eye image often cannot be acquired, since the eye is not necessarily directed toward the front face of a camera but may be positioned at a slight angle with respect to it. Thus, the acquired eye image may be rotated at an arbitrary angle with respect to a centerline of the iris.
The foregoing discussion in the background section is to provide general background information and does not constitute an admission of prior art.
SUMMARY
One aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center; computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel; computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.
In the foregoing method, computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels. The method may further comprise computing a first distance between the first inner boundary pixel and the first outer boundary pixel. The method may further comprise obtaining a distance between the first inner boundary pixel and the second inner boundary pixel. Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel. Computing the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel. The center of the outer boundary may be off the center of the inner boundary.
Still in the foregoing method, the method may further comprise: providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center; computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel. The method may further comprise: extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and processing the data of the portion to obtain a characteristic vector of an iris pattern. Processing the data may further comprise obtaining a plurality of characteristic vectors for the same eye image.
Another aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary; computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel; computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel; computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel; computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and using the first, second, third and fourth outer boundary pixels for further processing.
In the foregoing method, the method may further comprise computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels. The first imaginary line may be substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line may be substantially perpendicular to the third and fourth imaginary lines. A pixel located on the first imaginary line may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.
Further in the foregoing method, the first imaginary line may comprise a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, wherein a pixel located on the second line segment may be determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment. Providing information indicative of a center of the inner boundary may comprise performing a Canny edge detection method using the eye image data. The method may further comprise: extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and processing the data of the portion to obtain a characteristic vector of an iris pattern.
Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings.
IG(x, y)=G(x, y)×I(x, y) [Equation 1]
In a case where the boundary detecting method employing the Canny edge detector is used, even though a normal eye image is not acquired because the user's eye is not directed toward the front face of the camera but is positioned at a slight angle with respect to it, the inner boundary of the iris, i.e., the pupillary boundary, can be correctly detected, and the center coordinates and radius of the pupil can also be easily obtained.
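As a concrete illustration of Equation 1, the sketch below convolves an image I(x, y) with a normalized 2-D Gaussian kernel G(x, y), the usual smoothing step performed before Canny edge detection. The kernel size and sigma are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G(x, y)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def smooth(image, size=5, sigma=1.0):
    """Convolve the image I with the Gaussian G, giving IG (Equation 1)."""
    g = gaussian_kernel(size, sigma)
    h, w = image.shape
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * g)
    return out

# Smooth a synthetic noisy eye image.
eye = np.random.default_rng(0).uniform(0, 255, (32, 32))
ig = smooth(eye)
```

Smoothing suppresses pixel noise so that the gradient maxima found by the Canny detector correspond to true boundaries rather than sensor noise.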
On the other hand, the outer boundary of the iris in the image can be detected by comparing pixel values while proceeding upward, downward, leftward and rightward from the pupillary boundary, i.e., the inner boundary of the iris, and finding the maximum differences in the pixel values. The detected maximum values are Max{I(x, y)−I(x−1, y)}, Max{I(x, y)−I(x+1, y)}, Max{I(x, y)−I(x, y−1)}, and Max{I(x, y)−I(x, y+1)}, where I(x, y) is the pixel value of the image at the point (x, y). The differences in pixel values are obtained while proceeding upward, downward, leftward and rightward from the inner boundary so that the inner and outer centers can be set independently of each other. That is, when a slanted iris image is acquired, the pupil is located slightly upward, downward, leftward or rightward in the image, so the inner and outer centers can be set differently from each other.
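The four-direction search described above can be sketched as follows: walk outward from an inner-boundary pixel, record where the adjacent-pixel difference is largest, and take the bisectional point of opposite hits as the outer center. The synthetic eye image, its gray levels, and the search span are assumptions for illustration; the disclosure itself operates on real infrared eye images:

```python
import numpy as np

def find_outer_pixel(image, y0, x0, dy, dx, max_steps=200):
    """Walk from an inner-boundary pixel (y0, x0) in direction (dy, dx) and
    return the pixel where the adjacent-pixel difference is maximal."""
    h, w = image.shape
    best_step, best_diff = 1, -np.inf
    y, x = y0, x0
    for step in range(1, max_steps):
        ny, nx = y + dy, x + dx
        if not (0 <= ny < h and 0 <= nx < w):
            break
        diff = abs(float(image[ny, nx]) - float(image[y, x]))
        if diff > best_diff:
            best_diff, best_step = diff, step
        y, x = ny, nx
    return (y0 + dy * best_step, x0 + dx * best_step)

# Synthetic eye: pupil (value 20) to radius 8, iris (90) to radius 20, sclera (200).
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
img = np.where(r < 8, 20, np.where(r < 20, 90, 200)).astype(float)

# Scan leftward and rightward from the pupillary boundary on the horizontal line.
left = find_outer_pixel(img, 32, 32 - 8, 0, -1)
right = find_outer_pixel(img, 32, 32 + 8, 0, 1)
# The outer center is taken as the bisectional point of the two hits.
center_x = (left[1] + right[1]) / 2
```

Because the walk starts at the pupillary boundary rather than the pupil center, the left and right (or up and down) hits need not be symmetric, which is precisely what allows the outer center to differ from the inner center for a slanted image.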
At step 130, iris patterns are detected only over predetermined portions of the distance from the inner boundary to the outer boundary. At step 140, the detected iris pattern is converted into an iris image in polar coordinates. At step 150, the converted iris image in polar coordinates is normalized to obtain an image having predetermined width and height.
The conversion of the extracted iris patterns into the iris image in the polar coordinates can be expressed as the following Equation 3.
I(x(r, θ), y(r, θ))=>I(r, θ) [Equation 3]
where θ is increased in steps of 0.8 degrees, and r is calculated using the cosine rule from the distance between the outer center CO and the inner center CI of the iris, the radius RO of the outer boundary, and the value of θ. The iris patterns between the inner and outer boundaries of the iris are extracted using r and θ. In order to avoid changes in the features of the iris with variations in the size of the pupil, the iris image between the inner and outer boundaries is divided into 60 segments in the radial direction, and θ is varied by 0.8 degrees to yield 450 samples, so that the iris image is finally normalized into 27,000 segments (θ×r = 450×60).
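A minimal sketch of the polar conversion of Equation 3 and the 450×60 normalization might look like the following, under the simplifying assumption that the inner and outer boundaries share a single center (the disclosure handles distinct centers by computing r with the cosine rule at each θ):

```python
import numpy as np

def to_polar(image, center, r_in, r_out, n_theta=450, n_r=60):
    """Sample the iris annulus into an n_theta x n_r polar image (Equation 3).
    Simplifying assumption: inner and outer boundaries share one center."""
    cy, cx = center
    thetas = np.deg2rad(np.arange(n_theta) * 0.8)  # 450 steps of 0.8 = 360 degrees
    radii = np.linspace(r_in, r_out, n_r, endpoint=False)
    polar = np.empty((n_theta, n_r), dtype=float)
    for i, t in enumerate(thetas):
        for j, rad in enumerate(radii):
            y = int(round(cy + rad * np.sin(t)))
            x = int(round(cx + rad * np.cos(t)))
            polar[i, j] = image[y, x]
    return polar

# Synthetic eye: pupil (20) to radius 8, iris (90) to radius 20, sclera (200).
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
img = np.where(r < 8, 20, np.where(r < 20, 90, 200)).astype(float)
polar = to_polar(img, (32, 32), r_in=8, r_out=20)
```

Because the radial axis is always resampled to 60 segments between the two boundaries, the representation stays the same size regardless of how dilated the pupil is, which is the point of the normalization.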
For reference, the performance of an iris recognition system is evaluated by two factors: the false acceptance rate (FAR) and the false rejection rate (FRR). The FAR is the probability that the system incorrectly identifies an impostor as an enrollee and thus admits the impostor, and the FRR is the probability that the system incorrectly identifies an enrollee as an impostor and thus rejects the enrollee. According to one embodiment of the present invention, when pre-processing employs the boundary-detecting method and the normalization of the slanted iris image described above, the FAR was reduced from 5.5% to 2.83% and the FRR was reduced from 5.0% to 2.0% compared with an iris recognition system employing a conventional method for detecting the boundaries of the iris.
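For clarity, the two error rates can be computed from comparison scores as below. The similarity-score convention (higher means more similar, accept at or above a threshold) and the sample scores are assumptions for illustration only:

```python
import numpy as np

def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR: fraction of impostor comparisons accepted (score >= threshold).
       FRR: fraction of genuine comparisons rejected (score < threshold)."""
    impostor_scores = np.asarray(impostor_scores)
    genuine_scores = np.asarray(genuine_scores)
    far = np.mean(impostor_scores >= threshold)
    frr = np.mean(genuine_scores < threshold)
    return far, frr

# Hypothetical similarity scores from impostor and genuine comparisons.
far, frr = far_frr([0.1, 0.4, 0.7, 0.2], [0.9, 0.8, 0.3, 0.95], threshold=0.5)
```

Raising the threshold trades FAR for FRR, so the two rates are always reported together at a stated operating point.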
Finally, at step 160, if the iris in the acquired eye image has been rotated at an arbitrary angle with respect to a centerline of the iris, the arrays of pixels of the iris image information are moved and compared in order to correct the rotated iris image.
In order to generate characteristic vectors of the iris corresponding to the plurality of arrays of iris image that have been temporarily generated, wavelet transform is performed. The respective characteristic vectors generated by the wavelet transform are compared with previously registered characteristic vectors to obtain similarities. A characteristic vector corresponding to the maximum similarity among the obtained similarities is accepted as the characteristic vector of the user.
In other words, the arrays Array(n) of image information on the rotated image are generated as mentioned above, and the wavelet transform is performed for each of the arrays of image information; the characteristic vector with the maximum similarity to the registered characteristic vectors is then selected.
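The rotation correction of step 160 can be sketched as circular shifts of the angular axis of the polar iris image: a rotation of the eye by k×0.8 degrees corresponds to shifting the θ axis by k positions. The disclosure compares wavelet characteristic vectors of the shifted arrays; as a simplified stand-in, this sketch compares the polar arrays directly with an assumed similarity measure (negative mean absolute difference):

```python
import numpy as np

def best_rotation(polar_probe, polar_enrolled, max_shift=10):
    """Circularly shift the probe's angular axis and keep the shift with the
    highest similarity to the enrolled polar image."""
    best_shift, best_sim = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(polar_probe, s, axis=0)   # axis 0 is the theta axis
        sim = -np.mean(np.abs(shifted - polar_enrolled))
        if sim > best_sim:
            best_sim, best_shift = sim, s
    return best_shift

# Probe rotated by 5 angular steps (5 * 0.8 = 4 degrees) relative to enrollment.
rng = np.random.default_rng(1)
enrolled = rng.uniform(0, 1, (450, 60))
probe = np.roll(enrolled, 5, axis=0)
shift = best_rotation(probe, enrolled)
```

Taking the maximum similarity over all candidate shifts is what makes the recognition tolerant of the arbitrary in-plane rotation described above.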
As described above, according to the non-contact type human iris recognition method with correction of the rotated iris image of one embodiment of the present invention, there is an advantage in that, by detecting the inner and outer boundaries of the iris using a Canny edge detector and the differences in pixel values of the image, the boundaries of the iris can be detected more accurately from the eye image of the user.
Furthermore, according to the non-contact type human iris recognition method of one embodiment of the present invention, if the iris in the eye image acquired by the image acquisition equipment has been rotated at an arbitrary angle with respect to the centerline of the iris, the rotated image is corrected into a normal iris image. In addition, if a lower portion of the converted iris image in the polar coordinates is curved and thus has an irregular shape due to acquisition of a slanted iris image, the iris image is normalized to predetermined dimensions. Thus, there is another advantage in that an iris image with a variety of deformations is processed into data on a correct iris image, so that the false acceptance rate and the false rejection rate are markedly reduced.
It should be noted that the above description exemplifies the non-contact type human iris recognition method by the correction of the rotated iris image according to embodiments of the present invention, and the present invention is not limited thereto. A person skilled in the art can make various modifications and changes to the embodiments of the present invention without departing from the technical spirit and scope of the present invention defined by the appended claims.
Claims
1. A method of processing iris image data, comprising:
- providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
- providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center;
- computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel;
- computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and
- computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.
2. The method of claim 1, wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels.
3. The method of claim 1, further comprising computing a first distance between the first inner boundary pixel and the first outer boundary pixel.
4. The method of claim 1, further comprising obtaining a distance between the first inner boundary pixel and the second inner boundary pixel.
5. The method of claim 1, wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel.
6. The method of claim 5, wherein computing the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.
7. The method of claim 1, wherein the center of the outer boundary is off the center of the inner boundary.
8. The method of claim 1, further comprising:
- providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center;
- computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and
- computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel,
- wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel.
9. The method of claim 8, wherein the second imaginary line is substantially perpendicular to the first imaginary line.
10. The method of claim 8, wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels.
11. The method of claim 8, wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel.
12. The method of claim 11, wherein computing to locate the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.
13. The method of claim 1, wherein the first imaginary line comprises a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, wherein a pixel located on the second line segment is determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment.
14. The method of claim 1, wherein providing information indicative of a center of the inner boundary comprises performing a Canny edge detection method using the eye image data.
15. The method of claim 1, further comprising:
- extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and
- processing the data of the portion to obtain a characteristic vector of an iris pattern.
16. The method of claim 15, wherein processing the data further comprises obtaining a plurality of characteristic vectors for the same eye image.
17. A method of processing iris image data, comprising:
- providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
- providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary;
- computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel;
- computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel;
- computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel;
- computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and
- using the first, second, third and fourth outer boundary pixels for further processing.
18. The method of claim 17, further comprising computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels.
19. The method of claim 17, wherein the first imaginary line is substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line is substantially perpendicular to the third and fourth imaginary lines.
20. The method of claim 17, wherein a pixel located on the first imaginary line is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.
Type: Application
Filed: Nov 1, 2007
Publication Date: Jul 3, 2008
Applicant: SENGA ADVISORS, LLC. (Boston, MA)
Inventor: Seong-Won Cho (Seoul)
Application Number: 11/933,752
International Classification: G06K 9/00 (20060101);