Method and component for image recognition
A method and system for image recognition in a collection of digital images includes training image classifiers and retrieving a sub-set of images from the collection. For each image in the collection, any regions within the image that correspond to a face are identified. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region. At least one reference region including a face to be recognized is selected from an image. At least one classifier on which said retrieval is to be based is selected from the image classifiers. A respective feature vector for each selected classifier is determined for the reference region. The sub-set of images is retrieved from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.
This application is a Continuation of U.S. patent application Ser. No. 11/027,001, filed Dec. 29, 2004, now U.S. Pat. No. 7,715,597; which is hereby incorporated by reference.
FIELD OF THE INVENTION
The invention relates to a method and component for image recognition in a collection of digital images. In particular the invention provides improved image sorting, image retrieval, pattern recognition and pattern combination methods associated with image recognition.
DESCRIPTION OF THE RELATED ART
A useful review of face detection is provided by Yang et al., in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, pages 34-58, January 2002. A review of face recognition techniques is given in Zhang et al., Proceedings of the IEEE, Vol. 85, No. 9, pages 1423-1435, September 1997.
US Application No. 2003/0210808 to Chen et al describes a method of organizing images of human faces in digital images into clusters comprising the steps of locating face regions using a face detector, extracting and normalizing the located face regions and then forming clusters of said face regions, each cluster representing an individual person.
U.S. Pat. No. 6,246,790 to Huang et al discloses image indexing using a colour correlogram technique. A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database.
U.S. Pat. No. 6,430,312 also to Huang et al discloses distinguishing objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram. Many other techniques for colour pattern matching are described in the prior art.
In “Face annotation for family photo album management” to Chen et al., published in the International Journal of Image and Graphics, Vol. 3, No. 1 (2003), techniques including the colour correlogram are employed to match persons within an image collection and to facilitate the annotation of images based on said matching. Chen et al. select a single colour region around a person and use a combination of multiple colour pattern matching methods to improve the accuracy of the annotation process.
US 2002/0136433 to Lin et al describes an adaptive face recognition system and method. The system includes a database configured to store a plurality of face classes; an image capturing system for capturing images; a detection system, wherein the detection system detects face images by comparing captured images with a generic face image; a search engine for determining if a detected face image belongs to one of a plurality of known face classes; and a system for generating a new face class for the detected face image if the search engine determines that the detected face image does not belong to one of the known face classes. In the event that the search engine determines that the detected face image belongs to one of the known face classes, an adaptive training system adds the detected face to the associated face class.
In the field of multi-classifier pattern matching, U.S. Pat. No. 6,567,775 to Maali et al discloses a method for identifying a speaker in an audio-video source using both audio and video information. An audio-based speaker identification system identifies one or more potential speakers for a given segment using an enrolled speaker database. A video-based speaker identification system identifies one or more potential speakers for a given segment using a face detector/recognizer and an enrolled face database. An audio-video decision fusion process evaluates the individuals identified by the audio-based and video-based speaker identification systems and determines the speaker of an utterance. A linear variation is imposed on the ranked-lists produced using the audio and video information.
The decision fusion scheme of Maali is based on a linear combination of the audio and the video ranked-lists. The line with the higher slope is assumed to convey more discriminative information. The normalized slopes of the two lines are used as the weights of the respective results when combining the scores from the audio-based and video-based speaker analysis. In this manner, the weights are derived from the data itself, but it is assumed that the ranks and the scores for each method vary linearly, i.e., that they are points on a line and the equation of that line is estimated.
SUMMARY OF THE INVENTION
According to the present invention there is provided a method for image recognition in a collection of digital images that includes training image classifiers and retrieving a sub-set of images from the collection. A system is also provided including a training module and image retrieval module.
The training of the image classifiers preferably includes the following: For each image in the collection, any regions within the image that correspond to a face are identified. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region.
The retrieval of the sub-set of images from the collection preferably includes the following: At least one reference region including a face to be recognized is selected from an image. At least one classifier on which said retrieval is to be based is selected from the image classifiers. A respective feature vector for each selected classifier is determined for the reference region. The sub-set of images is retrieved from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.
A component for image recognition in a collection of digital images is further provided including a training module for training image classifiers and a retrieval module for retrieving a sub-set of images from the collection.
The training module is preferably configured according to the following: For each image in the collection, any regions are identified in the image that correspond to a face. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region.
The retrieval module is preferably configured according to the following: At least one reference region including a face to be recognized is selected from an image. At least one image classifier is selected on which the retrieval is to be based. A respective feature vector is determined for each selected classifier of the reference region. A sub-set of images is selected from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.
In a further aspect there is provided a corresponding component for image recognition.
In the embodiment, the training process cycles automatically through each image in an image collection, employing a face detector to determine the location of face regions within an image. It then extracts and normalizes these regions and associated non-face peripheral regions which are indicative of, for example, the hair, clothing and/or pose of the person associated with the determined face region(s). Initial training data is used to determine a basis vector set for each face classifier.
A basis vector set comprises a selected set of attributes and reference values for these attributes for a particular classifier. For example, for a DCT classifier, a basis vector could comprise a selected set of frequencies by which selected image regions are best characterised for future matching and/or discrimination and a reference value for each frequency. For other classifiers, the reference value can simply be the origin (zero value) within a vector space.
Next for each determined, extracted and normalized face region at least one feature vector is generated for at least one face-region based classifier and where an associated non-face region is available, at least one further feature vector is generated for a respective non-face region based classifier.
A feature vector can be thought of as an identified region's coordinates within the basis vector space relative to the reference value.
These data are then associated with the relevant image and face/peripheral region and are stored for future reference.
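As a concrete illustration, the following Python sketch (the basis dimensions and region data are hypothetical) shows how a region's extracted features can be projected onto a classifier's basis vector set, relative to the reference value, to yield such a feature vector:

```python
import numpy as np

def to_feature_vector(raw_features, basis_vectors, reference):
    """Express a region's raw features as coordinates in the classifier's
    basis vector space, measured relative to the reference value
    (e.g. a mean face, or simply the origin)."""
    return basis_vectors @ (raw_features - reference)

# Hypothetical usage: 8 basis vectors over a 1024-element raw feature.
rng = np.random.default_rng(0)
basis = rng.standard_normal((8, 1024))       # basis vector set (from training)
reference = rng.standard_normal(1024)        # reference value for the classifier
region_features = rng.standard_normal(1024)  # extracted, normalized region data
feature_vector = to_feature_vector(region_features, basis, reference)
print(feature_vector.shape)                  # (8,)
```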
In the embodiment, image retrieval may either employ a user selected face region or may automatically determine and select face regions in a newly acquired image for comparing with other face regions within the selected image collection. Once at least one face region has been selected, the retrieval process determines (or if the image was previously “trained”, loads) feature vectors associated with at least one face-based classifier and at least one non-face based classifier. A comparison between the selected face region and all other face regions in the current image collection will next yield a set of distance measures for each classifier. Further, while calculating this set of distance measures, mean and variance values associated with the statistical distribution of the distance measures for each classifier are calculated. Finally these distance measures are preferably normalized using the mean and variance data for each classifier and are summed to provide a combined distance measure which is used to generate a final ranked similarity list.
In the preferred embodiment, the classifiers include a combination of wavelet domain PCA (principal component analysis) classifier and 2D-DCT (discrete cosine transform) classifier for recognising face regions.
These classifiers do not require a training stage for each new image that is added to an image collection. In contrast, techniques such as ICA (independent component analysis) or the Fisher Face technique, which employs LDA (linear discriminant analysis), are well-known face recognition techniques which adjust the basis vectors during a training stage to cluster similar images and optimize the separation of these clusters.
The combination of these classifiers is robust to changes in face pose, illumination, facial expression, and image quality and focus (sharpness).
PCA (principal component analysis) is also known as the eigenface method. A summary of conventional techniques that utilize this method is found in “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, 3(1), 1991, to Turk et al., which is hereby incorporated by reference. This method is sensitive to facial expression, small degrees of rotation and different illuminations. In the preferred embodiment, high frequency components from the image that are responsible for slight changes in face appearance are filtered out. Features obtained from low-pass filtered sub-bands of the wavelet decomposition are significantly more robust to facial expression, small degrees of rotation and different illuminations than conventional PCA.
In general, the steps involved in implementing the PCA/Wavelet technique include: (i) the extracted, normalized face region is transformed into gray scale; (ii) wavelet decomposition is applied using Daubechies wavelets; (iii) histogram equalization is performed on the grayscale LL sub-band representation; (iv) the mean LL sub-band is calculated and subtracted from all faces; and (v) the 1st level LL sub-band is used for calculating the covariance matrix and the principal components (eigenvectors). The resulting eigenvectors (basis vector set) and the mean face are stored in a file after training so they can be used in determining the principal components for the feature vectors for detected face regions. Alternative embodiments may be discerned from the discussion in H. Lai, P. C. Yuen, and G. C. Feng, “Face recognition using holistic Fourier invariant features,” Pattern Recognition, vol. 34, pp. 95-109, 2001, which is hereby incorporated by reference.
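By way of illustration only, the following Python sketch (using numpy and the PyWavelets package) follows steps (i) to (v) above; the face data, image size and number of retained components are placeholder assumptions:

```python
import numpy as np
import pywt

def ll_subband(gray_face):
    """Steps (ii)-(iii): 1st-level Daubechies decomposition, keeping the
    histogram-equalized LL sub-band."""
    ll, _ = pywt.dwt2(gray_face.astype(float), 'db2')
    hist, bins = np.histogram(ll.ravel(), bins=256)
    cdf = hist.cumsum() / hist.sum()                  # empirical CDF
    return np.interp(ll.ravel(), bins[:-1], cdf).reshape(ll.shape)

def train_pca(gray_faces, n_components=8):
    """Steps (iv)-(v): subtract the mean LL sub-band, then compute the
    principal components (eigenvectors of the covariance matrix)."""
    X = np.stack([ll_subband(f).ravel() for f in gray_faces])
    mean_face = X.mean(axis=0)                        # step (iv)
    cov = np.cov(X - mean_face, rowvar=False)         # step (v)
    vals, vecs = np.linalg.eigh(cov)
    basis = vecs[:, np.argsort(vals)[::-1][:n_components]].T
    return mean_face, basis          # stored in a file after training

# Faces are assumed already detected, normalized and grayscale (step (i)).
faces = [np.random.rand(64, 64) for _ in range(10)]
mean_face, basis = train_pca(faces)
feature = basis @ (ll_subband(faces[0]).ravel() - mean_face)
```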
In the 2D Discrete Cosine Transform classifier, the spectrum for the DCT transform of the face region can be further processed to obtain more robustness (see also, “Application of the DCT Energy Histogram for Face Recognition,” in Proceedings of the 2nd International Conference on Information Technology for Application (ICITA 2004) to Tjahyadi et al., hereby incorporated by reference).
The steps involved in this technique are generally as follows: (i) the resized face is transformed to an indexed image using a 256 color gif colormap; (ii) the 2D DCT transform is applied; (iii) the resulting spectrum is used for classification; and (iv) the Euclidean distance is used for comparing similarity between DCT spectra.
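A minimal Python sketch of these steps follows; an adaptive 256-color palette stands in for the gif colormap of step (i), and the face arrays are placeholders:

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

def dct_feature(face_rgb):
    """Steps (i)-(iii): index the resized face with a 256-color palette,
    apply the 2D DCT and use the spectrum as the feature vector."""
    im = Image.fromarray(face_rgb).convert('P', palette=Image.ADAPTIVE,
                                           colors=256)          # step (i)
    spectrum = dctn(np.asarray(im, dtype=float), norm='ortho')  # step (ii)
    return spectrum.ravel()                                     # step (iii)

def dct_distance(f1, f2):
    """Step (iv): Euclidean distance between two DCT spectra."""
    return float(np.linalg.norm(f1 - f2))

face_a = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
face_b = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(dct_distance(dct_feature(face_a), dct_feature(face_b)))
```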
Examples of non-face based classifiers are based on color histogram, color moment, colour correlogram, banded colour correlogram, and wavelet texture analysis techniques. An implementation of the color histogram is described in “CBIR method based on color-spatial feature,” IEEE Region 10th Ann. Int. Conf. 1999 (TENCON'99, Cheju, Korea, 1999). Use of the colour histogram is, however, typically restricted to classification based on the color information contained within sub-regions of the image.
Color moment may be used to avoid the quantization effects which are found when using the color histogram as a classifier (see also “Similarity of color images,” SPIE Proc. pp. 2420 (1995) to Stricker et al, hereby incorporated by reference). The first three moments (mean, standard deviation and skewness) are extracted from the three color channels and therefore form a 9-dimensional feature vector.
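For example, a direct numpy/scipy rendering of the 9-dimensional color moment vector might look like this:

```python
import numpy as np
from scipy.stats import skew

def color_moments(region_rgb):
    """Mean, standard deviation and skewness for each of the three color
    channels, concatenated into a 9-dimensional feature vector."""
    pixels = region_rgb.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0),
                           pixels.std(axis=0),
                           skew(pixels, axis=0)])

region = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
print(color_moments(region).shape)  # (9,)
```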
The colour auto-correlogram (see, U.S. Pat. No. 6,246,790 to Huang et al, hereby incorporated by reference) provides an image analysis technique that is based on a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. It is effective in combining the color and texture features together in a single classifier (see also, “Image indexing using color correlograms,” In IEEE Conf. Computer Vision and Pattern Recognition, pp. 762 et seq. (1997) to Huang et al., hereby incorporated by reference).
In the preferred embodiment, the color correlogram is implemented by transforming the image from RGB color space, and reducing the image colour map using dithering techniques based on minimum variance quantization. Variations and alternative embodiments may be discerned from “Variance based color image quantization for frame buffer display,” Color Res. Applicat., vol. 15, no. 1, pp. 52-58, 1990, by Wan et al., which is hereby incorporated by reference. Reduced colour maps of 16, 64, or 256 colors are achievable. For 16 colors the vga colormap may be used and for 64 and 256 colors, a gif colormap may be used. A maximum distance set D = {1, 3, 5, 7} may be used for computing the auto-correlogram to build an N×D-dimensional feature vector, where N is the number of colors and D is the maximum distance.
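The following deliberately naive Python sketch illustrates the idea. Uniform quantization stands in for the minimum variance quantization with dithering described above, and only axis-aligned pixel pairs are sampled, so this is an approximation; the fast algorithm mentioned below is what a practical implementation would use:

```python
import numpy as np

def quantize(rgb, levels=4):
    """Reduce the colour map to levels**3 colors (64 by default) by uniform
    quantization; a stand-in for minimum variance quantization."""
    q = (rgb.astype(int) * levels) // 256
    return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

def auto_correlogram(rgb, distances=(1, 3, 5, 7), levels=4):
    """N x D feature vector: for each color c and distance d, the estimated
    probability that a pixel at offset d from a c-colored pixel is also c.
    Only horizontal and vertical offsets are sampled in this sketch."""
    img = quantize(rgb, levels)
    n_colors = levels ** 3
    feats = np.zeros((n_colors, len(distances)))
    for j, d in enumerate(distances):
        total = np.zeros(n_colors)
        same = np.zeros(n_colors)
        for base, nbr in ((img[:, :-d], img[:, d:]),
                          (img[:, d:], img[:, :-d]),
                          (img[:-d, :], img[d:, :]),
                          (img[d:, :], img[:-d, :])):
            b, n = base.ravel(), nbr.ravel()
            total += np.bincount(b, minlength=n_colors)
            same += np.bincount(b[b == n], minlength=n_colors)
        feats[:, j] = same / np.maximum(total, 1)
    return feats.ravel()  # N*D-dimensional feature vector

region = (np.random.rand(48, 48, 3) * 255).astype(np.uint8)
print(auto_correlogram(region).shape)  # (256,): 64 colors x 4 distances
```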
The color autocorrelogram and banded correlogram may be calculated using a fast algorithm (see, e.g., “Image Indexing Using Color Correlograms” from the Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97) to Huang et al., hereby incorporated by reference).
Wavelet texture analysis techniques (see, e.g., “Texture analysis and classification with tree-structured wavelet transform,” IEEE Trans. Image Processing 2(4), 429 (1993) to Chang et al., hereby incorporated by reference) may also be advantageously used. In order to extract the wavelet based texture, the original image is decomposed into 10 de-correlated sub-bands through a 3-level wavelet transform. In each sub-band, the standard deviation of the wavelet coefficients is extracted, resulting in a 10-dimensional feature vector.
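A short sketch of this 10-dimensional texture vector, using PyWavelets for the 3-level decomposition:

```python
import numpy as np
import pywt

def wavelet_texture(gray):
    """Standard deviation of the wavelet coefficients in each of the
    10 sub-bands of a 3-level decomposition (1 approximation sub-band
    plus 3 detail sub-bands per level)."""
    coeffs = pywt.wavedec2(gray.astype(float), 'db1', level=3)
    bands = [coeffs[0]]                  # approximation sub-band
    for detail in coeffs[1:]:            # (horizontal, vertical, diagonal)
        bands.extend(detail)
    return np.array([b.std() for b in bands])

print(wavelet_texture(np.random.rand(64, 64)).shape)  # (10,)
```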
- The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the Patent and Trademark Office upon request and payment of the necessary fee.
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
The main preferred embodiment of the present invention will be described in relation to
A second preferred embodiment provides an implementation within an embedded imaging appliance such as a digital camera.
Main Embodiment: Software Modules on a Desktop Computer
In this principal embodiment, the present invention is described in the context of a desktop computer environment and may either be run as a stand-alone program, or alternatively may be integrated into existing applications or operating system (OS) components to improve their functionality.
1. Main Image Analysis Module 156
This module cycles through a set of images 170-1 . . . 180-2 and determines, extracts, normalizes and analyzes face regions and associated peripheral regions to determine feature vectors for a plurality of face and non-face classifiers. The module then records this extracted information in an image data set record. The operation of the module is next described in
Face region normalization techniques can range from a simple re-sizing of a face region to more sophisticated 2D rotational and affine transformation techniques and to highly sophisticated 3D face modeling methods.
Both the face region and a full body region may also be employed for color/texture analysis and can be used as additional classifiers for the sorting/retrieval process (see also Chen et al in “Face annotation for family photo album management”, published in the International Journal of Image and Graphics Vol. 3, No. 1 (2003), hereby incorporated by reference).
Other examples of associated peripheral regions are given in
Returning to
In
We also remark that if a face region is near the edge of an image it may not be possible to properly define peripheral regions such as the body region or the hair region [216]. In this case a flag is modified in the image data record to indicate this. During the sorting/retrieval process (described later), if the user selects a search method which includes body or hair regions then the faces without those regions are either not considered in the search or are given statistically determined maximal feature vector values for these regions during the classification process.
2. Image Collection Training Process
Before the modules 162 can perform their main function of image sorting and retrieval, it is first necessary to initiate a training process on an image collection. In this principal embodiment we will assume that an exemplary image collection is a set of images contained within a subdirectory of the file system on a desktop PC. Thus, when a process controlling the modules 162 is active and a user switches into a subdirectory containing images, the module 156 must load this new image collection and determine firstly if there are images which have not contributed to the training process and secondly if the number of such unutilized images warrants a full retraining of the image collection or if, alternatively, an incremental training process can be successfully employed.
In this preferred embodiment all of the face and non-face recognition techniques employed can be combined linearly which allows incremental training even for quite large additional subsets of new images which are added to a previously trained main image collection. However the present invention does not preclude the use of alternative face or non-face recognition methods which may not support linear combination, or may only support such combinations over small incremental steps. If it is determined that incremental training is possible then the training mode determination step exits to the incremental training step [110] which is further described in
A system in accordance with a preferred embodiment represents an improvement over the system described in US published application number 2002/0136433 to Lin, which is hereby incorporated by reference, and which describes an “adaptive facial recognition system”. The approach described by Lin requires the determination of feature vectors based on a fixed set of basis vectors and a “generic” or “mean” face previously determined through offline training. The present invention allows for incremental retraining based on the automatic determination of face regions within newly acquired images or sets of such images.
A further improvement is that the facial regions determined and normalized by the module 156 are preferably re-utilized in subsequent re-training operations. As the automated determination of valid face regions within an image and the normalization of such regions is the most time-consuming part of the training process—typically representing 90-95% of the time required for training a typical image collection—this means that subsequent combining of several image collections into a “super-collection” and re-training of this “super-collection” can be achieved with a substantially reduced time lag.
2.1 Full Training Mode Workflow
In full training mode, it may not be possible to complete all steps in the feature vector extraction process [220] in the main image analysis module [200], because the relevant basis vector set may not yet be determined. In the preferred embodiment, the Wavelet/PCA classifier steps [220-2b] cannot be completed until all images have been analyzed. Two alternatives are as follows. First, the main image analysis module may be called a second time to repeat those steps [220-2b] which could not be completed on the first pass. Second, the incomplete feature vector extraction steps may be performed externally to the main image analysis module.
The latter case has been illustrated in
2.2 Incremental Training Mode Workflow
Normally an image collection will only need to go through this (automated) full training procedure once. After initial training, it will normally be possible to add and analyze new images using the determined basis vector set for the classifier, for example, PCA. When a larger subset of new images is added to a collection, in the case of PCA/Wavelet face recognition classifiers, it will generally be possible to incrementally modify existing basis vectors by only training the newly added image subset and subsequently modifying the existing location of the mean face and the previously determined basis vector set.
It begins with a determination from the workflow of
Note that if the size of the new image subset (plus any previous subsets which were unused for training (and marked accordingly)) is small relative to the size of the main image collection (say <10%) then these steps may optionally be deferred [244] and the images in the image subset are temporarily marked as “unused for training” [246]. Subsequently when a larger set of images is available, the incremental training module will take all of these images marked as “unused for training” and perform incremental training using a larger combined image superset. In that case the next step is to calculate the incremental change in the previously determined mean face location which will be produced by combining the new image (super)set with the previously determined training data [234a]. Once the new mean face location is determined, the incremental changes in the basis vector set for this classifier should next be determined [236a].
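A minimal sketch of the mean-face update of step [234a] follows; the weighted-average form and the norm used to measure the incremental change are assumptions consistent with the linear combination property discussed above:

```python
import numpy as np

def incremental_mean_face(old_mean, n_old, new_faces):
    """Shift the previously determined mean face by the weighted
    contribution of the newly added image (super)set, and report the size
    of the incremental change for the threshold test [250]."""
    new_faces = np.asarray(new_faces, dtype=float)
    new_mean = ((n_old * old_mean + new_faces.sum(axis=0))
                / (n_old + len(new_faces)))
    delta = float(np.linalg.norm(new_mean - old_mean))
    return new_mean, delta
```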
If either incremental change is greater than a predetermined threshold [250] and further illustrated [502, 505] in
If these incremental changes are less than their predetermined thresholds, then the effects of completing incremental training will be minimal and it does not make sense to do so. In this case the current subset is marked as “unused for training” and the determined incremental changes are also recorded in the global collection data set [252], which is further described in
In a variation on the above workflow the determining of step [244] can be limited to the current subset (i.e. no account is taken of additional subsets which were not yet used in training) and the additional set of steps marked “alternative” can be used. In this case, if the incremental change determined from the current subset is below the predetermined threshold, then the workflow moves to block [251] which determines if additional unused image subsets are available. If this is not the case the workflow continues as before, moving to step [252]. However, when additional subsets are available these are combined with the current image subset and the combined incremental change in mean face is determined [234b] followed by a determination of the combined incremental change in the basis vector set for this classifier [236b]. The workflow next returns to the determining step [250], repeating the previous analysis for the combined image superset comprising the current image subset and any previously unused image subsets. In this manner the incremental training module can reduce the need for retraining except when it will significantly affect the recognition process.
In other embodiments, it may be desirable to combine previously trained image collections into a “super-collection” comprising at least two such collections. In this case it is desirable to re-use image collection data which is fixed, i.e. data which is not dependent on the actual set of images. In particular this includes the determined locations of face/peripheral regions within each image and the normalization data pertaining to each such predetermined face/peripheral region. The determination and normalization of such regions is, typically, very time consuming for a consumer image collection, taking 90-95% of the time required by the training process. For a collection of several hundred images, with an average size of 3 megapixels, this can take of the order of tens of minutes, whereas the actual training engines which extract classifier data from the determined face regions will normally require of the order of several seconds per training engine.
In particular, this makes a system in accordance with a preferred embodiment suitable for use with interactive image browsing software which in turn employs the modules 162. Through a user interface, the user selects different groups of images, for example, through interaction with a folder structure, either by selecting one or more folders, each containing images, or by selecting groups of images within a folder. As these images will have been incrementally added to the storage source (local 170 or remote 180) which the user is accessing, it is likely that face and non-face region information will already have been detected and determined by the module 156 or another copy running remotely. The user can select a candidate region within an image and then selectively determine which types of classifiers are to be used for sorting and retrieving images from the selected groups of images. Generating basis vector and/or feature vector information for all images within the selected group of images, as well as for the candidate region, prior to sorting/retrieval can then be performed relatively quickly and in line with user response expectations of an interactive application.
A modified variant of the main image analysis module [286], suitable for use in such an embodiment is illustrated in
The remainder of the analysis process is similar to that described in the main image analysis module of
3. Image Sorting and Retrieval
Now that the training process for an image collection has been described, we consider how the image sorting/retrieval module functions.
3.1 Image Selection Process
3.2 Main Image Sorting/Retrieval Process
The workflow for this module is described in
After a reference region comprising the face and/or peripheral regions to be used in the retrieval process is selected (or determined automatically) the main retrieval process is initiated [310] either by user interaction or automatically in the case where search criteria are determined automatically from a configuration file. The main retrieval process is described in step [312] and comprises three main sub-processes which are iteratively performed for each classifier to be used in the sorting/retrieval process:
- (i) Distances are calculated in the current classifier space between the feature vector for the reference region and corresponding feature vector(s) for the face/peripheral regions for all images in the image collection to be searched [312-1]. In the preferred embodiment, the Euclidean distance is used to calculate these distances which serve as a measure of similarity between the reference region and face/peripheral regions in the image collection.
- (ii) The statistical mean and standard deviation of the distribution of these calculated distances is determined and stored temporarily [312-2].
- (iii) The determined distances between the reference region and the face/peripheral regions in the image collection are next normalized [312-3] using the mean and standard deviation determined in step [312-2].
These normalized data sets may now be combined in a decision fusion process [314] which generates a ranked output list of images. These may then be displayed by a UI module [316].
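The following Python sketch illustrates sub-processes (i) to (iii) and the summation-based decision fusion [314]; the classifier names and dimensions are hypothetical:

```python
import numpy as np

def rank_images(ref_vectors, collection_vectors):
    """Per-classifier Euclidean distances [312-1], statistical
    normalization [312-2, 312-3], and summation into a combined distance
    [314] used to produce the ranked output list.

    ref_vectors:        {classifier: feature vector of the reference region}
    collection_vectors: {classifier: (n_regions, dim) array of stored vectors}
    """
    combined = None
    for name, ref in ref_vectors.items():
        vecs = collection_vectors[name]
        d = np.linalg.norm(vecs - ref, axis=1)              # step (i)
        mean, std = d.mean(), d.std()                       # step (ii)
        d_norm = (d - mean) / std if std > 0 else d - mean  # step (iii)
        combined = d_norm if combined is None else combined + d_norm
    return np.argsort(combined)          # ranked similarity list

rng = np.random.default_rng(1)
refs = {'wavelet_pca': rng.standard_normal(8), 'dct': rng.standard_normal(50)}
coll = {'wavelet_pca': rng.standard_normal((100, 8)),
        'dct': rng.standard_normal((100, 50))}
print(rank_images(refs, coll)[:5])       # the five closest face regions
```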
An additional perspective on the process steps [312-1, 312-2 and 312-3] is given in
The result of performing step [312-1] on the classifier space of
We remark that the distances from the feature vector for the reference region [504-2a] and [509-2a] to the feature vectors for all other face regions in
4. Methods for Combining Classifier Similarity Measures
4.1 Statistical Normalization Method
The process is described for a set of multiple classifiers, C1, C2 . . . CN and is based on a statistical determination of the distribution of the distances of all patterns relevant to the current classifier (face or peripheral regions in our embodiment) from the selected reference region. For most classifiers, this statistical analysis typically yields a normal distribution with a mean value MCn and a variance VCn as shown in
The combining of classifier similarity ranking measures (or, distances) is then determined by normalizing each classifier by this determined mean similarity ranking measure (distance) for that classifier, based on the reference region.
Thus the combined similarity ranking measure can now be determined quite simply as:
Dtot = D1/MC1 + D2/MC2 + . . . + DN/MCN
A more sophisticated determination may optionally incorporate the standard deviation of the statistical distribution into the normalization process.
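A direct rendering of this formula, with one possible way of incorporating the standard deviation, might be:

```python
import numpy as np

def combined_distance(distances, means, stds=None):
    """Dtot = D1/MC1 + D2/MC2 + . . . + DN/MCN. If standard deviations are
    supplied, a z-score style normalization is used instead (this variant
    is an assumption)."""
    d, m = np.asarray(distances, float), np.asarray(means, float)
    if stds is None:
        return float((d / m).sum())      # mean-only normalization
    return float(((d - m) / np.asarray(stds, float)).sum())
```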
4.2 Determining Similarity Measures for Heterogeneous Classifier Sets
So far we have been primarily concerned with cases where all classifiers are available for each reference region. In the context of our principal embodiment this implies that both face recognition classifiers, the top-body correlogram classifier and the hair region correlogram classifier are available. However this is not always the case. We can say that the face region classifiers should always be available once a face region is successfully detected. Hereafter we refer to such classifiers as primary classifiers. In contrast, the hair and clothing classifiers are not always available, for example for close-up shots or where a face region is towards the periphery of an image. Hereafter we refer to such classifiers as secondary classifiers.
Thus when the decision fusion process [824] performs a similarity determination across all stored patterns using all available classifiers, some patterns may not have associated secondary classifiers.
This may be dealt with in one of several ways:
- (i) stored patterns without an associated secondary classifier may have the missing similarity measure for that classifier replaced with the maximum measure determined for that classifier; or
- (ii) such stored patterns may have said similarity measure replaced with the determined statistical mean measure for said classifier; or
- (iii) such patterns may be simply ignored in the search.
In case (i) these patterns will appear after patterns which contain all classifiers; in case (ii) the missing classifier does not affect the ranking of the pattern, which may appear interspersed with patterns containing all classifiers; while in case (iii) these patterns will not appear in the ranked list determined by the decision fusion process.
A selection between these alternatives may be based on pre-determined criteria, on a user selection, or on statistical analysis of the distribution of the classifier across the pattern set.
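These three alternatives can be expressed compactly; in this sketch, a pattern's missing secondary classifier measure is represented as NaN:

```python
import numpy as np

def fill_missing(distances, strategy='max'):
    """Handle patterns lacking a secondary classifier (NaN entries):
    case (i) substitute the maximum measure, case (ii) substitute the
    statistical mean, or case (iii) drop the pattern from the search."""
    d = np.asarray(distances, dtype=float)
    missing = np.isnan(d)
    if strategy == 'max':                # case (i): ranked last
        d[missing] = np.nanmax(d)
    elif strategy == 'mean':             # case (ii): ranking unaffected
        d[missing] = np.nanmean(d)
    elif strategy == 'ignore':           # case (iii): excluded entirely
        d = d[~missing]
    return d

print(fill_missing([0.2, np.nan, 0.7], 'mean'))  # [0.2, 0.45, 0.7]
```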
4.3 Determining Similarity Measures for Multiple Reference Regions
A second modification of the decision fusion process arises when we wish to search for a combination of two, or more, reference regions co-occurring within the same image. In this case we process the first reference region according to the previously described methods to obtain a first set of similarity measures. The second reference region is then processed to yield a second set of similarity measures. This process yields multiple sets of similarity measures.
We next cycle through each image and determine the closest pattern to the first reference region; if only one pattern exists within an image then that image will not normally be considered. For each image where at least two patterns are present we next determine the closest pattern to the second reference region. These two similarity measures are next combined as illustrated in
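A sketch of this pairing logic follows; since the exact combination rule is illustrated in a figure not reproduced here, simple summation of the two similarity measures is assumed:

```python
import numpy as np

def two_region_similarity(image_patterns, ref1, ref2, measure):
    """For an image with at least two patterns, pair the pattern closest
    to the first reference region with the closest remaining pattern to
    the second, and combine the two measures."""
    if len(image_patterns) < 2:
        return None            # single-pattern images are not considered
    d1 = [measure(p, ref1) for p in image_patterns]
    i = int(np.argmin(d1))
    d2 = min(measure(p, ref2)
             for j, p in enumerate(image_patterns) if j != i)
    return d1[i] + d2

dist = lambda a, b: float(np.linalg.norm(a - b))
patterns = [np.array([0., 0.]), np.array([1., 1.]), np.array([2., 0.])]
print(two_region_similarity(patterns, patterns[0], patterns[2], dist))  # 0.0
```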
4.4 Employing User Input in the Combination Process
From the descriptions in 4.2 and 4.3 of the various methods of combining the normalized classifiers it is clear that, once the normalized classifiers for each pattern are determined, the main decision fusion process can combine these classifiers in a variety of ways and that the resulting images (pattern groupings) can be correspondingly sorted in a variety of ways with differing results.
Accordingly we illustrate in
Those skilled in the art will realize that alternative user interface embodiments are possible. Further, the activation buttons for these exemplary classifiers [1002, 1003, 1004 and 1005] may operate in a combinative manner. Thus, if multiple user interface components [1002, 1003, 1004 and 1005] are selected together, the decision fusion process within the image browser application can be modified to combine these classifiers accordingly. Further, additional UI components, such as sliders or percentage scales, can be used to determine weightings between the selected classifiers to allow the user additional flexibility in sorting & retrieving images.
5. User Interface Aspects
Next, in
In
First Alternative Embodiment: Integration into OS Components
An alternative embodiment involving UI aspects is illustrated in
However, if the user selects a mode to sort images based on the faces occurring in them [1002], or the faces & clothing/hair features [1003] or the full body clothing [1004] or the body pose of a person [1005] the training mode may then switch to a foreground process in order to accelerate completion of the training process for the selected subdirectory (image collection). The image regions associated with each of these exemplary classifiers are shown as [1012], [1013], [1014] and [1015] in
Once the training process is completed the face regions for the currently selected image become active as UI elements and a user may now select one or more persons from the image by clicking on their face. The sorted images are then displayed as thumbnails [1010] and the user may combine (or eliminate) additional classifiers from the UI by selecting/deselecting [1002], [1003], [1004] and [1005].
The image browser application illustrated in
The browser application supports two distinct means of searching multiple collections to find the nearest match to one or more face regions selected within the main browser image [1012]. In the context of this embodiment of the invention that may be achieved by selecting multiple image collections in the left-hand window of the image browser [1001].
In the first method the user selects multiple collections from the left-hand browser window [1001]. The selected face regions within the main image are next analyzed and feature vectors are extracted for each classifier based on the basis sets determined within the first selected image collection. Similarity measures are determined between the one or more selected face regions of the main image and each of the face regions within said first selected image collection for each of the classifier sets for which basis vectors are provided within that image collection data set. Normalization measures are determined and combined similarity measures are next determined for each face region within said first selected image collection. A list of these normalized combined similarity measures is stored temporarily.
This process is repeated for each of the selected image collections and an associated list of normalized combined similarity measures is determined. These lists are next combined and all images from the selected image collections are displayed according to their relative similarity measures in the bottom right-hand window [1010] of the image browser.
A second method of searching multiple collections combines these image collections into a new “super-collection”. The collection data sets for each of the selected image collections are then loaded and merged to form a combined collection data set for this “super-collection”. Certain data from the combined data set will now be invalid because it is dependent on the results of the training process. This is illustrated in
The modified retraining process for such a “super-collection” is described above with reference to
Thus, upon a user selection of multiple image collections the present invention allows a fast retraining of the combined image “super-collection”. In this case the primary selection image presented in the main browser window [1012] will be from the combined image “super-collection” and the sorted images presented in the lower right-hand window [1010] are also taken from this combined “super-collection”.
Second Alternative Embodiment: In-Camera Implementation
As imaging appliances continue to increase in computing power, memory and non-volatile storage, it will be evident to those skilled in the art of digital camera design that many aspects of the present invention could be advantageously embodied as an in-camera image sorting sub-system. An exemplary embodiment is illustrated in
Following the main image acquisition process [1202] a copy of the acquired image is saved to the main image collection [1212] which will typically be stored on a removable compact-flash or multimedia data card [1214]. The acquired image may also be passed to an image subsampler [1232] which generates an optimized subsampled copy of the main image and stores it in a subsampled image collection [1216]. These subsampled images may advantageously be employed in the analysis of the acquired image.
The acquired image (or a subsampled copy thereof) is also passed to a face detector module [1204] followed by a face and peripheral region extraction module [1206] and a region normalization module [1207]. The extracted, normalized regions are next passed to the main image analysis module [1208] which generates an image data record [409] for the current image. The main image analysis module may also be called from the training module [1230] and the image sorting/retrieval module [1218].
A UI module [1220] facilitates the browsing and selection of images [1222] and the selection of one or more face regions [1224] to use in the sorting/retrieval process [1218]. In addition, classifiers may be selected and combined [1226] from the UI module [1220].
Those skilled in the art will realize that various combinations are possible where certain modules are implemented in a digital camera and others are implemented on a desktop computer.
Claims
1. A digital image acquisition device, including a lens, an image sensor and a processor, and having an operating system including a component embodied within a processor-readable medium for programming the processor to perform an image recognition method comprising: a) training a plurality of image classifiers, including:
- for a plurality of images in the collection, identifying one or more regions corresponding to a face region;
- for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and
- storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and
- b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including:
- selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face;
- determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and
- retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and
- wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
2. A method for image recognition in a collection of digital images comprising:
- a) training a plurality of image classifiers, including:
- for a plurality of images in the collection, identifying one or more regions corresponding to a face region;
- for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and
- storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and
- b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including:
- selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face;
- determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and
- retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and
- wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
3. A method as claimed in claim 2, wherein said determining further comprises:
- a) for each associated peripheral region for said each face region, extracting respective features representative of the peripheral region;
- b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and
- c) for the extracted features for each peripheral region, for each classifier, determining said feature vectors, based on each determined basis vector.
4. A method as claimed in claim 2 wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
5. A method as claimed in claim 2, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
6. A method as claimed in claim 2, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
7. A method as in claim 2, further comprising calculating for each set of distance measures, mean and variance values.
8. A component embodied within a non-transitory processor-readable medium for programming a processor to perform an image recognition method including image recognition in a collection of digital images, wherein the method comprises:
- a) training a plurality of image classifiers, including: for a plurality of images in the collection, identifying one or more regions corresponding to a face region; for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and
- b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including: selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face; determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and
- wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
9. A component as claimed in claim 8, wherein said determining further comprises:
- d) for each associated peripheral region for said each face region, extracting respective features representative of the peripheral region;
- e) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and
- f) for the extracted features for each peripheral region, for each classifier, determining said feature vectors, based on each determined basis vector.
10. A component as claimed in claim 8, wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
11. A component as claimed in claim 8, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
12. A component as claimed in claim 8, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
13. A component as claimed in claim 8, wherein the method further comprises calculating for each set of distance measures, mean and variance values.
14. A device as claimed in claim 1, wherein the method further comprises calculating for each set of distance measures, mean and variance values.
15. A component as claimed in claim 8, wherein said determining further comprises:
- g) for each face region and any associated peripheral region, extracting respective features representative of the region;
- h) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and
- i) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
16. A device as claimed in claim 1, wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
17. A device as claimed in claim 1, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
18. A device as claimed in claim 1, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
4047187 | September 6, 1977 | Mashimo et al. |
4317991 | March 2, 1982 | Stauffer |
4376027 | March 8, 1983 | Smith et al. |
RE31370 | September 6, 1983 | Mashimo et al. |
4638364 | January 20, 1987 | Hiramatsu |
5018017 | May 21, 1991 | Sasaki et al. |
RE33682 | September 3, 1991 | Hiramatsu |
5063603 | November 5, 1991 | Burt |
5164831 | November 17, 1992 | Kuchta et al. |
5164992 | November 17, 1992 | Turk et al. |
5227837 | July 13, 1993 | Terashita |
5280530 | January 18, 1994 | Trew et al. |
5291234 | March 1, 1994 | Shindo et al. |
5311240 | May 10, 1994 | Wheeler |
5384912 | January 1995 | Ogrinc et al. |
5430809 | July 4, 1995 | Tomitaka |
5432863 | July 11, 1995 | Benati et al. |
5488429 | January 30, 1996 | Kojima et al. |
5496106 | March 5, 1996 | Anderson |
5500671 | March 19, 1996 | Andersson et al. |
5576759 | November 19, 1996 | Kawamura et al. |
5633678 | May 27, 1997 | Parulski et al. |
5638136 | June 10, 1997 | Kojima et al. |
5642431 | June 24, 1997 | Poggio et al. |
5680481 | October 21, 1997 | Prasad et al. |
5684509 | November 4, 1997 | Hatanaka et al. |
5706362 | January 6, 1998 | Yabe |
5710833 | January 20, 1998 | Moghaddam et al. |
5724456 | March 3, 1998 | Boyack et al. |
5744129 | April 28, 1998 | Dobbs et al. |
5745668 | April 28, 1998 | Poggio et al. |
5774129 | June 30, 1998 | Poggio et al. |
5774747 | June 30, 1998 | Ishihara et al. |
5774754 | June 30, 1998 | Ootsuka |
5781650 | July 14, 1998 | Lobo et al. |
5802208 | September 1, 1998 | Podilchuk et al. |
5812193 | September 22, 1998 | Tomitaka et al. |
5818975 | October 6, 1998 | Goodwin et al. |
5835616 | November 10, 1998 | Lobo et al. |
5842194 | November 24, 1998 | Arbuckle |
5844573 | December 1, 1998 | Poggio et al. |
5852823 | December 22, 1998 | De Bonet |
5870138 | February 9, 1999 | Smith et al. |
5911139 | June 8, 1999 | Jain et al. |
5911456 | June 15, 1999 | Tsubouchi et al. |
5978519 | November 2, 1999 | Bollman et al. |
5991456 | November 23, 1999 | Rahman et al. |
6035072 | March 7, 2000 | Read |
6053268 | April 25, 2000 | Yamada |
6072904 | June 6, 2000 | Desai et al. |
6097470 | August 1, 2000 | Buhr et al. |
6101271 | August 8, 2000 | Yamashita et al. |
6128397 | October 3, 2000 | Baluja et al. |
6142876 | November 7, 2000 | Cumbers |
6148092 | November 14, 2000 | Qian |
6188777 | February 13, 2001 | Darrell et al. |
6192149 | February 20, 2001 | Eschbach et al. |
6234900 | May 22, 2001 | Cumbers |
6246790 | June 12, 2001 | Huang et al. |
6249315 | June 19, 2001 | Holm |
6263113 | July 17, 2001 | Abdel-Mottaleb et al. |
6268939 | July 31, 2001 | Klassen et al. |
6282317 | August 28, 2001 | Luo et al. |
6301370 | October 9, 2001 | Steffens et al. |
6332033 | December 18, 2001 | Qian |
6349373 | February 19, 2002 | Sitka et al. |
6351556 | February 26, 2002 | Loui et al. |
6389181 | May 14, 2002 | Shaffer et al. |
6393148 | May 21, 2002 | Bhaskar |
6400470 | June 4, 2002 | Takaragi et al. |
6400830 | June 4, 2002 | Christian et al. |
6404900 | June 11, 2002 | Qian et al. |
6407777 | June 18, 2002 | DeLuca |
6418235 | July 9, 2002 | Morimoto et al. |
6421468 | July 16, 2002 | Ratnakar et al. |
6430307 | August 6, 2002 | Souma et al. |
6430312 | August 6, 2002 | Huang et al. |
6438264 | August 20, 2002 | Gallagher et al. |
6456732 | September 24, 2002 | Kimbell et al. |
6459436 | October 1, 2002 | Kumada et al. |
6473199 | October 29, 2002 | Gilman et al. |
6501857 | December 31, 2002 | Gotsman et al. |
6502107 | December 31, 2002 | Nishida |
6504942 | January 7, 2003 | Hong et al. |
6504951 | January 7, 2003 | Luo et al. |
6516154 | February 4, 2003 | Parulski et al. |
6526161 | February 25, 2003 | Yan |
6554705 | April 29, 2003 | Cumbers |
6556708 | April 29, 2003 | Christian et al. |
6564225 | May 13, 2003 | Brogliatti et al. |
6567775 | May 20, 2003 | Maali et al. |
6567983 | May 20, 2003 | Shiimori |
6606398 | August 12, 2003 | Cooper |
6633655 | October 14, 2003 | Hong et al. |
6661907 | December 9, 2003 | Ho et al. |
6697503 | February 24, 2004 | Matsuo et al. |
6697504 | February 24, 2004 | Tsai |
6754389 | June 22, 2004 | Dimitrova et al. |
6760465 | July 6, 2004 | McVeigh et al. |
6765612 | July 20, 2004 | Anderson et al. |
6783459 | August 31, 2004 | Cumbers |
6801250 | October 5, 2004 | Miyashita |
6826300 | November 30, 2004 | Liu et al. |
6850274 | February 1, 2005 | Silverbrook et al. |
6876755 | April 5, 2005 | Taylor et al. |
6879705 | April 12, 2005 | Tao et al. |
6928231 | August 9, 2005 | Tajima |
6940545 | September 6, 2005 | Ray et al. |
6965684 | November 15, 2005 | Chen et al. |
6993157 | January 31, 2006 | Oue et al. |
7003135 | February 21, 2006 | Hsieh et al. |
7020337 | March 28, 2006 | Viola et al. |
7027619 | April 11, 2006 | Pavlidis et al. |
7035456 | April 25, 2006 | Lestideau |
7035467 | April 25, 2006 | Nicponski |
7038709 | May 2, 2006 | Verghese |
7038715 | May 2, 2006 | Flinchbaugh |
7042505 | May 9, 2006 | DeLuca |
7046339 | May 16, 2006 | Stanton et al. |
7050607 | May 23, 2006 | Li et al. |
7064776 | June 20, 2006 | Sumi et al. |
7082212 | July 25, 2006 | Liu et al. |
7092555 | August 15, 2006 | Lee et al. |
7099510 | August 29, 2006 | Jones et al. |
7110575 | September 19, 2006 | Chen et al. |
7113641 | September 26, 2006 | Eckes et al. |
7119838 | October 10, 2006 | Zanzucchi et al. |
7120279 | October 10, 2006 | Chen et al. |
7151843 | December 19, 2006 | Rui et al. |
7158680 | January 2, 2007 | Pace |
7162076 | January 9, 2007 | Liu |
7162101 | January 9, 2007 | Itokawa et al. |
7171023 | January 30, 2007 | Kim et al. |
7171025 | January 30, 2007 | Rui et al. |
7175528 | February 13, 2007 | Cumbers |
7187786 | March 6, 2007 | Kee |
7190829 | March 13, 2007 | Zhang et al. |
7200249 | April 3, 2007 | Okubo et al. |
7206461 | April 17, 2007 | Steinberg et al. |
7218759 | May 15, 2007 | Ho et al. |
7227976 | June 5, 2007 | Jung et al. |
7254257 | August 7, 2007 | Kim et al. |
7269292 | September 11, 2007 | Steinberg |
7274822 | September 25, 2007 | Zhang et al. |
7274832 | September 25, 2007 | Nicponski |
7295233 | November 13, 2007 | Steinberg et al. |
7308156 | December 11, 2007 | Steinberg et al. |
7310450 | December 18, 2007 | Steinberg et al. |
7315630 | January 1, 2008 | Steinberg et al. |
7315631 | January 1, 2008 | Corcoran et al. |
7315658 | January 1, 2008 | Steinberg et al. |
7317815 | January 8, 2008 | Steinberg et al. |
7317816 | January 8, 2008 | Ray et al. |
7324670 | January 29, 2008 | Kozakaya et al. |
7330570 | February 12, 2008 | Sogo et al. |
7336821 | February 26, 2008 | Ciuc et al. |
7340109 | March 4, 2008 | Steinberg et al. |
7352394 | April 1, 2008 | DeLuca et al. |
7357717 | April 15, 2008 | Cumbers |
7362368 | April 22, 2008 | Steinberg et al. |
7369712 | May 6, 2008 | Steinberg et al. |
7403643 | July 22, 2008 | Ianculescu et al. |
7424170 | September 9, 2008 | Steinberg et al. |
7436998 | October 14, 2008 | Steinberg et al. |
7440593 | October 21, 2008 | Steinberg et al. |
7440594 | October 21, 2008 | Takenaka |
7460694 | December 2, 2008 | Corcoran et al. |
7460695 | December 2, 2008 | Steinberg et al. |
7466866 | December 16, 2008 | Steinberg |
7469055 | December 23, 2008 | Corcoran et al. |
7469071 | December 23, 2008 | Drimbarean et al. |
7471846 | December 30, 2008 | Steinberg et al. |
7474341 | January 6, 2009 | DeLuca et al. |
7506057 | March 17, 2009 | Bigioi et al. |
7515740 | April 7, 2009 | Corcoran et al. |
7536036 | May 19, 2009 | Steinberg et al. |
7536060 | May 19, 2009 | Steinberg et al. |
7536061 | May 19, 2009 | Steinberg et al. |
7545995 | June 9, 2009 | Steinberg et al. |
7551754 | June 23, 2009 | Steinberg et al. |
7551755 | June 23, 2009 | Steinberg et al. |
7551800 | June 23, 2009 | Corcoran et al. |
7555148 | June 30, 2009 | Steinberg et al. |
7558408 | July 7, 2009 | Steinberg et al. |
7564994 | July 21, 2009 | Steinberg et al. |
7565030 | July 21, 2009 | Steinberg et al. |
7574016 | August 11, 2009 | Steinberg et al. |
7587068 | September 8, 2009 | Steinberg et al. |
7587085 | September 8, 2009 | Steinberg et al. |
7590305 | September 15, 2009 | Steinberg et al. |
7599577 | October 6, 2009 | Ciuc et al. |
7606417 | October 20, 2009 | Steinberg et al. |
7616233 | November 10, 2009 | Steinberg et al. |
7619665 | November 17, 2009 | DeLuca |
7620218 | November 17, 2009 | Steinberg et al. |
7630006 | December 8, 2009 | DeLuca et al. |
7630527 | December 8, 2009 | Steinberg et al. |
7634109 | December 15, 2009 | Steinberg et al. |
7636486 | December 22, 2009 | Steinberg et al. |
7639888 | December 29, 2009 | Steinberg et al. |
7639889 | December 29, 2009 | Steinberg et al. |
7660478 | February 9, 2010 | Steinberg et al. |
7676108 | March 9, 2010 | Steinberg et al. |
7676110 | March 9, 2010 | Steinberg et al. |
7680342 | March 16, 2010 | Steinberg et al. |
7683946 | March 23, 2010 | Steinberg et al. |
7684630 | March 23, 2010 | Steinberg |
7685341 | March 23, 2010 | Steinberg et al. |
7689009 | March 30, 2010 | Corcoran et al. |
7692696 | April 6, 2010 | Steinberg et al. |
7693311 | April 6, 2010 | Steinberg et al. |
7694048 | April 6, 2010 | Steinberg et al. |
7697778 | April 13, 2010 | Steinberg et al. |
7702136 | April 20, 2010 | Steinberg et al. |
7702236 | April 20, 2010 | Steinberg et al. |
7715597 | May 11, 2010 | Costache et al. |
7738015 | June 15, 2010 | Steinberg et al. |
7746385 | June 29, 2010 | Steinberg et al. |
7747596 | June 29, 2010 | Bigioi et al. |
7773118 | August 10, 2010 | Florea et al. |
7783085 | August 24, 2010 | Perlmutter et al. |
7787022 | August 31, 2010 | Steinberg et al. |
7792335 | September 7, 2010 | Steinberg et al. |
7792970 | September 7, 2010 | Bigioi et al. |
7796816 | September 14, 2010 | Steinberg et al. |
7796822 | September 14, 2010 | Steinberg et al. |
7804531 | September 28, 2010 | DeLuca et al. |
7804983 | September 28, 2010 | Steinberg et al. |
7809162 | October 5, 2010 | Steinberg et al. |
7822234 | October 26, 2010 | Steinberg et al. |
7822235 | October 26, 2010 | Steinberg et al. |
7844076 | November 30, 2010 | Corcoran et al. |
7844135 | November 30, 2010 | Steinberg et al. |
7847839 | December 7, 2010 | DeLuca et al. |
7847840 | December 7, 2010 | DeLuca et al. |
7848549 | December 7, 2010 | Steinberg et al. |
7852384 | December 14, 2010 | DeLuca et al. |
7853043 | December 14, 2010 | Steinberg et al. |
7855737 | December 21, 2010 | Petrescu et al. |
7860274 | December 28, 2010 | Steinberg et al. |
7864990 | January 4, 2011 | Corcoran et al. |
7865036 | January 4, 2011 | Ciuc et al. |
7868922 | January 11, 2011 | Ciuc et al. |
7869628 | January 11, 2011 | Corcoran et al. |
20010028731 | October 11, 2001 | Covell et al. |
20010031129 | October 18, 2001 | Tajima |
20010031142 | October 18, 2001 | Whiteside |
20020105662 | August 8, 2002 | Patton et al. |
20020106114 | August 8, 2002 | Yan et al. |
20020113879 | August 22, 2002 | Battle et al. |
20020114535 | August 22, 2002 | Luo |
20020132663 | September 19, 2002 | Cumbers |
20020136433 | September 26, 2002 | Lin |
20020141586 | October 3, 2002 | Margalit et al. |
20020154793 | October 24, 2002 | Hillhouse et al. |
20020168108 | November 14, 2002 | Loui et al. |
20020172419 | November 21, 2002 | Lin et al. |
20030025812 | February 6, 2003 | Slatter |
20030035573 | February 20, 2003 | Duta et al. |
20030043160 | March 6, 2003 | Elfving et al. |
20030048926 | March 13, 2003 | Watanabe |
20030048950 | March 13, 2003 | Savakis et al. |
20030052991 | March 20, 2003 | Stavely et al. |
20030059107 | March 27, 2003 | Sun et al. |
20030059121 | March 27, 2003 | Savakis et al. |
20030084065 | May 1, 2003 | Lin et al. |
20030086134 | May 8, 2003 | Enomoto |
20030086593 | May 8, 2003 | Liu et al. |
20030107649 | June 12, 2003 | Flickner et al. |
20030118216 | June 26, 2003 | Goldberg |
20030118218 | June 26, 2003 | Wendt et al. |
20030122839 | July 3, 2003 | Matraszek et al. |
20030128877 | July 10, 2003 | Nicponski |
20030156202 | August 21, 2003 | Van Zee |
20030158838 | August 21, 2003 | Okusa |
20030198368 | October 23, 2003 | Kee |
20030210808 | November 13, 2003 | Chen et al. |
20040008258 | January 15, 2004 | Aas et al. |
20040136574 | July 15, 2004 | Kozakaya et al. |
20040145660 | July 29, 2004 | Kusaka |
20040207722 | October 21, 2004 | Koyama et al. |
20040210763 | October 21, 2004 | Jonas |
20040213454 | October 28, 2004 | Lai et al. |
20040223063 | November 11, 2004 | DeLuca et al. |
20040264780 | December 30, 2004 | Zhang et al. |
20050013479 | January 20, 2005 | Xiao et al. |
20050031224 | February 10, 2005 | Prilutsky et al. |
20050036676 | February 17, 2005 | Heisele |
20050063569 | March 24, 2005 | Colbert et al. |
20050069208 | March 31, 2005 | Morisada |
20050129278 | June 16, 2005 | Rui et al. |
20050140801 | June 30, 2005 | Prilutsky et al. |
20050226509 | October 13, 2005 | Maurer et al. |
20060006077 | January 12, 2006 | Mosher et al. |
20060018521 | January 26, 2006 | Avidan |
20060093238 | May 4, 2006 | Steinberg et al. |
20060104488 | May 18, 2006 | Bazakos et al. |
20060120599 | June 8, 2006 | Steinberg et al. |
20060140055 | June 29, 2006 | Ehrsam et al. |
20060140455 | June 29, 2006 | Costache et al. |
20060177100 | August 10, 2006 | Zhu et al. |
20060177131 | August 10, 2006 | Porikli |
20060204034 | September 14, 2006 | Steinberg et al. |
20060204053 | September 14, 2006 | Mori et al. |
20060228040 | October 12, 2006 | Simon et al. |
20060239515 | October 26, 2006 | Zhang et al. |
20060251292 | November 9, 2006 | Gokturk et al. |
20070011651 | January 11, 2007 | Wagner |
20070053335 | March 8, 2007 | Onyon et al. |
20070091203 | April 26, 2007 | Peker et al. |
20070098303 | May 3, 2007 | Gallagher et al. |
20070154095 | July 5, 2007 | Cao et al. |
20070154096 | July 5, 2007 | Cao et al. |
20070160307 | July 12, 2007 | Steinberg et al. |
20070253638 | November 1, 2007 | Steinberg et al. |
20070269108 | November 22, 2007 | Steinberg et al. |
20070296833 | December 27, 2007 | Corcoran et al. |
20080013798 | January 17, 2008 | Ionita et al. |
20080013799 | January 17, 2008 | Steinberg et al. |
20080043121 | February 21, 2008 | Prilutsky et al. |
20080049970 | February 28, 2008 | Ciuc et al. |
20080075385 | March 27, 2008 | David et al. |
20080089561 | April 17, 2008 | Zhang |
20080112599 | May 15, 2008 | Nanu et al. |
20080137919 | June 12, 2008 | Kozakaya et al. |
20080143854 | June 19, 2008 | Steinberg et al. |
20080144966 | June 19, 2008 | Steinberg et al. |
20080175481 | July 24, 2008 | Petrescu et al. |
20080186389 | August 7, 2008 | DeLuca et al. |
20080205712 | August 28, 2008 | Ionita et al. |
20080219517 | September 11, 2008 | Blonk et al. |
20080219518 | September 11, 2008 | Steinberg et al. |
20080219581 | September 11, 2008 | Albu et al. |
20080220750 | September 11, 2008 | Steinberg et al. |
20080232711 | September 25, 2008 | Prilutsky et al. |
20080240555 | October 2, 2008 | Nanu et al. |
20080266419 | October 30, 2008 | Drimbarean et al. |
20080267461 | October 30, 2008 | Ianculescu et al. |
20080292193 | November 27, 2008 | Bigioi et al. |
20080309769 | December 18, 2008 | Albu et al. |
20080309770 | December 18, 2008 | Florea et al. |
20080316327 | December 25, 2008 | Steinberg et al. |
20080316328 | December 25, 2008 | Steinberg et al. |
20080317339 | December 25, 2008 | Steinberg et al. |
20080317357 | December 25, 2008 | Steinberg et al. |
20080317378 | December 25, 2008 | Steinberg et al. |
20080317379 | December 25, 2008 | Steinberg et al. |
20090002514 | January 1, 2009 | Steinberg et al. |
20090003661 | January 1, 2009 | Ionita et al. |
20090003708 | January 1, 2009 | Steinberg et al. |
20090040342 | February 12, 2009 | Drimbarean et al. |
20090080713 | March 26, 2009 | Bigioi et al. |
20090080796 | March 26, 2009 | Capata et al. |
20090080797 | March 26, 2009 | Nanu et al. |
20090115915 | May 7, 2009 | Steinberg et al. |
20090123063 | May 14, 2009 | Ciuc |
20090167893 | July 2, 2009 | Susanu et al. |
20090179998 | July 16, 2009 | Steinberg et al. |
20090179999 | July 16, 2009 | Albu et al. |
20090185753 | July 23, 2009 | Albu et al. |
20090189997 | July 30, 2009 | Stec et al. |
20090189998 | July 30, 2009 | Nanu et al. |
20090190803 | July 30, 2009 | Neghina et al. |
20090196466 | August 6, 2009 | Capata et al. |
20090238410 | September 24, 2009 | Corcoran et al. |
20090238419 | September 24, 2009 | Steinberg et al. |
20090263022 | October 22, 2009 | Petrescu et al. |
20090303342 | December 10, 2009 | Corcoran et al. |
20090303343 | December 10, 2009 | Drimbarean et al. |
20090304278 | December 10, 2009 | Steinberg et al. |
20100014721 | January 21, 2010 | Steinberg et al. |
20100026831 | February 4, 2010 | Ciuc et al. |
20100026832 | February 4, 2010 | Ciuc et al. |
20100026833 | February 4, 2010 | Ciuc et al. |
20100039520 | February 18, 2010 | Nanu et al. |
20100039525 | February 18, 2010 | Steinberg et al. |
20100053362 | March 4, 2010 | Nanu et al. |
20100053367 | March 4, 2010 | Nanu et al. |
20100053368 | March 4, 2010 | Nanu et al. |
20100054533 | March 4, 2010 | Steinberg et al. |
20100054549 | March 4, 2010 | Steinberg et al. |
20100054592 | March 4, 2010 | Nanu et al. |
20100060727 | March 11, 2010 | Steinberg et al. |
20100066822 | March 18, 2010 | Steinberg et al. |
20100141786 | June 10, 2010 | Bigioi et al. |
20100141787 | June 10, 2010 | Bigioi et al. |
20100141798 | June 10, 2010 | Steinberg et al. |
20100146165 | June 10, 2010 | Steinberg et al. |
20100165140 | July 1, 2010 | Steinberg |
20100165150 | July 1, 2010 | Steinberg et al. |
20100182458 | July 22, 2010 | Steinberg et al. |
20100194895 | August 5, 2010 | Steinberg |
20100201826 | August 12, 2010 | Steinberg et al. |
20100201827 | August 12, 2010 | Steinberg et al. |
20100220899 | September 2, 2010 | Steinberg et al. |
20100231727 | September 16, 2010 | Steinberg et al. |
20100238309 | September 23, 2010 | Florea et al. |
20100259622 | October 14, 2010 | Steinberg et al. |
20100260414 | October 14, 2010 | Ciuc |
20100271499 | October 28, 2010 | Steinberg et al. |
20100272363 | October 28, 2010 | Steinberg et al. |
20100295959 | November 25, 2010 | Steinberg et al. |
20100321537 | December 23, 2010 | Zamfir |
20100328472 | December 30, 2010 | Steinberg et al. |
20100328486 | December 30, 2010 | Steinberg et al. |
20100329549 | December 30, 2010 | Steinberg et al. |
20100329582 | December 30, 2010 | Albu et al. |
20110002506 | January 6, 2011 | Ciuc et al. |
20110002545 | January 6, 2011 | Steinberg et al. |
20110007174 | January 13, 2011 | Bacivarov et al. |
20110013043 | January 20, 2011 | Corcoran et al. |
20110013044 | January 20, 2011 | Steinberg et al. |
20110025859 | February 3, 2011 | Steinberg et al. |
20110025886 | February 3, 2011 | Steinberg et al. |
20110026780 | February 3, 2011 | Corcoran et al. |
20110033112 | February 10, 2011 | Steinberg et al. |
20110043648 | February 24, 2011 | Albu et al. |
20110050919 | March 3, 2011 | Albu et al. |
20110053654 | March 3, 2011 | Petrescu et al. |
20110055354 | March 3, 2011 | Bigioi et al. |
2370438 | June 2002 | GB |
5260360 | October 1993 | JP |
WO2007142621 | December 2007 | WO |
WO2008015586 | February 2008 | WO |
WO2008107112 | September 2008 | WO |
WO2008109622 | September 2008 | WO |
WO2008107112 | January 2009 | WO |
WO2010063463 | June 2010 | WO |
WO2010063463 | July 2010 | WO |
- Ahonen T., et al., "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 28, pp. 2037-2041, Dec. 2006.
- Belhumeur P.N., et al., Eigenfaces vs. Fisherfaces: Recognition using Class Specific Linear Projection, Proceedings of the 4th European Conference on Computer Vision, ECCV'96, Apr. 15-18, 1996, Cambridge, UK, pp. 45-58.
- Belle V., "Detection and Recognition of Human Faces using Random Forests for a Mobile Robot" [Online] Apr. 2008, pp. 1-104, RWTH Aachen, DE Master of Science Thesis, [retrieved on Apr. 29, 2010], Retrieved from the Internet: URL:http://thomas.deselaers.de/teaching/files/belle_master.pdf, Section 5.7, Chapters 3-5.
- Beymer, David, “Pose-Invariant face Recognition Using Real and Virtual Views, A.I. Technical Report No. 1574”, Massachusetts Institute of Technology Artificial Intelligence Laboratory, 1996, pp. 1-176.
- Boom B., et al., “Investigating the boosting framework for face recognition,” Proceedings of the 28th Symposium on Information Theory in the Benelux, Enschede, The Netherlands, 2007, pp. 1-8.
- Bourdev L., et al., "Robust Object Detection via Soft Cascade," In: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, Jun. 20-26, 2005, IEEE, Piscataway, NJ, USA, 2005, vol. 2, pp. 236-243.
- Bradski Gary et al., “Learning-Based Computer Vision with Intel's Open Source Computer Vision Library”, Intel Technology, 2005, pp. 119-130, vol. 9—Issue 2.
- Chang, T., “Texture Analysis and Classification with Tree-Structured Wavelet Transform”, IEEE Transactions on Image Processing, 1993, pp. 429-441, vol. 2—Issue 4.
- Chen et al., “Face annotation for family photo album management”, International Journal of Image and Graphics, 2003, vol. 3—Issue 1.
- Clippingdale S., et al., "A unified approach to video face detection, tracking and recognition," Proceedings of the 1999 International Conference on Image Processing, Kobe, 1999, vol. 1, pp. 662-666.
- Corcoran, P. et al., “Automatic Indexing of Consumer Image Collections Using Person Recognition Techniques”, Digest of Technical Papers. International Conference on Consumer Electronics, 2005, pp. 127-128.
- Corcoran P., et al., Automatic System for In-Camera Person Indexing of Digital Image Collections, Conference Proceedings, GSPx 2006, Santa Clara, Ca., Oct. 2006.
- Corcoran P., et al., Improved HMM-based face recognition system, International Conference on Optimization of Electrical and Electronic Equipment, Brasov, Romania, May 2006.
- Corcoran P., et al., Pose-invariant face recognition using AAMs, International Conference on Optimization of Electrical and Electronic Equipment, Brasov, Romania, May 2006.
- Corcoran, Peter et al., “Automated sorting of consumer image collections using face and peripheral region image classifiers”, IEEE Transactions on Consumer Electronics, 2005, pp. 747-754, vol. 51—Issue 3.
- Corcoran Peter et al., “Combining PCA-based Datasets without Retraining of the Basis Vector Set”, IEEE PC, 2007.
- Costache G., et al., In-camera person-indexing of digital images, Digest of Technical Papers, International Conference on Consumer Electronics (ICCE '06), Jan. 7-11, 2006.
- Costache, G. et al., “In-Camera Person-Indexing of Digital Images”, Digest of Technical Papers. International Conference on Consumer Electronics, 2006, pp. 339-340.
- Demirkir, C. et al., “Face detection using boosted tree classifier stages”, Proceedings of the IEEE 12th Signal Processing and Communications Applications Conference, 2004, pp. 575-578.
- Drimbarean, A.F. et al., “Image Processing Techniques to Detect and Filter Objectionable Images based on Skin Tone and Shape Recognition”, International Conference on Consumer Electronics, 2001, pp. 278-279.
- EPO Communication pursuant to Article 94(3) EPC, for European application No. 08716106.3, dated Jul. 2, 2010, 6 Pages.
- EPO Communication Regarding the Transmission of the European Search Report, European Search Opinion and Supplementary European Search Report, for European Patent Application No. 08743677.0, Report dated Feb. 14, 2011, 6 pages.
- Final Office Action mailed Jan. 6, 2012 for U.S. Appl. No. 12/042,104, filed Mar. 4, 2008.
- Final Office Action mailed Jun. 17, 2011 for U.S. Appl. No. 12/506,124, filed Jul. 20, 2009.
- Final Office Action mailed Oct. 17, 2008 for U.S. Appl. No. 10/764,335, filed Jan. 22, 2004.
- Georghiades A.S., et al., “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, vol. 23, No. 6, pp. 643-660.
- Google Picasa (n.d.), Retrieved from the Internet on Apr. 24, 2011, URL:http://picasa.google.com, 13 pages.
- Hall, P. et al., “Adding and Subtracting eigenspaces”, Proceedings of the British Machine Vision Conference, 1999, pp. 453-462, vol. 2.
- Hall, P. et al., “Adding and subtracting eigenspaces with eigenvalue decomposition and singular value decomposition”, Image and Vision Computing, 2002, pp. 1009-1016, vol. 20—Issue 13-14.
- Hall, P. et al., Merging and Splitting Eigenspace Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Service Center, Los Alamitos, CA US, vol. 22, No. 9, Sep. 1, 2000, pp. 1042-1049, XP008081056, ISSN: 0162-8828.
- Hall, Peter et al., “Incremental Eigenanalysis for Classification, XP008091807”, British Machine Vision Conference, pp. 286-295.
- Huang C., et al., "Boosting Nested Cascade Detector for Multi-View Face Detection," Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), vol. 2, IEEE Computer Society, Washington, DC, USA, 2004, 4 pages.
- Huang et al., "Image Indexing Using Color Correlograms", Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97), 1997, p. 762.
- International Preliminary Report on Patentability for PCT Application No. PCT/EP2009/008603, mailed on Jun. 7, 2011, 9 pages.
- International Search Report and Written Opinion for PCT Application No. PCT/EP2009/008603, mailed on Jun. 7, 2010, 11 pages.
- Jebara, Tony S. et al., “3D Pose Estimation and Normalization for Face Recognition, A Thesis submitted to the Faculty of Graduate Studies and Research in Partial fulfillment of the requirements of the degree of Bachelor of Engineering”, Department of Electrical Engineering, 1996, pp. 1-121, McGill University.
- Kouzani, A.Z., “Illumination-Effects Compensation in Facial Images Systems”, Man and Cybernetics, IEEE SMC '99 Conference Proceedings, 1999, pp. VI-840-VI-844, vol. 6.
- Kusumoputro, B. et al., "Development of 3D Face Databases by Using Merging and Splitting Eigenspace Models, retrieved from URL: http://www.wseas.us/e-library/conferences/digest2003/papers/466-272.pdf on Sep. 16, 2008", WSEAS Trans. on Computers, 2003, pp. 203-209, vol. 2—Issue 1.
- Lai, J.H. et al., "Face recognition using holistic Fourier invariant features, http://digitalimaging.inf.brad.ac.uk/publication/pr34-1.pdf.", Pattern Recognition, 2001, pp. 95-109, vol. 34.
- Land E.H., "An alternative technique for the computation of the designator in the retinex theory of color vision," Proceedings of the National Academy of Sciences, USA, vol. 83, pp. 3078-3080, May 1986.
- Lee K., et al., “Nine Points of Light: Acquiring Subspaces for Face Recognition under Variable Lighting,” in Proceedings of CVPR, 2001, vol. 1, pp. 519-526.
- Lei et al., “A CBIR Method Based on Color-Spatial Feature”, IEEE Region 10th Ann. Int. Conf., 1999.
- Lienhart, R. et al., “A Detector Tree of Boosted Classifiers for Real-Time Object Detection and Tracking”, Proceedings of the 2003 International Conference on Multimedia and Expo, 2003, pp. 277-280, vol. 1, IEEE Computer Society.
- Liu, X. et al., “Eigenspace updating for non-stationary Process and its application to face recognition”, Pattern Recognition, 2003, pp. 1945-1959, vol. 36—Issue 9, Elsevier.
- Lowe D.G., et al., "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60 (2), pp. 91-110, Kluwer Academic Publishers, 2004.
- Melenchon, Javier et al., "Efficiently Downdating, Composing and Splitting Singular Value Decompositions Preserving the Mean Information", Pattern Recognition and Image Analysis, Lecture Notes in Computer Science, 2007, pp. 436-443, vol. 4478, Springer-Verlag.
- Mitra, S. et al., “Gaussian Mixture Models Based on the Frequency Spectra for Human Identification and Illumination Classification”, Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, 2005, pp. 245-250.
- Nefian A.V., et al., “Hidden Markov Models for Face Recognition,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP'98, vol. 5, May 12-15, 1998, Seattle, Washington, USA, pp. 2721-2724.
- Non-Final Office Action mailed Oct. 11, 2011 for U.S. Appl. No. 12/437,464, filed May 7, 2009.
- Non-Final Office Action mailed Oct. 11, 2011 for U.S. Appl. No. 12/913,772, filed Oct. 28, 2010.
- Non-Final Office Action mailed Mar. 14, 2007 for U.S. Appl. No. 10/764,335, filed Jan. 22, 2004.
- Non-Final Office Action mailed Mar. 17, 2008 for U.S. Appl. No. 10/764,335, filed Jan. 22, 2004.
- Non-Final Office Action mailed Jun. 22, 2011 for U.S. Appl. No. 12/042,104, filed Mar. 4, 2008.
- Non-Final Office Action mailed Sep. 29, 2008 for U.S. Appl. No. 10/764,336, filed Jan. 22, 2004.
- Non-Final Office Action mailed Apr. 20, 2011, for U.S. Appl. No. 12/506,124, filed Jul. 20, 2009.
- Non-Final Office Action mailed May 24, 2010, for U.S. Appl. No. 12/506,124, filed Jul. 20, 2009.
- Notice of Allowance mailed Feb. 9, 2009 for U.S. Appl. No. 10/764,274, filed Jan. 22, 2004.
- Notice of Allowance mailed Mar. 9, 2009 for U.S. Appl. No. 10/764,274, filed Jan. 22, 2004.
- Notice of Allowance mailed Mar. 16, 2009 for U.S. Appl. No. 10/764,336, filed Jan. 22, 2004.
- Notice of Allowance mailed Mar. 20, 2009 for U.S. Appl. No. 10/763,801, filed Jan. 22, 2004.
- Notice of Allowance mailed Aug. 23, 2011 for U.S. Appl. No. 12/418,987, filed Apr. 6, 2009.
- Notice of Allowance mailed Feb. 25, 2009 for U.S. Appl. No. 10/764,339, filed Jan. 22, 2004.
- Notice of Allowance mailed Jan. 29, 2009 for U.S. Appl. No. 10/764,339, filed Jan. 22, 2004.
- Notice of Allowance mailed Apr. 30, 2009 for U.S. Appl. No. 10/764,335, filed Jan. 22, 2004.
- Ojala T., et al., A generalized Local Binary Pattern operator for multiresolution gray scale and rotation invariant texture classification, Advances in Pattern Recognition, ICAPR 2001 Proceedings, Springer, pp. 397-406, 2001.
- PCT International Preliminary Report on Patentability for PCT Application No. PCT/IB2007/003985, mailed on Feb. 3, 2009, 9 pages.
- PCT International Preliminary Report on Patentability Chapter I, for PCT Application No. PCT/US2008/055831, dated Sep. 8, 2008, 5 pages.
- PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/IB2007/003985, dated Jun. 17, 2008, 20 pages.
- PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/US2007/75136, dated Oct. 1, 2008, 9 pages.
- PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/US2008/055831, dated Jun. 24, 2008, 7 Pages.
- Phillips P.J., et al., Face Recognition Vendor Test 2002, Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures, p. 44, Oct. 17, 2003.
- Pizer S., et al., “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, pp. 355-368, 1987.
- Podilchuk C., et al., Face recognition using DCT-based feature vectors, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96), vol. 4, pp. 2144-2147, May 1996.
- Lienhart, Rainer, "Video OCR: A Survey and Practitioner's Guide", Chapter 6 in Video Mining, Rosenfeld, Azriel, Doermann, David Scott and DeMenthon, Daniel (Eds.), Kluwer International Series in Video Computing, Springer, 2003, pp. 155-183, XP009046500.
- Shah, Agam, "CES: Digital Imaging Market Set to Explode, panel says", The Industry Standard, Internet article: www.thestandard.com/article.php?story=20040108174644982, 2004, 2 pages.
- Shakhnarovich G., et al., “A unified learning framework for real time face detection and classification,” Automatic Face and Gesture Recognition, Proceedings. Fifth IEEE International Conference on, 20020520 IEEE, Piscataway, NJ, USA, 2002, pp. 16-23.
- Shakhnarovich G., et al., “Chapter 7. Face Recognition in Subspaces” In: Handbook of Face Recognition, Li S.Z., et al. (Eds), 2005, Springer, New York, ISBN: 9780387405957, Section 2.1, pp. 141-168.
- Sim T., et al., “The CMU Pose, Illumination, and Expression (PIE) database,” Automatic Face and Gesture Recognition, 2002, Proceedings, Fifth IEEE International Conference on, IEEE, Piscataway, NJ, USA, May 20, 2002, pp. 53-58, XP010949335, ISBN: 978-0-76.
- Smith W.A.P., et al., Single image estimation of facial albedo maps, BVAI, pp. 517-526.
- Soriano, M. et al., “Making Saturated Facial Images Useful Again, XP002325961, ISSN: 0277-786X”, Proceedings of the SPIE, 1999, pp. 113-121, vol. 3826.
- Stricker et al., “Similarity of color images”, SPIE Proc, 1995, pp. 1-12, vol. 2420.
- Tessera OptiML FaceTools (2010), Retrieved from the Internet on Mar. 25, 2011, URL: http://tessera.com/technologies/imagingandoptics/Documents/OptiML_faceTools.pdf, 4 pages.
- Tjahyadi et al., “Application of the DCT Energy Histogram for Face Recognition”, Proceedings of the 2nd International Conference on Information Technology for Application, 2004, pp. 305-310.
- Turk M.A., et al., “Face Recognition using Eigenfaces,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, Jun. 1991.
- Turk, Matthew et al., “Eigenfaces for Recognition”, Journal of Cognitive Neuroscience, 1991, 17 pgs, vol. 3—Issue 1.
- US Final Office Action dated Oct. 16, 2007, in co-pending related U.S. Appl. No. 10/764,335. 47 pgs.
- US Office Action dated Oct. 3, 2008, in co-pending related U.S. Appl. No. 10/764,274, 53 pgs.
- US Office Action dated Sep. 25, 2008, in co-pending related U.S. Appl. No. 10/763,801, 50 pgs.
- US Office Action dated Sep. 29, 2008, in co-pending related U.S. Appl. No. 10/764,339, 46 pgs.
- Viola, P. et al., “Rapid Object Detection using a Boosted Cascade of Simple Features”, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001, pp. I-511-I-518, vol. 1.
- Viola, P. et al., “Robust Real-Time Face Detection”, International Journal of Computer Vision, 2004, pp. 137-154, vol. 57—Issue 2, Kluwer Academic Publishers.
- Wan, S.J. et al., “Variance-based color image quantization for frame buffer display”, S. K. M. Wong Color Research & Application, 1990, pp. 52-58, vol. 15—Issue 1.
- Wiskott L., et al., “Face recognition by elastic bunch graph matching,” Image Processing, Proceedings., International Conference on Santa Barbara, CA, USA, 1997, vol. 1, pp. 129-132.
- Xin He et al., “Real-Time Human Face Detection in Color Image”, International Conference on Machine Learning and Cybernetics, 2003, pp. 2915-2920, vol. 5.
- Yang, Ming-Hsuan et al., “Detecting Faces in Images: A Survey, ISSN:0162-8828, http://portal.acm.org/citation.cfm?id=505621&coll=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843223.”, IEEE Transactions on Pattern Analysis and Machine Intelligence archive, 2002, pp. 34-58, vol. 24—Issue 1, IEEE Computer Society.
- Zhang, Jun et al., “Face Recognition: Eigenface, Elastic Matching, and Neural Nets”, Proceedings of the IEEE, 1997, pp. 1423-1435, vol. 85—Issue 9.
- Zhao, W. et al., “Face recognition: A literature survey, ISSN: 0360-0300, http://portal.acm.org/citation.cfm?id=954342&coll=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843223.”, ACM Computing Surveys (CSUR) archive, 2003, pp. 399-458, vol. 35—Issue 4, ACM Press.
- Zhu Qiang et al., “Fast Human Detection Using a Cascade of Histograms of Oriented Gradients”, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, pp. 1491-1498, IEEE Computer Society.
Type: Grant
Filed: Apr 21, 2010
Date of Patent: Dec 18, 2012
Patent Publication Number: 20100202707
Assignees: DigitalOptics Corporation Europe Limited (Galway), National University of Ireland (Galway)
Inventors: Gabriel Costache (Galway), Peter Corcoran (Claregalway), Rhys Mulryan (Corrandulla), Eran Steinberg (San Francisco, CA)
Primary Examiner: Brian Le
Attorney: Andrew V. Smith
Application Number: 12/764,650
International Classification: G06K 9/00 (20060101); G06K 9/62 (20060101);