Patents by Inventor Florent C. Perronnin
Florent C. Perronnin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10635949
Abstract: A system and method enable semantic comparisons to be made between word images and concepts. Training word images and their concept labels are used to learn parameters of a neural network for embedding word images and concepts in a semantic subspace in which comparisons can be made between word images and concepts without the need for transcribing the text content of the word image. The training of the neural network aims to minimize a ranking loss over the training set, where non-relevant concepts for an image which are ranked more highly than relevant ones penalize the ranking loss.
Type: Grant
Filed: July 7, 2015
Date of Patent: April 28, 2020
Assignee: XEROX CORPORATION
Inventors: Albert Gordo Soldevila, Jon Almazán Almazán, Naila Murray, Florent C. Perronnin
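The ranking loss described in this abstract can be pictured with a toy sketch: non-relevant concepts that score within a margin of (or above) a relevant concept contribute to the loss. Everything below (the function name, the 2-D embeddings, the dot-product similarity) is a hypothetical illustration, not the patented implementation.

```python
import numpy as np

def ranking_loss(image_emb, concept_embs, relevant, margin=1.0):
    """Hinge-style ranking loss: a non-relevant concept scored within
    `margin` of (or above) a relevant one adds to the loss."""
    scores = concept_embs @ image_emb            # similarity of the image to each concept
    loss = 0.0
    for r in np.flatnonzero(relevant):           # relevant concepts
        for n in np.flatnonzero(~relevant):      # non-relevant concepts
            loss += max(0.0, margin - scores[r] + scores[n])
    return loss

# Toy 2-D example: concept 0 is relevant to the image, concept 1 is not.
img = np.array([1.0, 0.0])
concepts = np.array([[1.0, 0.0],                 # scores 1.0 against img
                     [0.0, 1.0]])                # scores 0.0 against img
relevant = np.array([True, False])
```

Training would adjust the embedding parameters to drive this loss down, so that relevant concepts outrank non-relevant ones by at least the margin.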
-
Patent number: 10331976
Abstract: In image classification, each class of a set of classes is embedded in an attribute space where each dimension of the attribute space corresponds to a class attribute. The embedding generates a class attribute vector for each class of the set of classes. A set of parameters of a prediction function operating in the attribute space respective to a set of training images annotated with classes of the set of classes is optimized such that the prediction function with the optimized set of parameters optimally predicts the annotated classes for the set of training images. The prediction function with the optimized set of parameters is applied to an input image to generate at least one class label for the input image. The image classification does not include applying a class attribute classifier to the input image.
Type: Grant
Filed: June 21, 2013
Date of Patent: June 25, 2019
Assignee: XEROX CORPORATION
Inventors: Zeynep Akata, Florent C. Perronnin, Zaid Harchaoui, Cordelia L. Schmid
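Prediction in an attribute space of this kind can be pictured as a bilinear compatibility between image features and class attribute vectors: the predicted class is the one whose attribute vector is most compatible with the projected image, and no per-attribute classifier is ever run on the image. A minimal numpy sketch under that assumption (the matrix `W`, the attribute vectors, and all numbers are hypothetical):

```python
import numpy as np

def predict_class(x, W, class_attributes):
    """Score each class by the compatibility between the image features x,
    projected into attribute space by W, and that class's attribute
    vector; return the index of the best-scoring class."""
    scores = class_attributes @ (W @ x)      # one score per class
    return int(np.argmax(scores))

# Toy setup: 3 image features, 2 attributes, 2 classes.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])              # maps image space -> attribute space
class_attributes = np.array([[1.0, 0.0],    # class 0 described by attribute 0
                             [0.0, 1.0]])   # class 1 described by attribute 1
x0 = np.array([0.9, 0.1, 0.5])              # mostly exhibits attribute 0
x1 = np.array([0.1, 0.9, 0.0])              # mostly exhibits attribute 1
```

Optimizing the parameters `W` on annotated training images is what the abstract's learning step would correspond to in this sketch.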
-
Patent number: 9792492
Abstract: A method for extracting a representation from an image includes inputting an image to a pre-trained neural network. For the image, the gradient of a loss function is computed with respect to the parameters of the neural network. A gradient representation is extracted for the image based on the computed gradients, which can be used, for example, for classification or retrieval.
Type: Grant
Filed: July 7, 2015
Date of Patent: October 17, 2017
Assignee: XEROX CORPORATION
Inventors: Albert Gordo Soldevila, Adrien Gaidon, Florent C. Perronnin
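A gradient representation of this kind can be sketched with a deliberately tiny model: a single logistic unit stands in for the pre-trained network, and the flattened gradient of its loss with respect to the parameters serves as the image descriptor. All names and values below are hypothetical illustrations.

```python
import numpy as np

def gradient_representation(x, w, b, y=1.0):
    """Gradient of a logistic loss w.r.t. the model parameters (w, b),
    flattened into a fixed-length descriptor for image features x."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))     # forward pass (sigmoid)
    dz = p - y                       # d(log loss)/dz
    grad_w = dz * x                  # d(loss)/dw
    grad_b = dz                      # d(loss)/db
    return np.concatenate([grad_w, [grad_b]])

# Toy image features and an untrained (zero) parameter vector.
x = np.array([1.0, 2.0])
w = np.zeros(2)
rep = gradient_representation(x, w, 0.0)
```

The descriptor has one entry per model parameter, so two images can be compared by comparing their gradient vectors, e.g. for retrieval.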
-
Patent number: 9762393
Abstract: Authentication methods are disclosed for determining whether a person or object to be authenticated is a member of a set of authorized persons or objects. A query signature is acquired comprising a vector whose elements store values of an ordered set of features for the person or object to be authenticated. The query signature is compared with an aggregate signature comprising a vector whose elements store values of the ordered set of features for the set of authorized persons or objects. The individual signatures for the authorized persons or objects are not stored; only the aggregate signature is. It is determined whether the person or object to be authenticated is a member of the set of authorized persons or objects based on the comparison. The comparing may comprise computing an inner product of the query signature and the aggregate signature, with the determining being based on the inner product.
Type: Grant
Filed: March 19, 2015
Date of Patent: September 12, 2017
Assignee: Conduent Business Services, LLC
Inventors: Albert Gordo Soldevila, Naila Murray, Florent C. Perronnin
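The scheme the abstract describes stores only an element-wise sum of the member signatures and tests membership with a thresholded inner product. The sketch below illustrates exactly that mechanism; the signature values and the threshold are made up for illustration.

```python
import numpy as np

def enroll(signatures):
    """Aggregate enrollment: only the sum of the member signatures
    is kept; the individual signatures are discarded."""
    return np.sum(signatures, axis=0)

def is_member(query, aggregate, threshold):
    """Membership test: inner product of the query signature with
    the aggregate signature, compared against a threshold."""
    return float(query @ aggregate) >= threshold

# Toy signatures (in practice these would be high-dimensional and
# near-orthogonal, which is what makes the inner-product test work).
sigs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0]])
aggregate = enroll(sigs)                       # [1, 1, 0, 0]
member   = np.array([1.0, 0.0, 0.0, 0.0])     # matches an enrolled signature
outsider = np.array([0.0, 0.0, 1.0, 0.0])     # orthogonal to the aggregate
```

A member's signature correlates with its own contribution inside the aggregate, while an outsider's signature is (ideally) near-orthogonal to all of them, so its inner product stays below the threshold.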
-
Patent number: 9697439
Abstract: An object detection method includes for each of a set of patches of an image, encoding features of the patch with a non-linear mapping function, and computing per-patch statistics based on the encoded features for approximating a window-level non-linear operation by a patch-level operation. Then, windows are extracted from the image, each window comprising a sub-set of the set of patches. Each of the windows is scored based on the computed patch statistics of the respective sub-set of patches. Objects, if any, can then be detected in the image, based on the window scores. The method and system allow the non-linear operations to be performed only at the patch level, reducing the computation time of the method, since there are generally many more windows than patches, while not impacting performance unduly, as compared to a system which performs non-linear operations at the window level.
Type: Grant
Filed: October 2, 2014
Date of Patent: July 4, 2017
Assignee: XEROX CORPORATION
Inventors: Adrien Gaidon, Diane Larlus-Larrondo, Florent C. Perronnin
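The patch-level trick can be illustrated as follows: apply the non-linear encoding once per patch, then score any window as a purely linear sum over the encoded patches it contains. Since windows greatly outnumber patches, the expensive non-linearity runs far fewer times. The signed square root below is a hypothetical stand-in for the non-linear mapping function, and all data is made up.

```python
import numpy as np

def encode_patches(patch_feats):
    """Non-linear encoding applied once per patch (here a signed
    square root, a common feature normalization)."""
    return np.sign(patch_feats) * np.sqrt(np.abs(patch_feats))

def score_window(encoded, patch_ids, w):
    """Window score: a purely linear aggregation of the already
    encoded patches the window contains."""
    return float(w @ encoded[patch_ids].sum(axis=0))

# Toy image with 4 patches (2-D features) and two windows.
patch_feats = np.array([[4.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0],
                        [0.0, 0.0]])
encoded = encode_patches(patch_feats)   # non-linearity runs once per patch
w = np.array([1.0, 1.0])                # hypothetical linear scoring weights
```

Scoring a new window now costs only an indexed sum and a dot product, regardless of how many windows are evaluated.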
-
Patent number: 9639806
Abstract: A system and method for evaluating iconicity of an image are provided. In the method, at least one test image is received, each test image including an object in a selected class. Properties related to iconicity are computed for each test image. The properties may include one or more of: a) a direct measure of iconicity, which is computed with a direct iconicity prediction model which has been learned on a set of training images, each training image labeled with an iconicity score; b) one or more class-independent properties; and c) one or more class-dependent properties. A measure of iconicity of each of the test images is computed, based on the computed properties. By combining a set of complementary properties, an iconicity measure which shows good agreement with human evaluations of iconicity can be obtained.
Type: Grant
Filed: April 15, 2014
Date of Patent: May 2, 2017
Assignee: XEROX CORPORATION
Inventors: Yangmuzi Zhang, Diane Larlus-Larrondo, Florent C. Perronnin
-
Patent number: 9607245
Abstract: A method includes adapting a universal generative model of local descriptors to a first camera to obtain a first camera-dependent generative model. The same universal generative model is also adapted to a second camera to obtain a second camera-dependent generative model. From a first image captured by the first camera, a first image-level descriptor is extracted, using the first camera-dependent generative model. From a second image captured by the second camera, a second image-level descriptor is extracted using the second camera-dependent generative model. A similarity is computed between the first image-level descriptor and the second image-level descriptor. Information is output, based on the computed similarity. The adaptation allows differences between the image-level descriptors to be shifted towards deviations in image content, rather than the imaging conditions.
Type: Grant
Filed: December 2, 2014
Date of Patent: March 28, 2017
Assignee: XEROX CORPORATION
Inventors: Usman Tariq, José Antonio Rodriguez Serrano, Florent C. Perronnin
-
Patent number: 9600738
Abstract: A system and method enable similarity measures to be computed between pairs of images and between a color name and an image in a common feature space. Reference image representations are generated by embedding color name descriptors for each reference image in the common feature space. Color name representations for different color names are generated by embedding synthesized color name descriptors in the common feature space. For a query including a color name, a similarity is computed between its color name representation and one or more of the reference image representations. For a query which includes a query image, a similarity is computed between a representation of the query image and one or more of the reference image representations. The method also enables combined queries which include both a query image and a color name to be performed. One or more retrieved reference images, or information based thereon, is then output.
Type: Grant
Filed: April 7, 2015
Date of Patent: March 21, 2017
Assignee: XEROX CORPORATION
Inventors: Naila Murray, Florent C. Perronnin
-
Patent number: 9589231
Abstract: A method for diagnosis assistance exploits similarity between a new medical case and existing medical cases and experts when embedded in a common embedding space. Different types of queries are provided for, including a query-by-cases and a query-by-experts. These may be associated with different cost structures that encourage the requester to use the query-by-cases first and seek expert assistance if this proves unsuccessful. Depending on whether the query-by-cases or query-by-experts is requested, a subset of the existing cases or experts is identified based on the similarity of their representations, in the embedding space, with a representation of the new case in the embedding space. There may then be provision for communicating the new case to a selected one or more of the subset of experts for the expert to attempt to provide a diagnosis.
Type: Grant
Filed: April 28, 2014
Date of Patent: March 7, 2017
Assignee: XEROX CORPORATION
Inventors: Gabriela Csurka, Florent C. Perronnin
-
Publication number: 20170011279
Abstract: A system and method enable semantic comparisons to be made between word images and concepts. Training word images and their concept labels are used to learn parameters of a neural network for embedding word images and concepts in a semantic subspace in which comparisons can be made between word images and concepts without the need for transcribing the text content of the word image. The training of the neural network aims to minimize a ranking loss over the training set, where non-relevant concepts for an image which are ranked more highly than relevant ones penalize the ranking loss.
Type: Application
Filed: July 7, 2015
Publication date: January 12, 2017
Applicant: Xerox Corporation
Inventors: Albert Gordo Soldevila, Jon Almazán Almazán, Naila Murray, Florent C. Perronnin
-
Publication number: 20170011280
Abstract: A method for extracting a representation from an image includes inputting an image to a pre-trained neural network. For the image, the gradient of a loss function is computed with respect to the parameters of the neural network. A gradient representation is extracted for the image based on the computed gradients, which can be used, for example, for classification or retrieval.
Type: Application
Filed: July 7, 2015
Publication date: January 12, 2017
Applicant: Xerox Corporation
Inventors: Albert Gordo Soldevila, Adrien Gaidon, Florent C. Perronnin
-
Patent number: 9514391
Abstract: In an image classification method, a feature vector representing an input image is generated by unsupervised operations including extracting local descriptors from patches distributed over the input image, and a classification value for the input image is generated by applying a neural network (NN) to the feature vector. Extracting the feature vector may include encoding the local descriptors extracted from each patch using a generative model, such as Fisher vector encoding, aggregating the encoded local descriptors to form a vector, projecting the vector into a space of lower dimensionality, for example using Principal Component Analysis (PCA), and normalizing the feature vector of lower dimensionality to produce the feature vector representing the input image. A set of mid-level features representing the input image may be generated as the output of an intermediate layer of the NN.
Type: Grant
Filed: April 20, 2015
Date of Patent: December 6, 2016
Assignee: XEROX CORPORATION
Inventors: Florent C. Perronnin, Diane Larlus-Larrondo
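The unsupervised pipeline (encode and aggregate local descriptors, project to lower dimension, normalize) feeding a neural classifier can be sketched as below. Mean pooling and a fixed projection matrix are simple stand-ins for Fisher vector encoding and a learned PCA, and a single linear layer stands in for the NN; all names and values are hypothetical.

```python
import numpy as np

def image_feature(local_descs, proj):
    """Aggregate per-patch descriptors, project to lower dimension,
    then L2-normalize (stand-ins for FV encoding + PCA)."""
    v = local_descs.mean(axis=0)         # aggregate over patches
    v = proj @ v                         # dimensionality reduction
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def nn_classify(feat, W, b):
    """Minimal stand-in for the NN: one linear layer + argmax."""
    return int(np.argmax(W @ feat + b))

# Toy data: two 3-D patch descriptors, projected down to 2-D.
descs = np.array([[2.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0]])
proj = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])       # hypothetical PCA-like projection
feat = image_feature(descs, proj)
W, b = np.eye(2), np.zeros(2)            # hypothetical classifier weights
```

In the patented method the classifier is a multi-layer NN, so an intermediate layer's activations can additionally serve as mid-level features; the sketch keeps only the final classification step.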
-
Publication number: 20160307071
Abstract: In an image classification method, a feature vector representing an input image is generated by unsupervised operations including extracting local descriptors from patches distributed over the input image, and a classification value for the input image is generated by applying a neural network (NN) to the feature vector. Extracting the feature vector may include encoding the local descriptors extracted from each patch using a generative model, such as Fisher vector encoding, aggregating the encoded local descriptors to form a vector, projecting the vector into a space of lower dimensionality, for example using Principal Component Analysis (PCA), and normalizing the feature vector of lower dimensionality to produce the feature vector representing the input image. A set of mid-level features representing the input image may be generated as the output of an intermediate layer of the NN.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventors: Florent C. Perronnin, Diane Larlus-Larrondo
-
Publication number: 20160300118
Abstract: A system and method enable similarity measures to be computed between pairs of images and between a color name and an image in a common feature space. Reference image representations are generated by embedding color name descriptors for each reference image in the common feature space. Color name representations for different color names are generated by embedding synthesized color name descriptors in the common feature space. For a query including a color name, a similarity is computed between its color name representation and one or more of the reference image representations. For a query which includes a query image, a similarity is computed between a representation of the query image and one or more of the reference image representations. The method also enables combined queries which include both a query image and a color name to be performed. One or more retrieved reference images, or information based thereon, is then output.
Type: Application
Filed: April 7, 2015
Publication date: October 13, 2016
Inventors: Naila Murray, Florent C. Perronnin
-
Publication number: 20160277190
Abstract: Authentication methods are disclosed for determining whether a person or object to be authenticated is a member of a set of authorized persons or objects. A query signature is acquired comprising a vector whose elements store values of an ordered set of features for the person or object to be authenticated. The query signature is compared with an aggregate signature comprising a vector whose elements store values of the ordered set of features for the set of authorized persons or objects. The individual signatures for the authorized persons or objects are not stored; only the aggregate signature is. It is determined whether the person or object to be authenticated is a member of the set of authorized persons or objects based on the comparison. The comparing may comprise computing an inner product of the query signature and the aggregate signature, with the determining being based on the inner product.
Type: Application
Filed: March 19, 2015
Publication date: September 22, 2016
Inventors: Albert Gordo Soldevila, Naila Murray, Florent C. Perronnin
-
Patent number: 9443164
Abstract: A system and method for object instance localization in an image are disclosed. In the method, keypoints are detected in a target image and candidate regions are detected by matching the detected keypoints to keypoints detected in a set of reference images. Similarity measures between global descriptors computed for the located candidate regions and global descriptors for the reference images are computed and labels are assigned to at least some of the candidate regions based on the computed similarity measures. Performing the region detection based on keypoint matching while performing the labeling based on global descriptors improves object instance detection.
Type: Grant
Filed: December 2, 2014
Date of Patent: September 13, 2016
Inventors: Milan Sulc, Albert Gordo Soldevila, Diane Larlus Larrondo, Florent C. Perronnin
-
Patent number: 9424492
Abstract: A method for generating an image representation includes generating a set of embedded descriptors, comprising, for each of a set of patches of an image, extracting a patch descriptor which is representative of the pixels in the patch and embedding the patch descriptor in a multidimensional space to form an embedded descriptor. An image representation is generated by aggregating the set of embedded descriptors. In the aggregation, each descriptor is weighted with a respective weight in a set of weights, the set of weights being computed based on the patch descriptors for the image. Information based on the image representation is output. At least one of the extracting of the patch descriptors, embedding the patch descriptors, and generating the image representation is performed with a computer processor.
Type: Grant
Filed: December 27, 2013
Date of Patent: August 23, 2016
Assignee: XEROX CORPORATION
Inventors: Naila Murray, Florent C. Perronnin
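The key point of this aggregation is that the weights are computed from the patch descriptors themselves rather than being fixed. The sketch below derives the weights as a softmax over scores from a hypothetical learned vector `v`; the weighting rule and all values are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def weighted_aggregate(embedded, patch_descs, v):
    """Aggregate embedded descriptors with weights derived from the
    patch descriptors: softmax of a (hypothetical) learned score v."""
    s = patch_descs @ v                  # one score per patch
    w = np.exp(s - s.max())              # numerically stable softmax
    w = w / w.sum()
    return w @ embedded                  # weighted sum of embedded descriptors

# Toy example: 2 patches; the first gets a much higher score.
embedded = np.array([[1.0, 0.0],
                     [0.0, 1.0]])        # embedded descriptors
patch_descs = np.array([[10.0],
                        [0.0]])          # raw patch descriptors
v = np.array([1.0])                      # hypothetical scoring vector
rep = weighted_aggregate(embedded, patch_descs, v)
```

The resulting representation is dominated by the highly weighted patch, which is the intended effect of descriptor-dependent weighting.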
-
Patent number: 9412031
Abstract: A method for recognition of an identifier such as a license plate includes storing first visual signatures, each extracted from a first image of a respective object, such as a vehicle, captured at a first location, and first information associated with the first captured image, such as a time stamp. A second visual signature is extracted from a second image of a second object captured at a second location and second information associated with the second captured image is acquired. A measure of similarity is computed between the second visual signature and at least some of the first visual signatures to identify a matching one. A test is performed, which is a function of the first and the second information associated with the matching signatures. Only when it is confirmed that the test has been met, identifier recognition is performed to identify the identifier of the second object.
Type: Grant
Filed: October 16, 2013
Date of Patent: August 9, 2016
Assignee: XEROX CORPORATION
Inventors: Jose Antonio Rodriguez-Serrano, Florent C. Perronnin, Herve Poirier, Frederic Roulland, Victor Ciriza
-
Patent number: 9384423
Abstract: A system and method for computing confidence in an output of a text recognition system includes performing character recognition on an input text image with a text recognition system to generate a candidate string of characters. A first representation is generated, based on the candidate string of characters, and a second representation is generated based on the input text image. A confidence in the candidate string of characters is computed based on a computed similarity between the first and second representations in a common embedding space.
Type: Grant
Filed: May 28, 2013
Date of Patent: July 5, 2016
Assignee: XEROX CORPORATION
Inventors: Jose Antonio Rodriguez-Serrano, Florent C. Perronnin
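Once the string and the image are embedded in the common space, the confidence reduces to a similarity between two vectors. A cosine-similarity sketch of that final step (the embedding functions are omitted and the vectors below are hypothetical):

```python
import numpy as np

def confidence(string_emb, image_emb):
    """Cosine similarity between the candidate string's embedding and
    the text image's embedding, used as a recognition confidence."""
    return float(string_emb @ image_emb /
                 (np.linalg.norm(string_emb) * np.linalg.norm(image_emb)))

# Hypothetical embeddings: one image that matches the candidate string
# and one that does not (orthogonal direction in the common space).
s_emb = np.array([1.0, 0.0])
i_emb_match = np.array([2.0, 0.0])
i_emb_other = np.array([0.0, 3.0])
```

A downstream system could then accept the recognized string only when this confidence clears a tuned threshold.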
-
Patent number: 9367763
Abstract: A method for text-to-image matching includes generating representations of text images, such as license plate images, by embedding each text image into a first vectorial space with a first embedding function. With a second embedding function, a character string, such as a license plate number to be matched, is embedded into a second vectorial space to generate a character string representation. A compatibility is computed between the character string representation and one or more of the text image representations to identify a matching one. The compatibility is computed with a function that uses a transformation which is learned on a training set of labeled images. The learning uses a loss function that aggregates a text-to-image loss and an image-to-text loss over the training set. The image-to-text loss penalizes the transformation when it incorrectly ranks a pair of character string representations, given an image representation corresponding to one of them.
Type: Grant
Filed: January 12, 2015
Date of Patent: June 14, 2016
Assignee: XEROX CORPORATION
Inventors: Albert Gordo Soldevila, Florent C. Perronnin