Patents by Inventor Cedric Archambeau

Cedric Archambeau has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9684650
    Abstract: A penalized loss is optimized using a corpus of language samples respective to a set of parameters of a language model. The penalized loss includes a function measuring predictive accuracy of the language model respective to the corpus of language samples and a penalty comprising a tree-structured norm. The trained language model with optimized values for the parameters generated by the optimizing is applied to predict a symbol following a sequence of symbols of the language modeled by the language model. In some embodiments the penalty comprises a tree-structured lp-norm, such as a tree-structured l2-norm or a tree-structured l∞-norm. In some embodiments a tree-structured l∞-norm operates on a collapsed suffix trie in which any series of suffixes of increasing lengths which are always observed in the same context are collapsed into a single node. The optimizing may be performed using a proximal step algorithm.
    Type: Grant
    Filed: September 10, 2014
    Date of Patent: June 20, 2017
    Assignee: XEROX CORPORATION
    Inventors: Anil Kumar Nelakanti, Guillaume M. Bouchard, Cedric Archambeau, Francis Bach, Julien Mairal
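
A minimal sketch, in Python with NumPy, of the kind of proximal step the abstract above describes for a tree-structured l2 penalty. The grouping of parameters by suffix-trie node, the leaves-to-root ordering, and all function and variable names are illustrative assumptions, not the patented implementation.

    import numpy as np

    def prox_tree_l2(w, groups, lam):
        """Proximal operator of lam * sum_g ||w[g]||_2 for nested groups (each
        group assumed to be a trie node together with all its descendants),
        visited leaves-to-root so that composing the block soft-thresholding
        steps computes the tree-structured prox."""
        w = w.copy()
        for idx in groups:                      # idx: index array of one group
            block = w[idx]
            norm = np.linalg.norm(block)
            w[idx] = np.zeros_like(block) if norm <= lam else (1.0 - lam / norm) * block
        return w

    def proximal_step(w, grad, groups, step_size, lam):
        """One proximal-gradient update: a gradient step on the smooth
        predictive-loss term, then the prox of the tree-structured penalty."""
        return prox_tree_l2(w - step_size * grad, groups, step_size * lam)

    # Toy usage: three parameters, nested groups ordered leaves-to-root.
    w = np.array([0.5, -1.2, 2.0])
    groups = [np.array([2]), np.array([1, 2]), np.array([0, 1, 2])]
    print(proximal_step(w, np.zeros(3), groups, step_size=0.1, lam=1.0))
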
  • Publication number: 20160070697
    Abstract: A penalized loss is optimized using a corpus of language samples respective to a set of parameters of a language model. The penalized loss includes a function measuring predictive accuracy of the language model respective to the corpus of language samples and a penalty comprising a tree-structured norm. The trained language model with optimized values for the parameters generated by the optimizing is applied to predict a symbol following a sequence of symbols of the language modeled by the language model. In some embodiments the penalty comprises a tree-structured lp-norm, such as a tree-structured l2-norm or a tree-structured l∞-norm. In some embodiments a tree-structured l∞-norm operates on a collapsed suffix trie in which any series of suffixes of increasing lengths which are always observed in the same context are collapsed into a single node. The optimizing may be performed using a proximal step algorithm.
    Type: Application
    Filed: September 10, 2014
    Publication date: March 10, 2016
    Inventors: Anil Kumar Nelakanti, Guillaume M. Bouchard, Cedric Archambeau, Francis Bach, Julien Mairal
  • Patent number: 9069736
    Abstract: A method for performing data processing through a pipeline of components includes receiving a set of training observations, each including partial user feedback relating to error in data output by the pipeline for respective input data. Some pipeline components commit errors for at least some of the input data, contributing to an error in the respective output data. A prediction model models a probability of a pipeline component committing an error, given input data. Model parameters are learned using the training observations. For a new observation which includes input data and, optionally, partial user feedback indicating that an error has occurred in processing the new input data, without specifying which pipeline component(s) contributed to the observed error in the output data, a prediction is made as to which of the pipeline components contributed to the error in the output (if any).
    Type: Grant
    Filed: July 9, 2013
    Date of Patent: June 30, 2015
    Assignee: XEROX CORPORATION
    Inventors: William Michael Darling, Guillaume M. Bouchard, Cedric Archambeau
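
A hedged sketch of how per-component error probabilities could be combined to assign blame from partial feedback, as the abstract above describes. The per-component logistic models, the independence assumption, and all names are illustrative; the patent's prediction model is not reproduced here.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def component_error_probs(x, weights):
        """P(component k commits an error | input features x), modelled here as
        one logistic regression per pipeline component."""
        return np.array([sigmoid(w @ x) for w in weights])

    def blame_components(x, weights):
        """Given partial feedback that the output contains an error, but not
        which stage caused it, return P(component k erred | some component
        erred), under the (assumed) independence of component errors."""
        p = component_error_probs(x, weights)
        p_some_error = 1.0 - np.prod(1.0 - p)
        return p / p_some_error

    # Toy usage: a 3-component pipeline with 4-dimensional input features.
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=4) for _ in range(3)]
    print(blame_components(rng.normal(size=4), weights))
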
  • Publication number: 20150019912
    Abstract: A method for performing data processing through a pipeline of components includes receiving a set of training observations, each including partial user feedback relating to error in data output by the pipeline for respective input data. Some pipeline components commit errors for at least some of the input data, contributing to an error in the respective output data. A prediction model models a probability of a pipeline component committing an error, given input data. Model parameters are learned using the training observations. For a new observation which includes input data and, optionally, partial user feedback indicating that an error has occurred in processing the new input data, without specifying which pipeline component(s) contributed to the observed error in the output data, a prediction is made as to which of the pipeline components contributed to the error in the output (if any).
    Type: Application
    Filed: July 9, 2013
    Publication date: January 15, 2015
    Inventors: William Michael Darling, Guillaume M. Bouchard, Cedric Archambeau
  • Patent number: 8924315
    Abstract: Multi-task regression or classification includes optimizing parameters of a Bayesian model representing relationships between D features and P tasks, where D≥1 and P≥1, respective to training data comprising sets of values for the D features annotated with values for the P tasks. The Bayesian model includes a matrix-variate prior having features and tasks dimensions of dimensionality D and P respectively. The matrix-variate prior is partitioned into a plurality of blocks, and the optimizing of parameters of the Bayesian model includes inferring prior distributions for the blocks of the matrix-variate prior that induce sparseness of the plurality of blocks. Values of the P tasks are predicted for a set of input values for the D features using the optimized Bayesian model. The optimizing also includes decomposing the matrix-variate prior into a product of matrices including a matrix of reduced rank in the tasks dimension that encodes correlations between tasks.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: December 30, 2014
    Assignee: Xerox Corporation
    Inventors: Cedric Archambeau, Shengbo Guo, Onno Zoeter, Jean-Marc Andreoli
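
The sketch below illustrates, under stated assumptions, the low-rank, block-penalized structure the abstract above attributes to the matrix-variate parameter: it only evaluates a MAP-style surrogate objective and a prediction, not the full Bayesian inference. Block choices, dimensions, and names are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    D, P, K, N = 20, 5, 2, 100            # features, tasks, reduced rank, samples

    # Low-rank decomposition of the D x P weight matrix: the K x P factor V
    # (K < P) encodes correlations between the tasks.
    U = rng.normal(size=(D, K))
    V = rng.normal(size=(K, P))

    X = rng.normal(size=(N, D))
    Y = X @ (U @ V) + 0.1 * rng.normal(size=(N, P))   # synthetic multi-task targets

    def map_objective(U, V, X, Y, blocks, lam):
        """Squared multi-task prediction error plus a group penalty over blocks
        of W = U @ V, a crude stand-in for the block-sparsity-inducing priors."""
        W = U @ V
        fit = np.sum((Y - X @ W) ** 2)
        penalty = sum(np.linalg.norm(W[np.ix_(rows, cols)]) for rows, cols in blocks)
        return fit + lam * penalty

    # Partition the feature dimension into two blocks spanning all tasks.
    blocks = [(np.arange(0, 10), np.arange(P)), (np.arange(10, 20), np.arange(P))]
    print(map_objective(U, V, X, Y, blocks, lam=1.0))

    # Prediction for new inputs reuses the same low-rank parameterisation.
    Y_pred = rng.normal(size=(3, D)) @ (U @ V)
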
  • Patent number: 8880439
    Abstract: In a recommender method, Bayesian Matrix Factorization (BMF) is performed on a matrix having user and item dimensions and matrix elements containing user ratings for items made by users in order to train a probabilistic collaborative filtering model. A recommendation is generated for a user using the probabilistic collaborative filtering model. The recommendation may comprise a predicted item rating, or an identification of one or more recommended items. The recommender method is suitably performed by an electronic data processing device. The BMF may employ non-Gaussian priors, such as Student-t priors. The BMF may additionally or alternatively employ a heteroscedastic noise model comprising priors that include (1) a row dependent variance component that depends upon the matrix row and (2) a column dependent variance component that depends upon the matrix column.
    Type: Grant
    Filed: February 27, 2012
    Date of Patent: November 4, 2014
    Assignee: Xerox Corporation
    Inventors: Cedric Archambeau, Guillaume Bouchard, Balaji Lakshminarayanan
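
A small illustrative sketch of the prediction and the heteroscedastic noise model mentioned in the abstract above: the rating prediction is a dot product of latent factors, and the observation variance is the sum of a row-dependent and a column-dependent component. The Student-t priors and the Bayesian Matrix Factorization inference itself are not shown; all names and values are assumptions.

    import numpy as np

    def predict_rating(U, V, i, j):
        """Predicted rating of item j by user i: dot product of latent factors."""
        return U[i] @ V[j]

    def log_likelihood(R, observed, U, V, row_var, col_var):
        """Gaussian observation log-likelihood with heteroscedastic noise whose
        variance is row_var[i] + col_var[j] for matrix element (i, j)."""
        ll = 0.0
        for i, j in zip(*np.nonzero(observed)):
            var = row_var[i] + col_var[j]
            resid = R[i, j] - predict_rating(U, V, i, j)
            ll += -0.5 * (np.log(2.0 * np.pi * var) + resid ** 2 / var)
        return ll

    # Toy usage: 4 users, 3 items, rank-2 factors, two observed ratings.
    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(4, 2)), rng.normal(size=(3, 2))
    R = np.zeros((4, 3))
    R[0, 1], R[2, 2] = 4.0, 2.0
    observed = R != 0
    print(log_likelihood(R, observed, U, V, np.full(4, 0.5), np.full(3, 0.5)))
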
  • Publication number: 20140156231
    Abstract: A multi-relational data set is represented by a probabilistic multi-relational data model in which each entity of the multi-relational data set is represented by a D-dimensional latent feature vector. The probabilistic multi-relational data model is trained using a collection of observations of relations between entities of the multi-relational data set. The collection of observations includes observations of at least two different relation types. A prediction is generated for an observation of a relation between two or more entities of the multi-relational data set based on a dot product of the optimized D-dimensional latent feature vectors representing the two or more entities. The training may comprise optimizing the D-dimensional latent feature vectors to maximize likelihood of the collection of observations, for example by Bayesian inference performed using Gibbs sampling.
    Type: Application
    Filed: November 30, 2012
    Publication date: June 5, 2014
    Applicant: XEROX CORPORATION
    Inventors: Shengbo Guo, Boris Chidlovskii, Cedric Archambeau, Guillaume Bouchard, Dawei Yin
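
A minimal sketch of the dot-product prediction step described in the abstract above, assuming a single shared D-dimensional latent vector per entity and a logistic link for binary relation observations. Training by Bayesian inference with Gibbs sampling is not reproduced; names are illustrative.

    import numpy as np

    def relation_score(latent, e1, e2):
        """Score for a relation between entities e1 and e2: the dot product of
        their D-dimensional latent feature vectors."""
        return latent[e1] @ latent[e2]

    def relation_probability(latent, e1, e2):
        """Logistic link mapping the dot-product score to a probability for an
        observed binary relation of any relation type."""
        return 1.0 / (1.0 + np.exp(-relation_score(latent, e1, e2)))

    # Toy usage: 5 entities with D = 3 latent dimensions shared across relation
    # types; observations of different relation types are all scored this way.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(5, 3))
    print(relation_probability(latent, 0, 3))
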
  • Publication number: 20130226839
    Abstract: In a recommender method, Bayesian Matrix Factorization (BMF) is performed on a matrix having user and item dimensions and matrix elements containing user ratings for items made by users in order to train a probabilistic collaborative filtering model. A recommendation is generated for a user using the probabilistic collaborative filtering model. The recommendation may comprise a predicted item rating, or an identification of one or more recommended items. The recommender method is suitably performed by an electronic data processing device. The BMF may employ non-Gaussian priors, such as Student-t priors. The BMF may additionally or alternatively employ a heteroscedastic noise model comprising priors that include (1) a row dependent variance component that depends upon the matrix row and (2) a column dependent variance component that depends upon the matrix column.
    Type: Application
    Filed: February 27, 2012
    Publication date: August 29, 2013
    Applicant: Xerox Corporation
    Inventors: Cedric Archambeau, Guillaume Bouchard, Balaji Lakshminarayanan
  • Publication number: 20130151441
    Abstract: Multi-task regression or classification includes optimizing parameters of a Bayesian model representing relationships between D features and P tasks, where D≥1 and P≥1, respective to training data comprising sets of values for the D features annotated with values for the P tasks. The Bayesian model includes a matrix-variate prior having features and tasks dimensions of dimensionality D and P respectively. The matrix-variate prior is partitioned into a plurality of blocks, and the optimizing of parameters of the Bayesian model includes inferring prior distributions for the blocks of the matrix-variate prior that induce sparseness of the plurality of blocks. Values of the P tasks are predicted for a set of input values for the D features using the optimized Bayesian model. The optimizing also includes decomposing the matrix-variate prior into a product of matrices including a matrix of reduced rank in the tasks dimension that encodes correlations between tasks.
    Type: Application
    Filed: December 13, 2011
    Publication date: June 13, 2013
    Applicant: Xerox Corporation
    Inventors: Cedric Archambeau, Shengbo Guo, Onno Zoeter, Jean-Marc Andreoli