Patents by Inventor Chris J. C. Burges

Chris J. C. Burges has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10346453
    Abstract: Methods and systems for multi-tiered information retrieval training are disclosed. A method includes identifying results in a ranked ordering of results that can be swapped without changing a score determined using a first ranking quality measure, determining a first vector and at least one other vector for each identified swappable result in the ranked ordering of results based on the first ranking quality measure and at least one other ranking quality measure, respectively, and adding the first vector and the at least one other vector for each identified swappable result in the ranked ordering of results to obtain a function of the first vector and the at least one other vector. Access is provided to the function of the first vector and the at least one other vector for use in the multi-tiered information retrieval training.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: July 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chris J. C. Burges, Krysta M. Svore, Maksims Volkovs
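
The training signal this abstract describes is reminiscent of LambdaRank-style gradients, where each pairwise update is scaled by the change a swap would cause in a ranking quality measure, and a second measure is consulted only for pairs the first measure treats as freely swappable. The sketch below illustrates that reading; `delta_primary` and `delta_secondary` are hypothetical callables returning the measure change for swapping two results, and none of this is drawn from the patent's claims.

```python
import numpy as np

def lambda_vector(scores, labels, measure_delta):
    """Pairwise gradient ('lambda') vector for one ranking quality
    measure. measure_delta(i, j) gives the change in the measure if
    results i and j were swapped in the current ranked ordering."""
    lam = np.zeros(len(scores))
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:      # i should outrank j
                rho = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
                lam[i] += rho * abs(measure_delta(i, j))
                lam[j] -= rho * abs(measure_delta(i, j))
    return lam

def combined_lambda(scores, labels, delta_primary, delta_secondary):
    """Add the primary measure's vector to a secondary vector that
    acts only where a swap leaves the primary score unchanged."""
    tie_only = lambda i, j: (delta_secondary(i, j)
                             if delta_primary(i, j) == 0.0 else 0.0)
    return (lambda_vector(scores, labels, delta_primary)
            + lambda_vector(scores, labels, tie_only))
```
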
  • Patent number: 8977537
    Abstract: The described implementations relate to natural language processing, and more particularly to training a language prior model using a model structure. The language prior model can be trained using parameterized representations of lexical structures such as training sentences, as well as parameterized representations of lexical units such as words or n-grams. During training, the parameterized representations of the lexical structures and the lexical units can be adjusted using the model structure. When the language prior model is trained, the parameterized representations of the lexical structures can reflect how the lexical units were used in the lexical structures.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: March 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chris J. C. Burges, Andrzej Pastusiak
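
As a rough picture of adjusting parameterized representations of lexical structures and lexical units together, the toy sketch below learns one vector per sentence and one per word, nudging each toward the other under a squared-error objective. The corpus, dimensionality, and update rule are illustrative assumptions, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Toy corpus: each "lexical structure" is a list of word ids.
vocab = {"the": 0, "cat": 1, "sat": 2, "dog": 3, "ran": 4}
sentences = [[0, 1, 2], [0, 3, 4]]

# Parameterized representations for both structures and units.
word_vecs = rng.normal(scale=0.1, size=(len(vocab), DIM))
sent_vecs = rng.normal(scale=0.1, size=(len(sentences), DIM))

lr = 0.1
for _ in range(100):
    for s, words in enumerate(sentences):
        for w in words:
            # Pull the sentence vector toward the vectors of the words
            # it contains, and each word vector toward the sentence.
            err = sent_vecs[s] - word_vecs[w]
            sent_vecs[s] -= lr * err
            word_vecs[w] += lr * err
```

After training, sentences that share words end up with nearby vectors, which is one way the structure representations come to reflect how the units were used.
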
  • Publication number: 20120330647
    Abstract: The described implementations relate to natural language processing, and more particularly to training a language prior model using a model structure. The language prior model can be trained using parameterized representations of lexical structures such as training sentences, as well as parameterized representations of lexical units such as words or n-grams. During training, the parameterized representations of the lexical structures and the lexical units can be adjusted using the model structure. When the language prior model is trained, the parameterized representations of the lexical structures can reflect how the lexical units were used in the lexical structures.
    Type: Application
    Filed: June 24, 2011
    Publication date: December 27, 2012
    Applicant: Microsoft Corporation
    Inventors: Chris J. C. Burges, Andrzej Pastusiak
  • Patent number: 8255412
    Abstract: Model adaptation may be performed to take a general model trained with a set of training data (possibly large), and adapt the model using a set of domain-specific training data (possibly small). The parameters, structure, or configuration of a model trained in one domain (called the background domain) may be adapted to a different domain (called the adaptation domain), for which there may be a limited amount of training data. The adaptation may be performed using the Boosting Algorithm to select a basis function that optimizes a measure of error of the model as it is iteratively refined, i.e., adapted.
    Type: Grant
    Filed: December 17, 2008
    Date of Patent: August 28, 2012
    Assignee: Microsoft Corporation
    Inventors: Jianfeng Gao, Yi Su, Qiang Wu, Chris J. C. Burges, Krysta Svore, Elbio Renato Torres Abib
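
One concrete way to read this is gradient boosting on the in-domain residual: hold the background model fixed and, each round, fit a small basis function to whatever error remains on the adaptation data. The sketch below uses a depth-1 regression stump over a single feature as the basis function, a simplifying assumption rather than the patent's formulation.

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a depth-1 regression stump (the 'basis function' in this
    simplified sketch) to the residual over a 1-D feature x."""
    best = (np.inf, None)
    for t in np.unique(x)[:-1]:            # candidate split thresholds
        left, right = residual[x <= t], residual[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best[0]:
            best = (err, (t, left.mean(), right.mean()))
    t, lv, rv = best[1]
    return lambda z: np.where(z <= t, lv, rv)

def adapt(background_predict, x_adapt, y_adapt, rounds=10, lr=0.5):
    """Keep the background model fixed and add boosted corrections
    fit to the domain-specific residual."""
    stumps, pred = [], background_predict(x_adapt)
    for _ in range(rounds):
        h = fit_stump(x_adapt, y_adapt - pred)
        stumps.append(h)
        pred = pred + lr * h(x_adapt)
    return lambda z: background_predict(z) + lr * sum(h(z) for h in stumps)
```
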
  • Publication number: 20120158710
    Abstract: Methods and systems for multi-tiered information retrieval training are disclosed. A method includes identifying results in a ranked ordering of results that can be swapped without changing a score determined using a first ranking quality measure, determining a first vector and at least one other vector for each identified swappable result in the ranked ordering of results based on the first ranking quality measure and at least one other ranking quality measure, respectively, and adding the first vector and the at least one other vector for each identified swappable result in the ranked ordering of results to obtain a function of the first vector and the at least one other vector. Access is provided to the function of the first vector and the at least one other vector for use in the multi-tiered information retrieval training.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Chris J. C. Burges, Krysta M. Svore, Maksims Volkovs
  • Patent number: 7840569
    Abstract: A neural network is used to process a set of ranking features in order to determine the relevancy ranking for a set of documents or other items. The neural network calculates a predicted relevancy score for each document and the documents can then be ordered by that score. Alternate embodiments apply a set of data transformations to the ranking features before they are input to the neural network. Training can be used to adapt both the neural network and certain of the data transformations to target environments.
    Type: Grant
    Filed: October 18, 2007
    Date of Patent: November 23, 2010
    Assignee: Microsoft Corporation
    Inventors: Dmitriy Meyerzon, Yauhen Shnitko, Chris J. C. Burges, Michael James Taylor
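
A minimal version of the score-then-sort ranking with an up-front feature transformation might look like the sketch below. The log1p/standardize transform and the untrained random weights are assumptions for illustration; in the patent's setting both the network and some of the transformations are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(features):
    # Example data transformation: compress heavy-tailed ranking
    # features (term counts, clicks) with a log, then standardize.
    x = np.log1p(features)
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# A tiny two-layer scoring network; weights would normally be trained.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def relevance_scores(features):
    h = np.tanh(transform(features) @ W1 + b1)
    return (h @ W2 + b2).ravel()

docs = rng.uniform(0, 100, size=(5, 4))        # 5 documents, 4 raw features
ranking = np.argsort(-relevance_scores(docs))  # best document first
```
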
  • Publication number: 20100153315
    Abstract: Model adaptation may be performed to take a general model trained with a set of training data (possibly large), and adapt the model using a set of domain-specific training data (possibly small). The parameters, structure, or configuration of a model trained in one domain (called the background domain) may be adapted to a different domain (called the adaptation domain), for which there may be a limited amount of training data. The adaptation may be performed using the Boosting Algorithm to select a basis function that optimizes a measure of error of the model as it is iteratively refined, i.e., adapted.
    Type: Application
    Filed: December 17, 2008
    Publication date: June 17, 2010
    Applicant: Microsoft Corporation
    Inventors: Jianfeng Gao, Yi Su, Qiang Wu, Chris J. C. Burges, Krysta Svore, Elbio Renato Torres Abib
  • Publication number: 20090106223
    Abstract: A neural network is used to process a set of ranking features in order to determine the relevancy ranking for a set of documents or other items. The neural network calculates a predicted relevancy score for each document and the documents can then be ordered by that score. Alternate embodiments apply a set of data transformations to the ranking features before they are input to the neural network. Training can be used to adapt both the neural network and certain of the data transformations to target environments.
    Type: Application
    Filed: October 18, 2007
    Publication date: April 23, 2009
    Applicant: Microsoft Corporation
    Inventors: Dmitriy Meyerzon, Yauhen Shnitko, Chris J. C. Burges, Michael James Taylor
  • Publication number: 20040260550
    Abstract: An audio processing system and method for classifying speakers in audio data using a discriminatively-trained classifier. In general, the audio processing system inputs audio data containing unknown speakers and outputs frame tags whereby each tag represents an individual speaker. The audio processing system includes a training system for training a discriminatively-trained classifier (such as a time-delay neural network) and a speaker classification system for using the classifier to segment and classify the speakers. The audio processing method includes two phases. A training phase discriminatively trains the classifier on a speaker training set containing known speakers and produces fixed classifier data. A use phase uses the fixed classifier data in the discriminatively-trained classifier to produce anchor model outputs for every frame of speech in the audio data. The anchor model outputs are mapped to frame tags so that all speech corresponding to a single frame tag comes from a single speaker.
    Type: Application
    Filed: June 20, 2003
    Publication date: December 23, 2004
    Inventors: Chris J. C. Burges, John C. Platt
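
The final mapping step can be pictured as follows: take the best-scoring anchor for each frame, then smooth so that each contiguous tag plausibly covers one speaker. The smoothing heuristic below is an assumption for illustration; the patent does not commit to it, and the anchor outputs would come from the discriminatively-trained classifier (e.g., a time-delay neural network).

```python
import numpy as np

def frames_to_tags(anchor_outputs, min_run=10):
    """Map per-frame anchor-model outputs (frames x anchors) to frame
    tags: best-scoring anchor per frame, with runs shorter than
    min_run absorbed into the preceding run to stabilize the tags."""
    tags = np.argmax(anchor_outputs, axis=1)
    i = 0
    while i < len(tags):
        j = i
        while j < len(tags) and tags[j] == tags[i]:
            j += 1                       # find the end of this run
        if j - i < min_run and i > 0:
            tags[i:j] = tags[i - 1]      # merge the short run backward
        i = j
    return tags
```
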
  • Patent number: 6704718
    Abstract: A system and method for performing trainable nonlinear prediction of transform coefficients in data compression such that the number of bits required to represent the data is reduced. The nonlinear prediction data compression system includes a nonlinear predictor for generating predicted transform coefficients, a nonlinear prediction encoder that uses the predicted transform coefficients to encode original data, and a nonlinear prediction decoder that uses the predicted transform coefficients to decode the encoded bitstream and reconstruct the original data. The nonlinear predictor may be trained using a variety of techniques, including a novel in-loop training technique of the present invention. The present invention also includes a method for using a nonlinear predictor to encode and decode data. The method also includes several novel speedup techniques that improve the performance of the nonlinear prediction data compression and decompression.
    Type: Grant
    Filed: June 5, 2001
    Date of Patent: March 9, 2004
    Assignee: Microsoft Corporation
    Inventors: Chris J. C. Burges, Patrice Y. Simard, Henrique S. Malvar
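
The encode/decode symmetry, including the in-loop idea that the encoder should track the decoder's reconstruction so both sides feed the predictor identical context, can be sketched as below. The FFT stand-in transform, rounding quantizer, and toy predictor are assumptions; the patent's predictor is a trained nonlinear model.

```python
import numpy as np

def encode(blocks, predict, quantize=np.rint):
    """Encode blocks of transform coefficients as quantized residuals
    against a nonlinear predictor; only the residuals are stored."""
    residuals, context = [], None
    for block in blocks:
        c = np.fft.rfft(block).real          # stand-in transform
        if context is None:
            context = np.zeros_like(c)
        r = quantize(c - predict(context))
        residuals.append(r)
        context = predict(context) + r       # track the decoder's state
    return residuals

def decode(residuals, predict):
    """Invert the prediction step to reconstruct the coefficients."""
    coeffs, context = [], np.zeros_like(residuals[0])
    for r in residuals:
        context = predict(context) + r
        coeffs.append(context)
    return coeffs

predict = lambda prev: 0.9 * prev            # toy stand-in predictor
```

Because the encoder updates its context with the same quantized reconstruction the decoder will compute, the two sides never drift apart, and only the (typically small) residuals need to be entropy coded.
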
  • Publication number: 20020184272
    Abstract: A system and method for performing trainable nonlinear prediction of transform coefficients in data compression such that the number of bits required to represent the data is reduced. The nonlinear prediction data compression system includes a nonlinear predictor for generating predicted transform coefficients, a nonlinear prediction encoder that uses the predicted transform coefficients to encode original data, and a nonlinear prediction decoder that uses the predicted transform coefficients to decode the encoded bitstream and reconstruct the original data. The nonlinear predictor may be trained using a variety of techniques, including a novel in-loop training technique of the present invention. The present invention also includes a method for using a nonlinear predictor to encode and decode data. The method also includes several novel speedup techniques that improve the performance of the nonlinear prediction data compression and decompression.
    Type: Application
    Filed: June 5, 2001
    Publication date: December 5, 2002
    Inventors: Chris J. C. Burges, Patrice Y. Simard, Henrique S. Malvar