Patents by Inventor Chris J. C. Burges
Chris J. C. Burges has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10346453
Abstract: Methods and systems for multi-tiered information retrieval training are disclosed. A method includes identifying results in a ranked ordering of results that can be swapped without changing a score determined using a first ranking quality measure, determining a first vector and at least one other vector for each identified swappable result in the ranked ordering of results based on the first ranking quality measure and at least one other ranking quality measure respectively, and adding the first vector and the at least one other vector for each identified swappable result in the ranked ordering of results to obtain a function of the first vector and the at least one other vector. Access is provided to the function of the first vector and the at least one other vector for use in the multi-tiered information retrieval training.
Type: Grant
Filed: December 21, 2010
Date of Patent: July 9, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chris J. C. Burges, Krysta M. Svore, Maksims Volkovs
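The abstract's first step — finding results that can be swapped without changing the score under the first ranking quality measure — can be sketched as follows. This is an illustrative toy, not the patented method: it assumes the measure is DCG (a common choice the abstract does not name) and checks every pair by brute force.

```python
import math

def dcg(labels):
    """Discounted cumulative gain of a ranked list of relevance labels."""
    return sum((2 ** l - 1) / math.log2(i + 2) for i, l in enumerate(labels))

def swappable_pairs(labels):
    """Return index pairs (i, j) whose swap leaves the DCG score unchanged.

    Under DCG a pair is 'swappable' exactly when both results carry the
    same relevance label, so exchanging them cannot move the score.
    """
    base = dcg(labels)
    pairs = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            swapped = labels[:]
            swapped[i], swapped[j] = swapped[j], swapped[i]
            if math.isclose(dcg(swapped), base):
                pairs.append((i, j))
    return pairs

print(swappable_pairs([3, 2, 2, 0]))  # → [(1, 2)]: positions 1 and 2 share a label
```

A trained ranker could then attach one gradient vector per quality measure to each such result, as the abstract describes, without this toy constraining how those vectors are built.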
-
Patent number: 8977537
Abstract: The described implementations relate to natural language processing, and more particularly to training a language prior model using a model structure. The language prior model can be trained using parameterized representations of lexical structures such as training sentences, as well as parameterized representations of lexical units such as words or n-grams. During training, the parameterized representations of the lexical structures and the lexical units can be adjusted using the model structure. When the language prior model is trained, the parameterized representations of the lexical structures can reflect how the lexical units were used in the lexical structures.
Type: Grant
Filed: June 24, 2011
Date of Patent: March 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chris J. C. Burges, Andrzej Pastusiak
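As a loose illustration of jointly adjusting parameterized representations of lexical structures (sentences) and lexical units (words), the toy below pulls each sentence vector toward the mean of its words' vectors, and the word vectors back toward the sentence. The update rule, dimensions, and all names are invented for the sketch; the abstract does not specify the actual model structure.

```python
import numpy as np

def train_language_prior(sentences, vocab, dim=8, epochs=50, lr=0.1):
    """Jointly adjust sentence and word vectors so each sentence's
    representation converges toward the mean of its words' representations.
    A toy stand-in for the patent's (unspecified) model structure."""
    rng = np.random.default_rng(0)
    word_vecs = {w: rng.normal(scale=0.1, size=dim) for w in vocab}
    sent_vecs = [rng.normal(scale=0.1, size=dim) for _ in sentences]
    for _ in range(epochs):
        for i, sent in enumerate(sentences):
            target = np.mean([word_vecs[w] for w in sent], axis=0)
            err = sent_vecs[i] - target
            sent_vecs[i] = sent_vecs[i] - lr * err      # pull sentence toward words
            for w in sent:                               # and words toward sentence
                word_vecs[w] += lr * err / len(sent)
    return sent_vecs, word_vecs
```

After training, each sentence vector sits near the mean of its words' vectors, so the sentence representations "reflect how the lexical units were used" in the loose sense this sketch allows.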
-
Publication number: 20120330647
Abstract: The described implementations relate to natural language processing, and more particularly to training a language prior model using a model structure. The language prior model can be trained using parameterized representations of lexical structures such as training sentences, as well as parameterized representations of lexical units such as words or n-grams. During training, the parameterized representations of the lexical structures and the lexical units can be adjusted using the model structure. When the language prior model is trained, the parameterized representations of the lexical structures can reflect how the lexical units were used in the lexical structures.
Type: Application
Filed: June 24, 2011
Publication date: December 27, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Chris J.C. Burges, Andrzej Pastusiak
-
Patent number: 8255412
Abstract: Model adaptation may be performed to take a general model trained with a set of training data (possibly large), and adapt the model using a set of domain-specific training data (possibly small). The parameters, structure, or configuration of a model trained in one domain (called the background domain) may be adapted to a different domain (called the adaptation domain), for which there may be a limited amount of training data. The adaptation may be performed using the Boosting Algorithm to select an optimal basis function that optimizes a measure of error of the model as it is being iteratively refined, i.e., adapted.
Type: Grant
Filed: December 17, 2008
Date of Patent: August 28, 2012
Assignee: Microsoft Corporation
Inventors: Jianfeng Gao, Yi Su, Qiang Wu, Chris J. C. Burges, Krysta Svore, Elbio Renato Torres Abib
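A rough sketch of boosting-style adaptation in the spirit of the abstract: start from the background model's predictions on the small adaptation set, then greedily add the basis function that best fits the current residual error. Threshold stumps, squared error, and the shrinkage rate are assumed choices here, not details taken from the patent.

```python
import numpy as np

def adapt_with_boosting(base_scores, X, y, n_rounds=5, lr=0.5):
    """Adapt a background model's scores to a new domain by boosting.

    base_scores : predictions of the general (background) model on the
                  adaptation data; X, y : small in-domain features/targets.
    Each round greedily picks the threshold stump (feature, cutoff) that
    best fits the current residuals and adds it to the ensemble.
    """
    pred = base_scores.astype(float).copy()
    stumps = []
    for _ in range(n_rounds):
        residual = y - pred
        best = None
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                left = X[:, f] <= t
                if not left.any() or left.all():
                    continue
                vl, vr = residual[left].mean(), residual[~left].mean()
                err = ((residual - np.where(left, vl, vr)) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, f, t, vl, vr)
        _, f, t, vl, vr = best
        pred += lr * np.where(X[:, f] <= t, vl, vr)  # shrunken greedy update
        stumps.append((f, t, vl, vr))
    return pred, stumps
```

Because each stump fits the residual's piecewise means, every round provably reduces the squared error on the adaptation set for learning rates below 2, which is the sense in which each basis function "optimizes a measure of error" as the model is iteratively refined.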
-
Publication number: 20120158710
Abstract: Methods and systems for multi-tiered information retrieval training are disclosed. A method includes identifying results in a ranked ordering of results that can be swapped without changing a score determined using a first ranking quality measure, determining a first vector and at least one other vector for each identified swappable result in the ranked ordering of results based on the first ranking quality measure and at least one other ranking quality measure respectively, and adding the first vector and the at least one other vector for each identified swappable result in the ranked ordering of results to obtain a function of the first vector and the at least one other vector. Access is provided to the function of the first vector and the at least one other vector for use in the multi-tiered information retrieval training.
Type: Application
Filed: December 21, 2010
Publication date: June 21, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Chris J.C. Burges, Krysta M. Svore, Maksims Volkovs
-
Patent number: 7840569
Abstract: A neural network is used to process a set of ranking features in order to determine the relevancy ranking for a set of documents or other items. The neural network calculates a predicted relevancy score for each document and the documents can then be ordered by that score. Alternate embodiments apply a set of data transformations to the ranking features before they are input to the neural network. Training can be used to adapt both the neural network and certain of the data transformations to target environments.
Type: Grant
Filed: October 18, 2007
Date of Patent: November 23, 2010
Assignee: Microsoft Corporation
Inventors: Dmitriy Meyerzon, Yauhen Shnitko, Chris J. C. Burges, Michael James Taylor
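A minimal sketch of the pipeline the abstract describes — transform the ranking features, score each document with a small neural network, and order documents by score. The log transform, network shape, and weights below are placeholders invented for illustration; in the patent both the network and certain transformations are trained for the target environment.

```python
import numpy as np

rng = np.random.default_rng(42)

def transform(features):
    """A data transformation applied before the network; log1p is a
    placeholder that squashes heavy-tailed ranking features such as
    term frequencies."""
    return np.log1p(features)

# A tiny one-hidden-layer scorer. Real weights would be learned; these
# are random stand-ins just to make the pipeline runnable.
W1 = rng.normal(scale=0.1, size=(4, 8))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.1, size=8)

def relevance_score(features):
    h = np.tanh(transform(features) @ W1 + b1)  # hidden layer
    return h @ w2                               # scalar relevancy score

def rank(docs):
    """Order document indices by predicted relevancy, best first."""
    scores = [relevance_score(np.asarray(f, dtype=float)) for f in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

docs = [[3, 0, 1, 2], [10, 5, 2, 8], [0, 0, 0, 1]]
print(rank(docs))  # some permutation of [0, 1, 2]
```

The ordering itself is meaningless with random weights; the point is only the shape of the flow: features → transformations → network → score → sort.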
-
Publication number: 20100153315
Abstract: Model adaptation may be performed to take a general model trained with a set of training data (possibly large), and adapt the model using a set of domain-specific training data (possibly small). The parameters, structure, or configuration of a model trained in one domain (called the background domain) may be adapted to a different domain (called the adaptation domain), for which there may be a limited amount of training data. The adaptation may be performed using the Boosting Algorithm to select an optimal basis function that optimizes a measure of error of the model as it is being iteratively refined, i.e., adapted.
Type: Application
Filed: December 17, 2008
Publication date: June 17, 2010
Applicant: Microsoft Corporation
Inventors: Jianfeng Gao, Yi Su, Qiang Wu, Chris J.C. Burges, Krysta Svore, Elbio Renato Torres Abib
-
Publication number: 20090106223
Abstract: A neural network is used to process a set of ranking features in order to determine the relevancy ranking for a set of documents or other items. The neural network calculates a predicted relevancy score for each document and the documents can then be ordered by that score. Alternate embodiments apply a set of data transformations to the ranking features before they are input to the neural network. Training can be used to adapt both the neural network and certain of the data transformations to target environments.
Type: Application
Filed: October 18, 2007
Publication date: April 23, 2009
Applicant: Microsoft Corporation
Inventors: Dmitriy Meyerzon, Yauhen Shnitko, Chris J.C. Burges, Michael James Taylor
-
Publication number: 20040260550
Abstract: An audio processing system and method for classifying speakers in audio data using a discriminatively-trained classifier. In general, the audio processing system inputs audio data containing unknown speakers and outputs frame tags whereby each tag represents an individual speaker. The audio processing system includes a training system for training a discriminatively-trained classifier (such as a time-delay neural network) and a speaker classification system for using the classifier to segment and classify the speakers. The audio processing method includes two phases. A training phase discriminatively trains the classifier on a speaker training set containing known speakers and produces fixed classifier data. A use phase uses the fixed classifier data in the discriminatively-trained classifier to produce anchor model outputs for every frame of speech in the audio data. The anchor model outputs are mapped to frame tags so that all speech corresponding to a single frame tag comes from a single speaker.
Type: Application
Filed: June 20, 2003
Publication date: December 23, 2004
Inventors: Chris J.C. Burges, John C. Platt
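The use phase's final step — mapping per-frame anchor model outputs to frame tags — might look like the sketch below. The time-delay network itself is omitted, and the `min_run` smoothing that absorbs very short tag runs is a hypothetical detail added for illustration, not something the abstract specifies.

```python
import numpy as np

def frames_to_tags(anchor_outputs, min_run=3):
    """Map per-frame anchor-model outputs to speaker frame tags.

    anchor_outputs : (n_frames, n_anchors) array of classifier scores.
    Each frame is tagged with its highest-scoring anchor; runs shorter
    than min_run are absorbed into the preceding tag to suppress
    frame-level jitter, so each surviving tag spans one speaker's speech.
    """
    tags = np.argmax(anchor_outputs, axis=1)
    smoothed = tags.copy()
    i = 0
    while i < len(tags):
        j = i
        while j < len(tags) and tags[j] == tags[i]:
            j += 1                      # find the end of the current run
        if j - i < min_run and i > 0:
            smoothed[i:j] = smoothed[i - 1]  # absorb the short run
        i = j
    return smoothed
```

For example, a one-frame blip of speaker 1 inside a long stretch of speaker 0 gets re-tagged as speaker 0, while a sustained run of speaker 1 survives.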
-
Patent number: 6704718
Abstract: A system and method for performing trainable nonlinear prediction of transform coefficients in data compression such that the number of bits required to represent the data is reduced. The nonlinear prediction data compression system includes a nonlinear predictor for generating predicted transform coefficients, a nonlinear prediction encoder that uses the predicted transform coefficients to encode original data, and a nonlinear prediction decoder that uses the predicted transform coefficients to decode the encoded bitstream and reconstruct the original data. The nonlinear predictor may be trained using training techniques, including a novel in-loop training technique of the present invention. The present invention also includes a method for using a nonlinear predictor to encode and decode data. The method also includes improving the performance of the nonlinear prediction data compression and decompression using several novel speedup techniques.
Type: Grant
Filed: June 5, 2001
Date of Patent: March 9, 2004
Assignee: Microsoft Corporation
Inventors: Chris J. C. Burges, Patrice Y. Simard, Henrique S. Malvar
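A bare-bones sketch of the encoder/decoder symmetry the abstract describes: a nonlinear predictor guesses each block's transform coefficients from the previously reconstructed block, the encoder transmits only the residual, and the decoder adds its own identical prediction back. Quantization, entropy coding, the in-loop training, and the speedup techniques are all omitted, and the tanh predictor with untrained weights is a placeholder.

```python
import numpy as np

def predictor(prev_coeffs, W):
    """A small nonlinear predictor: guesses the next block's transform
    coefficients from the previous block's. W would normally be trained
    (e.g. with the in-loop scheme the patent describes); random here."""
    return np.tanh(prev_coeffs @ W)

def encode(blocks, W):
    """Encode each coefficient block as the residual against the
    predictor's guess; a good predictor leaves low-energy residuals,
    which is where the bit savings would come from."""
    residuals, prev = [], np.zeros(blocks.shape[1])
    for block in blocks:
        residuals.append(block - predictor(prev, W))
        prev = block  # the decoder reconstructs this same block
    return np.array(residuals)

def decode(residuals, W):
    """Invert encode(): rebuild each block as residual + prediction,
    using the same predictor state the encoder had."""
    blocks, prev = [], np.zeros(residuals.shape[1])
    for r in residuals:
        block = r + predictor(prev, W)
        blocks.append(block)
        prev = block
    return np.array(blocks)
```

Because encoder and decoder run the identical predictor on identical previous blocks, the round trip is exact; in a real codec the residuals would then be quantized and entropy coded, which is why shrinking them reduces the bits needed to represent the data.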
-
Publication number: 20020184272
Abstract: A system and method for performing trainable nonlinear prediction of transform coefficients in data compression such that the number of bits required to represent the data is reduced. The nonlinear prediction data compression system includes a nonlinear predictor for generating predicted transform coefficients, a nonlinear prediction encoder that uses the predicted transform coefficients to encode original data, and a nonlinear prediction decoder that uses the predicted transform coefficients to decode the encoded bitstream and reconstruct the original data. The nonlinear predictor may be trained using training techniques, including a novel in-loop training technique of the present invention. The present invention also includes a method for using a nonlinear predictor to encode and decode data. The method also includes improving the performance of the nonlinear prediction data compression and decompression using several novel speedup techniques.
Type: Application
Filed: June 5, 2001
Publication date: December 5, 2002
Inventors: Chris J.C. Burges, Patrice Y. Simard, Henrique S. Malvar