Patents by Inventor Kristina Nikolova Toutanova

Kristina Nikolova Toutanova has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10255269
    Abstract: Long short term memory units that accept a non-predefined number of inputs are used to provide natural language relation extraction over a user-specified range of content. Content written for human consumption is parsed, with distant supervision, in segments (e.g., sentences, paragraphs, chapters) to determine relationships between various words within and between those segments.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christopher Brian Quirk, Kristina Nikolova Toutanova, Wen-tau Yih, Hoifung Poon, Nanyun Peng
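The abstract above turns on one idea: a recurrent unit can fold a segment of any length into a fixed representation, so the span between two entities need not have a predetermined size. A minimal Python sketch of that idea, with a toy recurrent cell standing in for the patent's LSTM (the embeddings, weights, and function names here are illustrative assumptions, not from the patent):

```python
import math

def rnn_encode(tokens, embed, w_in=0.5, w_rec=0.5):
    """Fold an arbitrary-length token sequence into one hidden value
    with a simple recurrent update; an LSTM plays this role in the
    patent, this toy cell only illustrates variable-length input."""
    h = 0.0
    for tok in tokens:
        h = math.tanh(w_in * embed.get(tok, 0.0) + w_rec * h)
    return h

def relation_score(segment, entity_a, entity_b, embed):
    """Score a candidate relation between two entities by encoding the
    span of tokens between them; the span length is not fixed in
    advance, which is the point of the recurrent encoder."""
    i, j = segment.index(entity_a), segment.index(entity_b)
    lo, hi = min(i, j), max(i, j)
    return rnn_encode(segment[lo:hi + 1], embed)
```

In distant supervision, such scores would be trained against relation labels projected from a knowledge base onto the text, rather than hand-annotated spans.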
  • Publication number: 20180189269
    Abstract: Long short term memory units that accept a non-predefined number of inputs are used to provide natural language relation extraction over a user-specified range of content. Content written for human consumption is parsed, with distant supervision, in segments (e.g., sentences, paragraphs, chapters) to determine relationships between various words within and between those segments.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 5, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Christopher Brian Quirk, Kristina Nikolova Toutanova, Wen-tau Yih, Hoifung Poon, Nanyun Peng
  • Patent number: 8452585
    Abstract: A discriminatively trained word order model is used to identify a most likely word order from a set of word orders for target words translated from a source sentence. For each set of word orders, the discriminatively trained word order model uses features based on information in a source dependency tree and a target dependency tree, and features based on the order of words in the word order. The discriminatively trained statistical model is trained by determining a translation metric for each of a set of N-best word orders for a set of target words. Each of the N-best word orders is projective with respect to a target dependency tree, and the N-best word orders are selected using a combination of an n-gram language model and a local tree order model.
    Type: Grant
    Filed: April 2, 2008
    Date of Patent: May 28, 2013
    Assignee: Microsoft Corporation
    Inventors: Kristina Nikolova Toutanova, Pi-Chuan Chang
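The pipeline in this abstract is two-stage: generate N-best candidate orders with a cheap n-gram language model, then rerank them. A compact Python sketch of the first stage, assuming toy bigram probabilities (in the patent the reranker is a discriminatively trained model with dependency-tree features; here the reranking step is only a placeholder, and all names are illustrative):

```python
from itertools import permutations

def bigram_score(order, bigram_probs):
    """Score one candidate word order with a toy bigram language model,
    backing off to a small constant for unseen bigrams."""
    score = 1.0
    for a, b in zip(order, order[1:]):
        score *= bigram_probs.get((a, b), 0.01)
    return score

def best_order(target_words, bigram_probs, n_best=5):
    """Keep the N best orders by language-model score, then rerank;
    this sketch reranks with the same LM score, whereas the patent's
    second stage uses discriminative dependency-tree features."""
    candidates = sorted(permutations(target_words),
                        key=lambda o: bigram_score(o, bigram_probs),
                        reverse=True)[:n_best]
    return max(candidates, key=lambda o: bigram_score(o, bigram_probs))
```

Enumerating all permutations is only feasible for toy inputs; the projectivity constraint in the abstract is what keeps the real candidate space tractable.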
  • Patent number: 8275607
    Abstract: A word is selected from a received text and features are identified from the word. The features are applied to a model to identify probabilities for sets of part-of-speech tags. The probabilities for the sets of part-of-speech tags are used to weight scores for possible part-of-speech tags for the selected word to form weighted scores. The weighted scores are used to select a part-of-speech tag for the word, and the selected part-of-speech tag is stored or output. The scores for the possible part-of-speech tags are based on variational approximation parameters trained from a sparse prior over probability distributions describing the probability of a part-of-speech tag given a word.
    Type: Grant
    Filed: December 12, 2007
    Date of Patent: September 25, 2012
    Assignee: Microsoft Corporation
    Inventors: Kristina Nikolova Toutanova, Mark Edward Johnson
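The core combination step in this abstract, weighting each tag's score by the probability mass of the tag sets that contain it, can be sketched in a few lines of Python. The scores and tag-set probabilities below are toy inputs; in the patent the scores come from variationally trained parameters under a sparse prior:

```python
def choose_tag(word_scores, tagset_probs):
    """Weight each candidate tag's score by the total probability of
    the tag sets containing it, then pick the highest weighted score.
    word_scores: {tag: score}; tagset_probs: {frozenset of tags: prob}."""
    weighted = {}
    for tag, score in word_scores.items():
        weight = sum(p for tags, p in tagset_probs.items() if tag in tags)
        weighted[tag] = score * weight
    return max(weighted, key=weighted.get)
```

The sparse prior matters because most words take only a few parts of speech; concentrating tag-set probability on small sets lets a tag with a lower raw score still win when the sets strongly support it.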
  • Publication number: 20090326916
    Abstract: Described is the use of a generative model in processing an unsegmented sentence into a segmented sentence. A segmenter includes the generative model, which, given an unsegmented sentence (e.g., in Chinese), provides candidate segmented sentences to a probability-based decoder that selects the segmented sentence. For example, the segmented (e.g., Chinese-language) sentence may be provided to a statistical machine translator that outputs a translated (e.g., English-language) sentence. The generative model may include a word sub-model that generates hidden words using a word model, a spelling sub-model that generates characters from the hidden words, and an alignment sub-model that generates translated words and alignment data from the characters. The word sub-model may correspond to a unigram model having words and associated frequency data therein, and the alignment sub-model may correspond to a word-aligned corpus having source-sentence/translated-target-sentence pairings therein. Training is also described.
    Type: Application
    Filed: June 27, 2008
    Publication date: December 31, 2009
    Applicant: Microsoft Corporation
    Inventors: Jianfeng Gao, Kristina Nikolova Toutanova, Jia Xu
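The decoder side of this abstract, choosing the most probable segmentation of a character string under a unigram word model, is the classic dynamic-programming segmentation problem. A self-contained Python sketch, assuming a toy unigram table (the patent's full model adds spelling and alignment sub-models that this sketch omits):

```python
import math

def segment(chars, unigram_probs, max_len=4):
    """Return the most probable segmentation of an unsegmented string
    under a unigram word model, via dynamic programming over prefix
    positions; unknown words are simply disallowed in this toy version."""
    n = len(chars)
    # best[i] = (log-prob, word list) for the best segmentation of chars[:i]
    best = [(0.0, [])] + [(float("-inf"), None)] * n
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            word = chars[j:i]
            p = unigram_probs.get(word)
            if p is None or best[j][1] is None:
                continue
            cand = best[j][0] + math.log(p)
            if cand > best[i][0]:
                best[i] = (cand, best[j][1] + [word])
    return best[n][1]
```

In the downstream use the abstract describes, the chosen segmentation would be handed to a statistical machine translator, and the alignment sub-model would tie segmentation choices to how well the words align with translations.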
  • Publication number: 20090157384
    Abstract: A word is selected from a received text and features are identified from the word. The features are applied to a model to identify probabilities for sets of part-of-speech tags. The probabilities for the sets of part-of-speech tags are used to weight scores for possible part-of-speech tags for the selected word to form weighted scores. The weighted scores are used to select a part-of-speech tag for the word, and the selected part-of-speech tag is stored or output. The scores for the possible part-of-speech tags are based on variational approximation parameters trained from a sparse prior over probability distributions describing the probability of a part-of-speech tag given a word.
    Type: Application
    Filed: December 12, 2007
    Publication date: June 18, 2009
    Applicant: Microsoft Corporation
    Inventors: Kristina Nikolova Toutanova, Mark Edward Johnson
  • Publication number: 20080319736
    Abstract: A discriminatively trained word order model is used to identify a most likely word order from a set of word orders for target words translated from a source sentence. For each set of word orders, the discriminatively trained word order model uses features based on information in a source dependency tree and a target dependency tree, and features based on the order of words in the word order. The discriminatively trained statistical model is trained by determining a translation metric for each of a set of N-best word orders for a set of target words. Each of the N-best word orders is projective with respect to a target dependency tree, and the N-best word orders are selected using a combination of an n-gram language model and a local tree order model.
    Type: Application
    Filed: April 2, 2008
    Publication date: December 25, 2008
    Applicant: Microsoft Corporation
    Inventors: Kristina Nikolova Toutanova, Pi-Chuan Chang