Patents by Inventor Larry Paul Heck

Larry Paul Heck has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150310862
    Abstract: One or more aspects of the subject disclosure are directed towards performing a semantic parsing task, such as classifying text corresponding to a spoken utterance into a class. Feature data representative of input data is provided to a semantic parsing mechanism that uses a deep model trained at least in part via unsupervised learning using unlabeled data. For example, if used in a classification task, a classifier may use an associated deep neural network that is trained to have an embeddings layer corresponding to at least one of words, phrases, or sentences. The layers are learned from unlabeled data, such as query click log data.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 29, 2015
    Applicant: Microsoft Corporation
    Inventors: Yann Nicolas Dauphin, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry Paul Heck
  • Publication number: 20150248886
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance based on the extracted referential features. The computing device may then perform an action that includes selecting the identified item associated with the utterance.
    Type: Application
    Filed: March 3, 2014
    Publication date: September 3, 2015
    Applicant: Microsoft Corporation
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
  • Publication number: 20150178273
    Abstract: A relation detection model training solution is provided. The relation detection model training solution mines freely available resources from the World Wide Web to train a relation detection model for use during linguistic processing. The relation detection model training system searches the web for pairs of entities extracted from a knowledge graph that are connected by a specific relation. Performance is enhanced by clipping search snippets to extract patterns that connect the two entities in a dependency tree and refining the annotations of the relations according to other related entities in the knowledge graph. The relation detection model training solution scales to other domains and languages, pushing the burden from natural language semantic parsing to knowledge base population. The relation detection model training solution exhibits performance comparable to supervised solutions, which require design, collection, and manual labeling of natural language data.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: Microsoft Corporation
    Inventors: Dilek Z. Hakkani-Tur, Gokhan Tur, Larry Paul Heck
  • Publication number: 20150179168
    Abstract: A dialog system for use in a multi-user, multi-domain environment is provided. The dialog system understands user requests when multiple users are interacting with each other as well as the dialog system. The dialog system uses multi-human conversational context to improve domain detection. Using interactions between multiple users allows the dialog system to better interpret machine-directed conversational inputs in multi-user conversational systems. The dialog system employs topic segmentation to chunk conversations for determining context boundaries. Using general topic segmentation methods, as well as the specific domain detector trained with conversational inputs collected by a single-user system, allows the dialog system to better determine the relevant context. The use of conversational context helps reduce the domain detection error rate, especially in certain domains, and allows for better interactions with users when the machine-addressed turns are not recognized or are ambiguous.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: Microsoft Corporation
    Inventors: Dilek Hakkani-Tur, Gokhan Tur, Larry Paul Heck, Dong Wang
  • Patent number: 9064006
    Abstract: Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: June 23, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dilek Zeynep Hakkani-Tur, Gokhan Tur, Rukmini Iyer, Larry Paul Heck
  • Publication number: 20140330570
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Application
    Filed: July 21, 2014
    Publication date: November 6, 2014
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
  • Patent number: 8788269
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: July 22, 2014
    Assignee: Microsoft Corporation
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
  • Publication number: 20140059030
    Abstract: Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided.
    Type: Application
    Filed: August 23, 2012
    Publication date: February 27, 2014
    Applicant: Microsoft Corporation
    Inventors: Dilek Zeynep Hakkani-Tur, Gokhan Tur, Rukmini Iyer, Larry Paul Heck
  • Publication number: 20140019462
    Abstract: Within the field of computing, many scenarios involve queries formulated by users resulting in query results presented by a device. The user may request to adjust the query, but many devices can only process requests specified in a well-structured manner, such as a set of recognized keywords, specific verbal commands, or a specific manual gesture. The user thus communicates the adjustment request in the constraints of the device, even if the query is specified in a natural language. Presented herein are techniques for enabling users to specify query adjustments with natural action input (e.g., natural-language speech, vocal inflection, and natural manual gestures). The device may be configured to evaluate the natural action input, identify the user's intended query adjustments, generate an adjusted query, and present an adjusted query result, thus enabling the user to interact with the device in a similar manner as communicating with an individual.
    Type: Application
    Filed: July 15, 2012
    Publication date: January 16, 2014
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, Rukmini Iyer
  • Publication number: 20130218836
    Abstract: Task list linking may be provided. Upon receiving an input from a user, the input may be translated into at least one actionable item. The at least one actionable item may be linked to a data source and displayed to the user.
    Type: Application
    Filed: February 22, 2012
    Publication date: August 22, 2013
    Applicant: Microsoft Corporation
    Inventors: Anne K. Sullivan, Lisa Stifelman, Kathy Lee, Matt Klee, Larry Paul Heck, Gokhan Tur, Dilek Hakkani-Tur
  • Publication number: 20130158980
    Abstract: Techniques are described herein that are capable of suggesting intent frame(s) for user request(s). For instance, the intent frame(s) may be suggested to elicit a request from a user. An intent frame is a natural language phrase (e.g., a sentence) that includes at least one carrier phrase and at least one slot. A slot in an intent frame is a placeholder that is identified as being replaceable by one or more words that identify an entity and/or an action to indicate an intent of the user. A carrier phrase in an intent frame includes one or more words that suggest a type of entity and/or action that is to be identified by the one or more words that may replace the corresponding slot. In accordance with these techniques, the intent frame(s) are suggested in response to determining that natural language functionality of a processing system is activated.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Shane J. Landry, Anne K. Sullivan, Lisa J. Stifelman, Adam D. Elman, Larry Paul Heck, Sarangarajan Parthasarathy
  • Publication number: 20130159001
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
  • Publication number: 20120290509
    Abstract: Training for a statistical dialog manager may be provided. A plurality of log data associated with an intent may be received, and at least one step associated with completing the intent according to the plurality of log data may be identified. An understanding model associated with the intent may be created, including a plurality of queries mapped to the intent. In response to receiving a natural language query from a user that is associated with the intent, a response may be provided to the user according to the understanding model.
    Type: Application
    Filed: September 16, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Dilek Hakkani-Tur, Rukmini Iyer, Gokhan Tur
  • Publication number: 20120290293
    Abstract: Domain detection training in a spoken language understanding system may be provided. Log data items associated with a search engine, each associated with a search query, may be received. A domain label for each search query may be identified, and the domain label and link data may be provided to a training set for a spoken language understanding model.
    Type: Application
    Filed: September 16, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Dilek Hakkani-Tur, Larry Paul Heck, Gokhan Tur
  • Publication number: 20120290290
    Abstract: Sentence simplification may be provided. A spoken phrase may be received and converted to a text phrase. An intent associated with the text phrase may be identified. The text phrase may then be reformatted according to the identified intent and a task may be performed according to the reformatted text phrase.
    Type: Application
    Filed: May 12, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Gokhan Tur, Dilek Hakkani-Tur, Larry Paul Heck, Sarangarajan Parthasarathy
  • Publication number: 20120253791
    Abstract: Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Publication number: 20120253789
    Abstract: Conversational dialog learning and correction may be provided. Upon receiving a natural language phrase from a first user, at least one second user associated with the natural language phrase may be identified. A context state may be created according to the first user and the at least one second user. The natural language phrase may then be translated into an agent action according to the context state.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Publication number: 20120253802
    Abstract: Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Publication number: 20120254227
    Abstract: An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Publication number: 20120253788
    Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
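A recurring idea across several of these filings (e.g., patent 9,064,006 and publications 20140059030 and 20120290293) is training a statistical domain detector from query click log data: a query is labeled with a domain inferred from the site the user clicked, and the labeled queries feed a classifier. As a rough illustrative sketch only, not the patented method, the following minimal Naive Bayes classifier shows the general shape of that pipeline; the site-to-domain map, example queries, and function names are invented for illustration:

```python
from collections import Counter, defaultdict
import math

# Hypothetical mapping from clicked site to domain label; in the filings this
# labeling comes from query click logs rather than a hand-written table.
SITE_TO_DOMAIN = {"imdb.com": "movies", "opentable.com": "restaurants"}

def train_domain_detector(click_log):
    """Train a tiny Naive Bayes domain detector from (query, clicked_url) pairs."""
    class_counts = Counter()            # how many queries per domain
    word_counts = defaultdict(Counter)  # word frequencies per domain
    vocab = set()
    for query, url in click_log:
        label = SITE_TO_DOMAIN.get(url)
        if label is None:
            continue  # click on an unmapped site: query stays unlabeled
        class_counts[label] += 1
        for word in query.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def detect_domain(model, query):
    """Return the most likely domain for a query, with add-one smoothing."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label, count in class_counts.items():
        logprob = math.log(count / total)
        n_words = sum(word_counts[label].values())
        for word in query.lower().split():
            logprob += math.log(
                (word_counts[label][word] + 1) / (n_words + len(vocab))
            )
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

# Toy click log standing in for real search engine logs.
log = [
    ("showtimes for new action movie", "imdb.com"),
    ("cast of the new movie", "imdb.com"),
    ("book a table for two tonight", "opentable.com"),
    ("best sushi restaurant near me", "opentable.com"),
]
model = train_domain_detector(log)
print(detect_domain(model, "movie cast list"))            # movies
print(detect_domain(model, "reserve a table for sushi"))  # restaurants
```

The detected domain would then drive the next step the filings describe, such as translating the query into an executable action for that domain.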