Patents by Inventor Larry Paul Heck

Larry Paul Heck has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9244984
    Abstract: Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: January 26, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
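The abstract above describes interpreting a query against an environmental context. As a minimal sketch only (not the patented implementation), the idea can be illustrated as rewriting an ambiguous query using the user's current location; all names and data below are hypothetical.

```python
# Minimal sketch: interpret a query against an environmental context such as
# the user's current location. Names and data are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class EnvironmentalContext:
    latitude: float
    longitude: float
    locale: str

def interpret_query(query: str, context: EnvironmentalContext) -> str:
    """Rewrite an ambiguous query using the environmental context."""
    # "near me" style phrases are resolved to concrete coordinates.
    if "near me" in query or "nearby" in query:
        here = f"near ({context.latitude}, {context.longitude})"
        return query.replace("near me", here).replace("nearby", here)
    return query

if __name__ == "__main__":
    ctx = EnvironmentalContext(latitude=47.64, longitude=-122.13, locale="en-US")
    print(interpret_query("coffee shops near me", ctx))
```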
  • Publication number: 20160004707
    Abstract: Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided.
    Type: Application
    Filed: June 8, 2015
    Publication date: January 7, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dilek Zeynep Hakkani-Tur, Gokhan Tur, Rukmini Iyer, Larry Paul Heck
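The abstract above (and the related grant and publication later in this listing) describes training a statistical model from query click log data to detect domains and map a query to an action. The toy sketch below is illustrative only; the log data, domain mapping, and scoring are hypothetical stand-ins, not the patented method.

```python
# Illustrative sketch: a tiny statistical domain detector trained from
# (query, clicked URL) pairs, then used to map a query to an action.
from collections import Counter, defaultdict

# Toy query click log: (query, clicked URL). The URL's site hints at the domain.
CLICK_LOG = [
    ("cheap flights to boston", "expedia.com"),
    ("weather in seattle tomorrow", "weather.com"),
    ("book a table for two tonight", "opentable.com"),
]
SITE_TO_DOMAIN = {"expedia.com": "travel", "weather.com": "weather", "opentable.com": "dining"}
DOMAIN_TO_ACTION = {"travel": "search_flights", "weather": "get_forecast", "dining": "find_restaurant"}

def train(click_log):
    """Count word/domain co-occurrences as a crude unigram domain model."""
    counts = defaultdict(Counter)
    for query, url in click_log:
        counts[SITE_TO_DOMAIN[url]].update(query.split())
    return counts

def detect_domain(model, query):
    """Score each domain by overlapping word counts and pick the best."""
    words = query.split()
    scores = {domain: sum(c[w] for w in words) for domain, c in model.items()}
    return max(scores, key=scores.get)

model = train(CLICK_LOG)
domain = detect_domain(model, "flights to new york")
print(domain, "->", DOMAIN_TO_ACTION[domain])
```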
  • Publication number: 20150365448
    Abstract: Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation.
    Type: Application
    Filed: June 17, 2014
    Publication date: December 17, 2015
    Inventors: Lisa Stifelman, Madhusudan Chinthakunta, Julian James Odell, Larry Paul Heck, Daniel Dole
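As a hypothetical sketch of the behavior described above, a device could watch conversation turns for mentions of location-associated topics and surface their locations alongside the conversation without an explicit command; the places and coordinates below are toy data.

```python
# Hypothetical sketch: monitor turns for known location-associated topics and
# update a map panel shown next to the conversation, with no command required.
KNOWN_PLACES = {
    "thai basil": (47.6097, -122.3331),
    "sushi garden": (47.6205, -122.3493),
}

def places_mentioned(turn):
    """Return (name, coordinates) for any known place referenced in a turn."""
    low = turn.lower()
    return [(name, coords) for name, coords in KNOWN_PLACES.items() if name in low]

def on_new_turn(turn, map_panel):
    """Add referenced places to the map panel alongside the conversation."""
    for name, coords in places_mentioned(turn):
        map_panel.setdefault(name, coords)
    return map_panel

map_panel = {}
on_new_turn("want to meet at Thai Basil for dinner?", map_panel)
print(map_panel)  # {'thai basil': (47.6097, -122.3331)}
```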
  • Patent number: 9201859
    Abstract: Techniques are described herein that are capable of suggesting intent frame(s) for user request(s). For instance, the intent frame(s) may be suggested to elicit a request from a user. An intent frame is a natural language phrase (e.g., a sentence) that includes at least one carrier phrase and at least one slot. A slot in an intent frame is a placeholder that is identified as being replaceable by one or more words that identify an entity and/or an action to indicate an intent of the user. A carrier phrase in an intent frame includes one or more words that suggest a type of entity and/or action that is to be identified by the one or more words that may replace the corresponding slot. In accordance with these techniques, the intent frame(s) are suggested in response to determining that natural language functionality of a processing system is activated.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: December 1, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shane J. Landry, Anne K. Sullivan, Lisa J. Stifelman, Adam D. Elman, Larry Paul Heck, Sarangarajan Parthasarathy
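The abstract above defines an intent frame as a carrier phrase with slots. A minimal, hypothetical rendering of that idea follows; the frames and matching rules are illustrative only, not the claimed technique.

```python
# Hypothetical rendering of an "intent frame": a carrier phrase with named
# slot placeholders that a user's words can fill.
import re

INTENT_FRAMES = [
    "Call {contact}",
    "Play {song} by {artist}",
    "Remind me to {task} at {time}",
]

def suggest_frames():
    """Suggest intent frames to elicit a request once NL functionality is active."""
    return [frame.replace("{", "<").replace("}", ">") for frame in INTENT_FRAMES]

def match_frame(utterance, frame):
    """Try to fill a frame's slots from an utterance; return slot values or None."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>.+?)", frame) + r"$"
    m = re.match(pattern, utterance, flags=re.IGNORECASE)
    return m.groupdict() if m else None

print(suggest_frames())
print(match_frame("play yesterday by the beatles", "Play {song} by {artist}"))
```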
  • Publication number: 20150317302
    Abstract: Aspects of the present invention provide a technique to validate the transfer of intents or entities between existing natural language model domains (hereafter “domain” or “NLU”) using click logs, a knowledge graph, or both. At least two different types of transfers are possible. Intents from a first domain may be transferred to a second domain. Alternatively or additionally, entities from the second domain may be transferred to an existing intent in the first domain. Either way, additional intent/entity pairs can be generated and validated. Before the new intent/entity pair is added to a domain, aspects of the present invention validate that the intent or entity is transferable between domains. Validation techniques that are consistent with aspects of the invention can use a knowledge graph, search query click logs, or both to validate a transfer of intents or entities from one domain to another.
    Type: Application
    Filed: April 30, 2014
    Publication date: November 5, 2015
    Inventors: Xiaohu Liu, Ali Mamdouh Elkahky, Ruhi Sarikaya, Gokhan Tur, Dilek Hakkani-Tur, Larry Paul Heck
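A hypothetical sketch of the validation step described above: before adding a transferred intent/entity pair to a domain, check for supporting evidence in a knowledge graph and/or query click logs. The graph, logs, and acceptance rule below are toy assumptions.

```python
# Hypothetical validation of an intent/entity transfer using a toy knowledge
# graph and toy search query click logs.
TOY_KNOWLEDGE_GRAPH = {
    # (subject, relation, object) triples
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "science fiction"),
}
TOY_CLICK_LOG = ["who directed inception", "inception director"]

def validate_transfer(intent: str, entity: str) -> bool:
    """Accept the transferred intent/entity pair only if either source supports it."""
    in_graph = any(entity.lower() in (s.lower(), o.lower())
                   for s, _, o in TOY_KNOWLEDGE_GRAPH)
    in_logs = any(entity.lower() in q for q in TOY_CLICK_LOG)
    return in_graph or in_logs

print(validate_transfer("find_director", "Inception"))   # True: supported by both
print(validate_transfer("find_director", "Moonwalker"))  # False: no evidence
```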
  • Publication number: 20150310862
    Abstract: One or more aspects of the subject disclosure are directed towards performing a semantic parsing task, such as classifying text corresponding to a spoken utterance into a class. Feature data representative of input data is provided to a semantic parsing mechanism that uses a deep model trained at least in part via unsupervised learning using unlabeled data. For example, if used in a classification task, a classifier may use an associated deep neural network that is trained to have an embeddings layer corresponding to at least one of words, phrases, or sentences. The layers are learned from unlabeled data, such as query click log data.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 29, 2015
    Applicant: Microsoft Corporation
    Inventors: Yann Nicolas Dauphin, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry Paul Heck
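The abstract above describes a deep model whose embeddings layer is learned from unlabeled data such as query click logs. The toy sketch below is only a crude stand-in for that idea: it "pretrains" word representations from unlabeled queries via co-occurrence counts and reuses them to represent new text. It is not a neural network and not the disclosed method.

```python
# Toy stand-in for unsupervised pretraining: build word representations from
# unlabeled queries via co-occurrence counts, then embed new text with them.
from collections import defaultdict

UNLABELED_QUERIES = ["cheap flights to boston", "flights to paris",
                     "weather in boston", "weather in paris"]

def pretrain_embeddings(queries):
    """Represent each word by its co-occurrence counts with other words."""
    emb = defaultdict(lambda: defaultdict(int))
    for q in queries:
        words = q.split()
        for w in words:
            for c in words:
                if c != w:
                    emb[w][c] += 1
    return emb

def embed(emb, text):
    """Sum word vectors to get a bag-of-contexts representation of a query."""
    vec = defaultdict(int)
    for w in text.split():
        for c, n in emb[w].items():
            vec[c] += n
    return vec

embeddings = pretrain_embeddings(UNLABELED_QUERIES)
print(dict(embed(embeddings, "flights to boston")))
```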
  • Publication number: 20150248886
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance, based on the extracted referential features. The computing device may then perform an action which includes selecting the identified item associated with the utterance.
    Type: Application
    Filed: March 3, 2014
    Publication date: September 3, 2015
    Applicant: Microsoft Corporation
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
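A hypothetical sketch of the disambiguation idea above: extract simple referential features from an utterance ("the first one", "the italian one") and use them to pick an item from the list shown on screen. The item list and feature rules are illustrative assumptions, not the claimed model.

```python
# Hypothetical on-screen item disambiguation from simple referential features.
DISPLAYED_ITEMS = ["Luigi's Italian Kitchen", "Thai Basil", "Sushi Garden"]

ORDINALS = {"first": 0, "second": 1, "third": 2, "last": -1}

def resolve_selection(utterance: str, items):
    """Return the referenced item, or None if no reference is found."""
    words = utterance.lower().split()
    # Ordinal reference: "the second one"
    for word, index in ORDINALS.items():
        if word in words:
            return items[index]
    # Content reference: "the italian one"
    for item in items:
        if any(w in item.lower() for w in words if len(w) > 3):
            return item
    return None

print(resolve_selection("pick the second one", DISPLAYED_ITEMS))    # Thai Basil
print(resolve_selection("the italian one please", DISPLAYED_ITEMS)) # Luigi's Italian Kitchen
```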
  • Publication number: 20150179168
    Abstract: A dialog system for use in a multi-user, multi-domain environment. The dialog system understands user requests when multiple users are interacting with each other as well as the dialog system. The dialog system uses multi-human conversational context to improve domain detection. Using interactions between multiple users allows the dialog system to better interpret machine directed conversational inputs in multi-user conversational systems. The dialog system employs topic segmentation to chunk conversations for determining context boundaries. Using general topic segmentation methods, as well as the specific domain detector trained with conversational inputs collected by a single user system, allows the dialog system to better determine the relevant context. The use of conversational context helps reduce the domain detection error rate, especially in certain domains, and allows for better interactions with users when the machine addressed turns are not recognized or are ambiguous.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: MICROSOFT CORPORATION
    Inventors: Dilek Hakkani-Tur, Gokhan Tur, Larry Paul Heck, Dong Wang
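The abstract above mentions topic segmentation to chunk conversations for determining context boundaries. The toy sketch below illustrates one naive way to segment a multi-party conversation by word overlap between adjacent turns; the segmentation rule and data are hypothetical, not the patented method.

```python
# Toy topic segmentation: start a new chunk when a turn shares too few words
# with the previous one, so only the current chunk is used as context.
def segment_by_topic(turns, min_overlap=1):
    """Split turns into segments at points of low lexical overlap."""
    segments, current = [], [turns[0]]
    for prev, turn in zip(turns, turns[1:]):
        overlap = set(prev.lower().split()) & set(turn.lower().split())
        if len(overlap) >= min_overlap:
            current.append(turn)
        else:
            segments.append(current)
            current = [turn]
    segments.append(current)
    return segments

conversation = [
    "should we get dinner downtown tonight",
    "dinner downtown sounds good",
    "did you finish the quarterly report",
]
print(segment_by_topic(conversation))
```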
  • Publication number: 20150178273
    Abstract: A relation detection model training solution is provided. The relation detection model training solution mines freely available resources from the World Wide Web to train a relationship detection model for use during linguistic processing. The relation detection model training system searches the web for pairs of entities extracted from a knowledge graph that are connected by a specific relation. Performance is enhanced by clipping search snippets to extract patterns that connect the two entities in a dependency tree and refining the annotations of the relations according to other related entities in the knowledge graph. The relation detection model training solution scales to other domains and languages, pushing the burden from natural language semantic parsing to knowledge base population. The relation detection model training solution exhibits performance comparable to supervised solutions, which require design, collection, and manual labeling of natural language data.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: MICROSOFT CORPORATION
    Inventors: Dilek Z. Hakkani-Tur, Gokhan Tur, Larry Paul Heck
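A hypothetical sketch of the mining step described above: for each entity pair connected by a relation in a knowledge graph, search text snippets and clip the span between the two entities as a candidate relation pattern. The graph, snippets, and clipping rule below are toy assumptions (simple string spans rather than dependency trees).

```python
# Hypothetical relation-pattern mining: clip the text connecting each related
# entity pair in web snippets as a training pattern for that relation.
KNOWLEDGE_GRAPH = [("Inception", "directed_by", "Christopher Nolan")]
WEB_SNIPPETS = [
    "Inception is a 2010 film directed by Christopher Nolan.",
    "Christopher Nolan wrote and directed Inception.",
]

def mine_patterns(graph, snippets):
    """Collect the text connecting each related entity pair as a training pattern."""
    patterns = []
    for subj, relation, obj in graph:
        for snippet in snippets:
            low = snippet.lower()
            if subj.lower() in low and obj.lower() in low:
                start = low.find(subj.lower()) + len(subj)
                end = low.find(obj.lower())
                if start < end:  # only handle subject-before-object spans here
                    patterns.append((relation, snippet[start:end].strip()))
    return patterns

print(mine_patterns(KNOWLEDGE_GRAPH, WEB_SNIPPETS))
```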
  • Patent number: 9064006
    Abstract: Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: June 23, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dilek Zeynep Hakkani-Tur, Gokhan Tur, Rukmini Iyer, Larry Paul Heck
  • Publication number: 20140330570
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Application
    Filed: July 21, 2014
    Publication date: November 6, 2014
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
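The abstract above (and the related grant and earlier publication in this listing) describes activating speech understanding in response to an explicit scoping command over on-screen entities. The sketch below is a minimal hypothetical flow, with invented class and method names, not the disclosed system.

```python
# Hypothetical flow: a scoping command selects on-screen entities, which
# activates speech understanding and scopes the next spoken request to them.
class MultimodalSession:
    def __init__(self):
        self.selected_entities = []
        self.listening = False

    def on_scoping_command(self, entities):
        """Visual/tactile selection of entities activates speech understanding."""
        self.selected_entities = list(entities)
        self.listening = True
        print(f"Listening; scoped to {self.selected_entities}")

    def on_audio(self, transcript):
        """Only process speech while a scoped selection is active."""
        if not self.listening:
            return None
        return {"request": transcript, "scope": self.selected_entities}

session = MultimodalSession()
session.on_scoping_command(["photo_123", "photo_124"])
print(session.on_audio("share these with mom"))
```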
  • Patent number: 8788269
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: July 22, 2014
    Assignee: Microsoft Corporation
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
  • Publication number: 20140059030
    Abstract: Natural language query translation may be provided. A statistical model may be trained to detect domains according to a plurality of query click log data. Upon receiving a natural language query, the statistical model may be used to translate the natural language query into an action. The action may then be performed and at least one result associated with performing the action may be provided.
    Type: Application
    Filed: August 23, 2012
    Publication date: February 27, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Dilek Zeynep Hakkani-Tur, Gokhan Tur, Rukmini Iyer, Larry Paul Heck
  • Publication number: 20140019462
    Abstract: Within the field of computing, many scenarios involve queries formulated by users resulting in query results presented by a device. The user may request to adjust the query, but many devices can only process requests specified in a well-structured manner, such as a set of recognized keywords, specific verbal commands, or a specific manual gesture. The user thus communicates the adjustment request in the constraints of the device, even if the query is specified in a natural language. Presented herein are techniques for enabling users to specify query adjustments with natural action input (e.g., natural-language speech, vocal inflection, and natural manual gestures). The device may be configured to evaluate the natural action input, identify the user's intended query adjustments, generate an adjusted query, and present an adjusted query result, thus enabling the user to interact with the device in a similar manner as communicating with an individual.
    Type: Application
    Filed: July 15, 2012
    Publication date: January 16, 2014
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, Rukmini Iyer
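As a toy illustration of the abstract above, a device could interpret a natural-language adjustment request against the current query and produce an adjusted query, rather than requiring a fixed keyword syntax. The adjustment rules and query fields below are hypothetical.

```python
# Hypothetical query adjustment from natural action input.
def adjust_query(current_query: dict, natural_input: str) -> dict:
    """Apply the user's intended adjustment to the current structured query."""
    adjusted = dict(current_query)
    text = natural_input.lower()
    if "cheaper" in text or "less expensive" in text:
        adjusted["max_price"] = adjusted.get("max_price", 100) * 0.8
    if "closer" in text:
        adjusted["max_distance_km"] = adjusted.get("max_distance_km", 10) / 2
    return adjusted

query = {"category": "restaurants", "max_price": 50, "max_distance_km": 8}
print(adjust_query(query, "hmm, show me something cheaper and closer"))
```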
  • Publication number: 20130218836
    Abstract: Task list linking may be provided. Upon receiving an input from a user, the input may be translated into at least one actionable item. The at least one actionable item may be linked to a data source and displayed to the user.
    Type: Application
    Filed: February 22, 2012
    Publication date: August 22, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Anne K. Sullivan, Lisa Stifelman, Kathy Lee, Matt Klee, Larry Paul Heck, Gokhan Tur, Dilek Hakkani-Tur
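A minimal hypothetical sketch of the abstract above: translate a free-form input into actionable items and link each item to a data source before display. The verbs and data sources are illustrative assumptions.

```python
# Hypothetical task list linking: split an input into actionable items and
# link each to a matching data source.
DATA_SOURCES = {"call": "contacts", "buy": "shopping_list", "email": "mail"}

def to_actionable_items(user_input: str):
    """Split the input into tasks and link each to a data source."""
    items = []
    for part in user_input.split(" and "):
        verb = part.strip().split()[0].lower()
        items.append({"task": part.strip(), "source": DATA_SOURCES.get(verb, "notes")})
    return items

for item in to_actionable_items("call the dentist and buy milk"):
    print(item)
```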
  • Publication number: 20130158980
    Abstract: Techniques are described herein that are capable of suggesting intent frame(s) for user request(s). For instance, the intent frame(s) may be suggested to elicit a request from a user. An intent frame is a natural language phrase (e.g., a sentence) that includes at least one carrier phrase and at least one slot. A slot in an intent frame is a placeholder that is identified as being replaceable by one or more words that identify an entity and/or an action to indicate an intent of the user. A carrier phrase in an intent frame includes one or more words that suggest a type of entity and/or action that is to be identified by the one or more words that may replace the corresponding slot. In accordance with these techniques, the intent frame(s) are suggested in response to determining that natural language functionality of a processing system is activated.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Shane J. Landry, Anne K. Sullivan, Lisa J. Stifelman, Adam D. Elman, Larry Paul Heck, Sarangarajan Parthasarathy
  • Publication number: 20130159001
    Abstract: Techniques are described herein that are capable of satisfying specified intent(s) based on multimodal request(s). A multimodal request is a request that includes at least one request of a first type and at least one request of a second type that is different from the first type. Example types of request include but are not limited to a speech request, a text command, a tactile command, and a visual command. A determination is made that one or more entities in visual content are selected in accordance with an explicit scoping command from a user. In response, speech understanding functionality is automatically activated, and audio signals are automatically monitored for speech requests from the user to be processed using the speech understanding functionality.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Lisa J. Stifelman, Anne K. Sullivan, Adam D. Elman, Larry Paul Heck, Stephanos Tryphonas, Kamran Rajabi Zargahi, Ken H. Thai
  • Publication number: 20120290509
    Abstract: Training for a statistical dialog manager may be provided. A plurality of log data associated with an intent may be received, and at least one step associated with completing the intent according to the plurality of log data may be identified. An understanding model associated with the intent may be created, including a plurality of queries mapped to the intent. In response to receiving a natural language query from a user that is associated with the intent, a response to the user may be provided according to the understanding model.
    Type: Application
    Filed: September 16, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Dilek Hakkani-Tur, Rukmini Iyer, Gokhan Tur
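A toy sketch of the idea above: an "understanding model" maps example queries (e.g., mined from logs for a known intent) to that intent, and a new query is answered by finding the closest mapped query. The data, scoring, and responses below are hypothetical, not the claimed training procedure.

```python
# Hypothetical understanding model: map logged example queries to intents and
# answer a new query via its best-matching intent.
LOGGED_QUERIES = {
    "book_flight": ["book a flight to boston", "i need a plane ticket"],
    "check_weather": ["what's the weather tomorrow", "is it going to rain"],
}
INTENT_RESPONSES = {
    "book_flight": "Where would you like to fly from?",
    "check_weather": "Here is the forecast for your area.",
}

def best_intent(query, model):
    """Pick the intent whose example queries share the most words with the query."""
    words = set(query.lower().split())
    def score(intent):
        return max(len(words & set(example.split())) for example in model[intent])
    return max(model, key=score)

query = "can you book a flight to paris"
print(INTENT_RESPONSES[best_intent(query, LOGGED_QUERIES)])
```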
  • Publication number: 20120290293
    Abstract: Domain detection training in a spoken language understanding system may be provided. Log data associated with a search engine may be received, with each log entry associated with a search query. A domain label for each search query may be identified, and the domain label and link data may be provided to a training set for a spoken language understanding model.
    Type: Application
    Filed: September 16, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Dilek Hakkani-Tur, Larry Paul Heck, Gokhan Tur
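A hypothetical sketch of the labeling step described above: derive a domain label for each logged search query from the link that was clicked, producing (query, domain) pairs for a spoken language understanding training set. The host-to-domain mapping and log entries are toy assumptions.

```python
# Hypothetical domain labeling of search log queries from clicked links.
from urllib.parse import urlparse

HOST_TO_DOMAIN = {"imdb.com": "movies", "opentable.com": "dining"}

SEARCH_LOG = [
    ("cast of inception", "https://www.imdb.com/title/tt1375666/"),
    ("best sushi near pike place", "https://www.opentable.com/seattle"),
]

def build_training_set(log):
    """Attach a domain label to each query based on the clicked link's host."""
    training = []
    for query, clicked_url in log:
        host = urlparse(clicked_url).netloc.removeprefix("www.")
        domain = HOST_TO_DOMAIN.get(host)
        if domain is not None:
            training.append((query, domain))
    return training

print(build_training_set(SEARCH_LOG))
```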
  • Publication number: 20120290290
    Abstract: Sentence simplification may be provided. A spoken phrase may be received and converted to a text phrase. An intent associated with the text phrase may be identified. The text phrase may then be reformatted according to the identified intent and a task may be performed according to the reformatted text phrase.
    Type: Application
    Filed: May 12, 2011
    Publication date: November 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Gokhan Tur, Dilek Hakkani-Tur, Larry Paul Heck, Sarangarajan Parthasarathy
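A toy illustration of the abstract above: convert a wordy spoken phrase into a simplified form keyed to its identified intent, then run the task on the reformatted phrase. The intents, patterns, and templates are hypothetical, not the disclosed method.

```python
# Hypothetical sentence simplification: identify the intent and reformat the
# recognized phrase into a minimal task command.
import re

SIMPLIFICATION_RULES = [
    # (intent, pattern over the recognized text, simplified template)
    ("set_alarm", re.compile(r"wake me up at (?P<time>[\w: ]+)", re.I), "set alarm {time}"),
    ("call",      re.compile(r"(?:could|can) you (?:please )?call (?P<name>\w+)", re.I), "call {name}"),
]

def simplify(text):
    """Return (intent, simplified phrase) for the first matching rule."""
    for intent, pattern, template in SIMPLIFICATION_RULES:
        m = pattern.search(text)
        if m:
            return intent, template.format(**m.groupdict())
    return "unknown", text

print(simplify("hey could you please call mom for me"))
print(simplify("please wake me up at 7 am tomorrow"))
```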