Patents by Inventor Larry P. Heck

Larry P. Heck has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220222491
    Abstract: A method includes performing, using at least one processor of an electronic device, semantic probing on a pre-trained model using one or more textual utterances. Performing the semantic probing includes processing each of the one or more textual utterances to determine a performance score for one or more targeted hidden layers of the pre-trained model. Performing the semantic probing also includes selecting a subset of the targeted hidden layers based on a comparison of the performance score to a predetermined threshold. The method also includes reconstructing, using the at least one processor, the pre-trained model based on the semantic probing to generate a reconstructed model.
    Type: Application
    Filed: August 6, 2021
    Publication date: July 14, 2022
    Inventors: JongHo Shin, Larry P. Heck
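    Sketch: The abstract above scores targeted hidden layers of a pre-trained model with textual utterances and keeps only the layers whose score clears a threshold. A minimal Python sketch of that idea follows, using a nearest-centroid probe as a stand-in for the performance score; the probe, the 0.8 threshold, and all names are illustrative assumptions rather than details from the filing.
    ```python
    import numpy as np

    def probe_score(layer_feats, labels):
        """Score one hidden layer by how well a simple nearest-centroid probe
        separates the utterance labels in that layer's feature space (an
        illustrative stand-in for the per-layer performance score)."""
        classes = np.unique(labels)
        centroids = np.stack([layer_feats[labels == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(layer_feats[:, None, :] - centroids[None, :, :], axis=-1)
        preds = classes[dists.argmin(axis=1)]
        return float((preds == labels).mean())

    def select_layers(hidden_states, labels, threshold=0.8):
        """Keep only the targeted hidden layers whose probe score clears the threshold."""
        scores = {i: probe_score(feats, labels) for i, feats in enumerate(hidden_states)}
        return [i for i, s in scores.items() if s >= threshold], scores

    # Toy example: per-layer features for 6 utterances with binary labels;
    # later "layers" separate the two classes more cleanly.
    rng = np.random.default_rng(0)
    labels = np.array([0, 0, 0, 1, 1, 1])
    hidden_states = [rng.normal(size=(6, 16)) + labels[:, None] * shift
                     for shift in (0.0, 0.5, 3.0)]
    kept, scores = select_layers(hidden_states, labels)
    print(scores, "-> keep layers", kept)   # a reconstructed model would retain only `kept`
    ```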
  • Patent number: 11341945
    Abstract: A method includes receiving a non-linguistic input associated with an input musical content. The method also includes, using a model that embeds multiple musical features describing different musical content and relationships between the different musical content in a latent space, identifying one or more embeddings based on the input musical content. The method further includes at least one of: (i) identifying stored musical content based on the one or more identified embeddings or (ii) generating derived musical content based on the one or more identified embeddings. In addition, the method includes presenting at least one of: the stored musical content or the derived musical content. The model is generated by training a machine learning system having one or more first neural network components and one or more second neural network components such that embeddings of the musical features in the latent space have a predefined distribution.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: May 24, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Peter M. Bretan, Larry P. Heck, Hongxia Jin
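    Sketch: The entry above retrieves or derives musical content by working in a learned latent space. A minimal Python sketch of the retrieval half, assuming an encoder has already produced latent vectors for a stored catalog and for the input; the cosine-similarity lookup, names, and toy data are illustrative assumptions.
    ```python
    import numpy as np

    def nearest_content(query_vec, catalog_vecs, catalog_ids, k=3):
        """Return stored musical content whose latent embeddings are closest
        (by cosine similarity) to the embedding of the input musical content."""
        q = query_vec / np.linalg.norm(query_vec)
        c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
        sims = c @ q
        top = np.argsort(-sims)[:k]
        return [(catalog_ids[i], float(sims[i])) for i in top]

    # Toy catalog of latent vectors; a trained encoder would produce these, and
    # derived content could come from decoding an interpolation of two vectors.
    rng = np.random.default_rng(1)
    catalog = rng.normal(size=(5, 32))
    ids = ["piece_a", "piece_b", "piece_c", "piece_d", "piece_e"]
    query = catalog[2] + 0.05 * rng.normal(size=32)   # a hummed input close to piece_c
    print(nearest_content(query, catalog, ids))
    ```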
  • Publication number: 20210049989
    Abstract: A method includes receiving a non-linguistic input associated with an input musical content. The method also includes, using a model that embeds multiple musical features describing different musical content and relationships between the different musical content in a latent space, identifying one or more embeddings based on the input musical content. The method further includes at least one of: (i) identifying stored musical content based on the one or more identified embeddings or (ii) generating derived musical content based on the one or more identified embeddings. In addition, the method includes presenting at least one of: the stored musical content or the derived musical content. The model is generated by training a machine learning system having one or more first neural network components and one or more second neural network components such that embeddings of the musical features in the latent space have a predefined distribution.
    Type: Application
    Filed: December 5, 2019
    Publication date: February 18, 2021
    Inventors: Peter M. Bretan, Larry P. Heck, Hongxia Jin
  • Patent number: 10055686
    Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: August 21, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
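    Sketch: The abstract above describes a model trained so that the conditional likelihood of a clicked document, given its query, is high relative to non-clicked documents, with queries and documents compared in a shared semantic space. A minimal Python sketch of the forward pass and that likelihood follows; the layer sizes, the smoothing factor gamma, and the random weights are illustrative assumptions, and the gradient-based training over click-through data is not shown.
    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def embed(x, weights):
        """Map a (dimensionality-reduced) input vector into the semantic space
        through stacked tanh layers, one tower each for queries and documents."""
        h = x
        for w in weights:
            h = np.tanh(h @ w)
        return h

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def clicked_doc_nll(query_x, doc_xs, clicked_idx, q_weights, d_weights, gamma=10.0):
        """Negative log of the conditional likelihood of the clicked document:
        a softmax over smoothed cosine similarities against all candidate documents."""
        q = embed(query_x, q_weights)
        sims = np.array([cosine(q, embed(d, d_weights)) for d in doc_xs])
        logits = gamma * sims
        log_probs = logits - np.log(np.exp(logits).sum())
        return -log_probs[clicked_idx]

    # Toy sizes: 300-dim reduced input -> 128 -> 64-dim semantic space.
    q_weights = [rng.normal(scale=0.1, size=(300, 128)), rng.normal(scale=0.1, size=(128, 64))]
    d_weights = [rng.normal(scale=0.1, size=(300, 128)), rng.normal(scale=0.1, size=(128, 64))]
    query = rng.random(300)
    docs = [rng.random(300) for _ in range(4)]   # one clicked plus sampled non-clicked docs
    print("loss for clicked doc 0:", clicked_doc_nll(query, docs, 0, q_weights, d_weights))
    ```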
  • Patent number: 9997157
    Abstract: Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users.
    Type: Grant
    Filed: May 16, 2014
    Date of Patent: June 12, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
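    Sketch: The abstract above maps entities from a user's usage history onto a knowledge source and trains a language model on queries about those entities. A minimal Python sketch of that pipeline, with a flat entity lexicon standing in for the knowledge graph and bigram counts standing in for language model training; all names and data are illustrative assumptions.
    ```python
    from collections import Counter, defaultdict

    def personalize_training_set(candidate_queries, user_log, entity_lexicon):
        """Keep candidate queries that mention an entity also seen in this user's
        query log, approximating the mapping of usage history onto a knowledge source."""
        user_entities = {e for q in user_log for e in entity_lexicon if e in q.lower()}
        return [q for q in candidate_queries if any(e in q.lower() for e in user_entities)]

    def train_bigram_lm(queries):
        """Bigram counts as a minimal stand-in for training a personal language model."""
        counts = defaultdict(Counter)
        for q in queries:
            toks = ["<s>"] + q.lower().split() + ["</s>"]
            for a, b in zip(toks, toks[1:]):
                counts[a][b] += 1
        return counts

    entity_lexicon = {"star wars", "yoda", "jazz"}
    user_log = ["show times for Star Wars", "who plays Yoda"]
    candidates = ["star wars trailer", "yoda quotes", "jazz clubs near me", "weather today"]
    personal_queries = personalize_training_set(candidates, user_log, entity_lexicon)
    print(personal_queries)                       # queries about unrelated entities are dropped
    print(dict(train_bigram_lm(personal_queries)["star"]))
    ```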
  • Patent number: 9870356
    Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: January 16, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
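    Sketch: The abstract above assigns known intent labels deterministically where possible and infers intents for the remaining queries from selection (click) log data. A minimal Python sketch of that split, using a keyword cue table in place of knowledge-graph-derived labels and clicked-URL hosts as a crude grouping for possible new intents; the cues, URLs, and names are illustrative assumptions.
    ```python
    from collections import defaultdict
    from urllib.parse import urlparse

    # Known intent labels, e.g. as might be derived from knowledge-graph relations.
    KNOWN_INTENTS = {"showtimes": "find_showtimes", "cast": "find_cast", "director": "find_director"}

    def label_queries(click_log):
        """Deterministically label queries containing a known cue; group the rest
        by clicked-URL host, a stand-in for inferring (possibly new) intents from logs."""
        labeled, unlabeled_by_host = [], defaultdict(list)
        for query, clicked_url in click_log:
            cue = next((w for w in KNOWN_INTENTS if w in query.lower()), None)
            if cue:
                labeled.append((query, KNOWN_INTENTS[cue]))
            else:
                unlabeled_by_host[urlparse(clicked_url).netloc].append(query)
        return labeled, dict(unlabeled_by_host)

    log = [("avatar showtimes", "https://www.fandango.com/avatar"),
           ("cast of avatar", "https://www.imdb.com/title/tt0499549"),
           ("avatar soundtrack", "https://music.example.com/avatar"),
           ("avatar theme song", "https://music.example.com/avatar-theme")]
    labeled, candidates = label_queries(log)
    print(labeled)       # deterministic labels from the known intents
    print(candidates)    # the music.example.com cluster hints at a new, unlabeled intent
    ```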
  • Patent number: 9679558
    Abstract: Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge graph. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: June 13, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
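    Sketch: The abstract above trains a language model from a knowledge graph turned into a probabilistic graph in which entity popularity is represented. A minimal Python sketch of that idea, verbalizing graph triples with surface patterns and weighting the resulting n-gram counts by entity popularity; the triples, patterns, and counts are illustrative assumptions.
    ```python
    from collections import Counter

    # Toy knowledge-graph triples and entity popularities (e.g., from query logs).
    triples = [("Avatar", "directed_by", "James Cameron"),
               ("Titanic", "directed_by", "James Cameron")]
    popularity = {"Avatar": 900, "Titanic": 100}

    # Surface patterns that verbalize a relation into in-domain-like training text.
    PATTERNS = {"directed_by": ["who directed {s}", "{s} director"]}

    def weighted_ngram_counts(triples, popularity, order=2):
        """Accumulate n-gram counts in which each verbalized triple is weighted by
        its subject entity's popularity, approximating a probabilistic graph."""
        counts = Counter()
        for subj, rel, _obj in triples:
            weight = popularity.get(subj, 1)
            for pattern in PATTERNS.get(rel, []):
                toks = pattern.format(s=subj.lower()).split()
                for i in range(len(toks) - order + 1):
                    counts[tuple(toks[i:i + order])] += weight
        return counts

    print(weighted_ngram_counts(triples, popularity).most_common(3))   # popular entities dominate
    ```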
  • Publication number: 20170116182
    Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
    Type: Application
    Filed: December 20, 2016
    Publication date: April 27, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
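    Sketch: The abstract above separates natural language (NL) queries from keyword (KL) queries so that only NL items feed downstream language understanding training. A minimal Python sketch using a tiny hand-rolled logistic regression over surface features; in the filing the training signal comes from query click log data, whereas the labels, features, and examples here are illustrative assumptions.
    ```python
    import numpy as np

    CUE_WORDS = {"what", "who", "where", "when", "why", "how", "show", "find", "is", "are", "can"}

    def features(query):
        toks = query.lower().split()
        return np.array([len(toks),                                 # NL queries tend to be longer
                         sum(t in CUE_WORDS for t in toks),         # question words / verbs
                         1.0 if query.rstrip().endswith("?") else 0.0])

    def train_logreg(X, y, lr=0.1, steps=500):
        """Tiny logistic-regression trainer standing in for a classifier learned
        from (proxy-labeled) query click log data."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * (p - y).mean()
        return w, b

    nl = ["what movies are playing near me", "how do i get to the airport"]
    kl = ["movies near me", "airport directions"]
    X = np.stack([features(q) for q in nl + kl])
    w, b = train_logreg(X, np.array([1.0, 1.0, 0.0, 0.0]))
    for q in ["who directed avatar", "avatar director"]:
        p_nl = 1.0 / (1.0 + np.exp(-(features(q) @ w + b)))
        print(q, "-> P(natural language) =", round(float(p_nl), 2))
    ```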
  • Patent number: 9558176
    Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: January 31, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
  • Patent number: 9519859
    Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
    Type: Grant
    Filed: September 6, 2013
    Date of Patent: December 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
  • Publication number: 20160321321
    Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
    Type: Application
    Filed: July 12, 2016
    Publication date: November 3, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
  • Publication number: 20150370787
    Abstract: Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or “turns” from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user.
    Type: Application
    Filed: June 18, 2014
    Publication date: December 24, 2015
    Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck
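    Sketch: The abstract above predicts a user's likely next turns from intent sequences in session logs and interpolates the language models associated with those turns. A minimal Python sketch with bigram intent statistics and unigram word distributions standing in for full language models; the sessions, intents, and probabilities are illustrative assumptions.
    ```python
    from collections import Counter, defaultdict

    # Intent sequences extracted from session logs (arguments stripped away).
    sessions = [["find_movie", "find_showtimes", "buy_tickets"],
                ["find_movie", "find_cast"],
                ["find_movie", "find_showtimes", "find_restaurant"]]

    def next_intent_probs(prev_intent):
        """P(next intent | previous intent) estimated from observed turn sequences."""
        counts = Counter(nxt for s in sessions for cur, nxt in zip(s, s[1:]) if cur == prev_intent)
        total = sum(counts.values())
        return {i: c / total for i, c in counts.items()}

    # Per-intent unigram word distributions; a real system would use full n-gram LMs.
    intent_lms = {"find_showtimes": {"showtimes": 0.5, "tonight": 0.3, "near": 0.2},
                  "find_cast": {"cast": 0.6, "actor": 0.4},
                  "find_restaurant": {"restaurants": 0.7, "near": 0.3}}

    def interpolated_lm(prev_intent):
        """Mix the LMs of likely next turns, weighted by their predicted probability."""
        mixed = defaultdict(float)
        for intent, w in next_intent_probs(prev_intent).items():
            for word, p in intent_lms.get(intent, {}).items():
                mixed[word] += w * p
        return dict(mixed)

    print(next_intent_probs("find_movie"))   # showtimes is the most likely next turn
    print(interpolated_lm("find_movie"))     # recognition is biased toward that vocabulary
    ```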
  • Publication number: 20150332670
    Abstract: Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge graph. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 19, 2015
    Applicant: Microsoft Corporation
    Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
  • Publication number: 20150332672
    Abstract: Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users.
    Type: Application
    Filed: May 16, 2014
    Publication date: November 19, 2015
    Applicant: Microsoft Corporation
    Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
  • Publication number: 20150227845
    Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
    Type: Application
    Filed: February 13, 2014
    Publication date: August 13, 2015
    Applicant: Microsoft Corporation
    Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
  • Publication number: 20150161107
    Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
    Type: Application
    Filed: January 14, 2014
    Publication date: June 11, 2015
    Applicant: Microsoft Corporation
    Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
  • Publication number: 20150074027
    Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
    Type: Application
    Filed: September 6, 2013
    Publication date: March 12, 2015
    Applicant: Microsoft Corporation
    Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
  • Patent number: 8775416
    Abstract: Techniques are provided for predicting user interests based on information known about a specific context. A context-independent (CI) relevance function is generated from information gathered from many users and/or from many documents (or files). Information about a specific context (e.g., a particular user, a particular group of users, or type of content) is used to adapt the CI relevance function to the specific context. Based on a query submitted by a user, the adapted relevance function is used to identify results that the user would most likely be interested in. Results may include references to webpages and advertisements.
    Type: Grant
    Filed: January 9, 2008
    Date of Patent: July 8, 2014
    Assignee: Yahoo! Inc.
    Inventor: Larry P. Heck
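    Sketch: The abstract above adapts a context-independent relevance function to a specific context before ranking results for a query. A minimal Python sketch with a linear scoring function whose weights are blended toward context-specific weights; the features, weights, and blend factor are illustrative assumptions.
    ```python
    import numpy as np

    # Context-independent weights over simple relevance features
    # [text match, freshness, popularity], learned from many users and documents.
    GLOBAL_W = np.array([1.0, 0.2, 0.5])

    def adapt_weights(global_w, context_w, alpha=0.7):
        """Blend the context-independent weights with weights estimated for a
        specific context (a user, a group of users, or a type of content)."""
        return alpha * context_w + (1.0 - alpha) * global_w

    def rank(results, weights):
        """Score each result's feature vector with the (adapted) relevance function."""
        scored = [(r["id"], float(np.dot(weights, r["features"]))) for r in results]
        return sorted(scored, key=lambda x: x[1], reverse=True)

    results = [{"id": "news_article", "features": np.array([0.6, 0.9, 0.4])},
               {"id": "reference_page", "features": np.array([0.8, 0.1, 0.7])}]
    user_w = np.array([0.6, 1.2, 0.3])   # this user's history favors fresh content
    print("context-independent:", rank(results, GLOBAL_W))
    print("adapted to user:    ", rank(results, adapt_weights(GLOBAL_W, user_w)))
    ```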
  • Patent number: 7941383
    Abstract: Mechanisms model, detect, and predict user behavior as a user navigates the Web. In one embodiment, mechanisms model user behavior using predictive models, such as discrete Markov processes, where the user's behavior transitions between a finite number of states. The user's behavior state may not be directly observable (e.g., a user does not proactively indicate what behavior state he is in). Thus, the behavior state of a user is usually only indirectly observable. Mechanisms use predictive models, such as hidden Markov models, to predict the transitions in the user's behavior states.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: May 10, 2011
    Assignee: Yahoo! Inc.
    Inventor: Larry P. Heck
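    Sketch: The abstract above models a user's hidden behavior states with a hidden Markov model whose states are only indirectly observable through the user's actions. A minimal Python sketch of Viterbi decoding over such a model; the states, actions, and probabilities are illustrative assumptions.
    ```python
    import numpy as np

    states = ["browsing", "researching", "buying"]          # hidden behavior states
    actions = ["view_home", "read_review", "add_to_cart"]   # observable user actions

    start = np.array([0.6, 0.3, 0.1])
    trans = np.array([[0.7, 0.25, 0.05],    # P(next state | current state)
                      [0.1, 0.6, 0.3],
                      [0.1, 0.2, 0.7]])
    emit = np.array([[0.7, 0.2, 0.1],       # P(observed action | hidden state)
                     [0.1, 0.8, 0.1],
                     [0.1, 0.2, 0.7]])

    def viterbi(obs_idx):
        """Most likely sequence of hidden behavior states for an observed action sequence."""
        v = np.log(start) + np.log(emit[:, obs_idx[0]])
        back = []
        for o in obs_idx[1:]:
            scores = v[:, None] + np.log(trans)    # rows: previous state, cols: next state
            back.append(scores.argmax(axis=0))
            v = scores.max(axis=0) + np.log(emit[:, o])
        path = [int(v.argmax())]
        for b in reversed(back):
            path.append(int(b[path[-1]]))
        return [states[i] for i in reversed(path)]

    observed = [actions.index(a) for a in ["view_home", "read_review", "add_to_cart"]]
    print(viterbi(observed))   # -> ['browsing', 'researching', 'buying']
    ```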
  • Patent number: 7809664
    Abstract: A QA robot learns how to answer questions by observing human interaction over online social networks. The QA robot observes the way people ask questions and how other users respond to those questions. In addition, the QA robot observes which questions are most helpful and analyzes those questions to identify the characteristics that make them helpful. The QA robot then uses those observations to enhance the way it answers questions in the future.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: October 5, 2010
    Assignee: Yahoo! Inc.
    Inventor: Larry P. Heck
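    Sketch: The abstract above has a QA robot reuse what it learns from observed question-and-answer exchanges, favoring exchanges the community found helpful. A minimal Python sketch that answers a new question with the response to the most similar, most helpful observed question; the overlap similarity, vote weighting, and data are illustrative assumptions.
    ```python
    def tokens(text):
        return set(text.lower().split())

    def answer(question, observed_qa):
        """Reuse the response to the most similar previously observed question,
        weighting token-overlap similarity by how helpful the answer was rated."""
        q = tokens(question)
        def score(item):
            past_q, _past_a, helpful_votes = item
            overlap = len(q & tokens(past_q)) / max(len(q | tokens(past_q)), 1)
            return overlap * (1 + helpful_votes)
        return max(observed_qa, key=score)[1]

    # Q&A exchanges observed on a social network, with "helpful" vote counts.
    observed_qa = [
        ("how do i reset my wifi router", "hold the reset button for 10 seconds", 42),
        ("how do i reset my password", "use the forgot-password link on the login page", 5),
    ]
    print(answer("how can i reset my router", observed_qa))
    ```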