Patents by Inventor Larry P. Heck
Larry P. Heck has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220222491
Abstract: A method includes performing, using at least one processor of an electronic device, semantic probing on a pre-trained model using one or more textual utterances. Performing the semantic probing includes processing each of the one or more textual utterances to determine a performance score for one or more targeted hidden layers of the pre-trained model. Performing the semantic probing also includes selecting a subset of the targeted hidden layers based on a comparison of the performance score to a predetermined threshold. The method also includes reconstructing, using the at least one processor, the pre-trained model based on the semantic probing to generate a reconstructed model.
Type: Application
Filed: August 6, 2021
Publication date: July 14, 2022
Inventors: JongHo Shin, Larry P. Heck
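The layer-selection step the abstract describes can be pictured with a minimal sketch. The scores, threshold, and function names below are hypothetical illustrations, not taken from the patent:

```python
def select_layers(layer_scores, threshold):
    """Return indices of targeted hidden layers whose probe performance
    score meets or exceeds the predetermined threshold."""
    return [i for i, score in enumerate(layer_scores) if score >= threshold]

def reconstruct(layers, keep):
    """Rebuild the model as the ordered subset of retained layers."""
    return [layers[i] for i in keep]

# Example: probe scores for five targeted hidden layers (invented values).
scores = [0.42, 0.71, 0.55, 0.90, 0.38]
kept = select_layers(scores, threshold=0.5)
model = reconstruct(["L0", "L1", "L2", "L3", "L4"], kept)
print(kept)   # -> [1, 2, 3]
print(model)  # -> ['L1', 'L2', 'L3']
```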
-
Patent number: 11341945
Abstract: A method includes receiving a non-linguistic input associated with an input musical content. The method also includes, using a model that embeds multiple musical features describing different musical content and relationships between the different musical content in a latent space, identifying one or more embeddings based on the input musical content. The method further includes at least one of: (i) identifying stored musical content based on the one or more identified embeddings or (ii) generating derived musical content based on the one or more identified embeddings. In addition, the method includes presenting at least one of: the stored musical content or the derived musical content. The model is generated by training a machine learning system having one or more first neural network components and one or more second neural network components such that embeddings of the musical features in the latent space have a predefined distribution.
Type: Grant
Filed: December 5, 2019
Date of Patent: May 24, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Peter M. Bretan, Larry P. Heck, Hongxia Jin
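The retrieval branch of the claimed method (identifying stored musical content from identified embeddings) amounts to a nearest-neighbor lookup in the latent space. A minimal sketch, assuming cosine similarity and pre-computed embeddings; both choices are assumptions, since the abstract does not specify a distance measure:

```python
import numpy as np

def nearest_embedding(query_vec, stored):
    """Index of the stored musical embedding closest to the query
    embedding under cosine similarity in the latent space."""
    q = query_vec / np.linalg.norm(query_vec)
    s = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    return int(np.argmax(s @ q))

# Example with invented 2-D embeddings for three stored pieces.
stored = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.7, 0.7]])
print(nearest_embedding(np.array([0.1, 2.0]), stored))  # -> 1
```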
-
Publication number: 20210049989
Abstract: A method includes receiving a non-linguistic input associated with an input musical content. The method also includes, using a model that embeds multiple musical features describing different musical content and relationships between the different musical content in a latent space, identifying one or more embeddings based on the input musical content. The method further includes at least one of: (i) identifying stored musical content based on the one or more identified embeddings or (ii) generating derived musical content based on the one or more identified embeddings. In addition, the method includes presenting at least one of: the stored musical content or the derived musical content. The model is generated by training a machine learning system having one or more first neural network components and one or more second neural network components such that embeddings of the musical features in the latent space have a predefined distribution.
Type: Application
Filed: December 5, 2019
Publication date: February 18, 2021
Inventors: Peter M. Bretan, Larry P. Heck, Hongxia Jin
-
Patent number: 10055686
Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
Type: Grant
Filed: July 12, 2016
Date of Patent: August 21, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
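Once the trained DSSM has mapped the query and the documents into the common semantic space, the ranking step reduces to sorting by a similarity measure. A minimal sketch using cosine similarity (the similarity choice and the vector values are illustrative; the trained mapping itself is not reproduced here):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two semantic-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_semantic_similarity(query_vec, doc_vecs):
    """Order document indices by similarity to the query in the shared
    semantic space, most similar first."""
    return sorted(range(len(doc_vecs)),
                  key=lambda i: -cosine(query_vec, doc_vecs[i]))

# Example with invented 2-D semantic vectors.
q = np.array([1.0, 0.0])
docs = [np.array([0.0, 1.0]), np.array([0.9, 0.1]), np.array([0.5, 0.5])]
print(rank_by_semantic_similarity(q, docs))  # -> [1, 2, 0]
```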
-
Patent number: 9997157
Abstract: Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users.
Type: Grant
Filed: May 16, 2014
Date of Patent: June 12, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
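The personal-language-model idea (train only on queries that touch entities in the user's personalized knowledge source) can be sketched with a toy unigram model. The token-level entity matching and the maximum-likelihood estimate below are deliberate simplifications, not the patented method:

```python
from collections import Counter

def train_personal_unigram(query_log, knowledge_entities):
    """Build a unigram language model from only those logged queries that
    mention an entity in the user's personalized knowledge source."""
    counts = Counter()
    for query in query_log:
        tokens = query.lower().split()
        if any(t in knowledge_entities for t in tokens):
            counts.update(tokens)
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

# Example: only the two "beatles" queries contribute to the personal model.
log = ["play beatles", "weather today", "beatles albums"]
lm = train_personal_unigram(log, knowledge_entities={"beatles"})
print(lm["beatles"])  # -> 0.5
```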
-
Patent number: 9870356
Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
Type: Grant
Filed: February 13, 2014
Date of Patent: January 16, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
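A toy rendering of the two assignment paths the abstract distinguishes: deterministic labels from a knowledge resource, and inference from click-log data. The rule of borrowing the label of a labeled query that clicked through to the same URL is a hypothetical simplification of the patented inference, and all names below are invented:

```python
def assign_intents(queries, known_labels, click_log):
    """Label queries with known intents where available; otherwise infer
    an intent from click-log data by borrowing the label of a labeled
    query that was clicked through to the same URL."""
    url_to_intent = {click_log[q]: label for q, label in known_labels.items()
                     if q in click_log}
    intents = {}
    for q in queries:
        if q in known_labels:
            intents[q] = known_labels[q]                       # deterministic path
        else:
            intents[q] = url_to_intent.get(click_log.get(q),   # click-log path
                                           "new-intent")
    return intents

known = {"buy tickets": "purchase"}
clicks = {"buy tickets": "tix.example", "get tickets": "tix.example",
          "odd query": "other.example"}
print(assign_intents(["buy tickets", "get tickets", "odd query"], known, clicks))
```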
-
Patent number: 9679558
Abstract: Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge graph. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
Type: Grant
Filed: May 15, 2014
Date of Patent: June 13, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
-
Publication number: 20170116182
Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
Type: Application
Filed: December 20, 2016
Publication date: April 27, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
-
Patent number: 9558176
Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
Type: Grant
Filed: January 14, 2014
Date of Patent: January 31, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
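The NL-versus-KL distinction can be illustrated with a crude heuristic stand-in for the trained classifier. The feature set and thresholds below are invented for illustration; the patent instead trains the model from query click log data:

```python
# Invented function-word list and thresholds, purely for illustration.
FUNCTION_WORDS = {"what", "who", "how", "the", "is", "are", "of", "to", "me", "a"}

def looks_natural_language(query):
    """Heuristic stand-in for the classifier: NL queries tend to be longer
    and contain function words, while KL queries are terse content terms."""
    tokens = query.lower().split()
    function_count = sum(t in FUNCTION_WORDS for t in tokens)
    return len(tokens) >= 4 and function_count >= 1

print(looks_natural_language("what is the weather in seattle today"))  # -> True
print(looks_natural_language("seattle weather"))                       # -> False
```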
-
Patent number: 9519859
Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
Type: Grant
Filed: September 6, 2013
Date of Patent: December 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
-
Publication number: 20160321321
Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
Type: Application
Filed: July 12, 2016
Publication date: November 3, 2016
Applicant: Microsoft Technology Licensing, LLC
Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
-
Publication number: 20150370787
Abstract: Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or "turns" from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user.
Type: Application
Filed: June 18, 2014
Publication date: December 24, 2015
Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck
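The turn-prediction idea (estimate likely next intents from observed intent sequences, then prepare the matching language models) can be sketched as a toy bigram model over intents. The intent names and the bigram simplification are invented for illustration:

```python
from collections import Counter, defaultdict

def train_turn_model(sessions):
    """Estimate P(next intent | current intent) from session logs, where
    each session is a sequence of intent labels."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return {cur: {n: c / sum(nxts.values()) for n, c in nxts.items()}
            for cur, nxts in counts.items()}

def likely_next_turns(model, current_intent, k=2):
    """Top-k predicted next intents, e.g. to choose which language models
    to interpolate for recognizing the next utterance."""
    nxts = model.get(current_intent, {})
    return sorted(nxts, key=nxts.get, reverse=True)[:k]

sessions = [["find_movie", "buy_ticket"],
            ["find_movie", "find_showtimes", "buy_ticket"]]
model = train_turn_model(sessions)
print(likely_next_turns(model, "find_showtimes"))  # -> ['buy_ticket']
```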
-
Publication number: 20150332670
Abstract: Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge graph. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
Type: Application
Filed: May 15, 2014
Publication date: November 19, 2015
Applicant: Microsoft Corporation
Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
-
Publication number: 20150332672
Abstract: Systems and methods are provided for improving language models for speech recognition by personalizing knowledge sources utilized by the language models to specific users or user-population characteristics. A knowledge source, such as a knowledge graph, is personalized for a particular user by mapping entities or user actions from usage history for the user, such as query logs, to the knowledge source. The personalized knowledge source may be used to build a personal language model by training a language model with queries corresponding to entities or entity pairs that appear in usage history. In some embodiments, a personalized knowledge source for a specific user can be extended based on personalized knowledge sources of similar users.
Type: Application
Filed: May 16, 2014
Publication date: November 19, 2015
Applicant: Microsoft Corporation
Inventors: Murat Akbacak, Dilek Z. Hakkani-Tur, Gokhan Tur, Larry P. Heck, Benoit Dumoulin
-
Publication number: 20150227845
Abstract: Functionality is described herein for determining the intents of linguistic items (such as queries), to produce intent output information. For some linguistic items, the functionality deterministically assigns intents to the linguistic items based on known intent labels, which, in turn, may be obtained or derived from a knowledge graph or other type of knowledge resource. For other linguistic items, the functionality infers the intents of the linguistic items based on selection log data (such as click log data provided by a search system). In some instances, the intent output information may reveal new intents that are not represented by the known intent labels. In one implementation, the functionality can use the intent output information to train a language understanding model.
Type: Application
Filed: February 13, 2014
Publication date: August 13, 2015
Applicant: Microsoft Corporation
Inventors: Dilek Hakkani-Tür, Fethiye Asli Celikyilmaz, Larry P. Heck, Gokhan Tur, Yangfeng Ji
-
Publication number: 20150161107
Abstract: This disclosure pertains to a classification model, and to functionality for producing and applying the classification model. The classification model is configured to discriminate whether an input linguistic item (such as a query) corresponds to either a natural language (NL) linguistic item or a keyword language (KL) linguistic item. An NL linguistic item expresses an intent using a natural language, while a KL linguistic item expresses the intent using one or more keywords. In a training phase, the functionality produces the classification model based on query click log data or the like. In an application phase, the functionality may, among other uses, use the classification model to filter a subset of NL linguistic items from a larger set of items, and then use the subset of NL linguistic items to train a natural language interpretation model, such as a spoken language understanding model.
Type: Application
Filed: January 14, 2014
Publication date: June 11, 2015
Applicant: Microsoft Corporation
Inventors: Gokhan Tur, Fethiye Asli Celikyilmaz, Dilek Hakkani-Tür, Larry P. Heck
-
Publication number: 20150074027
Abstract: A deep structured semantic module (DSSM) is described herein which uses a model that is discriminatively trained based on click-through data, e.g., such that a conditional likelihood of clicked documents, given respective queries, is maximized, and a conditional likelihood of non-clicked documents, given the queries, is reduced. In operation, after training is complete, the DSSM maps an input item into an output item expressed in a semantic space, using the trained model. To facilitate training and runtime operation, a dimensionality-reduction module (DRM) can reduce the dimensionality of the input item that is fed to the DSSM. A search engine may use the above-summarized functionality to convert a query and a plurality of documents into the common semantic space, and then determine the similarity between the query and documents in the semantic space. The search engine may then rank the documents based, at least in part, on the similarity measures.
Type: Application
Filed: September 6, 2013
Publication date: March 12, 2015
Applicant: Microsoft Corporation
Inventors: Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alejandro Acero, Larry P. Heck
-
Patent number: 8775416
Abstract: Techniques for predicting user interests based on information known about a specific context are provided. A context-independent (CI) relevance function is generated from information gathered from many users and/or from many documents (or files). Information about a specific context (e.g., a particular user, a particular group of users, or type of content) is used to adapt the CI relevance function to the specific context. Based on a query submitted by a user, the adapted relevance function is used to identify results that the user would most likely be interested in. Results may include references to webpages and advertisements.
Type: Grant
Filed: January 9, 2008
Date of Patent: July 8, 2014
Assignee: Yahoo! Inc.
Inventor: Larry P. Heck
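The adaptation step can be pictured as blending the context-independent score with a context-specific one. The linear interpolation and the weight parameter below are an illustrative assumption, not the patented adaptation method:

```python
def adapted_relevance(ci_score, context_score, weight):
    """Blend the context-independent (CI) score with a context-specific
    score; `weight` (0..1) reflects how much the context should dominate."""
    return (1 - weight) * ci_score + weight * context_score

def rank_results(ci_scores, context_scores, weight):
    """Rank result ids by the adapted relevance function, best first."""
    return sorted(ci_scores,
                  key=lambda r: -adapted_relevance(ci_scores[r],
                                                   context_scores[r], weight))

# Example: with no context data (weight 0) "a" wins; with full context, "b".
ci = {"a": 0.9, "b": 0.5}
ctx = {"a": 0.1, "b": 0.95}
print(rank_results(ci, ctx, 0.0))  # -> ['a', 'b']
print(rank_results(ci, ctx, 1.0))  # -> ['b', 'a']
```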
-
Patent number: 7941383
Abstract: Mechanisms model, detect, and predict user behavior as a user navigates the Web. In one embodiment, mechanisms model user behavior using predictive models, such as discrete Markov processes, where the user's behavior transitions between a finite number of states. The user's behavior state may not be directly observable (e.g., a user does not proactively indicate what behavior state he is in). Thus, the behavior state of a user is usually only indirectly observable. Mechanisms use predictive models, such as hidden Markov models, to predict the transitions in the user's behavior states.
Type: Grant
Filed: December 21, 2007
Date of Patent: May 10, 2011
Assignee: Yahoo! Inc.
Inventor: Larry P. Heck
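A hidden Markov model of the kind the abstract mentions can be sketched directly: forward filtering tracks a belief over the hidden behavior state from the observed actions, and one transition step predicts the next state. All probabilities and the two-state, three-observation setup below are invented for illustration:

```python
import numpy as np

# Hypothetical two-state behavior HMM ("browsing" vs. "buying"); the state
# is hidden, and only page-level actions (0, 1, 2) are observed.
trans = np.array([[0.8, 0.2],    # P(next state | current state)
                  [0.3, 0.7]])
emit = np.array([[0.6, 0.3, 0.1],  # P(observation | state)
                 [0.1, 0.3, 0.6]])
start = np.array([0.9, 0.1])       # initial state distribution

def filter_states(observations):
    """Forward filtering: belief over the hidden state given observations."""
    belief = start * emit[:, observations[0]]
    belief /= belief.sum()
    for obs in observations[1:]:
        belief = (trans.T @ belief) * emit[:, obs]
        belief /= belief.sum()
    return belief

def predict_next_state(observations):
    """Predicted distribution over the next hidden behavior state."""
    return trans.T @ filter_states(observations)
```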
-
Patent number: 7809664
Abstract: A QA robot learns how to answer questions by observing human interaction over online social networks. The QA robot observes the way people ask questions and how other users respond to those questions. In addition, the QA robot observes which questions are most helpful and analyzes those questions to identify the characteristics of those questions that are most helpful. The QA robot then uses those observations to enhance the way it answers questions in the future.
Type: Grant
Filed: December 21, 2007
Date of Patent: October 5, 2010
Assignee: Yahoo! Inc.
Inventor: Larry P. Heck