Natural Language Patents (Class 704/257)
  • Patent number: 10607602
    Abstract: An object is to provide a speech recognition device with improved recognition accuracy using characteristics of a neural network. A speech recognition device includes: an acoustic model 308 implemented by an RNN (recurrent neural network) for calculating, for each state sequence, the posterior probability of a state sequence in response to an observed sequence consisting of prescribed speech features obtained from a speech; a WFST 320 based on S-1HCLG calculating, for each word sequence, the posterior probability of a word sequence in response to a state sequence; and a hypothesis selecting unit 322 performing speech recognition of the speech signal based on a score calculated for each hypothesis of a word sequence corresponding to the speech signal, using the posterior probabilities calculated by the acoustic model 308 and the WFST 320 for the input observed sequence.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: March 31, 2020
    Assignee: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY
    Inventor: Naoyuki Kanda
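    Illustrative sketch: a minimal Python sketch of the score combination described above, ranking each hypothesis by its acoustic-model posterior plus its WFST posterior in the log domain; the hypotheses, log-probability values, and weighting are invented for illustration and do not come from the patent.
      # Toy hypotheses: acoustic log P(states | observations) from an RNN acoustic
      # model and log P(words | states) from a WFST decoder; values are made up.
      hypotheses = [
          {"words": "turn on the light", "acoustic_logp": -12.3, "wfst_logp": -4.1},
          {"words": "turn on the night", "acoustic_logp": -11.9, "wfst_logp": -7.8},
          {"words": "turn off the light", "acoustic_logp": -15.2, "wfst_logp": -4.0},
      ]

      def hypothesis_score(hyp, lm_weight=1.0):
          # Combine the two posteriors in the log domain.
          return hyp["acoustic_logp"] + lm_weight * hyp["wfst_logp"]

      best = max(hypotheses, key=hypothesis_score)
      print(best["words"])  # -> "turn on the light"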
  • Patent number: 10609047
    Abstract: An apparatus includes a memory and a hardware processor. The memory stores a threshold. The processor receives first, second, and third messages. The processor determines a number of occurrences of words in the messages. The processor also calculates probabilities that a word in the messages is a particular word and co-occurrence probabilities. The processor further calculates probability distributions of words in the messages. The processor also calculates probabilities based on the probability distributions. The processor compares these probabilities to a threshold to determine whether the first message is related to the second message and/or whether the first message is related to the third message.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: March 31, 2020
    Assignee: Bank of America Corporation
    Inventors: Marcus Adrian Streips, Arjun Thimmareddy
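    Illustrative sketch: a minimal Python sketch of deciding message relatedness by comparing per-message word probability distributions against a stored threshold, as the abstract outlines; the cosine-similarity measure, threshold value, and sample messages are assumptions made for illustration.
      from collections import Counter
      import math

      THRESHOLD = 0.5  # stands in for the threshold stored in memory

      def word_distribution(message):
          # Probability that a word in the message is a particular word.
          counts = Counter(message.lower().split())
          total = sum(counts.values())
          return {w: c / total for w, c in counts.items()}

      def similarity(a, b):
          # One possible comparison of two word probability distributions.
          vocab = set(a) | set(b)
          dot = sum(a.get(w, 0.0) * b.get(w, 0.0) for w in vocab)
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      d1, d2, d3 = map(word_distribution, ("wire transfer failed", "transfer failed again", "lunch menu today"))
      print("first related to second:", similarity(d1, d2) >= THRESHOLD)  # True
      print("first related to third:", similarity(d1, d3) >= THRESHOLD)   # False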
  • Patent number: 10599645
    Abstract: A speech recognition and natural language understanding system performs insertion, deletion, and replacement edits of tokens at positions with low probabilities according to both a forward and a backward statistical language model (SLM) to produce rewritten token sequences. Multiple rewrites can be produced with scores depending on the probabilities of tokens according to the SLMs. The rewritten token sequences can be parsed according to natural language grammars to produce further weighted scores. Token sequences can be rewritten iteratively using a graph-based search algorithm to find the best rewrite. Mappings of input token sequences to rewritten token sequences can be stored in a cache, and searching for a best rewrite can be bypassed by using cached rewrites when present. Analysis of various initial token sequences that produce the same new rewritten token sequence can be useful to improve natural language grammars.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: March 24, 2020
    Assignee: SoundHound, Inc.
    Inventors: Luke Lefebure, Pranav Singh
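    Illustrative sketch: a minimal Python sketch of the rewrite idea, replacing a token whose probability is low under both a forward and a backward statistical language model; the bigram models, toy corpus, and probability cutoff are assumptions, not the patent's actual method.
      from collections import defaultdict

      corpus = ["play some jazz music", "play some rock music", "play some loud music"]

      def bigram_counts(sentences, reverse=False):
          counts = defaultdict(lambda: defaultdict(int))
          for s in sentences:
              toks = s.split()[::-1] if reverse else s.split()
              for a, b in zip(toks, toks[1:]):
                  counts[a][b] += 1
          return counts

      def prob(counts, prev, tok):
          total = sum(counts[prev].values())
          return counts[prev][tok] / total if total else 0.0

      fwd, bwd = bigram_counts(corpus), bigram_counts(corpus, reverse=True)

      def rewrite(tokens, low=0.1):
          # Replace tokens that are improbable under both the forward and backward SLM.
          out = list(tokens)
          for i in range(1, len(tokens) - 1):
              if prob(fwd, tokens[i - 1], tokens[i]) < low and prob(bwd, tokens[i + 1], tokens[i]) < low:
                  candidates = fwd[tokens[i - 1]]
                  if candidates:
                      out[i] = max(candidates, key=candidates.get)
          return out

      print(rewrite("play sum jazz music".split()))  # -> ['play', 'some', 'jazz', 'music']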
  • Patent number: 10600418
    Abstract: Implementations relate to dynamically, and in a context-sensitive manner, biasing voice to text conversion. In some implementations, the biasing of voice to text conversions is performed by a voice to text engine of a local agent, and the biasing is based at least in part on content provided to the local agent by a third-party (3P) agent that is in network communication with the local agent. In some of those implementations, the content includes contextual parameters that are provided by the 3P agent in combination with responsive content generated by the 3P agent during a dialog that: is between the 3P agent, and a user of a voice-enabled electronic device; and is facilitated by the local agent. The contextual parameters indicate potential feature(s) of further voice input that is to be provided in response to the responsive content generated by the 3P agent.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: March 24, 2020
    Assignee: GOOGLE LLC
    Inventors: Barnaby James, Bo Wang, Sunil Vemuri, David Schairer, Ulas Kirazci, Ertan Dogrultan, Petar Aleksic
  • Patent number: 10592504
    Abstract: A system and method for information retrieval are presented. A client computer receives a natural language query comprising an array of tokens. A query processing server analyzes the natural language query (interpreted as a question) to identify a plurality of terms and a relationship between one or more pairs of the terms according to a knowledge model defining interrelationships between a plurality of entities. A set of assertions is constructed using the relationship between the pair of terms, and a query is executed against a knowledge base of frequently asked questions, corresponding answers, documents and/or data using the set of assertions to generate a set of results. The knowledge base identifies a plurality of items, each of which is associated with at least one annotation identifying at least one of the entities in the knowledge model. The set of results is transmitted to the client computer.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: March 17, 2020
    Assignee: CAPRICORN HOLDINGS PTE, LTD.
    Inventors: Carlos Ruiz Moreno, Sinuhé Arroyo
  • Patent number: 10592575
    Abstract: A method of inferring user intent in search input in a conversational interaction system is disclosed. A method of inferring user intent in a search input includes providing a user preference signature that describes preferences of the user, receiving search input from the user intended by the user to identify at least one desired item, and determining that a portion of the search input contains an ambiguous identifier. The ambiguous identifier is intended by the user to identify, at least in part, a desired item. The method further includes inferring a meaning for the ambiguous identifier based on matching portions of the search input to the preferences of the user described by the user preference signature and selecting items from a set of content items based on comparing the search input and the inferred meaning of the ambiguous identifier with metadata associated with the content items.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: March 17, 2020
    Assignee: VEVEO, INC.
    Inventors: Rakesh Barve, Murali Aravamudan, Sashikumar Venkataraman, Girish Welling
  • Patent number: 10593352
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting an end of a query are disclosed. In one aspect, a method includes the actions of receiving audio data that corresponds to an utterance spoken by a user. The actions further include applying, to the audio data, an end of query model. The actions further include determining a confidence score that reflects a likelihood that the utterance is a complete utterance. The actions further include comparing the confidence score that reflects the likelihood that the utterance is a complete utterance to a confidence score threshold. The actions further include determining whether the utterance is likely complete or likely incomplete. The actions further include providing, for output, an instruction to (i) maintain a microphone that is receiving the utterance in an active state or (ii) deactivate the microphone that is receiving the utterance.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: March 17, 2020
    Assignee: Google LLC
    Inventors: Gabor Simko, Maria Carolina Parada San Martin, Sean Matthew Shannon
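    Illustrative sketch: a minimal Python sketch of the thresholded end-of-query decision the abstract describes; the confidence scores and the 0.8 threshold are invented for illustration, and a real end-of-query model would produce the score from the audio data.
      def end_of_query_decision(confidence, threshold=0.8):
          # Compare the completeness confidence to the threshold and decide on the microphone.
          if confidence >= threshold:
              return "deactivate_microphone"   # utterance is likely complete
          return "keep_microphone_active"      # utterance is likely incomplete

      print(end_of_query_decision(0.93))  # e.g. "call mom"            -> deactivate_microphone
      print(end_of_query_decision(0.21))  # e.g. "set a timer for ..." -> keep_microphone_active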
  • Patent number: 10586399
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in a script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: March 10, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Patent number: 10586567
    Abstract: Example techniques may involve managing playback of media content by a playback device. In an example implementation, a playback device receives, via the network interface from a control device of the media playback system, an instruction to queue a container of audio tracks into a queue for playback by the playback device, wherein the container of audio tracks consists of: (a) an album, (b) a playlist, or (c) an internet radio station. While the playback device is playing back the queue and before each audio track of the playlist is played back, the playback device determines whether the respective audio track is associated with a negative preference. If the respective audio track is associated with the negative preference, the playback device advances playback over the respective audio track to the next audio track within the queue.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: March 10, 2020
    Assignee: Sonos, Inc.
    Inventor: Yean-Nian W. Chen
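    Illustrative sketch: a minimal Python sketch of advancing playback over queued tracks that carry a negative preference, as the abstract outlines; the queue contents and preference set are invented examples.
      queue = ["Track A", "Track B", "Track C", "Track D"]
      negative_preference = {"Track B"}  # tracks the user has marked with a negative preference

      def play_queue(queue, disliked):
          for track in queue:
              # Check the preference before each track plays; skip to the next track if negative.
              if track in disliked:
                  continue
              print("playing", track)

      play_queue(queue, negative_preference)  # plays Track A, Track C, Track D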
  • Patent number: 10580408
    Abstract: A speech recognition platform configured to receive an audio signal that includes speech from a user and perform automatic speech recognition (ASR) on the audio signal to identify ASR results. The platform may identify: (i) a domain of a voice command within the speech based on the ASR results and based on context information associated with the speech or the user, and (ii) an intent of the voice command. In response to identifying the intent, the platform may perform a corresponding action, such as streaming audio to the device, setting a reminder for the user, purchasing an item on behalf of the user, making a reservation for the user or launching an application for the user. The speech recognition platform, in combination with the device, may therefore facilitate efficient interactions between the user and a voice-controlled device.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: March 3, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Gregory Michael Hart, Peter Paul Henri Carbon, John Daniel Thimsen, Vikram Kumar Gundeti, Scott Ian Blanksteen, Allan Timothy Lindsay, Frederic Johan Georges Deramat
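    Illustrative sketch: a minimal Python sketch of routing an ASR result to a domain and intent and then performing the corresponding action; the keyword rules and action table are stand-ins for the platform's actual models and are not from the patent.
      def handle_asr_result(asr_text, context):
          # Extremely simplified domain/intent identification from the ASR text and context.
          if "remind" in asr_text:
              domain, intent = "reminders", "set_reminder"
          elif "play" in asr_text:
              domain, intent = "music", "stream_audio"
          elif "buy" in asr_text or "order" in asr_text:
              domain, intent = "shopping", "purchase_item"
          else:
              domain, intent = "general", "launch_application"

          actions = {
              "set_reminder": lambda: f"reminder set for {context.get('user')}",
              "stream_audio": lambda: "streaming audio to the device",
              "purchase_item": lambda: "purchasing item on behalf of the user",
              "launch_application": lambda: "launching application",
          }
          return domain, intent, actions[intent]()

      print(handle_asr_result("play some jazz", {"user": "alex"}))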
  • Patent number: 10580403
    Abstract: A method for controlling operation of an agricultural machine and system thereof are disclosed. The method may comprise providing a portable device that has an input device, a processing unit, a storage unit, an output device, and a transceiver device configured for wireless data transmission; receiving a voice control command over a microphone device of the input device of the portable device; determining command text data from the voice control command by processing the voice control command by a speech recognition application running on the processing unit of the portable device; providing machine control signals assigned to a machine control function in a control device of an agricultural machine located remotely from the portable device; and controlling the operation of the agricultural machine according to the machine control signals.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: March 3, 2020
    Assignee: Kverneland Group Mechatronics B.V.
    Inventors: Peter van der Vlugt, Shay Navon
  • Patent number: 10573037
    Abstract: A method and apparatus for training and guiding users comprising generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene, correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of a current activity of the user, reasoning, based on the task understanding and a user's current state, a next step for advancing the user towards completing one of the one or more goals of the task understanding and overlaying the scene with an augmented reality view comprising one or more visual and audio representation of the next step to the user.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: February 25, 2020
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil
  • Patent number: 10573309
    Abstract: Disclosed is the technology for dynamic and intelligent generation of dialog recommendations for the users of chat information systems based on multiple criteria. An example method may include receiving a speech-based user input, recognizing at least a part of the speech-based user input to generate a recognized input, and providing at least one response to the recognized input. The method may further include identifying at least one triggering event, generating at least one dialog recommendation based at least in part on the identification, and presenting the at least one dialog recommendation to a user via a user device.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: February 25, 2020
    Assignee: GOOGLE LLC
    Inventors: Ilya Gennadyevich Gelfenbeyn, Artem Goncharuk, Ilya Andreevich Platonov, Pavel Aleksandrovich Sirotin, Olga Aleksandrovna Gelfenbeyn
  • Patent number: 10572801
    Abstract: Systems and methods for implementing an artificially intelligent virtual assistant include collecting a user query; using a competency classification machine learning model to generate a competency label for the user query; using a slot identification machine learning model to segment the text of the query and label each of the slots of the query; generating a slot value for each of the slots of the query; generating a handler for each of the slot values; using the slot values to: identify an external data source relevant to the user query, fetch user data from the external data source, and apply one or more operations to the query to generate response data; and using the response data to generate a response to the user query.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: February 25, 2020
    Assignee: Clinc, Inc.
    Inventors: Jason Mars, Lingjia Tang, Michael Laurenzano, Johann Hauswald, Parker Hill
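    Illustrative sketch: a minimal Python sketch of the competency-classification and slot-filling pipeline the abstract walks through; the banking example, slot labels, and external data source are assumptions made for illustration.
      def classify_competency(query):
          # Stand-in for the competency classification machine learning model.
          return "account_balance" if "balance" in query else "unknown"

      def identify_slots(query):
          # Stand-in for the slot identification model: label segments of the query.
          slots = {}
          for token in query.split():
              if token in {"checking", "savings"}:
                  slots["account_type"] = token
          return slots

      def fetch_external_data(slots):
          # Pretend external data source keyed by a slot value.
          balances = {"checking": 1204.52, "savings": 8400.00}
          return balances.get(slots.get("account_type"))

      def respond(query):
          competency = classify_competency(query)
          slots = identify_slots(query)
          amount = fetch_external_data(slots)
          return f"[{competency}] Your {slots.get('account_type')} balance is ${amount:,.2f}"

      print(respond("what is my checking balance"))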
  • Patent number: 10571288
    Abstract: Techniques for providing a trajectory route to multiple geographical locations of interest are described. This disclosure describes receiving global positioning system (GPS) logs associated with respective individual devices, each of the GPS logs including trajectories connecting a set of geographical locations previously visited by an individual of a respective individual device. A trajectory route service receives a request for a trajectory connecting a set of geographical locations of interest specified by a user. The trajectory route service calculates a proximal similarity between (1) the set of geographical locations of interest specified by the user, and (2) respective sets of geographical locations from the GPS logs. The trajectory route service constructs the requested trajectory with use of at least one of the trajectories from the GPS logs determined at least in part according to the calculated proximal similarities.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yu Zheng, Zaiben Chen, Xing Xie
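    Illustrative sketch: a minimal Python sketch of scoring GPS logs by proximal similarity to a requested set of locations and picking the best log to construct the trajectory from; the inverse-distance score and coordinates are invented for illustration.
      import math

      def proximal_similarity(requested, logged):
          # Sum of inverse nearest-neighbor distances between requested and logged locations.
          score = 0.0
          for point in requested:
              nearest = min(math.dist(point, logged_point) for logged_point in logged)
              score += 1.0 / (1.0 + nearest)
          return score

      gps_logs = {
          "log_1": [(0, 0), (1, 1), (2, 2)],
          "log_2": [(5, 5), (6, 7), (9, 9)],
      }
      locations_of_interest = [(0.2, 0.1), (2.1, 1.9)]

      best_log = max(gps_logs, key=lambda k: proximal_similarity(locations_of_interest, gps_logs[k]))
      print("construct trajectory from", best_log)  # -> log_1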
  • Patent number: 10565984
    Abstract: Techniques related to coding data including techniques for speech recognition using a dynamic dictionary are generally described.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventor: Vadim Sukhomlinov
  • Patent number: 10565995
    Abstract: A network monitor system collects log entries from network appliances in the data network, where each log entry includes a quantity context, a first time context, a first name context, and a value of the quantity context. The network monitor system receives a spoken question input by a user and processes the spoken question to determine a question context in the spoken question. The question context includes a second name context, a second time context, and a quantity entity context. The network monitor system compares the question context with one or more given log entries. For each match, the network monitor system stores the quantity context and the value of the quantity context in the given log entry as a result entry in a result entries list. The network monitor system composes a response according to the result entries and outputs the response for playing to the user.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: February 18, 2020
    Assignee: TP Lab, Inc.
    Inventors: Chi Fai Ho, John Chiong
  • Patent number: 10559308
    Abstract: A system determines user intent from text. A conversation element is received. An intent is determined by matching a domain independent relationship and a domain dependent term determined from the received conversation element to an intent included in an intent database that stores a plurality of intents and by inputting the matched intent into a trained classifier that computes a likelihood that the matched intent is the intent of the received conversation element. An action is determined based on the determined intent. A response to the received conversation element is generated based on the determined action and output.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: February 11, 2020
    Assignee: SAS Institute Inc.
    Inventors: Jared Michael Dean Smythe, David Blake Styles, Richard Welland Crowell
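    Illustrative sketch: a minimal Python sketch of matching a domain-independent relationship and a domain-dependent term against an intent database and then scoring the match with a classifier; the extraction rules, intent table, and likelihood values are toy assumptions, not the patent's models.
      INTENT_DATABASE = {
          ("located_in", "branch"): "find_branch",
          ("amount_of", "balance"): "check_balance",
      }

      def extract(conversation_element):
          # Toy extraction of a domain-independent relationship and a domain-dependent term.
          text = conversation_element.lower()
          relationship = "located_in" if "where" in text else "amount_of"
          term = "branch" if "branch" in text else "balance"
          return relationship, term

      def classifier_likelihood(intent, conversation_element):
          # Stand-in for the trained classifier that scores the matched intent.
          return 0.9 if intent.split("_")[-1] in conversation_element else 0.3

      def determine_intent(conversation_element, min_likelihood=0.5):
          matched = INTENT_DATABASE.get(extract(conversation_element))
          if matched and classifier_likelihood(matched, conversation_element) >= min_likelihood:
              return matched
          return None

      print(determine_intent("where is the nearest branch"))  # -> find_branch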
  • Patent number: 10553208
    Abstract: Artificial intelligence is introduced into an electronic meeting context to perform various tasks before, during, and/or after electronic meetings. The artificial intelligence may analyze a wide variety of data such as data pertaining to other electronic meetings, data pertaining to organizations and users, and other general information pertaining to any topic. Capability is also provided to create, manage, and enforce meeting rules templates that specify requirements and constraints for various aspects of electronic meetings. Embodiments include improved approaches for translation and transcription using multiple translation/transcription services. Embodiments also include using sensors in conjunction with interactive whiteboard appliances to perform person detection, person identification, attendance tracking, and improved meeting start.
    Type: Grant
    Filed: October 9, 2017
    Date of Patent: February 4, 2020
    Assignee: RICOH COMPANY, LTD.
    Inventors: Steven Nelson, Hiroshi Kitada, Lana Wong
  • Patent number: 10546069
    Abstract: A natural language processing system identifies command elements in a text natural language command and, for each command element, accesses a playlist access matrix and identifies any playlist pointer pairs associated therein with that command element. The natural language processing system then identifies whether a first playlist pointer element in any of those playlist pointer pairs indicates a current playlist pointer best match and, if so, updates a playlist entry identifier with a second playlist pointer element in the playlist pointer pair that includes that first playlist pointer element. When the natural language processing system determines that all of the command elements have been considered, it uses the playlist entry identifier to identify a computing language command in a command playlist, and executes the computing language command on a target element in the text natural language command based on an action element in the text natural language command.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: January 28, 2020
    Assignee: Dell Products L.P.
    Inventor: Mark Steven Sanders
  • Patent number: 10546067
    Abstract: Provided are systems and methods for creating custom dialog system engines. The system comprises a dialog system interface installed on a first server or a user device and a platform deployed on a second server. The platform is configured to receive dialog system entities and intents associated with a developer profile and associate the dialog system entities with the dialog system intents to form a custom dialog system engine associated with the dialog system interface. The web platform receives a user request from the dialog system interface, activates the custom dialog system engine based on identification, and retrieves the dialog system entities and intents. The user request is processed by applying the dialog system entities and intents to generate a response to the user request. The response is sent to the dialog system interface.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: January 28, 2020
    Assignee: GOOGLE LLC
    Inventors: Ilya Gelfenbeyn, Artem Goncharuk, Pavel Sirotin
  • Patent number: 10534847
    Abstract: Devices, systems, and methods for automatically creating a document. In one example, the system and method perform or include capturing, with a web extension associated with a word-processing application, implicitly-tagged content and explicitly-tagged content displayed on a web browser, along with tags associated with the implicitly-tagged content and the explicitly-tagged content; receiving, with a speech-to-text interface, natural-language audio instruction associated with generating a document; generating, with a natural-language processor, a plain-text command associated with the natural-language audio instruction; retrieving personalized content based on the plain-text command; and organizing, with a content organizer, the personalized content based on one or more criteria selected from a group consisting of: page rank of content displayed on the web browser, a source of the content, an authoring style, and a document template.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Om Krishna
  • Patent number: 10528669
    Abstract: A method and device for extracting causal text from natural language sentences are disclosed. The method includes determining, by a computing device, a plurality of parameters for each target word in a sentence inputted by a user. The method further includes processing, for each target word, by the computing device, an input vector comprising the plurality of parameters for a causal classifier neural network. The method includes identifying, by the computing device, causal tags associated with each target word in the sentence based on processing of the associated input vector. The method includes extracting, by the computing device, the causal text from the sentence based on the causal tags associated with each target word in the sentence. The method further includes providing, by the computing device, a response to the sentence inputted by the user based on the causal text extracted for the sentence.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: January 7, 2020
    Assignee: Wipro Limited
    Inventors: Arindam Chatterjee, Kartik Subodh Ballal
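    Illustrative sketch: a minimal Python sketch of building a per-target-word input vector, tagging each word with a causal tag, and extracting the causal text; the feature set and the rule-based stand-in for the causal classifier neural network are assumptions made for illustration.
      def features_for(tokens, i):
          # Per-target-word parameters (here just simple string features).
          return {
              "word": tokens[i],
              "prev": tokens[i - 1] if i > 0 else "<s>",
              "next": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
          }

      def causal_classifier(features):
          # Stand-in for the causal classifier neural network: tag CAUSE / EFFECT / O.
          if features["next"] == "because":
              return "EFFECT"
          if features["prev"] == "because":
              return "CAUSE"
          return "O"

      def extract_causal_text(sentence):
          tokens = sentence.lower().split()
          tags = [causal_classifier(features_for(tokens, i)) for i in range(len(tokens))]
          return {tag: [t for t, g in zip(tokens, tags) if g == tag] for tag in ("CAUSE", "EFFECT")}

      print(extract_causal_text("the flight was delayed because thunderstorms hit"))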
  • Patent number: 10521508
    Abstract: Provided is a process for extracting conveyance records from unstructured text documents, the process including: obtaining, with one or more processors, a plurality of documents describing, in unstructured form, one or more conveyances of interest in real property; determining, with one or more processors, for each of the documents, a respective jurisdiction; selecting, with one or more processors, from a plurality of language processing models for the English language, a respective language processing model for each of the documents based on the respective determined jurisdiction; extracting, with one or more processors, for each of the documents, a plurality of structured conveyance records from each of the plurality of documents by applying the language processing model selected for the respective document based on the jurisdiction associated with the document; and storing, with one or more processors, the extracted, structured conveyance record in memory.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: December 31, 2019
    Assignee: TitleFlow LLC
    Inventors: David T. Bateman, Aaron Phillips, Andrew E. Plagens, J. Charles Drennan, Wendell H. Langdon
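    Illustrative sketch: a minimal Python sketch of selecting a language processing model per document based on its determined jurisdiction and extracting a structured record; the jurisdiction detection, the two toy models, and the sample documents are invented for illustration.
      def determine_jurisdiction(text):
          # Toy jurisdiction detection from the unstructured text.
          return "TX" if "Texas" in text else "OK"

      def texas_model(text):
          return {"jurisdiction": "TX", "grantor": text.split(" to ")[0].replace("Conveyance from ", "")}

      def oklahoma_model(text):
          return {"jurisdiction": "OK", "grantor": text.split(" grants ")[0]}

      LANGUAGE_MODELS = {"TX": texas_model, "OK": oklahoma_model}

      documents = [
          "Conveyance from A. Smith to B. Jones, Harris County, Texas.",
          "C. Brown grants the described tract to D. White, Tulsa County.",
      ]

      records = [LANGUAGE_MODELS[determine_jurisdiction(doc)](doc) for doc in documents]
      print(records)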
  • Patent number: 10521514
    Abstract: An apparatus for notification of speech of interest to a user includes a voice analyzer configured to recognize speech, evaluate a relevance between a result of the speech recognition and a determined user's topic of interest, and determine whether to provide a notification; and an outputter configured to, in response to the voice analyzer determining to provide the notification, generate and output a notification message.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: December 31, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ji Hyun Lee, Young Sang Choi
  • Patent number: 10515093
    Abstract: A method implements a virtual machine for interactive visual analysis. The method receives a data visualization data flow graph, which is a directed graph including data nodes and transform nodes. Each transform node specifies a set of inputs for retrieval, where each input corresponds to a data node. Each transform node also specifies a transform operator that identifies an operation to be performed on the inputs. Some transform nodes specify (a) a set of outputs corresponding to respective data nodes and (b) a function for use in performing the operation of the transform node. The method traverses the data flow graph according to directions of arcs between nodes in the data flow graph, thereby retrieving data corresponding to each data node and executing the respective transformation operator specified for each of the transform nodes. This generates a data visualization according to transform nodes that specify graphical rendering.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: December 24, 2019
    Assignee: Tableau Software, Inc.
    Inventor: Scott Sherman
  • Patent number: 10515628
    Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 24, 2019
    Assignee: VB Assets, LLC
    Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
  • Patent number: 10510013
    Abstract: In implementations of the subject matter described herein, each token containing an element in the training data is sampled according to a factorization strategy during training. Instead of using a single proposal, the property value of the target element located at the token being scanned is iteratively updated one or more times based on a combination of an element proposal and a context proposal. The element proposal tends to accept a value that is popular for the target element independently of the current piece of data, while the context proposal tends to accept a value that is popular in the context of the target data or popular for the element itself. The proposed model training approach can converge quite efficiently.
    Type: Grant
    Filed: July 16, 2015
    Date of Patent: December 17, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinhui Yuan, Tie-Yan Liu
  • Patent number: 10510341
    Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: December 17, 2019
    Assignee: VB Assets, LLC
    Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
  • Patent number: 10504508
    Abstract: A dialog management unit (31) selects a response template corresponding to a dialog state with a user and outputs a term symbol included in the response template to a comprehension level estimating unit (30). The comprehension level estimating unit (30) outputs the user's comprehension level of the input term symbol to the dialog management unit (31). A response generating unit (32) generates a response sentence on the basis of the response template selected by the dialog management unit (31), adds an explanatory sentence to the response sentence depending on the user's comprehension level of the term input from the dialog management unit (31), and outputs the response sentence.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: December 10, 2019
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Yoichi Fujii, Yusuke Koji, Keisuke Watanabe
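    Illustrative sketch: a minimal Python sketch of appending an explanatory sentence to a templated response when the user's estimated comprehension of a term is low; the glossary, comprehension scores, and 0.5 threshold are assumptions made for illustration.
      GLOSSARY = {"roaming": "using your phone on another carrier's network"}

      def comprehension_level(user_profile, term):
          # Stand-in for the comprehension level estimating unit (0.0 - 1.0).
          return user_profile.get(term, 0.0)

      def generate_response(template, term, user_profile, threshold=0.5):
          response = template.format(term=term)
          # Add the explanatory sentence only when estimated comprehension is low.
          if comprehension_level(user_profile, term) < threshold:
              response += f" ({term}: {GLOSSARY[term]})"
          return response

      template = "Your plan includes free {term} this month."
      print(generate_response(template, "roaming", {"roaming": 0.2}))  # explanation added
      print(generate_response(template, "roaming", {"roaming": 0.9}))  # no explanation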
  • Patent number: 10497367
    Abstract: The customization of language modeling components for speech recognition is provided. A list of language modeling components may be made available by a computing device. A hint may then be sent to a recognition service provider for combining the multiple language modeling components from the list. The hint may be based on a number of different domains. A customized combination of the language modeling components based on the hint may then be received from the recognition service provider.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Levit, Hernan Guelman, Shuangyu Chang, Sarangarajan Parthasarathy, Benoit Dumoulin
  • Patent number: 10497364
    Abstract: In some implementations, an utterance is determined to include a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword. In response to determining that an utterance includes a particular user speaking a hotword based at least on a first set of samples of the particular user speaking the hotword, at least a portion of the utterance is stored as a new sample. A second set of samples of the particular user speaking the utterance is obtained, where the second set of samples includes the new sample and less than all the samples in the first set of samples. A second utterance is determined to include the particular user speaking the hotword based at least on the second set of samples of the user speaking the hotword.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Diego Melendo Casado
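    Illustrative sketch: a minimal Python sketch of verifying a hotword utterance against a first set of speaker samples, then forming a second set that contains the newly accepted sample plus fewer than all of the original samples; the embedding vectors and dot-product score are stand-ins for a real speaker model.
      def dot(a, b):
          return sum(x * y for x, y in zip(a, b))

      def verify_hotword(utterance_embedding, samples, threshold=0.75):
          # Accept the utterance if it is close enough to the stored speaker samples.
          score = sum(dot(utterance_embedding, s) for s in samples) / len(samples)
          return score >= threshold

      first_set = [[0.90, 0.10], [0.85, 0.20], [0.88, 0.15]]  # enrollment samples
      utterance = [0.87, 0.18]

      if verify_hotword(utterance, first_set):
          # Store the accepted utterance as a new sample and keep only part of the first set.
          second_set = [utterance] + first_set[:2]
          print("second utterance accepted:", verify_hotword([0.86, 0.17], second_set))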
  • Patent number: 10490185
    Abstract: A method and system for providing dynamic conversation between an application and a user is discussed. The method includes utilizing a computing device to receive a requirement input from the user for the application. The method further includes determining a goal of the user based on the requirement input. Based on the goal, a plurality of conversation threads is initiated with the user, wherein each of the plurality of conversation threads has a degree of association with the goal. Thereafter, a plurality of slots is dynamically generated based on the goal and the plurality of conversation threads. A slot of the plurality of slots stores a data value corresponding to the requirement input of the user.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: November 26, 2019
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Meenakshi Sundaram Murugeshan
  • Patent number: 10490187
    Abstract: Systems and processes for operating a digital assistant are provided. In one example process, a speech input is received from a user. A user intent is determined based on the speech input. Determining the user intent includes generating text based on the speech input, performing natural language processing of the text, and determining the user intent based on a result of the natural language processing. In accordance with the user intent, status information associated with at least one of the one or more electronic devices is requested. The status information associated with the at least one of the one or more electronic devices is received. A spoken output is generated and represents the status information associated with the at least one of the one or more electronic devices. The spoken output is caused to be provided to the user.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: November 26, 2019
    Assignee: Apple Inc.
    Inventors: Peter Allan Laurens, Christopher Verwymeren, Susan L. Booker, Jonathan J. Moore, Roshni Malani, Benjamin S. Phipps
  • Patent number: 10490306
    Abstract: Medical information is communicated between different entities. Personalized models of people's understanding of a medical field are created. Role information is used to create the personalized model. More information than mere role may be used for at least one of the personalized models, such as information on past medical history. The personalized models link to various subsets of base medical ontologies for one or more medical subjects. The concepts and relationships in these ontologies formed by the linking may be matched, providing a translation from one personalized model to another. The terminology with similar or the same concepts and/or relationships is output for a given user based on their model.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: November 26, 2019
    Assignee: CERNER INNOVATION, INC.
    Inventor: John D. Haley
  • Patent number: 10482885
    Abstract: A speech-processing system configured to determine entities corresponding to ambiguous words such as anaphora (“he,” “she,” “they,” etc.) included in an utterance. The system may associate incoming utterances with a speaker identification (ID), device ID, and other data. The system then tracks entities referred to in utterances so that if a later utterance includes an ambiguous entity reference, the system may take the speaker ID, device ID, etc. from the ambiguous reference, along with the text of the utterance and other data, and compare that information to previously mentioned entities (or other entities that may be relevant) to identify the entity mentioned in the ambiguous statement. Once the entity is determined, the system may then complete command processing of the utterance using the identified entity.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: November 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Michael Moniz
  • Patent number: 10475083
    Abstract: A method, apparatus, and computer program product are disclosed for updating a structure database. The method includes accessing a corpus of machine readable text generated based on a plurality of promotions, wherein each of the plurality of promotions comprises at least one promotion option associated with at least one service, and extracting features from the promotion options, the features being mapped to services associated with respective promotion options from which the features are extracted. The method further includes identifying one or more components associated with the extracted features, tagging, using a processor, each promotion option with every component associated with at least one feature of the promotion option, and updating a structure database using the tagged promotion options. A corresponding apparatus and computer program product are also provided.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: November 12, 2019
    Assignee: GROUPON, INC.
    Inventors: Mechie Nkengla, Kavita Kochar, Shafiq Shariff, Gaston L'Huillier, Rajesh Parekh, Logan Tyler Jennings
  • Patent number: 10474949
    Abstract: A method for classifying an object includes applying multiple confidence values to multiple objects. The method also includes determining a metric based on the multiple confidence values. The method further includes determining a classification of a first object from the multiple objects based on a knowledge-graph when the metric is above a threshold.
    Type: Grant
    Filed: October 30, 2014
    Date of Patent: November 12, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Somdeb Majumdar, Regan Blythe Towal, Sachin Subhash Talathi, David Jonathan Julian, Venkata Sreekanta Reddy Annapureddy
  • Patent number: 10460039
    Abstract: A method for controlling identification includes obtaining first text, which is text in a first language, obtaining second text, which is text in a second language obtained by translating the first text into the second language, obtaining correct labels, which indicate content of the first text, inputting the first text and the second text to an identification model common to the first and second languages, and updating the common identification model such that labels identified by the common identification model from the first text and the second text match the correct labels.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: October 29, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Hongjie Shi, Takashi Ushio, Mitsuru Endo, Katsuyoshi Yamagami
  • Patent number: 10460032
    Abstract: A method comprising receiving a first communication content directed to a user. The first communication content includes one or a combination of the following: content read by the user and content written by the user. The method also comprises generating tokens corresponding to the first communication content by applying natural language processing and generating a token frequency index for the user, based on the tokens generated from the first communication content. The method determines a lexicon reading level for the user, based on the token frequency index generated for the user. The lexicon reading level indicates a reading level of the user. The method adds the lexicon reading level to a lexicon profile of the user. The method modifies a second communication content by replacing tokens with synonyms of the tokens based on comparing the difficulty ratings of the tokens with the user's lexicon reading level.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: October 29, 2019
    Assignee: International Business Machines Corporation
    Inventors: Pasquale A. Catalano, Andrew G. Crimmins, Arkadiy O. Tsfasman, John S. Werner
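    Illustrative sketch: a minimal Python sketch of deriving a lexicon reading level from a user's token frequency index and replacing overly difficult tokens with synonyms in outgoing content; the difficulty ratings, synonym table, and sample text are invented for illustration.
      from collections import Counter

      DIFFICULTY = {"utilize": 0.8, "use": 0.2, "commence": 0.9, "start": 0.2, "report": 0.3}
      SYNONYMS = {"utilize": "use", "commence": "start"}

      def lexicon_reading_level(first_communication_content):
          # Token frequency index -> reading level: mean difficulty of the user's own tokens.
          freq = Counter(first_communication_content.lower().split())
          known = [DIFFICULTY[t] for t in freq if t in DIFFICULTY]
          return sum(known) / len(known) if known else 0.5

      def modify(second_communication_content, reading_level):
          out = []
          for token in second_communication_content.lower().split():
              # Replace a token whose difficulty rating exceeds the user's reading level.
              if DIFFICULTY.get(token, 0.0) > reading_level and token in SYNONYMS:
                  token = SYNONYMS[token]
              out.append(token)
          return " ".join(out)

      level = lexicon_reading_level("please start the report and use the template")
      print(modify("commence the report and utilize the template", level))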
  • Patent number: 10453454
    Abstract: Example implementations described herein are directed to a dialog system with self-learning natural language understanding (NLU), involving a client-server configuration. If the NLU result on the client is not confident, the NLU is performed again on the server. In the dialog system, the human user and the system communicate via speech or text information. Examples of such products include robots, interactive voice response systems (IVR) for call centers, voice-enabled personal devices, car navigation systems, smart phones, and voice input devices in work environments where the human operator cannot operate the devices by hand.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: October 22, 2019
    Assignee: HITACHI, LTD.
    Inventors: Takeshi Homma, Masahito Togami
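    Illustrative sketch: a minimal Python sketch of the client-server NLU fallback the abstract describes, redoing NLU on the server when the client result is not confident; the keyword models and confidence values are toy stand-ins.
      def client_nlu(text):
          # Small on-device model: returns (intent, confidence).
          if "light" in text:
              return "turn_on_light", 0.92
          return "unknown", 0.30

      def server_nlu(text):
          # Larger server-side model, used only when the client is not confident.
          return ("set_navigation_destination", 0.95) if "navigate" in text else ("unknown", 0.40)

      def understand(text, confidence_threshold=0.7):
          intent, confidence = client_nlu(text)
          if confidence < confidence_threshold:       # client result is not confident
              intent, confidence = server_nlu(text)   # redo NLU on the server
          return intent, confidence

      print(understand("turn on the light"))       # handled on the client
      print(understand("navigate to the office"))  # falls back to the server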
  • Patent number: 10453448
    Abstract: Embodiments of the present disclosure provide a method and a device for managing a dialogue based on artificial intelligence. The method includes the following. An optimum system action is determined from at least one candidate system action according to a current dialogue status feature, a candidate system action feature, and surrounding feedback information of the at least one candidate system action, based on a decision model. Since the current dialogue status corresponding to the current dialogue status feature includes uncertain results of natural language understanding, the at least one candidate system action acquired according to the current dialogue status also includes the uncertain results of natural language understanding.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: October 22, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yuan Gao, Daren Li, Dai Dai, Qiaoqiao She
  • Patent number: 10431202
    Abstract: Examples of the present disclosure describe systems and methods relating to conversation state management using frame tracking. In an example, a frame may represent one or more constraints (e.g., parameters, variables, or other information) received from or generated as a result of interactions with a user. Consequently, each frame may represent one or more states of an ongoing conversation. When the user provides new or different information, a new frame may be created to represent the now-current state of the conversation. The previous frame may be retained for later access by what is referred to herein as a “dialog agent,” which is the portion of the system that can search and use previous state-related information. When an utterance is received, a frame to which the utterance relates may be identified. Thus, the dialog agent may track multiple states simultaneously, thereby enabling conversation features that were not previously possible.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: October 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Justin Harris, Layla El Asri, Emery Fine, Rahul Mehrotra, Hannes Schulz, Shikhar Sharma, Jeremie Zumer
  • Patent number: 10430466
    Abstract: A computer-implemented method includes storing, by a computing device, a plurality of dialogs between user devices and an automated support application hosted by the computing device; determining, by the computing device, transitive relationships between the plurality of dialogs; updating, by the computing device, a question mapping based on the determined transitive relationships; and applying, by the computing device, the updated question mapping to a subsequent support dialog.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: October 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig M. Trim, John M. Ganci, Jr., Wing L. Leung, Kimberly G. Starks
  • Patent number: 10427306
    Abstract: Methods, systems, and apparatus for receiving a command for controlling a robot, the command referencing an object, receiving sensor data for a portion of an environment of the robot, identifying, from the sensor data, a gesture of a human that indicates a spatial region located outside of the portion of the environment described by the sensor data, accessing map data indicating locations of objects within a space, searching the map data for the object, wherein the search of the map data is restricted to the spatial region, determining, based at least on searching the map data for the object referenced in the command, that the object referenced in the command is present in the spatial region, and in response to determining that the object referenced in the command is present in the spatial region, controlling the robot to perform an action with respect to the object referenced in the command.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: October 1, 2019
    Assignee: X Development LLC
    Inventors: Michael Joseph Quinlan, Gabriel A. Cohen
  • Patent number: 10431216
    Abstract: Enhanced graphical user interfaces for transcription of audio and video messages is disclosed. Audio data may be transcribed, and the transcription may include emphasized words and/or punctuation corresponding to emphasis of user speech. Additionally, the transcription may be translated into a second language. A message spoken by a user depicted in one or more images of video data may also be transcribed and provided to one or more devices.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: October 1, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Sandra Lemon, Nancy Yi Liang
  • Patent number: 10417312
    Abstract: Disclosed is an information added document preparation device including: a selection unit configured to select an addition format to be used when predetermined additional information is added to an original document; and an information adding unit configured to prepare a document in which the predetermined additional information is added to the original document in the addition format selected by the selection unit, wherein the selection unit selects the addition format in which the document prepared by the information adding unit satisfies a predetermined layout condition, among a plurality of addition formats which are previously prepared.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: September 17, 2019
    Assignee: KONICA MINOLTA, INC.
    Inventor: Jun Kuroki
  • Patent number: 10418033
    Abstract: Configurable core domains of a speech processing system are described. A core domain output data format for a given command is originally configured with default content portions. When a user indicates additional content should be output for the command, the speech processing system creates a new output data format for the core domain. The new output data format is user specific and includes both default content portions as well as user preferred content portions.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: September 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Rohan Mutagi, Felix Wu, Rongzhou Shen, Neelam Satish Agrawal, Vibhunandan Gavini, Pablo Carballude Gonzalez
  • Patent number: 10418029
    Abstract: Method of selecting training text for a language model, method of training the language model using the training text, and computer and computer program for executing the methods. The present invention provides a method for selecting training text for a language model that includes: generating a template for selecting training text from a corpus in a first domain according to generation techniques of: (i) replacing one or more words in a word string selected from the corpus in the first domain with a special symbol representing any word or word string, and adopting the word string after replacement as a template for selecting the training text; and/or (ii) adopting the word string selected from the corpus in the first domain as the template for selecting the training text; and selecting text covered by the template as the training text from a corpus in a second domain different from the first domain.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: September 17, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nobuyasu Itoh, Gakuto Kurata, Masafumi Nishimura
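    Illustrative sketch: a minimal Python sketch of the two template-generation techniques and of selecting covered text from a second-domain corpus; the wildcard symbol, regular-expression matching, and sample corpora are assumptions made for illustration.
      import re

      WILDCARD = "<*>"  # special symbol representing any word or word string

      def make_templates(first_domain_corpus):
          # Technique (ii): keep the word string; technique (i): replace one word with the wildcard.
          templates = set()
          for sentence in first_domain_corpus:
              words = sentence.split()
              templates.add(sentence)
              for i in range(len(words)):
                  templates.add(" ".join(words[:i] + [WILDCARD] + words[i + 1:]))
          return templates

      def covered_by(template, sentence):
          pattern = "^" + re.escape(template).replace(re.escape(WILDCARD), r".+") + "$"
          return re.match(pattern, sentence) is not None

      first_domain = ["book a flight to boston"]
      second_domain = ["book a flight to osaka", "cancel my hotel in osaka"]

      templates = make_templates(first_domain)
      training_text = [s for s in second_domain if any(covered_by(t, s) for t in templates)]
      print(training_text)  # -> ['book a flight to osaka']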
  • Patent number: 10409916
    Abstract: A natural language processing system identifies an action element, a target element, and command element(s) in a text natural language command. For each identified command element, in the order it appears in the text natural language command, the natural language processing system accesses a playlist access matrix according to a matrix access counter to identify a playlist pointer associated with that command element, determines whether that playlist pointer indicates its associated command element is a best match relative to any other command elements that have already been considered and, if so, updates a playlist entry identifier with that playlist pointer and increments the matrix access counter. When all of the command elements have been considered, the natural language processing system uses the playlist entry identifier to identify a computing language command in a command playlist, and executes the computing language command on the target element based on the action element.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: September 10, 2019
    Assignee: Dell Products L.P.
    Inventor: Mark Steven Sanders