Natural Language Patents (Class 704/257)
  • Patent number: 10967520
    Abstract: Methods, systems, and apparatus for receiving a command for controlling a robot, the command referencing an object, receiving sensor data for a portion of an environment of the robot, identifying, from the sensor data, a gesture of a human that indicates a spatial region located outside of the portion of the environment described by the sensor data, searching map data for the object, determining, based at least on searching the map data for the object referenced in the command, that the object referenced in the command is present in the spatial region, and in response to determining that the object referenced in the command is present in the spatial region, controlling the robot to perform an action with respect to the object referenced in the command.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 6, 2021
    Assignee: X Development LLC
    Inventors: Michael Joseph Quinlan, Gabriel A. Cohen
  • Patent number: 10971134
    Abstract: A computer-implemented method comprising: receiving, by a computing device, an input phrase from a text generator; determining, by the computing device, a complexity level for an audience; generating, by the computing device, a plurality of target phrases including a modification of the input phrase; generating, by the computing device, respective readability scores for each of the plurality of target phrases; mapping, by the computing device, the plurality of the target phrases to the target audience complexity level to select a particular target phrase of the plurality of the target phrases; and outputting, by the computing device, the selected particular target phrase to a text-to-speech (T2S) component to cause the T2S component to output the selected particular target phrase as audible speech.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 6, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig M. Trim, John M. Ganci, Jr., Aaron K. Baughman, Veronica Wyatt
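A minimal sketch of the selection flow described in the entry above (patent 10971134), assuming a crude Flesch-style readability score and a numeric audience complexity level; the scoring formula, target level, and candidate phrases are illustrative, not taken from the patent:

```python
import re

def readability_score(phrase: str) -> float:
    """Crude Flesch-style score: higher means easier to read (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", phrase)
    sentences = max(1, len(re.findall(r"[.!?]", phrase)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

def select_phrase(target_phrases: list[str], audience_level: float) -> str:
    """Map each candidate phrase to the audience complexity level and pick the closest."""
    scored = [(abs(readability_score(p) - audience_level), p) for p in target_phrases]
    return min(scored)[1]

# Hypothetical modifications of an input phrase, ranked against an "easy reading" audience.
candidates = [
    "Precipitation is anticipated subsequent to midday.",
    "Rain is expected after noon.",
    "It will probably rain this afternoon.",
]
print(select_phrase(candidates, audience_level=80.0))  # the easier phrasing wins
```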
  • Patent number: 10963636
    Abstract: User-generated input is received that includes a sequence of words associated with initiation of a computer-implemented event. Thereafter, such input is parsed using at least one natural language processing (NLP) model. This parsed input is then used by a machine learning model to determine a suggested template having a plurality of fields for initiating the event. The template can then be presented in a graphical user interface. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: March 30, 2021
    Assignee: SAP SE
    Inventors: Nishant Kumar, Panish Ramakrishna, Kumaraswamy Gowda, Rajendra Vuppala, Vidhya Neelakantan, Erica Vandenhoek, Nithya Rajagopalan
  • Patent number: 10964311
    Abstract: According to one embodiment, a word detection system acquires speech data including a plurality of frames, generates a speech characteristic amount, calculates a frame score by matching a reference model, based on the speech characteristic amount associated with a target word, against the frames in the speech data, calculates a first score of the word from the frame score, detects the word from the speech data based on the first score, calculates a second score of the word based on the frame score and time information on the start and end of the detected word, compares the second score with the second scores of a plurality of words, and determines a word to be output based on the comparison result.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: March 30, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Hiroshi Fujimura
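A toy sketch of the two-stage scoring flow in the entry above (patent 10964311). Averaged frame scores stand in for the reference-model matching and the start/end span is invented; only the frame score -> first score -> detection -> second score -> comparison sequence follows the abstract:

```python
# Hypothetical per-frame match scores for each target word over the same utterance.
frame_scores = {
    "hello":  [0.2, 0.7, 0.9, 0.8, 0.3],
    "yellow": [0.1, 0.5, 0.6, 0.4, 0.2],
}
DETECTION_THRESHOLD = 0.5  # assumed cutoff for the first score

def first_score(scores):
    """First score of a word: average frame score over the whole utterance."""
    return sum(scores) / len(scores)

def second_score(scores, start, end):
    """Second score: average frame score restricted to the detected word's time span."""
    span = scores[start:end + 1]
    return sum(span) / len(span)

detected = {}
for word, scores in frame_scores.items():
    if first_score(scores) >= DETECTION_THRESHOLD:        # detect the word
        peak = max(range(len(scores)), key=lambda i: scores[i])
        start, end = max(0, peak - 1), min(len(scores) - 1, peak + 1)  # toy time info
        detected[word] = second_score(scores, start, end)

# Compare second scores across detected words and output the best one.
if detected:
    print(max(detected, key=detected.get))  # -> hello
```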
  • Patent number: 10950235
    Abstract: Provided is an information processing device capable of extracting information specific to a user from speech data. This information processing device is provided with: speech recognition means for generating a character string based on speech data; filtering means for filtering one or more keywords extracted from the character string generated by the speech recognition means, based on one or more words which are relevant to a speaker of the speech data and stored in advance; and output means for outputting a result of the filtering performed by the filtering means.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: March 16, 2021
    Assignee: NEC CORPORATION
    Inventor: Masato Moriya
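A minimal sketch of the filtering step in the entry above (patent 10950235), assuming the speech has already been recognized into text and that the pre-stored speaker-relevant words form a simple set; the vocabulary and transcript are illustrative:

```python
def filter_speaker_keywords(transcript: str, speaker_vocabulary: set[str]) -> list[str]:
    """Keep only the keywords from the transcript that relate to the known speaker."""
    keywords = [w.strip(".,!?").lower() for w in transcript.split()]
    return [w for w in keywords if w in speaker_vocabulary]

# Words stored in advance as relevant to this speaker (hypothetical).
speaker_vocabulary = {"marathon", "osaka", "guitar"}
transcript = "I ran the Osaka marathon last week and then practiced guitar."
print(filter_speaker_keywords(transcript, speaker_vocabulary))
# ['osaka', 'marathon', 'guitar']
```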
  • Patent number: 10943587
    Abstract: An information processing device including an electronic control unit is provided. The electronic control unit is configured: to acquire speech data which is uttered by a user; to acquire context information associated with a situation of the user; to convert the speech data into text data; to select a dictionary which is referred to for determining a meaning of a word included in the text data, based on the context information when the speech data has been acquired; to give the meaning of the word determined with reference to the selected dictionary to the text data; and to provide a service based on the text data to which the meaning of the word is given.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 9, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Koichi Suzuki
  • Patent number: 10937429
    Abstract: A network monitor system collects log entries from network appliances in the data network, each log entry including a quantity context related to an activity or a resource usage and a value of the quantity context. The network monitor system receives a spoken question input by a user and processes the spoken question. The network monitor determines a question context included in the spoken question, including a quantity entity context, compares the question context with given log entries, and, for each matching log entry, stores the quantity context and the value of the quantity context in that log entry as a result entry in a result entries list. The network monitor system further composes a response according to the result entries and outputs the response for playing to the user.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: March 2, 2021
    Assignee: TP Lab, Inc.
    Inventors: Chi Fai Ho, John Chiong
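A rough sketch of the log-matching flow in the entry above (patent 10937429), assuming the spoken question has already been reduced to a quantity context; the log entries and response wording are illustrative:

```python
# Hypothetical log entries collected from network appliances.
log_entries = [
    {"appliance": "fw-1", "quantity": "cpu_usage", "value": 73},
    {"appliance": "fw-1", "quantity": "sessions", "value": 1520},
    {"appliance": "sw-2", "quantity": "cpu_usage", "value": 41},
]

def answer(question_quantity: str) -> str:
    """Match the question's quantity context against log entries and compose a reply."""
    results = [e for e in log_entries if e["quantity"] == question_quantity]
    if not results:
        return f"I found no entries for {question_quantity}."
    parts = [f'{e["appliance"]} reports {e["value"]}' for e in results]
    return f"{question_quantity}: " + "; ".join(parts) + "."

# A spoken question such as "What is the CPU usage?" would map to this quantity context.
print(answer("cpu_usage"))
```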
  • Patent number: 10936642
    Abstract: Under one aspect, first user input including free-form text is received in a first graphical user interface (GUI). A classification engine of the computer system incorporating a machine learning model classifies words of the free-form text into a male-biased class, a female-biased class, or a neutral class. At least one of the words is classified into the male-biased class or the female-biased class. At least one of the words classified into the male-biased class or the female-biased class is flagged in the first GUI. Second user input is received in the first GUI including at least one revision to at least one of the words of the free-form text classified into the male-biased class or the female-biased class responsive to the flagging. The revised free-form text is posted to a web site for display in a second GUI.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: March 2, 2021
    Assignee: SAP SE
    Inventors: Weiwei Shen, Manish Tripathi
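A toy sketch of the flag-for-revision step in the entry above (patent 10936642), with a small lexicon lookup standing in for the patent's machine learning classifier; the word lists and example text are illustrative:

```python
# Illustrative lexicons; the patent describes a learned classifier rather than word lists.
MALE_BIASED = {"rockstar", "dominant", "fearless"}
FEMALE_BIASED = {"nurturing", "supportive", "bubbly"}

def classify(word: str) -> str:
    """Assign each word to a male-biased, female-biased, or neutral class."""
    w = word.strip(".,!").lower()
    if w in MALE_BIASED:
        return "male-biased"
    if w in FEMALE_BIASED:
        return "female-biased"
    return "neutral"

def flag_biased(free_text: str) -> list[tuple[str, str]]:
    """Return (word, class) pairs that a GUI would highlight for revision."""
    return [(w, classify(w)) for w in free_text.split() if classify(w) != "neutral"]

job_post = "We need a rockstar engineer who is supportive of the team."
print(flag_biased(job_post))
# [('rockstar', 'male-biased'), ('supportive', 'female-biased')]
```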
  • Patent number: 10936825
    Abstract: Methods and apparatus for automated processing of natural language text are described. The text can be preprocessed to produce language-space data that includes descriptive data elements for words. Source code that includes linguistic expressions, and that may be written in a programming language that is user-friendly to linguists, can be compiled to produce finite-state transducers and bi-machine transducers that may be applied directly to the language-space data by a language-processing virtual machine. The language-processing virtual machine can select and execute code segments identified in the finite-state and/or bi-machine transducers to disambiguate meanings of words in the text.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: March 2, 2021
    Assignee: CLRV Technologies, LLC
    Inventor: Emmanuel Roche
  • Patent number: 10936346
    Abstract: In one embodiment, a method includes receiving from a client system associated with a first user a user input based on one or more modalities, at least one of which is a visual modality, identifying one or more subjects associated with the user input based on the visual modality based on one or more machine-learning models, determining one or more attributes associated with the one or more subjects respectively based on the one or more machine-learning models, resolving one or more entities corresponding to the one or more subjects based on the determined one or more attributes, executing one or more tasks associated with the one or more resolved entities, and sending instructions for presenting a communication content including information associated with the executed one or more tasks responsive to user input to the client system associated with the first user.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: March 2, 2021
    Assignee: Facebook, Inc.
    Inventors: Vivek Natarajan, Shawn C. P. Mei, Zhengping Zuo
  • Patent number: 10937413
    Abstract: Techniques are provided for training a target language model based at least in part on data associated with a reference language model. For example, language data utilized to train an English language model may be translated and provided as training data to train a German language model to recognize utterances provided in German. By utilizing the techniques herein, the efficiency of training a new language model may be improved due at least in part to replacing labor-intensive operations conventionally performed by specialized personnel with machine-generated data. Additionally, techniques discussed herein provide for reducing the time required for training a new language model by leveraging information associated with utterances of one language to train the new language model associated with a different language.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: March 2, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Jonathan B. Feinstein, Alok Verma, Amina Shabbeer, Brandon Scott Durham, Catherine Breslin, Edward Bueche, Fabian Moerchen, Fabian Triefenbach, Klaus Reiter, Toby R. Latin-Stoermer, Panagiota Karanasou, Judith Gaspers
  • Patent number: 10930285
    Abstract: A method to select a response in a multi-turn conversation between a user and a conversational bot. The conversation is composed of a set of events, wherein an event is a linear sequence of observations that are user speech or physical actions. Queries are processed against a set of conversations that are organized as a set of inter-related data tables, with events and observations stored in distinct tables. As the multi-turn conversation proceeds, a data model comprising an observation history, together with a hierarchy of events determined to represent the conversation up to at least one turn, is persisted. When a new input (speech or physical action) is received, it is classified using a statistical model to generate a result. The result is then mapped to an observation in the data model. Using the mapped observation, a look-up is performed into the data tables to retrieve a possible response.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: February 23, 2021
    Assignee: Drift.com, Inc.
    Inventors: Jeffrey D. Orkin, Christopher M. Ward
  • Patent number: 10930268
    Abstract: Disclosed is a speech recognition method and apparatus, wherein the apparatus acquires first outputs from sub-models in a recognition model based on a speech signal, acquires a second output including values corresponding to the sub-models from a classification model based on the speech signal, and recognizes the speech signal based on the first outputs and the second output.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: February 23, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang Hyun Yoo, Minyoung Mun, Inchul Song
  • Patent number: 10930280
    Abstract: Disclosed is a system for providing a toolkit for an agent developer. A system for providing a toolkit for an agent developer according to an embodiment of the present invention includes: an interface unit that obtains an utterance input by a user and outputs the utterance; and a support unit that determines intent of the utterance input by the user when the utterance is received through the interface unit, and provides another utterance or response corresponding to the determined intent through the interface unit.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: February 23, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Seulki Jung, Bongjun Choi
  • Patent number: 10923140
    Abstract: When speech of a first user includes a first word that is stored in a memory and associated with the first user, it is determined whether or not a difference between a first time and a second time is equal to or less than a predetermined time. The first time is a current time at which the first user spoke the first word. The second time is a time at which a second user last spoke a second word associated with the first word. When the difference between the first time and the second time is equal to or less than the predetermined time, a speaker outputs speech of a same content associated with the first word and the second word.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: February 16, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Ryouta Miyazaki, Yusaku Ota
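A minimal sketch of the timing check in the entry above (patent 10923140), assuming a 10-minute window and a simple mapping from paired words to shared spoken content; all names and values are illustrative:

```python
import time

PREDETERMINED_SECONDS = 600  # assumed window; the abstract leaves the value open

# Hypothetical association: paired words from two users map to shared spoken content.
word_pairs = {("walk", "stroll"): "Shall I add an evening walk to the calendar?"}
last_spoken = {"stroll": time.time() - 120}  # the second user said "stroll" 2 minutes ago

def respond(first_word: str) -> str | None:
    """Output the shared content only if the paired word was heard recently enough."""
    now = time.time()
    for (w1, w2), content in word_pairs.items():
        if first_word == w1 and w2 in last_spoken:
            if now - last_spoken[w2] <= PREDETERMINED_SECONDS:
                return content
    return None

print(respond("walk"))  # within the window, so the shared content is spoken
```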
  • Patent number: 10923110
    Abstract: An apparatus, method, and computer program product for adapting an acoustic model to a specific environment are defined. An adapted model is obtained by adapting an original model to the specific environment using adaptation data, the original model being trained using training data and being used to calculate probabilities of context-dependent phones given an acoustic feature. Adapted probabilities are obtained by adapting original probabilities using the training data and the adaptation data, the original probabilities being trained using the training data and being prior probabilities of context-dependent phones. An adapted acoustic model is obtained from the adapted model and the adapted probabilities.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: February 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gakuto Kurata, Bhuvana Ramabhadran, Masayuki Suzuki
  • Patent number: 10923115
    Abstract: A method and system for dynamically generating computerized dialog are described. Natural language input previously received from a user and cognitive context are analyzed. A dictionary is selected as a function of the natural language input and stored information previously known about the user. A corpus including knowledge of the topics of interest is further selected. One or more expressions are extracted from a network accessible data source. The one or more expressions extracted from the network accessible data source are filtered through the dictionary and the corpus. Dialog is generated in response to the natural language input, as a function of the cognitive context and topic of interest, by integrating the one or more expressions filtered through the dictionary and corpus.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: February 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Edgar Adolfo Zamora Duran, Franz Friedrich Liebinger Portela, Yanil Zeledon, Roxana Monge Nunez
  • Patent number: 10909972
    Abstract: An example apparatus for detecting intent in voiced audio includes a receiver to receive one or more word sequence hypotheses related to a voiced audio and a dynamic vocabulary. The apparatus also includes a natural language understander (NLU) to detect an intent and recognize a property related to the intent based on the word sequence hypothesis and the dynamic vocabulary. The apparatus further includes a transmitter to transmit the detected intent and recognized associated property to an application.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: February 2, 2021
    Assignee: Intel Corporation
    Inventors: Munir Nikolai Alexander Georges, Grzegorz Wojdyga, Tomasz Noczynski, Jakub Nowicki, Szymon Jessa
  • Patent number: 10901688
    Abstract: In embodiments, a method includes detecting, by a computing device, open applications of the computing device; storing, by the computing device, a buffer that tags and tracks audio content and audio context of the open applications; receiving, by the computing device, a user request to take an action regarding at least one of the open applications; determining, by the computing device, a match between the user request and the at least one of the open applications utilizing the buffer; and initiating, by the computing device, a function based on the user request in response to determining the match.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: January 26, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lisa Seacat Deluca, Kelley Anders, Jeremy R. Fox
  • Patent number: 10902847
    Abstract: Methods, systems, and related products are described that provide detection of media content items that are under-locatable by machine voice-driven retrieval, that is, by uttered requests for retrieval of the media items. For a given media item, a resolvability value and/or an utterance resolve frequency is calculated as the ratio of the number of playbacks of the media item by a speech retrieval modality to the total number of playbacks of the media item regardless of retrieval modality. In some examples, the methods, systems and related products also provide for improvement in the locatability of an under-locatable media item by collecting and/or generating one or more pronunciation aliases for the under-locatable item.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: January 26, 2021
    Assignee: Spotify AB
    Inventors: Aaron Springer, Henriette Cramer, Sravana Reddy
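A small sketch of the resolvability calculation in the entry above (patent 10902847); the threshold and play counts are invented for illustration:

```python
def utterance_resolve_frequency(voice_playbacks: int, total_playbacks: int) -> float:
    """Share of a media item's playbacks that were retrieved via the speech modality."""
    return voice_playbacks / total_playbacks if total_playbacks else 0.0

# Hypothetical play counts per track: (playbacks via voice retrieval, total playbacks).
catalog = {
    "Track A": (80, 1000),
    "Track B": (3, 900),
}
THRESHOLD = 0.05  # assumed cutoff below which a track counts as under-locatable

for track, (voice, total) in catalog.items():
    score = utterance_resolve_frequency(voice, total)
    if score < THRESHOLD:
        print(f"{track}: under-locatable (resolve frequency {score:.3f})")
# -> Track B: under-locatable (resolve frequency 0.003)
```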
  • Patent number: 10896203
    Abstract: A digital analytics system comprises a data management system including data extraction modules and a data storage system. The data extraction modules extract data from data sources and store the data in storage units. An analytics engine system includes analytics engines and interfaces that retrieve data relevant to the analytics engines from the storage units. The analytics engines may perform prescriptive or descriptive analytics on the retrieved data. An applications interface and storage store applications. The applications may be executed using information generated by the prescriptive or descriptive analytics performed by the analytics engines.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: January 19, 2021
    Assignee: ACCENTURE GLOBAL SERVICES LIMITED
    Inventors: Leonidas Michael Barrett, Tzuu-Wang Shein
  • Patent number: 10896297
    Abstract: A method uses natural language processing for visual analysis of a dataset by a computer. The computer displays a data visualization based on a dataset retrieved from a database using a first set of database queries. The computer receives user input (e.g., keyboard or voice) to specify a natural language command related to the displayed data visualization. Based on the displayed data visualization, the computer extracts one or more cue phrases from the natural language command. The computer computes analytical intent (e.g., visualization state intent and/or transitional intent) of the user based on the one or more cue phrases. The computer then derives visualization states based on the analytical intent. The computer subsequently computes one or more analytical functions associated with the visualization states, thereby creating one or more functional phrases. The computer then updates the data visualization based on the one or more functional phrases.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: January 19, 2021
    Assignee: Tableau Software, Inc.
    Inventors: Melanie K. Tory, Vidya R. Setlur
  • Patent number: 10891968
    Abstract: An interactive server, a control method thereof, and an interactive system are provided. The interactive server includes: a communicator which communicates with a display apparatus to receive a first uttered voice signal; a storage device which stores utterance history information of a second uttered voice signal received from the display apparatus before the first uttered voice signal is received; an extractor which extracts uttered elements from the received first uttered voice signal; and a controller which generates response information based on the utterance history information stored in the storage device and the extracted uttered elements and transmits the response information to the display apparatus. Therefore, the interactive server comprehends intentions of the user with respect to various uttered voices of the user to generate response information according to the intentions and transmits the response information to the display apparatus.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: January 12, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-hye Chung, Cheong-jae Lee, Hye-jeong Lee, Yong-wook Shin
  • Patent number: 10878805
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: December 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vipul Agarwal, Rahul Kumar Jha, Soumya Batra, Karthik Tangirala, Mohammad Makarechian, Imed Zitouni
  • Patent number: 10872601
    Abstract: A natural language understanding (NLU) system uses a reduced dimensionality of word embedding features to configure compressed NLU models that use reduced computing resources for NLU tasks. A modified NLU model may include a compressed vocabulary data structure of word embedding data vectors that include a set of values corresponding to a reduced dimensionality of the original word embedding features, resulting in a smaller vocabulary data structure. Further components of the modified NLU model perform matrix operations to expand the dimensionality of the reduced word embedding data vectors up to the expected dimensionality of later layers of the NLU model. Additional training and reweighting can adjust for potential losses in performance resulting from reductions in the word embedding features. Thus, the modified NLU model can achieve performance similar to an original NLU model while reducing the use of computing resources.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: December 22, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Anish Acharya, Angeliki Metallinou, Rahul Goel, Inderjit Dhillon
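A small numpy sketch of the compress-then-expand idea in the entry above (patent 10872601), using a random projection and its pseudo-inverse in place of whatever learned transform the patent contemplates; the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, FULL_DIM, REDUCED_DIM = 1000, 300, 50

# Original embedding table and a random projection standing in for a learned compression.
full_embeddings = rng.standard_normal((VOCAB, FULL_DIM)).astype(np.float32)
projection = rng.standard_normal((FULL_DIM, REDUCED_DIM)).astype(np.float32) / np.sqrt(FULL_DIM)

# Stored, compressed vocabulary: six times smaller than the original table.
compressed = full_embeddings @ projection        # (VOCAB, REDUCED_DIM)

# Expansion matrix applied at runtime to recover the dimensionality later layers expect.
expansion = np.linalg.pinv(projection)           # (REDUCED_DIM, FULL_DIM)
restored = compressed @ expansion                # (VOCAB, FULL_DIM)

print(compressed.nbytes / full_embeddings.nbytes)   # storage ratio ~0.17
print(np.mean((restored - full_embeddings) ** 2))   # reconstruction error from compression
```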
  • Patent number: 10872104
    Abstract: A method includes associating, for each one of a plurality of answer definitions, one or more pattern-form questions, wherein each answer definition has an associated jump target that defines a respective entry point into the workspace analytics system to provide information responsive to the associated one or more pattern-form questions. The method further includes receiving a user input including capturing input text defining a natural language user query, matching the received input text to one of the pattern-form questions thereby selecting the jump target associated with the matched pattern-form question, and generating a response to the natural language user query by retrieving information from the workspace analytics system by referencing a link based on the selected jump target and zero or more parameter values.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: December 22, 2020
    Assignee: Lakeside Software, LLC
    Inventors: Edward S. Wegryn, Lawrence J. Birk, Christopher Dyer, Kenneth M. Schumacher
  • Patent number: 10861454
    Abstract: A method includes a voice-activated device establishing a communication channel with a mobile device through a communication interface, receiving a voice command of a user to perform an action, determining, in response to the voice command, the action based at least in part on the voice command, and outputting an audible response corresponding to the determined action. During outputting of the audible response, visual data that includes a representation of the determined action is displayed on the mobile device. The user is enabled to validate or modify the visual data via a user interface of the mobile device.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: December 8, 2020
    Assignee: MASTERCARD ASIA/PACIFIC PTE. LTD
    Inventors: Zunhua Wang, Hui Fang, Shiying Lian
  • Patent number: 10861456
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating dialogue responses based on received utterances utilizing an independent gate context-dependent additive recurrent neural network. For example, the disclosed systems can utilize a neural network model to generate a dialogue history vector based on received utterances and can use the dialogue history vector to generate a dialogue response. The independent gate context-dependent additive recurrent neural network can remove local context to reduce computation complexity and allow for gates at all time steps to be computed in parallel. The independent gate context-dependent additive recurrent neural network maintains the sequential nature of a recurrent neural network using the hidden vector output.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: December 8, 2020
    Assignee: ADOBE INC.
    Inventors: Quan Tran, Trung Bui, Hung Bui
  • Patent number: 10853578
    Abstract: Provided are systems, methods, and devices for extracting unconscious meaning from media corpora. One or more corpora are received from one or more media databases. A number of phrases are extracted from the corpora, and then disambiguated according to natural language processing methods. One or more criteria are then selected to be used for phrase analysis, and the phrases are then analyzed to extract unconscious meaning based on the one or more criteria. The phrase analysis involves machine learning or predictive analysis methods. The results of the phrase analysis are then provided to one or more client devices, with the results containing findings of unconscious meaning for the phrases.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: December 1, 2020
    Assignee: MACHINEVANTAGE, INC.
    Inventors: Ratnakar Dev, Anantha K. Pradeep
  • Patent number: 10854195
    Abstract: A dialogue processing apparatus and method monitor an intensity of an acoustic signal that is input in real time and determine that speech recognition has started, when the intensity of the input acoustic signal is equal to or greater than a reference value, allowing a user to start speech recognition by an utterance without an additional trigger. A vehicle can include the apparatus and method. The apparatus includes: a monitor to compare an input signal level with a reference level in real time and to determine that speech is input when the input signal level is greater than the reference level; a speech recognizer to output a text utterance by performing speech recognition on the input signal when it is determined that the speech is input; a natural language processor to extract a domain and a keyword based on the utterance; and a dialogue manager to determine whether a previous context is maintained based on the domain and the keyword.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: December 1, 2020
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventor: Kyung Chul Lee
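A minimal sketch of the trigger-free start-of-speech check in the entry above (patent 10854195), using frame RMS as the signal level and an assumed reference value:

```python
REFERENCE_LEVEL = 0.02  # assumed RMS reference value for "speech present"

def rms(frame: list[float]) -> float:
    """Root-mean-square level of one audio frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def speech_started(frame: list[float]) -> bool:
    """Start speech recognition when the input level meets or exceeds the reference level."""
    return rms(frame) >= REFERENCE_LEVEL

silence = [0.001, -0.002, 0.0015, -0.001]
speech = [0.05, -0.07, 0.06, -0.04]
print(speech_started(silence), speech_started(speech))  # False True
```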
  • Patent number: 10847163
    Abstract: One embodiment provides a method, including: receiving, at an information handling device, voice input; determining, using at least one sensor associated with the information handling device, whether the voice input comprises voice input provided proximate to the information handling device; and providing, based on determining that the voice input is provided proximate to the information handling device, output responsive to the voice input. Other aspects are described and claimed.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: November 24, 2020
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: John Weldon Nicholson, Daryl Cromer, Ming Qian, David Alexander Schwarz, Lincoln Penn Hancock
  • Patent number: 10847148
    Abstract: Multi-turn conversation systems that are personalized to a user based on insights derived from big data are described. A computer-based conversation system for interacting with a user includes: a CPU, a computer readable memory, and a computer readable storage medium associated with a computer device; and program instructions defining a statement and question framer that is configured to: obtain insights about a user from a big data engine; and generate a response to an input from the user based on the insights and the input. The program instructions are stored on the computer readable storage medium for execution by the CPU via the computer readable memory.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: November 24, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Faried Abrahams, Lalit Agarwalla, Gandhi Sivakumar
  • Patent number: 10847175
    Abstract: In some natural language understanding (NLU) applications, results may not be tailored to the user's query. In an embodiment of the present invention, a method includes tagging elements of automated speech recognition (ASR) data based on an ontology stored in a memory. The method further includes indexing tagged elements to an entity of the ontology. The method further includes generating a logical form of the ASR data based on the tagged elements and the indexed entities. The method further includes mapping the logical form to a query to a respective corresponding database stored in the memory. The method further includes issuing the query to the respective corresponding databases. The method further includes presenting results of the query to the user via a display or a voice response system.
    Type: Grant
    Filed: July 24, 2015
    Date of Patent: November 24, 2020
    Assignee: Nuance Communications, Inc.
    Inventors: Peter Yeh, William Jarrold, Adwait Ratnaparkhi, Deepak Ramachandran, Peter Patel-Schneider, Benjamin Douglas
  • Patent number: 10832674
    Abstract: An electronic device and method are disclosed. The electronic device includes a touchscreen, microphone, speaker, wireless communication circuit, processor and memory. The memory stores instructions executable by the processor to: receive a first user utterance through the microphone, transmit the received first user utterance to an external server through the wireless communication circuit, receive, by the wireless communication circuit, first text data generated by the external server using automatic speech recognition (ASR), when the first text data includes at least one pre-stored word, phrase, or sentence, identify a plurality of tasks mapped to the at least one pre-stored word, phrase, or sentence, and execute the identified plurality of tasks using at least one of sequential execution or parallel execution.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 10, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ho Jun Jaygarl, Hyun Woo Kang, Jae Hwan Lee, Han Jun Ku, Nam Hoon Kim, Eun Taek Lim, Da Som Lee
  • Patent number: 10832005
    Abstract: The technology disclosed relates to computer-implemented conversational agents and particularly to detecting a point in the dialog (end of turn, or end of utterance) at which the agent can start responding to the user. The technology disclosed provides a method of incrementally parsing an input utterance with multiple parses operating in parallel. The technology disclosed includes detecting an interjection point in the input utterance when a pause exceeds a high threshold, or detecting an interjection point in the input utterance when a pause exceeds a low threshold and at least one of the parallel parses is determined to be interruptible by matching a complete sentence according to the grammar. The conversational agents start responding to the user at a detected interjection point.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: November 10, 2020
    Assignee: SoundHound, Inc.
    Inventors: Keyvan Mohajer, Bernard Mont-Reynaud
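A toy sketch of the dual-threshold interjection-point logic in the entry above (patent 10832005); the pause thresholds and the representation of parallel parses are assumptions:

```python
HIGH_PAUSE = 1.5  # seconds; assumed values, not specified in the abstract
LOW_PAUSE = 0.4

def any_parse_interruptible(parses: list[dict]) -> bool:
    """True if at least one parallel parse has matched a complete sentence per the grammar."""
    return any(p.get("complete") for p in parses)

def interjection_point(pause_seconds: float, parses: list[dict]) -> bool:
    """Respond on a long pause, or on a short pause when some parse is interruptible."""
    if pause_seconds >= HIGH_PAUSE:
        return True
    return pause_seconds >= LOW_PAUSE and any_parse_interruptible(parses)

parses = [{"text": "set a timer for ten minutes", "complete": True},
          {"text": "set a timer for ten", "complete": False}]
print(interjection_point(0.6, parses))                  # True: complete parse, short pause
print(interjection_point(0.6, [{"complete": False}]))   # False: keep listening
```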
  • Patent number: 10827067
    Abstract: A text-to-speech method includes outputting an instruction according to voice information entered by a user; obtaining text information according to the instruction; converting the text information to audio; and playing the audio. According to the embodiments of the present invention, news or other text content in a browser can be played by voice, which frees the user's hands and eyes. The user can thus use the browser in scenarios where operating it manually is difficult, such as while driving a car, thereby improving user experience.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: November 3, 2020
    Assignee: Guangzhou UCWeb Computer Technology Co., Ltd.
    Inventor: Xiang Liu
  • Patent number: 10826923
    Abstract: An apparatus includes a memory and a hardware processor. The memory stores a threshold. The processor receives first, second, and third messages. The processor determines a number of occurrences of words in the messages. The processor also calculates probabilities that a word in the messages is a particular word and co-occurrence probabilities. The processor further calculates probability distributions of words in the messages. The processor also calculates probabilities based on the probability distributions. The processor compares these probabilities to a threshold to determine whether the first message is related to the second message and/or whether the first message is related to the third message.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: November 3, 2020
    Assignee: Bank of America Corporation
    Inventors: Marcus Adrian Streips, Arjun Thimmareddy
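A rough sketch of the relatedness check in the entry above (patent 10826923), with cosine similarity over word counts standing in for the patent's co-occurrence probability calculations; the threshold and messages are illustrative:

```python
from collections import Counter
from math import sqrt

THRESHOLD = 0.3  # assumed relatedness cutoff

def distribution(message: str) -> Counter:
    """Word-occurrence counts, standing in for the per-message probability distribution."""
    return Counter(message.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word distributions (a stand-in for the patent's math)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

m1 = "wire transfer failed for account 1182"
m2 = "retrying the failed wire transfer on account 1182"
m3 = "team lunch moved to thursday"

d1, d2, d3 = map(distribution, (m1, m2, m3))
print(similarity(d1, d2) >= THRESHOLD)  # True: first and second messages are related
print(similarity(d1, d3) >= THRESHOLD)  # False: first and third are not
```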
  • Patent number: 10824675
    Abstract: A technique is described for generating a knowledge graph that links names associated with a first subject matter category (C1) (such as brands) with names associated with a second subject matter category (C2) (such as products). In one implementation, the technique relies on two similarly-constituted processing pipelines, a first processing pipeline for processing the C1 names, and a second processing pipeline for processing the C2 names. Each processing pipeline includes three main stages, including a name-generation stage, a verification stage, and an augmentation stage. The generation stage uses a voting strategy to form an initial set of seed names. The verification stage removes noisy seed names. And the augmentation stage expands each verified name to include related terms. A final edge-forming stage identifies relationships between the expanded C1 names and the expanded C2 names using a voting strategy.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Omar Rogelio Alonso, Vasileios Kandylas, Rukmini Iyer
  • Patent number: 10818293
    Abstract: A method to select a response in a multi-turn conversation between a user and a conversational bot. The conversation is composed of a set of events, wherein an event is a linear sequence of observations that are user speech or physical actions. Queries are processed against a set of conversations that are organized as a set of inter-related data tables, with events and observations stored in distinct tables. As the multi-turn conversation proceeds, a data model comprising an observation history, together with a hierarchy of events determined to represent the conversation up to at least one turn, is persisted. When a new input (speech or physical action) is received, it is classified using a statistical model to generate a result. The result is then mapped to an observation in the data model. Using the mapped observation, a look-up is performed into the data tables to retrieve a possible response.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: October 27, 2020
    Assignee: Drift.com, Inc.
    Inventors: Jeffrey D. Orkin, Christopher M. Ward
  • Patent number: 10803249
    Abstract: In one aspect, a computerized method useful for converting a set of user actions into machine queries with an ensemble of Natural Language Understanding and Processing methods includes the step of providing a knowledge model. The method includes the steps of receiving a natural language user query and preprocessing the natural language user query for further processing as a preprocessed user query. The preprocessing includes chunking a set of sentences of the natural language query into a set of smaller sentences and retaining the reference between chunks of the set of sentences. With the preprocessed user query, the method then implements further steps for each chunk of the chunked preprocessed user query.
    Type: Grant
    Filed: August 12, 2018
    Date of Patent: October 13, 2020
    Inventor: Seyed Ali Loghmani
  • Patent number: 10796227
    Abstract: A system comprising: a processor; a data bus coupled to the processor; and a non-transitory, computer-readable storage medium embodying computer program code, the non-transitory, computer-readable storage medium being coupled to the data bus. The computer program code interacting with a plurality of computer operations and comprising instructions executable by the processor and configured for: receiving data from a data source; processing the data, the processing comprising performing a parsing operation on the data, the processing the data identifying a plurality of knowledge elements based upon the parsing operation, the parsing operation comprising ranking of parse options; and, storing the knowledge elements within the cognitive graph as a collection of knowledge elements, the storing universally representing knowledge obtained from the data.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: October 6, 2020
    Assignee: Cognitive Scale, Inc.
    Inventor: Hannah R. Lindsley
  • Patent number: 10789955
    Abstract: A method includes receiving a speech input from a user and obtaining context metadata associated with the speech input. The method also includes generating a raw speech recognition result corresponding to the speech input and selecting a list of one or more denormalizers to apply to the generated raw speech recognition result based on the context metadata associated with the speech input. The generated raw speech recognition result includes normalized text. The method also includes denormalizing the generated raw speech recognition result into denormalized text by applying the list of the one or more denormalizers in sequence to the generated raw speech recognition result.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: September 29, 2020
    Assignee: Google LLC
    Inventors: Assaf Hurwitz Michaely, Petar Aleksic, Pedro Moreno
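A minimal sketch of context-based denormalizer selection and sequential application as described in the entry above (patent 10789955); the individual denormalizers and the context key are invented for illustration:

```python
# Simple denormalizers; a real system would have many more, this is an assumed set.
def verbalize_numbers(text: str) -> str:
    return text.replace("two", "2").replace("ten", "10")

def capitalize_sentence(text: str) -> str:
    return text[:1].upper() + text[1:]

def add_terminal_punctuation(text: str) -> str:
    return text if text.endswith((".", "?", "!")) else text + "."

def select_denormalizers(context: dict) -> list:
    """Pick which denormalizers apply, based on context metadata about the request."""
    chain = [verbalize_numbers]
    if context.get("surface") == "screen":   # format for display only when there is a screen
        chain += [capitalize_sentence, add_terminal_punctuation]
    return chain

def denormalize(raw_result: str, context: dict) -> str:
    """Apply the selected denormalizers in sequence to the raw (normalized) recognition text."""
    text = raw_result
    for fn in select_denormalizers(context):
        text = fn(text)
    return text

print(denormalize("set a timer for ten minutes", {"surface": "screen"}))
# Set a timer for 10 minutes.
```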
  • Patent number: 10783881
    Abstract: A method for processing a recognition result of an automatic online speech recognizer for a mobile end device by a communication exchange device, wherein the recognition result for a phrase spoken by a user is received from the online speech recognizer as a text. A language model of permitted phrases is received from the mobile end device. A specification of meaning relating to a meaning of the phrase is assigned to each permitted phrase by the language model, and, through a decision-making logic of the communication exchange device, the text of the recognition result is compared with the permitted phrases defined by the language model and, for a matching permitted phrase in accordance with a predetermined matching criterion, the specification of meaning thereof is determined and the specification of meaning is provided to the mobile end device.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: September 22, 2020
    Assignee: AUDI AG
    Inventor: Christoph Voigt
  • Patent number: 10783879
    Abstract: Methods, programming, and system for modifying a slot value are described herein. In a non-limiting embodiment, an intent may be determined based on a first utterance. A first slot-value pair may be obtained for the first utterance based on the intent, the first slot-value pair including a first slot and a first value associated with the first slot. A second value associated with the first slot may be identified, the second value being identified from a second utterance that was previously received. Based on the intent and the first slot, a type of update to be performed with respect to the second value may be determined. The second value may then be updated based on the first value and the type of update.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: September 22, 2020
    Assignee: Oath Inc.
    Inventors: Prakhar Biyani, Cem Akkaya, Kostas Tsioutsiouliklis
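A toy sketch of the slot-update step in the entry above (patent 10783879), assuming the update type is looked up from the intent and slot; the policies and values are illustrative:

```python
# Assumed update policies per (intent, slot); the patent derives the type from the intent.
UPDATE_TYPES = {
    ("book_flight", "passengers"): "increment",
    ("book_flight", "destination"): "replace",
}

def update_slot(intent: str, slot: str, old_value, new_value):
    """Update the previously stored slot value according to the determined update type."""
    update = UPDATE_TYPES.get((intent, slot), "replace")
    if update == "increment":
        return old_value + new_value
    return new_value

# Earlier utterance: "two passengers to Boston"; new utterance: "add one more passenger".
print(update_slot("book_flight", "passengers", 2, 1))                    # 3
# New utterance: "actually make it Chicago".
print(update_slot("book_flight", "destination", "Boston", "Chicago"))    # Chicago
```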
  • Patent number: 10777199
    Abstract: [Object] To provide an information processing system and an information processing method capable of auditing the utterance data of an agent more flexibly. [Solution] An information processing system including: a storage section that stores utterance data of an agent; a communication section that receives request information transmitted from a client terminal and requesting utterance data of a specific agent from a user; and a control section that, when the request information is received through the communication section, replies to the client terminal with corresponding utterance data, and in accordance with feedback from the user with respect to the utterance data, updates an utterance probability level expressing a probability that the specific agent will utter utterance content indicated by the utterance data, and records the updated utterance probability level in association with the specific agent and the utterance content in the storage section.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: September 15, 2020
    Assignee: SONY CORPORATION
    Inventor: Akihiro Komori
  • Patent number: 10778618
    Abstract: A computer system, computer program product, and computer-implemented method for communicating electronic messages over a communication network coupled thereto are provided. The computer system comprises a network interface for receiving messages sent over the network and addressed to a user of the computer system; and computer executable electronic message processing software. The software comprises instructions for directing the computer system to receive a message over the network, and to identify whether a sender of the received electronic message is a human or a machine. The identifying includes first and second phases of operation. The first phase includes an offline phase employing information and activities resident on the computer system. The second phase includes an online phase employing resources remotely accessible over the network.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: September 15, 2020
    Assignee: OATH INC.
    Inventors: Zohar Karnin, Guy Halawi, David Wajc, Edo Liberty
  • Patent number: 10770060
    Abstract: An embodiment provides a method, including: receiving, via an audio receiver of an information handling device, user voice input; identifying a first word based on the user voice input; accessing a word association data store; selecting an equivalent based on an association with the first word within the word association data store; committing an action based on the equivalent; receiving feedback input from the user regarding the equivalent; and updating the selecting based on the feedback. Other aspects are described and claimed.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: September 8, 2020
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Russell Speight VanBlon, Jon Wayne Heim, Jonathan Gaither Knox, Peter Hamilton Wetsel, Suzanne Marion Beaumont
  • Patent number: 10755051
    Abstract: Systems and processes for rule-based natural language processing are provided. In accordance with one example, a method includes, at an electronic device with one or more processors, receiving a natural-language input; determining, based on the natural-language input, an input expression pattern; determining whether the input expression pattern matches a respective expression pattern of each of a plurality of intent definitions; and in accordance with a determination that the input expression pattern matches an expression pattern of an intent definition of the plurality of intent definitions: selecting an intent definition of the plurality of intent definitions having an expression pattern matching the input expression pattern; performing a task associated with the selected intent definition; and outputting an output indicating whether the task was performed.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: August 25, 2020
    Assignee: Apple Inc.
    Inventors: Philippe P. Piernot, Didier Rene Guzzoni
  • Patent number: 10755042
    Abstract: The exemplary embodiments described herein are related to techniques for automatically generating narratives about data based on communication goal data structures that are associated with configurable content blocks. The use of such communication goal data structures facilitates modes of operation whereby narratives can be generated in real-time and/or interactive manners.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: August 25, 2020
    Assignee: NARRATIVE SCIENCE INC.
    Inventors: Lawrence Birnbaum, Kristian J. Hammond, Nathan Drew Nichols, Andrew R. Paley
  • Patent number: 10755699
    Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: August 25, 2020
    Assignee: VB Assets, LLC
    Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider