Patents Examined by L. Thomas
  • Patent number: 11361168
    Abstract: Systems and methods are described herein for replaying content dialogue in an alternate language in response to a user command. While the content is playing on a media device, a first language in which the content dialogue is spoken is identified. Upon receiving a voice command to repeat a portion of the dialogue, a second language, in which the command was spoken, is identified. The portion of the content dialogue to repeat is identified and translated from the first language to the second language. The translated portion of the content dialogue is then output. In this way, the user can simply ask in their native language for the dialogue to be repeated, and the repeated portion of the dialogue is presented in the user's native language.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: June 14, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: Carla Mack, Phillip Teich, Mario Sanchez, John Blake
  • Patent number: 11361752
    Abstract: The object of the present invention is to provide a voice recognition technique that can handle wording in other languages. The voice recognition dictionary data construction apparatus includes: an attribute setting unit that sets attributes to first words constituting a first character string representing a place name in a first language; a language conversion unit that creates a second character string by replacing the first words in the first character string with corresponding second words in a second language without changing the attributes thereof; an order changing unit that creates a third character string by changing the word order of the second character string based on the attributes of its words and the word-order rule for place names in the second language; and a phoneme data construction unit that constructs the phoneme data of the third character string.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: June 14, 2022
    Inventor: Yuzo Maruta
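The three units above can be sketched as a toy pipeline (the lexicon, translations, and ordering rule below are invented for illustration, and the phoneme-construction step is omitted):

```python
# Toy pipeline for the apparatus described above: tag each word of a
# place name with an attribute, translate word-by-word while keeping the
# attributes, then reorder by the target language's place-name rule.
ATTRIBUTES = {"Calle": "road-type", "Mayor": "proper-name"}   # assumed lexicon
TRANSLATIONS = {"Calle": "Street", "Mayor": "Main"}           # assumed lexicon
ENGLISH_ORDER = {"proper-name": 0, "road-type": 1}            # word-order rule

def convert_place_name(first_words):
    # steps 1-2: set attributes, then swap in second-language words
    tagged = [(TRANSLATIONS[w], ATTRIBUTES[w]) for w in first_words]
    # step 3: reorder by attribute under the second language's rule
    tagged.sort(key=lambda pair: ENGLISH_ORDER[pair[1]])
    return " ".join(word for word, _ in tagged)

print(convert_place_name(["Calle", "Mayor"]))  # "Main Street"
```

Note how the road-type word moves from first position (Spanish) to last (English) purely by attribute, without retranslating the string as a whole.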
  • Patent number: 11347801
    Abstract: Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 31, 2022
    Assignee: GOOGLE LLC
    Inventors: Adam Coimbra, Ulas Kirazci, Abraham Lee, Wei Dong, Thushan Amarasiriwardena
  • Patent number: 11315555
    Abstract: Embodiments of the present disclosure disclose a terminal holder and a far-field voice interaction system. A specific implementation of the terminal holder includes: a far-field voice pickup device and a voice analysis device. The far-field voice pickup device receives voice sent by a user and sends the voice to the voice analysis device. The voice analysis device analyzes the voice, determines whether the voice contains a preset wake-up word, and sends the voice to a terminal in communication connection with the terminal holder when the preset wake-up word is contained. In this embodiment, voice sent by a user is received through the terminal holder, which supports far-field voice pickup, thereby facilitating far-field voice control of the terminal.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: April 26, 2022
    Assignees: Baidu Online Network Technology (Beijing) Co., Ltd., Shanghai Xiaodu Technology Co., Ltd.
    Inventors: Hong Su, Peng Li, Lifeng Zhao
  • Patent number: 11302300
    Abstract: A system and method enable one to set a target duration of a desired synthesized utterance without removing or adding spoken content. Without changing the spoken text, the voice characteristics may be kept the same or substantially the same. Silence adjustment and interpolation may be used to alter the duration while preserving speech characteristics. Speech may be translated prior to a vocoder step, pursuant to which the translated speech is constrained by the original audio duration, while mimicking the speech characteristics of the original speech.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: April 12, 2022
    Assignee: Applications Technology (AppTek), LLC
    Inventors: Nick Rossenbach, Mudar Yaghi
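One way to read the silence-adjustment idea is sketched below, under the assumption that the audio is already segmented into labeled (label, seconds) spans: rescale only the silence spans so the total hits the target duration, leaving the spoken segments untouched.

```python
# Hit a target duration by rescaling silence segments only; speech
# segments keep their original length and characteristics.
def fit_to_duration(segments, target):
    speech = sum(d for label, d in segments if label == "speech")
    silence = sum(d for label, d in segments if label == "silence")
    if silence == 0:
        raise ValueError("no silence to adjust; interpolation would be needed")
    scale = (target - speech) / silence   # stretch or compress silence
    return [(label, d * scale if label == "silence" else d)
            for label, d in segments]

adjusted = fit_to_duration([("speech", 2.0), ("silence", 1.0), ("speech", 3.0)], 7.0)
print(adjusted)  # total is now 7.0 s with both speech segments unchanged
```

When there is no silence to work with, the abstract's interpolation step would take over; that part is not modeled here.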
  • Patent number: 11295213
    Abstract: Embodiments of the present invention relate to computer-implemented methods, systems, and computer program products for managing a conversational system. In one embodiment, a computer-implemented method comprises: obtaining, by a device operatively coupled to one or more processors, a first message sequence comprising messages involved in a conversation between a user and a conversation server; obtaining, by the device, a conversation graph indicating an association relationship between messages involved in a conversation; and in response to determining that the first message sequence is not matched in the conversation graph, updating, by the device, the conversation graph with a second message sequence, the second message sequence being generated based on a knowledge library including expert knowledge that is associated with a topic of the conversation.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: April 5, 2022
    Inventors: Li Jun Mei, Qi Cheng Li, Xin Zhou, Ya Bin Dang, Shao Chun Li
  • Patent number: 11289067
    Abstract: Methods and systems for generating voices based on characteristics of an avatar. One or more characteristics of an avatar are obtained and one or more parameters of a voice synthesizer for generating a voice corresponding to the one or more avatar characteristics are determined. The voice synthesizer is configured based on the one or more parameters and a voice is generated using the parameterized voice synthesizer.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kristina Marie Brimijoin, Gregory Boland, Joseph Schwarz
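A hypothetical mapping from avatar characteristics to synthesizer parameters, in the spirit of the abstract; the trait names, parameter names, and formulas are all invented for illustration:

```python
# Derive voice-synthesizer parameters from avatar characteristics.
def voice_params(avatar):
    # avatar: {"size": 0..1, "energy": 0..1} -- assumed trait encoding
    base_pitch = 220.0                                   # Hz, assumed neutral
    pitch = base_pitch * (1.5 - 0.5 * avatar["size"])    # bigger -> deeper
    rate = 0.8 + 0.4 * avatar["energy"]                  # livelier -> faster
    return {"pitch_hz": round(pitch, 1), "speaking_rate": round(rate, 2)}

print(voice_params({"size": 0.9, "energy": 0.5}))
```

The returned dictionary stands in for "configuring the voice synthesizer based on the one or more parameters"; a real system would feed these values to an actual synthesizer.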
  • Patent number: 11282500
    Abstract: The disclosed technology relates to a process for automatically training a machine learning algorithm to recognize a custom wake word. By using different text-to-speech services, input providing a custom wake word to a text-to-speech service can be used to generate different speech samples covering different variations in how the custom wake word can be pronounced. These samples are automatically generated and are subsequently used to train the wake word detection algorithm that will be used by the computing device to recognize and detect when the custom wake word is uttered by any user near the computing device for the purposes of initiating a virtual assistant. In a further embodiment, “white-listed” words (e.g., different words that are pronounced similarly to the custom wake word) are also identified and trained in order to minimize the occurrence of erroneously initiating the virtual assistant.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: March 22, 2022
    Inventors: Keith Griffin, Dario Cazzani
  • Patent number: 11281854
    Abstract: The technology disclosed herein summarizes a document using a dictionary derived from tokens within the document itself. In a particular implementation, a method provides identifying a first document for summarization and inputting the first document into a natural language model. The natural language model is configured to summarize the first document using words from a first dictionary compiled based on tokens from the first document. The method further provides receiving a first summary output by the natural language model after the natural language model summarizes the first document.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: March 22, 2022
    Assignee: Primer Technologies, Inc.
    Inventors: John Bohannon, Oleg Vasilyev, Thomas Alexander Grek
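As a toy illustration of the constraint described above (the scoring dictionary stands in for a real language model, and all names are assumptions, not the patent's implementation), a decoder can simply mask out any candidate token that does not occur in the source document:

```python
# Summarize using only words from a dictionary compiled from the document.
def build_dictionary(document):
    return set(document.lower().split())

def constrained_step(candidate_scores, allowed):
    # drop tokens absent from the source document's dictionary,
    # then pick the best remaining candidate
    permitted = {tok: s for tok, s in candidate_scores.items() if tok in allowed}
    return max(permitted, key=permitted.get)

doc = "the model summarizes the document using its own words"
allowed = build_dictionary(doc)
scores = {"paraphrase": 0.9, "document": 0.7, "words": 0.4}  # assumed model output
print(constrained_step(scores, allowed))  # "document" -- "paraphrase" is filtered out
```

Repeating this step token by token yields a summary drawn entirely from the document's own vocabulary, which is the property the abstract emphasizes.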
  • Patent number: 11275900
    Abstract: Embodiments of a computer-implemented system for improving classification of data associated with the deep web or dark net are disclosed.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: March 15, 2022
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Revanth Patil, Paulo Shakarian, Ashkan Aleali, Ericsson Marin
  • Patent number: 11270077
    Abstract: A computing device receives a natural language input from a user. The computing device routes the natural language input from an active domain node of multiple domain nodes of a multi-domain context-based hierarchy to a leaf node of the domain nodes by selecting a parent domain node in the hierarchy until an off-topic classifier labels the natural language input as in-domain and then selecting a subdomain node in the hierarchy until an in-domain classifier labels the natural language input with a classification label, each of the plurality of domain nodes comprising a respective off-topic classifier and a respective in-domain classifier trained for a respective domain node. The computing device outputs the classification label determined by the leaf node.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ming Tan, Ladislav Kunc, Yang Yu, Haoyu Wang, Saloni Potdar
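A minimal sketch of the routing loop the abstract describes, with stand-in lambdas in place of trained classifiers (the node names and labels below are invented):

```python
# Each node carries an off-topic classifier (True -> input does not belong
# to this node's domain) and an in-domain classifier (returns either a
# child Node to descend into or a final classification label).
class Node:
    def __init__(self, name, off_topic, in_domain, parent=None):
        self.name = name
        self.parent = parent
        self.off_topic = off_topic
        self.in_domain = in_domain

def route(active, text):
    node = active
    # climb toward the root while the input is off-topic for the node
    while node.parent is not None and node.off_topic(text):
        node = node.parent
    # descend until an in-domain classifier emits a label
    while True:
        result = node.in_domain(text)
        if isinstance(result, str):
            return result
        node = result

billing = Node("billing", lambda t: "refund" not in t, lambda t: "refund_request")
tech = Node("tech", lambda t: "crash" not in t, lambda t: "bug_report")
root = Node("root", lambda t: False,
            lambda t: billing if "refund" in t else tech)
billing.parent = tech.parent = root

print(route(tech, "I want a refund"))  # climbs to root, then routes to billing
```

Starting from the currently active node rather than the root is what lets the hierarchy stay on the current domain for follow-up inputs and only climb when the conversation moves off-topic.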
  • Patent number: 11270691
    Abstract: A voice interaction system performs a voice interaction with a user. The voice interaction system includes: topic detection means for estimating a topic of the voice interaction and detecting a change in the estimated topic; and ask-again detection means for detecting, when the topic detection means has detected a change in the topic, that the user's voice is an ask-again by the user, based on prosodic information of the user's voice.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: March 8, 2022
    Inventors: Narimasa Watanabe, Sawa Higuchi, Tatsuro Hori
  • Patent number: 11227579
    Abstract: A technique for data augmentation for speech data is disclosed. Original speech data including a sequence of feature frames is obtained. A partially prolonged copy of the original speech data is generated by inserting one or more new frames into the sequence of the feature frames. The partially prolonged copy is output as augmented speech data for training an acoustic model.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: January 18, 2022
    Inventors: Toru Nagano, Takashi Fukuda, Masayuki Suzuki, Gakuto Kurata
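The prolongation step can be sketched as follows; duplicating randomly chosen frames in place is one simple way to "insert new frames" (the abstract leaves the insertion policy open, so this choice is an assumption):

```python
import random

# Lengthen a feature-frame sequence by repeating randomly chosen frames.
def prolong(frames, n_insert, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    out = list(frames)
    for _ in range(n_insert):
        i = rng.randrange(len(out))
        out.insert(i, out[i])          # duplicate an existing frame in place
    return out

features = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # toy 2-dim feature frames
augmented = prolong(features, n_insert=2)
print(len(augmented))  # 5 -- a partially prolonged copy
```

Because only some positions are stretched, the copy simulates locally slower speech while keeping the original frame values, which is what makes it useful as acoustic-model training data.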
  • Patent number: 11217236
    Abstract: A method and an apparatus for extracting information are provided. The method according to an embodiment includes: receiving and parsing voice information of a user to generate text information corresponding to the voice information; extracting to-be-recognized contact information from the text information; acquiring an address book of the user, the address book including at least two pieces of contact information; generating at least two types of matching information based on the to-be-recognized contact information; determining, for each of the at least two types of matching information, a matching degree between the to-be-recognized contact information and each of at least two pieces of contact information based on the type of matching information; and extracting contact information matching the to-be-recognized contact information from the address book based on the determined matching degree.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: January 4, 2022
    Assignees: Baidu Online Network Technology (Beijing) Co., Ltd., Shanghai Xiaodu Technology Co., Ltd.
    Inventors: Xiangyu Pang, Guangyao Tang
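The matching-degree step can be sketched with the standard library; the three matching types here (exact, prefix, fuzzy) are illustrative stand-ins for whatever types the claimed method generates:

```python
from difflib import SequenceMatcher

# Score the contact name recognized from speech against one address-book
# entry under several types of matching, then keep the best entry overall.
def matching_degree(recognized, entry):
    return {
        "exact": 1.0 if recognized == entry else 0.0,
        "prefix": 1.0 if entry.startswith(recognized) else 0.0,
        "fuzzy": SequenceMatcher(None, recognized, entry).ratio(),
    }

def best_contact(recognized, address_book):
    return max(address_book,
               key=lambda entry: max(matching_degree(recognized, entry).values()))

book = ["Alice Zhang", "Alicia Tan", "Bob Li"]
print(best_contact("Alice", book))   # "Alice Zhang", via the prefix match
```

Combining several match types lets a partial or mis-recognized name still land on the right entry when no single matcher is decisive.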
  • Patent number: 11200904
    Abstract: An electronic apparatus is provided. The electronic apparatus includes an inputter comprising input circuitry, a voice receiver comprising voice receiving circuitry, a storage, and a processor configured to: provide a guide prompting a user utterance based on user authentication being performed according to user information input through the inputter, generate a speaker recognition model corresponding to the user information based on a voice corresponding to the guide being received through the voice receiver, store the speaker recognition model in the storage, and identify a user corresponding to a subsequently received voice based on the speaker recognition model, which is updated by comparing each voice received through the voice receiver with the stored model.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: December 14, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Chanhee Choi
  • Patent number: 11157692
    Abstract: In some implementations, a computing system is provided. The computing system includes a device. The device includes a non-volatile memory divided into a plurality of memory sub-arrays. Each memory sub-array comprises a plurality of selectable locations. A plurality of data processing units are communicatively coupled to the non-volatile memory in the absence of a central processing unit of the computing system. Each data processing unit is assigned to process data of a memory sub-array and is configured to receive a first data object via a communication interface and store the first data object in the non-volatile memory. The first data object comprises first content and is associated with a first set of keywords. The data processing unit is also configured to add the first set of keywords to a local dictionary. The local dictionary is stored in the non-volatile memory. The data processing unit is further configured to determine whether the first data object is related to one or more other data objects.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: October 26, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Viacheslav Dubeyko, Luis Vitorio Cargnini
  • Patent number: 11145315
    Abstract: An electronic device includes an audio capture device receiving audio input. The electronic device includes one or more processors, operable with the audio capture device, and configured to execute a control operation in response to a device command preceded by a trigger phrase identified in the audio input when in a first mode of operation. The one or more processors transition from the first mode of operation to a second mode of operation in response to detecting a predefined operating condition of the electronic device. In the second mode of operation, the one or more processors execute the control operation without requiring the trigger phrase to precede the device command.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: October 12, 2021
    Assignee: Motorola Mobility LLC
    Inventors: John Gorsica, Thomas Merrell
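The two operating modes can be modeled as a small state machine; the trigger phrase, the condition, and the command handling below are all invented for illustration:

```python
TRIGGER = "hello assistant"   # assumed trigger phrase

class Device:
    def __init__(self):
        self.mode = 1   # mode 1: commands must be preceded by the trigger

    def on_condition(self, condition_met):
        # a predefined operating condition (e.g. device lifted to the ear)
        # switches the device into trigger-free mode 2
        self.mode = 2 if condition_met else 1

    def handle(self, utterance):
        if self.mode == 1:
            if not utterance.startswith(TRIGGER):
                return None                       # ignored: no trigger phrase
            utterance = utterance[len(TRIGGER):].strip()
        return f"executing: {utterance}"

d = Device()
print(d.handle("turn on the torch"))              # None: trigger required
d.on_condition(True)
print(d.handle("turn on the torch"))              # executed without trigger
```

The point of the design is that the same command stream is interpreted differently depending on a sensed operating condition, not on anything in the audio itself.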
  • Patent number: 11133012
    Abstract: An attribute identification technology that can reject an attribute identification result if the reliability thereof is low is provided. An attribute identification device includes: a posteriori probability calculation unit 110 that calculates, from input speech, a posteriori probability sequence {q(c, i)}, which is a sequence of the a posteriori probabilities q(c, i) that a frame i of the input speech is a class c; a reliability calculation unit 120 that calculates, from the posteriori probability sequence {q(c, i)}, reliability r(c) indicating the extent to which the class c is a correct attribute identification result; and an attribute identification result generating unit 130 that generates an attribute identification result L of the input speech from the posteriori probability sequence {q(c, i)} and the reliability r(c).
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: September 28, 2021
    Inventors: Hosana Kamiyama, Satoshi Kobashikawa, Atsushi Ando
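One plausible reading of the reject-on-low-reliability scheme is sketched below; the abstract does not give the formula for r(c), so averaging the winning class's per-frame posteriors is an assumption:

```python
# Compute a reliability r(c) per class from the posterior sequence
# {q(c, i)} and reject the label when reliability is below a threshold.
def identify_attribute(posteriors, threshold=0.6):
    # posteriors: {class_label: [q(c, i) for each frame i]}
    r = {c: sum(q) / len(q) for c, q in posteriors.items()}   # assumed r(c)
    c_best = max(r, key=r.get)
    return c_best if r[c_best] >= threshold else None          # None: rejected

q = {"male": [0.9, 0.8, 0.7], "female": [0.1, 0.2, 0.3]}
print(identify_attribute(q))                                   # "male"
print(identify_attribute({"male": [0.5, 0.5], "female": [0.5, 0.5]}))  # None
```

Returning None models the rejection path: downstream logic can then ask again or fall back rather than act on an unreliable attribute.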
  • Patent number: 11133005
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: September 28, 2021
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Patent number: 11114091
    Abstract: A method of processing audio communications over a network, comprising: at a first client device: receiving a first audio transmission from a second client device that is provided in a source language distinct from a default language associated with the first client device; obtaining current user language attributes for the first client device that are indicative of a current language used for the communication session at the first client device; if the current user language attributes suggest a target language currently used for the communication session at the first client device is distinct from the default language associated with the first client device: obtaining a translation of the first audio transmission from the source language into the target language; and presenting the translation of the first audio transmission in the target language to a user at the first client device.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: September 7, 2021
    Inventors: Fei Xiong, Jinghui Shi, Lei Chen, Min Ren, Feixiang Peng