Patents Examined by L. Thomas
  • Patent number: 11727923
    Abstract: A method for conducting a conversation between a user and a virtual agent is disclosed. The method includes receiving, by an ASR sub-system, a plurality of utterances from the user, and converting, by the ASR sub-system, each utterance of the plurality of utterances into a text message. The method further includes determining, by an NLU sub-system, an intent, at least one entity associated with the intent, or a combination thereof from the text message.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: August 15, 2023
    Assignee: Coinbase, Inc.
    Inventors: Arjun Kumeresh Maheswaran, Akhilesh Sudhakar, Bhargav Upadhyay
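A minimal sketch (not the patented implementation) of the pipeline this abstract describes: an ASR step converts each utterance to text, and a rule-based matcher stands in for the NLU sub-system that extracts an intent and any associated entities. The `fake_asr` function and the intent patterns are illustrative placeholders.

```python
import re

def fake_asr(utterance_audio: str) -> str:
    """Placeholder ASR: a real system would decode audio; here the 'audio'
    is already a string so the example stays runnable."""
    return utterance_audio.lower()

# Toy intent patterns standing in for the NLU sub-system.
INTENT_PATTERNS = {
    "check_balance": re.compile(r"\bbalance\b"),
    "transfer_funds": re.compile(r"\btransfer\b.*\bto (?P<recipient>\w+)"),
}

def nlu(text: str):
    """Return (intent, entities) for the first matching pattern."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return "unknown", {}

def converse(utterances):
    for audio in utterances:
        text = fake_asr(audio)            # ASR sub-system: audio -> text message
        intent, entities = nlu(text)      # NLU sub-system: text -> intent/entities
        print(f"{text!r} -> intent={intent}, entities={entities}")

converse(["What is my balance?", "Transfer 50 dollars to Alice"])
```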
  • Patent number: 11720749
    Abstract: Techniques are described herein for enabling autonomous agents to generate conclusive answers. An example of a conclusive answer is text that addresses the concerns of a user who is interacting with an autonomous agent. For example, an autonomous agent interacts with a user device, answering user utterances such as questions or concerns. Based on the interactions, the autonomous agent determines that a conclusive answer is appropriate. The autonomous agent formulates the conclusive answer, which addresses multiple user utterances. The conclusive answer is provided to the user device.
    Type: Grant
    Filed: October 26, 2022
    Date of Patent: August 8, 2023
    Assignee: Oracle International Corporation
    Inventor: Boris Galitsky
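A rough sketch, under assumed heuristics, of how an agent might decide that a conclusive answer is appropriate and formulate one covering multiple utterances; the closing-phrase trigger and the summary format are assumptions, not the patented logic.

```python
class AutonomousAgent:
    """Toy agent: answers individual utterances and, when the user signals the
    session is ending, returns one conclusive answer covering all utterances."""

    CLOSING_PHRASES = ("thanks", "that's all", "goodbye")

    def __init__(self):
        self.history = []

    def answer(self, utterance: str) -> str:
        self.history.append(utterance)
        if self._is_closing(utterance):
            return self._conclusive_answer()
        return f"Regarding '{utterance}': here is a direct answer."

    def _is_closing(self, utterance: str) -> bool:
        return any(p in utterance.lower() for p in self.CLOSING_PHRASES)

    def _conclusive_answer(self) -> str:
        topics = "; ".join(self.history[:-1])
        return f"To summarize the points you raised ({topics}): ..."

agent = AutonomousAgent()
for u in ["Why was my order delayed?", "Can I get a refund?", "Thanks, that's all"]:
    print(agent.answer(u))
```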
  • Patent number: 11714973
    Abstract: Systems and methods are described herein for replaying content dialogue in an alternate language in response to a user command. While the content is playing on a media device, a first language in which the content dialogue is spoken is identified. Upon receiving a voice command to repeat a portion of the dialogue, a second language, in which the command was spoken, is identified. The portion of the content dialogue to repeat is identified and translated from the first language to the second language. The translated portion of the content dialogue is then output. In this way, the user can simply ask in their native language for the dialogue to be repeated, and the repeated portion of the dialogue is presented in the user's native language.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: August 1, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Carla Mack, Phillip Teich, Mario Sanchez, John Blake
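A toy sketch of the repeat-in-native-language flow described above: the language of the voice command is detected, and the last dialogue line is translated into that language before being output. The language detector and the phrase-table "translator" are stand-ins for real components.

```python
# Toy phrase table standing in for a real translation service.
PHRASE_TABLE = {
    ("en", "es"): {"where is the treasure?": "¿dónde está el tesoro?"},
}

def detect_language(text: str) -> str:
    """Placeholder language ID: Spanish if it contains common Spanish words."""
    return "es" if any(w in text.lower() for w in ("qué", "dónde", "repite")) else "en"

def translate(text: str, src: str, dst: str) -> str:
    if src == dst:
        return text
    return PHRASE_TABLE.get((src, dst), {}).get(text.lower(), f"[{dst}] {text}")

def handle_repeat_command(content_language: str, last_dialogue_line: str, command: str) -> str:
    command_language = detect_language(command)   # language the user spoke in
    return translate(last_dialogue_line, content_language, command_language)

# Content is in English; the user asks in Spanish for the line to be repeated.
print(handle_repeat_command("en", "Where is the treasure?", "Repite eso, ¿qué dijo?"))
```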
  • Patent number: 11710001
    Abstract: A computing device can receive a communication including text that can be presented on a display screen of the computing device. A camera of the computing device can capture image data. The computing device can determine, from the image data, an identity represented in the image data. The computing device can determine an amount of the communication to present on the display screen based on the identity. The computing device can determine, from the image data, that user attention is directed toward the display screen. The computing device can present the amount of the communication on the display screen. In some embodiments, the computing device can determine which content of the communication to display based on the identity. The computing device can display a summary of the communication. The computing device can display an amount of the summary and/or the content of the summary based on the identity.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: July 25, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Ryan H. Cassidy
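An illustrative sketch of mapping a recognized identity (plus an attention check) to how much of a communication is presented; the identity classes and display fractions are assumptions.

```python
# Illustrative policy: how much of a message each recognized identity may see.
DISPLAY_POLICY = {
    "device_owner":   {"show_fraction": 1.0, "show_summary": True},
    "known_contact":  {"show_fraction": 0.3, "show_summary": True},
    "unknown_person": {"show_fraction": 0.0, "show_summary": False},
}

def portion_to_display(message: str, identity: str, attention_on_screen: bool) -> str:
    if not attention_on_screen:
        return ""                                   # nobody is looking; show nothing
    policy = DISPLAY_POLICY.get(identity, DISPLAY_POLICY["unknown_person"])
    cutoff = int(len(message) * policy["show_fraction"])
    shown = message[:cutoff]
    if policy["show_summary"] and cutoff < len(message):
        shown += " [new message received]"
    return shown

msg = "Meeting moved to 3pm, bring the contract draft and the signed NDA."
print(portion_to_display(msg, "device_owner", True))
print(portion_to_display(msg, "known_contact", True))
print(portion_to_display(msg, "unknown_person", True))
```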
  • Patent number: 11699430
    Abstract: A system and method for providing a text-to-speech output by receiving user audio data, determining a user region-specific pronunciation classification according to the audio data, determining text for a response to the user according to the audio data, identifying a portion of the text that is included in a region-specific pronunciation dictionary, and using a phoneme string for that portion, selected from the dictionary according to the user region-specific pronunciation classification, in a text-to-speech output to the user.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: July 11, 2023
    Assignee: International Business Machines Corporation
    Inventors: Andrew R. Freed, Vamshi Krishna Thotempudi, Sujatha B. Perepa
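A small sketch of selecting phoneme strings from a region-keyed pronunciation dictionary according to a user's region classification; the classifier, dictionary entries, and phoneme notation are toy placeholders.

```python
# Toy region-specific pronunciation dictionary: word -> region -> phoneme string.
PRONUNCIATION_DICT = {
    "tomato": {"us": "t ah m ey t ow", "uk": "t ah m aa t ow"},
    "schedule": {"us": "s k eh jh uh l", "uk": "sh eh d y uw l"},
}

def classify_region(user_audio_features: dict) -> str:
    """Placeholder classifier: a real system would infer this from audio."""
    return user_audio_features.get("region_hint", "us")

def phonemes_for_response(response_text: str, region: str):
    """Replace dictionary words with region-specific phoneme strings."""
    out = []
    for word in response_text.lower().split():
        entry = PRONUNCIATION_DICT.get(word.strip(".,?!"))
        out.append(entry[region] if entry and region in entry else word)
    return out

region = classify_region({"region_hint": "uk"})
print(phonemes_for_response("Your schedule includes tomato soup.", region))
```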
  • Patent number: 11682406
    Abstract: Techniques are described for audio decoding, for example in computer games. Audio is delivered in packets. The components of a packet are sorted in the time domain or the frequency domain by magnitude. An elimination threshold can be dynamically established, with components below the threshold being eliminated from processing by the receiver to reduce processing requirements.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: June 20, 2023
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Mathieu Jean
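A sketch of the magnitude-sort-and-eliminate idea: frequency-domain components are sorted by magnitude and the smallest are dropped. The threshold rule used here (keep enough components to preserve a fixed fraction of the energy) is an assumed stand-in for the dynamically established elimination threshold.

```python
import numpy as np

def prune_components(coefficients: np.ndarray, keep_energy: float = 0.99) -> np.ndarray:
    """Sort components by magnitude and zero out the smallest ones, keeping
    just enough to preserve `keep_energy` of the total energy."""
    order = np.argsort(np.abs(coefficients))[::-1]        # largest first
    energy = np.abs(coefficients[order]) ** 2
    cumulative = np.cumsum(energy) / energy.sum()
    n_keep = int(np.searchsorted(cumulative, keep_energy)) + 1
    pruned = np.zeros_like(coefficients)
    pruned[order[:n_keep]] = coefficients[order[:n_keep]]
    return pruned

rng = np.random.default_rng(0)
block = rng.normal(size=64)                               # e.g. one frequency-domain block
pruned = prune_components(block, keep_energy=0.95)
print(f"kept {np.count_nonzero(pruned)} of {block.size} components")
```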
  • Patent number: 11676606
    Abstract: A voice to text model used by a voice-enabled electronic device is dynamically and in a context-sensitive manner updated to facilitate recognition of entities that potentially may be spoken by a user in a voice input directed to the voice-enabled electronic device. The dynamic update to the voice to text model may be performed, for example, based upon processing of a first portion of a voice input, e.g., based upon detection of a particular type of voice action, and may be targeted to facilitate the recognition of entities that may occur in a later portion of the same voice input, e.g., entities that are particularly relevant to one or more parameters associated with a detected type of voice action.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: June 13, 2023
    Assignee: GOOGLE LLC
    Inventors: Yuli Gao, Sangsoo Sung, Prathab Murugesa
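A sketch of context-sensitive biasing in the spirit of this abstract: a voice-action type detected in the first portion of an utterance selects candidate entities, which are then favored when scoring hypotheses for the later portion. The action types, entity lists, and scoring boost are assumptions.

```python
# Illustrative context lists per voice-action type (assumed, not from the patent).
ACTION_ENTITIES = {
    "call":  ["alice", "bob", "dr. murphy"],
    "play":  ["bohemian rhapsody", "blue in green"],
}

def detect_action(first_portion: str):
    for action in ACTION_ENTITIES:
        if first_portion.lower().startswith(action):
            return action
    return None

def rescore_hypotheses(hypotheses, biased_entities, boost: float = 2.0):
    """Boost recognition hypotheses that contain a context-relevant entity."""
    def score(hyp):
        text, base_score = hyp
        bonus = boost if any(e in text.lower() for e in biased_entities) else 0.0
        return base_score + bonus
    return max(hypotheses, key=score)

first_portion = "call"
action = detect_action(first_portion)
biased = ACTION_ENTITIES.get(action, [])
# Competing hypotheses for the later portion of the same voice input.
hypotheses = [("doctor murky", 5.1), ("dr. murphy", 4.9)]
print(rescore_hypotheses(hypotheses, biased))
```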
  • Patent number: 11663402
    Abstract: An approach for a fast and accurate word embedding model, "desc2vec," which handles out-of-dictionary (OOD) words by learning from their dictionary descriptions, is disclosed. In response to determining that a target text element is not in a set of reference text elements, information describing the target text element is obtained. The information comprises a set of descriptive text elements. A set of vectorized representations for the set of descriptive text elements is determined. A target vectorized representation for the target text element is determined based on the set of vectorized representations using a machine learning model. The machine learning model is trained to represent a predetermined association between the set of vectorized representations for the set of descriptive text elements describing the target text element and the target vectorized representation.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Chao-Min Chang, Kuei-Ching Lee, Ci-Hao Wu, Chia-Heng Lin
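A toy sketch of building an embedding for an out-of-dictionary word from its description; a simple mean of description-word vectors stands in for the trained machine learning model the abstract describes, and the tiny embedding table is fabricated for illustration.

```python
import numpy as np

# Tiny illustrative embedding table for in-dictionary words.
EMBEDDINGS = {
    "small":  np.array([0.9, 0.1, 0.0]),
    "furry":  np.array([0.2, 0.8, 0.1]),
    "animal": np.array([0.1, 0.2, 0.9]),
}

def embed_ood_word(word: str, description: str) -> np.ndarray:
    """If `word` is out of dictionary, build its vector from its description.
    A mean of description-word vectors stands in for the learned mapping."""
    if word in EMBEDDINGS:
        return EMBEDDINGS[word]
    vectors = [EMBEDDINGS[w] for w in description.lower().split() if w in EMBEDDINGS]
    if not vectors:
        raise ValueError("no known descriptive words to embed from")
    return np.mean(vectors, axis=0)

print(embed_ood_word("chinchilla", "a small furry animal"))
```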
  • Patent number: 11646044
    Abstract: A method obtains a first sound signal representative of a first sound, including a first spectrum envelope contour and a first reference spectrum envelope contour; obtains a second sound signal, representative of a second sound differing in sound characteristics from the first sound, including a second spectrum envelope contour and a second reference spectrum envelope contour; generates a synthesis spectrum envelope contour by transforming the first spectrum envelope contour based on a first difference between the first spectrum envelope contour and the first reference spectrum envelope contour at a first time point of the first sound signal, and a second difference between the second spectrum envelope contour and the second reference spectrum envelope contour at a second time point of the second sound signal; and generates a third sound signal representative of the first sound that has been transformed using the generated synthesis spectrum envelope contour.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: May 9, 2023
    Assignee: YAMAHA CORPORATION
    Inventors: Ryunosuke Daido, Hiraku Kayama
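A sketch of one plausible reading of the envelope transformation: the first sound's deviation from its reference contour is shifted toward the second sound's deviation from its own reference. The exact combination rule is an assumption, since the abstract only states that the transform is based on the two differences.

```python
import numpy as np

def synthesis_envelope(env1, ref1, env2, ref2, alpha: float = 1.0):
    """Transform the first envelope using the two reference differences.
    Assumed rule: replace the first sound's deviation from its reference with
    the second sound's deviation, scaled by `alpha`."""
    diff1 = env1 - ref1          # first difference, at a time point of sound 1
    diff2 = env2 - ref2          # second difference, at a time point of sound 2
    return env1 + alpha * (diff2 - diff1)

bins = np.linspace(0, 1, 8)
env1, ref1 = 1.0 - bins, 0.9 - bins                  # first sound and its reference
env2, ref2 = 1.0 - 0.5 * bins, 0.8 - 0.5 * bins      # second sound and its reference
print(synthesis_envelope(env1, ref1, env2, ref2))
```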
  • Patent number: 11640826
    Abstract: A communication system includes at least one first device and at least one second device that are linked in a manner enabling data transfer with each other. The first device expresses the speech signal it receives as input in terms of energy functions representing the energy patterns, information functions representing the information patterns, and noise functions of the frames of real speech samples, and transfers the indexes of these functions in a database, together with the frame gain factor of each frame, to the second device. The second device retrieves the functions via their indexes from a copy of the database and reconstructs the speech signal from these functions and the frame gain factors, enabling it to be provided as the voice output.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: May 2, 2023
    Inventor: Bekir Siddik Binboga Yarman
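A simplified sketch of the index-and-gain scheme: the sender matches each frame against a shared database of waveform shapes and transmits only an index and a frame gain; the receiver reconstructs from its copy of the database. The patent's separate energy/information/noise functions are collapsed into a single codebook here, and correlation matching is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
CODEBOOK = rng.normal(size=(16, 80))          # shared database of frame shapes
CODEBOOK /= np.linalg.norm(CODEBOOK, axis=1, keepdims=True)

def encode_frame(frame: np.ndarray):
    """Pick the best-matching database entry; send only its index and a gain."""
    scores = CODEBOOK @ frame
    index = int(np.argmax(np.abs(scores)))
    gain = float(scores[index])               # projection onto the chosen shape
    return index, gain

def decode_frame(index: int, gain: float) -> np.ndarray:
    """Receiver reconstructs the frame from its copy of the database."""
    return gain * CODEBOOK[index]

frame = 0.7 * CODEBOOK[3] + 0.05 * rng.normal(size=80)
index, gain = encode_frame(frame)
reconstructed = decode_frame(index, gain)
print(index, round(gain, 3), round(float(np.linalg.norm(frame - reconstructed)), 3))
```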
  • Patent number: 11636271
    Abstract: According to one embodiment, a dialogue apparatus includes a processing circuit. The processing circuit designates one database from among a plurality of databases. The processing circuit extracts a keyword from text information. The processing circuit searches, using the keyword, both the designated database and another database that is included in the plurality of databases and is other than the designated database. The processing circuit generates a response in accordance with a first number of hit items, which is the number of data items matching the keyword in the designated database, and a second number of hit items, which is the number of data items matching the keyword in the other database.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 25, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yuka Kobayashi, Kenji Iwata, Takami Yoshida
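A small sketch of the hit-count logic: the keyword is searched in the designated database and in the other databases, and the response depends on the two hit counts. The response wording and the databases are illustrative.

```python
DATABASES = {
    "restaurants": [{"name": "Sushi Aoi"}, {"name": "Ramen Taro"}],
    "hotels": [{"name": "Hotel Aoi"}, {"name": "Grand Sakura"}],
}

def count_hits(db_name: str, keyword: str) -> int:
    return sum(keyword.lower() in item["name"].lower() for item in DATABASES[db_name])

def respond(designated_db: str, keyword: str) -> str:
    first_hits = count_hits(designated_db, keyword)
    other_hits = sum(count_hits(db, keyword) for db in DATABASES if db != designated_db)
    if first_hits:
        return f"Found {first_hits} match(es) in {designated_db}."
    if other_hits:
        return f"No matches in {designated_db}, but {other_hits} found elsewhere. Switch databases?"
    return "No matches found anywhere."

print(respond("restaurants", "Aoi"))     # hit in the designated database
print(respond("restaurants", "Sakura"))  # hit only in another database
```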
  • Patent number: 11636275
    Abstract: Disclosed herein are a natural language processing system and method, more particularly a natural language processing system and method using a synapper model unit.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: April 25, 2023
    Inventor: Min Ku Kim
  • Patent number: 11625561
    Abstract: A method for determining a class to which an input image belongs in an inference process includes: storing a frequent feature for each class in a frequent feature database; inputting an image as the input image; inferring which class the input image belongs to; extracting a plurality of features that appear in the inference process; extracting one of the features that satisfies a predetermined condition as a representative feature; reading out the frequent feature corresponding to the inferred class from the frequent feature database; extracting one or a plurality of ground features based on the frequent feature and the representative feature; storing concept data representing each feature in an annotation database; reading out the concept data corresponding to the one or the plurality of ground features from the annotation database; generating explanatory information; and outputting the ground feature and the explanatory information together with the inferred class.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: April 11, 2023
    Assignee: DENSO CORPORATION
    Inventors: Hiroshi Kuwajima, Masayuki Tanaka
  • Patent number: 11626113
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: April 11, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
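A sketch of the quotation-matching idea: the transcribed query is looked up in quotation metadata, and a crude audio-signature vector is compared against the stored signature by cosine similarity. The signature representation and threshold are assumptions.

```python
import numpy as np

# Toy quotation metadata: transcript -> (content item, stored audio signature).
QUOTATION_DB = {
    "may the force be with you": ("Star Wars", np.array([0.8, 0.3, 0.5])),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_transcript: str, query_signature: np.ndarray, threshold: float = 0.95):
    entry = QUOTATION_DB.get(query_transcript.lower())
    if entry is None:
        return None                       # plain keyword search would run instead
    content_item, stored_signature = entry
    if cosine(query_signature, stored_signature) >= threshold:
        return content_item               # user mimicked the quotation's delivery
    return None

print(search("May the force be with you", np.array([0.82, 0.28, 0.52])))
print(search("May the force be with you", np.array([0.1, 0.9, 0.1])))
```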
  • Patent number: 11621010
    Abstract: Efficient assignment of numbers of bits is performed even under low-bit-rate conditions. A quantizer 12 obtains a quantized spectral sequence from a frequency spectral sequence. An integer transformer 13 obtains a unified quantized spectral sequence by applying a bijective transformation that obtains a transformed integer for each set of integer values obtained from the quantized spectral sequence. An integer encoder 15 obtains an integer code by encoding the unified quantized spectral sequence using a bit assignment sequence. An object-to-be-encoded estimator 18 obtains an estimated unified spectral sequence from the frequency spectral sequence by the transformation performed by the integer transformer 13 or by a transformation that approximates the magnitude relationship between values before and after that transformation. A bit assigner 14 obtains a bit assignment sequence and a bit assignment code from the estimated unified spectral sequence.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: April 4, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Ryosuke Sugiura, Yutaka Kamamoto, Takehiro Moriya
  • Patent number: 11609740
    Abstract: Techniques for joining a device of a third user to a communication between a device of a first user and a device of a second user are described herein. For instance, two or more users may utilize respective computing devices to engage in a telephone call, a video call, an instant-messaging session, or any other type of communication in which the users communicate with each other audibly and/or visually. In some instances, a first user of the two users may issue a voice command requesting to join a device of a third user to the communication. One or more computing devices may recognize this voice command and may attempt to join a device of a third user to the communication.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: March 21, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ty Loren Carlson, Rohan Mutagi
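A toy sketch of handling a "join a third user" voice command during a call: the command is parsed, the named contact is resolved to a device, and that device is added to the participants. The command grammar and contact list are placeholders.

```python
import re

CONTACTS = {"alice": "device-042", "rohan": "device-108"}

class Call:
    def __init__(self, participants):
        self.participants = list(participants)

    def handle_voice_command(self, command: str) -> str:
        match = re.search(r"\b(?:add|join)\b\s+(\w+)", command.lower())
        if not match:
            return "Command not recognized."
        name = match.group(1)
        device = CONTACTS.get(name)
        if device is None:
            return f"No contact named {name}."
        self.participants.append(device)
        return f"Joined {name}'s device ({device}) to the call."

call = Call(["device-001", "device-007"])
print(call.handle_voice_command("Alexa, add Alice to the call"))
```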
  • Patent number: 11610062
    Abstract: A label assignment model generation device extracts a plurality of feature amounts for a word from a document as an extraction source, and generates, based on an appearance frequency of each of the extracted feature amounts, a label assignment model that is a machine learning model and assigns a label to a word included in a document as an assignment target. The label assignment model generation device adjusts a degree of influence of the feature amount on the label assignment model based on a deviation of the appearance frequency of each of the plurality of feature amounts. The label assignment model generation device extracts a plurality of feature amounts for a word from a remaining document excluding a predetermined document from a plurality of documents as extraction sources, and generates the label assignment model based on the appearance frequency of the extracted feature amounts.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: March 21, 2023
    Assignee: HITACHI, LTD.
    Inventors: Takuya Oda, Michiko Tanaka
  • Patent number: 11605373
    Abstract: A text search query including one or more words may be received. An ASR index created for an audio recording may be searched over using the query to produce ASR search results including words, each word associated with a confidence score. For each of the words in the ASR search results associated with a confidence score below a threshold (and in some cases having one or more preceding words in the ASR index and one or more subsequent words in the ASR index), a phonetic representation of the audio recording may be searched for the word having the confidence score below the threshold, where it occurs in the audio recording, possibly after the one or more preceding words and before the one or more subsequent words, to produce phonetic search results. Search results may be returned that include ASR and phonetic results.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: March 14, 2023
    Assignee: Nice Ltd.
    Inventors: William Mark Finlay, Robert William Morris, Peter S. Cardillo, Maria Kunin
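A sketch of the combined ASR/phonetic search: the query is first matched against the ASR index, and for low-confidence regions a phonetic representation of the recording is consulted instead. The confidence threshold, the phonetic notation, and the grapheme-to-phoneme table are toy stand-ins.

```python
# ASR index: list of (word, confidence, start_seconds) for one recording.
ASR_INDEX = [
    ("please", 0.95, 1.0),
    ("call", 0.92, 1.4),
    ("shawn", 0.41, 1.7),   # low confidence: verify phonetically
    ("today", 0.90, 2.1),
]

# Crude phonetic rendering of the recording around each time offset (placeholder
# for a real phonetic representation of the audio).
PHONETIC_TRACK = {1.7: "S AO N"}

def crude_phonemes(word: str) -> str:
    """Toy grapheme-to-phoneme stand-in, good enough for this illustration."""
    table = {"sean": "S AO N", "shawn": "S AO N", "shown": "SH OW N"}
    return table.get(word.lower(), word.upper())

def search(query_word: str, threshold: float = 0.6):
    results = []
    for word, confidence, start in ASR_INDEX:
        if word == query_word.lower() and confidence >= threshold:
            results.append(("asr", word, start))
    # Phonetic fallback for low-confidence regions of the recording.
    for word, confidence, start in ASR_INDEX:
        if confidence < threshold and crude_phonemes(query_word) == PHONETIC_TRACK.get(start):
            results.append(("phonetic", query_word, start))
    return results

print(search("Sean"))   # not in the ASR text, but phonetically present at 1.7s
```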
  • Patent number: 11587549
    Abstract: A text search query including one or more words may be received. An ASR index created for an audio recording may be searched over using the query to produce ASR search results including words, each word associated with a confidence score. For each of the words in the ASR search results associated with a confidence score below a threshold (and in some cases having one or more preceding words in the ASR index and one or more subsequent words in the ASR index), a phonetic representation of the audio recording may be searched for the word having the confidence score below the threshold, where it occurs in the audio recording, possibly after the one or more preceding words and before the one or more subsequent words, to produce phonetic search results. Search results may be returned that include ASR and phonetic results.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: February 21, 2023
    Assignee: Nice Ltd.
    Inventors: William Mark Finlay, Robert William Morris, Peter S. Cardillo, Maria Kunin
  • Patent number: 11573763
    Abstract: A system, method, and wireless earpieces for implementing a virtual assistant. A request is received from a user to be implemented by wireless earpieces. A virtual assistant is executed on the wireless earpieces. An action is implemented to fulfill the request utilizing the virtual assistant. The wireless earpieces may be a set of wireless earpieces and the virtual assistant may be implemented independently by the wireless earpieces.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: February 7, 2023
    Assignee: BRAGI GMBH
    Inventor: Peter Vincent Boesen