Patents Examined by Thuykhanh Le
  • Patent number: 11080484
    Abstract: Electronic records are accessed from computer storage for a given subject, wherein the electronic records include natural language notes about the subject. Tokens are identified in the natural language notes. For each token, a corresponding intensity score is generated representing an intensity of match between the token and a particular dimension, wherein the intensity scores are each values on a first scale and each dimension is one of a plurality of dimensions of a category out of a plurality of categories. Rescaled intensity scores are generated from the intensity scores by rescaling them from the first scale to a second scale different from the first scale. For each dimension of each category, a dimension score is compiled based on the intensity scores, and the subject is categorized into at least one category based on the dimension scores.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: August 3, 2021
    Assignee: Omniscient Neurotechnology Pty Limited
    Inventors: Michael Edward Sughrue, Stephane Philippe Doyen, Peter James Nicholas
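A minimal Python sketch of the pipeline this abstract describes: tokens are scored per dimension on a first scale, rescaled to a second scale, compiled into dimension scores, and used to categorize the subject. The function names, the 0-1 and 0-10 scales, and the threshold rule are illustrative assumptions, not the patented method:

```python
def rescale(score, old_max=1.0, new_max=10.0):
    """Map an intensity score from a [0, old_max] scale to [0, new_max]."""
    return score / old_max * new_max

def categorize(tokens, token_dimension_scores, categories, threshold=5.0):
    """Compile per-dimension scores from rescaled token intensities, then
    assign every category with a dimension score clearing the threshold."""
    dim_scores = {}
    for token in tokens:
        for dim, score in token_dimension_scores.get(token, {}).items():
            dim_scores[dim] = dim_scores.get(dim, 0.0) + rescale(score)
    assigned = [cat for cat, dims in categories.items()
                if any(dim_scores.get(d, 0.0) >= threshold for d in dims)]
    return dim_scores, assigned
```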
  • Patent number: 11081111
    Abstract: Methods, systems, and related products provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of the user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: August 3, 2021
    Assignee: Spotify AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Patent number: 11031014
    Abstract: Systems and methods for optimizing voice detection via a network microphone device (NMD) based on a selected voice-assistant service (VAS) are disclosed herein. In one example, the NMD detects sound via individual microphones and selects a first VAS to communicate with the NMD. The NMD produces a first sound-data stream based on the detected sound using a spatial processor in a first configuration. Once the NMD determines that a second VAS is to be selected over the first VAS, the spatial processor assumes a second configuration for producing a second sound-data stream based on the detected sound. The second sound-data stream is then transmitted to one or more remote computing devices associated with the second VAS.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 8, 2021
    Assignee: Sonos, Inc.
    Inventors: Connor Kristopher Smith, Kurt Thomas Soto, Charles Conor Sleith
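The VAS-dependent reconfiguration of the spatial processor described above can be caricatured in a few lines: selecting a different voice-assistant service makes the processor assume that service's configuration before producing the sound-data stream. The configuration values and class shape are invented for illustration:

```python
# Hypothetical per-VAS spatial-processor configurations.
VAS_CONFIGS = {
    "vas_a": {"beam_width": "narrow"},
    "vas_b": {"beam_width": "wide"},
}

class SpatialProcessor:
    def __init__(self):
        self.config = None

    def select_vas(self, vas):
        # Assume the configuration associated with the newly selected VAS.
        self.config = VAS_CONFIGS[vas]

    def process(self, sound):
        # Produce a sound-data stream under the current configuration.
        return {"stream": sound, "config": self.config}
```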
  • Patent number: 11004445
    Abstract: In one embodiment, a smartwatch includes a processor and a memory storing instructions to be executed by the processor. The instructions are configured to cause the processor to obtain input comprising voice information; determine whether the voice information comprises an interrogative keyword; and determine that the voice information is interrogative information in response to determining that the voice information comprises an interrogative keyword. The instructions are further configured to cause the processor to determine whether reply information corresponding to the interrogative information can be obtained from a memory of the smartwatch, and to send the interrogative information to a server through a wireless network in response to determining that the reply information cannot be obtained from the memory of the smartwatch.
    Type: Grant
    Filed: May 27, 2017
    Date of Patent: May 11, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yizu Feng, Bin Li
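A toy sketch of the decision flow in this abstract: detect an interrogative keyword, try the watch's local memory for a reply, and fall back to a server otherwise. The keyword list, cache contents, and server stub are assumptions:

```python
INTERROGATIVE_KEYWORDS = {"what", "when", "where", "who", "why", "how"}

def handle_utterance(text, local_replies, send_to_server):
    """Answer interrogative information locally when possible, else
    forward it to the server; return None for non-questions."""
    words = text.lower().rstrip("?").split()
    if not any(w in INTERROGATIVE_KEYWORDS for w in words):
        return None  # not interrogative information; nothing to answer
    reply = local_replies.get(text)
    if reply is not None:
        return reply             # answered from the smartwatch's memory
    return send_to_server(text)  # forwarded over the wireless network
```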
  • Patent number: 10991384
    Abstract: A method for automatic affective state inference from speech signals and an automated affective state inference system are disclosed. In an embodiment, the method includes capturing speech signals of a target speaker; extracting one or more acoustic voice parameters from the captured speech signals; calibrating voice markers on the basis of the one or more acoustic voice parameters extracted from the speech signals of the target speaker, one or more speaker-inherent reference parameters of the target speaker, and one or more inter-speaker reference parameters of a sample of reference speakers; applying at least one set of prediction rules based on appraisal criteria to the calibrated voice markers to infer two or more appraisal criteria scores relating to appraisal of affect-eliciting events with which the target speaker is confronted; and assigning one or more affective state terms to the two or more appraisal criteria scores.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 27, 2021
    Assignee: audEERING GMBH
    Inventors: Florian Eyben, Klaus R. Scherer, Björn W. Schuller
  • Patent number: 10950251
    Abstract: Systems and methods include audio encoders having improved coding of harmonic signals. The audio encoders can be implemented as transform-based codecs with frequency coefficients quantized using spectral weights. The frequency coefficients can be quantized by use of the generated spectral weights applied to the frequency coefficients prior to the quantization or by use of the generated spectral weights in computation of error within a vector quantization that performs the quantization. Additional apparatus, systems, and methods are disclosed.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: March 16, 2021
    Assignee: DTS, Inc.
    Inventors: Elias Nemer, Zoran Fejzo
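The use of spectral weights in computing quantization error, as described above, can be sketched as a weighted nearest-codeword search: perceptually important frequency coefficients get larger weights, so errors there cost more when choosing a codebook vector. The codebook and weight values below are invented:

```python
def weighted_error(coeffs, candidate, weights):
    """Spectrally weighted squared error between frequency coefficients
    and a codebook vector."""
    return sum(w * (c - q) ** 2 for w, c, q in zip(weights, coeffs, candidate))

def quantize(coeffs, codebook, weights):
    """Pick the codebook vector minimizing the weighted error."""
    return min(codebook, key=lambda cand: weighted_error(coeffs, cand, weights))
```

With weights emphasizing the first coefficient, a candidate matching that coefficient wins even if it is worse elsewhere; swapping the weights flips the choice.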
  • Patent number: 10936824
    Abstract: Automatic semantic analysis for characterizing and correlating literary elements within a digital work of literature is accomplished by employing natural language processing and deep semantic analysis of text to create annotations for the literary elements found in a segment or in the entirety of the literature; assigning a weight to each literary element and its associated annotations, wherein the weight indicates an importance or relevance of the literary element to at least the segment of the work of literature; correlating and matching the literary elements to each other to establish one or more interrelationships; and producing an overall weight for the correlated matches.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: March 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Scott R. Carrier, Eric Woods
  • Patent number: 10923114
    Abstract: Configuring computer memory, including parsing digitized speech into triples of a description logic; determining whether parsed triples are recorded in a general language triple store of the computer memory; determining whether parsed triples are recorded in a jargon triple store of the computer memory; and, if the parsed triples are recorded in neither the general language triple store nor the jargon triple store, recording the parsed triples in the jargon triple store.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: February 16, 2021
    Assignee: N3, LLC
    Inventor: Shannon L. Copeland
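A minimal sketch of the triple-store routing rule in this abstract: a parsed triple already recorded in either store is left alone; otherwise it lands in the jargon store. Triples are plain Python tuples and the store contents are invented:

```python
def record_triples(parsed_triples, general_store, jargon_store):
    """Record each parsed triple in the jargon store unless it already
    appears in the general language store or the jargon store."""
    for triple in parsed_triples:
        if triple not in general_store and triple not in jargon_store:
            jargon_store.add(triple)
    return jargon_store
```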
  • Patent number: 10916243
    Abstract: Methods and systems for facilitating communications between shared electronic devices are described herein. In some embodiments, a group account may be assigned to a shared electronic device. The group account may include one or more user accounts, where individuals associated with those user accounts may interact with the shared electronic device, and also may form a part of the group account. When a message is sent from one shared electronic device to another personal device or shared electronic device, the message may be indicated as being sent from the group account, as if the shared electronic device corresponds to its own separate account. In some embodiments, speaker identification processing may be employed to determine a speaker of the message and, if the speaker is able to be identified, the message may be sent from the corresponding speaker's user account instead of the shared electronic device's corresponding group account.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: February 9, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Christo Frank Devaraj, Venkata Krishnan Ramamoorthy, Gregory Michael Hart, Samuel Scott Gigliotti, Scott Southwood, Ran Mokady, Hale Sostock, Roman Yusufov
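The sender-attribution rule above (the identified speaker's user account when speaker identification succeeds, otherwise the shared device's group account) reduces to a small function. The account names and the identifier stub are hypothetical:

```python
def choose_sender(group_account, user_accounts, identify_speaker, audio):
    """Return the account a message from a shared device is sent from."""
    speaker = identify_speaker(audio)  # returns None when unidentified
    if speaker is not None and speaker in user_accounts:
        return user_accounts[speaker]  # attributed to the speaker
    return group_account               # falls back to the group account
```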
  • Patent number: 10909326
    Abstract: One or more implementations of the present specification provide a social content risk identification method. Social content data to be identified is obtained. Features of the social content data are extracted, including a plurality of features of at least one of social behavior records or social message records in the social content data. The features are expanded by generating dimension-extended features using a tree structured machine learning model. The social content data is classified as risky social content data by processing the dimension-extended features using a deep machine learning model.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: February 2, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Chuan Wang
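The tree-based feature expansion described above can be sketched with two hand-written decision stumps standing in for a trained tree ensemble: each tree maps the raw features to a leaf index, and the one-hot-encoded leaf indices are appended as dimension-extended features for the downstream deep model. Splits and thresholds are invented:

```python
def tree1(x):
    """Stump splitting on feature 0 (stand-in for a trained tree)."""
    return 0 if x[0] < 0.5 else 1

def tree2(x):
    """Stump splitting on feature 1."""
    return 0 if x[1] < 10 else 1

def expand(x, trees=(tree1, tree2)):
    """Append one-hot-encoded leaf indices to the raw feature vector."""
    extended = list(x)
    for tree in trees:
        leaf = tree(x)
        extended += [1 if leaf == i else 0 for i in range(2)]
    return extended
```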
  • Patent number: 10909981
    Abstract: A method of controlling a device includes controlling, by a mobile terminal, a processor to acquire a voice instruction of a user; controlling an artificial intelligence (AI) module to determine, in accordance with a mapping relationship collection between preset voice commands and instruction code combination information and with the acquired voice instruction, the instruction code combination information corresponding to the acquired voice instruction, where the instruction code combination information includes a plurality of instruction codes and a transmission sequence of the instruction codes; and controlling the processor to transmit the instruction codes to a target device in accordance with the transmission sequence, where each of the instruction codes is used to instruct the target device to execute a corresponding operation.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: February 2, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Jian Bai
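A toy sketch of the command-to-instruction-code mapping above: a recognized voice command looks up an ordered instruction-code combination, which is transmitted to the target device in sequence. The mapping table, code values, and transport stub are invented for illustration:

```python
# Hypothetical mapping from voice commands to ordered instruction codes.
COMMAND_MAP = {
    "turn on the tv": [0x01, 0x10],  # power, then input select
    "volume up": [0x20],
}

def transmit(command, send):
    """Send each instruction code for the command in its transmission
    sequence; return the codes actually sent."""
    codes = COMMAND_MAP.get(command)
    if codes is None:
        return []  # no mapping for this voice instruction
    sent = []
    for code in codes:  # preserve the transmission sequence
        send(code)
        sent.append(code)
    return sent
```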
  • Patent number: 10902849
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the computer controlling an utterance of a robot, the process including detecting an utterance of a person by using a microphone, obtaining, in response to the detecting, pieces of response information for the utterance of the person based on first information indicating a content of the utterance of the person, obtaining second information relating to at least one of the person and a motion of the person other than the utterance of the person, selecting specified response information among the pieces of response information based on the second information, and transmitting, to the robot, an instruction that causes the robot to execute a response in accordance with the specified response information.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: January 26, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Kenichiro Maeda, Masaki Miura, Makoto Hyuga
  • Patent number: 10878833
    Abstract: A speech processing method and a terminal are provided. The method includes: receiving signals from a plurality of microphones; performing, by using a same sampling rate, analog-to-digital conversion on the plurality of paths of signals received from the plurality of microphones, to obtain a plurality of paths of time-domain digital signals; performing time-to-frequency-domain conversion on the plurality of paths of time-domain digital signals to obtain a plurality of paths of frequency-domain signals; and determining a signal type of the primary frequency-domain signal based on at least one of a sound pressure difference between the primary frequency-domain signal and each of N paths of secondary frequency-domain signals in the M paths of secondary frequency-domain signals, a phase difference between the primary frequency-domain signal and each of the N paths of secondary frequency-domain signals, and a frequency distribution characteristic of the primary frequency-domain signal.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: December 29, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yanbin Du, Zhihai Zhu, Meng Liao, Weijun Zheng, Weibin Chen, Guangzhao Bao, Cunshou Qiu
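One of the cues above, the sound pressure difference between the primary frequency-domain signal and the secondary signals, can be sketched as a level comparison: a primary channel much louder than the secondaries suggests a close talker. The 6 dB threshold and the class labels are assumptions, not the patented criteria:

```python
import math

def level_db(samples):
    """RMS level of a signal block in dB (relative)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def classify(primary, secondaries, threshold_db=6.0):
    """Label the primary signal by its average level advantage over
    the secondary channels."""
    diff = sum(level_db(primary) - level_db(s) for s in secondaries) / len(secondaries)
    return "near-field speech" if diff >= threshold_db else "ambient"
```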
  • Patent number: 10861453
    Abstract: A processing device receives, from a speech-detection device, intent data and metadata associated with an intent to schedule a resource located in proximity to a location of the speech-detection device. The metadata includes one or more device identifiers associated with one or more devices discovered by the speech-detection device. The processing device determines an availability of resources associated with the one or more device IDs and schedules one of the resources based on the availability. The scheduled resource is located in proximity to the location of the speech-detection device.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: December 8, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Kunal Chadha, James L. Ford
  • Patent number: 10847162
    Abstract: Multi-modal speech localization is achieved using image data captured by one or more cameras and audio data captured by a microphone array. Audio data captured by each microphone of the array is transformed to obtain a frequency domain representation that is discretized into a plurality of frequency intervals. Image data captured by each camera is used to determine a positioning of each human face. Input data is provided to a previously-trained audio source localization classifier, including the frequency domain representation of the audio data captured by each microphone and the positioning of each human face captured by each camera, in which the positioning of each human face represents a candidate audio source. Based on the input data, the classifier indicates an identified audio source: the human face estimated to be the one from which the audio data originated.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eyal Krupka, Xiong Xiao
  • Patent number: 10843707
    Abstract: The assistance system of a motor vehicle includes a driver-assistance system for monitoring a driving situation of the motor vehicle and a control device for determining at least one course of action resulting from the monitored driving situation, taking into consideration data that can be retrieved by the control device independently of a user input to the assistance system. The assistance system also includes an output device for outputting a query offering to provide to the driver a function of the assistance system corresponding to a course of action, and a detection device for detecting a response of the driver. The control device provides the function of the assistance system depending on the detected response, to reduce the cognitive load of the driver.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: November 24, 2020
    Assignee: Audi AG
    Inventor: Gerd Gruchalski
  • Patent number: 10847146
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for switching among multiple speech recognition models. The method includes: acquiring at least one piece of speech information from user input speech; recognizing the speech information and matching a linguistic category to it, to determine a corresponding target linguistic category based on the matching degree; and switching the currently used speech recognition model to a speech recognition model corresponding to the target linguistic category.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: November 24, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Bing Jiang, Xiangang Li, Ke Ding
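The model-switching step above reduces to picking the recognition model for the linguistic category with the highest matching degree. The category scores and model registry are illustrative stand-ins for the patent's recognition and matching machinery:

```python
def switch_model(scores, models, current):
    """Return the speech recognition model for the best-matching
    linguistic category; keep the current model if none is registered.

    `scores` maps category name -> matching degree for the utterance.
    """
    target = max(scores, key=scores.get)
    return models.get(target, current)
```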
  • Patent number: 10839803
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual hotwords are disclosed. In one aspect, a method includes the actions of determining, by a computing device during a boot process, a context associated with the computing device. The actions further include, based on the context associated with the computing device, determining a hotword. The actions further include, after determining the hotword, receiving audio data that corresponds to an utterance. The actions further include determining that the audio data includes the hotword. The actions further include, in response to determining that the audio data includes the hotword, performing an operation associated with the hotword.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: November 17, 2020
    Assignee: Google LLC
    Inventors: Christopher Thaddeus Hughes, Ignacio Lopez Moreno, Aleksandar Kracun
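A sketch of the contextual-hotword flow above: the set of active hotwords is derived from the device's context, and an utterance containing an active hotword triggers its associated operation. The contexts, hotwords, and operation names are invented, and the transcript stands in for detection on raw audio data:

```python
# Hypothetical context -> {hotword: operation} table.
CONTEXT_HOTWORDS = {
    "timer_running": {"stop": "cancel_timer"},
    "idle": {"ok device": "wake"},
}

def handle_audio(context, transcript, operations=CONTEXT_HOTWORDS):
    """Perform the operation for a hotword active in the current context;
    hotwords from other contexts are ignored."""
    for hotword, op in operations.get(context, {}).items():
        if hotword in transcript.lower():
            return op
    return None
```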
  • Patent number: 10839165
    Abstract: Systems and methods for determining knowledge-guided information for a recurrent neural network (RNN) to guide the RNN in semantic tagging of an input phrase are presented. A knowledge encoding module of a Knowledge-Guided Structural Attention Process (K-SAP) receives an input phrase and, in conjunction with additional sub-components or cooperative components, generates a knowledge-guided vector that is provided with the input phrase to the RNN for linguistic semantic tagging. Generating the knowledge-guided vector comprises at least parsing the input phrase and generating a corresponding hierarchical linguistic structure comprising one or more discrete sub-structures. The sub-structures may be encoded into vectors along with attention weighting identifying those sub-structures that have greater importance in determining the semantic meaning of the input phrase.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: November 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yun-Nung Vivian Chen, Dilek Z. Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, Li Deng
  • Patent number: 10831999
    Abstract: One embodiment provides a method, including: receiving a foreign language trouble ticket requiring resolution; translating text of the foreign language trouble ticket into a language known to the person, wherein the translating comprises (i) translating a subset of foreign language keywords within a portion of the foreign language trouble ticket identified as a problem portion into the known language and (ii) translating a remaining subset of keywords into the known language using keyword links generated from previously resolved tickets by: extracting keywords from the historical tickets, wherein the keywords are recognized as corresponding to an identified portion; and generating at least one keyword link from at least one of the identified portions identified as a problem description portion; and directing the known language ticket to a resolver group, wherein the resolver group is selected based upon an issue identified within the ticket.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Atri Mandal, Giriprasad Sridhara, Vijay Ekambaram, Gargi Banerjee Dasgupta