Patents Examined by Darioush Agahi
  • Patent number: 11244679
Abstract: An electronic device according to various embodiments of the present invention comprises: a display; a communication circuit; a processor electrically connected with the display and the communication circuit; and a memory electrically connected with the processor. When instructions stored in the memory are executed, the processor acquires message data received through the communication circuit and checks attribute information included in the message data. Based on that check, the display shows the text data for a first time period if the text data included in the message data was input from a touch screen of an external electronic device, and for a second time period, different from the first, if the text data was converted from voice data.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: February 8, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Nojoon Park, Junhyung Park, Hyojung Lee, Taehee Lee, Geonsoo Kim, Hanjib Kim, Yongjoon Jeon
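The display-timing rule this abstract describes can be sketched as a small helper. This is an illustrative sketch only: the attribute labels and the specific durations are assumptions, not values from the patent.

```python
def display_duration_s(text_attr, typed_s=3.0, voice_s=6.0):
    """Return how long the display shows an incoming message's text:
    text typed on the sender's touch screen is shown for a first time,
    text converted from voice data for a second, different time.
    The labels and durations here are hypothetical."""
    return typed_s if text_attr == "typed" else voice_s
```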
  • Patent number: 11189302
Abstract: A speech emotion detection system may obtain to-be-detected speech data. The system may generate speech frames based on framing processing and the to-be-detected speech data. The system may extract speech features corresponding to the speech frames to form a speech feature matrix corresponding to the to-be-detected speech data. The system may input the speech feature matrix to an emotion state probability detection model. The system may generate, based on the speech feature matrix and the emotion state probability detection model, an emotion state probability matrix corresponding to the to-be-detected speech data. The system may input the emotion state probability matrix and the speech feature matrix to an emotion state transition model. The system may generate an emotion state sequence based on the emotion state probability matrix, the speech feature matrix, and the emotion state transition model. The system may determine an emotion state based on the emotion state sequence.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: November 30, 2021
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Haibo Liu
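The pipeline this abstract walks through — framing, per-frame state probabilities, then a state sequence from a transition model — can be sketched with NumPy. The frame length, hop size, and the use of Viterbi decoding for the sequence step are assumptions for illustration; the patent does not fix these choices here.

```python
import numpy as np

def frame_signal(samples, frame_len=400, hop=160):
    """Split a 1-D sample array into overlapping frames (framing processing).
    Frame and hop sizes are illustrative defaults."""
    n = 1 + max(0, (len(samples) - frame_len) // hop)
    return np.stack([samples[i * hop : i * hop + frame_len] for i in range(n)])

def viterbi_states(prob_matrix, transition):
    """Pick a most-likely emotion-state sequence from a per-frame
    state-probability matrix and a state-transition matrix
    (a standard Viterbi decode, assumed here as the sequence model)."""
    n_frames, n_states = prob_matrix.shape
    score = np.log(prob_matrix[0] + 1e-12)
    back = np.zeros((n_frames, n_states), dtype=int)
    for t in range(1, n_frames):
        cand = score[:, None] + np.log(transition + 1e-12)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + np.log(prob_matrix[t] + 1e-12)
    path = [int(score.argmax())]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```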
  • Patent number: 11170175
Abstract: Certain aspects of the present disclosure provide techniques for generating a replacement sentence with the same or similar meaning but a different sentiment than an input sentence. The method generally includes receiving a request for a replacement sentence and iteratively determining a next word of the replacement sentence word-by-word based on an input sentence. Iteratively determining the next word generally includes evaluating a set of words of the input sentence using a language model configured to output candidate sentences and evaluating the candidate sentences using a sentiment model configured to output sentiment scores for the candidate sentences. Iteratively determining the next word further includes calculating convex combinations for the candidate sentences and selecting an ending word of one of the candidate sentences as the next word of the replacement sentence. The method further includes transmitting the replacement sentence in response to the request for the replacement sentence.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: November 9, 2021
    Assignee: INTUIT, INC.
    Inventors: Manav Kohli, Cindy Osmon, Nicholas Roberts
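The word-selection step above — a convex combination of a language-model score and a sentiment score over candidate sentences — can be sketched as follows. The mixing weight and the scoring scheme are illustrative assumptions, not taken from the patent.

```python
def pick_next_word(candidates, lam=0.6):
    """candidates: list of (candidate_sentence, lm_score, sentiment_score).
    Combine the two scores with a convex combination
    lam * lm + (1 - lam) * sentiment, then return the ending word of the
    best-scoring candidate as the next word of the replacement sentence."""
    best = max(candidates, key=lambda c: lam * c[1] + (1 - lam) * c[2])
    return best[0].split()[-1]
```

With `lam` closer to 1 the choice tracks fluency; closer to 0 it tracks the target sentiment.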
  • Patent number: 11164582
Abstract: Set forth is a motorized computing device that selectively navigates to a user according to content of a spoken utterance directed at the motorized computing device. The motorized computing device can modify operations of one or more motors of the motorized computing device according to whether the user provided a spoken utterance while the one or more motors are operating. The motorized computing device can render content according to interactions between the user and an automated assistant. For instance, when the automated assistant is requested to provide graphical content for the user, the motorized computing device can navigate to the user in order to present the content to the user. However, in some implementations, when the user requests audio content, the motorized computing device can bypass navigating to the user when the motorized computing device is within a distance from the user for audibly rendering the audio content.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: November 2, 2021
    Assignee: GOOGLE LLC
    Inventors: Scott Stanford, Keun-Young Park, Vitalii Tomkiv, Hideaki Matsui, Angad Sidhu
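The navigation decision described above reduces to a small predicate: drive to the user unless the request is for audio content and the device is already within audible range. The content-type labels and the range threshold below are illustrative assumptions.

```python
def should_navigate(content_type, distance_m, audible_range_m=3.0):
    """Decide whether the motorized device drives to the user before
    rendering. Graphical content always warrants navigating; audio content
    does not if the device is already close enough to be heard.
    The 3 m default is a hypothetical threshold."""
    if content_type == "audio" and distance_m <= audible_range_m:
        return False  # bypass navigation; render audibly from here
    return True
```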
  • Patent number: 11132998
Abstract: A voice recognition device includes: a first feature vector calculating unit (2) for calculating a first feature vector from voice data input; an acoustic likelihood calculating unit (4) for calculating an acoustic likelihood of the first feature vector by using an acoustic model used for calculating an acoustic likelihood of a feature vector; a second feature vector calculating unit (3) for calculating a second feature vector from the voice data; a noise degree calculating unit (6) for calculating a noise degree of the second feature vector by using a discriminant model used for calculating a noise degree indicating whether a feature vector is noise or voice; a noise likelihood recalculating unit (8) for recalculating an acoustic likelihood of noise on the basis of the acoustic likelihood of the first feature vector and the noise degree of the second feature vector; and a collation unit (9) for performing collation with a pattern of a vocabulary word to be recognized, by using the calculated acoustic likelihoods.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: September 28, 2021
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Toshiyuki Hanazawa, Tomohiro Narita
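The recalculation step — combining the noise model's acoustic likelihood with the discriminant model's noise degree — could take many forms; the abstract does not publish a formula. One plausible sketch, purely as an assumption, interpolates the two signals in the log domain:

```python
import math

def recalc_noise_likelihood(noise_acoustic_ll, noise_degree, alpha=0.5):
    """Hypothetical recalculation: blend the noise model's acoustic
    log-likelihood with the discriminant model's noise degree in [0, 1].
    A confident noise verdict raises the noise likelihood; alpha is an
    assumed mixing weight, not a value from the patent."""
    return (1 - alpha) * noise_acoustic_ll + alpha * math.log(noise_degree + 1e-9)
```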
  • Patent number: 11126797
    Abstract: Methods, systems, and devices for language mapping are described. Some machine learning models may be trained to support multiple languages. However, word embedding alignments may be too general to accurately capture the meaning of certain words when mapping different languages into a single reference vector space. To improve the accuracy of vector mapping, a system may implement a supervised learning layer to refine the cross-lingual alignment of particular vectors corresponding to a vocabulary of interest (e.g., toxic language). This supervised learning layer may be trained using a dictionary of toxic words or phrases across the different supported languages in order to learn how to weight an initial vector alignment to more accurately map the meanings behind insults, threats, or other toxic words or phrases between languages. The vector output from this weighted mapping can be sent to supervised models, trained on the reference vector space, to determine toxicity scores.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: September 21, 2021
    Assignee: Spectrum Labs, Inc.
    Inventors: Josh Newman, Yacov Salomon, Jonathan Thomas Purnell, Indrajit Haridas, Alexander Greene
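The supervised weighting layer described above — reweighting an initial cross-lingual alignment so toxic vocabulary maps more accurately — can be sketched with per-dimension weights fitted on a dictionary of aligned toxic word pairs. The diagonal-weight form and the closed-form least-squares fit are illustrative assumptions; the patent's layer may be more general.

```python
import numpy as np

def fit_weights(mapped_src, tgt):
    """Closed-form per-dimension weights minimising ||w * mapped - tgt||^2
    over rows of already-aligned toxic word-pair vectors (a simple stand-in
    for the supervised learning layer)."""
    return (mapped_src * tgt).sum(axis=0) / (mapped_src ** 2).sum(axis=0)

def refine_alignment(vec, base_map, weights):
    """Map a source-language vector into the reference space with the
    general alignment `base_map`, then apply the learned weights before
    handing the result to downstream toxicity models."""
    return weights * (base_map @ vec)
```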
  • Patent number: 11062693
Abstract: To provide a more natural-sounding set of voice prompts of an interactive voice response (IVR) script, the voice recordings of the prompts may be modified to have a predetermined amount of silence at the end of the recording. The amount of silence required can be determined from the context in which the voice prompt appears in the IVR script. Different contexts may include mid-sentence, terminating in a comma, or a sentence ending context. These contexts may require silence periods of 100 ms, 250 ms, and 500 ms, respectively. Voice files may be trimmed to remove any existing silence and then the required silence period may be added.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: July 13, 2021
    Assignee: West Corporation
    Inventors: Terry Olson, Mark Sempek, Roger Wehrle
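The trim-then-pad procedure above, with the three context-dependent silence periods the abstract gives (100 ms, 250 ms, 500 ms), can be sketched directly. The amplitude threshold used to detect trailing silence and the context labels are assumptions for illustration.

```python
import numpy as np

# Silence periods from the abstract, keyed by assumed context labels.
SILENCE_MS = {"mid_sentence": 100, "comma": 250, "sentence_end": 500}

def pad_prompt(samples, context, sample_rate=8000, threshold=0.01):
    """Trim any existing trailing silence from a voice prompt, then append
    the silence period its context in the IVR script calls for."""
    samples = np.asarray(samples, dtype=float)
    nonsilent = np.nonzero(np.abs(samples) > threshold)[0]
    trimmed = samples[: nonsilent[-1] + 1] if nonsilent.size else samples[:0]
    pad = np.zeros(sample_rate * SILENCE_MS[context] // 1000)
    return np.concatenate([trimmed, pad])
```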
  • Patent number: 11043215
Abstract: A method and a system for generating a textual representation of a user spoken utterance are disclosed. The method comprises receiving an indication of the user spoken utterance; generating at least two hypotheses; generating, by the electronic device, from the at least two hypotheses a set of paired hypotheses, a given one of the set of paired hypotheses including a first hypothesis paired with a second hypothesis; determining, for the given one of the set of paired hypotheses, a pair score; generating a set of utterance features, the set of utterance features being indicative of one or more characteristics associated with the user spoken utterance; ranking the first hypothesis and the second hypothesis based at least on the pair score and the set of utterance features; and in response to the first hypothesis being a highest ranked hypothesis, selecting the first hypothesis as the textual representation of the user spoken utterance.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: June 22, 2021
    Assignee: YANDEX EUROPE AG
    Inventors: Sergey Surenovich Galustyan, Fedor Aleksandrovich Minkin
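The pairwise ranking scheme above can be sketched as a round-robin over hypothesis pairs: each pair gets a score, the winner of each pair collects a point, and the hypothesis with the most wins becomes the transcript. The win-counting aggregation and the example scoring function are assumptions; the patent's ranker also folds in utterance features.

```python
from itertools import combinations

def rank_hypotheses(hyps, pair_score):
    """Rank transcription hypotheses by pairwise comparison:
    pair_score(a, b) > 0 means `a` beats `b`. Return the hypothesis
    with the most pairwise wins as the textual representation."""
    wins = {h: 0 for h in hyps}
    for a, b in combinations(hyps, 2):
        wins[a if pair_score(a, b) > 0 else b] += 1
    return max(hyps, key=wins.get)
```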