Patents Examined by Huyen Vo
  • Patent number: 10228906
    Abstract: An electronic apparatus and a controlling method thereof are provided. The electronic apparatus includes a communicator configured to communicate with an external apparatus including a microphone, a storage configured to store a voice recognition application and a middleware which relays the voice recognition application, and a processor configured to, in response to receiving an initiation signal from the external apparatus through the communicator, control the communicator to generate a microphone activation signal for activating the microphone using the middleware, transmit the generated microphone activation signal to the external apparatus, generate a voice recognition application signal for loading the voice recognition application, and initiate an operation of the voice recognition application.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: March 12, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Jang-ho Jin
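    Example: a minimal Python sketch of the relay flow the abstract describes, in which an initiation signal from the external apparatus triggers a microphone activation signal and then starts the voice recognition application. The class names and message strings below are illustrative assumptions, not taken from the patent.

      # Hedged sketch of the signal relay described in the abstract.
      # All names (Middleware, ExternalApparatus, "MIC_ACTIVATE") are illustrative.

      class ExternalApparatus:
          """Stands in for the remote device that holds the microphone."""
          def __init__(self):
              self.microphone_active = False

          def receive(self, message):
              if message == "MIC_ACTIVATE":
                  self.microphone_active = True
                  print("external apparatus: microphone activated")

      class Middleware:
          """Relays signals between the external apparatus and the voice app."""
          def __init__(self, communicator, voice_app):
              self.communicator = communicator
              self.voice_app = voice_app

          def on_initiation_signal(self):
              # 1. Generate and transmit the microphone activation signal.
              self.communicator.receive("MIC_ACTIVATE")
              # 2. Generate the application signal and start the voice app.
              self.voice_app()

      def voice_recognition_app():
          print("voice recognition application: started")

      if __name__ == "__main__":
          apparatus = ExternalApparatus()
          middleware = Middleware(communicator=apparatus, voice_app=voice_recognition_app)
          middleware.on_initiation_signal()   # simulated initiation signal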
  • Patent number: 10224057
    Abstract: A method to present communications is provided. The method may include obtaining, at a device, a request from a user to play back a stored message that includes audio. In response to obtaining the request, the method may include directing the audio of the message to a transcription system from the device. In these and other embodiments, the transcription system may be configured to generate text that is a transcription of the audio in real-time. The method may further include obtaining, at the device, the text from the transcription system and presenting, by the device, the text generated by the transcription system in real-time. In response to obtaining the text from the transcription system, the method may also include presenting, by the device, the audio such that the text as presented is substantially aligned with the audio.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: March 5, 2019
    Assignee: Sorenson IP Holdings, LLC
    Inventor: Brian Chevrier
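    Example: a toy Python sketch of the alignment idea in the abstract, where transcribed text is revealed roughly in step with playback of the stored message. The TranscriptionSystem class and its per-word timing are stand-ins, not the patent's actual interface.

      # Hedged sketch: present transcribed text roughly aligned with audio playback.
      # TranscriptionSystem and the fixed word timing are illustrative stand-ins.
      import time

      class TranscriptionSystem:
          def transcribe(self, audio_words):
              # Pretend each chunk of audio yields one word of text as it is processed.
              for word in audio_words:
                  yield word

      def play_message(audio_words, words_per_second=2.0):
          transcriber = TranscriptionSystem()
          for word in transcriber.transcribe(audio_words):
              # "Play" the audio chunk and show its transcription together,
              # keeping the presented text substantially aligned with the audio.
              print(f"[audio] {word}  |  [caption] {word}")
              time.sleep(1.0 / words_per_second)

      if __name__ == "__main__":
          play_message(["you", "have", "one", "new", "voice", "message"])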
  • Patent number: 10181321
    Abstract: A portable terminal has a network interface that receives a set of instructions having a sequence of at least one location and audio properties associated with the at least one location from a server. An audio circuit receives audio signals picked up by a microphone and processes the audio signals in a manner defined by the audio properties associated with the at least one location. A speech recognition module receives processed signals from the audio circuit and carries out a speech recognition process thereupon.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: January 15, 2019
    Assignee: VOCOLLECT, INC.
    Inventors: Kurt Charles Miller, Arthur McNair, Vanessa Cassandra Sanchez, Philip E. Russell, Allan Strane
  • Patent number: 10170104
    Abstract: Provided are an electronic device, a method and a training method for natural language processing. The electronic device for natural language processing includes a processor configured to: for each of words obtained by segmenting a sentence in a training data set, obtain an attention parameter representing correlation between the word and each of one or more of the other words in the sentence, where each of the words is represented by a real vector; and train, based on each of the words in the sentence, information on the attention parameter acquired for the word and label information of the sentence in the training data set, a neural network for sentence classification.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: January 1, 2019
    Assignee: SONY CORPORATION
    Inventors: Zhiwei Zhao, Youzheng Wu
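    Example: a compact NumPy sketch of attention-weighted sentence classification as described in the abstract: each word is a real vector, an attention score captures its correlation with the other words, and the attention-pooled representation feeds a classifier. The dot-product scoring and linear classifier are generic choices, not the patent's specific network.

      # Hedged sketch: attention over word vectors for sentence classification.
      import numpy as np

      def attention_pool(word_vectors):
          """word_vectors: (n_words, dim) array of real-vector word representations."""
          scores = word_vectors @ word_vectors.T          # word-to-word correlations
          np.fill_diagonal(scores, 0.0)                   # ignore self-correlation
          word_scores = scores.sum(axis=1)                # one attention score per word
          weights = np.exp(word_scores - word_scores.max())
          weights /= weights.sum()                        # softmax over words
          return weights @ word_vectors                   # attention-weighted sentence vector

      def classify(sentence_vector, class_weights):
          """Linear classifier over the pooled sentence representation."""
          return int(np.argmax(class_weights @ sentence_vector))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          words = rng.normal(size=(5, 8))          # 5 words, 8-dimensional embeddings
          class_weights = rng.normal(size=(3, 8))  # 3 candidate sentence classes
          print("predicted class:", classify(attention_pool(words), class_weights))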
  • Patent number: 10170100
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10170101
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10169319
    Abstract: A dialog performance improvement method, system, and computer program product include computing a plurality of question classes and a confidence score for each of the question classes for a language input of a user, comparing the confidence score to an upper threshold and a lower threshold for each of the question classes to determine which of at least one action to perform, receiving a language feedback from the user for the performed action, and adjusting at least one of the upper threshold and the lower threshold based on the language feedback from the user.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: January 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan Hudson Connell, II, Mishal Dholakia, Shang Qing Guo, Jonathan Lenchner
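    Example: a small Python sketch of the dual-threshold decision the abstract outlines: a question class's confidence is compared to upper and lower thresholds to pick an action, and the thresholds are nudged after user feedback. The action names and the adjustment step are assumptions for illustration.

      # Hedged sketch: upper/lower confidence thresholds select the dialog action,
      # and language feedback from the user adjusts the thresholds.
      def choose_action(confidence, upper, lower):
          if confidence >= upper:
              return "answer_directly"
          if confidence >= lower:
              return "ask_clarifying_question"
          return "say_dont_understand"

      def adjust_thresholds(upper, lower, action, feedback_positive, step=0.02):
          # If a direct answer drew negative feedback, demand more confidence next time;
          # if a clarifying question was unnecessary, relax the upper threshold slightly.
          if action == "answer_directly" and not feedback_positive:
              upper = min(1.0, upper + step)
          elif action == "ask_clarifying_question" and feedback_positive:
              upper = max(lower, upper - step)
          return upper, lower

      if __name__ == "__main__":
          upper, lower = 0.8, 0.4
          action = choose_action(confidence=0.85, upper=upper, lower=lower)
          print("action:", action)
          upper, lower = adjust_thresholds(upper, lower, action, feedback_positive=False)
          print("adjusted thresholds:", upper, lower)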
  • Patent number: 10163436
    Abstract: Systems, methods, and devices for training a Natural Language Understanding (NLU) component of a system using spoken utterances of individuals are described. A server sends a device, such as a speech-controlled device, a signal that causes the device to output audio soliciting content regarding how a user would speak a particular command for execution by a particular application. The device captures spoken audio and sends it to the server. The server performs speech processing on received audio data to parse the audio data into multiple portions. The server then associates a first portion of the audio data with a command indicator and a second portion of the audio data with a content indicator. The associated data is then used to update how the NLU component determines how utterances triggering the command are spoken.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: December 25, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Janet Louise Slifka, Elizabeth Baran
  • Patent number: 10157615
    Abstract: Techniques are described herein for chatbots to achieve greater social grace by tracking users' states and providing corresponding dialog. In various implementations, input may be received from a user at a client device operating a chatbot, e.g., during a first session between the user and the chatbot. The input may be semantically processed to determine a state expressed by the user to the chatbot. An indication of the state expressed by the user may be stored in memory for future use by the chatbot. It may then be determined, e.g., by the chatbot based on various signals, that a second session between the user and the chatbot is underway. In various implementations, as part of the second session, the chatbot may output a statement formed from a plurality of candidate words, phrases, and/or statements based on the stored indication of the state expressed by the user.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: December 18, 2018
    Assignee: GOOGLE LLC
    Inventors: Bryan Horling, David Kogan, Maryam Garrett, Daniel Kunkle, Wan Fen Nicole Quah, Ruijie He, Wangqing Yuan, Wei Chen, Michael Itz
  • Patent number: 10152976
    Abstract: A purchase settlement method is provided. Voice information is acquired. A spoken command indicating a control instruction as to a device is obtained based on the acquired voice information. When the spoken command relates to purchase settlement, speaker information relating to a speaker who has spoken the acquired voice information is identified based on the acquired voice information. It is determined whether or not the identified speaker information is of a speaker permitted to perform purchase settlement by referencing a table in which speaker information of speakers permitted to perform purchase settlement and information necessary for purchase settlement are associated with each other. When it is determined that the identified speaker information is of the speaker permitted to perform purchase settlement, purchase settlement processing is performed using the spoken command and the information necessary for purchase settlement.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: December 11, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventor: Mariko Yamada
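    Example: a minimal Python sketch of the permission check the abstract describes: the identified speaker is looked up in a table associating permitted speakers with their settlement information before the spoken purchase command is executed. Table contents and field names are illustrative placeholders.

      # Hedged sketch: a table of speakers permitted to perform purchase settlement,
      # keyed by speaker identity.  Names and payment fields are placeholders.
      PERMITTED_SPEAKERS = {
          "speaker_001": {"payment_token": "tok_xxxx", "shipping_profile": "home"},
      }

      def settle_purchase(speaker_id, spoken_command):
          settlement_info = PERMITTED_SPEAKERS.get(speaker_id)
          if settlement_info is None:
              return "purchase refused: speaker not permitted"
          # Perform settlement using the spoken command plus the stored information.
          return f"settling '{spoken_command}' with token {settlement_info['payment_token']}"

      if __name__ == "__main__":
          print(settle_purchase("speaker_001", "buy more detergent"))
          print(settle_purchase("speaker_999", "buy more detergent"))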
  • Patent number: 10147429
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying a user in a multi-user environment. One of the methods includes receiving, by a first user device, an audio signal encoding an utterance, obtaining, by the first user device, a first speaker model for a first user of the first user device, obtaining, by the first user device for a second user of a second user device that is co-located with the first user device, a second speaker model for the second user or a second score that indicates a respective likelihood that the utterance was spoken by the second user, and determining, by the first user device, that the utterance was spoken by the first user using (i) the first speaker model and the second speaker model or (ii) the first speaker model and the second score.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: December 4, 2018
    Assignee: Google LLC
    Inventors: Raziel Alvarez Guevara, Othar Hansson
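    Example: a toy Python sketch of the decision in the abstract: the first device scores the utterance against its own user's speaker model and against either the co-located user's model or a score supplied for that user, then attributes the utterance to whichever scores higher. The cosine-similarity "model" is a stand-in for a real speaker-verification model.

      # Hedged sketch: compare an utterance against two speaker models (or one model
      # plus a remote score) and pick the likelier speaker.
      import numpy as np

      def speaker_score(utterance_embedding, speaker_model):
          """Cosine similarity between an utterance embedding and a speaker model."""
          num = float(utterance_embedding @ speaker_model)
          den = np.linalg.norm(utterance_embedding) * np.linalg.norm(speaker_model)
          return num / den

      def spoken_by_first_user(utterance, first_model, second_model=None, second_score=None):
          first = speaker_score(utterance, first_model)
          second = second_score if second_score is not None else speaker_score(utterance, second_model)
          return first >= second

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          first_model = rng.normal(size=16)
          second_model = rng.normal(size=16)
          utterance = first_model + 0.1 * rng.normal(size=16)   # resembles the first user
          print("first user spoke:", spoken_by_first_user(utterance, first_model, second_model))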
  • Patent number: 10140977
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating additional training data for a natural language understanding engine. One of the methods includes: obtaining data identifying (i) a first input conversational turn and (ii) a first annotation, determining that the first annotation accurately characterized the first input conversational turn, determining that the natural language understanding engine is likely to generate inaccurate annotations of other conversational turns that are similar to the first input conversational turn, in response to the determining, obtaining one or more first paraphrases of the first input conversational turn; and generating, for each of the one or more first paraphrases, a respective first training example that identifies the first annotation as the correct annotation for the first paraphrase; and training the natural language understanding engine on at least the first training examples.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: November 27, 2018
    Assignee: botbotbotbot Inc.
    Inventors: Antoine Raux, Yi Ma
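    Example: a small Python sketch of the augmentation loop in the abstract: when an annotation is judged accurate but the engine looks likely to mishandle similar turns, paraphrases of the turn are collected and each is paired with the same annotation as a new training example. get_paraphrases() and likely_to_err() are illustrative placeholders for whatever the real system uses.

      # Hedged sketch: turn one well-annotated conversational turn plus its paraphrases
      # into additional training examples for the NLU engine.
      def get_paraphrases(turn):
          # Placeholder: a real system might crowdsource or automatically generate these.
          return [f"could you {turn}", f"please {turn}"]

      def likely_to_err(turn):
          # Placeholder heuristic for "the engine will probably mis-annotate similar turns".
          return len(turn.split()) <= 4

      def build_training_examples(turn, annotation):
          examples = []
          if likely_to_err(turn):
              for paraphrase in get_paraphrases(turn):
                  examples.append({"text": paraphrase, "annotation": annotation})
          return examples

      if __name__ == "__main__":
          for example in build_training_examples("book a table", {"intent": "make_reservation"}):
              print(example)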
  • Patent number: 10127920
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for adjusting acoustic parameters. In one aspect, a method includes receiving an identifier associated with an enclosure for a computing device, transmitting data identifying the identifier associated with the enclosure for the computing device, and receiving one or more physical parameters of the enclosure for the computing device. The method also includes based on the one or more physical parameters of the enclosure for the computing device, determining, one or more acoustic parameter adjustments of the computing device in the enclosure, the one or more acoustic parameter adjustments being configured to preserve one or more acoustic characteristics of the computing device out of the enclosure while the computing device is in the enclosure, and based on the one or more acoustic parameter adjustments, adjusting the one or more acoustic parameters of the computing device.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventors: Jean-Michel Trivi, Moonseok Kim
  • Patent number: 10114814
    Abstract: A system and method for processing and actionizing structured and unstructured patient experience data is disclosed herein. In some embodiments, a system may include a natural language processing (NLP) engine configured to transform a data set into a plurality of concepts within a plurality of distinct contexts, and a data mining engine configured to process the relationships of the concepts and to identify associations and correlations in the data set. In some embodiments, the method may include the steps of receiving a data set, scanning the data set with an NLP engine to identify a plurality of concepts within a plurality of distinct contexts, and identifying patterns in the relationships between the plurality of concepts.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: October 30, 2018
    Assignee: NARRATIVEDX, INC.
    Inventors: Kyle Robertson, Taylor Turpen
  • Patent number: 10109295
    Abstract: An upper limit of a frequency range of audio indicated by input audio data is detected. A representative point extraction unit downsamples the input audio data to a sampling rate set to be less than or equal to twice the detected upper limit to obtain representative-point audio data. An interpolation processing unit upsamples the representative-point audio data by using a fractal interpolation function (FIF) that uses a mapping function calculated by a mapping function calculation unit, while using the input audio data, if necessary, to generate high-frequency interpolated audio data.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: October 23, 2018
    Assignee: Alpine Electronics, Inc.
    Inventor: Ryosuke Tachi
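    Example: a stripped-down NumPy sketch of the rate selection in the abstract: estimate the upper limit of the occupied band, downsample so the reduced rate is at most twice that limit, then interpolate back to the original rate. Plain linear interpolation stands in for the patent's fractal interpolation function (FIF), which is not reproduced here.

      # Hedged sketch: detect the occupied bandwidth, downsample to a rate no more
      # than twice that upper limit, then interpolate back to the original length.
      import numpy as np

      def upper_frequency_limit(x, fs, energy_fraction=0.999):
          """Estimate the highest frequency containing meaningful energy."""
          spectrum = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          cumulative = np.cumsum(spectrum) / spectrum.sum()
          return freqs[np.searchsorted(cumulative, energy_fraction)]

      def downsample_then_interpolate(x, fs):
          f_upper = max(upper_frequency_limit(x, fs), 1.0)
          # Integer decimation factor chosen so the reduced rate is <= 2 * f_upper.
          factor = max(1, int(np.ceil(fs / (2.0 * f_upper))))
          representative = x[::factor]                    # representative-point audio data
          coarse_t = np.arange(len(representative)) * factor
          fine_t = np.arange(len(x))
          # Linear interpolation here; the patent uses a fractal interpolation function.
          return np.interp(fine_t, coarse_t, representative)

      if __name__ == "__main__":
          fs = 48_000
          t = np.arange(fs) / fs
          tone = np.sin(2 * np.pi * 3_000 * t)            # band-limited test signal
          rebuilt = downsample_then_interpolate(tone, fs)
          print("output length matches input:", len(rebuilt) == len(tone))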
  • Patent number: 10089298
    Abstract: At least one computer-mediated communication produced by or received by an author is collected and parsed to identify categories of information within it. The categories of information are processed with at least one analysis to quantify at least one type of information in each category. A first output communication is generated regarding the at least one computer-mediated communication, describing the psychological state, attitudes or characteristics of the author of the communication. A second output communication is generated when a difference between the quantification of at least one type of information for at least one category and a reference for the at least one category is detected involving a psychological state, attitude or characteristic of the author to which a responsive action should be taken.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: October 2, 2018
    Assignee: Stroz Friedberg LLC
    Inventor: Eric D. Shaw
  • Patent number: 10090004
    Abstract: The present invention relates to audio encoding and, more particularly, to a signal classifying method and device, and an audio encoding method and device using the same, which can reduce a delay caused by an encoding mode switching while improving the quality of reconstructed sound. The signal classifying method may comprise the operations of: classifying a current frame into one of a speech signal and a music signal; determining, on the basis of a characteristic parameter obtained from multiple frames, whether a result of the classifying of the current frame includes an error; and correcting the result of the classifying of the current frame in accordance with a result of the determination. By correcting an initial classification result of an audio signal on the basis of a correction parameter, the present invention can determine an optimum coding mode for the characteristic of an audio signal and can prevent frequent coding mode switching between frames.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: October 2, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-hyun Choo, Anton Viktorovich Porov, Konstantin Sergeevich Osipov
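    Example: a simplified Python sketch of the correction step in the abstract: an initial per-frame speech/music decision is revisited using context gathered over several frames, and isolated flips are smoothed away to avoid frequent coding-mode switching. The majority vote below is a generic stand-in for the patent's correction parameter.

      # Hedged sketch: correct an initial per-frame speech/music classification using
      # neighbouring frames, to prevent frequent coding-mode switching between frames.
      def correct_classification(initial_labels, window=2):
          """initial_labels: list of 'speech' / 'music' decisions, one per frame."""
          corrected = []
          for i in range(len(initial_labels)):
              lo, hi = max(0, i - window), min(len(initial_labels), i + window + 1)
              neighbourhood = initial_labels[lo:hi]
              # Treat a frame that disagrees with its neighbourhood as a likely error.
              corrected.append(max(set(neighbourhood), key=neighbourhood.count))
          return corrected

      if __name__ == "__main__":
          frames = ["speech"] * 5 + ["music"] + ["speech"] * 5   # one isolated flip
          print(correct_classification(frames))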
  • Patent number: 10083168
    Abstract: A method, computer system, and a computer program product for altering a written communication based on a dress style associated with a recipient is provided. The present invention may include receiving a plurality of visual data associated with the recipient. The present invention may also include analyzing the received plurality of visual data. The present invention may then include determining the dress style associated with the recipient based on the analyzed plurality of visual data. The present invention may further include retrieving a writing style associated with the recipient from a knowledge base based on the determined dress style. The present invention may also include generating a plurality of writing guidelines based on the retrieved writing style associated with the recipient.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: September 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Joshua H. Armitage, Michael C. Froend, Christine A. Jenkins, Mohammad Zanjani
  • Patent number: 10074361
    Abstract: Provided is a speech recognition apparatus. The apparatus includes a preprocessor configured to extract select frames from all frames of a first speech of a user, and a score calculator configured to calculate an acoustic score of a second speech, made up of the extracted select frames, by using a Deep Neural Network (DNN)-based acoustic model, and to calculate an acoustic score of frames, of the first speech, other than the select frames based on the calculated acoustic score of the second speech.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: September 11, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: In Chul Song
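    Example: a reduced NumPy sketch of the idea in the abstract: run the (expensive) acoustic model on a subset of frames only and derive scores for the remaining frames from nearby scored frames. Uniform frame skipping and linear interpolation are assumptions made for illustration; the abstract does not fix either choice.

      # Hedged sketch: score selected frames with the acoustic model, then fill in
      # the skipped frames from neighbouring selected-frame scores.
      import numpy as np

      def expensive_acoustic_model(frame):
          # Stand-in for a DNN acoustic model: returns a single score per frame.
          return float(np.tanh(frame.sum()))

      def score_all_frames(frames, skip=2):
          selected = list(range(0, len(frames), skip))          # frames actually scored
          selected_scores = [expensive_acoustic_model(frames[i]) for i in selected]
          # Interpolate scores for the frames that were skipped.
          return np.interp(np.arange(len(frames)), selected, selected_scores)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          frames = rng.normal(size=(10, 40))                    # 10 frames of features
          print(np.round(score_all_frames(frames), 3))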
  • Patent number: 10067939
    Abstract: A machine translation method includes converting a source sentence written in a first language to language-independent information using an encoder for the first language, and converting the language-independent information to a target sentence corresponding to the source sentence and written in a second language different from the first language using a decoder for the second language. The encoder for the first language is trained to output language-independent information corresponding to the target sentence in response to an input of the source sentence.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: September 4, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hwidong Na, Hoshik Lee, Young Sang Choi
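    Example: a toy Python sketch of the split described in the abstract: an encoder for the first language maps a sentence to language-independent vectors, and a decoder for the second language maps those vectors to target-language words. The tiny shared embedding table and nearest-neighbour decoding are deliberate simplifications, not the trained neural models the patent describes.

      # Hedged sketch of the encoder/decoder split: encoder -> language-independent
      # vectors -> decoder for the target language.  Vocabularies are illustrative.
      import numpy as np

      CONCEPTS = {"hello": np.array([1.0, 0.0]), "world": np.array([0.0, 1.0])}
      SOURCE_VOCAB = {"hallo": "hello", "welt": "world"}        # e.g. German surface forms
      TARGET_VOCAB = {"hello": "hello", "world": "world"}       # English surface forms

      def encode(source_sentence):
          """Encoder for the first language: sentence -> language-independent vectors."""
          return [CONCEPTS[SOURCE_VOCAB[word]] for word in source_sentence.split()]

      def decode(vectors):
          """Decoder for the second language: vectors -> target-language words."""
          words = []
          for v in vectors:
              concept = min(CONCEPTS, key=lambda c: np.linalg.norm(CONCEPTS[c] - v))
              words.append(TARGET_VOCAB[concept])
          return " ".join(words)

      if __name__ == "__main__":
          print(decode(encode("hallo welt")))    # -> "hello world"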