Patents Examined by Eric Yen
  • Patent number: 10297249
    Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: May 21, 2019
    Assignee: VB Assets, LLC
    Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
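    For illustration only, a minimal Python sketch of the ranking-and-response idea in the abstract above: candidate intent hypotheses are scored against short-term and long-term shared knowledge, and the reply is worded according to how certain the top hypothesis is. The intents, keyword sets, scoring weights, and thresholds are assumptions, not details taken from the patent.

      # Hypothetical sketch of certainty-ranked intent hypotheses.
      def rank_hypotheses(utterance_terms, short_term, long_term):
          """Score each candidate intent by overlap with shared knowledge."""
          candidates = {"play_music": {"play", "music", "song"},
                        "weather": {"weather", "rain", "forecast"}}
          scored = []
          for intent, keywords in candidates.items():
              overlap = len(utterance_terms & keywords)
              score = overlap + 0.5 * (intent in short_term) + 0.25 * (intent in long_term)
              scored.append((score, intent))
          return sorted(scored, reverse=True)

      def adaptive_response(scored):
          """Word the reply based on the degree of certainty of the best hypothesis."""
          score, intent = scored[0]
          if score >= 2:
              return f"Okay, doing '{intent}' now."       # high certainty: act
          if score >= 1:
              return f"Did you mean '{intent}'?"           # medium: confirm, keep the domain open
          return "Sorry, could you rephrase that?"         # low: invite a corrective utterance

      print(adaptive_response(rank_hypotheses({"play", "song"}, {"play_music"}, set())))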
  • Patent number: 10296075
    Abstract: In an embodiment, an apparatus includes an input circuit coupled to a first power supply with a first voltage level, a power circuit coupled to a second power supply with a second voltage level, and an output driver. The input circuit may receive an input signal, and generate an inverted signal dependent upon the input signal. The power circuit may generate a power signal in response to first values of the input and the inverted signals, wherein a voltage level of the power signal may be dependent upon the second voltage level. The power circuit may also generate a third voltage level on the power signal in response to second values of the input and the inverted signals. The output driver may generate an output signal dependent upon the input signal. The output signal may transition between the voltage level of the power signal and the ground reference level.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: May 21, 2019
    Assignee: Apple Inc.
    Inventors: Zhao Wang, Miles G. Canada
  • Patent number: 10275450
    Abstract: A method and system to identify similar names and addresses from a given data set comprising a plurality of names and addresses. The invention more specifically addresses the challenges faced in Spanish data quality assurance. The name and address data are parsed through a parsing engine to parse the plurality of Spanish names and addresses. The parsed Spanish names and addresses are sent to a probable identification engine to identify probable matches. The combination of the name and address matching processes can be used for assuring data quality for Spanish names and addresses. The Spanish name matching process consists of identifying probable matches and finding similarity percentages between those probable matches. Similarly, the Spanish address matching process consists of identifying probable matches (using criteria such as the same city) and finding similarity percentages between those probable matches. The system includes a parsing engine, a probable identification engine, and a match percentage calculation engine.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: April 30, 2019
    Assignee: Tata Consultancy Services Limited
    Inventors: Ashish Diwan, Nandish Kirtikumar Solanki, Sridhar G. Pattar, Sudhir Kumar
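    A minimal Python sketch of the two-stage matching flow described in the abstract above: probable matches are first blocked on a shared city, then a similarity percentage is computed for the names. The record fields, the 80% threshold, and the use of difflib are assumptions for illustration only.

      # Hypothetical sketch: probable-match identification plus similarity percentage.
      from difflib import SequenceMatcher

      records = [
          {"name": "José García López", "city": "Madrid"},
          {"name": "Jose Garcia Lopez", "city": "Madrid"},
          {"name": "Ana Martínez", "city": "Sevilla"},
      ]

      def similarity_pct(a, b):
          """Similarity of two strings expressed as a percentage."""
          return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def probable_matches(query, min_pct=80.0):
          """Block on the same city, then keep pairs above the similarity threshold."""
          hits = []
          for rec in records:
              if rec["city"] != query["city"]:
                  continue                           # blocking criterion, e.g. same city
              pct = similarity_pct(query["name"], rec["name"])
              if pct >= min_pct:
                  hits.append((rec["name"], round(pct, 1)))
          return hits

      print(probable_matches({"name": "Jose García Lopez", "city": "Madrid"}))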
  • Patent number: 10269344
    Abstract: Provided is a smart home appliance. The smart home appliance includes: a voice input unit collecting a voice; a voice recognition unit recognizing a text corresponding to the voice collected through the voice input unit; a capturing unit collecting an image for detecting a user's face; a memory unit mapping the text recognized by the voice recognition unit to a setting function and storing the mapped information; and a control unit determining whether to perform a voice recognition service on the basis of at least one of image information collected by the capturing unit and voice information collected by the voice input unit.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: April 23, 2019
    Assignee: LG ELECTRONICS INC.
    Inventors: Juwan Lee, Lagyoung Kim, Taedong Shin, Daegeun Seo
  • Patent number: 10269362
    Abstract: According to an aspect of the present invention, a method for reconstructing an audio signal having a baseband portion and a highband portion is disclosed. The method includes obtaining a decoded baseband audio signal by decoding an encoded audio signal and obtaining a plurality of subband signals by filtering the decoded baseband audio signal. The method further includes generating a high-frequency reconstructed signal by copying a number of consecutive subband signals of the plurality of subband signals and obtaining an envelope adjusted high-frequency signal. The method further includes generating a noise component based on a noise parameter. Finally, the method includes adjusting a phase of the high-frequency reconstructed signal and combining the decoded baseband audio signal with the combined high-frequency signal to obtain a time-domain reconstructed audio signal.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: April 23, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Michael M. Truman, Mark S. Vinton
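    The following numpy sketch loosely mimics the reconstruction steps named in the abstract above: low-band spectral content is copied upward, scaled by an envelope, mixed with noise, and appended to the decoded baseband. The array sizes, the simple spectral-copy shortcut, and all parameter names are assumptions rather than the patented method.

      # Hypothetical high-frequency reconstruction sketch (not the patented algorithm).
      import numpy as np

      def reconstruct_highband(baseband_spectrum, envelope, noise_level, copy_len):
          """Copy consecutive low-band bins upward, apply an envelope, and add noise."""
          copied = baseband_spectrum[:copy_len].copy()      # consecutive subband copy
          adjusted = copied * envelope                       # envelope adjustment
          noise = noise_level * np.random.randn(copy_len)    # noise component
          return adjusted + noise

      baseband = np.random.randn(64)                          # stand-in decoded baseband spectrum
      high = reconstruct_highband(baseband, envelope=np.linspace(1.0, 0.2, 32),
                                  noise_level=0.05, copy_len=32)
      full_spectrum = np.concatenate([baseband, high])        # baseband plus reconstructed highband
      print(full_spectrum.shape)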
  • Patent number: 10255905
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating word pronunciations. One of the methods includes determining, by one or more computers, spelling data that indicates the spelling of a word, providing the spelling data as input to a trained recurrent neural network, the trained recurrent neural network being trained to indicate characteristics of word pronunciations based at least on data indicating the spelling of words, receiving output indicating a stress pattern for pronunciation of the word generated by the trained recurrent neural network in response to providing the spelling data as input, using the output of the trained recurrent neural network to generate pronunciation data indicating the stress pattern for a pronunciation of the word, and providing, by the one or more computers, the pronunciation data to a text-to-speech system or an automatic speech recognition system.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: April 9, 2019
    Assignee: Google LLC
    Inventors: Mason Vijay Chua, Kanury Kanishka Rao, Daniel Jacobus Josef van Esch
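    As a small illustration of the final step described above (turning a model-predicted stress pattern into pronunciation data), the sketch below pairs a stress string with the word's syllables. The predict_stress stub and the syllable handling are assumptions, not the patented system, which uses a trained recurrent neural network.

      # Hypothetical sketch: applying a predicted stress pattern to syllables.
      def predict_stress(spelling):
          """Stand-in for a trained recurrent network; returns one digit per syllable
          ('1' = primary stress, '0' = unstressed)."""
          return "10"                                  # e.g. for a two-syllable word

      def pronunciation_data(spelling, syllables):
          stress = predict_stress(spelling)
          assert len(stress) == len(syllables)
          return [(syl, "primary" if s == "1" else "unstressed")
                  for syl, s in zip(syllables, stress)]

      print(pronunciation_data("python", ["py", "thon"]))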
  • Patent number: 10242663
    Abstract: Voice command recognition with dialect translation is disclosed. User voice input can be translated to a standard voice pattern using a dialect translation unit. A control command can then be generated based on the translated user voice input. In certain embodiments, the voice command recognition system with dialect translation can be implemented in a driving apparatus. In those embodiments, various control commands to control the driving apparatus can be generated by a user with a dialect input. The generated voice control commands for the driving apparatus can include starting the driving apparatus, turning on/off A/C unit, controlling the A/C unit, turning on/off entertainment system, controlling the entertainment system, turning on/off certain safety features, turning on/off certain driving features, adjusting seat, adjusting steering wheel, taking a picture of surroundings and/or any other control commands that can control various functions of the driving apparatus.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: March 26, 2019
    Assignee: Thunder Power New Energy Vehicle Development Company Limited
    Inventor: Yong-Syuan Chen
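    A toy Python sketch of the dialect-to-standard mapping described in the abstract above: dialect variants are first normalized to a standard phrase, which is then looked up as a control command for the driving apparatus. The phrase tables and command identifiers are invented for illustration.

      # Hypothetical dialect normalization followed by command lookup.
      DIALECT_TO_STANDARD = {
          "crank the cold air": "turn on air conditioning",
          "pop the tunes on":   "turn on entertainment system",
      }

      STANDARD_TO_COMMAND = {
          "turn on air conditioning":     "AC_ON",
          "turn on entertainment system": "ENTERTAINMENT_ON",
      }

      def voice_to_command(utterance):
          standard = DIALECT_TO_STANDARD.get(utterance.lower(), utterance.lower())
          return STANDARD_TO_COMMAND.get(standard)      # None if unrecognized

      print(voice_to_command("Crank the cold air"))      # -> AC_ON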
  • Patent number: 10242685
    Abstract: An encoding system (400) encodes an N-channel audio signal (X), wherein N ≥ 3, as a single-channel downmix signal (Y) together with dry and wet upmix parameters (C̃, P̃). In a decoding system (200), a decorrelating section (101) outputs, based on the downmix signal, an (N−1)-channel decorrelated signal (Z); a dry upmix section (102) maps the downmix signal linearly in accordance with dry upmix coefficients (C) determined based on the dry upmix parameters; a wet upmix section (103) populates an intermediate matrix based on the wet upmix parameters and knowing that the intermediate matrix belongs to a predefined matrix class, obtains wet upmix coefficients (P) by multiplying the intermediate matrix by a predefined matrix, and maps the decorrelated signal linearly in accordance with the wet upmix coefficients; and a combining section (104) combines outputs from the upmix sections to obtain a reconstructed signal (X̂) corresponding to the signal to be reconstructed.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: March 26, 2019
    Assignee: Dolby International AB
    Inventors: Lars Villemoes, Heidi-Maria Lehtonen, Heiko Purnhagen, Toni Hirvonen
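    The linear-algebra core of the upmix described above can be pictured with numpy as below: a dry contribution C·y plus a wet contribution P·z, where P is obtained by multiplying an intermediate matrix (populated from the wet upmix parameters) by a predefined matrix. Matrix sizes, the multiplication order, and the example values are assumptions for illustration.

      # Hypothetical 3-channel upmix sketch: x_hat = C @ y + P @ z.
      import numpy as np

      N = 3                                            # channels to reconstruct
      y = np.random.randn(1, 128)                      # single-channel downmix (1 x samples)
      z = np.random.randn(N - 1, 128)                  # (N-1)-channel decorrelated signal

      C = np.array([[0.7], [0.5], [0.5]])              # dry upmix coefficients (N x 1)
      intermediate = np.array([[0.3, 0.0],
                               [0.0, 0.3],
                               [0.1, 0.1]])             # populated from wet upmix parameters
      predefined = np.eye(N - 1)                        # predefined matrix for the assumed matrix class
      P = intermediate @ predefined                     # wet upmix coefficients

      x_hat = C @ y + P @ z                             # reconstructed N-channel signal
      print(x_hat.shape)                                # (3, 128)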
  • Patent number: 10243912
    Abstract: A system that incorporates teachings of the present disclosure may include, for example, a server including a controller to receive audio signals and content identification information from a media processor, generate text representing a voice message based on the audio signals, determine an identity of media content based on the content identification information, generate an enhanced message having text and additional content where the additional content is obtained by the controller based on the identity of the media content, and transmit the enhanced message to the media processor for presentation on the display device, where the enhanced message is accessible by one or more communication devices that are associated with a social network and remote from the media processor. Other embodiments are disclosed.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: March 26, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Hisao Chang, Bernard S. Renger
  • Patent number: 10236002
    Abstract: A method and device for decoding a signal. The method for decoding a signal includes: obtaining spectral coefficients of sub-bands from a received bitstream by means of decoding; classifying sub-bands in which the spectral coefficients are located into a sub-band with saturated bit allocation and a sub-band with unsaturated bit allocation; performing noise filling on a spectral coefficient that has not been obtained by means of decoding and is in the sub-band with unsaturated bit allocation, so as to restore the spectral coefficient that has not been obtained by means of decoding; and obtaining a frequency domain signal according to the spectral coefficients obtained by means of decoding and the restored spectral coefficient. Therefore, a sub-band with unsaturated bit allocation in a frequency domain signal may be obtained by classification, thereby improving signal decoding quality.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: March 19, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zexin Liu, Fengyan Qi, Lei Miao
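    A small numpy sketch of the noise-filling idea described in the abstract above: subbands are split into saturated and unsaturated groups by their bit allocation, and coefficients that were not obtained by decoding in the unsaturated subbands are replaced with low-level noise. The thresholds, scaling, and data layout are assumptions.

      # Hypothetical noise filling for subbands with unsaturated bit allocation.
      import numpy as np

      def noise_fill(subband_coeffs, bits_allocated, bits_needed, noise_gain=0.1):
          """subband_coeffs: list of arrays; zeros mark coefficients not decoded."""
          restored = []
          for coeffs, got, need in zip(subband_coeffs, bits_allocated, bits_needed):
              coeffs = coeffs.copy()
              if got < need:                            # unsaturated bit allocation
                  missing = coeffs == 0.0               # coefficients not obtained by decoding
                  coeffs[missing] = noise_gain * np.random.randn(missing.sum())
              restored.append(coeffs)
          return np.concatenate(restored)               # frequency-domain signal

      bands = [np.array([1.0, -0.5, 0.0, 0.0]), np.array([0.8, 0.2, 0.1, 0.3])]
      print(noise_fill(bands, bits_allocated=[6, 12], bits_needed=[10, 12]))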
  • Patent number: 10235996
    Abstract: A system and method for providing a voice assistant including receiving, at a first device, a first audio input from a user requesting a first action; performing automatic speech recognition on the first audio input; obtaining a context of the user; performing natural language understanding based on the speech recognition of the first audio input; and taking the first action based on the context of the user and the natural language understanding.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: March 19, 2019
    Assignee: Xbrain, Inc.
    Inventors: Gregory Renard, Mathias Herbaux
  • Patent number: 10236001
    Abstract: Techniques for passive enrollment of a user in a speaker identification (ID) device are provided. One technique includes: parsing, by a processor of the speaker ID device, a speech sample, spoken by the user, into a keyword phrase sample and a command phrase sample; identifying, by a text-dependent speaker ID circuit of the speaker ID device, the user as the speaker of the keyword phrase sample; associating the command phrase sample with the identified user; determining if the command phrase sample in conjunction with one or more earlier command phrase samples associated with the user is sufficient command phrase sampling to enroll the user in a text-independent speaker ID circuit of the speaker ID device; and enrolling the user in the text-independent speaker ID circuit using the command phrase samples associated with the user after determining there is sufficient command phrase sampling to enroll the user.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: March 19, 2019
    Assignee: INTEL CORPORATION
    Inventor: David Pearce
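    The enrollment flow in the abstract above can be pictured roughly as below: each utterance is split into a keyword phrase and a command phrase, the keyword part identifies the speaker with a text-dependent model, and command phrases accumulate per user until there is enough audio to enroll a text-independent model. The class names, duration threshold, and interfaces of the two speaker ID objects are assumptions.

      # Hypothetical passive-enrollment sketch for a speaker ID pipeline.
      ENROLL_SECONDS = 20.0                             # assumed sufficiency threshold

      class PassiveEnroller:
          def __init__(self, text_dependent_id, text_independent_id):
              self.td = text_dependent_id               # keyword-phrase (text-dependent) speaker ID
              self.ti = text_independent_id             # text-independent speaker ID
              self.samples = {}                         # user -> accumulated command-phrase audio

          def on_utterance(self, keyword_audio, command_audio, command_seconds):
              user = self.td.identify(keyword_audio)    # who spoke the keyword phrase?
              bucket = self.samples.setdefault(user, [])
              bucket.append((command_audio, command_seconds))   # associate command phrase with user
              total = sum(sec for _, sec in bucket)
              if total >= ENROLL_SECONDS and not self.ti.is_enrolled(user):
                  self.ti.enroll(user, [audio for audio, _ in bucket])   # sufficient sampling: enroll
              return user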
  • Patent number: 10235993
    Abstract: An input signal may be classified by computing correlations between feature vectors of the input signal and feature vectors of reference signals, wherein the reference signals correspond to a class. The feature vectors of the input signal and/or the reference signals may be segmented to identify portions of the signals before performing the correlations. Multiple correlations of the segments may be combined to produce a segment score corresponding to a segment. The signal may then be classified using multiple segment scores, for example by comparing a combination of the segment scores to a threshold.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: March 19, 2019
    Assignee: Friday Harbor LLC
    Inventors: David Carlson Bradley, Sean Michael O'Connor, Yao Huang Morin, Ellisha Natalie Marongelli
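    A compact numpy sketch of the correlation-and-segment-score idea described above: each input segment is correlated against reference feature vectors for a class, the correlations for a segment are combined into a segment score, and the combined segment scores are compared to a threshold. The normalization, the max/mean combinations, and the threshold value are assumptions.

      # Hypothetical segment-correlation classifier sketch.
      import numpy as np

      def normalized_corr(a, b):
          """Correlation between two feature vectors, scaled to roughly [-1, 1]."""
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return float(np.dot(a, b) / len(a))

      def classify(input_segments, reference_vectors, threshold=0.5):
          segment_scores = []
          for seg in input_segments:
              corrs = [normalized_corr(seg, ref) for ref in reference_vectors]
              segment_scores.append(max(corrs))              # combine correlations per segment
          return float(np.mean(segment_scores)) >= threshold # combine segment scores, then threshold

      refs = [np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32))]
      segs = [np.sin(np.linspace(0, 3, 32)) + 0.05 * np.random.randn(32)]
      print(classify(segs, refs))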
  • Patent number: 10229678
    Abstract: A remote device has an associated natural language description that includes a record of commands supported by the remote device. This record of commands includes command names, the command functions to which those names correspond, and natural language strings that are the natural language words or phrases that correspond to the command. A computing device includes a device control module that obtains the natural language description for the remote device and provides the natural language strings to a natural language assistant on the computing device. The natural language assistant monitors the natural language inputs to the computing device, and notifies the device control module when a natural language input matches one of the natural language strings. The device control module uses the natural language description to determine the command name that corresponds to the matching natural language string, and communicates the command name to the remote device.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: March 12, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Justin A. Hutchings
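    The lookup described in the abstract above can be illustrated with a small dictionary-based sketch: the natural language description lists command names, their functions, and natural language strings, and when an input matches one of the strings the corresponding command name is returned for transmission to the remote device. The record format and field names are invented for illustration.

      # Hypothetical sketch of a device-control record and lookup.
      DEVICE_DESCRIPTION = {
          "commands": [
              {"name": "LIGHT_ON",  "function": "power(true)",
               "strings": ["turn on the light", "lights on"]},
              {"name": "LIGHT_OFF", "function": "power(false)",
               "strings": ["turn off the light", "lights off"]},
          ]
      }

      def handle_input(natural_language_input):
          text = natural_language_input.lower().strip()
          for command in DEVICE_DESCRIPTION["commands"]:
              if text in command["strings"]:           # the assistant reported a string match
                  return command["name"]               # command name to send to the remote device
          return None

      print(handle_input("Turn on the light"))          # -> LIGHT_ON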
  • Patent number: 10229668
    Abstract: Methods and systems are described in which spoken voice prompts can be produced in a manner such that they will most likely have the desired effect, for example to indicate empathy, or produce a desired follow-up action from a call recipient. The prompts can be produced with specific optimized speech parameters, including duration, gender of speaker, and pitch, so as to encourage participation and promote comprehension among a wide range of patients or listeners. Upon hearing such voice prompts, patients/listeners can know immediately when they are being asked questions that they are expected to answer, and when they are being given information, as well as the information that is considered sensitive.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: March 12, 2019
    Assignee: Eliza Corporation
    Inventors: Lisa Lavoie, Lucas Merrow, Alexandra Drane, Frank Rizzo, Ivy Krull
  • Patent number: 10229114
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual language translation. In one aspect, a method includes the actions of receiving a first text string. The actions further include generating a first translation of the first text string. The actions further include providing, for output, the first translation of the first text string. The actions further include receiving a second text string. The actions further include generating a combined text string by combining the first text string and the second text string. The actions further include generating a second translation of the combined text string. The actions further include providing, for output, a portion of the second translation that includes a translation of the second text string without providing, for output, a portion of the second translation that includes a translation of the first text string.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: March 12, 2019
    Assignee: Google LLC
    Inventor: Tal Cohen
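    A rough Python sketch of the combine-then-translate idea above: the second string is translated in the context of the first by translating their concatenation and then returning only the trailing portion. The translate() stub and the prefix-stripping heuristic are assumptions; the patent does not specify how the portion is isolated.

      # Hypothetical contextual translation sketch with a stub translator.
      def translate(text):
          """Stand-in for a real translation service (toy English->Spanish table)."""
          table = {"How are you?": "¿Cómo estás?",
                   "How are you? Fine, thanks.": "¿Cómo estás? Bien, gracias."}
          return table.get(text, text)

      def contextual_translation(first, second):
          first_translation = translate(first)
          combined_translation = translate(first + " " + second)
          if combined_translation.startswith(first_translation):
              return combined_translation[len(first_translation):].strip()  # only the second part
          return translate(second)                      # fall back to a standalone translation

      print(contextual_translation("How are you?", "Fine, thanks."))         # -> Bien, gracias.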
  • Patent number: 10224026
    Abstract: An electronic device comprising circuitry configured to record sensor data that is obtained from data sources and to retrieve information from the recorded sensor data using concepts that are defined by a user.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: March 5, 2019
    Assignee: SONY CORPORATION
    Inventors: Aurel Bordewieck, Fabien Cardinaux, Wilhelm Hagg, Thomas Kemp, Stefan Uhlich, Fritz Hohl
  • Patent number: 10218327
    Abstract: Various embodiments relate to signal processing and, more particularly, to processing of received speech signals to preserve and enhance speech intelligibility. In one embodiment, a communications apparatus includes a receiving path over which received speech signals traverse in an audio stream, and a dynamic audio enhancement device disposed in the receiving path. The dynamic audio enhancement (“DAE”) device is configured to modify an amount of volume and an amount of equalization of the audio stream. The DAE device can include a noise level estimator (“NLE”) configured to generate a signal representing a noise level estimate. The noise level estimator can include a non-stationary noise detector and a stationary noise detector. The noise level estimator can be configured to generate the signal representing a first noise level estimate based on detection of the non-stationary noise or a second noise level estimate based on detection of the stationary noise.
    Type: Grant
    Filed: January 9, 2012
    Date of Patent: February 26, 2019
    Inventor: Zhinian Jing
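    As a very rough picture of the two-detector noise estimate described above, the sketch below computes short-term frame energies and treats low variance across frames as stationary noise and high variance as non-stationary noise; the frame size, variance ratio, and percentile choice are assumptions, not the patented detectors.

      # Hypothetical stationary / non-stationary noise level estimator sketch.
      import numpy as np

      def noise_level_estimate(samples, frame_len=256, var_threshold=0.5):
          frames = samples[: len(samples) // frame_len * frame_len].reshape(-1, frame_len)
          energies = (frames ** 2).mean(axis=1)               # short-term frame energies
          if energies.var() / (energies.mean() + 1e-9) < var_threshold:
              return "stationary", float(energies.mean())     # steady background noise
          return "non-stationary", float(np.percentile(energies, 90))   # bursty noise

      print(noise_level_estimate(0.1 * np.random.randn(4096)))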
  • Patent number: 10217453
    Abstract: A speech-enabled dialog system responds to a plurality of wake-up phrases. Based on which wake-up phrase is detected, the system's configuration is modified accordingly. Various configurable aspects of the system include selection and morphing of a text-to-speech voice; configuration of acoustic model, language model, vocabulary, and grammar; configuration of a graphic animation; configuration of virtual assistant personality parameters; invocation of a particular user profile; invocation of an authentication function; and configuration of an open sound. Configuration depends on a target market segment. Configuration also depends on the state of the dialog system, such as whether a previous utterance was an information query.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: February 26, 2019
    Assignee: SoundHound, Inc.
    Inventors: Mark Stevans, Monika Almudafar-Depeyrot, Keyvan Mohajer
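    The phrase-dependent configuration described above can be pictured as a simple lookup from the detected wake-up phrase to a bundle of settings (TTS voice, vocabulary, persona, open sound), adjusted by the dialog state, as in the sketch below; the configuration fields and values are invented for illustration.

      # Hypothetical wake-up-phrase to configuration mapping.
      WAKE_PHRASE_CONFIG = {
          "hey chef": {"tts_voice": "warm_female", "vocabulary": "cooking",
                       "persona": "playful", "open_sound": "chime_a"},
          "ok garage": {"tts_voice": "neutral_male", "vocabulary": "automotive",
                        "persona": "terse", "open_sound": "chime_b"},
      }

      def configure_for_wake_phrase(detected_phrase, dialog_state):
          config = dict(WAKE_PHRASE_CONFIG.get(detected_phrase, {}))
          if dialog_state.get("previous_was_query"):        # state-dependent adjustment
              config["open_sound"] = None                   # e.g. skip the open sound mid-dialog
          return config

      print(configure_for_wake_phrase("hey chef", {"previous_was_query": True}))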
  • Patent number: 10210153
    Abstract: A language processing system for text normalization of an input string of a semiotic class. In an aspect, a method includes receiving an input string; accessing, for a semiotic class of non-standard words, a language universal covering grammar for a plurality of languages that generates, for each language of the plurality of languages, one or more sequences of word-level components for each instance of the semiotic class in the language; for each of the plurality of languages, accessing a lexical map that is specific to the language and that maps each sequence of word-level components for each instance of the semiotic class in the language to verbalizations in the language; generating, from the language universal grammar and the lexical maps, a lattice of possible verbalizations of the input string; and selecting one of the possible verbalizations as a selected verbalization for the input string.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: Richard Sproat, Ke Wu, Kyle Gorman
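    A toy sketch of the universal-grammar-plus-lexical-map split described above, using a date as the semiotic class: a language-independent step yields sequences of word-level components, and a per-language lexical map turns each component into words, producing a small lattice of possible verbalizations from which one would then be selected. The component inventory and the two lexical maps are invented for illustration.

      # Hypothetical verbalization-lattice sketch for a date semiotic class.
      from itertools import product

      def universal_component_sequences():
          """Language-universal covering grammar: possible orderings of word-level components."""
          return [["MONTH", "DAY"], ["DAY", "MONTH"]]

      LEXICAL_MAPS = {
          "en": {"MONTH": {3: ["March"]}, "DAY": {12: ["twelfth", "the twelfth"]}},
          "ru": {"MONTH": {3: ["marta"]}, "DAY": {12: ["dvenadtsatoye"]}},
      }

      def verbalization_lattice(lang, month, day):
          lex = LEXICAL_MAPS[lang]
          lattice = []
          for seq in universal_component_sequences():
              options = [lex[c][month if c == "MONTH" else day] for c in seq]
              lattice.extend(" ".join(words) for words in product(*options))
          return lattice

      print(verbalization_lattice("en", 3, 12))    # one verbalization would then be selected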