Patents Examined by Anne Thomas-Homescu
  • Patent number: 9953630
    Abstract: A computing device reduces the complexity of setting a preferred language on the computing device based on verbal communications with a user. The device may detect when a user is having difficulty navigating the device in the current language and may detect the language spoken by the user in order to change a language setting. The computing device may cross-reference other information associated with the user, such as other applications or content, when selecting a preferred language.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: April 24, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Colleen Maree Aubrey, Jeffrey Penrod Adams
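    Illustrative sketch (not the patented implementation): a minimal Python decision rule for the cross-referencing step described in the abstract, assuming a hypothetical upstream spoken-language detector supplies a detected language and a confidence score, and that the languages of the user's installed apps or content are available as a list.
      from collections import Counter

      def choose_preferred_language(detected_lang, confidence, current_lang,
                                    content_langs, threshold=0.8):
          # Keep the current setting if the detected language is already in use.
          if detected_lang == current_lang:
              return current_lang
          evidence = Counter(content_langs)            # languages of the user's apps/content
          support = evidence[detected_lang] / max(1, sum(evidence.values()))
          # Change the setting only on a confident detection or corroborating content.
          if confidence >= threshold or support > 0.5:
              return detected_lang
          return current_lang

      # UI is English, the detector hears Spanish with high confidence,
      # and most of the user's content is in Spanish.
      print(choose_preferred_language("es", 0.9, "en", ["es", "es", "en"]))   # -> es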
  • Patent number: 9916295
    Abstract: A method and apparatus to align contexts with text. Multiple versions within separate forms of context are controlled; all contexts are controlled in independent alignment with parts in text. Plain text syllables are synchronized with audio vocalization playback with timings applied in context. Precise synchronization is controlled within a multi-touch tap process. Same-language restatements, translations, linguistic alignment “ties” and tags are controlled in contexts. Depictions and vocalizations of text and parts in text are controlled within contexts and sorted within tiered carousels. Toggle controls quickly access separate contexts. Independent alignments between multiple contexts and parts in text are controlled and dynamically adjusted in real time. Text and contexts in multiple writing systems, styles and sizes are aligned and edited within a WYSIWYG textarea. Context alignment controls are applied within a collaborative social framework.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 13, 2018
    Inventor: Richard Henry Dana Crawford
  • Patent number: 9906641
    Abstract: Provided are a system and method of providing a voice-message call service. A mobile device that performs a call with an external mobile device comprises a control unit configured to obtain text, the text converted from voice data that is exchanged between the mobile device and the external mobile device, during the call between the mobile device and the external mobile device, and obtain input text that is input to the mobile device and provided text that is received from the external mobile device; and a display unit configured to arrange the text, the input text, and the provided text and display the arranged text, input text, and provided text on a screen of the mobile device, during the call between the mobile device and the external mobile device.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: February 27, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hong-chul Kim, Seon-ae Kim, Hyun-jae Shin
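    Illustrative sketch (not the patented implementation): a minimal Python example of arranging the three text streams the abstract names — speech-to-text output, locally entered text, and text received from the other device — into one chronological view. The Message structure and timestamps are assumptions made for the example.
      from dataclasses import dataclass

      @dataclass
      class Message:
          t: float      # timestamp within the call
          source: str   # "voice", "typed", or "remote"
          text: str

      def arrange_call_text(converted, typed, received):
          """Merge speech-to-text output, locally typed text, and text received
          from the other device into one chronologically ordered view."""
          return sorted(converted + typed + received, key=lambda m: m.t)

      transcript = arrange_call_text(
          [Message(1.0, "voice", "Hi, can you hear me?")],
          [Message(2.5, "typed", "Sending the address now")],
          [Message(2.0, "remote", "Yes, loud and clear")],
      )
      for m in transcript:
          print(f"[{m.t:>4}] {m.source:6}: {m.text}")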
  • Patent number: 9864745
    Abstract: A universal language translator automatically translates a spoken word or phrase between two speakers. The translator can lock onto a speaker and filter out ambient noise so as to be used in noisy environments, and to ensure accuracy of the translation when multiple speakers are present. The translator can also synthesize a speaker's voice into the dialect of the other speaker such that each speaker sounds like they're speaking the language of the other. A dialect detector could automatically select target dialects, either by listening to aspects of each speaker's phrases to sense the dialect or based upon the location of the device.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: January 9, 2018
    Inventor: Reginald Dalce
  • Patent number: 9805721
    Abstract: Techniques for indicating to a voice-controlled device that a user is going to provide a voice command to the device. In response to receiving such an indication, the device may prepare to process an audio signal based on sound captured by a microphone of the device for the purpose of identifying the voice command from the audio signal. For instance, a user may utilize a signaling device that includes a button that, when actuated, sends a signal that is received by the voice-controlled device. In response to receiving the signal, a microphone of the voice-controlled device may capture sound that is proximate to the voice-controlled device and may create an audio signal based on the sound. The voice-controlled device may then analyze the audio signal for a voice command of the user or may provide the audio signal to a remote service for identifying the command.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: October 31, 2017
    Assignee: Amazon Technologies, Inc.
    Inventor: Allan Timothy Lindsay
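    Illustrative sketch (not the patented implementation): the control flow the abstract describes — a signal from a separate button device tells the voice-controlled device to start capturing sound and then to analyze the audio locally or hand it to a remote service. Every function and transport used here is a hypothetical stand-in.
      import queue

      signal_queue = queue.Queue()   # signals from the button device (stand-in transport)

      def on_button_signal():
          signal_queue.put("listen")

      def capture_audio(seconds=3):
          # Placeholder for reading microphone samples after the signal arrives.
          return b"\x00" * 16000 * 2 * seconds

      def handle_signals(recognize_locally, send_to_remote_service):
          while not signal_queue.empty():
              signal_queue.get()
              audio = capture_audio()
              # Either analyze on-device or defer to a remote service.
              command = recognize_locally(audio) or send_to_remote_service(audio)
              print("command:", command)

      on_button_signal()
      handle_signals(lambda a: None, lambda a: "play music")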
  • Patent number: 9792913
    Abstract: The present disclosure provides a voiceprint authentication method and a voiceprint authentication apparatus. The method includes: displaying a first character string to a user, in which the first character string includes a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtaining a speech of the first character string read by the user; obtaining a first voiceprint identity vector of the speech of the first character string; comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: October 17, 2017
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Chao Li, Zhijian Wang
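    Illustrative sketch (not the patented implementation): one common way to compare a test voiceprint identity vector against an enrolled one is cosine similarity against a decision threshold; extracting the identity vectors from speech is outside the scope of this example, and the threshold value is an assumption.
      import numpy as np

      def verify_voiceprint(test_ivector, enrolled_ivector, threshold=0.75):
          """Accept the speaker if the cosine similarity of the two
          voiceprint identity vectors exceeds a decision threshold."""
          a = np.asarray(test_ivector, dtype=float)
          b = np.asarray(enrolled_ivector, dtype=float)
          score = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
          return score >= threshold, score

      accepted, score = verify_voiceprint([0.2, 0.9, 0.1], [0.25, 0.85, 0.05])
      print(accepted, round(score, 3))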
  • Patent number: 9786281
    Abstract: A user profile for a plurality of users may be built for speech recognition purposes and for acting as an agent of the user. In some embodiments, a speech processing device automatically receives an utterance from a user. The utterance may be analyzed using signal processing to identify data associated with the user. The utterance may also be analyzed using speech recognition to identify additional data associated with the user. The identified data may be stored in a profile of the user. Data in the user profile may be used to select an acoustic model and/or a language model for speech recognition or to take actions on behalf of the user.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: October 10, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Jeffrey P. Adams, Stan W. Salvador, Reinhard Kneser
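    Illustrative sketch (not the patented implementation): storing attributes inferred from the audio signal and from the recognized words in a profile, then using that profile to choose an acoustic model and a language model. The profile fields, model names, and selection heuristics are hypothetical.
      def update_profile(profile, signal_features, recognized_text):
          """Fold attributes inferred from the audio signal and from the
          recognized words into the user's profile."""
          profile.setdefault("signal", {}).update(signal_features)
          profile.setdefault("utterances", []).append(recognized_text)
          return profile

      def select_models(profile):
          """Pick an acoustic model and a language model from profile data."""
          accent = profile.get("signal", {}).get("accent", "generic")
          domain = "shopping" if any("order" in u for u in profile["utterances"]) else "general"
          return f"acoustic-{accent}", f"lm-{domain}"

      profile = update_profile({}, {"accent": "en-GB"}, "order more coffee")
      print(select_models(profile))   # ('acoustic-en-GB', 'lm-shopping')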
  • Patent number: 9785628
    Abstract: An input method editor (IME) is associated with a local user. Memory stores local data, and a processor coupled to the memory is configured to receive input from a local, first user, obtain shared data associated with at least a remote, second user from a remote server, and generate prediction candidates and conversion candidates based on the input provided by the local, first user and the correlation of the input with the obtained shared data.
    Type: Grant
    Filed: September 29, 2011
    Date of Patent: October 10, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dong Li, Xi Chen, Yoshiharu Sato, Keita Ooi
  • Patent number: 9785630
    Abstract: Systems and processes are disclosed for predicting words in a text entry environment. Candidate words and probabilities associated therewith can be determined by combining a word n-gram language model and a unigram language model. Using the word n-gram language model, based on previously entered words, candidate words can be identified and a probability can be calculated for each candidate word. Using the unigram language model, based on a character entered for a new word, candidate words beginning with the character can be identified along with a probability for each candidate word. In some examples, a geometry score related to typing geometry on a virtual keyboard can be included in the unigram probability. The probabilities of the n-gram language model and the unigram language model can be combined, and the candidate word or words having the highest probability can be displayed for a user.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: October 10, 2017
    Assignee: Apple Inc.
    Inventors: Christopher P. Willmore, Nicholas K. Jong, Justin S. Hogg
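    Illustrative sketch (not the patented implementation): one simple way to combine the two models the abstract describes is a log-linear blend of an n-gram probability conditioned on prior words and a unigram probability weighted by a keyboard-geometry score, restricted to candidates matching the typed prefix. The toy probabilities and the neutral geometry score below stand in for trained models.
      import math

      def combined_score(word, prior_words, typed_prefix,
                         ngram_prob, unigram_prob, geometry_score, alpha=0.6):
          """Log-linear blend of an n-gram probability conditioned on prior words
          and a unigram probability weighted by a keyboard-geometry score."""
          if not word.startswith(typed_prefix):
              return float("-inf")
          p_ngram = ngram_prob(word, prior_words)
          p_uni = unigram_prob(word) * geometry_score(word, typed_prefix)
          return alpha * math.log(p_ngram + 1e-12) + (1 - alpha) * math.log(p_uni + 1e-12)

      # Toy models: in practice these would be trained language models.
      ngram = lambda w, ctx: {"sells": 0.4, "shells": 0.3}.get(w, 0.01)
      uni = lambda w: {"sells": 0.02, "shells": 0.015}.get(w, 0.001)
      geo = lambda w, p: 1.0   # neutral geometry score for the example

      candidates = ["sells", "shells", "smells"]
      best = max(candidates, key=lambda w: combined_score(w, ["she"], "s", ngram, uni, geo))
      print(best)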
  • Patent number: 9786271
    Abstract: A method for voice pattern coding and catalog matching. The method includes identifying a set of vocal variables for a user, by a voice recognition system, based, at least in part, on a user interaction with the voice recognition system. The method further includes generating a voice model of speech patterns that represent the speaking of a particular language using the identified set of vocal variables, wherein the voice model is adapted to improve recognition of the user's voice by the voice recognition system. The method further includes matching the generated voice model to a catalog of speech patterns, and identifying a voice model code that represents speech patterns in the catalog that match the generated voice model. The method further includes providing the identified voice model code to the user.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: October 10, 2017
    Assignee: International Business Machines Corporation
    Inventors: Judith M. Combs, Peeyush Jaiswal, Priyansh Jaiswal, David Jaramillo, Annita Tomko
  • Patent number: 9779758
    Abstract: Example methods and systems use multiple sensors to determine whether a speaker is speaking. Audio data in an audio-channel speech band detected by a microphone can be received. Vibration data in a vibration-channel speech band representative of vibrations detected by a sensor other than the microphone can be received. The microphone and the sensor can be associated with a head-mountable device (HMD). It is determined whether the audio data is causally related to the vibration data. If the audio data and the vibration data are causally related, an indication can be generated that the audio data contains HMD-wearer speech. Causally related audio and vibration data can be used to increase accuracy of text transcription of the HMD-wearer speech. If the audio data and the vibration data are not causally related, an indication can be generated that the audio data does not contain HMD-wearer speech.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: October 3, 2017
    Assignee: Google Inc.
    Inventors: Michael Patrick Johnson, Jianchun Dong, Mat Balez
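    Illustrative sketch (not the patented implementation): a simple proxy for the causal test is the peak of the normalized cross-correlation between the microphone signal and the vibration-sensor signal, assuming both have already been band-limited to their speech bands; the threshold is an assumption.
      import numpy as np

      def is_wearer_speech(audio, vibration, threshold=0.5):
          """Label the audio as HMD-wearer speech when the band-limited audio and
          the vibration signal are strongly correlated at some lag."""
          a = (audio - audio.mean()) / (audio.std() + 1e-9)
          v = (vibration - vibration.mean()) / (vibration.std() + 1e-9)
          xcorr = np.correlate(a, v, mode="full") / len(a)
          return xcorr.max() >= threshold

      t = np.linspace(0, 1, 8000)
      speech = np.sin(2 * np.pi * 200 * t)
      print(is_wearer_speech(speech, 0.5 * speech + 0.05 * np.random.randn(t.size)))  # True
      print(is_wearer_speech(speech, np.random.randn(t.size)))                        # False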
  • Patent number: 9781262
    Abstract: Methods and apparatus for voice-enabling a web application, wherein the web application includes one or more web pages rendered by a web browser on a computer. At least one information source external to the web application is queried to determine whether information describing a set of one or more supported voice interactions for the web application is available, and in response to determining that the information is available, the information is retrieved from the at least one information source. Voice input for the web application is then enabled based on the retrieved information.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: October 3, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Christopher Hardy, David E. Reich
  • Patent number: 9691407
    Abstract: A noise reduction apparatus according to the present invention includes: a sudden sound information storage unit that stores an input signal that is input before a current input signal is input as sudden sound information, the input signal having a signal level of voice components equal to or smaller than a predetermined threshold and including a sudden sound to be suppressed; a phase difference calculation unit that calculates a phase difference between the sudden sound information and a sudden sound in the current input signal based on a maximum value of a correlation value between the sudden sound information and the current input signal; an addition signal generation unit that shifts a phase of the sudden sound information based on the phase difference to generate an addition signal; and a sudden sound suppression unit that adds the addition signal and the current input signal to output an output signal.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: June 27, 2017
    Assignee: JVC KENWOOD CORPORATION
    Inventors: Keisuke Oda, Takaaki Yamabe
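    Illustrative sketch (not the patented implementation): align a stored sudden-sound template to the current frame at the lag of maximum correlation, then add an inverted copy at that position so the sum suppresses the sudden sound. Treating the phase shift as an integer sample lag and inverting the sign of the addition signal are simplifying assumptions.
      import numpy as np

      def suppress_sudden_sound(frame, stored_sudden):
          """Align a stored sudden-sound template to the current frame at the lag of
          maximum correlation, invert it, and add it to suppress the sudden sound."""
          xcorr = np.correlate(frame, stored_sudden, mode="full")
          lag = int(np.argmax(xcorr)) - (len(stored_sudden) - 1)
          addition = np.zeros_like(frame)
          start, end = max(lag, 0), min(lag + len(stored_sudden), len(frame))
          addition[start:end] = -stored_sudden[start - lag:end - lag]
          return frame + addition

      click = np.array([0.0, 1.0, -1.0, 0.5, 0.0])
      frame = np.zeros(20); frame[7:12] += click
      print(np.abs(suppress_sudden_sound(frame, click)).max())  # ~0: the click is cancelled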
  • Patent number: 9575960
    Abstract: One or more words at a specified location in an electronic document can be identified. The identified one or more words can be analyzed to determine one or more semantic meanings associated with the words. An audio clip (i.e., audio file, audio element) associated with or corresponding to (the semantic meaning(s) of) the one or more words can be searched for in an audio database. The search for the audio clip associated with the one or more words can utilize an index that specifies the associations between words and audio clips. In some embodiments, the audio clip can be played when an estimated location of where the user is reading is at or near the specified location of the one or more words. In some embodiments, the audio clip can be played when it is calculated that the user is reading the one or more words at the specified location.
    Type: Grant
    Filed: September 17, 2012
    Date of Patent: February 21, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: David M. Lerner, Brandon J. Smith, Jon Robert Ducrou, Erik J. Miller, Marcus A. Barry, Kenneth O. Sanders, II
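    Illustrative sketch (not the patented implementation): a word-to-clip index plus a proximity check between a clip's tagged location and an estimated reading location. The file names, window size, and reading-position estimate are hypothetical, and the abstract also allows the association to go through the words' semantic meaning rather than the literal word.
      audio_index = {            # word/meaning -> audio clip (hypothetical catalogue)
          "thunder": "sfx/thunder.ogg",
          "ocean": "sfx/waves.ogg",
      }

      def clip_for_location(document_words, location):
          """Return the audio clip associated with the word at a document location."""
          word = document_words[location].lower().strip(".,")
          return audio_index.get(word)

      def maybe_play(document_words, tagged_location, estimated_reading_location, window=3):
          """Play the clip once the reader's estimated position is near the tagged word."""
          if abs(estimated_reading_location - tagged_location) <= window:
              clip = clip_for_location(document_words, tagged_location)
              if clip:
                  print("playing", clip)

      words = "The storm rolled in and thunder shook the windows".split()
      maybe_play(words, tagged_location=5, estimated_reading_location=4)  # playing sfx/thunder.ogg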
  • Patent number: 9530167
    Abstract: In one embodiment, a system includes one or more computing systems that implement a social networking environment and is operable to parse users' actions that include free-form text to determine and store, through natural-language processing, objects and affinities contained in the text. The method comprises accessing a text string, identifying objects and affinity declarations via natural-language processing, assessing the combination of objects and context data to determine an instance of a broader concept, and determining an affinity coefficient through a natural-language processing dictionary. Once a database of instances and affinities has been generated and stored, it may be leveraged to push suggestions to members of the social network to enhance their social networking experience.
    Type: Grant
    Filed: August 12, 2011
    Date of Patent: December 27, 2016
    Assignee: Facebook, Inc.
    Inventor: Erick Tseng
  • Patent number: 9530419
    Abstract: A stereophonic signal is converted into a mid channel signal and a side channel signal. Noise is added to the side channel signal. The amount of noise is selected depending on masking thresholds for at least two channels of the stereophonic signal. The mid channel signal and the modified side channel signal are quantized for transmission. Alternatively or in addition, a set of quantization parameters for the quantization of the side channel signal is selected depending on the masking thresholds.
    Type: Grant
    Filed: May 4, 2011
    Date of Patent: December 27, 2016
    Assignee: Nokia Technologies Oy
    Inventors: Miikka Tapani Vilermo, Lasse Juhani Laaksonen
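    Illustrative sketch (not the patented implementation): the usual mid/side conversion is mid = (L + R) / 2 and side = (L - R) / 2; here noise is added to the side channel at a level taken from the smaller of the two channels' masking thresholds. Choosing the minimum of the two thresholds is an assumption — the abstract only says the amount depends on both.
      import numpy as np

      def encode_mid_side(left, right, mask_left, mask_right, rng=np.random.default_rng(0)):
          """Convert L/R to mid/side and add noise to the side channel, with the
          noise level chosen from the masking thresholds of both channels."""
          mid = 0.5 * (left + right)
          side = 0.5 * (left - right)
          noise_level = min(mask_left, mask_right)        # keep noise below both thresholds
          side = side + rng.normal(0.0, noise_level, size=side.shape)
          return mid, side

      left = np.sin(np.linspace(0, 2 * np.pi, 480))
      right = 0.9 * left
      mid, side = encode_mid_side(left, right, mask_left=0.01, mask_right=0.02)
      print(round(float(np.abs(side).mean()), 4))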
  • Patent number: 9524293
    Abstract: A computer-implemented technique can include receiving a machine translation input specifying (i) a source text, (ii) a source language of the source text, and (iii) a target language for the source text, and obtaining a machine translation of the source text from the source language to the target language to obtain a translated source text. The technique can include determining whether to swap the source and target languages based on (i) the source text and (ii) at least one language model, and in response to determining to swap the source and target languages: swapping the source and target languages to obtain modified source and target languages, utilizing the translated source text as a modified source text, obtaining a machine translation of the modified source text from the modified source language to the modified target language to obtain a translated modified source text, and outputting the translated modified source text.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: December 20, 2016
    Assignee: Google Inc.
    Inventors: Alexander Jay Cuthbert, Chao Tian
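    Illustrative sketch (not the patented implementation): the swap decision is approximated by scoring the source text under language models for both languages and swapping when the declared source language fits worse; the translate and language_score functions below are toy stand-ins, not a real machine-translation backend.
      def should_swap(source_text, source_lang, target_lang, language_score):
          """Swap the translation direction when a language model says the source
          text looks more like the target language than the declared source."""
          return language_score(source_text, target_lang) > language_score(source_text, source_lang)

      def translate_with_swap(source_text, source_lang, target_lang, translate, language_score):
          translated = translate(source_text, source_lang, target_lang)
          if should_swap(source_text, source_lang, target_lang, language_score):
              source_lang, target_lang = target_lang, source_lang        # swap direction
              translated = translate(translated, source_lang, target_lang)
          return translated

      # Toy stand-ins for a real MT backend and per-language language models.
      translate = lambda text, s, t: f"[{s}->{t}] {text}"
      score = lambda text, lang: sum(w in {"the", "and"} for w in text.split()) if lang == "en" else 0

      print(translate_with_swap("the cat and the dog", source_lang="fr", target_lang="en",
                                translate=translate, language_score=score))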
  • Patent number: 9485597
    Abstract: A system and method may be configured to process an audio signal. The system and method may track pitch, chirp rate, and/or harmonic envelope across the audio signal, may reconstruct sound represented in the audio signal, and/or may segment or classify the audio signal. A transform may be performed on the audio signal to place the audio signal in a frequency chirp domain that enhances the sound parameter tracking, reconstruction, and/or classification.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: November 1, 2016
    Assignee: KnuEdge Incorporated
    Inventors: David C. Bradley, Daniel S. Goldin, Robert N. Hilton, Nicholas K. Fisher, Rodney Gateau, Derrick R. Roos, Eric Wiewiora
  • Patent number: 9483463
    Abstract: A method, system, and computer program product for extracting text motifs from electronic documents is disclosed. A user provides a largest-maximal repeat or a super-maximal repeat as a first text block. The occurrences of the first text block are detected to identify second text blocks in the vicinity of the occurrences of the first text block on the basis of pre-defined parameters. The text motifs are determined by combining the first text block and the second text blocks. Finally, the text motifs are extracted from the electronic documents.
    Type: Grant
    Filed: September 10, 2012
    Date of Patent: November 1, 2016
    Assignee: Xerox Corporation
    Inventors: Matthias Galle, Jean-Michel Renders
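    Illustrative sketch (not the patented implementation): occurrences of a user-supplied first text block are located, candidate second blocks within a character window after each occurrence are collected, and pairs that recur are kept as motifs. The window size, the token pattern, and the frequency cut-off are assumptions standing in for the abstract's pre-defined parameters.
      import re
      from collections import Counter

      def extract_motifs(documents, first_block, window=30, min_count=2):
          """Find occurrences of the user-supplied first text block, collect candidate
          second blocks occurring within a character window, and keep frequent pairs."""
          pairs = Counter()
          for doc in documents:
              for m in re.finditer(re.escape(first_block), doc):
                  vicinity = doc[m.end():m.end() + window]
                  for second in re.findall(r"[A-Za-z][\w-]+", vicinity):
                      pairs[(first_block, second)] += 1
          return [p for p, n in pairs.items() if n >= min_count]

      docs = ["invoice number 1001 due on Friday",
              "please pay invoice number 1002 due next week"]
      print(extract_motifs(docs, "invoice number"))  # [('invoice number', 'due')]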
  • Patent number: 9473866
    Abstract: A system and method may be configured to analyze audio information derived from an audio signal. The system and method may track sound pitch across the audio signal. The tracking of pitch across the audio signal may take into account change in pitch by determining at individual time sample windows in the signal duration an estimated pitch and a representation of harmonic envelope at the estimated pitch. The estimated pitch and the representation of harmonic envelope may then be implemented to determine an estimated pitch for another time sample window in the signal duration with an enhanced accuracy and/or precision.
    Type: Grant
    Filed: November 25, 2013
    Date of Patent: October 18, 2016
    Assignee: KnuEdge Incorporated
    Inventors: David C. Bradley, Rodney Gateau, Daniel S. Goldin, Robert N. Hilton, Nicholas K. Fisher