Patents by Inventor Johan Schalkwyk
Johan Schalkwyk has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9728185
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for recognizing speech using neural networks. One of the methods includes receiving an audio input; processing the audio input using an acoustic model to generate a respective phoneme score for each of a plurality of phoneme labels; processing one or more of the phoneme scores using an inverse pronunciation model to generate a respective grapheme score for each of a plurality of grapheme labels; and processing one or more of the grapheme scores using a language model to generate a respective text label score for each of a plurality of text labels.
Type: Grant
Filed: May 22, 2015
Date of Patent: August 8, 2017
Assignee: Google Inc.
Inventors: Johan Schalkwyk, Francoise Beaufays, Hasim Sak, John Giannandrea
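The three-stage pipeline this abstract describes (acoustic model, then inverse pronunciation model, then language model) can be sketched as a chain of scoring functions. This is a minimal illustration only, not the patented implementation: the "models" below are hand-written lookup tables standing in for trained components, and all names are hypothetical.

```python
def acoustic_model(audio):
    # Score each phoneme label given the audio (toy: fixed scores).
    return {"k": 0.7, "ae": 0.6, "t": 0.8}

def inverse_pronunciation_model(phoneme_scores):
    # Map phoneme scores to grapheme scores.
    mapping = {"k": "c", "ae": "a", "t": "t"}
    return {mapping[p]: s for p, s in phoneme_scores.items()}

def language_model(grapheme_scores):
    # Score candidate text labels from grapheme evidence.
    score = 1.0
    for g in "cat":
        score *= grapheme_scores.get(g, 0.01)
    return {"cat": score, "dog": 0.001}

def recognize(audio):
    # Chain the three stages: phonemes -> graphemes -> text labels.
    phonemes = acoustic_model(audio)
    graphemes = inverse_pronunciation_model(phonemes)
    return language_model(graphemes)

scores = recognize(b"\x00\x01")
best = max(scores, key=scores.get)
```

The key structural point is that each stage consumes the previous stage's scores rather than re-reading the audio.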
-
Publication number: 20170199665
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Application
Filed: March 29, 2017
Publication date: July 13, 2017
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Patent number: 9678664
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Grant
Filed: April 10, 2015
Date of Patent: June 13, 2017
Assignee: Google Inc.
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
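The abstract's idea of scoring character strings from a "spatial feature" of a gesture can be illustrated with a toy scorer. This sketch is purely hypothetical: a real system would use a trained neural network, whereas here a hand-written comparison of observed versus ideal swipe-path length stands in for it, and the keyboard coordinates are invented.

```python
import math

# Invented key-center coordinates for a few keys on a graphical keyboard.
KEY_CENTERS = {"h": (0.55, 0.5), "i": (0.7, 0.0), "o": (0.85, 0.0)}

def path_length(points):
    # Spatial feature: total length of the swiped path.
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def ideal_length(word):
    # Length of the straight-line path through the word's key centers.
    centers = [KEY_CENTERS[c] for c in word]
    return sum(math.dist(a, b) for a, b in zip(centers, centers[1:]))

def score(word, points):
    # Closer match between observed and ideal path length -> higher score.
    return -abs(path_length(points) - ideal_length(word))

gesture = [(0.55, 0.5), (0.62, 0.25), (0.7, 0.0)]  # swipe from 'h' toward 'i'
best = max(["hi", "ho"], key=lambda w: score(w, gesture))
```

Path length is one concrete example of a feature that "indicates a physical property of the gesture" in the abstract's sense.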
-
Patent number: 9569431
Abstract: The present disclosure describes a teleconferencing system that may use a virtual participant processor to translate language content of the teleconference into each participant's spoken language without additional user inputs. The virtual participant processor may connect to the teleconference as do the other participants. All text or audio data that was previously exchanged directly between the participants may now be intercepted by the virtual participant processor. Upon obtaining a partial or complete language recognition result or making a language preference determination, the virtual participant processor may call a translation engine appropriate for each of the participants. The virtual participant processor may send the resulting translation to a teleconference management processor. The teleconference management processor may deliver the respective translated text or audio data to the appropriate participant.
Type: Grant
Filed: March 21, 2016
Date of Patent: February 14, 2017
Assignee: Google Inc.
Inventors: Jakob David Uszkoreit, Ashish Venugopal, Johan Schalkwyk, Joshua James Estelle
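The flow the abstract describes (intercept an utterance, pick a translation engine per participant's language preference, hand results to a management layer) can be sketched in a few lines. This is an illustrative assumption-laden toy: the "engines" are dictionaries, and all names are hypothetical.

```python
# Toy translation engines keyed by (source, destination) language.
ENGINES = {
    ("en", "fr"): {"hello": "bonjour"},
}

def translate(text, src, dst):
    # Identity for same-language pairs; dictionary lookup otherwise.
    if src == dst:
        return text
    return ENGINES.get((src, dst), {}).get(text, text)

def virtual_participant(utterance, src_lang, participants):
    # participants: {name: preferred_language}. Returns per-participant
    # translations for a management processor to deliver.
    return {name: translate(utterance, src_lang, lang)
            for name, lang in participants.items()}

out = virtual_participant("hello", "en", {"alice": "fr", "bob": "en"})
```

The point of the structure is that participants never call translation themselves; the virtual participant sits in the data path and fans out per-language results.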
-
Patent number: 9536528
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining hotword suitability. In one aspect, a method includes receiving speech data that encodes a candidate hotword spoken by a user, evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, generating a hotword suitability score for the candidate hotword based on evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, and providing a representation of the hotword suitability score for display to the user.
Type: Grant
Filed: August 6, 2012
Date of Patent: January 3, 2017
Assignee: Google Inc.
Inventors: Andrew E. Rubin, Johan Schalkwyk, Maria Carolina Parada San Martin
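A "hotword suitability score from predetermined criteria" can be sketched as a weighted checklist over a transcription. The criteria and weights below are invented for illustration and are not the patent's actual criteria.

```python
COMMON_WORDS = {"okay", "yes", "no", "stop"}

def hotword_suitability(candidate):
    word = candidate.lower()
    score = 0.0
    # Criterion 1: longer hotwords are harder to trigger by accident.
    score += min(len(word), 10) / 10
    # Criterion 2: enough vowels suggests the word is easy to pronounce.
    vowels = sum(c in "aeiou" for c in word)
    score += min(vowels, 4) / 4
    # Criterion 3: penalize collisions with common conversational words.
    if word in COMMON_WORDS:
        score -= 1.0
    return score / 2  # normalize to roughly [0, 1]

good = hotword_suitability("abracadabra")
bad = hotword_suitability("no")
```

A real evaluator would also score the speech data itself (e.g., phonetic confusability), which the abstract allows as an alternative to the transcription.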
-
Patent number: 9495127
Abstract: Methods, computer program products and systems are described for converting speech to text. Sound information is received at a computer server system from an electronic device, where the sound information is from a user of the electronic device. A context identifier indicates a context within which the user provided the sound information. The context identifier is used to select, from among multiple language models, a language model appropriate for the context. Speech in the sound information is converted to text using the selected language model. The text is provided for use by the electronic device.
Type: Grant
Filed: December 22, 2010
Date of Patent: November 15, 2016
Assignee: Google Inc.
Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, Cyril Georges Luc Allauzen
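The selection step (context identifier picks one of several language models before decoding) is easy to show in miniature. In this hypothetical sketch the "language models" are toy phrase tables and "decoding" is a lookup; the context names are invented.

```python
# Toy language models keyed by context: the same audio decodes differently
# depending on where the user was speaking.
LANGUAGE_MODELS = {
    "maps": {"audio-1": "pizza near me"},
    "sms": {"audio-1": "pick up the kids"},
}

def speech_to_text(sound_info, context_id):
    # Select the model appropriate for the context, then "decode".
    model = LANGUAGE_MODELS.get(context_id, LANGUAGE_MODELS["sms"])
    return model.get(sound_info, "")

maps_text = speech_to_text("audio-1", "maps")
sms_text = speech_to_text("audio-1", "sms")
```

The same sound information yields different text purely because the context identifier selected a different model, which is the abstract's core claim.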
-
Patent number: 9483459
Abstract: A system is configured to receive a first string corresponding to an interpretation of a natural-language user voice entry; provide a representation of the first string as feedback to the natural-language user voice entry; receive, based on the feedback, a second string corresponding to a natural-language corrective user entry, where the natural-language corrective user entry may correspond to a correction to the natural-language user voice entry; parse the second string into one or more tokens; determine at least one corrective instruction from the one or more tokens of the second string; generate, from at least a portion of each of the first and second strings and based on the at least one corrective instruction, candidate corrected user entries; select a corrected user entry from the candidate corrected user entries; and output the selected, corrected user entry.
Type: Grant
Filed: March 13, 2013
Date of Patent: November 1, 2016
Assignee: Google Inc.
Inventors: Michael D Riley, Johan Schalkwyk, Cyril Georges Luc Allauzen, Ciprian Ioan Chelba, Edward Oscar Benson
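The tokenize-then-apply-correction loop can be sketched with one invented corrective grammar. This is a hypothetical illustration only: the "<right> not <wrong>" pattern and all names are assumptions, not the patent's grammar.

```python
def apply_correction(first, second):
    # Parse the corrective entry into tokens and look for the invented
    # instruction pattern "<right> not <wrong>".
    tokens = second.split()
    if "not" not in tokens:
        return first  # no corrective instruction found
    i = tokens.index("not")
    right, wrong = tokens[i - 1], tokens[i + 1]
    # Generate candidate corrected entries and select one: prefer the
    # candidate where the correction actually took effect.
    candidates = [first.replace(wrong, right), first]
    return candidates[0] if wrong in first else first

fixed = apply_correction("call jason", "jackson not jason")
```

A production system would score many candidate rewrites rather than applying a single pattern, but the parse/instruct/generate/select stages are the same.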
-
Publication number: 20160299685
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Application
Filed: April 10, 2015
Publication date: October 13, 2016
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Patent number: 9418177
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for processing spoken query terms. In one aspect, a method includes performing speech recognition on an audio signal to select two or more textual, candidate transcriptions that match a spoken query term, and to establish a speech recognition confidence value for each candidate transcription, obtaining a search history for a user who spoke the spoken query term, where the search history references one or more past search queries that have been submitted by the user, generating one or more n-grams from each candidate transcription, where each n-gram is a subsequence of n phonemes, syllables, letters, characters, words or terms from a respective candidate transcription, and determining, for each n-gram, a frequency with which the n-gram occurs in the past search queries, and a weighting value that is based on the respective frequency.
Type: Grant
Filed: August 5, 2013
Date of Patent: August 16, 2016
Assignee: Google Inc.
Inventors: Matthew I. Lloyd, Johan Schalkwyk, Pankaj Risbood
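The abstract's n-gram frequency weighting can be shown concretely: count how often each candidate transcription's n-grams occur in past queries and blend that with recognizer confidence. The blend weight (0.1) and word-level unigrams are illustrative assumptions; the abstract also allows phoneme, syllable, letter, and character n-grams.

```python
from collections import Counter

def ngrams(text, n=1):
    # Word n-grams of a transcription (n=1 gives single words).
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def rerank(candidates, history):
    # candidates: {transcription: recognizer_confidence}
    # history: the user's past search queries.
    counts = Counter(g for q in history for g in ngrams(q))
    scored = {}
    for text, conf in candidates.items():
        freq = sum(counts[g] for g in ngrams(text))
        scored[text] = conf + 0.1 * freq  # frequency-based weighting
    return max(scored, key=scored.get)

best = rerank(
    {"weather in boston": 0.4, "whether in boston": 0.5},
    ["weather today", "boston weather forecast"],
)
```

Here the acoustically preferred "whether" loses to "weather" because the user's history makes the latter's n-grams more frequent.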
-
Publication number: 20160203127
Abstract: The present disclosure describes a teleconferencing system that may use a virtual participant processor to translate language content of the teleconference into each participant's spoken language without additional user inputs. The virtual participant processor may connect to the teleconference as do the other participants. All text or audio data that was previously exchanged directly between the participants may now be intercepted by the virtual participant processor. Upon obtaining a partial or complete language recognition result or making a language preference determination, the virtual participant processor may call a translation engine appropriate for each of the participants. The virtual participant processor may send the resulting translation to a teleconference management processor. The teleconference management processor may deliver the respective translated text or audio data to the appropriate participant.
Type: Application
Filed: March 21, 2016
Publication date: July 14, 2016
Applicant: Google Inc.
Inventors: Jakob David Uszkoreit, Ashish Venugopal, Johan Schalkwyk, Joshua James Estelle
-
Patent number: 9378733
Abstract: Embodiments pertain to automatic speech recognition in mobile devices to establish the presence of a keyword. An audio waveform is received at a mobile device. Front-end feature extraction is performed on the audio waveform, followed by acoustic modeling, high-level feature extraction, and output classification to detect the keyword. Acoustic modeling may use a neural network or a vector quantization dictionary, and high-level feature extraction may use pooling.
Type: Grant
Filed: April 11, 2013
Date of Patent: June 28, 2016
Assignee: Google Inc.
Inventors: Vincent O. Vanhoucke, Oriol Vinyals, Patrick An Phu Nguyen, Maria Carolina Parada San Martin, Johan Schalkwyk
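The four-stage keyword-spotting pipeline (front-end feature extraction, acoustic modeling, high-level feature extraction via pooling, output classification) can be sketched end to end. Every stage below is a toy stand-in for a trained component; the energy feature, threshold, and frame size are invented for illustration.

```python
def frontend_features(waveform, frame=4):
    # Front-end feature extraction: per-frame energy of the samples.
    return [sum(abs(s) for s in waveform[i:i + frame])
            for i in range(0, len(waveform), frame)]

def acoustic_model(features):
    # Acoustic modeling: per-frame keyword score (toy: thresholded energy;
    # a real system would use a neural network or VQ dictionary).
    return [1.0 if f > 2.0 else 0.0 for f in features]

def pool(scores):
    # High-level feature extraction: max pooling over frame scores.
    return max(scores)

def classify(pooled, threshold=0.5):
    # Output classification: keyword present or not.
    return pooled > threshold

def keyword_detected(waveform):
    return classify(pool(acoustic_model(frontend_features(waveform))))

hit = keyword_detected([0.1, 0.2, 2.0, 1.5, 0.1])  # burst of energy
miss = keyword_detected([0.1, 0.1, 0.1, 0.1])
```

Max pooling is one concrete instance of the abstract's "high-level feature extraction may use pooling": a single frame of strong evidence is enough to flag the keyword.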
-
Publication number: 20160133259
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining hotword suitability. In one aspect, a method includes receiving speech data that encodes a candidate hotword spoken by a user, evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, generating a hotword suitability score for the candidate hotword based on evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, and providing a representation of the hotword suitability score for display to the user.
Type: Application
Filed: January 20, 2016
Publication date: May 12, 2016
Inventors: Andrew E. Rubin, Johan Schalkwyk, Maria Carolina Parada San Martin
-
Publication number: 20160132293
Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
Type: Application
Filed: January 5, 2016
Publication date: May 12, 2016
Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. LeBeau
-
Patent number: 9318104
Abstract: Methods and systems for sharing of adapted voice profiles are provided. The method may comprise receiving, at a computing system, one or more speech samples, and the one or more speech samples may include a plurality of spoken utterances. The method may further comprise determining, at the computing system, a voice profile associated with a speaker of the plurality of spoken utterances, and including an adapted voice of the speaker. Still further, the method may comprise receiving, at the computing system, an authorization profile associated with the determined voice profile, and the authorization profile may include one or more user identifiers associated with one or more respective users. Yet still further, the method may comprise the computing system providing the voice profile to at least one computing device associated with the one or more respective users, based at least in part on the authorization profile.
Type: Grant
Filed: July 10, 2015
Date of Patent: April 19, 2016
Assignee: Google Inc.
Inventors: Javier Gonzalvo Fructuoso, Johan Schalkwyk
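The authorization-gated sharing the abstract describes (a voice profile is provided only to users whose identifiers appear in its authorization profile) can be sketched as a small store. All class and field names here are invented for illustration.

```python
class VoiceProfileStore:
    def __init__(self):
        self._profiles = {}    # speaker -> adapted-voice data
        self._authorized = {}  # speaker -> set of authorized user ids

    def add_profile(self, speaker, adapted_voice, authorized_users):
        # Register a voice profile together with its authorization profile.
        self._profiles[speaker] = adapted_voice
        self._authorized[speaker] = set(authorized_users)

    def get_profile(self, speaker, user_id):
        # Provide the profile only if the requesting user is authorized.
        if user_id not in self._authorized.get(speaker, set()):
            return None
        return self._profiles[speaker]

store = VoiceProfileStore()
store.add_profile("alice", {"voice": "alice-v1"}, ["bob"])
allowed = store.get_profile("alice", "bob")
denied = store.get_profile("alice", "eve")
```

The authorization check happens at provisioning time, matching the abstract's framing of providing the profile "based at least in part on the authorization profile".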
-
Patent number: 9311915
Abstract: A processing system receives an audio signal encoding a portion of an utterance. The processing system receives context information associated with the utterance, wherein the context information is not derived from the audio signal or any other audio signal. The processing system provides, as input to a neural network, data corresponding to the audio signal and the context information, and generates a transcription for the utterance based on at least an output of the neural network.
Type: Grant
Filed: September 18, 2013
Date of Patent: April 12, 2016
Assignee: Google Inc.
Inventors: Eugene Weinstein, Pedro J. Mengibar, Johan Schalkwyk
-
Patent number: 9292500
Abstract: The present disclosure describes a teleconferencing system that may use a virtual participant processor to translate language content of the teleconference into each participant's spoken language without additional user inputs. The virtual participant processor may connect to the teleconference as do the other participants. All text or audio data that was previously exchanged directly between the participants may now be intercepted by the virtual participant processor. Upon obtaining a partial or complete language recognition result or making a language preference determination, the virtual participant processor may call a translation engine appropriate for each of the participants. The virtual participant processor may send the resulting translation to a teleconference management processor. The teleconference management processor may deliver the respective translated text or audio data to the appropriate participant.
Type: Grant
Filed: September 15, 2014
Date of Patent: March 22, 2016
Assignee: Google Inc.
Inventors: Jakob David Uszkoreit, Ashish Venugopal, Johan Schalkwyk, Joshua James Estelle
-
Patent number: 9251791
Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
Type: Grant
Filed: June 9, 2014
Date of Patent: February 2, 2016
Assignee: Google Inc.
Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. LeBeau
-
Publication number: 20150340034
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for recognizing speech using neural networks. One of the methods includes receiving an audio input; processing the audio input using an acoustic model to generate a respective phoneme score for each of a plurality of phoneme labels; processing one or more of the phoneme scores using an inverse pronunciation model to generate a respective grapheme score for each of a plurality of grapheme labels; and processing one or more of the grapheme scores using a language model to generate a respective text label score for each of a plurality of text labels.
Type: Application
Filed: May 22, 2015
Publication date: November 26, 2015
Inventors: Johan Schalkwyk, Francoise Beaufays, Hasim Sak, John Giannandrea
-
Patent number: 9190054
Abstract: A data processing apparatus is configured to receive a first string related to a natural-language voice user entry and a second string including at least one natural-language refinement to the user entry; parse the first string into a first set of one or more tokens and the second string into a second set of one or more tokens; determine at least one refining instruction from the second set of one or more tokens; generate, from at least a portion of each of the first string and the second string and based on the at least one refining instruction, a group of candidate refined user entries; select a refined user entry from the group of candidate refined user entries; and output the selected, refined user entry.
Type: Grant
Filed: March 13, 2013
Date of Patent: November 17, 2015
Assignee: Google Inc.
Inventors: Michael D Riley, Johan Schalkwyk, Cyril Georges Luc Allauzen, Ciprian Ioan Chelba, Edward Oscar Benson
-
Publication number: 20150279351
Abstract: Embodiments pertain to automatic speech recognition in mobile devices to establish the presence of a keyword. An audio waveform is received at a mobile device. Front-end feature extraction is performed on the audio waveform, followed by acoustic modeling, high-level feature extraction, and output classification to detect the keyword. Acoustic modeling may use a neural network or Gaussian mixture modeling, and high-level feature extraction may be done by aligning the results of the acoustic modeling with expected event vectors that correspond to a keyword.
Type: Application
Filed: April 11, 2013
Publication date: October 1, 2015
Inventors: Patrick An Phu Nguyen, Maria Carolina Parada San Martin, Johan Schalkwyk