Patents by Inventor Michael J. LeBeau

Michael J. LeBeau has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170084276
    Abstract: Embodiments may be implemented by a computing device, such as a head-mountable display, in order to use a single guard phrase to enable different voice commands in different interface modes. An example device includes an audio sensor and a computing system configured to analyze audio data captured by the audio sensor to detect speech that includes a predefined guard phrase, and to operate in a plurality of different interface modes comprising at least a first and a second interface mode. During operation in the first interface mode, the computing system may initially disable one or more first-mode speech commands, and respond to detection of the guard phrase by enabling the one or more first-mode speech commands. During operation in the second interface mode, the computing system may initially disable a second-mode speech command, and respond to detection of the guard phrase by enabling the second-mode speech command.
    Type: Application
    Filed: December 6, 2016
    Publication date: March 23, 2017
    Inventors: Michael J. LeBeau, Mat Balez
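The guard-phrase mechanism in publication 20170084276 (the same abstract appears under patent 9530410 below) lends itself to a small sketch. This is a minimal illustration: the guard phrase, the per-mode commands, and the class name are invented here, not taken from the patent.

```python
# Minimal sketch of the guard-phrase idea: per-mode speech commands stay
# disabled until the predefined guard phrase is detected in the transcript.
# GUARD_PHRASE and MODE_COMMANDS are invented for illustration.

GUARD_PHRASE = "ok glass"

MODE_COMMANDS = {
    "first_mode": ["take a picture", "record a video"],
    "second_mode": ["send a reply"],
}

class GuardPhraseController:
    def __init__(self, mode):
        self.mode = mode
        self.commands_enabled = False  # commands start disabled in every mode

    def on_transcript(self, text):
        text = text.lower().strip()
        if not self.commands_enabled:
            # Only the guard phrase is acted on while commands are disabled.
            if GUARD_PHRASE in text:
                self.commands_enabled = True
            return None
        for command in MODE_COMMANDS[self.mode]:
            if command in text:
                self.commands_enabled = False  # re-arm the guard for the next command
                return command
        return None

controller = GuardPhraseController("first_mode")
print(controller.on_transcript("take a picture"))  # None: guard phrase not yet heard
print(controller.on_transcript("ok glass"))        # None: guard phrase enables commands
print(controller.on_transcript("take a picture"))  # "take a picture"
```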
  • Patent number: 9600229
    Abstract: A method for receiving processed information at a remote device is described. The method includes transmitting from the remote device a verbal request to a first information provider and receiving a digital message from the first information provider in response to the transmitted verbal request. The digital message includes a symbolic representation indicator associated with a symbolic representation of the verbal request and data used to control an application. The method also includes transmitting, using the application, the symbolic representation indicator to a second information provider for generating results to be displayed on the remote device.
    Type: Grant
    Filed: September 5, 2014
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Gudmundur Hafsteinsson, Michael J. LeBeau, Natalia Marmasse, Sumit Agarwal, Dipchand Nishar
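The two-provider flow in patent 9600229 can be sketched roughly as below. The provider stubs, the field names such as symbolic_indicator, and the example query are assumptions for illustration, not the patent's own terminology.

```python
# Rough sketch of the two-provider flow: the first provider returns a digital
# message carrying an indicator for the symbolic (textual) form of the verbal
# request plus application-control data; the device forwards that indicator to
# a second provider for results. All names and payloads are invented.

def first_provider(verbal_request_audio):
    # Stand-in for the first information provider's speech processing.
    return {
        "symbolic_indicator": "query:weather seattle",
        "app_control": {"open": "search"},
    }

def second_provider(symbolic_indicator):
    # Stand-in for a backend that resolves the indicator into displayable results.
    return ["result for '{}'".format(symbolic_indicator)]

def handle_verbal_request(audio):
    message = first_provider(audio)            # digital message from provider 1
    indicator = message["symbolic_indicator"]  # symbolic representation indicator
    return second_provider(indicator)          # results to display on the device

print(handle_verbal_request(b"\x00\x01"))
```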
  • Publication number: 20170069322
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Application
    Filed: November 14, 2016
    Publication date: March 9, 2017
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
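Publication 20170069322 (and the granted versions 9542932 and 9466287 below, which share this abstract) describes correction via a word lattice. A minimal sketch, assuming a toy lattice with one best hypothesis and a list of alternates per word; the lattice contents and function names are invented.

```python
# Minimal sketch of word-lattice correction: each position keeps the best
# hypothesis plus alternates, and a selected word can be swapped for an
# alternate chosen by the user. The lattice contents are invented.

word_lattice = [
    {"best": "recognize", "alternates": ["wreck a nice"]},
    {"best": "speech",    "alternates": ["beach", "peach"]},
]

def transcript(lattice):
    return " ".join(slot["best"] for slot in lattice)

def replace_word(lattice, position, alternate_index):
    # Swap the selected transcribed word with the user-selected alternate.
    slot = lattice[position]
    slot["best"], slot["alternates"][alternate_index] = (
        slot["alternates"][alternate_index],
        slot["best"],
    )

print(transcript(word_lattice))   # "recognize speech"
replace_word(word_lattice, 1, 0)  # user picks the first alternate for word 2
print(transcript(word_lattice))   # "recognize beach"
```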
  • Patent number: 9582549
    Abstract: A computer-implemented search method includes receiving a registration request from each of one or more computer applications installed on a computing device and registering the applications in response to the request, wherein the registration request indicates an intent by the application to receive search query information from a search application associated with the device. The method also includes receiving user input on the device in the form of a query, providing the query to the one or more registered applications, receiving responses from the one or more registered applications that include data that is managed by the one or more registered applications; integrating the responses into a result set; and presenting the result set with the computing device.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: February 28, 2017
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, Prasenjit Phukan
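The registration-and-fan-out model in patent 9582549 can be sketched as a small in-process registry. The SearchRegistry class and the example handlers are hypothetical; a real implementation would use the platform's inter-application messaging rather than direct callbacks.

```python
# Minimal sketch of search registration and fan-out: applications register to
# receive queries, each returns data it manages, and the responses are merged
# into one result set. The registry class and handlers are invented.

class SearchRegistry:
    def __init__(self):
        self.handlers = []                  # registered application callbacks

    def register(self, handler):
        self.handlers.append(handler)       # app declares intent to receive queries

    def search(self, query):
        results = []
        for handler in self.handlers:
            results.extend(handler(query))  # each app returns data it manages
        return results                      # integrated result set to present

registry = SearchRegistry()
registry.register(lambda q: ["contact matching '{}'".format(q)])  # e.g. a contacts app
registry.register(lambda q: ["song matching '{}'".format(q)])     # e.g. a music app
print(registry.search("anna"))
```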
  • Patent number: 9570077
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving a voice query at a mobile computing device and generating data that represents content of the voice query. The data is provided to a server system. A textual query that has been determined by a speech recognizer at the server system to be a textual form of at least part of the data is received at the mobile computing device. The textual query is determined to include a carrier phrase of one or more words that is reserved by a first third-party application program installed on the computing device. The first third-party application is selected, from a group of one or more third-party applications, to receive all or a part of the textual query. All or a part of the textual query is provided to the selected first application program.
    Type: Grant
    Filed: April 23, 2014
    Date of Patent: February 14, 2017
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
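Patent 9570077 routes a recognized textual query to a third-party application that reserved a leading carrier phrase. A rough sketch, with made-up carrier phrases and package names:

```python
# Rough sketch of carrier-phrase routing: a third-party application reserves a
# leading phrase, and a textual query that starts with it is handed to that
# application. The carrier phrases and package names are invented.

CARRIER_PHRASES = {
    "note to self": "com.example.notes",
    "listen to": "com.example.music",
}

def route_query(textual_query):
    lowered = textual_query.lower()
    for phrase, app in CARRIER_PHRASES.items():
        if lowered.startswith(phrase):
            remainder = textual_query[len(phrase):].strip()
            return app, remainder      # selected app receives the rest of the query
    return None, textual_query         # no carrier phrase: fall back to normal search

print(route_query("note to self buy milk"))  # ('com.example.notes', 'buy milk')
```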
  • Patent number: 9570094
    Abstract: A computer-implemented method of multisensory speech detection is disclosed. The method comprises determining an orientation of a mobile device and determining an operating mode of the mobile device based on the orientation of the mobile device. The method further includes identifying speech detection parameters that specify when speech detection begins or ends based on the determined operating mode and detecting speech from a user of the mobile device based on the speech detection parameters.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: February 14, 2017
    Assignee: Google Inc.
    Inventors: Dave Burke, Michael J. LeBeau, Konrad Gianno, Trausti T. Kristjansson, John Nicholas Jitkoff, Andrew W. Senior
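The orientation-to-parameters chain in patent 9570094 might be sketched as below; the pitch threshold, mode names, and endpointing parameters are all illustrative assumptions.

```python
# Minimal sketch of orientation-driven speech detection: the device orientation
# selects an operating mode, and the mode selects endpointing parameters that
# control when detection begins or ends. Thresholds and parameters are invented.

def operating_mode(pitch_degrees):
    # e.g. device held up to the ear vs. held flat in front of the user
    return "phone_mode" if pitch_degrees > 45 else "handheld_mode"

SPEECH_PARAMS = {
    "phone_mode":    {"start_on": "proximity", "end_silence_ms": 500},
    "handheld_mode": {"start_on": "button",    "end_silence_ms": 1000},
}

def detection_parameters(pitch_degrees):
    return SPEECH_PARAMS[operating_mode(pitch_degrees)]

print(detection_parameters(70))  # parameters for the ear-held posture
print(detection_parameters(10))  # parameters for the handheld posture
```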
  • Patent number: 9542932
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: January 10, 2017
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
  • Patent number: 9530410
    Abstract: Embodiments may be implemented by a computing device, such as a head-mountable display, in order to use a single guard phrase to enable different voice commands in different interface modes. An example device includes an audio sensor and a computing system configured to analyze audio data captured by the audio sensor to detect speech that includes a predefined guard phrase, and to operate in a plurality of different interface modes comprising at least a first and a second interface mode. During operation in the first interface mode, the computing system may initially disable one or more first-mode speech commands, and respond to detection of the guard phrase by enabling the one or more first-mode speech commands. During operation in the second interface mode, the computing system may initially disable a second-mode speech command, and respond to detection of the guard phrase by enabling the second-mode speech command.
    Type: Grant
    Filed: April 9, 2013
    Date of Patent: December 27, 2016
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, Mathieu Balez
  • Publication number: 20160370200
    Abstract: A computer-implemented method includes receiving, at a computer server system, from a computing device that is remote from the server system, a string of text that comprises a search query. The method also includes identifying one or more search results that are responsive to the search query, parsing a document that is a target of one of the one or more results, identifying geographical address information from the parsing, generating a specific geographical indicator corresponding to the one search result, and transmitting, for use by the computing device, data for automatically generating a navigational application having a destination at the specific geographical indicator.
    Type: Application
    Filed: August 31, 2016
    Publication date: December 22, 2016
    Inventors: Michael J. LeBeau, Ole CaveLie, Keith Ito, John Nicholas Jitkoff
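Publication 20160370200 parses a result document for an address and returns data for launching navigation. A simplified sketch, assuming a deliberately naive address regex and an invented payload format:

```python
# Simplified sketch of turning a search result into a navigation destination:
# parse an address out of the target document and emit data the client can use
# to launch a navigation application. The regex is deliberately naive and the
# payload format is invented.

import re

ADDRESS_PATTERN = re.compile(r"\d+\s+[\w\s]+,\s*[\w\s]+,\s*[A-Z]{2}\s+\d{5}")

def navigation_payload(result_document):
    match = ADDRESS_PATTERN.search(result_document)
    if match is None:
        return None
    # Data for the client to auto-populate a navigation app's destination field.
    return {"action": "navigate", "destination": match.group(0)}

doc = "Visit us at 123 Main Street, Mountain View, CA 94043 for a demo."
print(navigation_payload(doc))
```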
  • Publication number: 20160314788
    Abstract: In one implementation, a computer-implemented method includes receiving, at a mobile computing device, ambiguous user input that indicates more than one of a plurality of commands; and determining a current context associated with the mobile computing device that indicates where the mobile computing device is currently located. The method can further include disambiguating the ambiguous user input by selecting a command from the plurality of commands based on the current context associated with the mobile computing device; and causing output associated with performance of the selected command to be provided by the mobile computing device.
    Type: Application
    Filed: July 1, 2016
    Publication date: October 27, 2016
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau
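Publication 20160314788 (the same abstract appears under patent 9401147 below) disambiguates one ambiguous input against several commands using the device's current location. A minimal sketch with invented commands, contexts, and a stand-in geofencing check:

```python
# Minimal sketch of context-based disambiguation: the same ambiguous input maps
# to several commands, and the device's current location selects one. Commands,
# contexts, and the geofencing check are all invented.

AMBIGUOUS_COMMANDS = {
    "play": {"in_car": "play_podcast", "at_home": "play_on_tv"},
}

def current_context(latitude, longitude):
    # Stand-in for real geofencing against the user's known places.
    return "at_home" if abs(latitude - 37.42) < 0.01 else "in_car"

def disambiguate(user_input, latitude, longitude):
    candidates = AMBIGUOUS_COMMANDS[user_input]
    return candidates[current_context(latitude, longitude)]

print(disambiguate("play", 37.42, -122.08))  # "play_on_tv" (at home)
print(disambiguate("play", 40.00, -122.08))  # "play_podcast" (in the car)
```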
  • Publication number: 20160299641
    Abstract: Methods, apparatuses, and computer-readable media are described herein related to a user interface and interactions for a head-mountable device. An HMD can display a first interaction screen of an ordered plurality of interaction screens. The first interaction screen may include information corresponding to a first interaction associated with a contact of the HMD. While displaying the first interaction screen, the HMD can receive an input at the HMD. The input may be associated with the contact and comprises a second interaction. The HMD can also associate a second interaction screen with the ordered plurality of interaction screens. The second interaction screen may include information corresponding to the second interaction. The HMD can further display the second interaction screen using the HMD.
    Type: Application
    Filed: April 1, 2013
    Publication date: October 13, 2016
    Applicant: Google Inc.
    Inventors: Michael J. LeBeau, Mathieu Balez, Richard The
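The ordered interaction screens of publication 20160299641 can be sketched as a per-contact timeline; the class, the contact name, and the message text below are invented for illustration.

```python
# Minimal sketch of per-contact interaction screens: a new interaction is
# appended to the ordered list of screens and can then be displayed. The class,
# contact name, and message text are invented.

class InteractionTimeline:
    def __init__(self, contact):
        self.contact = contact
        self.screens = []            # ordered plurality of interaction screens

    def add_interaction(self, description):
        self.screens.append({"contact": self.contact, "text": description})

    def display(self, index):
        screen = self.screens[index]
        return "[{}] {}".format(screen["contact"], screen["text"])

timeline = InteractionTimeline("Alex")
timeline.add_interaction("Received: 'Lunch at noon?'")    # first interaction screen
timeline.add_interaction("Sent reply: 'See you there.'")  # second interaction screen
print(timeline.display(1))
```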
  • Patent number: 9466287
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Grant
    Filed: January 5, 2016
    Date of Patent: October 11, 2016
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
  • Publication number: 20160232896
    Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice navigable menu that includes one or more menu items. The voice-navigable UI can also present a first visible menu that includes at least a portion of the voice navigable menu. In response to a first utterance comprising a first menu item of the one or more menu items, the voice-navigable UI can modify the first visible menu to display one or more commands associated with the first menu item. In response to a second utterance comprising a first command, the voice-navigable UI can invoke the first command. In some embodiments, the voice-navigable UI can display a second visible menu, where the first command can be displayed above other menu items in the second visible menu.
    Type: Application
    Filed: April 15, 2016
    Publication date: August 11, 2016
    Inventors: Michael J. LeBeau, Clifford Ivar Nass
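The two-step voice navigation in publication 20160232896 (related grants 9368113 and 9342268 appear below) can be sketched as follows; the menu items and commands are invented for illustration.

```python
# Minimal sketch of the two-step voice menu: speaking a menu item reveals its
# commands, and speaking one of those commands invokes it. Menu contents are
# invented.

VOICE_MENU = {
    "messages": ["read aloud", "reply"],
    "camera":   ["take a picture", "record a video"],
}

class VoiceMenu:
    def __init__(self):
        self.open_item = None                  # no menu item expanded yet

    def on_utterance(self, utterance):
        utterance = utterance.lower().strip()
        if utterance in VOICE_MENU:
            self.open_item = utterance         # first utterance: expand the item
            return "showing commands: {}".format(VOICE_MENU[utterance])
        if self.open_item and utterance in VOICE_MENU[self.open_item]:
            return "invoking '{}'".format(utterance)  # second utterance: invoke
        return "not recognized"

menu = VoiceMenu()
print(menu.on_utterance("camera"))
print(menu.on_utterance("take a picture"))
```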
  • Patent number: 9401147
    Abstract: In one implementation, a computer-implemented method includes receiving, at a mobile computing device, ambiguous user input that indicates more than one of a plurality of commands; and determining a current context associated with the mobile computing device that indicates where the mobile computing device is currently located. The method can further include disambiguating the ambiguous user input by selecting a command from the plurality of commands based on the current context associated with the mobile computing device; and causing output associated with performance of the selected command to be provided by the mobile computing device.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: July 26, 2016
    Assignee: Google Inc.
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau
  • Patent number: 9368113
    Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice navigable menu that includes one or more menu items comprising an original menu item and an added command menu item. The original menu item can be associated with one or more original commands, and the added menu item can be associated with one or more added commands, including a first added command. The interface can also present a first visible menu that includes at least a portion of the voice navigable menu. Responsive to a first utterance comprising the first added command, the interface can invoke the first added command. In some embodiments, the interface can display a second visible menu, wherein the first added command can be displayed above other menu items.
    Type: Grant
    Filed: January 30, 2013
    Date of Patent: June 14, 2016
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, Mat Balez, Yury Pinsky
  • Publication number: 20160163308
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Application
    Filed: February 17, 2016
    Publication date: June 9, 2016
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
  • Publication number: 20160154881
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Application
    Filed: February 5, 2016
    Publication date: June 2, 2016
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
  • Publication number: 20160156758
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Application
    Filed: February 5, 2016
    Publication date: June 2, 2016
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
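Publications 20160154881 and 20160156758 above share an abstract about choosing between audible and visual output based on the device's state. A minimal sketch, assuming invented state names and a print statement as a stand-in for text-to-speech:

```python
# Minimal sketch of state-based output mode selection: if the device is in a
# state where the user is unlikely to be looking at it, results are spoken;
# otherwise they are shown on screen. State names and the TTS stand-in are
# invented.

def select_output_mode(device_state):
    # Two-mode collection from the abstract: audible vs. visual output.
    return "audible" if device_state in ("driving", "docked") else "visual"

def deliver_results(results, device_state):
    if select_output_mode(device_state) == "audible":
        for result in results:
            print("(text-to-speech) " + result)  # stand-in for a TTS engine
    else:
        print("\n".join(results))                # render on screen instead

deliver_results(["Weather: 65F and sunny"], "driving")
deliver_results(["Weather: 65F and sunny"], "in_hand")
```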
  • Patent number: 9349368
    Abstract: A computer-implemented method of determining when an audio notification should be generated includes detecting receipt of a triggering event that occurs on a user device; generating, based on detecting, the audio notification for the triggering event; receiving, from the user device, a user voice command responding to the audio notification; and generating a response to the user voice command based on one or more of (i) information associated with the audio notification, and (ii) information associated with the user voice command.
    Type: Grant
    Filed: August 5, 2010
    Date of Patent: May 24, 2016
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff
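Patent 9349368 ties an audio notification to a follow-up voice command. A rough sketch with hypothetical event fields and a toy command check:

```python
# Rough sketch of the notification flow: a triggering event yields an audio
# notification, and a follow-up voice command is answered using information
# from both the notification and the command. Event fields are invented.

def on_trigger(event):
    # Generate the audio notification text for the triggering event.
    return "New {} from {}".format(event["kind"], event["sender"])

def respond(notification, voice_command):
    # Combine notification context with the user's spoken command.
    if "read" in voice_command.lower():
        return "Reading it now: " + notification
    return "Okay, ignoring it."

notification = on_trigger({"kind": "message", "sender": "Alex"})
print(notification)                      # spoken as the audio notification
print(respond(notification, "read it"))  # response to the user's voice command
```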
  • Patent number: 9342268
    Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice navigable menu that includes one or more menu items. The voice-navigable UI can also present a first visible menu that includes at least a portion of the voice navigable menu. In response to a first utterance comprising a first menu item of the one or more menu items, the voice-navigable UI can modify the first visible menu to display one or more commands associated with the first menu item. In response to a second utterance comprising a first command, the voice-navigable UI can invoke the first command. In some embodiments, the voice-navigable UI can display a second visible menu, where the first command can be displayed above other menu items in the second visible menu.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: May 17, 2016
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, Clifford Ivar Nass