Patents by Inventor William J. Byrne

William J. Byrne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8438023
    Abstract: Methods and devices are disclosed for notifying a user of a likelihood of successful recognition in an environment by a voice recognition application. In one embodiment, the method includes a device recording a noise sample in an environment and making a comparison of the noise sample and at least one predetermined threshold. The method further includes, based on the comparison, determining a likelihood of successful recognition in the environment by a voice recognition application, and triggering a notification indicating the likelihood. In another embodiment, the device includes a microphone configured to record a noise sample in an environment, a processor, and data storage comprising instructions executable by the processor to make a comparison of the noise sample and at least one predetermined threshold, determine, based on the comparison, a likelihood of successful recognition by a voice recognition application, and trigger a notification indicating the likelihood.
    Type: Grant
    Filed: August 17, 2012
    Date of Patent: May 7, 2013
    Assignee: Google Inc.
    Inventors: Robert William Hamilton, Bjorn Erik Bringert, Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff
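
A minimal sketch of the idea described in patent 8438023 above, not the patented implementation: a recorded noise sample is reduced to an RMS level and compared against predetermined thresholds to decide which notification to show. The threshold values, the dB measure, and all function names are assumptions for illustration.

```python
import math

# Hypothetical thresholds on a dBFS-like scale; the patent does not specify values.
LOW_NOISE_THRESHOLD = -40.0   # below this, recognition is likely to succeed
HIGH_NOISE_THRESHOLD = -20.0  # above this, recognition is likely to fail

def noise_level_db(samples):
    """Root-mean-square level of a noise sample, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def recognition_likelihood(samples):
    """Compare the noise sample against the predetermined thresholds and
    return a coarse likelihood of successful voice recognition."""
    level = noise_level_db(samples)
    if level < LOW_NOISE_THRESHOLD:
        return "high"
    if level < HIGH_NOISE_THRESHOLD:
        return "medium"
    return "low"

def notify_user(likelihood):
    # Stand-in for a real device notification (toast, status icon, tone, etc.).
    print(f"Likelihood of successful voice recognition here: {likelihood}")

# Example: a quiet room produces a very low RMS level, so the likelihood is "high".
quiet_room = [0.001 * math.sin(i / 10) for i in range(16000)]
notify_user(recognition_likelihood(quiet_room))
```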
  • Patent number: 8412532
    Abstract: A method, computer program product, and system are provided for performing a voice command on a client device. The method can include translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method can include receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command and displaying the first query result and the second query result on the client device.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: April 2, 2013
    Assignee: Google Inc.
    Inventors: Alexander Gruenstein, William J. Byrne
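
A rough sketch of the dual-recognizer flow described in patent 8412532 above, under stated assumptions: the local recognizer, the server recognizer, the client database lookup, and the server query are all placeholders, and the two recognitions simply run in parallel threads.

```python
import concurrent.futures

def local_recognize(audio):
    # Placeholder for an on-device speech recognizer (e.g., a small grammar).
    return "call alice"

def remote_recognize(audio):
    # Placeholder for streaming the audio to a server-side recognizer.
    return "call alice mobile"

def query_client_db(command):
    # Placeholder lookup against a local database (contacts, apps, media).
    return [f"local result for: {command}"]

def query_server(command):
    # Placeholder query performed remotely using the server's own transcription.
    return [f"server result for: {command}"]

def run_voice_command(audio):
    """Recognize the same audio locally and remotely, query both sides,
    and return both result sets for display on the client."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        local_future = pool.submit(local_recognize, audio)
        remote_future = pool.submit(remote_recognize, audio)
        first_results = query_client_db(local_future.result())
        second_results = query_server(remote_future.result())
    return first_results + second_results

print(run_voice_command(b"raw-audio-bytes"))
```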
  • Patent number: 8392411
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: March 5, 2013
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
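
A simplified illustration of the alert-then-connect behavior described in patent 8392411 above; it is not the patented implementation. The delay window, the cancel mechanism (a `threading.Event` standing in for a UI cancel button), and the connection target are assumptions.

```python
import threading

def connect_to(target):
    print(f"Connecting to {target} ...")

def alert_then_connect(target, cancel_event, delay_seconds=5.0):
    """Warn the user that the device will connect to the top search result,
    then connect unless the user intervenes within the delay window."""
    print(f"Will connect to {target} in {delay_seconds:.0f}s; cancel to stop.")
    intervened = cancel_event.wait(timeout=delay_seconds)
    if intervened:
        print("Connection cancelled by user.")
    else:
        connect_to(target)

# Usage: cancel_event would be set by the user tapping a cancel control.
cancel = threading.Event()
alert_then_connect("Pizza Palace, 555-0123", cancel, delay_seconds=0.1)
```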
  • Publication number: 20120310645
    Abstract: A method, computer program product, and system are provided for performing a voice command on a client device. The method can include translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method can include receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command and displaying the first query result and the second query result on the client device.
    Type: Application
    Filed: August 14, 2012
    Publication date: December 6, 2012
    Applicant: Google Inc.
    Inventors: Alexander Gruenstein, William J. Byrne
  • Patent number: 8312042
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
    Type: Grant
    Filed: September 26, 2011
    Date of Patent: November 13, 2012
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
  • Patent number: 8244544
    Abstract: A computer-implemented method of generating a voice command to perform an action includes receiving a voice request to perform the action, wherein the voice request comprises first audio information for one or more first data fields associated with the action; generating a GUI that when rendered on a display device comprises a prompt message prompting a user to speak second audio information for one or more second data fields associated with the action; and inserting into the one or more second data fields data indicative of one or more of (i) the first audio information, and (ii) the second audio information.
    Type: Grant
    Filed: September 29, 2011
    Date of Patent: August 14, 2012
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
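
A minimal sketch of the two-stage field-filling flow described in patent 8244544 above, assuming an invented action schema and placeholder recognizer and GUI prompt: fields extracted from the first utterance are kept, and the user is prompted to speak any fields still missing.

```python
# Hypothetical action schema: sending a text message needs a recipient and a body.
ACTION_FIELDS = {"send_text": ["recipient", "message_body"]}

def recognize(audio):
    # Stand-in for a speech recognizer; returns the requested action plus any
    # data fields it could extract from the first utterance.
    return "send_text", {"recipient": "Bob"}

def prompt_for(field):
    # Stand-in for a GUI prompt asking the user to speak the missing field.
    print(f"Please say the {field.replace('_', ' ')}:")
    return "Running ten minutes late"

def build_voice_command(audio):
    """Fill the action's data fields from the first utterance, then prompt
    for and insert any fields the first utterance did not supply."""
    action, fields = recognize(audio)
    for field in ACTION_FIELDS[action]:
        if field not in fields:
            fields[field] = prompt_for(field)
    return action, fields

print(build_voice_command(b"raw-audio-bytes"))
```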
  • Patent number: 8224654
    Abstract: A computer-implemented method of generating a voice command to perform an action includes receiving a voice request to perform the action, wherein the voice request comprises first audio information for one or more first data fields associated with the action; generating a GUI that when rendered on a display device comprises a prompt message prompting a user to speak second audio information for one or more second data fields associated with the action; and inserting into the one or more second data fields data indicative of one or more of (i) the first audio information, and (ii) the second audio information.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: July 17, 2012
    Assignee: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
  • Publication number: 20120084079
    Abstract: A method, computer program product, and system are provided for performing a voice command on a client device. The method can include translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method can include receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command and displaying the first query result and the second query result on the client device.
    Type: Application
    Filed: November 2, 2011
    Publication date: April 5, 2012
    Applicant: Google Inc.
    Inventors: Alexander Gruenstein, William J. Byrne
  • Publication number: 20120036151
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information without audibly outputting it. At least the portion of the information is audibly output.
    Type: Application
    Filed: September 30, 2011
    Publication date: February 9, 2012
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
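
A short sketch of the state-dependent output-mode selection described in publication 20120036151 above; the state names, the set of "audible" states, and the output functions are illustrative assumptions, not the claimed implementation.

```python
def speak(text):
    print(f"[TTS] {text}")        # stand-in for text-to-speech output

def display(text):
    print(f"[SCREEN] {text}")     # stand-in for on-screen rendering

def present_results(results_summary, device_state):
    """Choose between an audible and a visual output mode based on the
    device's current state."""
    audible_states = {"docked_in_car", "headset_connected"}
    if device_state in audible_states:
        speak(results_summary)     # first output mode: audible
    else:
        display(results_summary)   # second output mode: visual only

present_results("Weather today: 72 and sunny", "docked_in_car")
present_results("Weather today: 72 and sunny", "in_hand")
```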
  • Publication number: 20120036121
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information without audibly outputting it. At least the portion of the information is audibly output.
    Type: Application
    Filed: August 6, 2010
    Publication date: February 9, 2012
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
  • Publication number: 20120022853
    Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
    Type: Application
    Filed: September 29, 2011
    Publication date: January 26, 2012
    Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. LeBeau
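
A toy sketch of the application-independent, speech-capable input method editor described in publication 20120022853 above. The class, the recognizer callable, and the target application are all invented stand-ins; the point is only that spoken input is converted to text and committed to whichever application has focus.

```python
class SpeechInputMethodEditor:
    """Accepts typed or spoken input and hands the resulting text to
    whatever application currently has focus."""

    def __init__(self, recognizer):
        self.recognizer = recognizer

    def commit_text(self, application, text):
        application.insert_text(text)

    def on_spoken_input(self, application, audio):
        # Convert the spoken input to text, then treat it like typed input.
        text = self.recognizer(audio)
        self.commit_text(application, text)

class NotesApp:
    def __init__(self):
        self.buffer = ""
    def insert_text(self, text):
        self.buffer += text

fake_recognizer = lambda audio: "meet at noon"   # stand-in for real ASR
ime = SpeechInputMethodEditor(fake_recognizer)
app = NotesApp()
ime.on_spoken_input(app, b"raw-audio-bytes")
print(app.buffer)
```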
  • Publication number: 20120022868
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Application
    Filed: September 30, 2011
    Publication date: January 26, 2012
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti Kristjansson
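
A simplified sketch of the word-lattice correction flow described in publication 20120022868 above. Real lattices carry scores and arbitrary paths; here the lattice is flattened to per-position alternates purely to illustrate presenting alternates for a selected word and replacing it.

```python
# Simplified stand-in for a word lattice: for each word position, the best
# hypothesis first, followed by alternate words from the lattice.
lattice = [
    ["I'd", "I"],
    ["like", "bike"],
    ["a", "the"],
    ["flight", "fright", "slight"],
]

def best_transcription(lattice):
    return [alternates[0] for alternates in lattice]

def alternates_for(lattice, position):
    """Alternate words the lattice offers for the selected word."""
    return lattice[position][1:]

def replace_word(transcription, position, new_word):
    corrected = list(transcription)
    corrected[position] = new_word
    return corrected

words = best_transcription(lattice)
print(" ".join(words))                       # I'd like a flight
print(alternates_for(lattice, 3))            # ['fright', 'slight']
print(" ".join(replace_word(words, 3, "fright")))
```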
  • Publication number: 20120015674
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
    Type: Application
    Filed: September 26, 2011
    Publication date: January 19, 2012
    Applicant: Google Inc.
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
  • Publication number: 20110301955
    Abstract: Predicting and learning users' intended actions on an electronic device based on free-form speech input. Users' actions can be monitored to develop a list of carrier phrases having one or more actions that correspond to the carrier phrases. A user can speak a command into a device to initiate an action. The spoken command can be parsed and compared to a list of carrier phrases. If the spoken command matches one of the known carrier phrases, the corresponding action(s) can be presented to the user for selection. If the spoken command does not match one of the known carrier phrases, search results (e.g., Internet search results) corresponding to the spoken command can be presented to the user. The actions of the user in response to the presented action(s) and/or the search results can be monitored to update the list of carrier phrases.
    Type: Application
    Filed: June 7, 2010
    Publication date: December 8, 2011
    Applicant: Google Inc.
    Inventors: William J. Byrne, Alexander H. Gruenstein, Douglas Beeferman
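
A minimal sketch of the carrier-phrase matching and fallback-to-search behavior described in publication 20110301955 above. The phrase table, the matching rule, and the learning heuristic are invented for illustration and are not the claimed method.

```python
# Hypothetical carrier-phrase table mapping spoken prefixes to actions.
CARRIER_PHRASES = {
    "call": "place_phone_call",
    "navigate to": "start_navigation",
    "listen to": "play_music",
}

def handle_spoken_command(utterance):
    """Match the utterance against known carrier phrases; fall back to a
    web search when no carrier phrase matches."""
    text = utterance.lower().strip()
    for phrase, action in CARRIER_PHRASES.items():
        if text.startswith(phrase):
            argument = text[len(phrase):].strip()
            return ("action", action, argument)
    return ("search", text)

def learn_from_user(utterance, chosen_action):
    # If the user repeatedly picks the same action for an unmatched phrase,
    # the phrase can be promoted into the carrier-phrase table.
    prefix = " ".join(utterance.lower().split()[:2])
    CARRIER_PHRASES.setdefault(prefix, chosen_action)

print(handle_spoken_command("Navigate to 1600 Amphitheatre Parkway"))
print(handle_spoken_command("book a table for two"))   # falls back to search
```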
  • Publication number: 20110289064
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
    Type: Application
    Filed: August 6, 2010
    Publication date: November 24, 2011
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
  • Patent number: 8041568
    Abstract: A method of operating a voice-enabled business directory search system includes prompting a user to provide a type of business and an identifier of a specific business, receiving from the user a speech input having information about the type of business and the identifier, and recognizing, using a speech recognition module, the identifier based on the type of business.
    Type: Grant
    Filed: October 13, 2006
    Date of Patent: October 18, 2011
    Assignee: Google Inc.
    Inventors: Brian Strope, William J. Byrne, Francoise Beaufays
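
A rough sketch of the idea in patent 8041568 above: the stated business type narrows the set of identifiers the recognizer has to choose among. The directory contents and the word-overlap "recognizer" are illustrative assumptions, not the patented speech recognition module.

```python
# Hypothetical directory: business identifiers grouped by business type, so the
# recognizer only has to choose among names of the stated type.
DIRECTORY = {
    "pizza restaurant": ["Pizza Palace", "Slice Brothers"],
    "pharmacy": ["Corner Pharmacy", "Main Street Drugs"],
}

def recognize_identifier(speech, business_type):
    """Stand-in for a recognizer whose vocabulary is narrowed to the
    identifiers associated with the stated business type."""
    candidates = DIRECTORY.get(business_type, [])
    # Trivial scoring: count words shared between the speech and each candidate.
    def score(name):
        return len(set(speech.lower().split()) & set(name.lower().split()))
    return max(candidates, key=score) if candidates else None

business_type = "pizza restaurant"            # answer to the first prompt
spoken_identifier = "uh pizza palace please"  # answer to the second prompt
print(recognize_identifier(spoken_identifier, business_type))
```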
  • Publication number: 20110246944
    Abstract: A text processing module can allow a user to compose text prior to selecting another application with which to use or communicate the text. A device can include the text processing module, which receives text input from a user via text input means. The device can display the text in a user interface, along with one or more icons associated with software applications with which the text can be used or communicated. After the user has entered text, the user can activate a displayed icon to select the applications. The text processing module receives the selection and interacts with the selected application to display the text in the selected application and/or communicate the text to another person using the selected application. The text processing module can interact with user contacts to identify possible recipients for the text based on information in the text.
    Type: Application
    Filed: April 6, 2010
    Publication date: October 6, 2011
    Applicant: Google Inc.
    Inventors: William J. Byrne, Brett Rolston Lider, Nicholas Jitkoff, Alexander H. Gruenstein, Benedict Davies
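
A toy sketch of the compose-first flow described in publication 20110246944 above: text is entered once, candidate applications are offered, and the text is handed to whichever application the user picks. The class, application callables, and contact-matching rule are all invented for illustration.

```python
class TextProcessingModule:
    """Hold the user's composed text, suggest recipients from contacts,
    and hand the text to whichever application the user selects."""

    def __init__(self, applications, contacts):
        self.applications = applications  # name -> callable taking the text
        self.contacts = contacts
        self.text = ""

    def on_text_input(self, text):
        self.text = text

    def suggest_recipients(self):
        # Suggest contacts whose names appear in the composed text.
        return [c for c in self.contacts if c.lower() in self.text.lower()]

    def on_icon_selected(self, app_name):
        self.applications[app_name](self.text)

apps = {
    "sms": lambda text: print(f"SMS draft: {text}"),
    "email": lambda text: print(f"Email draft: {text}"),
}
module = TextProcessingModule(apps, contacts=["Alice", "Bob"])
module.on_text_input("Bob, lunch at 1?")
print(module.suggest_recipients())   # ['Bob']
module.on_icon_selected("sms")
```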
  • Publication number: 20110246203
    Abstract: A dynamic voice user interface system is provided. The dynamic voice user interface system interacts with a user at a first level of formality. The voice user interface system then monitors history of user interaction and adjusts the voice user interface to interact with the user with a second level of formality based on the history of user interaction.
    Type: Application
    Filed: April 4, 2011
    Publication date: October 6, 2011
    Applicant: Ben Franklin Patent Holding LLC
    Inventors: William J. Byrne, Henry W. Gardella, Jeffrey A. Gilmore
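
A small sketch of the formality adjustment described in publication 20110246203 above. The prompts, the two formality levels, and the "relax after N successful interactions" rule are assumptions; the publication only says the level is adjusted based on interaction history.

```python
class DynamicVoiceUI:
    """Start with formal prompts and relax them as the user accumulates
    successful interactions (the adjustment rule here is invented)."""

    PROMPTS = {
        "formal": "Welcome. Please state the name of the party you wish to reach.",
        "casual": "Who do you want to call?",
    }

    def __init__(self, relax_after=5):
        self.relax_after = relax_after
        self.successful_interactions = 0

    @property
    def formality(self):
        if self.successful_interactions >= self.relax_after:
            return "casual"
        return "formal"

    def prompt(self):
        return self.PROMPTS[self.formality]

    def record_interaction(self, success):
        if success:
            self.successful_interactions += 1

ui = DynamicVoiceUI(relax_after=2)
print(ui.prompt())                 # formal greeting at first
ui.record_interaction(True)
ui.record_interaction(True)
print(ui.prompt())                 # casual greeting after enough history
```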
  • Publication number: 20110239110
    Abstract: Systems and methods allow a user to select a subset of displayed content using a touch screen. A user can touch the screen at or near a portion of the displayed content that the user would like to select. The touch module can display the selection of the selected portion on the touch screen using an indicator (e.g., highlighting, underlining, change in color). While the user continues to touch the screen, the selection of displayed content can expand to select additional content based on at least one rule. The rule(s) define how the selection of displayed content expands using characteristics of the user's touch. For example, these characteristics can include an amount of pressure exerted on the screen, a direction of finger roll at the point of contact with the screen, and an amount of time that the user has touched the screen.
    Type: Application
    Filed: March 25, 2010
    Publication date: September 29, 2011
    Applicant: Google Inc.
    Inventors: Maryam Kamvar Garrett, William J. Byrne
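
A minimal sketch of the rule-based selection expansion described in publication 20110239110 above, using hold time and pressure as the touch characteristics. The expansion rates and the character-radius rule are invented for illustration; the publication also mentions finger-roll direction, which is omitted here.

```python
def expand_selection(text, touch_index, hold_seconds, pressure):
    """Grow the selection outward from the touch point while the finger stays
    down; the per-second and per-pressure expansion rates are assumptions."""
    chars_per_second = 8
    pressure_bonus = int(pressure * 10)   # harder press -> wider selection
    radius = int(hold_seconds * chars_per_second) + pressure_bonus
    start = max(0, touch_index - radius)
    end = min(len(text), touch_index + radius)
    return text[start:end]

sentence = "Systems and methods allow a user to select displayed content."
print(expand_selection(sentence, touch_index=20, hold_seconds=0.5, pressure=0.3))
print(expand_selection(sentence, touch_index=20, hold_seconds=2.0, pressure=0.8))
```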
  • Publication number: 20110184740
    Abstract: A method, computer program product, and system are provided for performing a voice command on a client device. The method can include translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method can include receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command and displaying the first query result and the second query result on the client device.
    Type: Application
    Filed: June 7, 2010
    Publication date: July 28, 2011
    Applicant: Google Inc.
    Inventors: Alexander Gruenstein, William J. Byrne