Patents by Inventor William J. Byrne

William J. Byrne has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200302931
    Abstract: A method of providing navigation directions includes receiving, at a user terminal, a query spoken by a user, wherein the query spoken by the user includes a speech utterance indicating (i) a category of business, (ii) a name of the business, and (iii) a location at which or near which the business is disposed; identifying, by processing hardware, the business based on the speech utterance; and providing navigation directions to the business via the user terminal.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 24, 2020
    Inventors: Brian Strope, Francoise Beaufays, William J. Byrne
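The identification step in the abstract above (matching a spoken category, business name, and location against a directory) can be sketched as follows. The data model and exact-match logic are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: identify a business from the three fields the abstract says
# the spoken query contains. The directory contents are invented examples.
from dataclasses import dataclass

@dataclass
class Business:
    name: str
    category: str
    location: str

DIRECTORY = [
    Business("Blue Bottle", "coffee shop", "Oakland"),
    Business("Joe's Diner", "restaurant", "Oakland"),
    Business("Blue Bottle", "coffee shop", "San Francisco"),
]

def identify_business(category, name, location):
    """Return the first directory entry matching all three utterance fields."""
    for b in DIRECTORY:
        if (b.category.lower() == category.lower()
                and b.name.lower() == name.lower()
                and b.location.lower() == location.lower()):
            return b
    return None

match = identify_business("coffee shop", "blue bottle", "Oakland")
```

A production system would of course use fuzzy matching and ranking rather than exact string comparison; the sketch only shows the three-field structure of the query.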
  • Publication number: 20200293276
    Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
    Type: Application
    Filed: June 4, 2020
    Publication date: September 17, 2020
    Applicant: Google LLC
    Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. Lebeau
  • Publication number: 20200251113
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Application
    Filed: April 21, 2020
    Publication date: August 6, 2020
    Applicant: Google LLC
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
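The correction flow described in the abstract above can be illustrated as a sketch: present the best word from each position of a recognition lattice, offer the per-position alternates on selection, and splice the chosen alternate into the transcript. The list-of-ranked-hypotheses lattice structure is a simplifying assumption.

```python
# Illustrative sketch of lattice-based word correction; not the patented code.
def alternates_for(lattice, position):
    """Return alternate hypotheses for the word at the given position."""
    return lattice[position][1:]  # entry 0 is the best hypothesis

def replace_word(transcript, position, replacement):
    """Replace one transcribed word with a user-selected alternate."""
    corrected = list(transcript)
    corrected[position] = replacement
    return corrected

# Each slot holds hypotheses ranked by recognizer confidence (toy data).
lattice = [["I"], ["want", "wont", "won't"], ["a"], ["coffee", "copy"]]
transcript = [slot[0] for slot in lattice]   # best path: I want a coffee
options = alternates_for(lattice, 1)         # alternates for "want"
fixed = replace_word(transcript, 1, "won't")
```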
  • Patent number: 10713010
    Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. Lebeau
  • Publication number: 20200201924
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Application
    Filed: March 5, 2020
    Publication date: June 25, 2020
    Applicant: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. Lebeau, William J. Byrne, David P. Singleton
  • Patent number: 10679624
Abstract: A method of providing a personal directory service includes receiving, over the Internet, from a user terminal, a query spoken by a user, where the query spoken by the user includes a speech utterance representing a category of persons. The method also includes determining a geographic location of the user terminal, recognizing the category of persons with a speech recognition engine based on the speech utterance representing the category of persons, searching a listing of persons within or near the determined geographic location matching the query to select persons responsive to the query spoken by the user, and sending to the user terminal information related to at least some of the responsive persons.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: Brian Strope, Francoise Beaufays, William J. Byrne
  • Patent number: 10672394
    Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: June 2, 2020
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
  • Publication number: 20200160611
    Abstract: Methods, systems, and devices for creating a model of a medical device for use in an extended reality (XR) system are described. The method may include receiving a three-dimensional model of the medical device, where the three-dimensional model is represented by a plurality of vectors. The method may further include reducing a number of the plurality of vectors to below a maximum vector count threshold while maintaining at least a minimum model resolution threshold. In some cases, the method may include assigning one or more components to the reduced number of the plurality of vectors. The method may further include configuring a software-executable file for displaying an XR version of the three-dimensional model of the medical device.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 21, 2020
    Inventors: Ryan H. Gertenbach, Michael J. Ferguson, William C. Harding, Patrick W. Kinzie, Emily Clare Byrne
  • Publication number: 20200129136
    Abstract: Methods, systems, and devices for medical imaging are described. Examples may include an augmented reality (AR) server receiving a set of medical imaging data acquired by at least a first imaging modality. The set of medical imaging data may include a visual representation of a biological structure of a body. Next, the medical imaging data can be used to render an isolated anatomical model of at least a portion of the biological structure. The isolated anatomical model can be received by an AR viewing device such as AR glasses. The AR viewing device may display, on a display of the AR viewing device, a first view perspective of the isolated anatomical model in a first orientation. The first orientation may be based on a position of the first AR viewing device relative to the body. Examples include displaying a virtual position of the medical instrument in the AR viewing device.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: William C. Harding, Martha De Cunha Maluf-Burgman, Brian Lee Bechard, Michael J. Ferguson, Patrick W. Kinzie, Ryan H. Gertenbach, Emily Clare Byrne
  • Patent number: 10621253
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: April 14, 2020
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
  • Patent number: 10599729
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: March 24, 2020
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
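The state-dependent mode selection described in the abstract above can be sketched as follows. The state names ("driving") and prompt strings are assumptions made for illustration; the patent does not name specific states.

```python
# Hedged sketch: pick the audible or visual output mode from device state.
def select_output_mode(device_state):
    """Audible output for an eyes-busy state; visual output otherwise."""
    return "audible" if device_state == "driving" else "visual"

def deliver(result_snippet, device_state):
    """Deliver a portion of the search results in the selected mode."""
    if select_output_mode(device_state) == "audible":
        return f"speaking: {result_snippet}"
    return f"displaying: {result_snippet}"
```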
  • Patent number: 10582355
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving a voice query at a mobile computing device and generating data that represents content of the voice query. The data is provided to a server system. A textual query that has been determined by a speech recognizer at the server system to be a textual form of at least part of the data is received at the mobile computing device. The textual query is determined to include a carrier phrase of one or more words that is reserved by a first third-party application program installed on the computing device. The first third-party application is selected, from a group of one or more third-party applications, to receive all or a part of the textual query. All or a part of the textual query is provided to the selected first application program.
    Type: Grant
    Filed: January 24, 2018
    Date of Patent: March 3, 2020
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
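The routing step in the abstract above (checking the recognized text for a carrier phrase reserved by an installed third-party application) can be sketched as below. The phrase table and application names are invented for illustration.

```python
# Hedged sketch: dispatch a textual query to the app that reserved its
# carrier phrase; the remainder of the query becomes the app's payload.
RESERVED_PHRASES = {
    "note to self": "NotesApp",
    "play song": "MusicApp",
}

def route_query(textual_query):
    """Return (app, payload) when a reserved carrier phrase prefixes the query."""
    lowered = textual_query.lower()
    for phrase, app in RESERVED_PHRASES.items():
        if lowered.startswith(phrase):
            return app, textual_query[len(phrase):].strip()
    return None, textual_query

app, payload = route_query("note to self buy milk")
```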
  • Patent number: 10496718
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
  • Patent number: 10496714
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. Lebeau, William J. Byrne, David P. Singleton
  • Publication number: 20190295550
    Abstract: Predicting and learning users' intended actions on an electronic device based on free-form speech input. Users' actions can be monitored to develop a list of carrier phrases having one or more actions that correspond to the carrier phrases. A user can speak a command into a device to initiate an action. The spoken command can be parsed and compared to a list of carrier phrases. If the spoken command matches one of the known carrier phrases, the corresponding action(s) can be presented to the user for selection. If the spoken command does not match one of the known carrier phrases, search results (e.g., Internet search results) corresponding to the spoken command can be presented to the user. The actions of the user in response to the presented action(s) and/or the search results can be monitored to update the list of carrier phrases.
    Type: Application
    Filed: April 4, 2019
    Publication date: September 26, 2019
    Inventors: William J. Byrne, Alexander H. Gruenstein, Douglas H. Beeferman
  • Patent number: 10297252
    Abstract: Predicting and learning users' intended actions on an electronic device based on free-form speech input. Users' actions can be monitored to develop a list of carrier phrases having one or more actions that correspond to the carrier phrases. A user can speak a command into a device to initiate an action. The spoken command can be parsed and compared to a list of carrier phrases. If the spoken command matches one of the known carrier phrases, the corresponding action(s) can be presented to the user for selection. If the spoken command does not match one of the known carrier phrases, search results (e.g., Internet search results) corresponding to the spoken command can be presented to the user. The actions of the user in response to the presented action(s) and/or the search results can be monitored to update the list of carrier phrases.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: May 21, 2019
    Assignee: Google LLC
    Inventors: William J. Byrne, Alexander H. Gruenstein, Douglas H. Beeferman
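The predict-and-learn loop described in the two entries above can be sketched as follows: match a spoken command against known carrier phrases, fall back to search results when nothing matches, then learn a new phrase-to-action pair from the user's follow-up choice. All phrase and action names are illustrative assumptions.

```python
# Hedged sketch of carrier-phrase matching with a search fallback and a
# learning step; not the patented implementation.
carrier_phrases = {"call": "start_phone_call"}

def handle_command(command):
    """Return a known action for a matching phrase, else search results."""
    for phrase, action in carrier_phrases.items():
        if command.startswith(phrase):
            return ("action", action)
    return ("search", f"web results for: {command}")

def learn(command, chosen_action):
    """Record the action the user picked so the phrase matches next time."""
    phrase = command.split()[0]
    carrier_phrases[phrase] = chosen_action

kind, _ = handle_command("navigate to work")   # unknown phrase -> search
learn("navigate to work", "start_navigation")  # user picked navigation
kind2, action = handle_command("navigate home")
```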
  • Publication number: 20190056909
    Abstract: A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application.
    Type: Application
    Filed: October 24, 2018
    Publication date: February 21, 2019
    Applicant: Google LLC
    Inventors: Brandon M. Ballinger, Johan Schalkwyk, Michael H. Cohen, William J. Byrne, Gudmundur Hafsteinsson, Michael J. Lebeau
  • Patent number: 10205815
    Abstract: A dynamic voice user interface system is provided. The dynamic voice user interface system interacts with a user at a first level of formality. The voice user interface system then monitors history of user interaction and adjusts the voice user interface to interact with the user with a second level of formality based on the history of user interaction.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: February 12, 2019
    Assignee: Intellectual Ventures I LLC
    Inventors: William J. Byrne, Henry W. Gardella, Jeffrey A. Gilmore
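The dynamic formality adjustment described in the abstract above can be sketched as follows. The interaction-count threshold and the prompt wording are assumptions for illustration; the patent only states that formality changes with interaction history.

```python
# Hedged sketch: shift the voice UI from a formal to a casual register
# once the user has enough interaction history.
PROMPTS = {
    "formal": "How may I assist you today?",
    "casual": "What's up?",
}

def prompt_for(interaction_count, threshold=5):
    """Choose the prompt register from the user's interaction history."""
    level = "casual" if interaction_count >= threshold else "formal"
    return PROMPTS[level]
```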
  • Patent number: D848371
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: May 14, 2019
    Inventors: Daniel P. Byrne, Chad Zimmerman, William F. Schacht, Randell E. Pate, Brent A. Reame, Timothy J. Warwick
  • Patent number: D864117
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: October 22, 2019
    Inventors: Daniel P. Byrne, Chad Zimmerman, William F. Schacht, Randell E. Pate, Brent A. Reame, Timothy J. Warwick