Patents by Inventor Glen Shires

Glen Shires has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Short, illustrative code sketches of several of the described techniques appear after the listing.

  • Publication number: 20230315987
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Application
    Filed: April 27, 2023
    Publication date: October 5, 2023
    Applicant: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 11669683
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 11334182
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: May 17, 2022
    Assignee: Google LLC
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Publication number: 20210090554
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Application
    Filed: December 8, 2020
    Publication date: March 25, 2021
    Applicant: Google LLC
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Patent number: 10885898
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: January 5, 2021
    Assignee: Google LLC
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Publication number: 20200279074
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 3, 2020
    Applicant: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Publication number: 20200183569
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Application
    Filed: December 10, 2019
    Publication date: June 11, 2020
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Patent number: 10679005
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Publication number: 20200065379
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Application
    Filed: October 30, 2019
    Publication date: February 27, 2020
    Applicant: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 10545647
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: January 28, 2020
    Assignee: Google LLC
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Patent number: 10496746
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 10339917
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: July 2, 2019
    Assignee: Google LLC
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Publication number: 20190121851
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Application
    Filed: December 11, 2018
    Publication date: April 25, 2019
    Applicant: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 10185711
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: January 22, 2019
    Assignee: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Publication number: 20190012064
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 10, 2019
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Patent number: 10048842
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Grant
    Filed: June 15, 2015
    Date of Patent: August 14, 2018
    Assignee: Google LLC
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Publication number: 20180012591
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Application
    Filed: September 21, 2017
    Publication date: January 11, 2018
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Patent number: 9842489
    Abstract: The disclosed subject matter provides a main device and at least one secondary device. The at least one secondary device and the main device may operate in cooperation with one another and other networked components to provide improved performance, such as improved speech and other signal recognition operations. Using the improved recognition results, a higher probability of generating the proper commands to a controllable device is provided.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: December 12, 2017
    Assignee: Google LLC
    Inventor: Glen Shires
  • Publication number: 20170069309
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Application
    Filed: June 24, 2016
    Publication date: March 9, 2017
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Publication number: 20170069308
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Application
    Filed: September 3, 2015
    Publication date: March 9, 2017
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
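
The sketches below are illustrative only. They are not the patented implementations, and every function, variable, and value in them is invented for illustration.

The first sketch relates to the video-conference filings (patent numbers 11669683, 10679005, 10496746, 10185711 and the corresponding publications). It assumes each attendee's speech has already been decoded to text, coalesces the per-attendee snippets into a single timestamped transcript, and derives a very simple session characteristic (frequent content words) as a stand-in for the characteristic detection the abstracts describe.

from collections import Counter

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "is", "on", "for", "me"}


def coalesce_transcript(utterances):
    """utterances: list of (timestamp_sec, speaker, text) covering all attendees."""
    merged = sorted(utterances, key=lambda u: u[0])
    return "\n".join(f"[{t:6.1f}s] {speaker}: {text}" for t, speaker, text in merged)


def session_characteristics(transcript, top_n=3):
    """A crude stand-in for characteristic detection: most frequent content words."""
    counts = Counter()
    for line in transcript.splitlines():
        # Drop the "[   1.0s] speaker:" prefix added by coalesce_transcript.
        text = line.split(":", 1)[1] if ":" in line else line
        for word in text.lower().split():
            word = word.strip(".,!?'")
            if word and word not in STOP_WORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_n)]


utterances = [
    (3.2, "alice", "Let's review the budget for the product launch."),
    (1.0, "bob", "Good morning everyone."),
    (7.5, "carol", "The budget numbers for the launch look fine to me."),
]
transcript = coalesce_transcript(utterances)
print(transcript)
print(session_characteristics(transcript))  # e.g. ['budget', 'launch', ...]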
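
The second sketch relates to the touch-disambiguation filings (patent numbers 11334182, 10545647, 10048842). One plausible reading of the abstracts is that an item's effective touch target grows as the confidence in that item drops, making likely-wrong items easier to select and correct; the sketch below implements that reading only.

from dataclasses import dataclass


@dataclass
class HypothesizedItem:
    text: str
    x: float            # center of the item's on-screen bounding box
    y: float
    half_width: float
    half_height: float
    confidence: float   # 0.0 (likely wrong) .. 1.0 (likely correct)


def touch_selects(item, touch_x, touch_y, max_expansion=20.0):
    """Expand the effective touch target as confidence drops, so items the
    user probably wants to correct are easier to select."""
    expansion = max_expansion * (1.0 - item.confidence)
    return (abs(touch_x - item.x) <= item.half_width + expansion
            and abs(touch_y - item.y) <= item.half_height + expansion)


word = HypothesizedItem("recieve", x=100, y=40, half_width=30,
                        half_height=12, confidence=0.35)
print(touch_selects(word, touch_x=135, touch_y=45))  # True: target was expanded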
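
The third sketch relates to the end-of-speech filings (patent numbers 10885898, 10339917 and the corresponding publications). It assumes a stream of intermediate recognition hypotheses and a set of expected results derived from context, and ends recognition as soon as an intermediate hypothesis matches an expected result rather than waiting for a silence timeout.

from typing import Iterable, Optional


def recognize_with_expected_results(intermediate_results: Iterable[str],
                                    expected_results: set) -> Optional[str]:
    """Return a final result as soon as an intermediate hypothesis matches one
    of the context-derived expected results; otherwise fall back to the last
    hypothesis once the stream ends."""
    last = None
    for hypothesis in intermediate_results:
        last = hypothesis
        if hypothesis.strip().lower() in expected_results:
            # A match sets the end-of-speech condition early instead of
            # waiting for a silence timeout.
            return hypothesis
    return last


# Example: a dialog state that expects a yes/no answer.
expected = {"yes", "no"}
stream = ["ye", "yes"]  # intermediate hypotheses from a streaming recognizer
print(recognize_with_expected_results(stream, expected))  # -> "yes"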
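
The last sketch relates to the main/secondary device filing (patent number 9842489). As a rough stand-in for the cooperative recognition the abstract describes, it simply sums per-device confidence scores for identical transcripts and keeps the best-scoring one.

from collections import defaultdict


def fuse_hypotheses(per_device_results):
    """per_device_results: {device_id: [(transcript, confidence), ...]}.
    Sums confidence for identical transcripts across devices and returns the
    transcript with the highest combined score."""
    scores = defaultdict(float)
    for hypotheses in per_device_results.values():
        for transcript, confidence in hypotheses:
            scores[transcript.lower()] += confidence
    return max(scores, key=scores.get) if scores else None


results = {
    "living-room-speaker": [("turn on the lights", 0.62), ("turn off the lights", 0.30)],
    "kitchen-display": [("turn on the lights", 0.55)],
}
print(fuse_hypotheses(results))  # -> "turn on the lights"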