Patents by Inventor Glen Shires

Glen Shires has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190121851
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Application
    Filed: December 11, 2018
    Publication date: April 25, 2019
    Applicant: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
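The coalesced-transcript idea in the abstract above can be illustrated with a small sketch. This is not the patented implementation; all function names, the stop-word list, and the sample data are hypothetical. It merges per-attendee decoded segments into one time-ordered transcript, then derives a simple session characteristic (frequent content words) from it.

```python
# Hypothetical sketch: merge per-attendee transcript segments into one
# time-ordered ("coalesced") transcript, then extract a simple session
# characteristic such as the most frequent content words.
from collections import Counter

STOP_WORDS = {"the", "a", "to", "and", "of", "in", "we", "i"}  # illustrative only

def coalesce(segments):
    """segments: list of (timestamp, speaker, text) tuples."""
    ordered = sorted(segments, key=lambda s: s[0])
    return [f"{speaker}: {text}" for _, speaker, text in ordered]

def session_topics(transcript, top_n=2):
    """Count content words across all speakers' lines."""
    words = Counter()
    for line in transcript:
        _, _, text = line.partition(": ")
        words.update(w for w in text.lower().split() if w not in STOP_WORDS)
    return [w for w, _ in words.most_common(top_n)]

transcript = coalesce([
    (2.0, "Bob", "the budget looks fine"),
    (0.5, "Alice", "let's review the budget"),
    (3.1, "Carol", "budget approved"),
])
```

Sorting by timestamp interleaves the separately decoded streams, which is what lets a single transcript provide session-wide context.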
  • Patent number: 10185711
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: January 22, 2019
    Assignee: Google LLC
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Publication number: 20190012064
Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 10, 2019
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
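The confidence-based touch disambiguation described above could be sketched roughly as follows. This is an illustrative assumption, not the patented method: the item layout, the linear slack formula, and all names are hypothetical. The core idea shown is that a hypothesized item's effective touch target grows as its recognition confidence falls, so likely-wrong items are easier to select for correction.

```python
def touch_selects(touch, item):
    """Return True if the touch falls within the item's effective target.

    The effective target is the item's bounds expanded by a margin that
    grows as the hypothesis confidence (0.0-1.0) shrinks, so low-confidence
    items that the user may wish to correct are easier to hit.
    """
    x, y = touch
    cx, cy, half = item["cx"], item["cy"], item["half_width"]
    slack = (1.0 - item["confidence"]) * half  # extra margin for low confidence
    return abs(x - cx) <= half + slack and abs(y - cy) <= half + slack
```

With this rule, the same touch point can select a low-confidence item while missing a high-confidence one at the same position.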
  • Patent number: 10048842
Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Grant
    Filed: June 15, 2015
    Date of Patent: August 14, 2018
    Assignee: Google LLC
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Publication number: 20180012591
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Application
    Filed: September 21, 2017
    Publication date: January 11, 2018
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
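The endpointing logic in the abstract above can be sketched minimally. This is a simplified assumption, not the claimed implementation: the expected-result set and function names are hypothetical. It shows the central step of comparing each intermediate hypothesis against context-derived expected results and declaring end-of-speech on a match, rather than waiting for a silence timeout.

```python
EXPECTED_RESULTS = {"yes", "no", "cancel"}  # hypothetical context data

def process_intermediate(intermediate, expected=EXPECTED_RESULTS):
    """Return (end_of_speech, final_result).

    Endpoint early: if an intermediate hypothesis from the recognizer
    matches an expected result indicated by context data, set the end of
    speech condition and emit that hypothesis as the final result.
    """
    hypothesis = intermediate.strip().lower()
    if hypothesis in expected:
        return True, hypothesis
    return False, None
```

Matching against expected results lets the recognizer stop as soon as a contextually valid answer appears, reducing latency for constrained prompts such as yes/no questions.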
  • Patent number: 9842489
    Abstract: The disclosed subject matter provides a main device and at least one secondary device. The at least one secondary device and the main device may operate in cooperation with one another and other networked components to provide improved performance, such as improved speech and other signal recognition operations. Using the improved recognition results, a higher probability of generating the proper commands to a controllable device is provided.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: December 12, 2017
Assignee: Google LLC
    Inventor: Glen Shires
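One way the main-device/secondary-device cooperation described above could improve recognition is by fusing hypotheses from multiple devices. The following sketch is an illustrative assumption (score summation and all names are hypothetical, not the patented mechanism): agreement across devices raises a command's combined score, increasing the probability that the proper command is issued.

```python
def fuse_hypotheses(device_results):
    """Pick the most likely command across cooperating devices.

    device_results: list of dicts mapping command -> confidence score,
    one dict per device (main or secondary). Scores are summed, so a
    command heard by several devices outranks one heard by only one.
    """
    combined = {}
    for result in device_results:
        for command, score in result.items():
            combined[command] = combined.get(command, 0.0) + score
    return max(combined, key=combined.get)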
  • Publication number: 20170069309
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated b
    Type: Application
    Filed: June 24, 2016
    Publication date: March 9, 2017
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Publication number: 20170069308
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated b
    Type: Application
    Filed: September 3, 2015
    Publication date: March 9, 2017
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
  • Publication number: 20160364118
    Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques describe may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
    Type: Application
    Filed: June 15, 2015
    Publication date: December 15, 2016
    Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
  • Patent number: 9420227
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: August 16, 2016
    Assignee: Google Inc.
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Patent number: 9035996
    Abstract: A method of adding a computing device to a multi-device video communication session. A server receives recorded content from a plurality of multi-device video communication sessions and a search request from a computing device. The server identifies a first multi-device video communication session based on the search request. The first multi-device video communication session includes a weighted list of text elements. The server transmits information based on the weighted list of text elements to the computing device, receives a selection from the computing device corresponding to a first text element, and transmits at least a portion of the recorded content from the first multi-device video communication session to the computing device based on the first text element. The server receives an add request for the computing device to be added to the first multi-device video communication session and transmits the add request to the first multi-device video communication session.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventors: Glen Shires, Maryam Garrett
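The session-search step in the abstract above (matching a search request against each session's weighted list of text elements) could be sketched as below. This is a hedged illustration, not the patented method; the scoring rule and names are assumptions.

```python
def find_session(sessions, query):
    """Identify the best-matching video communication session.

    sessions: dict mapping session_id -> weighted list of text elements,
    represented here as {word: weight}. A session's score is the summed
    weight of its text elements that appear in the search request.
    """
    terms = set(query.lower().split())

    def score(weights):
        return sum(w for word, w in weights.items() if word in terms)

    best = max(sessions, key=lambda sid: score(sessions[sid]))
    return best if score(sessions[best]) > 0 else None
```

The returned session id would then drive the later steps in the abstract: transmitting recorded content and forwarding the add request to that session.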
  • Publication number: 20140229184
    Abstract: The disclosed subject matter provides a main device and at least one secondary device. The at least one secondary device and the main device may operate in cooperation with one another and other networked components to provide improved performance, such as improved speech and other signal recognition operations. Using the improved recognition results, a higher probability of generating the proper commands to a controllable device is provided.
    Type: Application
    Filed: February 14, 2013
    Publication date: August 14, 2014
Applicant: Google Inc.
    Inventor: Glen Shires
  • Patent number: 8681954
    Abstract: A method of adding a computing device to a multi-device video communication session. A server receives recorded content from a plurality of multi-device video communication sessions and a search request from a computing device. The server identifies a first multi-device video communication session based on the search request. The first multi-device video communication session includes a weighted list of text elements. The server transmits information based on the weighted list of text elements to the computing device, receives a selection from the computing device corresponding to a first text element, and transmits at least a portion of the recorded content from the first multi-device video communication session to the computing device based on the first text element. The server receives an add request for the computing device to be added to the first multi-device video communication session and transmits the add request to the first multi-device video communication session.
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: March 25, 2014
    Assignee: Google Inc.
    Inventors: Glen Shires, Maryam Garrett
  • Patent number: 8654942
    Abstract: A method of multi-device video communication. A server receives recorded content from a multi-device video communication session and processes the recorded content to detect vocal expressions from a plurality of participants. The server generates a plurality of text elements each corresponding to one or more of the vocal expressions. The server receives at least one rating for at least one participant of the plurality of participants and generates a word cloud based on the plurality of text elements and at least in part on the at least one rating for the at least one participant.
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: February 18, 2014
    Assignee: Google Inc.
    Inventors: Maryam Garrett, Glen Shires
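The rating-weighted word cloud described above can be sketched in a few lines. This is an assumed formulation (per-word weight = occurrences scaled by the speaker's rating); the actual claimed weighting may differ, and all names are hypothetical.

```python
def word_cloud_weights(utterances, ratings):
    """Build word-cloud weights from detected vocal expressions.

    utterances: list of (participant, word) pairs derived from the
    recorded session. ratings: dict mapping participant -> rating.
    Each occurrence contributes the speaker's rating to the word's
    weight, so highly rated participants influence the cloud more.
    """
    weights = {}
    for participant, word in utterances:
        weights[word] = weights.get(word, 0.0) + ratings.get(participant, 1.0)
    return weights
```

The resulting weights would set the relative display size of each text element in the rendered word cloud.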
  • Patent number: 8612211
    Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: December 17, 2013
    Assignee: Google Inc.
    Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
  • Publication number: 20070250311
    Abstract: A method for managing audio data includes identifying a condition in the audio data. A rate of playback of the audio data is automatically adjusted in response to identifying the condition. Other embodiments are disclosed.
    Type: Application
    Filed: April 25, 2006
    Publication date: October 25, 2007
    Inventor: Glen Shires
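The condition-driven playback adjustment above could look like the following sketch, assuming the detected condition is a low-energy (silent) stretch of audio; the threshold, rates, and names are hypothetical, not the patented values.

```python
def playback_rate(frame_energy, silence_threshold=0.1,
                  normal_rate=1.0, fast_rate=2.0):
    """Automatically adjust playback rate in response to a detected condition.

    Here the condition is silence: low-energy frames are played back fast,
    and the rate returns to normal when speech energy is detected.
    """
    return fast_rate if frame_energy < silence_threshold else normal_rate
```

Applied per frame, this skips quickly through pauses while leaving speech intelligible.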
  • Publication number: 20060293897
    Abstract: A distributed voice user interface system includes a local device which receives speech input issued from a user. Such speech input may specify a command or a request by the user. The local device performs preliminary processing of the speech input and determines whether it is able to respond to the command or request by itself. If not, the local device initiates communication with a remote system for further processing of the speech input.
    Type: Application
    Filed: August 31, 2006
    Publication date: December 28, 2006
    Applicant: Ben Franklin Patent Holding LLC
    Inventors: George White, James Buteau, Glen Shires, Kevin Surace, Steven Markman
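The local/remote split in the distributed voice user interface above can be sketched minimally. This is an illustrative assumption (the local command set, the callback interface, and all names are hypothetical): the local device handles what its own grammar covers and delegates everything else to the remote system.

```python
LOCAL_COMMANDS = {"volume up", "volume down", "pause"}  # hypothetical local grammar

def handle_speech(text, remote):
    """Preliminary local processing with remote fallback.

    If the recognized text is a command the local device can respond to
    by itself, handle it locally; otherwise initiate communication with
    the remote system (here, a callable) for further processing.
    """
    command = text.strip().lower()
    if command in LOCAL_COMMANDS:
        return f"local:{command}"
    return remote(command)

def remote_stub(command):
    """Stand-in for the remote system's response."""
    return f"remote:{command}"
```

Keeping simple commands on-device avoids a network round trip, while the remote system handles open-ended requests.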