Patents by Inventor Glen Shires
Glen Shires has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20230315987
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Application
  Filed: April 27, 2023
  Publication date: October 5, 2023
  Applicant: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
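The method this abstract describes, coalescing per-attendee decoded text into a single time-ordered transcript and then deriving session characteristics from it, can be sketched roughly as follows. This is a minimal illustration rather than the patented implementation: the function names, the (timestamp, text) utterance format, and the keyword-counting heuristic standing in for "detecting characteristics" are all assumptions.

```python
from collections import Counter

def coalesce_transcripts(decoded):
    """Merge per-attendee decoded text into one time-ordered transcript.

    `decoded` maps attendee name -> list of (timestamp, text) utterances.
    Returns a list of (timestamp, speaker, text) tuples sorted by time.
    """
    merged = []
    for speaker, utterances in decoded.items():
        for ts, text in utterances:
            merged.append((ts, speaker, text))
    merged.sort()  # interleave utterances across speakers by timestamp
    return merged

def detect_characteristics(transcript, topic_keywords):
    """Derive simple session context (dominant topics) from the transcript."""
    counts = Counter()
    for _, _, text in transcript:
        for word in text.lower().split():
            if word in topic_keywords:
                counts[word] += 1
    return [topic for topic, _ in counts.most_common(3)]
```

A session whose attendees keep mentioning "hiking" would then surface that topic as context, which is the kind of signal the abstract's "characteristics of the session" could cover.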
- Patent number: 11669683
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Grant
  Filed: May 18, 2020
  Date of Patent: June 6, 2023
  Assignee: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Patent number: 11334182
  Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
  Type: Grant
  Filed: December 10, 2019
  Date of Patent: May 17, 2022
  Assignee: Google LLC
  Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
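The confidence-based disambiguation this abstract describes can be sketched as follows. Treating a low recognition confidence as a larger effective touch target (so likely errors are easier to select for correction) is one plausible reading of "based at least on a level of confidence"; the class, function names, and radius formula are invented for illustration, not taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class HypothesizedItem:
    text: str
    x: float           # center of the item on the display
    y: float
    confidence: float  # 0.0 (likely wrong) .. 1.0 (likely correct)

def select_item(items, touch_x, touch_y, base_radius=20.0):
    """Return the item a touch selects, growing the touch target of
    low-confidence items so likely recognition errors are easier to hit."""
    best = None
    for item in items:
        # Lower confidence -> larger effective radius around the item.
        radius = base_radius * (2.0 - item.confidence)
        dist = math.hypot(touch_x - item.x, touch_y - item.y)
        if dist <= radius and (best is None or dist < best[0]):
            best = (dist, item)
    return best[1] if best else None
```

With this rule, a touch that lands between a high-confidence word and a nearby low-confidence one resolves to the low-confidence word, matching the abstract's goal of making items "that the user may wish to correct" easier to select.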
- Publication number: 20210090554
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Application
  Filed: December 8, 2020
  Publication date: March 25, 2021
  Applicant: Google LLC
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
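The endpointing idea in this abstract, finalizing recognition as soon as an intermediate hypothesis matches a context-predicted result rather than waiting for silence, can be sketched as follows. The function name, the case-insensitive string match, and the contact-list example are assumptions for illustration, not the patent's actual matching logic.

```python
def endpoint_with_context(intermediate_results, expected_results):
    """Finalize recognition early when an intermediate hypothesis matches
    a context-predicted expected result.

    `intermediate_results` is an iterable of partial hypotheses in the
    order the recognizer emits them; `expected_results` comes from
    context (for example, a contact list when the user is dialing).
    Returns the final result, or None if no hypothesis ever matched.
    """
    expected = {r.lower() for r in expected_results}
    for hypothesis in intermediate_results:
        if hypothesis.lower() in expected:
            # End-of-speech condition met: stop listening, emit final result.
            return hypothesis
    return None  # no match; fall back to ordinary endpointing
```

The benefit the abstract implies is latency: once "call mom" appears as an intermediate result and matches an expected phrase, the recognizer need not wait out a silence timeout before acting.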
- Patent number: 10885898
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Grant
  Filed: September 21, 2017
  Date of Patent: January 5, 2021
  Assignee: Google LLC
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
- Publication number: 20200279074
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Application
  Filed: May 18, 2020
  Publication date: September 3, 2020
  Applicant: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Publication number: 20200183569
  Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
  Type: Application
  Filed: December 10, 2019
  Publication date: June 11, 2020
  Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
- Patent number: 10679005
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Grant
  Filed: October 30, 2019
  Date of Patent: June 9, 2020
  Assignee: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Publication number: 20200065379
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Application
  Filed: October 30, 2019
  Publication date: February 27, 2020
  Applicant: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Patent number: 10545647
  Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
  Type: Grant
  Filed: July 25, 2018
  Date of Patent: January 28, 2020
  Assignee: Google LLC
  Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
- Patent number: 10496746
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Grant
  Filed: December 11, 2018
  Date of Patent: December 3, 2019
  Assignee: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Patent number: 10339917
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Grant
  Filed: September 3, 2015
  Date of Patent: July 2, 2019
  Assignee: Google LLC
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
- Publication number: 20190121851
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Application
  Filed: December 11, 2018
  Publication date: April 25, 2019
  Applicant: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Patent number: 10185711
  Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving two or more data sets each representing speech of a corresponding individual attending an internet-based social networking video conference session, decoding the received data sets to produce corresponding text for each individual attending the internet-based social networking video conference, and detecting characteristics of the session from a coalesced transcript produced from the decoded text of the attending individuals for providing context to the internet-based social networking video conference session.
  Type: Grant
  Filed: July 5, 2016
  Date of Patent: January 22, 2019
  Assignee: Google LLC
  Inventors: Glen Shires, Sterling Swigart, Jonathan Zolla, Jason J. Gauci
- Publication number: 20190012064
  Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
  Type: Application
  Filed: July 25, 2018
  Publication date: January 10, 2019
  Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
- Patent number: 10048842
  Abstract: In some implementations, data indicating a touch received on a proximity-sensitive display is received while the proximity-sensitive display is presenting one or more items. In one aspect, the techniques described may involve a process for disambiguating touch selections of hypothesized items, such as text or graphical objects that have been generated based on input data, on a proximity-sensitive display. This process may allow a user to more easily select hypothesized items that the user may wish to correct, by determining whether a touch received through the proximity-sensitive display represents a selection of each hypothesized item based at least on a level of confidence that the hypothesized item accurately represents the input data.
  Type: Grant
  Filed: June 15, 2015
  Date of Patent: August 14, 2018
  Assignee: Google LLC
  Inventors: Jakob Nicolaus Foerster, Diego Melendo Casado, Glen Shires
- Publication number: 20180012591
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Application
  Filed: September 21, 2017
  Publication date: January 11, 2018
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
- Patent number: 9842489
  Abstract: The disclosed subject matter provides a main device and at least one secondary device. The at least one secondary device and the main device may operate in cooperation with one another and other networked components to provide improved performance, such as improved speech and other signal recognition operations. Using the improved recognition results, a higher probability of generating the proper commands to a controllable device is provided.
  Type: Grant
  Filed: February 14, 2013
  Date of Patent: December 12, 2017
  Assignee: Google LLC
  Inventor: Glen Shires
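One way cooperating devices can improve recognition, as this abstract suggests, is to combine the hypotheses each device produces so that agreement across devices outweighs any single device's errors. The sketch below uses confidence-weighted voting; the function name, the input format, and the score-summing scheme are assumptions for illustration, not the patent's actual method.

```python
from collections import defaultdict

def fuse_hypotheses(device_results):
    """Combine speech hypotheses from a main device and secondary devices.

    `device_results` maps a device id to a list of (hypothesis, confidence)
    pairs from that device's recognizer. Confidences for identical
    hypotheses are summed across devices, so cross-device agreement raises
    the probability of selecting the proper command.
    """
    scores = defaultdict(float)
    for device, hypotheses in device_results.items():
        for text, confidence in hypotheses:
            scores[text.lower()] += confidence
    if not scores:
        return None
    return max(scores, key=scores.get)
```

For example, if the main device slightly prefers a wrong hypothesis but a nearby secondary device also heard the correct one, the summed scores pick the correct command.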
- Publication number: 20170069309
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Application
  Filed: June 24, 2016
  Publication date: March 9, 2017
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
- Publication number: 20170069308
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
  Type: Application
  Filed: September 3, 2015
  Publication date: March 9, 2017
  Inventors: Petar Aleksic, Glen Shires, Michael Buchanan