Patents by Inventor Vinay Venkataraman
Vinay Venkataraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 10656910

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Grant
Filed: July 24, 2018
Date of Patent: May 19, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan Lenchner, Vinay Venkataraman
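The matching step this abstract describes — comparing verb parts and noun parts individually and as pairs against a set of sample utterances — can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: the first-word/last-word "parser", the sample command table, and the score threshold are all assumptions standing in for the hardware-based recognizer.

```python
# Toy sketch of verb/noun matching against sample utterances.
# The parse heuristic and command table below are hypothetical.

SAMPLE_UTTERANCES = {
    ("open", "door"): "OPEN_DOOR",
    ("close", "window"): "CLOSE_WINDOW",
}

def parse(utterance):
    """Split an utterance into a (verb, noun) pair: first word as the
    verb part, last word as the noun part (toy assumption)."""
    words = utterance.lower().split()
    return (words[0], words[-1])

def match_score(user_pair, sample_pair):
    """Compare verb parts and noun parts individually and as a pair."""
    verb_hit = user_pair[0] == sample_pair[0]
    noun_hit = user_pair[1] == sample_pair[1]
    pair_hit = verb_hit and noun_hit
    return verb_hit + noun_hit + pair_hit  # score in 0..3

def recognize(utterance):
    """Return a command only when the match is strong enough, mirroring
    'selectively performing ... responsive to a recognition result'."""
    user_pair = parse(utterance)
    best = max(SAMPLE_UTTERANCES, key=lambda s: match_score(user_pair, s))
    if match_score(user_pair, best) >= 2:
        return SAMPLE_UTTERANCES[best]
    return None

print(recognize("open the door"))  # OPEN_DOOR
```

In the claimed method this comparison is fused with gesture recognition; the sketch covers only the utterance side.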
Patent number: 10656909

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Grant
Filed: July 24, 2018
Date of Patent: May 19, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan Lenchner, Vinay Venkataraman
Patent number: 10248652

Abstract: Systems, methods, and apparatus of providing a visual writing aid are provided. In one example embodiment, a method includes obtaining data descriptive of a first set of information, wherein the first set of information is presented in a first language. The method includes determining a translation of the first set of information to a second language. The method includes presenting a visual representation of the translation of the first set of information in the second language via a display device. The method includes obtaining data descriptive of a second set of information. The second set of information includes a transcription of at least a portion of the first set of information in the second language generated via a mobile writing device. The method includes determining whether the second set of information corresponds to the visual representation of the translation of the first set of information in the second language.Type: Grant
Filed: September 29, 2017
Date of Patent: April 2, 2019
Assignee: Google LLC
Inventors: Vinay Venkataraman, Eric Aboussouan, David Frakes
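The flow in this abstract — translate a prompt, display the translation, then check whether the learner's transcription corresponds to it — can be sketched as below. The dictionary-based "translation" and the whitespace/case normalization are illustrative assumptions; the patent does not specify either.

```python
# Toy sketch of the visual-writing-aid check: translate, display,
# then verify the transcription. The translation table is hypothetical.

TRANSLATIONS = {"hello": "hola", "thank you": "gracias"}  # assumed data

def translate(text):
    """Look up a translation of the first set of information."""
    return TRANSLATIONS[text.lower()]

def normalize(text):
    """Ignore case and stray whitespace when comparing the transcription
    captured from the mobile writing device (an assumed tolerance)."""
    return " ".join(text.lower().split())

def transcription_matches(prompt, transcription):
    """Does the second set of information correspond to the displayed
    translation of the first set?"""
    return normalize(transcription) == normalize(translate(prompt))

print(transcription_matches("Hello", "  HOLA "))  # True
```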
Publication number: 20180329679

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Application
Filed: July 24, 2018
Publication date: November 15, 2018
Inventors: Jonathan Lenchner, Vinay Venkataraman
Publication number: 20180329680

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Application
Filed: July 24, 2018
Publication date: November 15, 2018
Inventors: Jonathan Lenchner, Vinay Venkataraman
Patent number: 10048934

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Grant
Filed: February 16, 2015
Date of Patent: August 14, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan Lenchner, Vinay Venkataraman
Patent number: 10048935

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Grant
Filed: June 24, 2015
Date of Patent: August 14, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan Lenchner, Vinay Venkataraman
Publication number: 20180158348

Abstract: Systems and methods for providing instructional guidance relating to an instructive writing instrument are provided. For instance, a first visual contextual signal instructing a user to actuate an instructive writing instrument in a first direction can be provided based at least in part on a model object. The model object can correspond to an object to be rendered on a writing surface by a user using the instructive writing instrument. A first image depicting the writing surface can be obtained. First position data associated with the instructive writing instrument can be determined based at least in part on the first image. A second visual contextual signal instructing the user to actuate the instructive writing instrument in a second direction can be provided based at least in part on the model object and the first position data associated with the instructive writing instrument.

Type: Application
Filed: October 31, 2017
Publication date: June 7, 2018
Inventors: Vinay Venkataraman, Eric Aboussouan, David Frakes
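The guidance loop this abstract describes — estimate the instrument's position from an image, compare it against the model object, and emit the next directional signal — might be sketched like this. The 2D point comparison, tolerance, and cue vocabulary are assumptions for illustration; the application leaves the position-estimation details to the imaging pipeline.

```python
# Toy sketch of the directional-guidance step: compare the pen tip's
# estimated position with the next point of the model object and emit
# a visual contextual cue. Coordinates use image convention (y grows
# downward); the tolerance and cue names are assumed.

def next_cue(pen_pos, target_pos, tol=2.0):
    """Return the direction the user should move the instrument, or
    'stop' once the tip is within tolerance of the target point."""
    dx = target_pos[0] - pen_pos[0]
    dy = target_pos[1] - pen_pos[1]
    if abs(dx) <= tol and abs(dy) <= tol:
        return "stop"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Guide the pen along a model stroke (e.g. drawing the letter "L"):
model_stroke = [(0, 0), (0, 10), (6, 10)]
print(next_cue((0, 0), model_stroke[1]))  # down
```

Each new camera frame would update `pen_pos` and produce the "second visual contextual signal" of the claim.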
Patent number: 9785741

Abstract: A method for providing a remote user with an experience in an environment, comprising building a three-dimensional (3D) model of the environment, capturing one or more video feeds of at least a portion of the environment using one or more cameras in the environment, mapping the one or more video feeds onto one or more planes in the 3D model, providing a view of the mapped one or more video feeds on the one or more planes in the 3D model through a display device viewed by the remote user, capturing a gestural input from the remote user, and applying the gestural input to the portion of the environment.

Type: Grant
Filed: December 30, 2015
Date of Patent: October 10, 2017
Assignee: International Business Machines Corporation
Inventors: Jonathan Lenchner, Vinay Venkataraman
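The core geometric step here — mapping a video feed onto a plane in the 3D model — amounts to placing each normalized frame coordinate on a textured quad. A minimal sketch, assuming a flat rectangular wall defined by an origin and two edge vectors (the geometry below is an invented example, not from the patent):

```python
# Minimal sketch of mapping a video feed onto a plane in the 3D model:
# each normalized frame pixel (u, v) in [0,1]^2 lands on a quad given
# by an origin point and two edge vectors.

def map_pixel_to_plane(u, v, origin, edge_u, edge_v):
    """Return the 3D point where frame coordinates (u, v) land."""
    return tuple(origin[i] + u * edge_u[i] + v * edge_v[i] for i in range(3))

# An assumed wall quad in the environment model: 4 m wide, 3 m tall, at z = 5.
origin = (0.0, 0.0, 5.0)
edge_u = (4.0, 0.0, 0.0)   # horizontal extent of the feed
edge_v = (0.0, 3.0, 0.0)   # vertical extent of the feed

print(map_pixel_to_plane(0.5, 0.5, origin, edge_u, edge_v))  # (2.0, 1.5, 5.0)
```

A renderer would evaluate this per frame so the remote user's display shows live video on the model's surfaces.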
Publication number: 20170193711

Abstract: A method for providing a remote user with an experience in an environment, comprising building a three-dimensional (3D) model of the environment, capturing one or more video feeds of at least a portion of the environment using one or more cameras in the environment, mapping the one or more video feeds onto one or more planes in the 3D model, providing a view of the mapped one or more video feeds on the one or more planes in the 3D model through a display device viewed by the remote user, capturing a gestural input from the remote user, and applying the gestural input to the portion of the environment.

Type: Application
Filed: December 30, 2015
Publication date: July 6, 2017
Inventors: Jonathan Lenchner, Vinay Venkataraman
Publication number: 20160239259

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Application
Filed: June 24, 2015
Publication date: August 18, 2016
Inventors: Jonathan Lenchner, Vinay Venkataraman
Publication number: 20160239258

Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.

Type: Application
Filed: February 16, 2015
Publication date: August 18, 2016
Inventors: Jonathan Lenchner, Vinay Venkataraman