Patents by Inventor Vinay Venkataraman

Vinay Venkataraman has filed patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10656910
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Patent number: 10656909
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Patent number: 10248652
    Abstract: Systems, methods, and apparatus of providing a visual writing aid are provided. In one example embodiment, a method includes obtaining data descriptive of a first set of information, wherein the first set of information is presented in a first language. The method includes determining a translation of the first set of information to a second language. The method includes presenting a visual representation of the translation of the first set of information in the second language via a display device. The method includes obtaining data descriptive of a second set of information. The second set of information includes a transcription of at least a portion of the first set of information in the second language generated via a mobile writing device. The method includes determining whether the second set of information corresponds to the visual representation of the translation of the first set of information in the second language.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 2, 2019
    Assignee: Google LLC
    Inventors: Vinay Venkataraman, Eric Aboussouan, David Frakes
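The correspondence check at the end of this abstract, comparing the user's transcription against the displayed translation, can be sketched as below. The translation table and the normalization rule are illustrative assumptions; the patent does not specify either.

```python
# Hypothetical sketch: translate text into a second language, treat the
# translation as the model, and check whether the user's transcription
# corresponds to it.

TRANSLATIONS = {("hello", "es"): "hola", ("cat", "es"): "gato"}

def translate(text: str, target_lang: str) -> str:
    """Look up a translation (a real system would call a translation service)."""
    return TRANSLATIONS[(text.lower(), target_lang)]

def normalize(text: str) -> str:
    """Ignore case and surrounding whitespace when comparing transcriptions."""
    return text.strip().lower()

def transcription_matches(first_lang_text: str, target_lang: str,
                          transcription: str) -> bool:
    """Does the user's transcription correspond to the displayed translation?"""
    model = translate(first_lang_text, target_lang)
    return normalize(transcription) == normalize(model)

print(transcription_matches("Hello", "es", " Hola "))  # -> True
```

In the claimed system the transcription arrives from a mobile writing device rather than as a string, so a handwriting-recognition step would precede this comparison.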
  • Publication number: 20180329679
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Application
    Filed: July 24, 2018
    Publication date: November 15, 2018
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Publication number: 20180329680
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Application
    Filed: July 24, 2018
    Publication date: November 15, 2018
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Patent number: 10048934
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Grant
    Filed: February 16, 2015
    Date of Patent: August 14, 2018
    Assignee: International Business Machines Corporation
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Patent number: 10048935
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: August 14, 2018
    Assignee: International Business Machines Corporation
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Publication number: 20180158348
    Abstract: Systems and methods for providing instructional guidance relating to an instructive writing instrument are provided. For instance, a first visual contextual signal instructing a user to actuate an instructive writing instrument in a first direction can be provided based at least in part on a model object. The model object can correspond to an object to be rendered on a writing surface by a user using the instructive writing instrument. A first image depicting the writing surface can be obtained. First position data associated with the instructive writing instrument can be determined based at least in part on the first image. A second visual contextual signal instructing the user to actuate the instructive writing instrument in a second direction can be provided based at least in part on the model object and the first position data associated with the instructive writing instrument.
    Type: Application
    Filed: October 31, 2017
    Publication date: June 7, 2018
    Inventors: Vinay Venkataraman, Eric Aboussouan, David Frakes
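The guidance loop in this abstract, signaling a direction based on the model object and the pen's observed position, can be sketched as follows. Estimating the pen position from an image is out of scope here, so positions are supplied directly; all names are illustrative assumptions.

```python
# Hypothetical sketch: compare the writing instrument's position against
# the next point of the model object and emit a coarse directional signal.

def direction_signal(pen: tuple[float, float],
                     target: tuple[float, float]) -> str:
    """Return a coarse visual-signal direction toward the target point."""
    dx, dy = target[0] - pen[0], target[1] - pen[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

def guide(pen_positions, model_points):
    """Emit one contextual signal per observed pen position."""
    return [direction_signal(p, t) for p, t in zip(pen_positions, model_points)]

print(guide([(0, 0), (1, 0)], [(1, 0), (1, 2)]))  # -> ['right', 'up']
```

The second signal in the claim depends on both the model object and the first position estimate, which is what the per-position loop above mimics.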
  • Patent number: 9785741
    Abstract: A method for providing a remote user with an experience in an environment, comprising building a three-dimensional (3D) model of the environment, capturing one or more video feeds of at least a portion of the environment using one or more cameras in the environment, mapping the one or more video feeds onto one or more planes in the 3D model, providing a view of the mapped one or more video feeds on the one or more planes in the 3D model through a display device viewed by the remote user, capturing a gestural input from the remote user, and applying the gestural input to the portion of the environment.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: October 10, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Publication number: 20170193711
    Abstract: A method for providing a remote user with an experience in an environment, comprising building a three-dimensional (3D) model of the environment, capturing one or more video feeds of at least a portion of the environment using one or more cameras in the environment, mapping the one or more video feeds onto one or more planes in the 3D model, providing a view of the mapped one or more video feeds on the one or more planes in the 3D model through a display device viewed by the remote user, capturing a gestural input from the remote user, and applying the gestural input to the portion of the environment.
    Type: Application
    Filed: December 30, 2015
    Publication date: July 6, 2017
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Publication number: 20160239259
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Application
    Filed: June 24, 2015
    Publication date: August 18, 2016
    Inventors: Jonathan Lenchner, Vinay Venkataraman
  • Publication number: 20160239258
    Abstract: A method and system are provided. The method includes receiving, by a microphone and camera, user utterances indicative of user commands and associated user gestures for the user utterances. The method further includes parsing, by a hardware-based recognizer, sample utterances and the user utterances into verb parts and noun parts. The method also includes recognizing, by a hardware-based recognizer, the user utterances and the associated user gestures based on the sample utterances and descriptions of associated supporting gestures for the sample utterances. The recognizing step includes comparing the verb parts and the noun parts from the user utterances individually and as pairs to the verb parts and the noun parts of the sample utterances. The method additionally includes selectively performing a given one of the user commands responsive to a recognition result.
    Type: Application
    Filed: February 16, 2015
    Publication date: August 18, 2016
    Inventors: Jonathan Lenchner, Vinay Venkataraman