Patents by Inventor Ramadevi Vennelakanti

Ramadevi Vennelakanti has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9619018
    Abstract: In one example, a method for multimodal human-machine interaction includes sensing a body posture of a participant using a camera (605) and evaluating the body posture to determine a posture-based probability of communication modalities from the participant (610). The method further includes detecting control input through a communication modality from the participant to the multimedia device (615) and weighting the control input by the posture-based probability (620).
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: April 11, 2017
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ramadevi Vennelakanti, Anbumani Subramanian, Prasenjit Dey, Sriganesh Madhvanath, Dinesh Mandalapu
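As a rough illustration of the idea in this abstract, the sketch below weights a detected control input by a posture-derived prior over communication modalities. The posture names, modality names, and prior values are invented for illustration; the patent does not specify them.

```python
# Sketch of posture-weighted multimodal input (illustrative only).
# The posture/modality names and prior values below are invented,
# not taken from the patent.
POSTURE_MODALITY_PRIOR = {
    "leaning_forward": {"touch": 0.6, "speech": 0.3, "gesture": 0.1},
    "reclined":        {"touch": 0.1, "speech": 0.5, "gesture": 0.4},
}

def weight_control_input(posture, modality, raw_confidence):
    """Weight a detected control input by the posture-based probability."""
    prior = POSTURE_MODALITY_PRIOR.get(posture, {}).get(modality, 0.0)
    return raw_confidence * prior

# A speech command detected while the participant is reclined:
weighted = weight_control_input("reclined", "speech", 0.8)
```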
  • Publication number: 20150301725
    Abstract: Creating a multimodal object of a user response to a media object can include capturing a multimodal user response to the media object, mapping the multimodal user response to a file of the media object, and creating a multimodal object including the mapped multimodal user response and the media object.
    Type: Application
    Filed: December 7, 2012
    Publication date: October 22, 2015
    Inventors: Sriganesh Madhvanath, Ramadevi Vennelakanti, Prasenjit Dey
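A minimal sketch of the flow this abstract describes, using an invented dict-based representation (the patent does not prescribe a data format): capture a response, map it to the media file, and bundle both into one object.

```python
# Illustrative only: dict-based stand-in for a "multimodal object".
def create_multimodal_object(media_file, response):
    # Map the captured multimodal user response to the media file...
    mapped_response = dict(response, mapped_to=media_file)
    # ...and create one object holding both the mapping and the media.
    return {"media": media_file, "response": mapped_response}

obj = create_multimodal_object("song.mp3", {"smile_at_seconds": 42.0})
```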
  • Patent number: 9129604
    Abstract: System and method for using information extracted from intuitive multimodal interactions in the context of media for media tagging are disclosed. In one embodiment, multimodal information related to media is captured during multimodal interactions of a plurality of users. The multimodal information includes speech information and gesture information. Further, the multimodal information is analyzed to identify speech portions of interest. Furthermore, relevant tags for tagging the media are extracted from the speech portions of interest.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: September 8, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ramadevi Vennelakanti, Prasenjit Dey, Sriganesh Madhvanath
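One plausible reading of "speech portions of interest" is speech that co-occurs with a gesture; the sketch below follows that reading. The overlap rule and stopword list are assumptions, not taken from the patent.

```python
# Illustrative only: tag extraction from speech overlapping a gesture.
def overlaps(seg, gesture):
    return seg["start"] < gesture["end"] and gesture["start"] < seg["end"]

def extract_tags(speech_segments, gestures,
                 stopwords=frozenset({"the", "a", "at", "look"})):
    tags = []
    for seg in speech_segments:
        # Treat speech coinciding with a gesture as a portion of interest.
        if any(overlaps(seg, g) for g in gestures):
            tags.extend(w for w in seg["text"].lower().split()
                        if w not in stopwords)
    return tags

segments = [{"start": 0.0, "end": 2.0, "text": "look at the beach"},
            {"start": 5.0, "end": 6.0, "text": "anyway"}]
gestures = [{"start": 1.0, "end": 1.5}]
tags = extract_tags(segments, gestures)  # only the first segment overlaps
```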
  • Patent number: 8869073
    Abstract: Provided is a method of hand pose interaction. The method recognizes a user input related to selection of an object displayed on a computing device and displays a graphical user interface (GUI) corresponding to the object. The graphical user interface comprises at least one representation of a hand pose, wherein each representation of a hand pose corresponds to a unique function associated with the object. Upon recognition of a user hand pose corresponding to a hand pose representation in the graphical user interface, the function associated with the hand pose representation is executed.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: October 21, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Dustin Freeman, Sriganesh Madhvanath, Ankit Shekhawat, Ramadevi Vennelakanti
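The pose-to-function mapping this abstract describes can be sketched as below. The pose names and object functions are invented for illustration; the patent only requires that each hand-pose representation in the GUI correspond to a unique function on the selected object.

```python
# Illustrative only: hand-pose representations mapped to object functions.
class PhotoObject:
    def __init__(self, name):
        self.name = name
        self.shared = False
        self.deleted = False

    def share(self):
        self.shared = True

    def delete(self):
        self.deleted = True

# Invented pose names; each maps to one function on the object.
POSE_ACTIONS = {"open_palm": "share", "fist": "delete"}

def on_pose_recognized(obj, pose):
    # Execute the function associated with the recognized hand pose, if any.
    action = POSE_ACTIONS.get(pose)
    if action is not None:
        getattr(obj, action)()

photo = PhotoObject("vacation.jpg")
on_pose_recognized(photo, "open_palm")  # photo is now shared
```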
  • Publication number: 20140132505
    Abstract: In one example, a method for multimodal human-machine interaction includes sensing a body posture of a participant using a camera (605) and evaluating the body posture to determine a posture-based probability of communication modalities from the participant (610). The method further includes detecting control input through a communication modality from the participant to the multimedia device (615) and weighting the control input by the posture-based probability (620).
    Type: Application
    Filed: May 23, 2011
    Publication date: May 15, 2014
    Inventors: Ramadevi Vennelakanti, Anbumani Subramanian, Prasenjit Dey, Sriganesh Madhvanath, Dinesh Mandalapu
  • Publication number: 20130241834
    Abstract: System and method for using information extracted from intuitive multimodal interactions in the context of media for media tagging are disclosed. In one embodiment, multimodal information related to media is captured during multimodal interactions of a plurality of users. The multimodal information includes speech information and gesture information. Further, the multimodal information is analyzed to identify speech portions of interest. Furthermore, relevant tags for tagging the media are extracted from the speech portions of interest.
    Type: Application
    Filed: November 16, 2010
    Publication date: September 19, 2013
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Ramadevi Vennelakanti, Prasenjit Dey, Sriganesh Madhvanath
  • Patent number: 8464183
    Abstract: A method and system of distinguishing multimodal HCI from ambient human interactions using wake up commands is disclosed. In one embodiment, in a method of distinguishing multimodal HCI from ambient human interactions, a wake up command is detected by a computing system. The computing system is then woken up to receive a valid user command from a user upon detecting the wake up command. A countdown timer is substantially simultaneously turned on upon waking up the computing system to receive valid user commands. The countdown timer is set based on application usage parameters such as semantics of the valid user command and context of an application associated with the valid user command.
    Type: Grant
    Filed: July 23, 2010
    Date of Patent: June 11, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ramadevi Vennelakanti, Prasenjit Dey
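The countdown mechanism in this abstract can be sketched as follows. The context names and window durations are invented; the patent describes setting the countdown from command semantics and application context without specifying values.

```python
import time

# Illustrative only: invented context -> listen-window durations (seconds).
WINDOW_FOR_CONTEXT = {"media_player": 10.0, "photo_browser": 5.0}
DEFAULT_WINDOW = 3.0

class WakeUpListener:
    """Accept user commands only while a post-wake-up countdown is open."""

    def __init__(self):
        self.deadline = 0.0

    def wake(self, context, now=None):
        # Start the countdown; its length depends on the application context.
        now = time.monotonic() if now is None else now
        self.deadline = now + WINDOW_FOR_CONTEXT.get(context, DEFAULT_WINDOW)

    def accepts_command(self, now=None):
        # Commands arriving after the window closes count as ambient talk.
        now = time.monotonic() if now is None else now
        return now < self.deadline
```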
  • Publication number: 20130031517
    Abstract: Provided is a method of hand pose interaction. The method recognizes a user input related to selection of an object displayed on a computing device and displays a graphical user interface (GUI) corresponding to the object. The graphical user interface comprises at least one representation of a hand pose, wherein each representation of a hand pose corresponds to a unique function associated with the object. Upon recognition of a user hand pose corresponding to a hand pose representation in the graphical user interface, the function associated with the hand pose representation is executed.
    Type: Application
    Filed: July 27, 2012
    Publication date: January 31, 2013
    Inventors: Dustin Freeman, Sriganesh Madhvanath, Ankit Shekhawat, Ramadevi Vennelakanti
  • Publication number: 20120278729
    Abstract: Provided is a method of assigning user interaction controls. The method assigns, in a scenario where multiple co-present users are simultaneously providing user inputs to a computing device, a first level of user interaction controls related to an object on the computing device to a single user and a second level of user interaction controls related to the object to all co-present simultaneous users of the computing device.
    Type: Application
    Filed: April 27, 2012
    Publication date: November 1, 2012
    Inventors: Ramadevi Vennelakanti, Prasenjit Dey, Sriganesh Madhvanath, Anbumani Subramanian, Dinesh Mandalapu
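The two-level scheme in this abstract can be sketched as below. The concrete control sets are invented; the patent only distinguishes a first level assigned to a single user from a second level open to all co-present simultaneous users.

```python
# Illustrative only: invented examples of the two control levels.
FIRST_LEVEL = {"next", "previous", "zoom"}   # single-owner controls
SECOND_LEVEL = {"tag", "comment"}            # open to every co-present user

def is_allowed(user, owner, action):
    # First-level controls belong to one user; second-level to everyone.
    if action in FIRST_LEVEL:
        return user == owner
    return action in SECOND_LEVEL
```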
  • Publication number: 20120254717
    Abstract: Provided is a method of tagging media. The method identifies at least one region of interest in a media based on a user input and assigns a higher weighted tag to an object identified in at least one region of interest compared to an object present in another region of the media.
    Type: Application
    Filed: January 25, 2012
    Publication date: October 4, 2012
    Inventors: Prasenjit Dey, Sriganesh Madhvanath, Praphul Chandra, Ramadevi Vennelakanti, Pooja A
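The weighting idea in this abstract reduces to a small function. The specific weight values are invented; the patent only requires that objects in a region of interest receive a higher-weighted tag than objects elsewhere.

```python
# Illustrative only: the 1.0 / 0.4 weights are invented values.
def tag_weights(objects, roi_objects, roi_weight=1.0, other_weight=0.4):
    # Objects inside the user-indicated region of interest get the
    # higher weight; all other detected objects get the lower weight.
    return {obj: (roi_weight if obj in roi_objects else other_weight)
            for obj in objects}

weights = tag_weights({"dog", "tree"}, roi_objects={"dog"})
```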
  • Publication number: 20120215849
    Abstract: Provided is a method of consuming virtual media. The method allows a user to identify virtual media that the user wants to consume in the presence of at least one other user. The method recognizes co-presence of the user and the at least one other user and recommends the identified virtual media for consumption by the user and the at least one other user.
    Type: Application
    Filed: January 26, 2012
    Publication date: August 23, 2012
    Inventors: Ankit Shekhawat, Ramadevi Vennelakanti, Suvodeep Das
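A minimal sketch of the recommendation step, under the assumption that each earmarked media item carries the set of users it should be consumed with: the item is recommended only once all intended co-viewers are detected as co-present.

```python
# Illustrative only: the data layout (media -> intended co-viewers) is assumed.
def recommend(marked_media, present_users):
    return [media for media, viewers in marked_media.items()
            if viewers <= present_users]  # all intended viewers are present

marked = {"movie": {"alice", "bob"}, "show": {"alice", "carol"}}
suggestions = recommend(marked, present_users={"alice", "bob"})
```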
  • Publication number: 20120188164
Abstract: Presented is a method and system for processing a gesture performed by a user of a first input device. The method comprises detecting the gesture and detecting a user-provided parameter for disambiguating the gesture. A user command is then determined based on the detected gesture and the detected parameter.
    Type: Application
    Filed: October 16, 2009
    Publication date: July 26, 2012
    Inventors: Prasenjit Dey, Sriganesh Madhvanath, Ramadevi Vennelakanti, Rahul Ajmera
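The disambiguation step in this abstract can be sketched as a lookup keyed on both signals. The gesture and parameter names are invented; the patent only requires that the user command be determined from the detected gesture together with the user-provided parameter.

```python
# Illustrative only: invented (gesture, parameter) -> command mapping.
COMMANDS = {
    ("circle", "delete"): "delete_selection",
    ("circle", "copy"): "copy_selection",
}

def resolve(gesture, parameter):
    # The same gesture maps to different commands depending on the parameter.
    return COMMANDS.get((gesture, parameter))
```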
  • Publication number: 20120059855
    Abstract: A method for enabling organization of a plurality of media objects is disclosed. The method comprises playing a digital media object to a user; capturing the interaction of the user with the played digital media object; and tagging the played digital media object based on said interaction. A software program product implementing this method, a system comprising the software program product and a digital media object tagged in accordance with this method are also disclosed.
    Type: Application
    Filed: May 26, 2009
    Publication date: March 8, 2012
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Prasenjit Dey, Sriganesh Madhvanath, Ramadevi Vennelakanti
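The interaction-based tagging this abstract describes can be sketched as below. The event names and tagging rules are invented; the patent only requires that the played media object be tagged based on the captured interaction.

```python
# Illustrative only: invented playback events and tagging rules.
def tags_from_interaction(events):
    tags = set()
    if events.count("replay") >= 2:
        tags.add("favorite")   # replayed often -> likely liked
    if "skip" in events:
        tags.add("skipped")
    return tags
```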
  • Publication number: 20110302538
    Abstract: A method and system of distinguishing multimodal HCI from ambient human interactions using wake up commands is disclosed. In one embodiment, in a method of distinguishing multimodal HCI from ambient human interactions, a wake up command is detected by a computing system. The computing system is then woken up to receive a valid user command from a user upon detecting the wake up command. A countdown timer is substantially simultaneously turned on upon waking up the computing system to receive valid user commands. The countdown timer is set based on application usage parameters such as semantics of the valid user command and context of an application associated with the valid user command.
    Type: Application
    Filed: July 23, 2010
    Publication date: December 8, 2011
Inventors: Ramadevi Vennelakanti, Prasenjit Dey