Patents by Inventor Suranjit Adhikari

Suranjit Adhikari has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120113145
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
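The workflow this family of applications describes — a captured frame carrying location and orientation metadata, a server matching it against a library of annotated videos, and the best match being reoriented for overlay — can be sketched roughly as follows. This is an illustrative sketch only; the library, names, and matching rule are hypothetical and not taken from the filings.

```python
import math

# Hypothetical library of previously captured, server-annotated clips:
# (latitude, longitude, camera heading in degrees, clip identifier)
LIBRARY = [
    (37.7749, -122.4194, 90.0, "clip_golden_gate"),
    (40.7128, -74.0060, 180.0, "clip_times_square"),
]

def find_relevant_clip(lat, lon, heading, max_km=1.0):
    """Return (distance_km, clip_id, rotation_deg) for the stored clip
    nearest the camera's position, where rotation_deg is the heading
    correction needed to align the clip with the current view."""
    best = None
    for clip_lat, clip_lon, clip_heading, clip_id in LIBRARY:
        # Rough flat-earth distance in km (adequate at city scale).
        dist_km = math.hypot(clip_lat - lat, clip_lon - lon) * 111.0
        if dist_km <= max_km and (best is None or dist_km < best[0]):
            rotation = (clip_heading - heading) % 360.0
            best = (dist_km, clip_id, rotation)
    return best

match = find_relevant_clip(37.7750, -122.4195, heading=45.0)
```

In the full pipeline the abstract describes, the rotation returned here would drive the re-rendering of the retrieved clip before it is superimposed on the live camera view.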
  • Publication number: 20120113144
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120113142
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120113143
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120113274
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120116920
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120095302
    Abstract: A TV can receive user health information from a health monitoring sensor and enable a user to view a user interface on the TV responsive to current and/or historical user health information received from the health monitoring sensor. The TV may also communicate through the Internet with a user's health care provider to send user health information to the health care provider.
    Type: Application
    Filed: October 14, 2010
    Publication date: April 19, 2012
    Inventor: Suranjit Adhikari
  • Publication number: 20120092327
    Abstract: Responsive to metadata sent with 3D signals from an audio video display device, 3D glasses overlay graphical assets onto the 3D visual plane.
    Type: Application
    Filed: October 14, 2010
    Publication date: April 19, 2012
    Inventor: Suranjit Adhikari
  • Publication number: 20110298980
    Abstract: The size of UI elements on a TV display is enlarged responsive to a determination from sensors that a viewer is beyond a nominal distance from the TV. As well, the graphics plane in which the UI elements are presented can be rotated relative to the video plane to account for a viewer being positioned at an oblique angle relative to the plane of the display.
    Type: Application
    Filed: June 8, 2010
    Publication date: December 8, 2011
    Inventors: Suranjit Adhikari, Eric Hsiao
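The enlargement the abstract describes can be sketched as a simple scale factor: beyond a nominal viewing distance, UI elements grow in proportion to the viewer's distance so their apparent (angular) size stays roughly constant. The nominal distance and the linear mapping below are assumptions for illustration, not values from the filing.

```python
NOMINAL_DISTANCE_M = 3.0  # assumed nominal viewing distance in meters

def ui_scale(viewer_distance_m):
    """Return a scale factor for UI elements: 1.0 at or below the
    nominal distance, growing linearly with distance beyond it."""
    if viewer_distance_m <= NOMINAL_DISTANCE_M:
        return 1.0
    return viewer_distance_m / NOMINAL_DISTANCE_M

assert ui_scale(2.0) == 1.0   # closer than nominal: no change
assert ui_scale(6.0) == 2.0   # twice the nominal distance: double size
```

The oblique-angle correction mentioned in the abstract would be a separate transform, rotating the graphics plane toward the viewer while leaving the video plane untouched.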
  • Patent number: 8051376
    Abstract: A method for creating a customized music visualization display for a music input involves presenting a user with a plurality of effect icons and a visualizer canvas as a portion of the user interface display. The user places one or more of the effect icons on the visualizer canvas. A sweep arm travels in a continuous sweeping motion through an arc, at a speed determined by the musical input; each effect icon is detected, and its effect displayed, when the sweep arm reaches the location of that icon within the visualizer canvas.
    Type: Grant
    Filed: February 12, 2009
    Date of Patent: November 1, 2011
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Suranjit Adhikari, Eric Hsiao
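The sweep-arm mechanism can be sketched as an angle that advances at a tempo-derived speed, firing each icon's effect as the arm crosses the icon's angular position on the canvas. The tempo-to-speed mapping and names below are illustrative assumptions, not from the patent.

```python
def sweep_hits(icon_angles, bpm, steps):
    """Advance the sweep arm through an arc and report which icons it
    crosses. icon_angles are the angular positions (degrees) at which
    the user placed effect icons on the canvas."""
    speed = bpm * 0.5  # degrees per step; arbitrary tempo-to-speed mapping
    hits = []
    arm = 0.0
    for _ in range(steps):
        nxt = arm + speed
        # An icon's effect fires when the arm passes its position.
        hits.extend(a for a in icon_angles if arm <= a < nxt)
        arm = nxt
    return hits

# Example: at 120 BPM the arm moves 60 degrees per step and reaches
# icons at 30, 90, and 150 degrees in order over three steps.
hits = sweep_hits([30.0, 90.0, 150.0], bpm=120, steps=3)
```

Because the arm speed is derived from the music, a faster track triggers the same placed effects more frequently, which is what ties the visualization to the audio.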
  • Publication number: 20110110560
    Abstract: A hand gesture from a camera input is detected using an image processing module of a consumer electronics device. The detected hand gesture is identified from a vocabulary of hand gestures. The electronics device is controlled in response to the identified hand gesture. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Application
    Filed: October 4, 2010
    Publication date: May 12, 2011
    Inventor: Suranjit Adhikari
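The control flow the abstract describes — identify a detected gesture against a fixed vocabulary, then map it to a device action — can be sketched minimally as two lookups. A real image processing module would classify image features rather than a finger count; the vocabulary and action mapping below are hypothetical.

```python
# Hypothetical gesture vocabulary, keyed here by number of extended
# fingers purely for illustration.
GESTURE_VOCABULARY = {
    0: "fist",
    2: "peace",
    5: "open_palm",
}

# Hypothetical mapping from identified gestures to device actions.
DEVICE_ACTIONS = {
    "fist": "pause",
    "peace": "volume_up",
    "open_palm": "play",
}

def handle_gesture(extended_fingers):
    """Identify the gesture and return the device action, or None if
    the detected hand pose is not in the vocabulary."""
    gesture = GESTURE_VOCABULARY.get(extended_fingers)
    return DEVICE_ACTIONS.get(gesture)
```

Restricting recognition to a fixed vocabulary, as the abstract does, keeps the classifier small and lets unrecognized poses fail safely to no action.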
  • Publication number: 20110096073
    Abstract: A music visualization system and methods involving a central processing unit capable of converting waveform data to geometry data, a graphics processing unit capable of recognizing and accepting the geometry data and rendering a plurality of graphical images, a custom shader software program operable on the graphics processing unit, an embeddable platform in electronic communication with the graphics processing unit, and an audiovisual display device in electronic communication with the graphics processing unit and the embeddable platform.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 28, 2011
    Applicants: Sony Corporation, a Japanese corporation, Sony Electronics Inc., a Delaware corporation
    Inventors: Suranjit Adhikari, Eric Hsiao
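The CPU-side waveform-to-geometry step the abstract describes can be sketched as mapping audio samples to vertex positions that a GPU shader could then render. The mapping below (samples in [-1, 1] spread across a fixed-width strip) is an illustrative assumption, not the filing's method.

```python
def waveform_to_geometry(samples, width=2.0):
    """Map N audio samples in [-1, 1] to N (x, y) vertices of a
    waveform strip spanning [-width/2, width/2] on the x axis, with
    each sample's amplitude as the y coordinate."""
    n = len(samples)
    step = width / max(n - 1, 1)
    return [(-width / 2 + i * step, s) for i, s in enumerate(samples)]

# One cycle of a crude square-ish wave becomes five vertices.
verts = waveform_to_geometry([0.0, 1.0, 0.0, -1.0, 0.0])
```

In the architecture the abstract outlines, a vertex list like this would be handed to the GPU, where the custom shader program transforms and colors it each frame.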
  • Publication number: 20100205532
    Abstract: Audio/music visualizers have become standard features in most music/video software applications available for music/video players. A music visualizer couples music with visuals synchronized to it, creating a compelling experience. The presented music visualizer enables a user to create a synchronized, personalized music visualization without any programming. There are no preset effects; instead, the user interacts with the visualizer system through a user interface (UI) to build a visualization design from the video effects available through the UI. Once the design is complete, the system synchronizes the user's customized visualization design with an input musical selection. In this manner, the user creates a customized music/video visualization, which may also be stored for later playback or modification.
    Type: Application
    Filed: February 12, 2009
    Publication date: August 12, 2010
    Inventors: Suranjit Adhikari, Eric Hsiao