Patents by Inventor Ted Dunn

Ted Dunn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140043537
    Abstract: The present invention includes a method and device that allows efficient mixing of multiple video images with a graphics screen while utilizing only one video buffer. The present invention partitions the sole video buffer, pre-scales the plurality of video images and inserts them into the partitioned video buffer in a predetermined range of buffer addresses. The present invention mixes the partitioned video including the pre-scaled video images with the graphics screen to produce a video display including both a video screen and a graphics screen.
    Type: Application
    Filed: October 16, 2013
    Publication date: February 13, 2014
    Inventors: Ted Dunn, James Amendolagine
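The single-buffer mixing described in the abstract above can be sketched in a few lines: one video buffer is partitioned into fixed address ranges, each video image is pre-scaled to fit its partition, and the whole buffer is then mixed with a graphics screen. This is an illustrative reconstruction, not the patented implementation; all function names and the transparent-pixel convention are assumptions.

```python
def prescale(image, target_w, target_h):
    """Nearest-neighbour scale of a 2D pixel grid to the target size."""
    src_h, src_w = len(image), len(image[0])
    return [[image[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]

def insert_into_buffer(buffer, image, x0, y0):
    """Write a pre-scaled image into its reserved partition of the buffer."""
    for dy, row in enumerate(image):
        for dx, px in enumerate(row):
            buffer[y0 + dy][x0 + dx] = px

def mix_with_graphics(buffer, graphics, transparent=0):
    """Overlay graphics on the video buffer; 'transparent' graphics
    pixels let the underlying video show through."""
    return [[g if g != transparent else v
             for v, g in zip(vrow, grow)]
            for vrow, grow in zip(buffer, graphics)]

# One 8x8 video buffer holds two 4x4 partitions side by side.
buffer = [[0] * 8 for _ in range(8)]
video_a = [[1] * 16 for _ in range(16)]   # a 16x16 source image
video_b = [[2] * 2 for _ in range(2)]     # a 2x2 source image
insert_into_buffer(buffer, prescale(video_a, 4, 4), 0, 0)
insert_into_buffer(buffer, prescale(video_b, 4, 4), 4, 0)
graphics = [[9 if y >= 4 else 0 for x in range(8)] for y in range(8)]
display = mix_with_graphics(buffer, graphics)
```

The key point of the claim is that both pre-scaled videos live in address ranges of the *same* buffer, so only one buffer-to-screen mix is needed.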
  • Patent number: 8631137
    Abstract: A web protocol request is received from a web-based device for aggregated A/V content information associated with A/V content stored within the DLNA home network. The web protocol request is converted to one or more DLNA search messages each associated with one or more active DLNA servers. A/V content information associated with each of the one or more active DLNA servers is aggregated using the one or more DLNA search messages. The aggregated A/V content information is formatted into a web protocol response. The web protocol response is sent to the web-based device. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: June 27, 2008
    Date of Patent: January 14, 2014
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Ludovic Douillet, Ted Dunn, David Tao
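The gateway flow in the abstract above (DLNA is the Digital Living Network Alliance home-networking standard) fans one web request out as search messages to the active DLNA servers, merges the results, and formats them as a web response. The sketch below substitutes a local filter for the real UPnP ContentDirectory Search action; the class and field names are illustrative assumptions.

```python
import json

class FakeDlnaServer:
    """Stand-in for an active DLNA media server on the home network."""
    def __init__(self, name, items):
        self.name = name
        self._items = items

    def search(self, query):
        # A real gateway would issue a UPnP ContentDirectory Search
        # action here; we just filter a local title list.
        return [i for i in self._items if query.lower() in i.lower()]

def handle_web_request(query, servers):
    """Convert one web request into per-server searches, aggregate
    the A/V content information, and format it as a JSON response."""
    aggregated = []
    for server in servers:
        for title in server.search(query):
            aggregated.append({"server": server.name, "title": title})
    return json.dumps({"query": query, "results": aggregated})

servers = [
    FakeDlnaServer("living-room", ["Holiday Video", "Concert"]),
    FakeDlnaServer("study", ["Holiday Photos Slideshow"]),
]
response = handle_web_request("holiday", servers)
```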
  • Publication number: 20140009679
    Abstract: A live video is directed to a display buffer of the device. The device is preferably a set top box and a live video frame stored in the display buffer is preferably displayed by a display device coupled to the set top box. The display device is preferably a television. A capture command preferably signals the set top box to store one or more frames of the currently displayed live video. Upon receiving the capture command, the live video is paused, thereby preventing the display buffer from loading subsequent live video frames. The live video is then re-directed to a capture buffer, the live video is un-paused, and a current live video frame is captured from the capture buffer. The captured frame is then stored using a conventional storage medium. After the frame is captured, the live video is re-directed from the capture buffer to the display buffer to resume display of the live video.
    Type: Application
    Filed: September 12, 2013
    Publication date: January 9, 2014
    Applicants: Sony Electronics Inc., Sony Corporation
    Inventors: Ted Dunn, James Amendolagine
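The capture sequence in the abstract above (pause the live video, redirect it to a capture buffer, grab the current frame, then redirect back to the display buffer and resume) can be restated as a small state machine. The `Pipeline` class and its method names are illustrative, not the patented API.

```python
class Pipeline:
    def __init__(self):
        self.target = "display"   # which buffer receives live frames
        self.paused = False
        self.buffers = {"display": None, "capture": None}
        self.storage = []         # the conventional storage medium

    def feed_frame(self, frame):
        """Deliver one live video frame to the current target buffer."""
        if not self.paused:
            self.buffers[self.target] = frame

    def capture(self):
        """The pause / redirect / un-pause sequence from the abstract."""
        self.paused = True        # stop updates to the display buffer
        self.target = "capture"   # re-direct the live video
        self.paused = False       # un-pause into the capture buffer

    def finish_capture(self):
        """Store the captured frame and restore normal display."""
        self.storage.append(self.buffers["capture"])
        self.target = "display"   # resume display of the live video

pipe = Pipeline()
pipe.feed_frame("frame-1")        # normal display
pipe.capture()
pipe.feed_frame("frame-2")        # lands in the capture buffer
pipe.finish_capture()
pipe.feed_frame("frame-3")        # display resumes
```

Pausing before the redirect is what keeps a half-written frame out of the display buffer while the switch happens.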
  • Patent number: 8587727
    Abstract: The present invention includes a method and device that allows efficient mixing of multiple video images with a graphics screen while utilizing only one video buffer. The present invention partitions the sole video buffer, pre-scales the plurality of video images and inserts them into the partitioned video buffer in a predetermined range of buffer addresses. The present invention mixes the partitioned video including the pre-scaled video images with the graphics screen to produce a video display including both a video screen and a graphics screen.
    Type: Grant
    Filed: December 5, 2011
    Date of Patent: November 19, 2013
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Ted Dunn, James Amendolagine
  • Patent number: 8561123
    Abstract: A live video is directed to a display buffer of the device. The device is preferably a set top box and a live video frame stored in the display buffer is preferably displayed by a display device coupled to the set top box. The display device is preferably a television. A capture command preferably signals the set top box to store one or more frames of the currently displayed live video. Upon receiving the capture command, the live video is paused, thereby preventing the display buffer from loading subsequent live video frames. The live video is then re-directed to a capture buffer, the live video is un-paused, and a current live video frame is captured from the capture buffer. The captured frame is then stored using a conventional storage medium. After the frame is captured, the live video is re-directed from the capture buffer to the display buffer to resume display of the live video.
    Type: Grant
    Filed: February 10, 2011
    Date of Patent: October 15, 2013
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Ted Dunn, James Amendolagine
  • Patent number: 8490131
    Abstract: A television receiver device consistent with certain implementations has a display associated with the television receiver device. A filter converts a stream of audio/video content that is to be displayed on the display associated with the television receiver device into a stream of digital audio data. A buffer stores a sample of the digital audio data. A modem transmits the sample of audio data from the buffer to a content identification server and receives metadata identifying the audio data from that server. A display processor renders at least a portion of the metadata to the display. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: July 16, 2013
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Nobukazu Sugiyama, Jaime Chee, Ted Dunn, Utkarsh Pandya, Ling Jun Wong
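The identification flow in the abstract above reduces to: filter audio out of the A/V stream, buffer a sample, send it to an identification server, and render the returned metadata. In the sketch below a local hash lookup stands in for the server round trip; every name, the sample length, and the fingerprint scheme are illustrative assumptions.

```python
import hashlib

# Pretend content-identification server: fingerprint -> metadata.
SERVER_DB = {}

def fingerprint(audio_bytes):
    """Toy stand-in for a real acoustic fingerprint."""
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

def identify_on_server(sample):
    """Stand-in for the modem round trip to the identification server."""
    return SERVER_DB.get(fingerprint(sample), {"title": "unknown"})

def receiver_pipeline(av_stream):
    """Filter A/V into audio, buffer a sample, query, render metadata."""
    audio = av_stream["audio"]        # the 'filter' step
    sample = audio[:8]                # the buffered sample
    metadata = identify_on_server(sample)
    return f"Now playing: {metadata['title']}"   # the 'render' step

# Register one known programme, then identify it from its audio.
known_audio = b"\x01\x02\x03\x04\x05\x06\x07\x08"
SERVER_DB[fingerprint(known_audio[:8])] = {"title": "Evening News"}
banner = receiver_pipeline({"audio": known_audio, "video": b"..."})
```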
  • Patent number: 8261210
    Abstract: An example television Widget movement method involves receiving a dedicated command from a remote controller that activates the plurality of Widget programs, wherein the Widget programs, when in Display mode, display Widget representations on a display; receiving a command that establishes one of the plurality of displayed Widget programs as the Widget program that is currently in focus; and receiving a command that places the Widget program currently in focus in Move Mode. While in Move Mode, the Widget representation is responsive to navigation commands and can be moved about the display. Such movement is animated using a 3-dimensional graphics engine and accompanied by an audio sound effect. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: April 2, 2009
    Date of Patent: September 4, 2012
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Ted Dunn, Tracy Ho, Yuko Nishikawa, Hiroki Sugimoto, Steven Friedlander
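The Display-mode / focus / Move Mode command handling in the abstract above can be sketched as a small event handler. The command names and `Widget` class are assumptions, and the 3-D animation and sound effect are reduced to log entries.

```python
class Widget:
    def __init__(self, name, x=0, y=0):
        self.name, self.x, self.y = name, x, y
        self.mode = "Display"

class WidgetManager:
    def __init__(self, widgets):
        self.widgets = widgets
        self.focus = None
        self.log = []

    def handle(self, command, arg=None):
        if command == "focus":
            self.focus = self.widgets[arg]
        elif command == "move_mode":
            self.focus.mode = "Move"
        elif command == "navigate" and self.focus.mode == "Move":
            dx, dy = arg
            self.focus.x += dx
            self.focus.y += dy
            # In the patent this movement is animated with a
            # 3-dimensional graphics engine plus a sound effect.
            self.log.append(("animate", self.focus.name))
            self.log.append(("play_sound", "whoosh"))

mgr = WidgetManager({"clock": Widget("clock")})
mgr.handle("focus", "clock")
mgr.handle("move_mode")
mgr.handle("navigate", (10, 0))
```

Note that navigation commands only move the widget while it is in Move Mode; in Display mode they would be ignored.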
  • Patent number: 8212928
    Abstract: Embodiments of the present invention provide a method and apparatus for maintaining smooth video transition between distinct applications. Preferably, the apparatus implementing the present invention includes a processor, a secondary memory and a system memory. In providing a smooth transition between two applications, the apparatus and method provide synchronization of the video and graphics components while transitioning from a first application to a second application. If there is no video component in either application, no action is needed to provide a smooth transition between applications, and when only the first application includes a video component, the video component need only be turned off for a smooth transition between the applications to occur.
    Type: Grant
    Filed: April 13, 2010
    Date of Patent: July 3, 2012
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: James Amendolagine, Ted Dunn
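The transition rules in the abstract above reduce to a small decision: no video in either application means nothing to synchronize, and video only in the outgoing application means it just needs turning off. The function below restates that logic for illustration; the "start incoming video" branch and the return values are assumptions, not the patented code.

```python
def transition_action(first_has_video, second_has_video):
    """Choose the action needed for a smooth app-to-app transition."""
    if not first_has_video and not second_has_video:
        return "none"                  # graphics-only: nothing to do
    if first_has_video and not second_has_video:
        return "turn_off_first_video"  # stop the outgoing video, done
    if not first_has_video and second_has_video:
        return "start_second_video"    # assumed: start incoming video
    # Both apps have video: the related application (20120134645)
    # synchronizes based on the relative display window sizes.
    return "synchronize_by_window_size"
```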
  • Publication number: 20120134645
    Abstract: A method and apparatus for maintaining smooth video transition between distinct applications includes a processor, a secondary memory and a system memory. In providing a smooth transition between applications, the apparatus and method provide synchronization of the video and graphics components while transitioning from a first application to a second application. If there is no video component in either application, no action is needed to provide a smooth transition between applications, and when only the first application includes a video component, the video component need only be turned off for a smooth transition between the applications to occur. When both the first application and the second application include video components, smooth transition between the applications is dependent upon the display window size of the first application in comparison to the second application. A process is triggered according to the size of the display windows of the first and second applications.
    Type: Application
    Filed: January 17, 2012
    Publication date: May 31, 2012
    Applicants: Sony Electronics Inc., Sony Corporation
    Inventors: James Amendolagine, Ted Dunn
  • Publication number: 20120120101
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 17, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
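The server-side flow in the abstract above (this abstract is shared by several of the related applications listed below) has two halves: ingest, which appends location/orientation metadata and user input to stored videos, and retrieval, which matches a new camera data stream against that library to find overlay candidates. The matching radius, heading tolerance, and all field names in this sketch are illustrative assumptions.

```python
import math

LIBRARY = []  # searchable library of (metadata, video) records

def ingest(video, lat, lon, heading, user_tags):
    """Append camera metadata and user input, then store the record."""
    metadata = {"lat": lat, "lon": lon, "heading": heading,
                "tags": user_tags}
    LIBRARY.append({"meta": metadata, "video": video})

def find_overlays(lat, lon, heading, radius_deg=0.01, max_turn=45):
    """Return library videos shot near this position and orientation,
    candidates to be reoriented and superimposed on the live view."""
    hits = []
    for rec in LIBRARY:
        m = rec["meta"]
        near = math.hypot(m["lat"] - lat, m["lon"] - lon) <= radius_deg
        # Smallest signed angle between the two headings.
        aligned = abs((m["heading"] - heading + 180) % 360 - 180) <= max_turn
        if near and aligned:
            hits.append(rec["video"])
    return hits

ingest("plaza.mp4", 37.33, -121.89, 90, ["fountain"])
ingest("far_away.mp4", 40.71, -74.00, 90, ["skyline"])
matches = find_overlays(37.3301, -121.8899, 100)
```

A production system would use proper geodesic distance rather than degree deltas; the flat approximation keeps the sketch short.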
  • Patent number: 8181120
    Abstract: An example television Widget movement method involves receiving a dedicated command from a remote controller that activates the plurality of Widget programs, wherein the Widget programs, when in Display mode, display Widget representations on a display; receiving a command from the remote controller that establishes one of the plurality of displayed Widget programs as being in focus; and receiving a command from the remote controller that places the Widget program that is currently in focus in a Move Mode, where the Widget representation can be moved about the display while in Move Mode, wherein when the Widget is in Move Mode, the Widget representation is responsive to navigation commands from the remote controller to move about the display, and wherein such movement is animated using a 3 dimensional graphics engine. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: April 2, 2009
    Date of Patent: May 15, 2012
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Steven Friedlander, Thomas Patrick Dawson, Seth Hill, Ted Dunn
  • Publication number: 20120114297
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120117502
    Abstract: A method consistent with certain implementations involves presenting a graphical user interface (GUI) to a user on a display, where the GUI presents a visual representation of a room that is adapted to be adjusted in size and shape by user manipulation of a controller; the GUI has a drag-and-drop menu adapted to selection of an object from a plurality of objects for placement at any selected position within the room, at least one of the objects comprising a loudspeaker; and the GUI provides for input of data characterizing the loudspeaker. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Application
    Filed: September 1, 2011
    Publication date: May 10, 2012
    Inventors: Djung Nguyen, Ted Dunn, Andy Nguyen, Nobukazu Sugiyama, Lobrenzo Wingo
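A minimal data model for the GUI described above would need a resizable room, drag-and-drop placement of objects (at least one being a loudspeaker), and per-speaker characterizing data. The sketch below is an assumption about how such a model could look; all field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RoomModel:
    width: float = 4.0      # metres; adjusted by user manipulation
    depth: float = 5.0
    objects: list = field(default_factory=list)

    def resize(self, width, depth):
        """User manipulation of the controller adjusts size and shape."""
        self.width, self.depth = width, depth

    def drop(self, kind, x, y, **data):
        """Drag-and-drop an object at (x, y); reject out-of-room drops."""
        if not (0 <= x <= self.width and 0 <= y <= self.depth):
            raise ValueError("object placed outside the room")
        self.objects.append({"kind": kind, "x": x, "y": y, "data": data})

room = RoomModel()
room.resize(6.0, 4.0)
# Data characterizing the loudspeaker is entered through the GUI.
room.drop("loudspeaker", 1.0, 0.5, model="bookshelf", impedance_ohms=6)
room.drop("sofa", 3.0, 3.0)
```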
  • Publication number: 20120113224
    Abstract: A method consistent with certain implementations involves, at a listening position, capturing with a camera a plurality of photographic images of a corresponding plurality of loudspeakers forming part of an audio system; determining, from the plurality of captured images, a geometric configuration representing a positioning of the plurality of loudspeakers connected to the audio system; and outputting the geometric configuration of the plurality of loudspeakers to the audio system. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Application
    Filed: September 1, 2011
    Publication date: May 10, 2012
    Inventors: Andy Nguyen, Djung Nguyen, Lobrenzo Wingo, Ted Dunn, Nobukazu Sugiyama
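One way the geometry step described above could work is to estimate each loudspeaker's bearing from its horizontal position in a photo taken at the listening position. The pinhole-camera maths below is standard; the capture format, field names, and the bearing-only output are assumptions about the method.

```python
import math

def bearing_from_image(pixel_x, image_width, fov_deg, camera_heading):
    """Estimate a speaker's bearing (degrees) from where it appears in
    a photo, given the camera's horizontal field of view and heading."""
    # Offset of the speaker from the image centre, in [-0.5, 0.5].
    offset = pixel_x / image_width - 0.5
    # Pinhole model: off-axis angle corresponding to that offset.
    half_fov = math.radians(fov_deg / 2)
    angle = math.degrees(math.atan(2 * offset * math.tan(half_fov)))
    return (camera_heading + angle) % 360

def speaker_map(captures):
    """Build a geometric configuration from a list of photo captures."""
    return {c["speaker"]: bearing_from_image(c["px"], c["width"],
                                             c["fov"], c["heading"])
            for c in captures}

geometry = speaker_map([
    {"speaker": "front-left",  "px": 960, "width": 1920,
     "fov": 60, "heading": 330},   # speaker centred in a photo facing 330 degrees
    {"speaker": "front-right", "px": 960, "width": 1920,
     "fov": 60, "heading": 30},
])
```

Distances would additionally be needed for a full map; they could come from the apparent size of each speaker in the image, which this sketch omits.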
  • Publication number: 20120113144
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120113145
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120116920
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120114151
    Abstract: A method consistent with certain implementations involves, in an audio system having an array of a plurality of loudspeakers and a stored speaker map identifying the geometric relationship between the plurality of loudspeakers and a listening position, identifying a location on the speaker map of a Source Origin of a sound; selecting a method of localizing the Source Origin from a plurality of methods of localizing the Source Origin utilizing the array of loudspeakers; and reproducing the sound emanating from the Source Origin using the selected method. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Application
    Filed: September 1, 2011
    Publication date: May 10, 2012
    Inventors: Andy Nguyen, Ted Dunn, Lobrenzo Wingo, Djung Nguyen, Nobukazu Sugiyama
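The method-selection step described above takes a speaker map and a Source Origin location and picks one of several localization methods before reproducing the sound. The selection rule sketched below (play directly if the origin sits on a physical speaker, otherwise amplitude-pan between the two nearest speakers) is an illustrative assumption, not the patented criterion.

```python
import math

def select_localization(speaker_map, origin):
    """speaker_map: name -> (x, y); origin: (x, y) of the Source Origin.
    Returns (method, speakers_used)."""
    by_distance = sorted(speaker_map.items(),
                         key=lambda kv: math.dist(kv[1], origin))
    nearest_name, nearest_pos = by_distance[0]
    if math.dist(nearest_pos, origin) < 0.1:
        # The sound sits on a physical speaker: play it there directly.
        return ("direct", [nearest_name])
    # Otherwise pan between the two nearest speakers on the map.
    return ("amplitude_pan", [name for name, _ in by_distance[:2]])

speakers = {"FL": (-1.0, 2.0), "FR": (1.0, 2.0), "SUB": (0.0, 2.5)}
method, used = select_localization(speakers, (0.5, 2.0))
```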
  • Publication number: 20120113143
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao
  • Publication number: 20120113274
    Abstract: A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
    Type: Application
    Filed: November 8, 2011
    Publication date: May 10, 2012
    Inventors: Suranjit Adhikari, Ted Dunn, Eric Hsiao