Patents by Inventor Joern Ostermann

Joern Ostermann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7760204
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Grant
    Filed: April 3, 2008
    Date of Patent: July 20, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventor: Joern Ostermann
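    Example: the point-adjustment step described in the abstract above can be sketched in a few lines of Python. The surface_radius(angle, height) callable is a hypothetical stand-in for the shape surface, whose representation the abstract does not specify:

        import math

        def adjust_wireframe_point(point, surface_radius):
            # Slide the point along the horizontal line through the
            # origin until it meets the shape surface.
            x, y, z = point
            angle = math.atan2(y, x)       # direction of the scaling line
            r = surface_radius(angle, z)   # surface distance along that line
            return (r * math.cos(angle), r * math.sin(angle), z)

        # Toy surface: a cylinder of radius 1.5 around the vertical axis.
        cylinder = lambda angle, height: 1.5
        print(adjust_wireframe_point((2.0, 0.0, 0.3), cylinder))  # (1.5, 0.0, 0.3)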
  • Publication number: 20100114579
    Abstract: A computing device and computer-readable medium storing instructions for controlling a computing device to customize a voice in a multi-media message created by a sender for a recipient, the multi-media message comprising a text message from the sender to be delivered by an animated entity. The instructions comprise receiving from the sender voice emoticons, which may be repeated, inserted into the text message and associated with parameters of a voice used by the animated entity to deliver the text message; and transmitting the text message such that a recipient device can deliver the multi-media message at a variable level associated with the number of times a respective voice emoticon is repeated.
    Type: Application
    Filed: December 29, 2009
    Publication date: May 6, 2010
    Applicant: AT&T Corp.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Hans Peter Graf, Thomas M. Isaacson
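    Example: a minimal Python sketch of mapping repeated voice emoticons to a variable parameter level; the emoticon table and parameter names are illustrative assumptions, not taken from the patent:

        import re

        VOICE_EMOTICONS = {":-)": "happiness", ":-(": "sadness"}  # hypothetical table

        def emoticon_levels(text):
            # The level of a voice parameter grows with the number of
            # times the sender repeated the corresponding emoticon.
            pattern = "|".join(re.escape(e) for e in VOICE_EMOTICONS)
            levels = {}
            for run in re.finditer(f"(?:{pattern})+", text):
                token = run.group()[:3]                # emoticons here are 3 chars long
                count = len(run.group()) // len(token)
                param = VOICE_EMOTICONS[token]
                levels[param] = max(levels.get(param, 0), count)
            return levels

        print(emoticon_levels("Great news :-):-):-) see you soon"))  # {'happiness': 3}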
  • Publication number: 20100106722
    Abstract: A method and system for description of synthetic audiovisual content makes it easier for humans, software components or devices to identify, manage, categorize, search, browse and retrieve such content. For instance, a user may wish to search for specific synthetic audiovisual objects in digital libraries, Internet web sites or broadcast media; such a search is enabled by the invention. Key characteristics of the synthetic audiovisual content itself, such as the underlying 2D or 3D models and the parameters for animation of these models, are used to describe it. More precisely, to represent features of synthetic audiovisual content, depending on the description scheme to be used, a number of descriptors are selected and assigned values. The description scheme instantiated with descriptor values is used to generate the description, which is then stored for actual use during query/search.
    Type: Application
    Filed: December 29, 2009
    Publication date: April 29, 2010
    Applicant: AT&T Corp.
    Inventors: Qian Huang, Joern Ostermann, Atul Puri, Raj Kumar Rajendran
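    Example: the descriptor/description-scheme structure can be sketched as plain data objects in Python; names such as SyntheticAVDescription are invented for illustration:

        from dataclasses import dataclass, field

        @dataclass
        class Descriptor:
            name: str      # e.g. the underlying model type or animation parameters
            value: object

        @dataclass
        class SyntheticAVDescription:
            # A description scheme instantiated with descriptor values,
            # stored so it can later answer a query/search.
            scheme: str
            descriptors: list = field(default_factory=list)

            def matches(self, name, value):
                return any(d.name == name and d.value == value for d in self.descriptors)

        desc = SyntheticAVDescription(
            scheme="synthetic-face",
            descriptors=[Descriptor("model", "3D"), Descriptor("animation", "FAP")],
        )
        print(desc.matches("model", "3D"))  # True: a search for 3D models finds it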
  • Patent number: 7697668
    Abstract: A computing device and computer-readable medium storing instructions for controlling a computing device to customize a voice in a multi-media message created by a sender for a recipient, the multi-media message comprising a text message from the sender to be delivered by an animated entity. The instructions comprise receiving from the sender voice emoticons, which may be repeated, inserted into the text message and associated with parameters of a voice used by the animated entity to deliver the text message; and transmitting the text message such that a recipient device can deliver the multi-media message at a variable level associated with the number of times a respective voice emoticon is repeated.
    Type: Grant
    Filed: August 3, 2005
    Date of Patent: April 13, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Hans Peter Graf, Thomas M. Isaacson
  • Publication number: 20100076750
    Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing, in a client cache of a client device, sentences that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether sentences or templates stored in the client cache relate to the talking head response. If the stored sentences or stored templates relate to the talking head response, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response, and transmitting any portion of the talking head response not stored in the client cache to the client device to render a complete talking head response.
    Type: Application
    Filed: November 30, 2009
    Publication date: March 25, 2010
    Applicant: AT&T Corp.
    Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
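    Example: the server-side split between cached and transmitted material can be sketched as below; modeling the cache as a simple set of sentence strings is an assumption of this sketch:

        def plan_talking_head_response(response_sentences, client_cache):
            # Sentences already in the client cache are rendered locally;
            # the remainder is transmitted to complete the response.
            render_locally, transmit = [], []
            for sentence in response_sentences:
                (render_locally if sentence in client_cache else transmit).append(sentence)
            return render_locally, transmit

        cache = {"One moment please.", "Let me check that for you."}
        local, send = plan_talking_head_response(
            ["One moment please.", "Your flight departs at 9:40."], cache)
        print(local)  # bridges the dialog delay from the cache
        print(send)   # streamed from the server to finish the response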
  • Patent number: 7673239
    Abstract: A method and system for description of synthetic audiovisual content makes it easier for humans, software components or devices to identify, manage, categorize, search, browse and retrieve such content. For instance, a user may wish to search for specific synthetic audiovisual objects in digital libraries, Internet web sites or broadcast media; such a search is enabled by the invention. Key characteristics of the synthetic audiovisual content itself, such as the underlying 2D or 3D models and the parameters for animation of these models, are used to describe it. More precisely, to represent features of synthetic audiovisual content, depending on the description scheme to be used, a number of descriptors are selected and assigned values. The description scheme instantiated with descriptor values is used to generate the description, which is then stored for actual use during query/search.
    Type: Grant
    Filed: June 30, 2003
    Date of Patent: March 2, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Qian Huang, Joern Ostermann, Atul Puri, Raj Kumar Rajendran
  • Patent number: 7671861
    Abstract: A method of creating an animated entity for delivering a multi-media message from a sender to a recipient comprises receiving, at a server, an image file from the sender, the image file having associated sender-assigned name, gender, category and indexing information. The server presents to the sender the image file and a group of generic face model templates. After the sender selects one of the generic face model templates, the server presents the image file and the selected model template to the sender and requests the sender to mark features on the image file. After the sender marks the image file, the server presents to the sender a preview of at least one expression associated with the marked image file. If the sender does not accept the image file after the preview, the server presents the image file and selected model template again so that the sender can redo or add marked features on the image file.
    Type: Grant
    Filed: November 2, 2001
    Date of Patent: March 2, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
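    Example: the mark-preview-redo exchange can be sketched as a loop; the callables stand in for sender interactions and are placeholders, not the patent's interface:

        def create_animated_entity(image_file, templates, choose, mark, accept):
            template = choose(templates)  # sender picks a generic face model template
            while True:
                features = mark(image_file, template)  # sender marks eyes, mouth, ...
                preview = (image_file, template, features)
                if accept(preview):       # sender approves the previewed expression
                    return preview
                # otherwise the image and template are presented again for re-marking

        entity = create_animated_entity(
            "face.png", ["template-a", "template-b"],
            choose=lambda ts: ts[0],
            mark=lambda img, t: {"left_eye": (40, 52), "mouth": (48, 90)},
            accept=lambda p: True,
        )
        print(entity[2])  # the marked features used for animation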
  • Publication number: 20100042697
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Application
    Filed: October 20, 2009
    Publication date: February 18, 2010
    Applicant: AT&T Corp.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Patent number: 7627478
    Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing, in a client cache of a client device, sentences that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether sentences or templates stored in the client cache relate to the talking head response. If the stored sentences or stored templates relate to the talking head response, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response, and transmitting any portion of the talking head response not stored in the client cache to the client device to render a complete talking head response.
    Type: Grant
    Filed: July 16, 2007
    Date of Patent: December 1, 2009
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
  • Patent number: 7609270
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Grant
    Filed: April 28, 2008
    Date of Patent: October 27, 2009
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Patent number: 7584105
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: September 1, 2009
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
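    Example: a sketch of extracting bookmarks from a TTS input string; the <ETS:n> syntax is invented for illustration and carries the encoder time stamp n (a counter, not real-world time):

        import re

        def parse_bookmarks(text):
            # Bookmarks may sit between words or inside them; record the
            # encoder time stamp and the character offset it precedes.
            words, bookmarks = [], []
            for tok in re.split(r"(<ETS:\d+>)", text):
                m = re.fullmatch(r"<ETS:(\d+)>", tok)
                if m:
                    bookmarks.append((int(m.group(1)), len("".join(words))))
                elif tok:
                    words.append(tok)
            return "".join(words), bookmarks

        text, marks = parse_bookmarks("Hel<ETS:1>lo <ETS:2>world")
        print(text)   # 'Hello world'
        print(marks)  # [(1, 3), (2, 6)]: stamps matched against the FAP stream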
  • Publication number: 20090022220
    Abstract: Standard video compression techniques apply motion-compensated prediction combined with transform coding of the prediction error. In the context of prediction with fractional-pel motion vector resolution, it has been shown that aliasing components contained in an image signal limit the prediction efficiency obtained by motion compensation. In order to account for aliasing, quantization and motion estimation errors, camera noise, etc., we analytically developed a two-dimensional (2D) non-separable interpolation filter, which is independently calculated for each frame by minimizing the prediction error energy. For every fractional-pel position to be interpolated, an individual set of 2D filter coefficients is determined. Since transmitting filter coefficients as side information results in an additional bit rate, which is almost constant across different image resolutions and total bit rates, the loss in coding gain increases as total bit rates decrease.
    Type: Application
    Filed: April 13, 2006
    Publication date: January 22, 2009
    Applicant: UNIVERSITAET HANNOVER
    Inventors: Yuri Vatis, Bernd Edler, Ingolf Wassermann, Dieu Thanh Nguyen, Joern Ostermann
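    Example: the per-frame filter estimation is a least-squares problem. The one-dimensional toy below minimizes the prediction error energy with numpy; the patent's filter is two-dimensional and non-separable, with one coefficient set per fractional-pel position:

        import numpy as np

        def estimate_interpolation_filter(ref_patches, targets):
            # Minimize || targets - ref_patches @ h ||^2 over the filter
            # coefficients h for one fractional-pel position.
            h, *_ = np.linalg.lstsq(ref_patches, targets, rcond=None)
            return h

        rng = np.random.default_rng(0)
        true_h = np.array([0.25, 0.5, 0.25])   # toy 1-D filter
        patches = rng.normal(size=(500, 3))    # integer-pel support samples
        samples = patches @ true_h + 0.01 * rng.normal(size=500)
        print(estimate_interpolation_filter(patches, samples).round(3))  # ~[0.25 0.5 0.25]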
  • Publication number: 20080312930
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Application
    Filed: August 18, 2008
    Publication date: December 18, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
  • Publication number: 20080201442
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Application
    Filed: April 28, 2008
    Publication date: August 21, 2008
    Applicant: AT&T Corp.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Publication number: 20080180435
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Application
    Filed: April 3, 2008
    Publication date: July 31, 2008
    Applicant: AT&T Corp.
    Inventor: Joern Ostermann
  • Patent number: 7379066
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Grant
    Filed: May 26, 2006
    Date of Patent: May 27, 2008
    Assignee: AT&T Corp.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Publication number: 20080114723
    Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending the formulated suggestion from the receiver.
    Type: Application
    Filed: October 31, 2007
    Publication date: May 15, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
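    Example: the receiver-side feedback loop can be sketched as below; the quality measurements and suggestion fields are illustrative assumptions, not the patent's message format:

        import json

        def suggest_media_parameters(frame_drop_rate, decode_errors):
            # Derive a media-parameter suggestion from display-quality analysis.
            suggestion = {}
            if frame_drop_rate > 0.05:
                suggestion["frame_rate"] = "decrease"
            if decode_errors > 0:
                suggestion["bit_rate"] = "decrease"
            return suggestion

        feedback_channel = []  # stand-in for the channel back to the encoder
        feedback_channel.append(json.dumps(suggest_media_parameters(0.08, 2)))
        print(feedback_channel[0])  # {"frame_rate": "decrease", "bit_rate": "decrease"}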
  • Patent number: 7365749
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Grant
    Filed: August 15, 2006
    Date of Patent: April 29, 2008
    Assignee: AT&T Corp.
    Inventor: Joern Ostermann
  • Patent number: 7366670
    Abstract: Facial animation in MPEG-4 can be driven by a text stream and a Facial Animation Parameters (FAP) stream. Text input is sent to a TTS converter that drives the mouth shapes of the face. FAPs are sent from an encoder to the face over the communication channel. Disclosed are codes, known as bookmarks, in the text string transmitted to the TTS converter. Bookmarks are placed between and inside words and carry an encoder time stamp. The encoder time stamp does not relate to real-world time. The FAP stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. The facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: April 29, 2008
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
  • Publication number: 20080059194
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Application
    Filed: October 31, 2007
    Publication date: March 6, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Mark Beutnagel, Joern Ostermann