Patents by Inventor Joern Ostermann

Joern Ostermann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130300824
    Abstract: Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness.
    Type: Application
    Filed: April 16, 2013
    Publication date: November 14, 2013
    Inventors: Joern Ostermann, Sokratis Varakliotis
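A minimal sketch of the unequal-error-protection idea in the entry above, not the patented scheme: it only shows how a fixed parity-packet budget might be split across animation layers so that the base layer receives the most redundancy while the overall bitrate stays constant. The layer weights, packet counts, and function name are invented for illustration.

```python
# Minimal sketch (not the patented scheme): split a fixed parity budget
# unequally across animation layers so the base layer gets the most
# redundancy while the overall bitrate stays constant.
def allocate_parity(importance, total_parity_packets):
    total_weight = sum(importance)
    alloc = [int(total_parity_packets * w / total_weight) for w in importance]
    # Give any rounding leftover to the most important (base) layer.
    alloc[importance.index(max(importance))] += total_parity_packets - sum(alloc)
    return alloc

if __name__ == "__main__":
    weights = [8, 4, 2, 1]               # base layer first, weighted most heavily
    print(allocate_parity(weights, 20))  # [12, 5, 2, 1]
```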
  • Patent number: 8521533
    Abstract: A system and method of creating a customized multi-media message to a recipient is disclosed. The multi-media message is created by a sender and contains an animated entity that delivers an audible message. The sender chooses the animated entity from a plurality of animated entities. The system receives a text message from the sender and receives a sender audio message associated with the text message. The sender audio message is associated with the chosen animated entity to create the multi-media message. The multi-media message is delivered by the animated entity using the sender audio message as the voice, with the mouth movements of the animated entity conforming to the sender audio message.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: August 27, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Barbara Buda, Claudio Lande
  • Patent number: 8502815
    Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences supporting and exploiting scalability. The applied method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
    Type: Grant
    Filed: April 18, 2008
    Date of Patent: August 6, 2013
    Assignee: Gottfried Wilhelm Leibniz Universitat Hannover
    Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
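A rough sketch of the layer-wise prediction step described in the entry above, under assumed data structures (plain coordinate tuples); it is not the patent's codec. Each vertex is predicted from its previous-frame position plus the mean motion of already-decoded neighbouring vertices, and only the residual would then be entropy-coded.

```python
# Sketch only: predict a vertex in a finer layer from its previous-frame
# position plus the mean motion of already-decoded neighbouring vertices.
def predict_vertex(prev_pos, neighbour_motion):
    """prev_pos: (x, y, z) of this vertex in the previous frame.
    neighbour_motion: (dx, dy, dz) displacements of already-decoded neighbours."""
    if not neighbour_motion:
        return prev_pos  # no decoded neighbours yet: pure temporal prediction
    n = len(neighbour_motion)
    mean = tuple(sum(d[i] for d in neighbour_motion) / n for i in range(3))
    return tuple(prev_pos[i] + mean[i] for i in range(3))

def residual(actual, predicted):
    # The encoder would transmit only this difference.
    return tuple(actual[i] - predicted[i] for i in range(3))

if __name__ == "__main__":
    prev = (1.0, 2.0, 0.5)
    motions = [(0.1, 0.0, 0.0), (0.3, 0.0, 0.1)]
    pred = predict_vertex(prev, motions)
    print(pred)                              # approximately (1.2, 2.0, 0.55)
    print(residual((1.25, 2.0, 0.5), pred))  # small residual to encode
```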
  • Patent number: 8421804
    Abstract: Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness.
    Type: Grant
    Filed: February 16, 2005
    Date of Patent: April 16, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Sokratis Varakliotis
  • Publication number: 20130007209
    Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending the formulated suggestion from the receiver.
    Type: Application
    Filed: September 11, 2012
    Publication date: January 3, 2013
    Applicant: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
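The entry above describes a receiver-side feedback loop. Below is a minimal, hypothetical sketch of how such a media-parameter suggestion could be formulated from simple quality measurements; the thresholds, parameter names, and function are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch: map receiver-side quality measurements to a
# media-parameter suggestion that would be sent back to the encoder.
def formulate_suggestion(frame_loss_rate, decode_fps, target_fps=25):
    if frame_loss_rate > 0.05:
        return {"parameter": "bitrate", "action": "decrease"}
    if decode_fps < target_fps:
        return {"parameter": "resolution", "action": "decrease"}
    return {"parameter": "bitrate", "action": "increase"}

if __name__ == "__main__":
    # The receiver would transmit this suggestion upstream to the sender/encoder.
    print(formulate_suggestion(frame_loss_rate=0.08, decode_fps=30))
```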
  • Publication number: 20120262444
    Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences supporting and exploiting scalability. The applied method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
    Type: Application
    Filed: April 18, 2008
    Publication date: October 18, 2012
    Applicant: Gottfried Wilhelm Leibniz Universitat Hannover
    Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
  • Patent number: 8266128
    Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending the formulated suggestion from the receiver.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: September 11, 2012
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
  • Patent number: 8115772
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Grant
    Filed: April 8, 2011
    Date of Patent: February 14, 2012
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Patent number: 8086751
    Abstract: A system and method of delivering a multi-media message to a recipient is disclosed. The multi-media message is created by a sender and contains a talking entity for delivering a sender message. A determination is made as to whether the recipient device has rendering software for delivering a video portion of the multi-media message. If the recipient device does not have the rendering software, the multi-media message is streamed from a server such that a generic rendering software device will deliver the multi-media message.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: December 27, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar
  • Patent number: 8086464
    Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing sentences in a client cache of a client device that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether sentences or stored templates stored in the client cache relate to the talking head response. If the stored sentences or stored templates relate to the talking head response, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response and transmitting a portion of the talking head response not stored in the client cache, if any, to the client device to render a complete talking head response.
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: December 27, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
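As a rough illustration of the caching decision described in the entry above (not the patented system), the sketch below splits a talking-head response into sentences already present in a hypothetical client cache and the remainder the server would still need to transmit.

```python
# Sketch with a made-up cache layout: decide which parts of a talking-head
# response can be rendered from the client cache and which must be transmitted.
def plan_response(response_sentences, client_cache):
    cached, missing = [], []
    for sentence in response_sentences:
        (cached if sentence in client_cache else missing).append(sentence)
    return cached, missing

if __name__ == "__main__":
    cache = {"One moment please.", "Here is what I found:"}
    response = ["One moment please.", "Your balance is 42 euros."]
    local, to_send = plan_response(response, cache)
    print(local)    # rendered from the client cache (e.g. bridging sentences)
    print(to_send)  # remainder the server transmits for a complete response
```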
  • Publication number: 20110234588
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Application
    Filed: June 6, 2011
    Publication date: September 29, 2011
    Applicant: AT&T Intellectual Property II, L.P.
    Inventor: Joern Ostermann
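The two adjustments described in the entry above can be illustrated with a small sketch using placeholder data (not the patent's implementation): a vertical scale factor derived from a ratio of feature distances, and moving a wireframe point along a line through an origin until it lies at a given distance corresponding to the shape surface.

```python
# Sketch with placeholder geometry, not the patented method.
def vertical_scale_factor(model_feature_dist, wireframe_feature_dist):
    """E.g. distance between two facial features in the 3D range data
    divided by the corresponding distance in the generic wireframe."""
    return model_feature_dist / wireframe_feature_dist

def move_to_surface(point, origin, surface_distance):
    """Slide 'point' along the line through 'origin' (the scaling line)
    until its distance from the origin equals 'surface_distance'."""
    direction = [p - o for p, o in zip(point, origin)]
    length = sum(d * d for d in direction) ** 0.5
    return tuple(o + d / length * surface_distance for o, d in zip(origin, direction))

if __name__ == "__main__":
    print(vertical_scale_factor(12.0, 10.0))                        # 1.2
    print(move_to_surface((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), 3.0))   # (3.0, 0.0, 0.0)
```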
  • Publication number: 20110181605
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Application
    Filed: April 8, 2011
    Publication date: July 28, 2011
    Applicant: AT&T Intellectual Property II, L.P. via transfer from AT&T Corp.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Publication number: 20110175922
    Abstract: A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model with those of the modified model.
    Type: Application
    Filed: March 30, 2011
    Publication date: July 21, 2011
    Applicant: AT&T Corp.
    Inventors: Erich Haratsch, Joern Ostermann
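A minimal sketch of the parameter-definition idea in the entry above, assuming a simple vertex-list model representation (not the patented system): an animation parameter is stored as per-vertex displacements between the reference and modified models and applied with a weight at render time.

```python
# Sketch only: define an animation parameter as per-vertex displacements
# between a reference model and a modified model, then apply it with a weight.
def define_parameter(reference, modified):
    return [tuple(m - r for r, m in zip(rv, mv)) for rv, mv in zip(reference, modified)]

def apply_parameter(reference, displacement, weight=1.0):
    return [tuple(r + weight * d for r, d in zip(rv, dv))
            for rv, dv in zip(reference, displacement)]

if __name__ == "__main__":
    ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    mod = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]       # e.g. one vertex raised
    smile = define_parameter(ref, mod)
    print(apply_parameter(ref, smile, weight=0.5))  # half-strength deformation
```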
  • Patent number: 7956863
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Grant
    Filed: June 18, 2010
    Date of Patent: June 7, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventor: Joern Ostermann
  • Patent number: 7949109
    Abstract: A computing device and computer-readable medium storing instructions for controlling a computing device to customize a voice in a multi-media message created by a sender for a recipient, the multi-media message comprising a text message from the sender to be delivered by an animated entity. The instructions comprise receiving voice emoticons, which may be repeated, that the sender inserts into the text message and that are associated with parameters of a voice used by an animated entity to deliver the text message; and transmitting the text message such that a recipient device can deliver the multi-media message at a variable level associated with the number of times a respective voice emoticon is repeated.
    Type: Grant
    Filed: December 29, 2009
    Date of Patent: May 24, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Hans Peter Graf, Thomas M. Isaacson
  • Patent number: 7924286
    Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
    Type: Grant
    Filed: October 20, 2009
    Date of Patent: April 12, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
  • Patent number: 7920143
    Abstract: A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model with those of the modified model.
    Type: Grant
    Filed: August 20, 2007
    Date of Patent: April 5, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Erich Haratsch, Joern Ostermann
  • Patent number: 7921013
    Abstract: A system and method of providing sender-customization of multi-media messages through the use of emoticons is disclosed. The sender inserts the emoticons into a text message. As an animated face audibly delivers the text, emoticons associated with the message are started a predetermined period of time or number of words prior to the position of the emoticon in the message text and completed a predetermined length of time or number of words following the location of the emoticon. The sender may insert emoticons through the use of emoticon buttons, which are icons available for choosing. Upon sender selection of an emoticon, an icon representing the emoticon is inserted into the text at the position of the cursor. Once an emoticon is chosen, the sender may also choose the amplitude for the emoticon, and the increased or decreased amplitude will be displayed in the icon inserted into the message text.
    Type: Grant
    Filed: August 30, 2005
    Date of Patent: April 5, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Eric Cosatto, Hans Peter Graf, Yann Andre LeCun
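As a rough illustration of the entry above (not the patented method), the sketch below uses the word-count variant mentioned in the abstract to start an expression a few words before the emoticon's position and end it a few words after; the offsets and function name are assumptions.

```python
# Sketch with assumed word offsets: compute the window of words during which
# the animated face displays the expression tied to an emoticon in the text.
def expression_window(words, emoticon_index, lead_words=2, trail_words=2):
    start = max(0, emoticon_index - lead_words)
    end = min(len(words) - 1, emoticon_index + trail_words)
    return start, end

if __name__ == "__main__":
    text = "I am really happy :-) to see you tomorrow".split()
    print(expression_window(text, text.index(":-)")))  # (2, 6)
```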
  • Patent number: 7844463
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text and Facial Animation Parameters. A Text-To-Speech converter drives the mouth shapes of the face. An encoder sends Facial Animation Parameters to the face. The text input can include codes, or bookmarks, transmitted to the Text-to-Speech converter, which are placed between and inside words. The bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. The Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp and a real-time time stamp. The facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
    Type: Grant
    Filed: August 18, 2008
    Date of Patent: November 30, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
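The synchronization idea in the entry above can be sketched as a small lookup keyed by the encoder time stamp; the class and method names below are hypothetical and not part of the MPEG-4 specification or the patent.

```python
# Hypothetical sketch: bookmarks in the text carry an encoder time stamp
# (a counter). When the TTS converter reaches a bookmark it records the real
# time, and FAP frames carrying the same encoder time stamp are then
# scheduled at that real time.
class BookmarkSync:
    def __init__(self):
        self._real_time_by_ets = {}  # encoder time stamp -> real-time time stamp

    def on_bookmark(self, encoder_ts, real_time):
        """Called when the TTS converter reaches a bookmark in the text."""
        self._real_time_by_ets[encoder_ts] = real_time

    def schedule_fap(self, encoder_ts):
        """Real time at which a FAP frame with this encoder time stamp should
        be rendered (None if the bookmark has not been reached yet)."""
        return self._real_time_by_ets.get(encoder_ts)

if __name__ == "__main__":
    sync = BookmarkSync()
    sync.on_bookmark(encoder_ts=42, real_time=3.75)  # TTS hit bookmark 42 at t = 3.75 s
    print(sync.schedule_fap(42))                     # 3.75
```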
  • Publication number: 20100253703
    Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
    Type: Application
    Filed: June 18, 2010
    Publication date: October 7, 2010
    Applicant: AT&T Intellectual Property II, L.P. via transfer from AT&T Corp.
    Inventor: Joern Ostermann