Patents by Inventor Joern Ostermann
Joern Ostermann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20080040227
Abstract: A system and method of advertising using a multi-media application system is disclosed. The multi-media application relates to the delivery of multi-media messages using animated entities that audibly deliver messages created by a sender using text-to-speech technologies. The method provides targeted advertising based on information learned about both the sender of a multi-media message and the recipient of the multi-media message. The information may relate to an analysis of a text message created by the sender, emoticons chosen by the sender and inserted into the text of the message, the choice by the sender of an animated entity, or other parameters such as the background music chosen for the template selected by the sender. Advertising messages may be delivered before the recipient receives the multi-media message, during its reception, or following its reception.
Type: Application
Filed: August 14, 2007
Publication date: February 14, 2008
Applicant: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Civanlar, Barbara Buda, Thomas Isaacson
-
Publication number: 20080015861
Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing sentences in a client cache of a client device that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether sentences or stored templates stored in the client cache relate to the talking head response. If the stored sentences or stored templates relate to the talking head response, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response and transmitting a portion of the talking head response not stored in the client cache, if any, to the client device to render a complete talking head response.
Type: Application
Filed: July 16, 2007
Publication date: January 17, 2008
Applicant: AT&T Corp.
Inventors: Eric Cosatto, Hans Graf, Joern Ostermann
-
Patent number: 7310811
Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending, from the receiver, the formulated suggestion.
Type: Grant
Filed: July 10, 1998
Date of Patent: December 18, 2007
Assignee: AT&T Corp.
Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
-
Patent number: 7274367
Abstract: A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model.
Type: Grant
Filed: July 12, 2005
Date of Patent: September 25, 2007
Assignee: AT&T Corp.
Inventors: Erich Haratsch, Joern Ostermann
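The parameter-definition step described above can be sketched as storing, per vertex, the displacement between the reference model and the modified model, then replaying that displacement at render time. This is a minimal illustrative sketch; the function names and the `weight` blending parameter are assumptions, not taken from the patent.

```python
# Hypothetical sketch: derive an animation parameter as per-vertex
# displacements between a reference model and a modified model.

def derive_parameter(reference_vertices, modified_vertices):
    """Store, for each vertex, the displacement from reference to modified."""
    assert len(reference_vertices) == len(modified_vertices)
    return [
        tuple(m - r for r, m in zip(ref, mod))
        for ref, mod in zip(reference_vertices, modified_vertices)
    ]

def apply_parameter(reference_vertices, displacements, weight=1.0):
    """Render time: offset each vertex by its stored displacement, scaled."""
    return [
        tuple(r + weight * d for r, d in zip(ref, disp))
        for ref, disp in zip(reference_vertices, displacements)
    ]

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mod = [(0.0, 0.5, 0.0), (1.0, 0.0, 0.0)]  # only the first vertex moved
param = derive_parameter(ref, mod)
print(param)
print(apply_parameter(ref, param, weight=0.5))
```

Scaling the stored displacement (here via `weight`) is one plausible way a renderer could animate the parameter continuously between the reference pose and the fully modified pose.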
-
Patent number: 7260539
Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing sentences in a client cache of a client device that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether sentences or stored templates stored in the client cache relate to the talking head response. If the stored sentences or stored templates relate to the talking head response, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response and transmitting a portion of the talking head response not stored in the client cache, if any, to the client device to render a complete talking head response.
Type: Grant
Filed: April 25, 2003
Date of Patent: August 21, 2007
Assignee: AT&T Corp.
Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
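The cache-aware response assembly in this abstract amounts to splitting a response into units, reusing the ones already held in the client cache, and transmitting only the remainder. The sketch below assumes sentence-level units and an in-memory set as the cache; both are illustrative simplifications, not the patent's actual protocol.

```python
# Illustrative sketch of cache-aware delivery of a talking head response:
# reuse sentences already in the client cache, transmit only the rest.

def plan_response(response_sentences, client_cache):
    """Return (cached, to_transmit): which sentences the client can render
    from its cache, and which must be sent to complete the response."""
    cached, to_transmit = [], []
    for sentence in response_sentences:
        if sentence in client_cache:
            cached.append(sentence)
        else:
            to_transmit.append(sentence)
    return cached, to_transmit

# Hypothetical cache of delay-bridging sentences stored on the client.
cache = {"One moment please.", "Thanks for waiting."}
response = ["One moment please.", "Your balance is 42 dollars."]
cached, sent = plan_response(response, cache)
print(cached)  # rendered from the client cache
print(sent)    # transmitted to complete the talking head response
```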
-
Patent number: 7203759
Abstract: A system and method of delivering a multi-media message to a recipient is disclosed. The multi-media message is created by a sender and contains a talking entity for delivering a sender message. A determination is made as to whether the recipient device has rendering software for delivering a video portion of the multi-media message. If the recipient device does not have the rendering software, the multi-media message is streamed from a server such that a generic rendering software device will deliver the multi-media message.
Type: Grant
Filed: August 27, 2005
Date of Patent: April 10, 2007
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar
-
Patent number: 7203648
Abstract: A system and method of creating a customized multi-media message for a recipient is disclosed. The multi-media message is created by a sender and contains an animated entity that delivers an audible message. The sender chooses the animated entity from a plurality of animated entities. The system receives a text message from the sender and receives a sender audio message associated with the text message. The sender audio message is associated with the chosen animated entity to create the multi-media message. The multi-media message is delivered by the animated entity using the sender audio message as its voice, wherein the mouth movements of the animated entity conform to the sender audio message.
Type: Grant
Filed: November 2, 2001
Date of Patent: April 10, 2007
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar, Barbara Buda, Claudio Lande
-
Patent number: 7177811
Abstract: A method is provided for customizing a multi-media message created by a sender for a recipient, in which the multi-media message includes an animated entity audibly presenting speech converted from text by the sender. At least one image is received from the sender. Each of the at least one image is associated with a tag. The sender is presented with options to insert the tag associated with one of the at least one image into the sender text.
Type: Grant
Filed: March 6, 2006
Date of Patent: February 13, 2007
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Barbara Buda, Mehmet Reha Civanlar, Eric Cosatto, Hans Peter Graf, Thomas M. Isaacson, Yann Andre LeCun
-
Patent number: 7148889
Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
Type: Grant
Filed: September 27, 2005
Date of Patent: December 12, 2006
Assignee: AT&T Corp.
Inventor: Joern Ostermann
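The two adjustments the abstract describes can be sketched numerically: first scale the wireframe vertically by the ratio of measured to generic feature distances, then move each point along a horizontal line through a central origin axis until it lies on the measured shape surface. The sketch below assumes the origin axis is the vertical axis and represents the surface by its radius in that horizontal plane; both are simplifying assumptions for illustration.

```python
import math

# Hypothetical sketch of fitting a generic animation wireframe to measured
# 3D range data: vertical scaling, then radial snapping to the shape surface.

def scale_vertically(point, generic_distance, measured_distance):
    """Scale the y coordinate by the ratio of measured to generic
    feature-to-feature distance."""
    x, y, z = point
    return (x, y * measured_distance / generic_distance, z)

def snap_to_surface(point, surface_radius):
    """Move the point along the horizontal line through the vertical origin
    axis so its horizontal distance from the axis equals the surface radius."""
    x, y, z = point
    r = math.hypot(x, z)
    if r == 0:
        return point  # point is on the axis; no unique horizontal direction
    s = surface_radius / r
    return (x * s, y, z * s)

p = scale_vertically((3.0, 1.0, 4.0), generic_distance=4.0, measured_distance=4.0)
print(snap_to_surface(p, surface_radius=10.0))  # (6.0, 1.0, 8.0)
```

Note how `snap_to_surface` preserves the y coordinate: the scaling line stays within a horizontal plane, as the abstract states.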
-
Patent number: 7110950
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Grant
Filed: January 7, 2005
Date of Patent: September 19, 2006
Assignee: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
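The bookmark mechanism can be illustrated by separating the embedded codes from the plain text while recording each code's counter and position. The `<FAP n>` syntax below is purely hypothetical; the abstract only says the codes carry an encoder time stamp interpreted as a counter, which a decoder would match against the same counter in the Facial Animation Parameter stream.

```python
import re

# Illustrative parsing of bookmark codes embedded in a TTS text string.
# The "<FAP n>" bookmark syntax is an assumption for this sketch.

def split_bookmarks(text):
    """Return (plain_text, [(counter, char_position), ...]).
    Positions index into the plain text with bookmarks removed, so a
    bookmark may fall between words or inside a word."""
    bookmarks, plain, pos = [], [], 0
    for token in re.split(r"(<FAP \d+>)", text):
        m = re.fullmatch(r"<FAP (\d+)>", token)
        if m:
            bookmarks.append((int(m.group(1)), pos))
        else:
            plain.append(token)
            pos += len(token)
    return "".join(plain), bookmarks

text, marks = split_bookmarks("Hello <FAP 1>wor<FAP 2>ld")
print(text)   # "Hello world"
print(marks)  # [(1, 6), (2, 9)]
```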
-
Publication number: 20060181536
Abstract: Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness.
Type: Application
Filed: February 16, 2005
Publication date: August 17, 2006
Applicant: AT&T Corp.
Inventors: Joern Ostermann, Sokratis Varakliotis
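The core budgeting idea, unequal protection per layer under a fixed overall bitrate, can be sketched as splitting the redundancy budget (total budget minus payload) across layers in proportion to importance weights. The layer sizes and weights below are illustrative numbers, not values from the application.

```python
# Minimal sketch of unequal error protection (UEP) across animation layers
# while holding the total bitrate fixed. All numbers are illustrative.

def allocate_uep(layer_bits, total_budget, weights):
    """Split the redundancy budget across layers in proportion to their
    importance weights; the base layer typically gets the largest weight."""
    payload = sum(layer_bits)
    redundancy = total_budget - payload
    assert redundancy >= 0, "budget must at least cover the payload"
    total_w = sum(weights)
    return [redundancy * w / total_w for w in weights]

# Base layer protected most strongly; enhancement layers progressively less.
fec = allocate_uep(layer_bits=[800, 600, 600], total_budget=2500,
                   weights=[3, 2, 1])
print(fec)  # redundancy bits per layer; sums to the 500-bit budget
```

With this split, losing enhancement-layer packets degrades the animation gracefully while the heavily protected base layer keeps it decodable, which is the behavior the abstract describes.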
-
Patent number: 7091976
Abstract: A method of creating a personal animated entity for delivering a multi-media message from a sender to a recipient is disclosed. The method comprises receiving from the sender an image file at a server, the image file having an entity and a background image. The server presents to the sender the image file and requests the sender to mark features on the image file. After the sender marks the image file, the server presents to the sender the image file as an optional animated entity when the sender chooses an animated entity to deliver a multi-media message. If the sender selects the image file for delivering the multi-media message, the server delivers the multi-media message using the personal animated entity in the context of the background image of the image file. Extrapolation is used to fill in background voids created by the movement of the personal animated entity in the context of the background image.
Type: Grant
Filed: November 2, 2001
Date of Patent: August 15, 2006
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
-
Patent number: 7076426
Abstract: An enhanced system is achieved by allowing bookmarks which can specify that the stream of bits that follow corresponds to phonemes and a plurality of prosody information, including duration information, that is specified for times within the duration of the phonemes. Illustratively, such a stream comprises a flag to enable a duration flag, a flag to enable a pitch contour flag, a flag to enable an energy contour flag, a specification of the number of phonemes that follow, and, for each phoneme, one or more sets of specific prosody information that relates to the phoneme, such as a set of pitch values and their durations.
Type: Grant
Filed: January 27, 1999
Date of Patent: July 11, 2006
Assignee: AT&T Corp.
Inventors: Mark Charles Beutnagel, Joern Ostermann, Schuyler Reynier Quackenbush
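The stream layout the abstract sketches (three enable flags, a phoneme count, then per-phoneme prosody such as pitch values with times) can be modeled as a small data structure. The field names and units below are assumptions for illustration; the patent describes a bitstream, not this in-memory form.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory model of the prosody bookmark layout described in
# the abstract: enable flags, then per-phoneme prosody information.

@dataclass
class Phoneme:
    symbol: str
    duration_ms: int = 0
    # (time_ms, value) pairs specified for times within the phoneme's duration
    pitch_contour: list = field(default_factory=list)
    energy_contour: list = field(default_factory=list)

@dataclass
class ProsodyBookmark:
    duration_enabled: bool
    pitch_enabled: bool
    energy_enabled: bool
    phonemes: list

bm = ProsodyBookmark(
    duration_enabled=True, pitch_enabled=True, energy_enabled=False,
    phonemes=[Phoneme("AH", duration_ms=90,
                      pitch_contour=[(0, 120), (45, 140)])],
)
print(len(bm.phonemes), bm.phonemes[0].duration_ms)
```

In an actual bitstream the flags would tell the decoder which per-phoneme fields to expect, and the phoneme count would tell it how many records follow.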
-
Patent number: 7035803
Abstract: A system and method of providing sender customization of multi-media messages through the use of inserted images or video. The images or video may be sender-created or predefined and available to the sender via a web server. The method relates to customizing a multi-media message created by a sender for a recipient, the multi-media message having an animated entity audibly presenting speech converted from text created by the sender. The method comprises receiving at least one image from the sender, associating each at least one image with a tag, presenting the sender with options to insert the tag associated with one of the at least one image into the sender text, and, after the sender inserts the tag associated with one of the at least one image into the sender text, delivering the multi-media message with the at least one image presented as background to the animated entity according to a position of the tag associated with the at least one image in the sender text.
Type: Grant
Filed: November 2, 2001
Date of Patent: April 25, 2006
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Barbara Buda, Mehmet Reha Civanlar, Eric Cosatto, Hans Peter Graf, Thomas M. Isaacson, Yann Andre LeCun
-
Patent number: 6989834
Abstract: An animation wireframe is modified with three-dimensional (3D) range and color data having a corresponding shape surface. The animation wireframe is vertically scaled based on distances between consecutive features within the 3D range and color data and corresponding distances within the generic animation wireframe. For each animation wireframe point, the location of the animation wireframe point is adjusted to coincide with a point on the shape surface. The shape surface point lies along a scaling line connecting the animation wireframe point, the shape surface point and an origin point. The scaling line is within a horizontal plane.
Type: Grant
Filed: June 11, 2001
Date of Patent: January 24, 2006
Assignee: AT&T Corp.
Inventor: Joern Ostermann
-
Patent number: 6990452
Abstract: A system and method of providing sender-customization of multi-media messages through the use of emoticons is disclosed. The sender inserts the emoticons into a text message. As an animated face audibly delivers the text, emoticons associated with the message are started a predetermined period of time or number of words prior to the position of the emoticon in the message text and completed a predetermined length of time or number of words following the location of the emoticon. The sender may insert emoticons through the use of emoticon buttons that are icons available for choosing. Upon the sender's selection of an emoticon, an icon representing the emoticon is inserted into the text at the position of the cursor. Once an emoticon is chosen, the sender may also choose the amplitude for the emoticon, and increased or decreased amplitude will be displayed in the icon inserted into the message text.
Type: Grant
Filed: November 2, 2001
Date of Patent: January 24, 2006
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar, Eric Cosatto, Hans Peter Graf, Yann Andre LeCun
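The timing rule above, starting an expression a fixed number of words before the emoticon's position and ending a fixed number after, reduces to a clamped word-index window. The word-based variant and the default lead/trail values below are illustrative assumptions (the abstract also allows time-based offsets).

```python
# Illustrative timing of a facial expression around an emoticon's position
# in the text: start a few words early, finish a few words late.

def expression_window(words, emoticon_index, lead_words=2, trail_words=2):
    """Return the (start, end) word-index range, clamped to the message,
    over which the emoticon's expression is animated."""
    start = max(0, emoticon_index - lead_words)
    end = min(len(words) - 1, emoticon_index + trail_words)
    return start, end

words = "I am so happy to see you".split()
# Emoticon attached after "happy" (index 3): expression spans words 1..5.
print(expression_window(words, emoticon_index=3))  # (1, 5)
```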
-
Patent number: 6976082
Abstract: A system and method of delivering a multi-media message to a recipient is disclosed. The multi-media message is created by a sender and contains a talking entity for delivering a text message using text-to-speech means. The method comprises transmitting to the recipient a message containing a link to the multi-media message, wherein the multi-media message is contained on a server. Upon the recipient clicking the link to the multi-media message, the method comprises determining whether a client terminal associated with the recipient contains client software to deliver the multi-media message. If client software exists on the client terminal, the method comprises determining whether permission is granted for delivering the multi-media message. If client software exists on the client terminal and permission is granted, the multi-media message is delivered to the recipient using the client software.
Type: Grant
Filed: November 2, 2001
Date of Patent: December 13, 2005
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar
-
Patent number: 6970172
Abstract: A process is defined for the rapid definition of new animation parameters for proprietary renderers. The process accommodates the peculiarities of proprietary models. In a first step, a proprietary model is animated in a standard modeler and the animated models are saved as VRML files. A converter is used to extract the meaning of a newly defined animation parameter by comparing two or more of the VRML files. Thus, the output of this process is the model and a table describing the new animation parameter. This information is read by the renderer and used whenever the animation parameter is required. The process can easily be used to generate new shapes from the original model.
Type: Grant
Filed: November 12, 2002
Date of Patent: November 29, 2005
Assignee: AT&T Corp.
Inventors: Erich Haratsch, Joern Ostermann
-
Patent number: 6963839
Abstract: A method for customizing a voice in a multi-media message created by a sender for a recipient is disclosed. The multi-media message comprises a text message from the sender to be delivered by an animated entity. The method comprises presenting an option to the sender to insert voice emoticons into the text message associated with parameters of a voice used by the animated entity to deliver the text message. The message is then delivered wherein the voice of the animated entity is modified throughout the message according to the voice emoticons. The voice emoticons may relate to features such as voice stress, volume, pauses, emotion, yelling, or whispering. After the sender inserts various voice emoticons into the text of the message, the animated entity delivers the multi-media message giving effect to each voice emoticon in the text. A volume or intensity of the voice emoticons may be given effect by repeating the tags.
Type: Grant
Filed: November 2, 2001
Date of Patent: November 8, 2005
Assignee: AT&T Corp.
Inventors: Joern Ostermann, Mehmet Reha Civanlar, Hans Peter Graf, Thomas M. Isaacson
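The last sentence of the abstract, intensity conveyed by repeating a tag, can be modeled by counting occurrences of each tag while stripping them from the text. The `[yell]`-style tag syntax is a hypothetical stand-in; the patent does not specify one.

```python
import re

# Hypothetical voice-emoticon tags: repetition of a tag raises its
# intensity, modeled here as a simple per-tag count.

def parse_voice_tags(text):
    """Strip tags like [yell] from the text and return
    (clean_text, {tag: intensity})."""
    intensity = {}

    def collect(match):
        tag = match.group(1)
        intensity[tag] = intensity.get(tag, 0) + 1
        return ""  # remove the tag from the delivered text

    clean = re.sub(r"\[(\w+)\]", collect, text)
    return " ".join(clean.split()), intensity

clean, levels = parse_voice_tags("[yell][yell]Get over here [whisper]now")
print(clean)   # "Get over here now"
print(levels)  # {'yell': 2, 'whisper': 1}
```

A TTS front end could then map each intensity level to synthesis parameters, e.g. a larger volume or pitch change for `yell` at level 2 than at level 1.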
-
Publication number: 20050243092
Abstract: A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model.
Type: Application
Filed: July 12, 2005
Publication date: November 3, 2005
Applicant: AT&T Corp.
Inventors: Erich Haratsch, Joern Ostermann