Language Driven Animation Patents (Class 345/956)

Cross-Reference Art Collections

Actor (Class 345/957)
  • Patent number: 8433299
    Abstract: A system that facilitates the creation and dissemination of interactive media to a plurality of mobile devices. A computer comprising an interactive media generator is used to generate interactive media and communicate it to a distribution server. Mobile devices have an interactive media client component to receive and present interactive media to a user.
    Type: Grant
    Filed: February 15, 2012
    Date of Patent: April 30, 2013
    Inventor: Bindu Rama Rao
  • Patent number: 8068107
    Abstract: In a multimedia presentation having speech and graphic contributions, a list of graphic objects is provided. Each graphic object is associated with a graphic file capable of being executed by a computer to display a corresponding graphic contribution on a screen. A speech file comprising a sequence of phrases is also created, each phrase comprising a speech contribution explaining at least one graphic contribution associated with a respective graphic object. An arrangement string is then created, obtained as a sequence of a first graphic object and a respective first phrase, then a second graphic object and a respective second phrase, and so on until all graphic objects and phrases of the list and of the speech file are exhausted. A processing speed for displaying the graphic objects is chosen.
    Type: Grant
    Filed: November 22, 2004
    Date of Patent: November 29, 2011
    Inventor: Mario Pirchio
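
A minimal sketch, in Python with invented names, of the interleaving the abstract above describes: each graphic object is paired with the phrase that explains it to form the arrangement sequence. None of these names come from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class GraphicObject:
    name: str
    graphic_file: str  # file executed to display this graphic contribution

@dataclass
class Phrase:
    text: str  # speech contribution explaining the matching graphic

def build_arrangement(graphics, phrases):
    """Interleave graphics and phrases: g1, p1, g2, p2, ... (hypothetical)."""
    if len(graphics) != len(phrases):
        raise ValueError("each graphic object needs exactly one phrase")
    arrangement = []
    for graphic, phrase in zip(graphics, phrases):
        arrangement.append(graphic)
        arrangement.append(phrase)
    return arrangement

slides = [GraphicObject("intro", "intro.svg"), GraphicObject("chart", "chart.svg")]
script = [Phrase("Welcome to the talk."), Phrase("This chart shows the results.")]
print(build_arrangement(slides, script))
```
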
  • Patent number: 8063905
    Abstract: Animating speech of an avatar representing a participant in a mobile communication, including: selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Also: receiving an audio speech signal; identifying a series of phonemes; and, for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme.
    Type: Grant
    Filed: October 11, 2007
    Date of Patent: November 22, 2011
    Assignee: International Business Machines Corporation
    Inventors: William A. Brown, Richard W. Muirhead, Francis X. Reddington, Martin A. Wolfe
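
The per-phoneme loop in the abstract above can be sketched as follows; the phoneme-to-mouth table and the render/play callbacks are assumptions made for illustration, not part of the patent.

```python
# Hypothetical phoneme-to-viseme table; real systems use a fuller mapping.
PHONEME_TO_MOUTH = {
    "AA": "open_wide",
    "M":  "closed",
    "F":  "teeth_on_lip",
    "IY": "spread",
}

def animate_speech(phonemes, render_mouth, play_audio_segment):
    """For each phoneme: move the template mouth, re-wrap and display the
    texture (delegated to render_mouth), then play the matching audio."""
    for phoneme, audio_segment in phonemes:
        mouth_position = PHONEME_TO_MOUTH.get(phoneme, "neutral")
        render_mouth(mouth_position)       # alter mouth, texture wrap, display
        play_audio_segment(audio_segment)  # play this phoneme's audio slice

# Stub callbacks so the sketch runs standalone.
animate_speech(
    [("M", b"..."), ("AA", b"...")],
    render_mouth=lambda pos: print(f"mouth -> {pos}"),
    play_audio_segment=lambda seg: None,
)
```
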
  • Patent number: 7990384
    Abstract: A system and method for generating photo-realistic talking-head animation from a text input utilizes an audio-visual unit selection process. The lip-synchronization is obtained by optimally selecting and concatenating variable-length video units of the mouth area. The unit selection process utilizes the acoustic data to determine the target costs for the candidate images and utilizes the visual data to determine the concatenation costs. The image database is prepared in a hierarchical fashion, including high-level features (such as a full 3D modeling of the head, geometric size and position of elements) and pixel-based, low-level features (such as a PCA-based metric for labeling the various feature bitmaps).
    Type: Grant
    Filed: September 15, 2003
    Date of Patent: August 2, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Gerasimos Potamianos, Juergen Schroeter
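
The target-cost/concatenation-cost selection described above is, in essence, a cheapest-path search over candidate video units. A sketch with toy numeric costs follows; the patent's actual acoustic and visual metrics are not reproduced here.

```python
def select_units(targets, candidates, target_cost, concat_cost):
    """Dynamic programming: pick one candidate unit per target so that the
    sum of target costs plus concatenation costs is minimal."""
    n = len(targets)
    # best[i][c] = (total cost of best path ending in c at step i, backpointer)
    best = [{c: (target_cost(targets[0], c), None) for c in candidates[0]}]
    for i in range(1, n):
        row = {}
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            prev = min(best[i - 1],
                       key=lambda p: best[i - 1][p][0] + concat_cost(p, c))
            row[c] = (best[i - 1][prev][0] + concat_cost(prev, c) + tc, prev)
        best.append(row)
    # Trace back the cheapest path.
    last = min(best[-1], key=lambda c: best[-1][c][0])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    return list(reversed(path))

# Toy example: units are plain numbers, costs are simple distances.
units = select_units(
    targets=[1.0, 2.0, 3.0],
    candidates=[[0.5, 1.5], [1.8, 2.6], [2.9, 3.4]],
    target_cost=lambda t, c: abs(t - c),
    concat_cost=lambda a, b: abs(a - b) * 0.5,
)
print(units)
```
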
  • Patent number: 7652669
    Abstract: A system for creating an on-line book with an animated cover. The system includes an animation program for inserting an animation sequence at the beginning of an on-line book which is compiled into the M14 format. The animation program includes: a user interface module configured to receive input from a user; a data sequencing module which arranges at least two graphical images in a sequence; and an update module which modifies at least one compilation control file. The animation program modifies the control files for an on-line book compiler to provide for the display of an animated object upon the opening of the on-line book.
    Type: Grant
    Filed: September 23, 2005
    Date of Patent: January 26, 2010
    Assignee: Micron Technology, Inc.
    Inventor: James A. McKeeth
  • Patent number: 7624019
    Abstract: A system is configured to enable a user to assert voice-activated commands. When the user issues a non-ambiguous command, the system activates a corresponding control. The area of activity on the user interface is visually highlighted to emphasize to the user that what they spoke caused an action. In one specific embodiment, the highlighting involves floating the text the user uttered to a visible user interface component.
    Type: Grant
    Filed: October 17, 2005
    Date of Patent: November 24, 2009
    Assignee: Microsoft Corporation
    Inventor: Felix Andrew
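
A toy sketch of the command-to-control mapping implied by the abstract: an unambiguous utterance resolves to a named control, and the spoken text would be "floated" to it. The control names and coordinates are invented.

```python
# Hypothetical registry of voice-activatable controls and screen positions.
CONTROLS = {"open file": (40, 12), "save": (90, 12), "print": (140, 12)}

def handle_utterance(utterance):
    target = CONTROLS.get(utterance.lower().strip())
    if target is None:
        print(f"ambiguous or unknown command: {utterance!r}")
        return
    # In a real UI this would animate the spoken text toward the control.
    print(f"floating {utterance!r} to control at {target}")

handle_utterance("Save")
```
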
  • Patent number: 7546581
    Abstract: A mechanism for incorporating user input modes in a scripting language is provided. A context allows use of user input modes in a scripting language in a manner that corresponds to their use in a GUI. A programming construct, referred to as a context, specifies at least one user input mode and a state for that mode, which are applied to a set of instructions. The operations specified by the instructions referenced by a context are executed as if the user input modes referred to by the context have the state the context specifies.
    Type: Grant
    Filed: February 24, 2005
    Date of Patent: June 9, 2009
    Assignee: Autodesk, Inc.
    Inventor: John Wainwright
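
Python's context managers give a rough analogy to the "context" construct described above: instructions inside the block execute under a forced input-mode state, which is restored on exit. The mode names are invented.

```python
from contextlib import contextmanager

# Hypothetical global input-mode states, as a GUI toggle might hold them.
INPUT_MODES = {"caps_lock": False, "snap_to_grid": False}

@contextmanager
def input_mode(name, state):
    saved = INPUT_MODES[name]
    INPUT_MODES[name] = state        # apply the state named by the context
    try:
        yield
    finally:
        INPUT_MODES[name] = saved    # restore the prior state on exit

with input_mode("snap_to_grid", True):
    # Instructions here run as if snap-to-grid is on, regardless of the
    # current GUI toggle.
    print(INPUT_MODES["snap_to_grid"])  # True
print(INPUT_MODES["snap_to_grid"])      # False again
```
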
  • Patent number: 7477253
    Abstract: An animation image generating program is provided which allows animation images to be readily generated by CG without complicated setups; in particular, the program is suited to generating a plurality of types of face animation images by CG. The program includes the steps of: controlling selection of specific vertices of a standard model and a user model; providing control such that first target vertices are associated with second target vertices, where the first target vertices are the selected specific vertices of the standard model and the second target vertices are the selected specific vertices of the user model; providing control by arithmetic means such that coordinates of the first target vertices approximate those of the second target vertices, to generate fitting information; and animating the user model based on animation data of the standard model and on the fitting information.
    Type: Grant
    Filed: May 20, 2003
    Date of Patent: January 13, 2009
    Assignee: Sega Corporation
    Inventors: Hirokazu Kudoh, Kazunori Nakamura, Shigeo Morishima
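
A toy rendition of the fitting step described above: selected vertices of a standard model are iteratively moved toward the corresponding user-model vertices, and the residual offsets are kept as the fitting information. Plain tuples stand in for real mesh data.

```python
def fit_vertices(standard, user, steps=10):
    """Iteratively nudge each standard-model vertex toward its user-model
    counterpart, then return per-vertex residual offsets as fitting info."""
    fitted = list(standard)
    for _ in range(steps):
        fitted = [
            tuple(f + (u - f) / steps for f, u in zip(fv, uv))
            for fv, uv in zip(fitted, user)
        ]
    return [tuple(u - f for f, u in zip(fv, uv))
            for fv, uv in zip(fitted, user)]

standard_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
user_pts = [(0.1, 0.2, 0.0), (1.1, 0.1, 0.0)]
print(fit_vertices(standard_pts, user_pts))
```
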
  • Patent number: 7450126
    Abstract: The illustrated and described embodiments describe techniques for capturing data that describes 3-dimensional (3-D) aspects of a face, transforming facial motion from one individual to another in a realistic manner, and modeling skin reflectance.
    Type: Grant
    Filed: April 24, 2006
    Date of Patent: November 11, 2008
    Assignee: Microsoft Corporation
    Inventors: Stephen Marschner, Brian K. Guenter, Sashi Raghupathy, Kirk Olynyk, Sing Bing Kang
  • Patent number: 7420564
    Abstract: Shape animation is described. In one aspect, examples that pertain to a shape or motion that is to be animated are provided. The examples are placed within a multi-dimensional abstract space. Each dimension of the abstract space is defined by at least one of an adjective and an adverb. A point within the multi-dimensional abstract space is selected. The selected point does not coincide with a point that is associated with any of the examples. The selected point corresponds to a shape or motion within the abstract space. A single weight value for each of the examples is computed. The single weight values for each of the examples are combined in a manner that defines an interpolated shape or motion that is a blended combination of each of the examples of the set of examples.
    Type: Grant
    Filed: April 29, 2005
    Date of Patent: September 2, 2008
    Assignee: Microsoft Corporation
    Inventors: Michael F. Cohen, Charles F. Rose, III, Peter-Pike Sloan
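
One simple way to realize the blend-at-a-selected-point idea above is inverse-distance weighting over the example positions. The patent computes its single weight values differently, so treat this only as an illustration.

```python
import math

def blend_at(point, examples):
    """examples: list of (position, value) pairs placed in the abstract
    space; returns the inverse-distance-weighted blend at `point`."""
    weights = []
    for position, value in examples:
        d = math.dist(point, position)
        if d == 0:
            return value  # exactly on an example: no blending needed
        weights.append((1.0 / d, value))
    total = sum(w for w, _ in weights)
    return sum(w * v for w, v in weights) / total

# 2-D abstract space: axis 0 = "happy", axis 1 = "energetic"; the values
# could be, say, a joint angle of the animated shape.
examples = [((0.0, 0.0), 10.0), ((1.0, 0.0), 30.0), ((0.0, 1.0), 50.0)]
print(blend_at((0.5, 0.5), examples))
```
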
  • Patent number: 7388586
    Abstract: Methods and apparatus for representing speech in an animated image. In one embodiment, key points on the object to be animated are defined, and a table of trajectories is generated to map positions of the key points over time as the object performs defined actions accompanied by corresponding sounds. In another embodiment, the table of trajectories and a sound rate of the video are used to generate a frame list that includes information to render an animated image of the object in real time at a rate determined by the sound rate. In still another embodiment, a 2D animation of a human speaker is produced, with key points selected from the Moving Picture Experts Group 4 (MPEG-4) defined points for human lips and teeth.
    Type: Grant
    Filed: March 31, 2005
    Date of Patent: June 17, 2008
    Assignee: Intel Corporation
    Inventors: Minerva Yeung, Ping Du, Chao Huang
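
A hypothetical frame-list builder in the spirit of the abstract: each key point's trajectory is sampled at a frame rate derived from the sound rate. The trajectory representation and the 735-samples-per-frame figure (44.1 kHz audio at 60 fps) are assumptions.

```python
def build_frame_list(trajectories, duration_s, sound_rate_hz, samples_per_frame=735):
    """Each trajectory maps time (s) -> (x, y) for one key point.
    735 samples/frame corresponds to 44.1 kHz audio at 60 fps."""
    fps = sound_rate_hz / samples_per_frame
    frame_count = int(duration_s * fps)
    frames = []
    for i in range(frame_count):
        t = i / fps
        frames.append([traj(t) for traj in trajectories])  # one pose per frame
    return frames

# Two key points: one fixed, one moving along x over time.
frames = build_frame_list(
    trajectories=[lambda t: (0.0, 0.0), lambda t: (t * 10.0, 1.0)],
    duration_s=1.0,
    sound_rate_hz=44100,
)
print(len(frames), frames[0])
```
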
  • Patent number: 7253817
    Abstract: A virtual human interface for conducting survey questioning comprises a system and method which may include a script file containing survey question data, response pattern data, expression data, advertising data, entertainment data, lobbying data, and/or processing instructions. The expression data may include mouth and eye expression data along with respective duration data indicating the length of time of the corresponding visual representations. The virtual human interface may interact with a user by representing a character object that personifies communicative behavior to make interaction more natural and enjoyable.
    Type: Grant
    Filed: August 20, 2004
    Date of Patent: August 7, 2007
    Assignee: Virtual Personalities, Inc.
    Inventors: Peter M. Plantec, Michael L. Mauldin, Jeremy Shay Romero, Aaron J. McBride
  • Patent number: 7209153
    Abstract: An auditory representation of a personal profile of a subject is generated by first acquiring a plurality of personal attributes of the subject through an assessment tool. The plurality of personal attributes is stored on a first computer system. The plurality of personal attributes is converted to a plurality of musical elements. The plurality of musical elements is arranged to form an auditory representation of the personal profile of the subject.
    Type: Grant
    Filed: March 2, 2005
    Date of Patent: April 24, 2007
    Inventor: Barbara Lehman
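
A minimal illustration of the attribute-to-music conversion described above: each attribute score selects a note from a fixed scale. The scale, the 0-100 score range, and the attribute names are all invented.

```python
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def profile_to_melody(attributes):
    """Map each 0-100 attribute score to a note in a fixed scale."""
    melody = []
    for name, score in attributes.items():
        index = min(score * len(C_MAJOR) // 101, len(C_MAJOR) - 1)
        melody.append((name, C_MAJOR[index]))
    return melody

print(profile_to_melody({"openness": 80, "patience": 35, "drive": 99}))
```
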
  • Patent number: 7198490
    Abstract: A computer based training tool and method that emulates human behavior using a computer-simulated person in a realistic scenario. It provides an interactive experience in detecting deception during interviews and acceptance of statements during interpersonal conversations. The simulated person provides verbal responses in combination with an animated video display reflecting the body language of the simulated person in response to questions asked and during and after responses to the questions. The questions and responses are pre-programmed and interrelated groups of questions and responses are maintained in dynamic tables which are constantly adjusted as a function of questions asked and responses generated. The system provides a critique and numerical score for each training session.
    Type: Grant
    Filed: November 23, 1999
    Date of Patent: April 3, 2007
    Assignee: The Johns Hopkins University
    Inventor: Dale E. Olsen
  • Patent number: 7084874
    Abstract: A real-time virtual presentation method is provided. The method includes capturing motion of a user, capturing audio of the user, transforming the audio of the user into audio of an opposite gender of the user and animating a character with the motion and transformed audio in real-time.
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: August 1, 2006
    Assignee: Kurzweil AINetworks, Inc.
    Inventor: Raymond C. Kurzweil
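
The abstract does not say how the gender transform is performed; a naive stand-in is pitch shifting by resampling, sketched below. (Resampling also changes duration, which a production system would compensate for.)

```python
def shift_pitch(samples, ratio):
    """Resample a mono PCM buffer; ratio > 1 raises pitch (e.g. toward a
    female voice), ratio < 1 lowers it."""
    out = []
    i = 0.0
    while i < len(samples) - 1:
        lo = int(i)
        frac = i - lo
        # Linear interpolation between neighboring samples.
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        i += ratio
    return out

tone = [float(n % 20) for n in range(100)]  # toy waveform
higher = shift_pitch(tone, 1.5)
print(len(tone), len(higher))  # resampling shortens the buffer
```
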
  • Patent number: 6611278
    Abstract: A method for controlling and automatically animating lip synchronization and facial expressions of three-dimensional animated characters using weighted morph targets and time-aligned phonetic transcriptions of recorded text. The method utilizes a set of rules that determine the system's output, comprising a stream of morph weight sets, when a sequence of timed phonemes and/or other timed data is encountered. Other data, such as timed emotional state data or emotemes such as “surprise”, “disgust”, “embarrassment”, “timid smile”, or the like, may be input to affect the output stream of morph weight sets, or to create additional streams.
    Type: Grant
    Filed: September 21, 2001
    Date of Patent: August 26, 2003
    Inventor: Maury Rosenfeld
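
A sketch of how timed phonemes and emotemes could be combined into a morph weight set at a given time; the rule tables and the max-blend policy are invented for illustration and are not the patent's rules.

```python
# Hypothetical rules mapping phonemes/emotemes to morph-target weights.
PHONEME_RULES = {"AA": {"jaw_open": 0.9}, "M": {"lips_closed": 1.0}}
EMOTEME_RULES = {"timid smile": {"smile": 0.4}, "surprise": {"brow_raise": 0.8}}

def morph_weights(timed_phonemes, timed_emotemes, t):
    """Return the blended morph weight set active at time t (seconds).
    Entries are (start, end, label) triples."""
    weights = {}
    for start, end, phoneme in timed_phonemes:
        if start <= t < end:
            weights.update(PHONEME_RULES.get(phoneme, {}))
    for start, end, emoteme in timed_emotemes:
        if start <= t < end:
            for target, w in EMOTEME_RULES.get(emoteme, {}).items():
                weights[target] = max(weights.get(target, 0.0), w)
    return weights

print(morph_weights([(0.0, 0.2, "AA")], [(0.0, 1.0, "timid smile")], 0.1))
```
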
  • Patent number: 6492990
    Abstract: A method using computer software for automatic audio-visual dubbing of movies by computerized image copying of the characteristic features of the dubber's lip movements onto the mouth area of the original speaker. The invention uses vicinity searching, three-dimensional head modeling of the original speaker, and a texture mapping technique to produce new images which correspond to the dubbed sound track. The invention thus overcomes the well-known problem of poor correlation between lip movement in an original movie and the sound track of the dubbed movie.
    Type: Grant
    Filed: July 15, 1998
    Date of Patent: December 10, 2002
    Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
    Inventors: Shmuel Peleg, Ran Cohen, David Avnir
  • Patent number: 6307576
    Abstract: A method for controlling and automatically animating lip synchronization and facial expressions of three-dimensional animated characters using weighted morph targets and time-aligned phonetic transcriptions of recorded text. The method utilizes a set of rules that determine the system's output, comprising a stream of morph weight sets, when a sequence of timed phonemes and/or other timed data is encountered. Other data, such as timed emotional state data or emotemes such as “surprise”, “disgust”, “embarrassment”, “timid smile”, or the like, may be input to affect the output stream of morph weight sets, or to create additional streams.
    Type: Grant
    Filed: October 2, 1997
    Date of Patent: October 23, 2001
    Inventor: Maury Rosenfeld