Actor Patents (Class 345/957)
-
Patent number: 8681158
Abstract: A computer-implemented method includes comparing one or more surface features to a motion model. The surface feature or surface features represent a portion of an object in an image. The method also includes identifying a representation of the object from the motion model, based upon the comparison.
Type: Grant
Filed: March 5, 2012
Date of Patent: March 25, 2014
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
-
Patent number: 8542236
Abstract: A computer-implemented method includes transferring motion information from one or more motion meshes to an animation mesh. The motion mesh represents the motion of surface features of an object. A shape mesh provides a portion of the shape of the object to the animation mesh.
Type: Grant
Filed: January 16, 2007
Date of Patent: September 24, 2013
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
-
Patent number: 8125485
Abstract: Animating speech of an avatar representing a participant in a mobile communication includes: selecting one or more images; selecting a generic animation template; fitting the one or more images to the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Further steps include receiving an audio speech signal; identifying a series of phonemes; and, for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme.
Type: Grant
Filed: November 20, 2009
Date of Patent: February 28, 2012
Assignee: International Business Machines Corporation
Inventors: William A. Brown, Richard W. Muirhead, Francis X. Reddington, Martin A. Wolfe
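The per-phoneme loop in the abstract above can be sketched as a simple viseme lookup: each phoneme maps to a mouth shape that the template is driven toward while that phoneme's audio plays. The phoneme codes, the mouth-openness table, and the generator API below are illustrative assumptions, not the patented implementation.

```python
# Toy viseme table: each phoneme maps to a mouth-openness value in [0, 1].
# These mappings are made up for illustration.
VISEME_TABLE = {
    "AA": 0.9,  # wide open, as in "father"
    "IY": 0.3,  # spread lips, as in "see"
    "M":  0.0,  # closed lips
    "OW": 0.7,  # rounded, as in "go"
}

def animate_speech(phonemes, default_openness=0.1):
    """Yield one (phoneme, mouth position) pair per phoneme.

    Unknown phonemes fall back to a near-rest pose, standing in for the
    'identify a new mouth position, then alter the template' steps.
    """
    for ph in phonemes:
        yield ph, VISEME_TABLE.get(ph, default_openness)

# One mouth position per phoneme of a short utterance.
frames = list(animate_speech(["M", "AA", "OW"]))
```

In a real system each mouth position would drive the texture-wrapped template for the duration of the phoneme's audio segment rather than being a single scalar.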
-
Patent number: 7990384
Abstract: A system and method for generating photo-realistic talking-head animation from a text input utilizes an audio-visual unit selection process. The lip-synchronization is obtained by optimally selecting and concatenating variable-length video units of the mouth area. The unit selection process utilizes the acoustic data to determine the target costs for the candidate images and utilizes the visual data to determine the concatenation costs. The image database is prepared in a hierarchical fashion, including high-level features (such as a full 3D modeling of the head, geometric size and position of elements) and pixel-based, low-level features (such as a PCA-based metric for labeling the various feature bitmaps).
Type: Grant
Filed: September 15, 2003
Date of Patent: August 2, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Eric Cosatto, Hans Peter Graf, Gerasimos Potamianos, Juergen Schroeter
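The target-cost/concatenation-cost selection described above is the classic unit-selection formulation, which can be solved by dynamic programming: pick one candidate unit per time step so the summed target costs (acoustic match) plus concatenation costs (visual smoothness between consecutive units) is minimal. The cost numbers and candidate names below are made up for illustration; this is a sketch of the general technique, not the patent's specific costs.

```python
def select_units(target_costs, concat_cost):
    """Minimize total target + concatenation cost over one unit per step.

    target_costs: list of dicts {candidate: target cost}, one per time step.
    concat_cost:  function (prev_candidate, candidate) -> float.
    Returns (best total cost, chosen candidate sequence).
    """
    # best[c] = (cost of the cheapest path ending in candidate c, that path)
    best = {c: (cost, [c]) for c, cost in target_costs[0].items()}
    for step in target_costs[1:]:
        new_best = {}
        for cand, t_cost in step.items():
            # Cheapest way to reach `cand` from any previous candidate.
            prev_total, prev_path = min(
                (best[p][0] + concat_cost(p, cand), best[p][1]) for p in best
            )
            new_best[cand] = (prev_total + t_cost, prev_path + [cand])
        best = new_best
    return min(best.values())

# Two time steps, two candidate mouth images each; only u1 -> u3 joins smoothly.
cost, sequence = select_units(
    [{"u1": 1.0, "u2": 2.0}, {"u3": 0.5, "u4": 0.1}],
    lambda a, b: 0.0 if (a, b) == ("u1", "u3") else 1.0,
)
```

The smooth u1 -> u3 join wins even though u4 has the cheaper target cost, which is exactly the trade-off the two cost types encode.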
-
Patent number: 7916143
Abstract: Provided are a system and a method that automatically produce natural locomotion animation, without noticeable discontinuities, for various moving distances and timings by using motion capture data. The system includes a motion capture data storage, a simulation calculator, and an animation calculator. The method includes: defining the speed calculated from the moving motion capture data as the maximum moving speed of the simulation in order to calculate, for each character, the entire moving distance, the stopped time when starting and arriving, and the stopped time before starting and after arriving; extracting a portion of the arriving motion capture data appropriate for the entire moving distance in order to produce the locomotion animation when the entire moving distance is less than the moving distance of the arriving motion capture data; and satisfying an entire time corresponding to the entire motion of the animation.
Type: Grant
Filed: July 20, 2007
Date of Patent: March 29, 2011
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sung June Chang, Se Hoon Park, In Ho Lee
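The timing bookkeeping described above can be sketched as a small scheduling computation: the mocap clip's speed caps the simulated speed, and whatever time is left over after moving becomes stopped time before and after. The function name, the even split of slack, and the error handling are illustrative assumptions; the patent's method is more elaborate (e.g. extracting portions of the arriving clip).

```python
def plan_locomotion(total_dist, clip_speed, total_time):
    """Return (pause_before, move_time, pause_after) that fills total_time.

    total_dist: distance the character must cover.
    clip_speed: speed measured from the moving mocap clip (the max speed).
    total_time: overall duration the animation must occupy.
    """
    move_time = total_dist / clip_speed  # time spent actually moving
    if move_time > total_time:
        raise ValueError("distance not reachable at the clip's maximum speed")
    slack = total_time - move_time       # leftover time becomes stopped time
    return slack / 2, move_time, slack / 2

# Cover 10 units at clip speed 2 within a 9-second slot.
schedule = plan_locomotion(10.0, 2.0, 9.0)
```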
-
Patent number: 7822783
Abstract: The present invention is a method of obtaining information regarding an individual's environment using a programmable device. The first step of the method is sensing a psychomotor behavioral element of an activity engaged in by the individual. The activity can be any sensible activity, including breathing, thinking, generating heat, etc. The next step in the inventive method is determining the preferred modalities of the individual based on the psychomotor behavioral element of that activity; calculations for this determination are provided herein. In the present context, the preferred modalities are the semi-conscious or nonconscious desires of the individual, indicated by nonconscious actions, to experience her environment in a specific manner. The information obtained by the inventive method can be used in several ways.
Type: Grant
Filed: March 21, 2008
Date of Patent: October 26, 2010
Inventor: Joseph Carrabis
-
Patent number: 7477253
Abstract: An animation image generating program is provided which allows animation images to be readily generated by CG without complicated setups; in particular, the program is suited to generating a plurality of types of face animation images by CG. The animation image generating program includes the steps of: controlling selection of specific vertices of a standard model and a user model; providing control such that first target vertices are associated with second target vertices, where the first target vertices are the selected specific vertices of the standard model and the second target vertices are the selected specific vertices of the user model; providing control by arithmetic means such that coordinates of the first target vertices approximate those of the second target vertices, to generate fitting information; and animating the user model based on animation data of the standard model and on the fitting information.
Type: Grant
Filed: May 20, 2003
Date of Patent: January 13, 2009
Assignee: Sega Corporation
Inventors: Hirokazu Kudoh, Kazunori Nakamura, Shigeo Morishima
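The fitting step described above can be sketched as iteratively pulling the standard model's selected vertices toward the corresponding user-model vertices and recording the resulting per-vertex offsets as "fitting information". The blend rate, iteration count, and tuple-based vertex format are illustrative assumptions, not the patented arithmetic.

```python
def fit_vertices(standard, user, steps=10, rate=0.5):
    """Approximate standard-model vertices to user-model vertices.

    standard, user: lists of (x, y, z) tuples, paired by index
    (the 'first' and 'second' target vertices).
    Returns per-vertex offsets from the original standard positions,
    standing in for the fitting information.
    """
    current = [list(v) for v in standard]
    for _ in range(steps):
        for cur, target in zip(current, user):
            for axis in range(3):
                # Move a fraction of the remaining distance each iteration.
                cur[axis] += rate * (target[axis] - cur[axis])
    return [tuple(c[i] - s[i] for i in range(3))
            for c, s in zip(current, standard)]

# One standard vertex converging toward a user vertex one unit away in x.
offsets = fit_vertices([(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)])
```

After 10 halving steps the recorded offset is within 1/1024 of the full displacement; animating the user model would then apply the standard model's animation data plus these offsets.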
-
Patent number: 7358972
Abstract: A system and method for capturing motion comprises a motion capture volume adapted to contain at least one actor having body markers defining plural body points and facial markers defining plural facial points. A plurality of body motion cameras and a plurality of facial motion cameras are arranged around a periphery of the motion capture volume. The facial motion cameras each have a respective field of view narrower than a corresponding field of view of the body motion cameras. The facial motion cameras are arranged such that all laterally exposed surfaces of the actor while in motion within the motion capture volume are within the field of view of at least one of the plurality of facial motion cameras at substantially all times. A motion capture processor is coupled to the plurality of facial motion cameras and the plurality of body motion cameras to produce a digital model reflecting combined body and facial motion of the actor.
Type: Grant
Filed: November 6, 2006
Date of Patent: April 15, 2008
Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
Inventors: Demian Gordon, Jerome Chen, Albert Robert Hastings, Jody Echegaray
-
Patent number: 7084874
Abstract: A real-time virtual presentation method is provided. The method includes capturing motion of a user, capturing audio of the user, transforming the audio of the user into audio of an opposite gender of the user, and animating a character with the motion and transformed audio in real-time.
Type: Grant
Filed: December 21, 2001
Date of Patent: August 1, 2006
Assignee: Kurzweil AINetworks, Inc.
Inventor: Raymond C. Kurzweil
-
Patent number: 7068277
Abstract: A system and method for animating facial motion comprises an animation processor adapted to generate three-dimensional graphical images and having a user interface, and a facial performance processing system operative with the animation processor to generate a three-dimensional digital model of an actor's face and overlay a virtual muscle structure onto the digital model. The virtual muscle structure includes plural muscle vectors that each respectively define a plurality of vertices along a surface of the digital model in a direction corresponding to actual facial muscles. The facial performance processing system is responsive to an input reflecting selective actuation of at least one of the plural muscle vectors to thereby reposition corresponding ones of the plurality of vertices and re-generate the digital model in a manner that simulates facial motion.
Type: Grant
Filed: May 23, 2003
Date of Patent: June 27, 2006
Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
Inventor: Alberto Menache
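The muscle-vector actuation described above can be sketched as a displacement: actuating a muscle moves each vertex it controls along the muscle's direction, scaled by a per-vertex weight and an activation level. The data layout, weights, and the toy mouth below are illustrative assumptions, not the patented muscle model.

```python
def actuate_muscle(vertices, muscle_dir, weights, activation):
    """Reposition vertices along a muscle vector.

    vertices:   list of (x, y, z) tuples controlled by this muscle.
    muscle_dir: (x, y, z) direction of the muscle's pull.
    weights:    per-vertex influence of the muscle, one per vertex.
    activation: how strongly the muscle is actuated (0 = rest).
    """
    return [
        tuple(v[i] + activation * w * muscle_dir[i] for i in range(3))
        for v, w in zip(vertices, weights)
    ]

# Raising one corner of a toy mouth: full weight on the corner vertex,
# partial weight on its neighbor, pulling straight up.
mouth = [(1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
smile = actuate_muscle(mouth, muscle_dir=(0.0, 1.0, 0.0),
                       weights=[1.0, 0.4], activation=0.5)
```

Re-generating the digital model would then rebuild the face mesh from the repositioned vertices; overlapping muscles would sum their displacements.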
-
Patent number: 6492990
Abstract: A computerized method for automatic audio-visual dubbing of movies by image copying of the characteristic features of the dubber's lip movements onto the mouth area of the original speaker. The invention uses vicinity searching, three-dimensional head modeling of the original speaker, and texture mapping to produce new images that correspond to the dubbed sound track. The invention thus overcomes the well-known problem of mismatch between lip movement in an original movie and the sound track of the dubbed movie.
Type: Grant
Filed: July 15, 1998
Date of Patent: December 10, 2002
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Ran Cohen, David Avnir
-
Patent number: 6430523
Abstract: An autonomous device behaving adaptively to a user includes a sensing unit; a recognition unit for recognizing the user's command, a current user-related status, and a current user-unrelated status based on the sensed signals; a pseudo-personality-forming unit for establishing a pseudo-personality based on the result of the preceding recognition; a pseudo-emotion-forming unit for establishing pseudo-emotions based on the result of the preceding recognition and the pseudo-personality; an autonomous behavior-establishing unit for selecting autonomous behavior based on the result of the preceding recognition, the pseudo-personality, and the pseudo-emotions; a commanded behavior-establishing unit for constituting commanded behavior in accordance with the user's command; a behavior control unit for controlling behavior by combining the autonomous behavior and the commanded behavior; and an output device outputting the controlled behavior.
Type: Grant
Filed: April 6, 2001
Date of Patent: August 6, 2002
Assignee: Yamaha Hatsudoki Kabushiki Kaisha
Inventor: Takashi Mizokawa
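The behavior control unit described above combines autonomous and commanded behavior. One simple combination policy can be sketched as: explicit commands take priority, otherwise the autonomous candidate with the highest pseudo-emotion/pseudo-personality score is chosen. This priority scheme is an illustrative assumption, not the patent's actual control law.

```python
def control_behavior(commanded, autonomous_scores):
    """Combine commanded and autonomous behavior into one output action.

    commanded:         action name from the user's command, or None.
    autonomous_scores: {action: score} from the autonomous
                       behavior-establishing unit (scores shaped by the
                       pseudo-personality and pseudo-emotions).
    """
    if commanded is not None:
        return commanded  # explicit user commands win
    # Otherwise act on the strongest autonomous urge.
    return max(autonomous_scores, key=autonomous_scores.get)

# With no command, the device greets; a command overrides everything.
free_choice = control_behavior(None, {"wander": 0.2, "greet": 0.7})
obeyed = control_behavior("sit", {"wander": 0.9, "greet": 0.1})
```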
-
Patent number: 6230111
Abstract: An autonomous device behaving adaptively to a user includes a sensing unit; a recognition unit for recognizing the user's command, a current user-related status, and a current user-unrelated status based on the sensed signals; a pseudo-personality-forming unit for establishing a pseudo-personality based on the result of the preceding recognition; a pseudo-emotion-forming unit for establishing pseudo-emotions based on the result of the preceding recognition and the pseudo-personality; an autonomous behavior-establishing unit for selecting autonomous behavior based on the result of the preceding recognition, the pseudo-personality, and the pseudo-emotions; a commanded behavior-establishing unit for constituting commanded behavior in accordance with the user's command; a behavior control unit for controlling behavior by combining the autonomous behavior and the commanded behavior; and an output device outputting the controlled behavior.
Type: Grant
Filed: August 6, 1998
Date of Patent: May 8, 2001
Assignee: Yamaha Hatsudoki Kabushiki Kaisha
Inventor: Takashi Mizokawa
-
Patent number: 6034692
Abstract: An interactive entertainment apparatus is provided having means (10, 14) for modelling a virtual environment populated by modelled characters, with each of the characters being controlled by respective rule-based agents. A camera control function (58) within the apparatus processor periodically monitors at least one compiled behavior per character agent, together with the respective locations within the virtual environment for each of the characters. The processor (10) generates clusters of adjacent characters within the virtual environment in accordance with predetermined clustering criteria, such as relative proximity and commonality of behavioral characteristics, and generates a respective cluster value derived from the current settings of the monitored behaviors within that cluster.
Type: Grant
Filed: August 1, 1997
Date of Patent: March 7, 2000
Assignee: U.S. Philips Corporation
Inventors: Richard D. Gallery, Dale R. Heron
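The clustering criteria described above (relative proximity plus common behavior) can be sketched as a greedy single-link grouping: a character joins an existing cluster if it shares that cluster's behavior and lies within a distance threshold of any member, otherwise it starts a new cluster. The threshold, the Manhattan distance, and the behavior tags are illustrative assumptions, not the patent's criteria.

```python
def cluster_characters(chars, max_dist=2.0):
    """Group characters by behavior commonality and proximity.

    chars: list of (name, (x, y), behavior) tuples.
    Returns a list of clusters, each a list of character tuples.
    """
    clusters = []
    for name, pos, behavior in chars:
        for cluster in clusters:
            same_behavior = cluster[0][2] == behavior
            near_member = any(
                abs(pos[0] - p[0]) + abs(pos[1] - p[1]) <= max_dist
                for _, p, _ in cluster
            )
            if same_behavior and near_member:
                cluster.append((name, pos, behavior))
                break
        else:
            clusters.append([(name, pos, behavior)])
    return clusters

# Two nearby fighters cluster together; a distant idler stays alone.
chars = [("a", (0.0, 0.0), "fight"),
         ("b", (1.0, 0.0), "fight"),
         ("c", (10.0, 10.0), "idle")]
groups = cluster_characters(chars)
```

A camera controller could then derive each cluster's value from its members' behavior settings and point the virtual camera at the highest-valued cluster.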
-
Patent number: 5982390
Abstract: A method and system support the definition, authentication, and enforcement of constraints on speech, appearance, movements, associations, and other properties that are used to suggest or exhibit the personality traits and behaviors of animated characters. The system includes a controlling object and one or more personality objects running in any of a wide range of software and hardware environments. Zero or more personality subobjects may be associated with each personality object. The methods provide steps for authenticating an object, controlling associations between objects and subobjects, controlling events involving one or more objects, controlling the proximity of personality objects to one another, controlling the distribution of objects, and mandating the use of auxiliary objects under specified circumstances.
Type: Grant
Filed: March 24, 1997
Date of Patent: November 9, 1999
Assignee: Stan Stoneking
Inventors: Stan Stoneking, Brian C. Fries