Motion Planning Or Control Patents (Class 345/474)
  • Publication number: 20150029198
    Abstract: Techniques are proposed for animating a deformable object. A geometric mesh comprising a plurality of vertices is retrieved, where the geometric mesh is related to a first rest state configuration corresponding to the deformable object. A motion goal associated with the deformable object is then retrieved. The motion goal is translated into a function of one or more state variables associated with the deformable object. A second rest state configuration corresponding to the deformable object is computed by adjusting the position of at least one vertex in the plurality of vertices based at least in part on the function.
    Type: Application
    Filed: July 29, 2013
    Publication date: January 29, 2015
    Applicant: Pixar
    Inventors: Robert SUMNER, Stelian COROS, Sebastian MARTIN, Bernhard THOMASZEWSKI
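The rest-state adjustment described in the abstract above can be sketched as a small optimization: the motion goal becomes a scalar function of vertex positions, and rest-state vertices are moved downhill until the goal is satisfied. Everything below (the function names, the gradient-descent choice, the toy goal) is illustrative and not taken from the patent.

```python
# Hypothetical sketch: a motion goal expressed as a scalar function of the
# vertices, minimized by gradient descent to produce a second rest state.
def adjust_rest_state(vertices, goal, grad, step=0.1, iters=100, tol=1e-6):
    """Move rest-state vertices downhill on the goal function."""
    verts = [list(v) for v in vertices]
    for _ in range(iters):
        if goal(verts) < tol:          # goal satisfied: stop adjusting
            break
        g = grad(verts)
        for v, gv in zip(verts, g):
            for i in range(len(v)):
                v[i] -= step * gv[i]   # nudge each coordinate downhill
    return verts

# Toy goal: pull vertex 0 toward a target point; vertex 1 is unconstrained.
target = (1.0, 0.0)
goal = lambda vs: sum((a - b) ** 2 for a, b in zip(vs[0], target))
grad = lambda vs: [[2 * (a - b) for a, b in zip(vs[0], target)]] + \
                  [[0.0, 0.0] for _ in vs[1:]]

rest2 = adjust_rest_state([[0.0, 0.0], [2.0, 2.0]], goal, grad)
```

Only the vertex involved in the goal moves; the rest of the mesh keeps its original rest configuration.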
  • Patent number: 8941666
    Abstract: A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes representing animation states of a virtual character in editable graphical representations. Each animation state represents each individual action of the character for an instance in time. The method also includes storing data that represents one or more changes in the animation states of the virtual character from the editable graphical representations. A pose of the virtual character is reconstructable upon retrieval of the stored data.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: January 27, 2015
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventor: Lucas A. Kovar
  • Patent number: 8937620
    Abstract: A system and methods are disclosed which provide simple and rapid animated content creation with simple control of scene characteristics, scene-to-scene transitions, content elements, and certain element behaviors. Tools are disclosed for conveniently applying scene characteristic and transition labels to control the look and feel of a scene, such as camera attributes, lighting, and the like. A voice input tool enables quick creation of spoken language segments for animated characters. Tools are provided to enable multiple content creators to collaborate on content creation. A warehouse model is disclosed for animation content elements. The ability to create dynamic animated content, and the ability to increase the emotional and dramatic texture of that content, are provided through use of relatively simple and intuitive tools.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: January 20, 2015
    Assignee: Google Inc.
    Inventor: Eric Teller
  • Patent number: 8938431
    Abstract: A configurable real-time environment tracking and command module (RTM) is provided to coordinate one or more devices or objects in a physical environment. A virtual environment is created to correlate with various objects and attributes within the physical environment. The RTM is able to receive data about attributes of physical objects and accordingly update the attributes of correlated virtual objects in the virtual environment. The RTM is also able to provide data extracted from the virtual environment to one or more devices, such as robotic cameras, in real time. An interface to the RTM allows multiple devices to interact with the RTM, thereby coordinating the devices.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: January 20, 2015
    Assignee: CAST Group of Companies Inc.
    Inventors: Gilray Densham, Justin Eichel
  • Patent number: 8933940
    Abstract: There is described a method for applying a control rig to an animation of a character, the method comprising: receiving a state change for the character being in a first state; determining a second state for the character using the state change; retrieving an animation clip and a control rig both corresponding to the second state, the animation clip comprising a plurality of poses for the character each defining a configuration for a body of the character, the control rig being specific to the second state and corresponding to at least one constraint to be applied on the body of the character; applying the control rig to the animation clip, thereby obtaining a rigged animation clip; and outputting the rigged animation clip.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: January 13, 2015
    Assignee: Unity Technologies Canada Company
    Inventors: Robert Lanciault, Pierre-Paul Giroux, Sonny Myette
  • Patent number: 8928672
    Abstract: Systems and methods for generating and concatenating 3D character animations are described including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: January 6, 2015
    Assignee: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 8926427
    Abstract: When there is an instruction for rotation during a game in a first field, a CPU core of a game apparatus obtains central coordinates in a second field after the rotation and coordinates (central coordinates) of a player object and BG object in steps S53 and S55. Then, the CPU core executes a rotation process in a hardware calculation circuit, for example, in steps S57 to S61. When detecting end of the rotation in a step S63, the CPU core executes a map switch process in a step S65. In the map switch process, the CPU core generates a second field according to second area data. Then, after generating the second field, the CPU core makes a hit determination according to the second area data in a step S7, for example.
    Type: Grant
    Filed: April 25, 2007
    Date of Patent: January 6, 2015
    Assignee: Nintendo Co., Ltd.
    Inventors: Shigeyuki Asuke, Masataka Takemoto, Kiyoshi Kouda
  • Patent number: 8928670
    Abstract: A moving image generation apparatus includes an image display unit, a partial image specification unit, a partial image cutout unit, and a moving image generation unit. The image display unit displays an image. The partial image specification unit specifies a partial image of a predetermined range corresponding to each of points in the displayed image. The partial image cutout unit cuts out a plurality of partial images from between two arbitrary partial images included in the specified partial images. The moving image generation unit generates a moving image based on the specified partial images and the cutout partial images.
    Type: Grant
    Filed: October 19, 2010
    Date of Patent: January 6, 2015
    Assignee: Olympus Imaging Corp.
    Inventor: Takuya Matsunaga
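One simple way to realize the "cut out a plurality of partial images from between two arbitrary partial images" step above is to interpolate the crop rectangles linearly; the patent does not mandate this, and the `(x, y, w, h)` rectangle convention is an assumption for the sketch.

```python
# Illustrative: generate n intermediate crop rectangles between two crops,
# which can then be cut from the still image to form moving-image frames.
def interpolate_crops(rect_a, rect_b, n):
    """Return n evenly spaced crop rectangles between rect_a and rect_b."""
    crops = []
    for k in range(1, n + 1):
        t = k / (n + 1)  # interpolation parameter, strictly between 0 and 1
        crops.append(tuple(a + t * (b - a) for a, b in zip(rect_a, rect_b)))
    return crops

# Two intermediates between a wide crop and a tight zoomed-in crop.
frames = interpolate_crops((0, 0, 100, 100), (60, 30, 40, 40), 2)
```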
  • Patent number: 8928671
    Abstract: In particular embodiments, a method includes generating a 3D display of an avatar of a person, where the avatar can receive inputs identifying a type of physiological event, a location of the event in or on the person's body in three spatial dimensions, a time range of the event, and a quality of the event, and the method includes rendering the physiological event on the avatar based on those inputs.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: January 6, 2015
    Assignee: Fujitsu Limited
    Inventors: B. Thomas Adler, David Marvit, Jawahar Jain
  • Patent number: 8928674
    Abstract: A computer-implemented method includes comparing content captured during one session and content captured during another session. A surface feature of an object represented in the content of one session corresponds to a surface feature of an object represented in the content of the other session. The method also includes substantially aligning the surface features of the sessions and combining the aligned content.
    Type: Grant
    Filed: June 8, 2012
    Date of Patent: January 6, 2015
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Steve Sullivan, Francesco G. Callari
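"Substantially aligning" corresponding surface features from two capture sessions is commonly done with a least-squares rigid fit; the patent does not name a method, so the 2D closed-form fit below is just one standard way to sketch the idea.

```python
import math

# Least-squares 2D rigid alignment of matched feature points: find the
# rotation and translation mapping src onto dst (an illustrative stand-in
# for aligning surface features between two capture sessions).
def rigid_align_2d(src, dst):
    """Return (theta, tx, ty) mapping src points onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy    # src point, centered
        bx, by = dx - cdx, dy - cdy    # dst point, centered
        num += ax * by - ay * bx       # cross terms -> sin component
        den += ax * bx + ay * by       # dot terms   -> cos component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def apply(p, theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

# dst is src rotated 90 degrees and translated by (2, 3).
theta, tx, ty = rigid_align_2d([(0, 0), (1, 0), (0, 1)],
                               [(2, 3), (2, 4), (1, 3)])
```

Once the rigid transform is recovered, the content of one session can be mapped into the other's frame and the two combined.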
  • Publication number: 20150002517
    Abstract: Methods and systems for two-phase skinning of an object undergoing rigid and non-rigid transformations are disclosed. One method of skinning the object may include separating the object's joint transformations into rigid and non-rigid parts by determining if a joint is scale compensating or scale non-compensating, applying non-rigid joint transformations to the mesh, and applying rigid joint transformations to the mesh. Separation of the object's joint transformations into rigid and non-rigid parts may include determining a bind pose based on an initial configuration of the object's joints and determining an intermediate pose based on the configuration of the object's joints after non-rigid joint transformations are applied to the joints.
    Type: Application
    Filed: May 5, 2014
    Publication date: January 1, 2015
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: GENE LEE, CHUNG-AN LIN
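The two-phase idea above can be sketched with linear blend skinning run in two passes: a non-rigid pass (here reduced to per-axis scales) producing the intermediate pose, then a rigid pass (here reduced to translations). The representations are illustrative; the patent's actual separation is based on whether joints are scale compensating.

```python
# Minimal two-phase skinning sketch: non-rigid (scale) transforms first,
# then rigid (translation) transforms, each blended by skinning weights.
def skin_two_phase(vertices, weights, scales, translations):
    # Phase 1: apply the non-rigid joint transforms -> intermediate pose.
    intermediate = []
    for v, w in zip(vertices, weights):
        x = sum(wj * sj[0] * v[0] for wj, sj in zip(w, scales))
        y = sum(wj * sj[1] * v[1] for wj, sj in zip(w, scales))
        intermediate.append((x, y))
    # Phase 2: apply the rigid joint transforms to the intermediate pose.
    posed = []
    for v, w in zip(intermediate, weights):
        x = v[0] + sum(wj * tj[0] for wj, tj in zip(w, translations))
        y = v[1] + sum(wj * tj[1] for wj, tj in zip(w, translations))
        posed.append((x, y))
    return posed

# One vertex influenced equally by two joints: joint 0 scales by 2 and
# translates by (1, 0); joint 1 is the identity.
posed = skin_two_phase([(1.0, 1.0)], [[0.5, 0.5]],
                       [(2.0, 2.0), (1.0, 1.0)],
                       [(1.0, 0.0), (0.0, 0.0)])
```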
  • Publication number: 20150002518
    Abstract: An image generating apparatus (100) includes an animation acquiring unit (150) that acquires a plurality of items of animation data. A terminal apparatus (200) includes an index value acquiring unit (251), a composite skeleton generating unit (252), and a drawing processing unit (253). The index value acquiring unit (251) inputs an index value regarding a motion of animation data, and the composite skeleton generating unit (252) generates animation data according to the index value input by the index value acquiring unit (251), using the plurality of items of animation data acquired by the animation acquiring unit (150).
    Type: Application
    Filed: June 20, 2014
    Publication date: January 1, 2015
    Inventor: Mitsuyasu NAKAJIMA
  • Publication number: 20150002516
    Abstract: Techniques are proposed for animating a plurality of objects in a computer graphics environment. A crowd choreography system receives a first beat description defining potential motions for the plurality of objects, where the first beat description includes a first motion characteristic. The crowd choreography system selects a first object from the plurality of objects and selects a first value for the first motion characteristic based on the first beat description. The crowd choreography system creates a first motion path for the first object based on the first value and animates the first object based on the first motion path.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Michael FREDERICKSON, James David NORTHRUP
  • Publication number: 20140368512
    Abstract: A display system is disclosed for animation of media objects on tiled displays. The display system can include a plurality of discrete display nodes and a control module configured to determine a graphical representation of a current state of a media object. The control module can be configured to determine a graphical representation of a future state of the media object. The control module can also be configured to determine a path area on the display nodes comprising a plurality of graphical representations of the media object during a change from the current state to the future state. The control module also can be configured to cause the display nodes overlapping with at least a portion of the path area to prepare to display the media object.
    Type: Application
    Filed: June 13, 2014
    Publication date: December 18, 2014
    Inventors: Sung-Jin Kim, Stephen F. Jenks
  • Patent number: 8907984
    Abstract: Methods and systems are presented for automatically generating a slide associated with a slideshow. In one aspect, a method includes selecting an image for inclusion in a slideshow, where the image has associated facial detection information. A face location is determined in the selected image based on the facial detection information and the selected image is cropped based on the determined face location to generate a cropped image depicting the included face. The cropped image is inserted into a slide associated with the slideshow. Further, an animation having a defined animation path can be associated with the slide. Also, the face location can be identified as a position in the animation path and the slide can be animated based on the associated animation.
    Type: Grant
    Filed: September 23, 2009
    Date of Patent: December 9, 2014
    Assignee: Apple Inc.
    Inventors: Ralf Weber, Robert Van Osten
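The face-aware cropping step above amounts to centering a crop window on the detected face and clamping it to the image bounds. The `(x, y, w, h)` box convention and the clamping policy are assumptions for this sketch, not details from the patent.

```python
# Illustrative face-aware crop: center the crop on the face, then clamp so
# the crop rectangle stays inside the image.
def crop_around_face(img_w, img_h, face, crop_w, crop_h):
    fx, fy, fw, fh = face
    cx, cy = fx + fw / 2, fy + fh / 2            # face center
    x = min(max(cx - crop_w / 2, 0), img_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), img_h - crop_h)
    return (x, y, crop_w, crop_h)
```

The resulting rectangle can then serve as a position along an animation path, as the abstract suggests.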
  • Patent number: 8902235
    Abstract: A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: December 2, 2014
    Assignee: Adobe Systems Incorporated
    Inventor: Alexandru Chiculită
  • Patent number: 8902232
    Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
    Type: Grant
    Filed: February 2, 2009
    Date of Patent: December 2, 2014
    Assignee: University of Southern California
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
  • Patent number: 8903693
    Abstract: Boundary handling is performed in particle-based simulation. Slab cut ball processing defines the boundary volumes for interaction with particles in particle-based simulation. The slab cut balls are used for collision detection of a solid object with particles. The solid object may be divided into a plurality of independent slab cut balls for efficient collision detection without a bounding volume hierarchy. The division of the solid object may be handled in repeating binary division operations. Processing speed may be further increased by determining the orientation of each slab cut ball based on the enclosed parts of the boundary rather than testing multiple possible orientations.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: December 2, 2014
    Assignee: Siemens Aktiengesellschaft
    Inventors: Richard Gary McDaniel, Zakiya Tamimi
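A slab cut ball is a sphere intersected with the region between two parallel planes, so a particle collides with one exactly when it is inside the sphere and within the slab. The data layout below (center, radius, unit slab axis, slab extents) is an assumption for the sketch.

```python
# Illustrative particle-vs-slab-cut-ball collision test.
def hits_slab_cut_ball(p, center, radius, axis, lo, hi):
    dx = [a - b for a, b in zip(p, center)]
    if sum(d * d for d in dx) > radius * radius:
        return False                              # outside the sphere
    s = sum(a * d for a, d in zip(axis, dx))      # distance along slab axis
    return lo <= s <= hi                          # inside the slab?
```

Because each test is cheap and independent, a solid can be split into many such volumes and tested against particles without a bounding volume hierarchy, as the abstract describes.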
  • Patent number: 8902233
    Abstract: Techniques that give animators the direct control they are accustomed to with key frame animation, while providing for path-based motion. A key frame animation-based interface is used to achieve path-based motion with rotation animation variable value correction using additional animation variables for smoothing. The value of the additional animation variables for smoothing can be directly controlled using a tangent handle in a user interface.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: December 2, 2014
    Assignee: Pixar
    Inventors: Chen Shen, Bena L. Currin, Timothy S. Milliron
  • Patent number: 8896609
    Abstract: A video content generation device generates video data synchronized with music data based on motion data, representing a motion graph including nodes, edges, and weights, and metadata indicating a synchronization probability per node between the motion graph and a musical tune. A music data storage unit stores the predetermined amount of music data and their musical characteristics in connection with the predetermined number of beats, in a reproduction order, retrieved from the musical tune. An optimum path search unit searches for an optimum path connecting nodes, each of which is selected per beat with a high synchronization probability, on the motion graph with motion characteristics matching the musical characteristics based on the predetermined amount of music data. Video data synchronized with music data is generated based on synchronization information correlating motion data to music data along the optimum path.
    Type: Grant
    Filed: October 3, 2011
    Date of Patent: November 25, 2014
    Assignee: KDDI Corporation
    Inventors: Jianfeng Xu, Koichi Takagi, Ryouichi Kawada
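The optimum-path search above has the shape of a Viterbi-style dynamic program: each beat offers candidate motion-graph nodes with synchronization scores, edges link compatible nodes across beats, and the search keeps the best-scoring path. The score layout and graph encoding below are illustrative, not the patent's.

```python
# Viterbi-style search for the highest-scoring node-per-beat path.
def best_path(scores, edges):
    """scores[t][n]: sync score of node n at beat t.
    edges[t][n]: nodes at beat t+1 reachable from node n at beat t."""
    T = len(scores)
    best = [dict() for _ in range(T)]   # best accumulated score per node
    back = [dict() for _ in range(T)]   # backpointers for path recovery
    for n, s in enumerate(scores[0]):
        best[0][n] = s
    for t in range(T - 1):
        for n, acc in best[t].items():
            for m in edges[t][n]:
                cand = acc + scores[t + 1][m]
                if cand > best[t + 1].get(m, float("-inf")):
                    best[t + 1][m] = cand
                    back[t + 1][m] = n
    end = max(best[T - 1], key=best[T - 1].get)
    path = [end]
    for t in range(T - 1, 0, -1):       # walk backpointers to the start
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Three beats, two candidate nodes per beat.
scores = [[1, 2], [5, 1], [1, 4]]
edges = [[[0], [0, 1]], [[0, 1], [1]]]
path = best_path(scores, edges)
```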
  • Patent number: 8896607
    Abstract: A method for a computer system includes receiving a surface deformation for an object from a computer system user, wherein an object model comprises animation variables used to determine the surface of the object model, determining at least one pre-defined object pose from pre-defined object poses in response to the surface deformation, wherein the predefined object poses includes a first predefined object pose and comprises animation variable values, wherein the animation variable values are determined from physical motion capture data of surface positions of a physical representation of the object posed in a first pose, posing the object model in a pose in response to at least the animation variable values, and displaying the object model in the pose on a display to the computer system user.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: November 25, 2014
    Assignee: Two Pic MC LLC
    Inventors: Doug Epps, Nate Reid
  • Patent number: 8896608
    Abstract: The invention relates to a method for providing an animation from prerecorded still pictures where the relative positions of the pictures are known. The method is based on prerecorded still pictures and location data, associated with each still picture, that indicates the projection of the subsequent still picture into the current still picture. The method comprises the repeated steps of providing a current still picture, providing the location data associated with the still picture, generating an animation based on the current still picture and the location data, and presenting the animation on a display. The invention provides the experience of driving a virtual car through the photographed roads, either on autopilot or manually. The user may change speed, drive, pan, shift lanes, turn at crossings, or make U-turns anywhere.
    Type: Grant
    Filed: August 7, 2006
    Date of Patent: November 25, 2014
    Assignee: Movinpics AS
    Inventor: Arve Meisingset
  • Patent number: 8890875
    Abstract: A method of obtaining simulated parameters (pos(t), vit(t), acc(t), par(t)) able to characterize the movement of an articulated structure provided with sensors, the method comprising the following steps: calculating, from estimated movement state parameters of the structure, estimated measurement data (H(t), ?(t)), each estimated measurement data item corresponding to a measurement delivered by a sensor; computing the difference between the measurements delivered by the sensors and the corresponding estimated measurement data; applying global observer-type mathematical processing to the data issuing from the difference in order to obtain at least one estimated difference for an estimated movement state parameter; and adding the estimated difference to the corresponding estimated movement state parameter in order to form a simulated parameter.
    Type: Grant
    Filed: April 23, 2008
    Date of Patent: November 18, 2014
    Assignees: Commissariat a l'Energie Atomique, Inria Institut National de Recherche en Informatique et en Automatique
    Inventors: Fabien Jammes, Bruno Flament, Pierre-Brice Wieber
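The loop in the abstract (predict the measurement from the estimated state, difference it against the real measurement, feed a correction back) is the classic observer pattern. A scalar Luenberger-style observer stands in below for the patent's full articulated-structure model; the gain value and measurement model are assumptions.

```python
# Minimal observer step: innovation = measurement - predicted measurement,
# and the state estimate is corrected by a gain-weighted innovation.
def observer_step(state, measurement, predict_meas, gain):
    innovation = measurement - predict_meas(state)
    return state + gain * innovation

# Track a constant true position of 2.0 from a poor initial estimate; the
# identity measurement model and gain 0.3 are illustrative choices.
state = 0.0
for _ in range(50):
    state = observer_step(state, 2.0, lambda s: s, 0.3)
```

Each iteration shrinks the estimation error by a factor of (1 - gain), so the simulated parameter converges to the measured one.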
  • Patent number: 8878873
    Abstract: Disclosed is a computer implemented method, computer program product, and apparatus to decorate visible attributes of a rendered avatar. A server may collect a first user profile of a first avatar, the first user profile having at least one interest of a user. Next, the server may receive a location of the first avatar, wherein the location is associated with a view to at least a second avatar. The server can identify the second avatar among a group of avatars visible with respect to the first avatar. Further, the server may read a target profile of the second avatar and then determine whether the target profile satisfies a criterion based on the first user profile. In addition, the server may render a modified rendered avatar to a client, responsive to the determination that the target profile satisfies the criterion.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: November 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Dwip N. Banerjee, Aditya Mohan, Sandeep R. Patil, Dhaval K. Shah
  • Patent number: 8878880
    Abstract: A method of driving an electrophoretic display device includes changing the gradation level of image data on the basis of correction data corresponding to the gradation level, converting image data with the changed gradation level to a dithering pattern, in which the first color and the second color are combined, corresponding to the changed gradation level for each predetermined region of image data, and driving the electrophoretic particles of the first color and the electrophoretic particles of the second color on the basis of image data converted to the dithering pattern for the plurality of pixels in the display section.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: November 4, 2014
    Assignee: Seiko Epson Corporation
    Inventors: Tetsuaki Otsuki, Kota Muto
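The "convert gradation to a dithering pattern" step can be sketched with ordered dithering: a threshold matrix decides, per pixel, whether the first- or second-color particles are driven. The 2x2 Bayer matrix and 0-255 gradation range below are assumptions for illustration; the patent's correction data and pattern tables are not specified here.

```python
# Illustrative two-color ordered dithering with a 2x2 Bayer matrix.
BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic Bayer order, scaled to 0-255 below

def dither(gray, w, h):
    """gray[y][x] in 0..255 -> 1 (drive second color) or 0 (first color)."""
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Per-pixel threshold from the tiled Bayer matrix.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            row.append(1 if gray[y][x] > threshold else 0)
        out.append(row)
    return out
```

A mid-gray region comes out as a checkerboard of the two particle colors, which is the spatial mixing the abstract relies on.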
  • Patent number: 8878851
    Abstract: A method for streaming vector images to wireless devices, including receiving a request from a wireless device for a portion of a vector image and a target display width and height, the vector image including a plurality of vector primitives, determining which of the vector primitives are positioned so as to overlap the requested portion, clipping the overlapping vector primitives with the portion, and transmitting the clipped vector primitives that overlap the portion. A system and a computer readable storage medium are also described and claimed.
    Type: Grant
    Filed: November 12, 2004
    Date of Patent: November 4, 2014
    Assignee: Synchronica plc
    Inventors: Andrew Opala, Rudy Ziegler
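The server-side steps above (find primitives overlapping the requested portion, clip them, transmit the result) can be sketched with rectangles standing in for vector primitives; real primitives would need path clipping, so this is a deliberate simplification.

```python
# Illustrative overlap test and clip for axis-aligned (x, y, w, h) rects.
def overlaps(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def clip_rect(r, view):
    rx, ry, rw, rh = r; vx, vy, vw, vh = view
    x1, y1 = max(rx, vx), max(ry, vy)
    x2, y2 = min(rx + rw, vx + vw), min(ry + rh, vy + vh)
    return (x1, y1, x2 - x1, y2 - y1)

def primitives_for_viewport(primitives, view):
    """Keep only primitives overlapping the view, clipped to it."""
    return [clip_rect(p, view) for p in primitives if overlaps(p, view)]

sent = primitives_for_viewport([(0, 0, 50, 50), (200, 200, 10, 10)],
                               (25, 25, 50, 50))
```

Only the clipped, overlapping primitives would be streamed to the device, keeping the payload proportional to the visible portion.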
  • Publication number: 20140320507
    Abstract: A user terminal device includes a display which displays a screen including an object drawn by a user, a sensor which senses user manipulation, and a controller which provides animation effects regarding the object when a preset event occurs, and performs a control operation matching the object when the object is selected by user manipulation.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 30, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: In-sik MYUNG, Taik-heon RHEE, Jong-woo JUNG, Dong-bin CHO
  • Publication number: 20140320508
    Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured, and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion applied to a second portion, such that the virtual character is animated with a combination of the live and pre-recorded motions.
    Type: Application
    Filed: July 14, 2014
    Publication date: October 30, 2014
    Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffery Margolis
  • Publication number: 20140313208
    Abstract: Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to.
    Type: Application
    Filed: June 30, 2014
    Publication date: October 23, 2014
    Inventors: Dimitar Petrov Filev, Oleg Yurievitch Gusikhin, Fazal Urrahman Syed, Erica Klampfl, Thomas J. Giuli, Yifan Chen
  • Publication number: 20140313207
    Abstract: Systems and methods are described for animating 3D characters using synthetic motion data generated by generative models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, an animation system is accessible via a server system that utilizes the ability of generative models to generate synthetic motion data across a continuum to enable multiple animators to effectively reuse the same set of previously recorded motion capture data to produce a wide variety of desired animation sequences. One embodiment of the invention includes a server system configured to communicate with a database containing motion data including repeated sequences of motion, where the differences between the repeated sequences of motion are described using at least one high level characteristic.
    Type: Application
    Filed: April 21, 2014
    Publication date: October 23, 2014
    Applicant: Mixamo, Inc.
    Inventors: Graham Taylor, Stefano Corazza, Nazim Kareemi, Edilson De Aguiar
  • Patent number: 8866823
    Abstract: Automatically creating a series of intermediate states may include receiving a start state and an end state of a reactive system, identifying one or more components of the start state and the end state and determining one or more events associated with the one or more components. One or more intermediate states between the start state and the end state, and one or more transitions from and to the one or more intermediate states are created using the one or more components of the start state and the end state and the one or more events associated with the one or more components. The one or more intermediate states and the one or more transitions form one or more time-based paths from the start state to the end state occurring in response to applying the one or more events to the associated one or more components.
    Type: Grant
    Filed: October 13, 2010
    Date of Patent: October 21, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rachel K. E. Bellamy, Michael Desmond, Jacquelyn A. Martino, Paul M. Matchen, John T. Richards, Calvin B. Swart
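Generating intermediate states between a start and end state can be sketched, in the simplest numeric case, as evenly spaced interpolation of component properties; the patent's event-driven transition machinery is reduced here to linear interpolation purely for illustration.

```python
# Illustrative: n intermediate states between start and end, where states
# are dicts of numeric component properties sharing the same keys.
def intermediate_states(start, end, n):
    states = []
    for k in range(1, n + 1):
        t = k / (n + 1)  # fraction of the way from start to end
        states.append({key: start[key] + t * (end[key] - start[key])
                       for key in start})
    return states

states = intermediate_states({"x": 0.0}, {"x": 10.0}, 4)
```

The resulting chain of states, plus the transitions between consecutive ones, forms a time-based path from the start state to the end state.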
  • Patent number: 8860752
    Abstract: Disclosed are methods and systems for multimedia scripting, including evaluating a script at runtime and invoking a process for editing multimedia in dependence upon the script. Multimedia may include a still image and video images. Multimedia scripting may also include accepting text entered into a text-input graphical user interface as a script for runtime evaluation, accepting from a non-text-based graphical user interface a designation of scripts for runtime evaluation, and effecting a disposition of the edited multimedia in dependence upon a script, such as storing the multimedia as a file, presenting the multimedia, or encoding the edited multimedia as an email attachment.
    Type: Grant
    Filed: July 13, 2006
    Date of Patent: October 14, 2014
    Assignee: Apple Inc.
    Inventor: Frank Doepke
  • Patent number: 8860755
    Abstract: A display apparatus and process are provided for displaying a static source image such that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus sequentially views a predetermined selection of image fractions through the mask, which are perceived as a changing sequence of images. Through persistence of vision, the observer perceives the reconstructed imagery as live-action animation, a traveling singular image or series of static images, or changing image sequences, from a plurality of lines of sight.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: October 14, 2014
    Assignee: ZMI Holdings Ltd.
    Inventor: Russell H. Train
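The slice-and-redistribute step above can be sketched as a round-robin interleave: each source frame is cut into strips, and the strips are written out one per frame in turn, so a mask exposing every Nth strip reveals a different frame from each viewing position. Strip extraction itself is omitted; strips are just list elements here.

```python
# Illustrative interleave: frames is a list of N frames, each a list of
# strips; output alternates strips frame-by-frame (barrier-grid layout).
def interleave(frames):
    out = []
    for group in zip(*frames):  # one strip from each frame, position by position
        out.extend(group)
    return out

output_image = interleave([["a1", "a2"], ["b1", "b2"]])
```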
  • Publication number: 20140292770
    Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face, and the robot pixels deploy in a physical arrangement to display a visual representation of the face, and would change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.
    Type: Application
    Filed: March 5, 2014
    Publication date: October 2, 2014
    Applicant: Disney Enterprises, Inc.
    Inventors: Paul BEARDSLEY, Javier ALONSO MORA, Andreas BREITENMOSER, Martin RUFLI, Roland SIEGWART, Iain MATTHEWS, Katsu YAMANE
  • Publication number: 20140285496
    Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
    Type: Application
    Filed: June 6, 2014
    Publication date: September 25, 2014
    Applicant: Mixamo, Inc.
    Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20140267312
    Abstract: A rail manipulator indicates the possible range(s) of movement of a part of a computer-generated character in a computer animation system. The rail manipulator obtains a model of the computer-generated character. The model may be a skeleton structure of bones connected at joints. The interconnected bones may constrain the movements of one another. When an artist selects one of the bones for movement, the rail manipulator determines the range of movement of the selected bone. The determination may be based on the position and/or the ranges of movements of other bones in the skeleton structure. The range of movement is displayed on-screen to the artist, together with the computer-generated character. In this way, the rail manipulator directly communicates to the artist the degree to which a portion of the computer-generated character can be moved, in response to the artist's selection of the portion of the computer-generated character.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: DreamWorks Animation LLC
    Inventor: Alexander P. Powell
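The rail itself, the displayed range of movement of a selected bone, can be illustrated by sampling tip positions across the joint's angular limits. This is a minimal 2D sketch with hypothetical names; the patent's determination may also account for the positions and ranges of other bones in the skeleton.

```python
import math

# Sketch: sample the arc ("rail") of positions the tip of a selected
# bone can reach, given its joint-angle limits in radians.
def rail_points(joint_pos, bone_len, ang_min, ang_max, samples=16):
    pts = []
    for i in range(samples):
        a = ang_min + (ang_max - ang_min) * i / (samples - 1)
        pts.append((joint_pos[0] + bone_len * math.cos(a),
                    joint_pos[1] + bone_len * math.sin(a)))
    return pts
```

The returned points would then be rendered on-screen alongside the character as the visible rail.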
  • Publication number: 20140267313
    Abstract: Programs for creating a set of behaviors for lip sync movements and nonverbal communication may include analyzing a character's speaking behavior through the use of acoustic, syntactic, semantic, pragmatic, and rhetorical analyses of the utterance. For example, a non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to: receive a text specifying words to be spoken by a virtual character; extract metaphoric elements, discourse elements, or both from the text; generate one or more mental state indicators based on the metaphoric elements, the discourse elements, or both; map each of the one or more mental state indicators to a behavior that the virtual character should display with nonverbal movements that convey the mental state indicators; and generate a set of instructions for the nonverbal movements based on the behaviors.
    Type: Application
    Filed: March 13, 2014
    Publication date: September 18, 2014
    Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventor: Stacy Marsella
  • Patent number: 8836693
    Abstract: A course CR is set in a virtual space SP along which a player character CH can move. The player character moves freely on the course, as long as it does not run off the course. In the course, a reference moving path is set, indicating a standard moving path of the player character. A camera path of a virtual camera is set along the reference moving path. On the reference moving path, an object-corresponding position (CP) is determined corresponding to a position (CH(X,Y,Z)) of the player character in the virtual space. A camera position corresponding to the object-corresponding position, and a photographing condition, are determined for the virtual camera.
    Type: Grant
    Filed: May 9, 2007
    Date of Patent: September 16, 2014
    Assignee: Kabushiki Kaisha Sega
    Inventor: Tetsu Katano
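The object-corresponding position can be illustrated as the sample on the reference moving path nearest to the character, assuming the path is represented as a list of sampled points (a simplification; the patent does not specify the path representation).

```python
# Sketch: find the object-corresponding position (CP) as the nearest
# sampled point on the reference moving path to the character.
def object_corresponding_position(path, char_pos):
    """path: list of (x, y, z) samples; char_pos: (x, y, z) of the character."""
    return min(path, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, char_pos)))
```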
  • Patent number: 8836707
    Abstract: At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer.
    Type: Grant
    Filed: August 26, 2013
    Date of Patent: September 16, 2014
    Assignee: Apple Inc.
    Inventors: Andrew Platzer, John Harper
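Completing several animations from a single timer, as described, might look like the following sketch: one shared clock value drives every animation's progress. The class and method names are illustrative assumptions, not Apple's API.

```python
# Sketch: multiple animations driven to completion by one shared timer.
class Animation:
    def __init__(self, duration):
        self.duration = duration
        self.progress = 0.0
        self.done = False

    def advance_to(self, t):
        # Progress is derived from the shared timer value, not a per-
        # animation timer.
        self.progress = min(t / self.duration, 1.0)
        self.done = self.progress >= 1.0

def tick(animations, t):
    """Advance all animations to shared time t; return True when all done."""
    for a in animations:
        a.advance_to(t)
    return all(a.done for a in animations)
```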
  • Patent number: 8817028
    Abstract: A computer-implemented method and system creates dynamic sets to automatically arrange dimension annotations in a CAD model. The invention (method/product/data storage medium/system) determines a location to place a new dimension annotation based on the dimension type of the entity selected for annotation. One or more sets of existing dimension annotations are created. The existing dimension annotations in a set are sorted together with the new dimension annotation when it has similar characteristics to those in the set, and are then displayed in sorted order in a view of the CAD model on the computer screen.
    Type: Grant
    Filed: February 3, 2010
    Date of Patent: August 26, 2014
    Assignee: Dassault Systemes SolidWorks Corporation
    Inventors: Sumit Yadav, Vajrang Parvate, Marc Leizza, Shailesh Kandage
  • Patent number: 8810582
    Abstract: A lighting module of a hair/fur pipeline may be used to produce lighting effects in a lighting phase for a shot, and an optimization module may be used to determine whether a cache hair state file including hair parameters exists and whether it includes hair parameters matching those to be used in the shot; if so, the hair parameter values from the cache hair state file are used in the lighting phase.
    Type: Grant
    Filed: May 11, 2007
    Date of Patent: August 19, 2014
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc
    Inventors: Armin Walter Bruderlin, Francois Chardavoine, Clint Chua, Gustav Melich
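A cache keyed by hair parameter values, so the lighting phase can detect a matching cache hair state file, could be sketched as follows. The hashing scheme, class, and method names are assumptions for illustration, not the patent's file format.

```python
import hashlib
import json

def cache_key(hair_params):
    # Deterministic key from parameter values; sort_keys makes the
    # serialization independent of dict insertion order.
    blob = json.dumps(hair_params, sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()

class HairStateCache:
    """Toy stand-in for a cache hair state file lookup."""
    def __init__(self):
        self._store = {}

    def lookup(self, params):
        # Returns the cached hair state only when parameters match.
        return self._store.get(cache_key(params))

    def save(self, params, state):
        self._store[cache_key(params)] = state
```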
  • Publication number: 20140225901
    Abstract: In a multi-participant modeled virtual reality environment, avatars are modeled beings that include moveable eyes creating the impression of an apparent gaze direction. Control of eye movement may be performed autonomously using software to select and prioritize targets in a visual field. Sequence and duration of apparent gaze may then be controlled using automatically determined priorities. Optionally, user preferences for object characteristics may be factored into determining priority of apparent gaze. Resulting modeled avatars are rendered on client displays to provide more lifelike and interesting avatar depictions with shifting gaze directions.
    Type: Application
    Filed: April 21, 2014
    Publication date: August 14, 2014
    Inventors: Gary Stephen Shuster, Brian Mark Shuster
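Turning automatically determined priorities into a gaze sequence with durations could look like this minimal sketch, with dwell time proportional to priority. The function and its proportional-allocation rule are illustrative assumptions.

```python
# Sketch: allocate gaze dwell time across targets by priority.
def gaze_schedule(targets, total_time):
    """targets: list of (name, priority); returns (name, dwell seconds)."""
    total = sum(p for _, p in targets)
    return [(name, total_time * p / total) for name, p in targets]
```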
  • Publication number: 20140225900
    Abstract: A method for generating real-time goal space steering for data-driven character animation is disclosed. A goal space table of sparse samplings of possible future locations is computed, indexed by the starting blend value and frame. A steer space is computed as a function of the current blend value and frame, interpolated from the nearest indices of the table lookup in the goal space. The steer space is then transformed to local coordinates of a character's position at the current frame. The steer space samples closest to a line connecting the character's position with the goal location may be selected. The blending values of the two selected steer space samples are interpolated to compute the new blending value to render subsequent frames of an animation sequence.
    Type: Application
    Filed: April 17, 2014
    Publication date: August 14, 2014
    Applicant: AUTODESK, INC.
    Inventor: MICHAEL GIRARD
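Interpolating the blending values of the two steer-space samples closest to the goal direction, as described above, can be sketched in character-local coordinates. This is a simplified stand-in for the table-driven lookup; sample format and weighting are assumptions.

```python
import math

def steer_blend(steer_samples, goal):
    """steer_samples: list of (blend_value, (x, y)) future positions in
    character-local coordinates. Interpolate the blends of the two samples
    whose directions are angularly closest to the goal direction."""
    goal_ang = math.atan2(goal[1], goal[0])

    def ang_err(sample):
        x, y = sample[1]
        return abs(math.atan2(y, x) - goal_ang)

    a, b = sorted(steer_samples, key=ang_err)[:2]
    ea, eb = ang_err(a), ang_err(b)
    # Weight each sample inversely to its angular error.
    w = eb / (ea + eb) if ea + eb > 0 else 0.5
    return w * a[0] + (1 - w) * b[0]
```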
  • Patent number: 8805583
    Abstract: A robot, which performs natural walking similar to a human with high energy efficiency through optimization of actuated dynamic walking, and a control method thereof. The robot includes an input unit to which a walking command of the robot is input, and a control unit to control walking of the robot by calculating torque input values through control variables, obtaining a resultant motion of the robot through calculation of forward dynamics using the torque input values, and minimizing the value of an objective function defined as the sum of a plurality of performance indices through adjustment of the control variables.
    Type: Grant
    Filed: August 26, 2011
    Date of Patent: August 12, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Bok Man Lim, Kyung Shik Roh, Woong Kwon, Ju Suk Lee
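Minimizing an objective built from several performance indices by adjusting control variables can be illustrated with a tiny derivative-free search. The patent's optimizer and forward-dynamics evaluation are not specified here, so this is only a schematic with assumed names.

```python
# Sketch: derivative-free coordinate descent over control variables,
# a stand-in for the torque-parameter optimization described above.
def minimize(objective, x0, step=0.1, iters=200):
    x = list(x0)
    best = objective(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                v = objective(trial)
                if v < best:
                    x, best = trial, v
                    improved = True
        if not improved:
            step *= 0.5  # refine the search when no move helps
    return x, best
```

In the patent's setting, `objective` would run forward dynamics from the torque inputs and sum the performance indices over the resulting motion.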
  • Patent number: 8803889
    Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: August 12, 2014
    Assignee: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffrey Margolis
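Applying live motion to one portion of the character and pre-recorded motion to another reduces, per joint, to choosing which source supplies the transform. A minimal sketch with hypothetical joint names and a dict-of-rotations pose format:

```python
# Sketch: compose a pose from live capture (lower body) and a
# pre-recorded clip (everything else). Joint names are illustrative.
LIVE_JOINTS = {"hip", "left_leg", "right_leg"}

def compose_pose(live, prerecorded):
    """live, prerecorded: dicts mapping joint name -> rotation value."""
    pose = {}
    for joint, rotation in prerecorded.items():
        if joint in LIVE_JOINTS and joint in live:
            pose[joint] = live[joint]      # portion driven by live motion
        else:
            pose[joint] = rotation         # portion driven by the clip
    return pose
```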
  • Patent number: 8803886
    Abstract: The present invention provides a facial image display apparatus that can display moving images concentrated on the face when images of people's faces are displayed. A facial image display apparatus is provided wherein a facial area detecting unit (21) detects facial areas in which faces are displayed from within a target image for displaying a plurality of faces; a dynamic extraction area creating unit (22) creates, based on the facial areas detected by the facial area detecting unit, a dynamic extraction area of which at least one of position and surface area varies over time in the target image; and a moving image output unit (27) sequentially extracts images in the dynamic extraction area and outputs the extracted images as a moving image.
    Type: Grant
    Filed: July 31, 2006
    Date of Patent: August 12, 2014
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventors: Munetaka Tsuda, Shuji Hiramatsu, Akira Suzuki
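A dynamic extraction area whose position and surface area vary over time can be sketched as a crop rectangle interpolated from the full frame toward a detected facial area. The `(x, y, w, h)` rectangle format is an assumption for illustration.

```python
def lerp_rect(a, b, t):
    """Linear interpolation between two (x, y, w, h) rectangles."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(4))

# Sketch: crop windows moving from the full frame toward the face area,
# varying both position and surface area over the sequence.
def dynamic_extraction(full, face, steps):
    return [lerp_rect(full, face, s / (steps - 1)) for s in range(steps)]
```

Extracting each rectangle in turn and emitting the crops as frames would produce the output moving image.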
  • Patent number: 8803887
    Abstract: A computer graphic system and methods for simulating hair is provided. In accordance with aspects of the disclosure a method for hybrid hair simulation using a computer graphics system is provided. The method includes generating a plurality of modeled hair strands using a processor of the computer graphics system. Each hair strand includes a plurality of particles and a plurality of spring members coupled in between the plurality of particles. The method also includes determining a first position and a first velocity for each particle in the plurality of modeled hair strands using the processor and coarsely modeling movement of the plurality of modeled hair strands with a continuum fluid solver. Self-collisions of the plurality of modeled hair strands are computed with a discrete collision model using the processor.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: August 12, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: Aleka McAdams, Andrew Selle, Kelly Ward, Eftychios Sifakis, Joseph Teran
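The particle-and-spring strand model can be illustrated with a toy explicit-Euler step in 2D. The patent couples strands with a continuum fluid solver and a discrete collision model, both omitted here; constants and the pinned-root convention are assumptions.

```python
# Sketch: one integration step of a single hair strand modeled as
# particles connected by springs, with the root particle pinned.
def step_strand(positions, velocities, rest_len, dt=0.01, k=100.0, gravity=-9.8):
    n = len(positions)
    forces = [[0.0, gravity] for _ in range(n)]
    for i in range(n - 1):
        # Spring force between consecutive particles.
        dx = positions[i + 1][0] - positions[i][0]
        dy = positions[i + 1][1] - positions[i][1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = k * (dist - rest_len)
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx; forces[i][1] += fy
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
    new_p, new_v = [positions[0]], [[0.0, 0.0]]  # root stays fixed
    for i in range(1, n):
        vx = velocities[i][0] + dt * forces[i][0]
        vy = velocities[i][1] + dt * forces[i][1]
        new_v.append([vx, vy])
        new_p.append([positions[i][0] + dt * vx, positions[i][1] + dt * vy])
    return new_p, new_v
```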
  • Patent number: 8797330
    Abstract: An operating system may receive transition information indicating that a user-interface of an application is to be transitioned from a first state to a second state. Transition of the user-interface from the first state to the second state comprises a change in a property of a user-interface item. The operating system may, in response to receiving the transition information, obtain from a rendering engine a value for the property of the user-interface item corresponding to the first state. The operating system may embed a module in the rendering engine so as to detect the change in the property of the user-interface item through communication from the application to the rendering engine; and obtain from the module a respective value for the property of the user-interface item corresponding to the second state. The operating system may generate an animation based on a comparison between the value and the respective value.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: August 5, 2014
    Assignee: Google Inc.
    Inventors: Chet Haase, Romain Guy
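Given the property value obtained for the first state and the value detected for the second state, generating the animation reduces to interpolating between them. A minimal sketch, assuming linear easing and a scalar property:

```python
# Sketch: frame-by-frame values for animating a property from its
# first-state value to its second-state value (frames >= 2).
def animate(start, end, frames):
    return [start + (end - start) * f / (frames - 1) for f in range(frames)]
```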
  • Patent number: 8797331
    Abstract: An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: August 5, 2014
    Assignee: Sony Corporation
    Inventors: Akane Sano, Masamichi Asukai, Taiji Ito, Yoichiro Sako
  • Publication number: 20140210831
    Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the mouth; dividing said input into a sequence of acoustic units; selecting an expression to be output by said head; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector for a selected expression, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein the image parameters define the face of a head using an appearance model comprising a plurality of shape modes and
    Type: Application
    Filed: January 29, 2014
    Publication date: July 31, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Bjorn Stenger, Robert Anderson, Javier Latorre-Martinez, Vincent Ping Leung Wan, Roberto Cipolla