Temporal Interpolation Or Processing Patents (Class 345/475)
-
Patent number: 9001129
Abstract: A processing apparatus for creating an avatar is provided. The processing apparatus calculates skeleton sizes of joints of the avatar and local coordinates corresponding to sensors attached to a target user, by minimizing a sum of a difference function and a skeleton prior function, the difference function representing a difference between a forward kinematics function regarding the joints with respect to reference poses of the target user and positions of the sensors, and the skeleton prior function based on statistics of skeleton sizes with respect to reference poses of a plurality of users.
Type: Grant
Filed: October 19, 2011
Date of Patent: April 7, 2015
Assignees: Samsung Electronics Co., Ltd.; Texas A&M University System
Inventors: Taehyun Rhee, Inwoo Ha, Dokyoon Kim, Xiaolin Wei, Jinxiang Chai, Huajun Liu
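For orientation, a minimal sketch of the kind of joint estimation this abstract describes, assuming a toy two-joint planar chain and a single sensor; the chain, reference poses, prior weight, and all names are illustrative, not the patented implementation.

```python
# Jointly estimate bone lengths and a sensor's local offset by minimizing a data
# term (forward kinematics vs. observed sensor positions) plus a prior pulling
# bone lengths toward population statistics.
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(bone_lengths, joint_angles, local_offset):
    """Position of a sensor attached to the end of a 2-joint planar chain."""
    l1, l2 = bone_lengths
    a1, a2 = joint_angles
    elbow = np.array([l1 * np.cos(a1), l1 * np.sin(a1)])
    hand = elbow + np.array([l2 * np.cos(a1 + a2), l2 * np.sin(a1 + a2)])
    return hand + local_offset  # sensor sits at a fixed offset from the last joint

# Reference poses (joint angles) and measured sensor positions for those poses.
poses = [np.array([0.3, 0.5]), np.array([1.0, -0.2]), np.array([0.1, 1.2])]
observed = [forward_kinematics([0.32, 0.27], p, np.array([0.02, 0.01])) for p in poses]

mean_lengths = np.array([0.30, 0.25])   # "skeleton prior": population statistics
prior_weight = 0.1

def objective(x):
    bone_lengths, local_offset = x[:2], x[2:]
    data_term = sum(np.sum((forward_kinematics(bone_lengths, p, local_offset) - o) ** 2)
                    for p, o in zip(poses, observed))
    prior_term = prior_weight * np.sum((bone_lengths - mean_lengths) ** 2)
    return data_term + prior_term

x0 = np.concatenate([mean_lengths, np.zeros(2)])
result = minimize(objective, x0)
print("estimated bone lengths:", result.x[:2], "sensor offset:", result.x[2:])
```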
-
Patent number: 8994738
Abstract: System and method for rendering a sequence of images corresponding to a sequence of camera poses of a target area to generate an animation representative of a progression of camera poses are provided. An initial image and an associated initial depthmap of a target area captured from an initial camera pose, and a final image and an associated final depthmap of the target area captured from a final camera pose are identified. A plurality of intermediate images representing a plurality of intermediate camera poses directed at the target are produced by performing interpolation on the initial image, the initial depthmap, the final image and the final depthmap. Each intermediate image is associated with a point along the navigational path between the initial and the final camera poses. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
Type: Grant
Filed: March 21, 2012
Date of Patent: March 31, 2015
Assignee: Google Inc.
Inventors: Carlos Hernandez Esteban, Steven Maxwell Seitz, Matthew Robert Simpson
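A sketch of one ingredient of such a system: generating intermediate camera poses along a path between an initial and a final pose (positions lerped, rotations slerped). The image and depthmap warping itself is not shown, and all pose values are illustrative.

```python
# Interpolate camera poses along a navigational path between two keyframes.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

initial_pos = np.array([0.0, 0.0, 5.0])
final_pos = np.array([2.0, 1.0, 3.0])
key_rots = Rotation.from_euler("xyz", [[0, 0, 0], [0, 30, 10]], degrees=True)
slerp = Slerp([0.0, 1.0], key_rots)

def intermediate_pose(t):
    """Camera pose at parameter t in [0, 1] along the path."""
    position = (1.0 - t) * initial_pos + t * final_pos
    rotation = slerp([t])[0]          # interpolated orientation
    return position, rotation

for t in np.linspace(0.0, 1.0, 5):
    pos, rot = intermediate_pose(t)
    print(round(float(t), 2), pos, rot.as_euler("xyz", degrees=True))
```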
-
Patent number: 8988439
Abstract: A method or apparatus to provide motion-based display effects in a mobile device is described. The method comprises determining a motion of the mobile device using an accelerometer. The method further comprises utilizing the motion of the mobile device to overlay a motion-based display effect on the display of the mobile device, in one embodiment to enhance the three-dimensional effect of the image.
Type: Grant
Filed: June 6, 2008
Date of Patent: March 24, 2015
Assignee: DP Technologies, Inc.
Inventors: Philippe Kahn, Arthur Kinsolving, Colin McClarin Cooper, John Michael Fitzgibbons
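An illustrative sketch of this idea, assuming a hypothetical API: the accelerometer's gravity vector is mapped to a small 2D offset for an overlay layer so it appears to float above the image as the device tilts. The scaling constants are made up.

```python
# Map device tilt (from gravity components, in g units) to a parallax overlay shift.
import math

def overlay_offset(accel_x, accel_y, accel_z, max_shift_px=12.0):
    """Return (dx, dy) pixel offset for the overlay layer given accelerometer readings."""
    roll = math.atan2(accel_x, math.sqrt(accel_y ** 2 + accel_z ** 2))
    pitch = math.atan2(accel_y, math.sqrt(accel_x ** 2 + accel_z ** 2))
    # Scale tilt (clamped to +/-45 degrees) into a bounded shift in pixels.
    limit = math.radians(45.0)
    dx = max(-1.0, min(1.0, roll / limit)) * max_shift_px
    dy = max(-1.0, min(1.0, pitch / limit)) * max_shift_px
    return dx, dy

# Device tilted slightly to the right and toward the user:
print(overlay_offset(0.25, -0.10, 0.96))
```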
-
Patent number: 8988422
Abstract: Techniques are disclosed for augmenting hand-drawn animation of human characters with three-dimensional (3D) physical effects to create secondary motion. Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These 3D effects are largely passive and tend to be time consuming to animate by hand, yet most are very effectively simulated in current animation software. The techniques enable hand-drawn characters to interact with simulated objects such as cloth and clothing, balls and particles, and fluids. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects.
Type: Grant
Filed: December 17, 2010
Date of Patent: March 24, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Jessica Kate Hodgins, Eakta Jain, Yaser Sheikh
-
Patent number: 8988437
Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
Type: Grant
Filed: March 20, 2009
Date of Patent: March 24, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
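A minimal sketch of crossfade-style blending at a transition point between a pre-canned clip and captured motion: over a short window, joint positions from the pre-canned clip are linearly blended into joint positions mapped from the live motion. The data shapes and blend length are illustrative.

```python
# Chain a pre-canned clip into a captured-motion clip with a linear blend window.
import numpy as np

def blend_transition(precanned, live, blend_frames):
    """precanned, live: arrays of shape (frames, joints, 3). Returns the chained clip."""
    n = blend_frames
    out = []
    out.extend(precanned[:-n])                 # purely pre-canned frames
    for i in range(n):                         # transition window
        w = (i + 1) / float(n)                 # weight shifts toward the live motion
        out.append((1.0 - w) * precanned[-n + i] + w * live[i])
    out.extend(live[n:])                       # purely captured-motion frames
    return np.array(out)

precanned = np.zeros((30, 20, 3))              # e.g. ball toss / take-back
live = np.ones((40, 20, 3))                    # e.g. the user's forward swing
clip = blend_transition(precanned, live, blend_frames=10)
print(clip.shape)                              # (60, 20, 3)
```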
-
Patent number: 8982132
Abstract: Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element.
Type: Grant
Filed: February 28, 2011
Date of Patent: March 17, 2015
Assignee: Adobe Systems Incorporated
Inventors: Joaquin Cruz Blas, Jr., James W. Doubek
-
Patent number: 8982122
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 17, 2015
Assignee: Mixamo, Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20150070362
Abstract: A transition path determinator 4 determines a transition path leading from a current screen which an output unit 9 is currently displaying to a transition destination screen for which the transition path determinator accepts a shortcut operation from a user via an input unit 1 with reference to a hierarchical structure which a transition table storage 3 stores. An animation-during-transition acquiring unit 5 acquires each animation during transition included in the transition path from an animation-during-transition table storage 6, an animation-during-transition controller 7 controls a playback speed according to the number of hierarchical layers transitioned in the transition path, and an output unit 9 displays the animation during transition in order at the playback speed, so that a transition to the transition destination screen is made.
Type: Application
Filed: July 20, 2012
Publication date: March 12, 2015
Applicant: Mitsubishi Electric Corporation
Inventor: Masato Hirai
-
Patent number: 8976184
Abstract: A game developer can "tag" an item in the game environment. When an animated character walks near the "tagged" item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest.
Type: Grant
Filed: October 9, 2013
Date of Patent: March 10, 2015
Assignee: Nintendo Co., Ltd.
Inventors: Henry Sterchi, Jeff Kalles, Shigeru Miyamoto, Denis Dyack, Carey Murray
-
Patent number: 8970592
Abstract: A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes obtaining first data corresponding to a first simulation of matter in a space domain. The method also includes performing, using the first data, a second simulation that produces second data representative of particles in the space domain. The method also includes rasterizing the second data representative of the particles as defined by cells of a grid, wherein each cell has a common depth-to-size ratio, and rendering an image of the particles from the rasterized second data.
Type: Grant
Filed: April 19, 2011
Date of Patent: March 3, 2015
Assignee: Lucasfilm Entertainment Company LLC
Inventor: Frank Losasso Petterson
-
Patent number: 8957900
Abstract: Animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
Type: Grant
Filed: December 13, 2010
Date of Patent: February 17, 2015
Assignee: Microsoft Corporation
Inventors: Bonny Lau, Song Zou, Wei Zhang, Brian Beck, Jonathan Gleasman, Pai-Hung Chen
-
Patent number: 8957899
Abstract: The present invention includes an image processing apparatus having a slide show function of displaying a plurality of images while sequentially automatically switching the images and including an adding unit which adds a transition effect at the time of switching display from a first image to a second image, an obtaining unit which obtains characteristic values indicative of luminance of the first and second images, and a control unit which controls the adding unit to add a transition effect when the difference between the characteristic value of the first image and the characteristic value of the second image is equal to or larger than a predetermined threshold.
Type: Grant
Filed: August 2, 2010
Date of Patent: February 17, 2015
Assignee: Canon Kabushiki Kaisha
Inventor: Hirofumi Takei
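An illustrative sketch of the decision rule: add a transition effect only when the average luminance of consecutive slideshow images differs by at least a threshold. Rec. 601 luma weights are used and the threshold value is arbitrary.

```python
# Decide whether to insert a transition effect based on luminance difference.
import numpy as np

def mean_luminance(rgb):
    """rgb: uint8 array of shape (H, W, 3). Returns mean luma in [0, 255]."""
    weights = np.array([0.299, 0.587, 0.114])
    return float((rgb.astype(np.float64) @ weights).mean())

def needs_transition_effect(first_img, second_img, threshold=40.0):
    return abs(mean_luminance(first_img) - mean_luminance(second_img)) >= threshold

dark = np.full((480, 640, 3), 30, dtype=np.uint8)
bright = np.full((480, 640, 3), 200, dtype=np.uint8)
print(needs_transition_effect(dark, bright))   # True: insert a fade between them
```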
-
Patent number: 8941667
Abstract: The invention generally provides a method and apparatus for up-converting the frame rate of a digital video signal, the method comprising: receiving a digital video signal containing a first frame and a second frame; finding in one of the received frames, matches for objects in the other of the received frames; utilizing 3 dimensional position data in respect of the objects within the frames to determine 3 dimensional movement matrices for the matched objects; using the 3 dimensional movement matrices, determining the position of the objects in a temporally intermediate frame and thereby generating an interpolated frame, temporally between the first and second frame.
Type: Grant
Filed: January 29, 2010
Date of Patent: January 27, 2015
Assignee: Vestel Elektronik Sanayi ve Ticaret A.S.
Inventors: Osman Serdar Gedik, Abdullah Aydin Alatan
-
Patent number: 8928671
Abstract: In particular embodiments, a method includes generating a 3D display of an avatar of a person, where the avatar can receive inputs identifying a type of a physiological event, a location of the physiological event in or on a person's body in three spatial dimensions, a time range of the physiological event, a quality of the physiological event, and rendering the physiological event on the avatar based on the inputs.
Type: Grant
Filed: November 24, 2010
Date of Patent: January 6, 2015
Assignee: Fujitsu Limited
Inventors: B. Thomas Adler, David Marvit, Jawahar Jain
-
Patent number: 8902235
Abstract: A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames.
Type: Grant
Filed: April 7, 2011
Date of Patent: December 2, 2014
Assignee: Adobe Systems Incorporated
Inventor: Alexandru Chiculiță
-
Patent number: 8902233
Abstract: Techniques that give animators the direct control they are accustomed to with key frame animation, while providing for path-based motion. A key frame animation-based interface is used to achieve path-based motion with rotation animation variable value correction using additional animation variables for smoothing. The value of the additional animation variables for smoothing can be directly controlled using a tangent handle in a user interface.
Type: Grant
Filed: March 4, 2011
Date of Patent: December 2, 2014
Assignee: Pixar
Inventors: Chen Shen, Bena L. Currin, Timothy S. Milliron
-
Patent number: 8902232
Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
Type: Grant
Filed: February 2, 2009
Date of Patent: December 2, 2014
Assignee: University of Southern California
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
-
Method for providing visual effect messages and associated communication system and transmitting end
Patent number: 8897821
Abstract: A method for providing visual effect messages on a receiving end and associated transmitting end configuration is provided. At the transmitting end, visual effect positions and visual effects of messages are determined according to an input message. The visual effect positions and visual effect information are transmitted to the receiving end, and are displayed at the visual effect positions at the receiving end according to the visual effect information.
Type: Grant
Filed: April 19, 2012
Date of Patent: November 25, 2014
Assignee: MStar Semiconductor, Inc.
Inventors: Chih-Hsien Huang, Sheng-Chi Yu
-
Patent number: 8880044
Abstract: A mobile terminal is presented. The mobile terminal includes a display including a touchscreen, and a controller for performing an editing operation on information displayed on the touchscreen according to a state of an object in near-proximity to the displayed information.
Type: Grant
Filed: January 26, 2009
Date of Patent: November 4, 2014
Assignee: LG Electronics Inc.
Inventor: Jong Hwan Kim
-
Patent number: 8878880
Abstract: A method of driving an electrophoretic display device includes changing the gradation level of image data on the basis of correction data corresponding to the gradation level, converting image data with the changed gradation level to a dithering pattern, in which the first color and the second color are combined, corresponding to the changed gradation level for each predetermined region of image data, and driving the electrophoretic particles of the first color and the electrophoretic particles of the second color on the basis of image data converted to the dithering pattern for the plurality of pixels in the display section.
Type: Grant
Filed: April 7, 2011
Date of Patent: November 4, 2014
Assignee: Seiko Epson Corporation
Inventors: Tetsuaki Otsuki, Kota Muto
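A minimal sketch of the two steps described, under assumptions: the gradation correction is modeled as a simple lookup table, and the two-color conversion uses ordered (Bayer) dithering. The correction values and image are made up.

```python
# Gradation correction followed by two-color ordered dithering.
import numpy as np

BAYER_4x4 = (1.0 / 16.0) * np.array([[ 0,  8,  2, 10],
                                     [12,  4, 14,  6],
                                     [ 3, 11,  1,  9],
                                     [15,  7, 13,  5]])

# Per-level correction table (illustrative): nudge each gradation level up slightly.
lut = np.clip(np.arange(256) + 5, 0, 255).astype(np.float64)

def dither_two_color(gray):
    """gray: uint8 array (H, W). Returns a boolean array: True = first color (white)."""
    corrected = lut[gray] / 255.0
    h, w = corrected.shape
    threshold = np.tile(BAYER_4x4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return corrected > threshold

ramp = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (16, 1))
pattern = dither_two_color(ramp)
print(pattern.shape, pattern.mean())   # fraction of pixels driven to the first color
```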
-
Patent number: 8874309
Abstract: A method for acquiring information from a driving operation of a vehicle, in which first information is acquired with respect to at least one operating state of the vehicle and additional second information is ascertained with respect to this at least one operating state using statistical methods, the first and second information concerning this at least one operating state being stored. A method for the assignment and diagnosis of at least one operating state of a vehicle, a control unit, a computer program and a computer-program product are also provided.
Type: Grant
Filed: September 10, 2008
Date of Patent: October 28, 2014
Assignee: Robert Bosch GmbH
Inventors: Andreas Genssle, Michael Kolitsch, Tobias Pfister
-
Patent number: 8866823
Abstract: Automatically creating a series of intermediate states may include receiving a start state and an end state of a reactive system, identifying one or more components of the start state and the end state and determining one or more events associated with the one or more components. One or more intermediate states between the start state and the end state, and one or more transitions from and to the one or more intermediate states are created using the one or more components of the start state and the end state and the one or more events associated with the one or more components. The one or more intermediate states and the one or more transitions form one or more time-based paths from the start state to the end state occurring in response to applying the one or more events to the associated one or more components.
Type: Grant
Filed: October 13, 2010
Date of Patent: October 21, 2014
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Michael Desmond, Jacquelyn A. Martino, Paul M. Matchen, John T. Richards, Calvin B. Swart
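An illustrative sketch of the general idea, with hypothetical states and events: given start and end states (component to value) and an event per changed component, emit one intermediate state per applied event, forming a time-based path from start to end.

```python
# Build a path of intermediate states between a start and an end state.
def build_path(start_state, end_state, events):
    """events: maps each changed component to the event that changes it."""
    path = [("start", dict(start_state))]
    current = dict(start_state)
    for component, event in events.items():
        if start_state.get(component) != end_state.get(component):
            current = dict(current)
            current[component] = end_state[component]   # effect of applying the event
            path.append((event, current))               # one intermediate state
    return path

start = {"login_form": "visible", "dashboard": "hidden"}
end = {"login_form": "hidden", "dashboard": "visible"}
events = {"login_form": "submit_clicked", "dashboard": "auth_succeeded"}
for label, state in build_path(start, end, events):
    print(label, state)
```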
-
Patent number: 8836859
Abstract: Disclosed herein is an image processing apparatus including an interlace/progressive conversion section configured to carry out interpolation processing on image data of the current field by making use of the image data of the current field and image data of a field leading ahead of the current field by one field period in order to obtain image data of a progressive system with no delay time.
Type: Grant
Filed: July 19, 2012
Date of Patent: September 16, 2014
Assignee: Sony Corporation
Inventor: Kazuhide Fujita
-
Patent number: 8836707
Abstract: At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer.
Type: Grant
Filed: August 26, 2013
Date of Patent: September 16, 2014
Assignee: Apple Inc.
Inventors: Andrew Platzer, John Harper
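A minimal sketch of driving several animations from one timer: a single tick loop computes each animation's progress from the shared elapsed time and marks it complete when its duration is reached. The durations and frame interval are illustrative.

```python
# Complete multiple animations based on a single timer.
import time

class Animation:
    def __init__(self, name, duration):
        self.name, self.duration, self.progress = name, duration, 0.0

    def update(self, elapsed):
        self.progress = min(1.0, elapsed / self.duration)
        return self.progress >= 1.0          # True when the animation has completed

def run(animations, frame_interval=0.016):
    start = time.monotonic()
    while animations:
        elapsed = time.monotonic() - start   # one timer shared by all animations
        animations = [a for a in animations if not a.update(elapsed)]
        time.sleep(frame_interval)

run([Animation("slide-in", 0.2), Animation("fade", 0.35)])
print("all animations completed from a single timer")
```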
-
Patent number: 8810582
Abstract: A lighting module of a hair/fur pipeline may be used to produce lighting effects in a lighting phase for a shot and an optimization module may be used to: determine if a cache hair state file including hair parameters exists; and determine if the cache hair state file includes matching hair parameters to be used in the shot, and if so, the hair parameter values from the cache hair state file are used in the lighting phase.
Type: Grant
Filed: May 11, 2007
Date of Patent: August 19, 2014
Assignees: Sony Corporation; Sony Pictures Entertainment Inc
Inventors: Armin Walter Bruderlin, Francois Chardavoine, Clint Chua, Gustav Melich
-
Patent number: 8803886
Abstract: The present invention provides a facial image display apparatus that can display moving images concentrated on the face when images of people's faces are displayed. A facial image display apparatus is provided wherein a facial area detecting unit (21) detects facial areas in which faces are displayed from within a target image for displaying a plurality of faces; a dynamic extraction area creating unit (22) creates, based on the facial areas detected by the facial area detecting means, a dynamic extraction area of which at least one of position and surface area varies over time in the target image; and a moving image output unit (27) sequentially extracts images in the dynamic extraction area and outputs the extracted images as a moving image.
Type: Grant
Filed: July 31, 2006
Date of Patent: August 12, 2014
Assignees: Sony Corporation; Sony Computer Entertainment Inc.
Inventors: Munetaka Tsuda, Shuji Hiramatsu, Akira Suzuki
-
Patent number: 8803887
Abstract: A computer graphic system and methods for simulating hair is provided. In accordance with aspects of the disclosure a method for hybrid hair simulation using a computer graphics system is provided. The method includes generating a plurality of modeled hair strands using a processor of the computer graphics system. Each hair strand includes a plurality of particles and a plurality of spring members coupled in between the plurality of particles. The method also includes determining a first position and a first velocity for each particle in the plurality of modeled hair strands using the processor and coarsely modeling movement of the plurality of modeled hair strands with a continuum fluid solver. Self-collisions of the plurality of modeled hair strands are computed with a discrete collision model using the processor.
Type: Grant
Filed: January 15, 2010
Date of Patent: August 12, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Aleka McAdams, Andrew Selle, Kelly Ward, Eftychios Sifakis, Joseph Teran
-
Patent number: 8803889
Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions.
Type: Grant
Filed: May 29, 2009
Date of Patent: August 12, 2014
Assignee: Microsoft Corporation
Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffrey Margolis
-
Patent number: 8797331
Abstract: An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed.
Type: Grant
Filed: August 4, 2008
Date of Patent: August 5, 2014
Assignee: Sony Corporation
Inventors: Akane Sano, Masamichi Asukai, Taiji Ito, Yoichiro Sako
-
Patent number: 8786598
Abstract: Disclosed herein are methods, apparatuses, and systems for preparing and displaying images in frame-sequential stereoscopic 3D. Frame-sequential stereoscopic display includes an alternating sequence of left- and right-perspective images for display. Disclosed methods include identifying pixels that modulate due to the alternating sequence of left- and right-perspective images of the frame-sequential stereoscopic display. The disclosed methods also include processing the pixels to reduce one or more residual images caused by the alternating sequence of left- and right-perspective images of the frame-sequential stereoscopic display. The disclosed methods may be implemented by a processing unit, and the processing unit may be included in a system (such as a computer or video-game console).
Type: Grant
Filed: November 19, 2010
Date of Patent: July 22, 2014
Assignee: ATI Technologies, ULC
Inventor: Philip L. Swan
-
Patent number: 8786612
Abstract: An animation editing device includes animation data including time line data that defines frames on the basis of a time line showing temporal display order of the frames, and space line data that defines frames on the basis of a space line for showing a relative positional relationship between a display position of each of animation parts and a reference position shown by a tag by mapping the relative positional relationship onto a one-dimensional straight line, displays the time line and the space line, and the contents of the frames based on the time line and the space line, and accepts an editing command to perform an editing process according to the inputted editing command.
Type: Grant
Filed: March 31, 2009
Date of Patent: July 22, 2014
Assignee: Mitsubishi Electric Corporation
Inventors: Akira Toyooka, Hiroki Konaka
-
Patent number: 8786613
Abstract: A method and system for drawing, displaying, editing, animating, simulating and interacting with one or more virtual polygonal, spline, volumetric models, three-dimensional visual models or robotic models. The method and system provide flexible simulation, the ability to combine rigid and flexible simulation on plural portions of a model, rendering of haptic forces and force-feedback to a user.
Type: Grant
Filed: March 11, 2013
Date of Patent: July 22, 2014
Inventor: Alan Millman
-
Patent number: 8786608
Abstract: Certain embodiments relate to combining or blending animations that are attempting to simultaneously animate the same target. Certain embodiments simplify the blending of animations in the application development environment. For example, certain embodiments allow animations to be used or specified by a developer without the developer having to specifically address the potential for time-overlapping animations. As a few specific examples, an application may specify animations by simply calling a function to change a property of a target or by sending a command to change a public property of the target. Certain embodiments provide a blender that intercepts such function calls and commands. If two animations require a change to the same target at the same time, the blender determines an appropriate blended result and sends an appropriate function call or command to the target. The function calls and commands need not be aware of the blender.
Type: Grant
Filed: October 14, 2008
Date of Patent: July 22, 2014
Assignee: Adobe Systems Incorporated
Inventor: Chet S. Haase
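An illustrative sketch of one way such a blender could resolve overlap (not the patented scheme): when two animations target the same property at the same time, combine their current values, weighted by each animation's remaining duration, instead of letting the later call clobber the earlier one.

```python
# Blend two time-overlapping animations that drive the same target property.
class PropertyAnimation:
    def __init__(self, start, end, duration):
        self.start, self.end, self.duration = start, end, duration

    def value_at(self, t):
        u = min(1.0, max(0.0, t / self.duration))
        return self.start + (self.end - self.start) * u

    def weight_at(self, t):
        return max(0.0, self.duration - t)   # more remaining time -> more influence

def blended_value(animations, t):
    """Resolve simultaneous animations on one target property at time t."""
    weights = [a.weight_at(t) for a in animations]
    if sum(weights) == 0.0:
        return animations[-1].value_at(t)    # all finished: keep the last end value
    return sum(w * a.value_at(t) for w, a in zip(weights, animations)) / sum(weights)

move_far = PropertyAnimation(start=0.0, end=100.0, duration=1.0)
move_short = PropertyAnimation(start=0.0, end=40.0, duration=0.5)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(blended_value([move_far, move_short], t), 2))
```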
-
Patent number: 8773442
Abstract: An event, such as a vertical blank interrupt or signal, received from a display adapter in a system is identified. Activation of a timer-driven animation routine that updates a state of an animation and activation of a paint controller module that identifies updates to the state of the animation and composes a frame that includes the updates to the state of the animation are aligned, both being activated based on the identified event in the system.
Type: Grant
Filed: July 6, 2012
Date of Patent: July 8, 2014
Assignee: Microsoft Corporation
Inventors: Cenk Ergan, Benjamin C. Constable
-
Patent number: 8760469
Abstract: A method that incorporates teachings of the present disclosure may include, for example, the steps of transmitting media content to a group of set top boxes for presentation with an overlay superimposed onto the media content, receiving a first comment from a first set top box of the group of set top boxes where the first comment is presentable with the overlay and the media content by the group of set top boxes, determining a first advertisement based on the first comment, and transmitting the first advertisement to the first set top box for presentation with the overlay and the media content. Other embodiments are disclosed.
Type: Grant
Filed: November 6, 2009
Date of Patent: June 24, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Linda Roberts, E-Lee Chang, Ja-Young Sung, Natasha Barrett Schultz, Robert Arthur King
-
Patent number: 8749560
Abstract: The disclosed systems and methods make the motion of an object in an animation appear smooth by blending a number of subframes of visually adjusted images of the object for each frame of the animation. A request to animate an object along a motion path can be received by a graphics processing system of a device, where the motion path traverses at least a portion of a user interface presented on a display of the device. For each frame of the animation, the graphics processing system blends N subframes of visually adjusted images of the object to create a final blurred image which is rendered on the display. The graphics processing system can determine whether there is more processing time to perform additional blending of subframes prior to rendering a final frame for display, and then blending more subframes of images prior to rendering the final frame for display.
Type: Grant
Filed: May 18, 2012
Date of Patent: June 10, 2014
Assignee: Apple Inc.
Inventor: Bas Ording
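A minimal sketch of subframe blending for motion blur: one output frame is the average of N subframe renders of a small square drawn at slightly advanced positions along its motion path. The image size, object, and N are illustrative.

```python
# Blend N subframes into a single blurred frame for an object moving along a path.
import numpy as np

def render_subframe(pos_x, size=64, square=8):
    """Render a white square at horizontal position pos_x on a black background."""
    img = np.zeros((size, size), dtype=np.float64)
    x = int(round(pos_x))
    img[28:28 + square, x:x + square] = 1.0
    return img

def blurred_frame(frame_start_x, frame_end_x, n_subframes=8):
    acc = np.zeros((64, 64), dtype=np.float64)
    for i in range(n_subframes):
        t = i / float(n_subframes)             # subframe time within the frame
        acc += render_subframe(frame_start_x + t * (frame_end_x - frame_start_x))
    return acc / n_subframes                   # blended, blurred result

frame = blurred_frame(frame_start_x=10, frame_end_x=30)
print(frame.shape, frame.max(), np.count_nonzero(frame))   # square smeared along the path
```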
-
Patent number: 8744214
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Grant
Filed: May 21, 2013
Date of Patent: June 3, 2014
Assignees: Microsoft Corporation; University of Washington
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
-
Patent number: 8744627
Abstract: A system of distributed control of an interactive animatronic show includes a plurality of animatronic actors, at least one of the actors including a processor and one or more motors controlled by the processor. The system also includes a network interconnecting each of the actors, and a plurality of sensors providing messages to the network, where the messages are indicative of processed information. Each processor executes software that schedules and/or coordinates an action of the actor corresponding to the processor in accordance with the sensor messages representative of attributes of an audience viewing the show and the readiness of the corresponding actor. Actions of the corresponding actor can include animation movements of the actor, responding to another actor and/or responding to a member of the audience. The actions can result in movement of at least a component of the actor caused by control of the motor.
Type: Grant
Filed: September 22, 2011
Date of Patent: June 3, 2014
Assignee: Disney Enterprises, Inc.
Inventor: Alexis Paul Wieland
-
Patent number: 8743125
Abstract: Natural inter-viseme animation of a 3D head model driven by speech recognition is calculated by applying limitations to the velocity and/or acceleration of a normalized parameter vector, each element of which may be mapped to animation node outputs of a 3D model based on mesh blending and weighted by a mix of key frames.
Type: Grant
Filed: March 6, 2009
Date of Patent: June 3, 2014
Assignee: Sony Computer Entertainment Inc.
Inventor: Masanori Omote
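An illustrative sketch of this kind of smoothing: the normalized viseme-weight vector is moved toward its target each frame with per-frame velocity and acceleration clamps, so weights ramp rather than jump before being applied to blend-shape outputs. The limits and target sequence are made up.

```python
# Limit velocity and acceleration of a normalized parameter vector over time.
import numpy as np

def limit_motion(targets, max_vel=0.15, max_acc=0.05):
    """targets: (frames, params) array of desired normalized weights in [0, 1]."""
    current = targets[0].copy()
    velocity = np.zeros_like(current)
    smoothed = [current.copy()]
    for target in targets[1:]:
        desired_vel = target - current
        # Clamp acceleration (change of velocity), then clamp velocity itself.
        dv = np.clip(desired_vel - velocity, -max_acc, max_acc)
        velocity = np.clip(velocity + dv, -max_vel, max_vel)
        current = np.clip(current + velocity, 0.0, 1.0)
        smoothed.append(current.copy())
    return np.array(smoothed)

targets = np.zeros((20, 3))
targets[5:, 0] = 1.0                      # viseme 0 requested at full weight from frame 5
print(limit_motion(targets)[:10, 0])      # weight ramps up gradually instead of jumping
```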
-
Patent number: 8730245
Abstract: In a method of defining an animation of a virtual object, during which values for attributes of the virtual object are updated at each of a series of time points, a user specifies a structure representing the update that includes a plurality of items and one or more connections between respective items. Each item represents a respective operation. Each connection represents that data output by the operation represented by one item is input to the operation represented by the connected item. The user specifies that the structure comprises one or more items in a predetermined category associated with a predetermined process that may be executed at most a predetermined number of times at each time point. An item belongs to the predetermined category if performing the respective operation represented by that item requires execution of the predetermined process. One or more rules are applied.
Type: Grant
Filed: August 20, 2009
Date of Patent: May 20, 2014
Assignee: NaturalMotion Ltd.
Inventors: Thomas Lowe, Danny Chapman, Timothy Daoust, James Brewster
-
Patent number: 8726168
Abstract: A system and method hides latency in the display of a subsequent user interface by animating the exit of the current user interface and animating the entrance of the subsequent user interface, causing continuity in the display of the two user interfaces. During either or both animations, information used to produce the user interface, animation of the entrance of the subsequent user interface, or both may be retrieved or processed or other actions may be performed.
Type: Grant
Filed: December 5, 2005
Date of Patent: May 13, 2014
Assignee: Adobe Systems Incorporated
Inventor: Andrew Borovsky
-
Patent number: 8711151
Abstract: A hair pipeline utilizes a surface definition module to define a surface and a control hair, and a hair motion compositor module combines different control hair curve shapes associated with the control hair and the surface. In particular, the hair motion compositor module generates a static node defining a static control hair curve shape; generates an animation node defining an animation control hair curve shape; and combines the static control hair curve shape of the static node with the animation control hair curve shape of the animation node to produce a resultant control hair curve shape for the control hair.
Type: Grant
Filed: May 11, 2007
Date of Patent: April 29, 2014
Assignees: Sony Corporation; Sony Pictures Entertainment Inc.
Inventors: Armin Walter Bruderlin, Francois Chardavoine, Clint Chua, Gustav Melich
-
Patent number: 8711178
Abstract: A method for generating an animated morph between a first image and a second image is provided. The method may include: (i) reading a first set of cephalometric landmark points associated with the first image; (ii) reading a second set of cephalometric landmark points associated with the second image; (iii) defining a first set of line segments by defining a line segment between each of the first set of cephalometric landmarks; (iv) defining a second set of line segments by defining a line segment between each of the second set of cephalometric landmarks such that each line segment of the second set of line segments corresponds to a corresponding line segment of the first set of line segments; and (v) generating an animation progressively warping the first image to the second image based at least on the first set of line segments and the second set of line segments.
Type: Grant
Filed: May 19, 2011
Date of Patent: April 29, 2014
Assignee: Dolphin Imaging Systems, LLC
Inventor: Emilio David Cortés Provencio
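A simplified sketch of steps (iii) to (v) under assumptions: segments are defined between every pair of landmarks, and the per-frame geometry of the morph is obtained by interpolating each corresponding segment pair. The per-pixel field warp driven by those segments (e.g. a Beier-Neely style warp) is not shown; the landmark coordinates are illustrative.

```python
# Build corresponding line segments from two landmark sets and interpolate them per frame.
import numpy as np
from itertools import combinations

def segments_from_landmarks(points):
    """Define a line segment between every pair of landmark points."""
    return [(np.asarray(p, dtype=float), np.asarray(q, dtype=float))
            for p, q in combinations(points, 2)]

def interpolate_segments(segs_a, segs_b, t):
    """Corresponding segments for the in-between frame at parameter t in [0, 1]."""
    return [((1 - t) * a0 + t * b0, (1 - t) * a1 + t * b1)
            for (a0, a1), (b0, b1) in zip(segs_a, segs_b)]

landmarks_first = [(10, 20), (40, 25), (30, 60)]     # illustrative coordinates
landmarks_second = [(12, 22), (45, 20), (28, 65)]
segs_a = segments_from_landmarks(landmarks_first)
segs_b = segments_from_landmarks(landmarks_second)
for frame, t in enumerate(np.linspace(0.0, 1.0, 5)):
    segs = interpolate_segments(segs_a, segs_b, t)
    print(frame, [tuple(np.round(s[0], 1)) for s in segs])
```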
-
Patent number: 8704828
Abstract: A model is associated with a deep pose. When the model is changed from an attractor pose to a current pose, the current pose and the attractor pose are compared with the deep pose. If any portion of the current pose is more similar to the deep pose than the attractor pose, then the attractor pose is updated. A portion of the attractor pose may be set to the corresponding portion of the current pose. The attractor pose may be modified by a function. Pose attributes of each pose degree of freedom for the attractor pose, the current pose, and the deep pose may be evaluated to potentially modify all or a portion of the attractor pose. The attractor pose and pose constraints are used to determine a pose of the model, for example by an optimization process based on the attractor pose while satisfying pose constraints.
Type: Grant
Filed: October 23, 2008
Date of Patent: April 22, 2014
Assignee: Pixar
Inventors: Andrew Witkin, Michael Kass, Hayley Iben
-
Patent number: 8707151
Abstract: A user interface method and apparatus for a Rich Media service in a terminal. A decoder decodes a received stream to check a header of the received stream. A renderer adaptively composes a scene using scene composition elements of the received stream, according to adaptation information in the header checked by the decoder, and a display displays the adaptively composed scene.
Type: Grant
Filed: April 21, 2009
Date of Patent: April 22, 2014
Assignee: Samsung Electronics Co., Ltd
Inventors: Seo-Young Hwang, Jae-Yeon Song, Kook-Heui Lee
-
Publication number: 20140098108
Abstract: Dynamic icons are described that can employ animations, such as visual effects, audio, and other content that change with time. If multiple animations are scheduled to occur simultaneously, the timing of the animations can be controlled so that timing overlap of the animations is reduced. For example, the starting times of the animations can be staggered so that multiple animations are not initiated too close in time. It has been found that too much motion in the user interface can be distracting and cause confusion amongst users.
Type: Application
Filed: October 21, 2013
Publication date: April 10, 2014
Applicant: Microsoft Corporation
Inventors: Jeffrey Cheng-Yao Fong, Jeffery G. Arnold, Christopher A. Glein
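An illustrative sketch of the staggering idea: requested start times are pushed back so that no two icon animations begin within a minimum gap of each other, reducing simultaneous motion in the UI. The gap value is arbitrary.

```python
# Stagger animation start times so they are not initiated too close together.
def stagger_start_times(requested_starts, min_gap=0.25):
    """requested_starts: seconds at which each animation asked to begin."""
    scheduled = []
    last_start = None
    for req in sorted(requested_starts):
        start = req if last_start is None else max(req, last_start + min_gap)
        scheduled.append(start)
        last_start = start
    return scheduled

# Three tiles all want to animate at t=1.0 and one shortly after:
print(stagger_start_times([1.0, 1.0, 1.0, 1.1]))   # [1.0, 1.25, 1.5, 1.75]
```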
-
Patent number: 8692831
Abstract: Provided is a parallel operation processing apparatus and method. The parallel operation processing apparatus and method may generate an interpolated matrix with respect to a character included in each of a current frame and a next frame using a matrix corresponding to each of the current frame and the next frame generated, based on joint information corresponding to a plurality of joints included in the character. Also, the parallel operation processing apparatus and method may display an interpolated frame using the interpolated matrix.
Type: Grant
Filed: June 28, 2010
Date of Patent: April 8, 2014
Assignees: Samsung Electronics Co., Ltd.; Korea University of Technology and Education Industry-University Cooperation Foundation
Inventors: Hyung Min Yoon, Oh Young Kwon, Byung In Yoo, Chang Mug Lee, Hyo Seok Seo
-
Patent number: 8687006
Abstract: A display device includes a display panel having pixels and divided into first and second display regions; first and second image interpolation chips which receive an original image signal and output interpolated ¼, ½, and/or ¾ frames inserted between a previous (n−1)-th frame and a current n-th frame of the original image signal; a first timing unit which receives the interpolated ¼, ½, and/or ¾ frames from the first image interpolation chip and outputs a first quadruple-speed image signal to pixels in the first display region; and a second timing unit which receives the interpolated ¼, ½, and/or ¾ frames from the second image interpolation chip and outputs a second quadruple-speed image signal to pixels in the second display region. The first timing unit transmits data to the second timing unit, and the second timing unit transmits data to the first timing unit.
Type: Grant
Filed: June 25, 2009
Date of Patent: April 1, 2014
Assignee: Samsung Display Co., Ltd.
Inventors: Dong-Won Park, Sang-Soo Kim
-
Patent number: 8683429
Abstract: Methods for runtime control of hierarchical objects are provided. Certain embodiments provide kinematics procedures in a media content runtime environment. Making these procedures available in the runtime environment allows the variables of the kinematics procedures to be specified at runtime, for example by the end user or by a runtime-executed script. One exemplary method comprises receiving a hierarchical object for a piece of media in a media content authoring environment and providing the piece of media to one or more runtime environments. The piece of media provided to the runtime environments comprises both object information about the hierarchical object and kinematics procedural information for performing kinematics on the hierarchical object, such as procedural classes for performing inverse kinematics procedures based on runtime-provided end-effector and target point variables.
Type: Grant
Filed: August 25, 2008
Date of Patent: March 25, 2014
Assignee: Adobe Systems Incorporated
Inventor: Eric J. Mueller
-
Patent number: RE45422
Abstract: Annotation techniques are provided. In one aspect, a method for processing a computer-based material is provided. The method comprises the following steps. The computer-based material is presented. One or more portions of the computer-based material are determined to be of interest to a user. The one or more portions are annotated to permit return to the one or more portions at a later time. In another aspect, a user interface is provided. The user interface comprises a computer-based material; a viewing focal area encompassing a portion of the computer-based material; and one or more indicia associated with and annotating the portion of the computer-based material.
Type: Grant
Filed: December 27, 2012
Date of Patent: March 17, 2015
Assignee: Loughton Technology, L.L.C.
Inventor: Christopher Vance Beckman