Temporal Interpolation Or Processing Patents (Class 345/475)
  • Patent number: 10275925
    Abstract: A blend shape method and system that modifies the U-V values associated with vertices in blend shapes constructed in a 3-D blend shape combination system. The blend shape method determines the U-V coordinates associated with each vertex in a base shape and the U-V coordinates associated with corresponding vertices in one or more driving shapes. The method calculates U-V delta values that are associated with vertices in the driving shape. The method multiplies the U-V delta values by a weight value associated with the driving shape to determine a transitional U-V delta value for each vertex. The transitional U-V delta value for each vertex is added to the U-V coordinates for the corresponding vertex in the base shape to determine the modified U-V coordinates for the resulting blend shape. Multiple driving shapes may be used with each shape contributing to the modified U-V values according to its relative weight.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: April 30, 2019
    Assignee: Sony Interactive Entertainment America, LLC
    Inventor: Homoud B. Alkouh
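The weighted U-V delta arithmetic this abstract describes (delta times weight, summed onto the base shape's coordinates) can be sketched as follows. This is a minimal illustration, not code from the patent; the function name and data layout are assumptions.

```python
def blend_uv(base_uvs, driving_shapes):
    """Blend per-vertex U-V coordinates.

    base_uvs: list of (u, v) tuples for the base shape.
    driving_shapes: list of (weight, uvs) pairs, where uvs is a list of
    (u, v) tuples with the same vertex ordering as base_uvs.
    """
    result = []
    for i, (bu, bv) in enumerate(base_uvs):
        du, dv = 0.0, 0.0
        for weight, uvs in driving_shapes:
            # U-V delta of the driving shape relative to the base shape,
            # scaled by that driving shape's weight, gives the
            # transitional U-V delta for this vertex.
            du += weight * (uvs[i][0] - bu)
            dv += weight * (uvs[i][1] - bv)
        # Add the transitional delta to the base coordinates.
        result.append((bu + du, bv + dv))
    return result
```

With multiple driving shapes, each contributes to the modified U-V values in proportion to its weight, as the abstract states.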
  • Patent number: 10248212
    Abstract: A system is provided that encodes one or more dynamic haptic effects. The system defines a dynamic haptic effect as including a plurality of key frames, where each key frame includes an interpolant value and a corresponding haptic effect. An interpolant value is a value that specifies where an interpolation occurs. The system generates a haptic effect file, and stores the dynamic haptic effect within the haptic effect file.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: April 2, 2019
    Assignee: IMMERSION CORPORATION
    Inventors: Henry Da Costa, Feng Tian An, Christopher J. Ullrich
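The key-frame scheme described here pairs each interpolant value with a haptic effect, which implies straightforward interpolation between key frames at playback. A minimal sketch, assuming linear interpolation over scalar effect magnitudes (names are illustrative, not Immersion's API):

```python
def interpolate_effect(key_frames, t):
    """key_frames: list of (interpolant, magnitude) pairs, sorted by interpolant.
    Returns the effect magnitude at interpolant value t, clamping outside
    the key-frame range."""
    if t <= key_frames[0][0]:
        return key_frames[0][1]
    if t >= key_frames[-1][0]:
        return key_frames[-1][1]
    for (t0, v0), (t1, v1) in zip(key_frames, key_frames[1:]):
        if t0 <= t <= t1:
            # The interpolant value specifies where between the two
            # key frames the interpolation occurs.
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
```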
  • Patent number: 10169676
    Abstract: Described herein are methods and systems for closed-form 3D model generation of non-rigid complex objects from scans with large holes. A computing device receives (i) a partial scan of a non-rigid complex object captured by a sensor coupled to the computing device; (ii) a partial 3D model corresponding to the object, and (iii) a whole 3D model corresponding to the object, wherein the partial 3D scan and the partial 3D model each includes one or more large holes. The device performs a rough match on the partial 3D model and changes the whole 3D model using the rough match to generate a deformed 3D model. The device refines the deformed 3D model using a deformation graph, reshapes the refined deformed 3D model to have greater detail, and adjusts the whole 3D model according to the reshaped 3D model to generate a closed-form 3D model that closes holes in the scan.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: January 1, 2019
    Assignee: VanGogh Imaging, Inc.
    Inventors: Xin Hou, Yasmin Jahir, Jun Yin
  • Patent number: 10062410
    Abstract: Techniques and devices for creating an AutoLoop output video include performing pregate operations. The AutoLoop output video is created from a set of frames. Prior to creating the AutoLoop output video, the set of frames are automatically analyzed to identify one or more image features that are indicative of whether the image content in the set of frames is compatible with creating a video loop. Pregate operations assign one or more pregate scores for the set of frames based on the one or more identified image features, where the pregate scores indicate a compatibility to create the video loop based on the identified image features. Pregate operations automatically determine to create the video loop based on the pregate scores and generate an output video loop based on the loop parameters and at least a portion of the set of frames.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: August 28, 2018
    Assignee: Apple Inc.
    Inventors: Arwen V. Bradley, Samuel G. Noble, Rudolph van der Merwe, Jason Klivington, Douglas P. Mitchell, Joseph M. Triscari
  • Patent number: 10055888
    Abstract: A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprising a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message, and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: August 21, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jonathan Christen, John Charles Howard, Marcus Tanner, Ben Sugden, Robert C. Memmott, Kenneth Charles Ouellette, Alex Kipman, Todd Alan Omotani, James T. Reichert, Jr.
  • Patent number: 10049473
    Abstract: Embodiments of the disclosure are systems and methods for providing third party visualizations. In one embodiment, a method is provided that includes receiving, via an API, computer-executable instructions configured to render a visualization using events and a variable field; rendering the visualization using the events; causing displaying of a graphical user interface (GUI) comprising a visualization panel and a variable element; receiving, via the variable element of the GUI, an indication of a first change in the value of the variable field to a first value; re-rendering the visualization using the events and the first value; and causing display of the GUI with an updated visualization panel and the variable element.
    Type: Grant
    Filed: April 27, 2015
    Date of Patent: August 14, 2018
    Assignee: SPLUNK INC.
    Inventors: Nicholas Filippi, Simon Fishel, Siegfried Puchbauer-Schnabel, Mathew Elting, Carl Yestrau
  • Patent number: 9997201
    Abstract: The system provides a method and apparatus for writing a unique copy of data associated with each of a plurality of individual users, without the need for storing duplicate copies of the entire data file. The system provides for creating an unusable copy of a portion of the data that is to be shared by all users of the complete data. The system will store and optionally encrypt and/or watermark a unique copy of the remainder portion of the data for each unique user. When accessed from storage, the system will combine the shared portion with the unique remainder to reconstitute the entire file for access by the user. Deleting the unique remainder associated with a particular user makes all of the data useless to that user. In one embodiment, the system first compresses the entire data file using index frames and delta frames.
    Type: Grant
    Filed: June 19, 2015
    Date of Patent: June 12, 2018
    Assignee: PHILO, INC.
    Inventors: Christopher Thorpe, Thomer Gil, Christopher Small
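The storage scheme in this abstract — one shared portion plus a per-user unique remainder that reconstitutes the whole file — can be sketched as below. The function names and the byte-split layout are hypothetical, and the encryption/watermarking steps the abstract mentions are omitted.

```python
def split_for_users(data, unique_len, users):
    """Store one shared copy of the leading portion and a per-user copy
    of the remainder (a real system would encrypt and/or watermark each
    unique remainder)."""
    shared = data[:-unique_len]
    uniques = {user: data[-unique_len:] for user in users}
    return shared, uniques

def reconstitute(shared, uniques, user):
    # Combine the shared portion with the user's unique remainder.
    return shared + uniques[user]
```

Deleting `uniques[user]` leaves the shared portion useless to that user, matching the deletion behavior described.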
  • Patent number: 9977816
    Abstract: A system determines ranking scores for objects based on “virtual” links defined for the objects. A link-based ranking score may then be calculated for the objects based on the virtual links. In one implementation, the virtual links are determined based on a metric of content-based similarity between the objects.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: May 22, 2018
    Assignee: Google LLC
    Inventors: Yushi Jing, Henry Allan Rowley, Shumeet Baluja
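The abstract's idea — derive virtual links from content similarity, then compute a link-based score — can be sketched minimally. Counting inbound virtual links stands in for the patent's link-based ranking (which in practice would be a PageRank-style computation); the similarity metric and names here are toy assumptions.

```python
def virtual_links(objects, similarity, threshold):
    """Create a directed virtual link i -> j wherever the content-based
    similarity metric meets the threshold."""
    return [(i, j)
            for i in range(len(objects))
            for j in range(len(objects))
            if i != j and similarity(objects[i], objects[j]) >= threshold]

def link_scores(n_objects, links):
    # Crude link-based score: count of inbound virtual links per object.
    scores = [0] * n_objects
    for _, j in links:
        scores[j] += 1
    return scores

def word_overlap(a, b):
    # Toy content-similarity metric: number of shared words.
    return len(set(a.split()) & set(b.split()))
```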
  • Patent number: 9830741
    Abstract: Techniques are disclosed for processing graphics objects in a stage of a graphics processing pipeline. The techniques include receiving a graphics primitive associated with the graphics object, and determining a plurality of attributes corresponding to one or more vertices associated with the graphics primitive. The techniques further include determining values for one or more state parameters associated with a downstream stage of the graphics processing pipeline based on a visual effect associated with the graphics primitive. The techniques further include transmitting the state parameter values to the downstream stage of the graphics processing pipeline. One advantage of the disclosed techniques is that visual effects are flexibly and efficiently performed.
    Type: Grant
    Filed: November 7, 2012
    Date of Patent: November 28, 2017
    Assignee: NVIDIA Corporation
    Inventors: Emmett M. Kilgariff, Morgan McGuire, Yury Y. Uralsky, Ziyad S. Hakura
  • Patent number: 9797802
    Abstract: A method for developing a virtual testing model of a subject for use in simulated aerodynamic testing comprises providing a computer generated generic 3D mesh of the subject, identifying a dimension of the subject and at least one reference point on the subject, imaging the subject to develop point cloud data representing at least the subject's outer surface and adapting the generic 3D mesh to the subject. The generic 3D mesh is adapted by modifying it to have a corresponding dimension and at least one corresponding reference point, and applying at least a portion of the point cloud data from the imaged subject's outer surface at selected locations to scale the generic 3D mesh to correspond to the subject, thereby developing the virtual testing model specific to the subject.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: October 24, 2017
    Inventor: Jay White
  • Patent number: 9779484
    Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: October 3, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Nathan A. Carr
  • Patent number: 9704288
    Abstract: Techniques are disclosed for providing a learning-based clothing model that enables the simultaneous animation of multiple detailed garments in real-time. A simple conditional model learns and preserves key dynamic properties of cloth motions and folding details. Such a conditional model may be generated for each garment worn by a given character. Once generated, the conditional model may be used to determine complex body/cloth interactions in order to render the character and garment from frame-to-frame. The clothing model may be used for a variety of garments worn by male and female human characters (as well as non-human characters) while performing a varied set of motions typically used in video games (e.g., walking, running, jumping, turning, etc.).
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: July 11, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Edilson de Aguiar, Leonid Sigal, Adrien Treuille, Jessica K. Hodgins
  • Patent number: 9681173
    Abstract: There is disclosed a method of, and a server for, processing a user request for a web resource, the user request received at the server from an electronic device via a communication network. The method can be executed at the server.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: June 13, 2017
    Assignee: YANDEX EUROPE AG
    Inventors: Nina Viktorovna Sapunova, Evgeny Valeryevich Eroshin, Ekaterina Vladimirovna Rubtcova, Maksim Pavlovich Voznin, Grigory Aleksandrovich Matveev, Nikita Alekseevich Smetanin
  • Patent number: 9672646
    Abstract: Systems, methods, and computer-readable storage media for performing a visual rewind operation in an image editing application may include capturing, compressing, and storing image data and interaction logs and correlations between them. The stored information may be used in a visual rewind operation, during which a sequence of frames (e.g., an animation) depicting changes in an image during image editing operations is displayed in reverse order. In response to navigating to a point in the animation, data representing the image state at that point may be reconstructed from the stored data and stored as a modified image or a variation thereof. The methods may be employed in an image editing application to provide a partial undo operation, image editing variation previewing, and/or visually-driven editing script creation. The methods may be implemented as stand-alone applications or as program instructions implementing components of a graphics application, executable by a CPU and/or GPU.
    Type: Grant
    Filed: August 28, 2009
    Date of Patent: June 6, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Jerry G. Harris, Scott L. Byer, Stephan D. Schaem
  • Patent number: 9646227
    Abstract: This disclosure describes techniques for training models from video data and applying the learned models to identify desirable video data. Video data may be labeled to indicate a semantic category and/or a score indicative of desirability. The video data may be processed to extract low and high level features. A classifier and a scoring model may be trained based on the extracted features. The classifier may estimate a probability that the video data belongs to at least one of the categories in a set of semantic categories. The scoring model may determine a desirability score for the video data. New video data may be processed to extract low and high level features, and feature values may be determined based on the extracted features. The learned classifier and scoring model may be applied to the feature values to determine a desirability score associated with the new video data.
    Type: Grant
    Filed: July 29, 2014
    Date of Patent: May 9, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Nitin Suri, Xian-Sheng Hua, Tzong-Jhy Wang, William D. Sproule, Andrew S. Ivory, Jin Li
  • Patent number: 9600160
    Abstract: There is provided an image processing device including a moving image generation unit configured to generate a parallelly animated moving image in which a plurality of object images are each parallelly animated, the plurality of the object images having been selected from a series of object images that have been generated by extracting a moving object from frame images of a source moving image, and an image output unit configured to output the parallelly animated moving image.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: March 21, 2017
    Assignee: Sony Corporation
    Inventors: Tatsuhiro Iida, Shogo Kimura
  • Patent number: 9478066
    Abstract: A system, method, and computer program product are provided for adjusting vertex positions. One or more viewport dimensions are received and a snap spacing is determined based on the one or more viewport dimensions. The vertex positions are adjusted to a grid according to the snap spacing. The precision of the vertex adjustment may increase as at least one dimension of the viewport decreases. The precision of the vertex adjustment may decrease as at least one dimension of the viewport increases.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 25, 2016
    Assignee: NVIDIA Corporation
    Inventors: Eric Brian Lum, Henry Packard Moreton, Kyle Perry Roden, Walter Robert Steiner, Ziyad Sami Hakura
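The snap-to-grid adjustment this abstract describes, with precision tied inversely to viewport size, can be sketched as follows. The spacing formula (a fixed subpixel budget scaled by the larger viewport dimension) is an assumption for illustration, not NVIDIA's actual computation.

```python
def snap_spacing(viewport_w, viewport_h, subpixel_bits=8):
    """Snap spacing grows with the viewport, so vertex-adjustment
    precision decreases as a viewport dimension increases (and
    increases as it decreases), as the abstract describes."""
    return max(viewport_w, viewport_h) / float(1 << (subpixel_bits + 8))

def snap_vertex(x, y, spacing):
    # Adjust the vertex position to the nearest grid point.
    return (round(x / spacing) * spacing, round(y / spacing) * spacing)
```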
  • Patent number: 9373187
    Abstract: Method and apparatus for producing a cinemagraph, wherein, based on received user input, an image from a sequence of images is selected as a baseframe image. The baseframe image is segmented and at least one segment is selected based on user input. A mask is created based on the selected segments, and at least one image most similar to the baseframe is selected from the sequence of images using the mask. The selected images are aligned with the baseframe image, and a first cinemagraph is created from the selected images and the baseframe image using the mask.
    Type: Grant
    Filed: May 25, 2012
    Date of Patent: June 21, 2016
    Assignee: Nokia Corporation
    Inventors: Kemal Ugur, Ali Karaoglu, Miska Hannuksela, Jani Lainema
  • Patent number: 9355438
    Abstract: The geometric distortions of videos and images are corrected wherein a plurality of geometrically distorted frames are mapped with a plurality of original frames of the video content. Further, one or more features associated with the mapped frames are identified as insensitive to the one or more geometric distortions. One or more features of the mapped frames are further mapped with original frames based on a predefined similarity threshold and thereafter one or more geometric distortion parameters are determined. Furthermore, a frame level average distortion and a video level average distortion of each of the one or more geometric distortion parameters are determined, based on which the one or more geometric distortions of the video content are corrected.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: May 31, 2016
    Assignee: INFOSYS LIMITED
    Inventors: Sachin Mehta, Rajarathnam Nallusamy
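The averaging step the abstract describes — a video-level average of each distortion parameter across frames — is simple to sketch (the function name and dict layout are assumptions, not from the patent):

```python
def video_level_average(frame_params):
    """frame_params: one dict of {distortion parameter: value} per frame.
    Returns the average of each parameter across all frames."""
    n = len(frame_params)
    totals = {}
    for frame in frame_params:
        for name, value in frame.items():
            totals[name] = totals.get(name, 0.0) + value
    return {name: total / n for name, total in totals.items()}
```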
  • Patent number: 9324376
    Abstract: Traditionally, time-lapse videos are constructed from images captured at given time intervals called “temporal points of interests” or “temporal POIs.” Disclosed herein are intelligent systems and methods of capturing and selecting better images around temporal points of interest for the construction of improved time-lapse videos. According to some embodiments, a small “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing a similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previous selected image allows the intelligent systems and methods described herein to improve the quality of the resultant time-lapse video by discarding “outlier” or other undesirable images captured in the burst sequence around a particular temporal point of interest.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: April 26, 2016
    Assignee: Apple Inc.
    Inventor: Frank Doepke
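The selection loop this abstract describes — from each burst, keep the frame most similar to the previously selected frame — can be sketched directly. The similarity function is left abstract; the names here are illustrative.

```python
def pick_timelapse_frames(bursts, similarity):
    """bursts: list of frame bursts, one burst per temporal point of interest.
    similarity(a, b) -> float, higher meaning more alike.
    Picks the centre frame of the first burst, then from each later burst
    the frame most similar to the previously selected frame, which tends
    to discard outlier frames within a burst."""
    selected = [bursts[0][len(bursts[0]) // 2]]
    for burst in bursts[1:]:
        selected.append(max(burst, key=lambda f: similarity(f, selected[-1])))
    return selected
```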
  • Patent number: 9305385
    Abstract: An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: April 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christopher Michael Maloney, Mirza Pasalic, Runzhen Huang
  • Patent number: 9292967
    Abstract: A novel “contour person” (CP) model of the human body is proposed that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The CP model is learned from a 3D model of the human body that captures natural shape and pose variations. The CP model factors deformations of the body into three components: shape variation, viewpoint change and pose variation. The CP model can be “dressed” with a low-dimensional clothing model. The clothing is represented as a deformation from the underlying CP representation. This deformation is learned from training examples using principal component analysis to produce so-called eigen-clothing. The coefficients of the eigen-clothing can be used to recognize different categories of clothing on dressed people. The parameters of the estimated 2D body can be used to discriminatively predict 3D body shape using a learned mapping approach.
    Type: Grant
    Filed: June 8, 2011
    Date of Patent: March 22, 2016
    Assignee: Brown University
    Inventors: Michael J. Black, Oren Freifeld, Alexander W. Weiss, Matthew M. Loper, Peng Guan
  • Patent number: 9083814
    Abstract: Displaying a lock mode screen of a mobile terminal is disclosed. One embodiment of the present disclosure pertains to a mobile terminal comprising a display module, an input device configured to detect an input for triggering a bouncing animation of a lock mode screen, and a controller configured to cause the display module to display the bouncing animation in response to the input for triggering the bouncing animation, where the bouncing animation comprises the lock mode screen bouncing for a set number of times with respect to an edge of the display module prior to stabilization.
    Type: Grant
    Filed: October 13, 2010
    Date of Patent: July 14, 2015
    Assignee: LG ELECTRONICS INC.
    Inventors: Jungjoon Lee, Taehun Kim, Taekon Lee, Jeongyoon Rhee, Younhwa Choi, Minhun Kang, Hyunjoo Jeon
  • Patent number: 9041717
    Abstract: Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
    Type: Grant
    Filed: September 12, 2011
    Date of Patent: May 26, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Kaschalk, Eric A. Daniels, Brian S. Whited, Kyle D. Odermatt, Patrick T. Osborne
  • Patent number: 9041718
    Abstract: Techniques are disclosed for generating a bilinear spatiotemporal basis model. A method includes the steps of predefining a trajectory basis for the bilinear spatiotemporal basis model, receiving three-dimensional spatiotemporal data for a training sequence, estimating a shape basis for the bilinear spatiotemporal basis model using the three-dimensional spatiotemporal data, and computing coefficients for the bilinear spatiotemporal basis model using the trajectory basis and the shape basis.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: May 26, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Iain Matthews, Ijaz Akhter, Tomas Simon, Sohaib Khan, Yaser Sheikh
  • Patent number: 9030479
    Abstract: Disclosed are a system and a method for motion editing multiple synchronized characters. The motion editing system comprises: a Laplacian motion editor which edits a spatial route of inputted character data according to user conditions, and processes the distortion of the interaction time; and a discrete motion editor which applies a discrete transformation while the character data is processed.
    Type: Grant
    Filed: June 19, 2009
    Date of Patent: May 12, 2015
    Assignee: SNU R&DB Foundation
    Inventors: Jehee Lee, Manmyung Kim
  • Patent number: 9019278
    Abstract: Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: April 28, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Jessica Kate Hodgins, Katsu Yamane, Yuka Ariki
  • Patent number: 9019279
    Abstract: System and method for rendering a sequence of orthographic approximation images corresponding to camera poses to generate an animation moving between an initial view and a final view of a target area are provided. An initial image corresponding to an initial camera pose directed at the target area is identified. A final image and an associated depthmap corresponding to a final camera pose directed at the target area is further identified. A plurality of intermediate images corresponding to a plurality of camera poses directed at the target area is produced by performing interpolation on the initial image, the final image, and the associated depthmap. Each intermediate image is associated with a point along a navigational path between the initial camera pose and the final camera pose. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Jeffrey Thomas Prouty, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Robert Simpson
  • Patent number: 9007381
    Abstract: An exemplary method includes a transition animation system detecting a screen size of a display screen associated with a computing device executing an application, automatically generating, based on the detected screen size, a plurality of animation step values each corresponding to a different animation step included in a plurality of animation steps that are to be involved in an animation of a transition of a user interface associated with the application into the display screen, and directing the computing device to perform the plurality of animation steps in accordance with the generated animation step values. Corresponding methods and systems are also disclosed.
    Type: Grant
    Filed: September 2, 2011
    Date of Patent: April 14, 2015
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jian Huang, Jack J. Hao
  • Patent number: 9001129
    Abstract: A processing apparatus for creating an avatar is provided. The processing apparatus calculates skeleton sizes of joints of the avatar and local coordinates corresponding to sensors attached to a target user, by minimizing a sum of a difference function and a skeleton prior function, the difference function representing a difference between a forward kinematics function regarding the joints with respect to reference poses of the target user and positions of the sensors, and the skeleton prior function based on statistics of skeleton sizes with respect to reference poses of a plurality of users.
    Type: Grant
    Filed: October 19, 2011
    Date of Patent: April 7, 2015
    Assignees: Samsung Electronics Co., Ltd., Texas A&M University System
    Inventors: Taehyun Rhee, Inwoo Ha, Dokyoon Kim, Xiaolin Wei, Jinxiang Chai, Huajun Liu
  • Patent number: 8994738
    Abstract: System and method for rendering a sequence of images corresponding to a sequence of camera poses of a target area to generate an animation representative of a progression of camera poses are provided. An initial image and an associated initial depthmap of a target area captured from an initial camera pose, and a final image and an associated final depthmap of the target area captured from a final camera pose are identified. A plurality of intermediate images representing a plurality of intermediate camera poses directed at the target are produced by performing interpolation on the initial image, the initial depthmap, the final image and the final depthmap. Each intermediate image is associated with a point along the navigational path between the initial and the final camera poses. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Carlos Hernandez Esteban, Steven Maxwell Seitz, Matthew Robert Simpson
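Both of the Google patents above animate between an initial and a final camera pose by producing intermediate images along a navigational path. The actual method interpolates images and depthmaps; the sketch below shows only the pose-path step, with linear interpolation over pose parameters as an illustrative simplification.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def intermediate_poses(initial_pose, final_pose, n):
    """Generate n intermediate camera poses, each associated with a point
    along the navigational path between the initial and final poses.
    Poses are tuples of numeric parameters (e.g. position components)."""
    return [tuple(lerp(a, b, (i + 1) / (n + 1))
                  for a, b in zip(initial_pose, final_pose))
            for i in range(n)]
```

Rendering one frame per intermediate pose yields the transition of views between the initial and final camera poses.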
  • Patent number: 8988422
    Abstract: Techniques are disclosed for augmenting hand-drawn animation of human characters with three-dimensional (3D) physical effects to create secondary motion. Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These 3D effects are largely passive and tend to be time consuming to animate by hand, yet most are very effectively simulated in current animation software. The techniques enable hand-drawn characters to interact with simulated objects such as cloth and clothing, balls and particles, and fluids. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects.
    Type: Grant
    Filed: December 17, 2010
    Date of Patent: March 24, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Jessica Kate Hodgins, Eakta Jain, Yaser Sheikh
  • Patent number: 8988437
    Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
    Type: Grant
    Filed: March 20, 2009
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
  • Patent number: 8988439
    Abstract: A method or apparatus to provide motion-based display effects in a mobile device is described. The method comprises determining a motion of the mobile device using an accelerometer. The method further comprises utilizing the motion of the mobile device to overlay a motion-based display effect on the display of the mobile device, in one embodiment to enhance the three-dimensional effect of the image.
    Type: Grant
    Filed: June 6, 2008
    Date of Patent: March 24, 2015
    Assignee: DP Technologies, Inc.
    Inventors: Philippe Kahn, Arthur Kinsolving, Colin McClarin Cooper, John Michael Fitzgibbons
  • Patent number: 8982132
    Abstract: Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: March 17, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Joaquin Cruz Blas, Jr., James W. Doubek
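The value-template mechanism described above (a command returns a bare number, and a template combines it with a string for the rendering function) can be sketched briefly. The function names and template syntax here are illustrative assumptions, not the patented API.

```python
# Hedged sketch of a value template: the tween produces numbers, and
# the template turns each number into the string form a style
# property needs (e.g. "12px").

def make_value_template(template):
    """Return a function mapping a numeric value to a styled string."""
    return lambda value: template.format(value)

def animate(start, end, steps, template, apply_fn):
    """Tween a numeric property and push templated string values to
    apply_fn, which stands in for the rendering function."""
    tmpl = make_value_template(template)
    for i in range(steps + 1):
        value = start + (end - start) * i / steps
        apply_fn(tmpl(value))
```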
  • Patent number: 8982122
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: March 17, 2015
    Assignee: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20150070362
    Abstract: A transition path determinator 4 determines, with reference to a hierarchical structure stored in a transition table storage 3, a transition path leading from the current screen that an output unit 9 is displaying to a transition destination screen for which the transition path determinator accepts a shortcut operation from a user via an input unit 1. An animation-during-transition acquiring unit 5 acquires each animation during transition included in the transition path from an animation-during-transition table storage 6, an animation-during-transition controller 7 controls a playback speed according to the number of hierarchical layers transitioned in the transition path, and the output unit 9 displays the animations during transition in order at that playback speed, so that the transition to the transition destination screen is made.
    Type: Application
    Filed: July 20, 2012
    Publication date: March 12, 2015
    Applicant: Mitsubishi Electric Corporation
    Inventor: Masato Hirai
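The speed-control step (playback speed scaled by the number of hierarchical layers transitioned) can be sketched as keeping the total transition time roughly constant. The time budget, per-animation duration, and speed cap below are illustrative assumptions.

```python
# Hedged sketch: speed up each per-layer animation as the transition
# path gets deeper, so a multi-layer shortcut still completes within
# a fixed time budget.

def playback_speed(num_layers, per_anim_seconds=0.5,
                   total_budget_seconds=1.0, max_speed=8.0):
    """Speed multiplier so num_layers animations fit the budget,
    never slower than real time and capped at max_speed."""
    if num_layers <= 0:
        return 1.0
    needed = num_layers * per_anim_seconds / total_budget_seconds
    return min(max(needed, 1.0), max_speed)
```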
  • Patent number: 8976184
    Abstract: A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: March 10, 2015
    Assignee: Nintendo Co., Ltd.
    Inventors: Henry Sterchi, Jeff Kalles, Shigeru Miyamoto, Denis Dyack, Carey Murray
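The head-turn computation the abstract alludes to can be sketched as a clamped look-at yaw. Positions here are in the horizontal (x, z) plane, and the neck-turn limit is an illustrative assumption, not a value from the patent.

```python
import math

# Hedged sketch: compute the yaw offset needed for a character's head
# to face a tagged item, clamped to a plausible neck range so the
# motion looks natural.

def head_yaw_toward(head_pos, head_facing_deg, item_pos,
                    max_turn_deg=70.0):
    """Return the clamped yaw offset (degrees) to look at item_pos.

    head_pos, item_pos: (x, z) positions in the horizontal plane.
    head_facing_deg: current world-space yaw (0 = facing +z).
    """
    dx = item_pos[0] - head_pos[0]
    dz = item_pos[1] - head_pos[1]
    target = math.degrees(math.atan2(dx, dz))          # world-space yaw
    offset = (target - head_facing_deg + 180.0) % 360.0 - 180.0
    return max(-max_turn_deg, min(max_turn_deg, offset))
```

An emotional response could then modulate the same output, e.g. a fear tag reducing `max_turn_deg` and adding a recoil animation.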
  • Patent number: 8970592
    Abstract: A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes obtaining first data corresponding to a first simulation of matter in a space domain. The method also includes performing, using the first data, a second simulation that produces second data representative of particles in the space domain. The method also includes rasterizing the second data representative of the particles as defined by cells of a grid, wherein each cell has a common depth-to-size ratio, and, rendering an image of the particles from the rasterized second data.
    Type: Grant
    Filed: April 19, 2011
    Date of Patent: March 3, 2015
    Assignee: Lucasfilm Entertainment Company LLC
    Inventor: Frank Losasso Petterson
  • Patent number: 8957899
    Abstract: The present invention includes an image processing apparatus having a slide show function of displaying a plurality of images while sequentially and automatically switching the images. The apparatus includes an adding unit which adds a transition effect at the time of switching display from a first image to a second image, an obtaining unit which obtains characteristic values indicative of the luminance of the first and second images, and a control unit which controls the adding unit to add the transition effect when the difference between the characteristic value of the first image and that of the second image is equal to or larger than a predetermined threshold.
    Type: Grant
    Filed: August 2, 2010
    Date of Patent: February 17, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hirofumi Takei
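The threshold test above is straightforward to sketch: compare the mean luminance of the two slides and only insert a transition effect when the brightness jump is large. Rec. 601 luma weights are a standard choice; the threshold value is an assumption, not taken from the patent.

```python
# Hedged sketch of the luminance-gated transition decision.

def mean_luminance(pixels):
    """Average luma of an iterable of (r, g, b) tuples, 0-255 scale,
    using Rec. 601 weights."""
    total = count = 0
    for r, g, b in pixels:
        total += 0.299 * r + 0.587 * g + 0.114 * b
        count += 1
    return total / count

def needs_transition_effect(img_a, img_b, threshold=64.0):
    """True when the brightness jump between slides warrants a fade."""
    return abs(mean_luminance(img_a) - mean_luminance(img_b)) >= threshold
```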
  • Patent number: 8957900
    Abstract: Animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
    Type: Grant
    Filed: December 13, 2010
    Date of Patent: February 17, 2015
    Assignee: Microsoft Corporation
    Inventors: Bonny Lau, Song Zou, Wei Zhang, Brian Beck, Jonathan Gleasman, Pai-Hung Chen
  • Patent number: 8941667
    Abstract: The invention generally provides a method and apparatus for up-converting the frame rate of a digital video signal, the method comprising: receiving a digital video signal containing a first frame and a second frame; finding, in one of the received frames, matches for objects in the other of the received frames; utilizing 3-dimensional position data for the objects within the frames to determine 3-dimensional movement matrices for the matched objects; and, using the 3-dimensional movement matrices, determining the position of the objects in a temporally intermediate frame, thereby generating an interpolated frame temporally between the first and second frames.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: January 27, 2015
    Assignee: Vestel Elektronik Sanayi ve Ticaret A.S.
    Inventors: Osman Serdar Gedik, Abdullah Aydin Alatan
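The interpolation step can be sketched in miniature. The patented method derives full 3D movement matrices per object; the sketch below assumes pure translation and simply places each matched object at the temporal midpoint of its two known positions.

```python
# Hedged sketch of building an in-between frame for frame-rate
# up-conversion: per-object linear interpolation of 3D positions.

def interpolate_positions(frame_a, frame_b, t=0.5):
    """Return per-object 3D positions at normalized time t between
    frame_a and frame_b (each a dict: object id -> (x, y, z))."""
    mid = {}
    for obj_id, pa in frame_a.items():
        pb = frame_b[obj_id]
        mid[obj_id] = tuple(a + t * (b - a) for a, b in zip(pa, pb))
    return mid
```

Working with full rigid-motion matrices instead of raw positions would additionally interpolate each object's rotation, which is what makes the 3D formulation stronger than 2D block matching.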
  • Patent number: 8928671
    Abstract: In particular embodiments, a method includes generating a 3D display of an avatar of a person, where the avatar can receive inputs identifying a type of a physiological event, a location of the physiological event in or on a person's body in three spatial dimensions, a time range of the physiological event, a quality of the physiological event, and rendering the physiological event on the avatar based on the inputs.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: January 6, 2015
    Assignee: Fujitsu Limited
    Inventors: B. Thomas Adler, David Marvit, Jawahar Jain
  • Patent number: 8902233
    Abstract: Techniques that give animators the direct control they are accustomed to with key frame animation, while providing for path-based motion. A key frame animation-based interface is used to achieve path-based motion with rotation animation variable value correction using additional animation variables for smoothing. The value of the additional animation variables for smoothing can be directly controlled using a tangent handle in a user interface.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: December 2, 2014
    Assignee: Pixar
    Inventors: Chen Shen, Bena L. Currin, Timothy S. Milliron
  • Patent number: 8902232
    Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
    Type: Grant
    Filed: February 2, 2009
    Date of Patent: December 2, 2014
    Assignee: University of Southern California
    Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
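The synthesis side of a polynomial displacement map reduces to evaluating a small per-texel polynomial in local deformation parameters derived from the markers. The biquadratic basis below is an illustrative assumption; the paper/patent's exact parameterization may differ.

```python
# Hedged sketch: evaluate a per-texel displacement polynomial in two
# local deformation parameters (u, v) to recover fine wrinkle detail
# on top of a coarse marker-driven mesh.

def displacement(coeffs, u, v):
    """Evaluate a biquadratic displacement polynomial at (u, v).

    coeffs: six coefficients for the basis [1, u, v, u*u, u*v, v*v],
    stored per texel in the polynomial displacement map.
    """
    basis = (1.0, u, v, u * u, u * v, v * v)
    return sum(c * b for c, b in zip(coeffs, basis))
```

At runtime each texel's six stored coefficients are evaluated against the current marker-derived (u, v), so the heavy capture data compresses to a few map channels.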
  • Patent number: 8902235
    Abstract: A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: December 2, 2014
    Assignee: Adobe Systems Incorporated
    Inventor: Alexandru Chiculită
  • Patent number: 8897821
    Abstract: A method for providing visual effect messages on a receiving end, and an associated transmitting-end configuration, is provided. At the transmitting end, visual effect positions and visual effects of messages are determined according to an input message. The visual effect positions and visual effect information are transmitted to the receiving end, and the visual effects are displayed at the visual effect positions at the receiving end according to the visual effect information.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: November 25, 2014
    Assignee: MStar Semiconductor, Inc.
    Inventors: Chih-Hsien Huang, Sheng-Chi Yu
  • Patent number: 8878880
    Abstract: A method of driving an electrophoretic display device includes changing the gradation level of image data on the basis of correction data corresponding to the gradation level, converting image data with the changed gradation level to a dithering pattern, in which the first color and the second color are combined, corresponding to the changed gradation level for each predetermined region of image data, and driving the electrophoretic particles of the first color and the electrophoretic particles of the second color on the basis of image data converted to the dithering pattern for the plurality of pixels in the display section.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: November 4, 2014
    Assignee: Seiko Epson Corporation
    Inventors: Tetsuaki Otsuki, Kota Muto
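The second stage of the method above (converting a corrected gradation level into a two-color pattern) can be sketched with ordered dithering. The 2x2 Bayer threshold matrix is a standard textbook choice, not taken from the patent.

```python
# Hedged sketch: turn a corrected gray level into a black/white
# pixel pattern for a region of an electrophoretic display via
# ordered (Bayer) dithering.

BAYER_2X2 = ((0, 2),
             (3, 1))

def dither_region(level, width, height, max_level=4):
    """Return a height x width grid of 0 (first color) / 1 (second
    color) pixels approximating `level` out of `max_level`."""
    return [[1 if level > BAYER_2X2[y % 2][x % 2] else 0
             for x in range(width)]
            for y in range(height)]
```

Driving the two particle colors then reduces to addressing each pixel according to the 0/1 entry of this pattern.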
  • Patent number: 8880044
    Abstract: A mobile terminal is presented. The mobile terminal includes a display including a touchscreen, and a controller for performing an editing operation on information displayed on the touchscreen according to a state of an object in near-proximity to the displayed information.
    Type: Grant
    Filed: January 26, 2009
    Date of Patent: November 4, 2014
    Assignee: LG Electronics Inc.
    Inventor: Jong Hwan Kim
  • Patent number: RE45422
    Abstract: Annotation techniques are provided. In one aspect, a method for processing a computer-based material is provided. The method comprises the following steps. The computer-based material is presented. One or more portions of the computer-based material are determined to be of interest to a user. The one or more portions are annotated to permit return to the one or more portions at a later time. In another aspect, a user interface is provided. The user interface comprises a computer-based material; a viewing focal area encompassing a portion of the computer-based material; and one or more indicia associated with and annotating the portion of the computer-based material.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: March 17, 2015
    Assignee: Loughton Technology, L.L.C.
    Inventor: Christopher Vance Beckman