Temporal Interpolation Or Processing Patents (Class 345/475)
-
Patent number: 10607568
Abstract: A method for processing virtual pointer movement. The method and related components, when active in an application or virtual environment as a whole, aid a user in tasks that would otherwise require more effort and dexterity with currently implemented pointers. The method uses three operations (separated into two components, Smooth Pointer and Aim Assist Area) that result in a systematic asynchronous movement of a controller and its interface counterpart. Smooth Pointer, responsible for the pointer's stability, is a group of features that manage the process of smoothing the controller's rotation and final virtual pointer position by using a continuous interpolation process. Aim Assist Area is the component (based on two methods: Magnetism and Friction) that describes how the pointer is subject to interference from the environment, helping users aim more steadily at elements available in the interface.
Type: Grant
Filed: June 20, 2018
Date of Patent: March 31, 2020
Assignee: SAMSUNG ELECTRÔNICA DA AMAZÔNIA LTDA.
Inventors: Alvaro Augusto Braga Lourenço, Taynah De Araujo Miyagawa, Juscelino Tanaka Saraiva
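The abstract does not publish the interpolation formula, but the continuous interpolation it describes for Smooth Pointer is commonly realized as per-frame exponential smoothing toward the raw controller sample. A minimal sketch under that assumption (the function name and `alpha` parameter are illustrative, not from the patent):

```python
def smooth_pointer(raw_positions, alpha=0.25):
    """Exponentially interpolate a virtual pointer toward each raw
    controller sample; smaller alpha yields a steadier pointer."""
    smoothed = []
    current = raw_positions[0]
    for target in raw_positions:
        # Continuous interpolation: move a fixed fraction of the
        # remaining distance toward the raw sample each frame.
        current = tuple(c + alpha * (t - c) for c, t in zip(current, target))
        smoothed.append(current)
    return smoothed
```

Smaller `alpha` values trade responsiveness for stability, which is the pointer-steadiness trade-off the abstract attributes to this component.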
-
Patent number: 10546409
Abstract: Techniques described herein relate to a streamlined animation production workflow that integrates script drafting, performance, and editing. A script including animation events is parsed to encode the animation events into nodes of a story model. The animation events are automatically triggered by a performance as a playhead advances through the story model and identifies active node(s). A command interface accepts various commands that allow a performer to act as a director by controlling recording and playback. Recording binds a generated animation event to each active node. Playback triggers generated animation events for active nodes. An animated movie is assembled from the generated animation events in the story model. The animated movie can be presented as a live preview to provide feedback to the performer, and a teleprompter interface can guide a performer by presenting and advancing the script to follow the performance.
Type: Grant
Filed: August 7, 2018
Date of Patent: January 28, 2020
Assignee: Adobe Inc.
Inventors: Hariharan Subramonyam, Eytan Adar, Lubomira Assenova Dontcheva, Wilmot Wei-Mau Li
-
Patent number: 10540820
Abstract: This disclosure generally relates to a system, which includes a processor to receive video of a cymatic effect. The video of the cymatic effect may be converted into a virtual reality effect which includes a virtual reality representation of the cymatic effect. The virtual reality effect may then be output by the processor to a virtual reality device for display to a user.
Type: Grant
Filed: February 1, 2018
Date of Patent: January 21, 2020
Assignee: CTRL5, Corp.
Inventor: Robert Owen Brown, III
-
Patent number: 10462447
Abstract: An electronic system includes a circuitry configured to obtain a sequence of frames of an object under different viewing angles at consecutive time instances. For a first time instance, the circuitry generates a point cloud descriptive of an external surface of the object on the basis of (i) a point cloud obtained for a second time instance preceding the first time instance and (ii) disparity information concerning a frame captured at the first time instance.
Type: Grant
Filed: June 28, 2019
Date of Patent: October 29, 2019
Assignee: Sony Corporation
Inventors: Roderick Köhle, Francesco Michielin, Dennis Harres
-
Patent number: 10417818
Abstract: A method for providing a three-dimensional body model which may be applied for an animation, based on a moving body, wherein the method comprises providing a parametric three-dimensional body model, which allows shape and pose variations; applying a standard set of body markers; optimizing the set of body markers by generating an additional set of body markers and applying the same for providing 3D coordinate marker signals for capturing shape and pose of the body and dynamics of soft tissue; and automatically providing an animation by processing the 3D coordinate marker signals in order to provide a personalized three-dimensional body model, based on estimated shape and an estimated pose of the body by means of predicted marker locations.
Type: Grant
Filed: June 19, 2017
Date of Patent: September 17, 2019
Assignee: Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V.
Inventors: Matthew Loper, Naureen Mahmood, Michael Black
-
Patent number: 10388078
Abstract: Disclosed are computer-readable devices, systems and methods for generating a model of a clothed body. The method includes generating a model of an unclothed human body, the model capturing a shape or a pose of the unclothed human body, determining two-dimensional contours associated with the model, and computing deformations by aligning a contour of a clothed human body with a contour of the unclothed human body. Based on the two-dimensional contours and the deformations, the method includes generating a first two-dimensional model of the unclothed human body, the first two-dimensional model factoring the deformations of the unclothed human body into one or more of a shape variation component, a viewpoint change, and a pose variation and learning an eigen-clothing model using principal component analysis applied to the deformations, wherein the eigen-clothing model classifies different types of clothing, to yield a second two-dimensional model of a clothed human body.
Type: Grant
Filed: September 11, 2017
Date of Patent: August 20, 2019
Assignee: BROWN UNIVERSITY
Inventors: Michael J. Black, Oren Freifeld, Alexander W. Weiss, Matthew M. Loper, Peng Guan
-
Patent number: 10299750
Abstract: A medical image processing apparatus according to an embodiment includes processing circuitry. The processing circuitry detects three or more bones and a joint space region from three-dimensional medical image data captured for images of a joint formed between the three or more bones, the joint space region corresponding to a joint space of the joint. The processing circuitry divides the joint space region into a plurality of small regions corresponding to different pairs of opposed bones of the three or more bones. The processing circuitry obtains information on each of the small regions based on the small regions into which the joint space region has been divided that correspond to the different pairs of bones. The processing circuitry outputs the obtained information.
Type: Grant
Filed: July 28, 2017
Date of Patent: May 28, 2019
Assignee: Toshiba Medical Systems Corporation
Inventors: Shintaro Funabasama, Yasuko Fujisawa
-
Patent number: 10275925
Abstract: A blend shape method and system that modifies the U-V values associated with vertices in blend shapes constructed in a 3-D blend shape combination system. The blend shape method determines the U-V coordinates associated with each vertex in a base shape and the U-V coordinates associated with corresponding vertices in one or more driving shapes. The method calculates U-V delta values that are associated with vertices in the driving shape. The method multiplies the U-V delta values by a weight value associated with the driving shape to determine a transitional U-V delta value for each vertex. The transitional U-V delta value for each vertex is added to the U-V coordinates for the corresponding vertex in the base shape to determine the modified U-V coordinates for the resulting blend shape. Multiple driving shapes may be used with each shape contributing to the modified U-V values according to its relative weight.
Type: Grant
Filed: September 29, 2016
Date of Patent: April 30, 2019
Assignee: Sony Interactive Entertainment America, LLC
Inventor: Homoud B. Alkouh
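The arithmetic this abstract walks through — per-vertex U-V deltas from each driving shape, scaled by that shape's weight and summed onto the base coordinates — can be sketched directly; a minimal version with illustrative names (not from the patent):

```python
def blend_uv(base_uv, driving_shapes):
    """base_uv: list of (u, v) per vertex of the base shape.
    driving_shapes: list of (weight, uv_list) pairs, where uv_list gives
    the (u, v) of the corresponding vertex in that driving shape."""
    result = []
    for i, (bu, bv) in enumerate(base_uv):
        du = dv = 0.0
        for weight, uvs in driving_shapes:
            # U-V delta of the driving shape, scaled by its weight,
            # gives the transitional delta for this vertex.
            du += weight * (uvs[i][0] - bu)
            dv += weight * (uvs[i][1] - bv)
        result.append((bu + du, bv + dv))
    return result
```

With several driving shapes, each contributes to the modified U-V values in proportion to its relative weight, as the last sentence of the abstract states.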
-
Patent number: 10248212
Abstract: A system is provided that encodes one or more dynamic haptic effects. The system defines a dynamic haptic effect as including a plurality of key frames, where each key frame includes an interpolant value and a corresponding haptic effect. An interpolant value is a value that specifies where an interpolation occurs. The system generates a haptic effect file, and stores the dynamic haptic effect within the haptic effect file.
Type: Grant
Filed: March 15, 2018
Date of Patent: April 2, 2019
Assignee: IMMERSION CORPORATION
Inventors: Henry Da Costa, Feng Tian An, Christopher J. Ullrich
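A key frame here pairs an interpolant value with a haptic effect, and playback interpolates between neighboring key frames. A hedged sketch of evaluating such an effect, assuming linear interpolation of a scalar magnitude (the representation is an assumption; the patent's effects may carry richer parameters):

```python
def interpolate_effect(key_frames, t):
    """key_frames: list of (interpolant, magnitude) pairs sorted by
    interpolant. Returns the magnitude linearly interpolated at t,
    clamping outside the key-frame range."""
    if t <= key_frames[0][0]:
        return key_frames[0][1]
    for (t0, m0), (t1, m1) in zip(key_frames, key_frames[1:]):
        if t0 <= t <= t1:
            # The interpolant value specifies where between the two
            # key frames the interpolation occurs.
            frac = (t - t0) / (t1 - t0)
            return m0 + frac * (m1 - m0)
    return key_frames[-1][1]
```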
-
Patent number: 10169676
Abstract: Described herein are methods and systems for closed-form 3D model generation of non-rigid complex objects from scans with large holes. A computing device receives (i) a partial scan of a non-rigid complex object captured by a sensor coupled to the computing device; (ii) a partial 3D model corresponding to the object, and (iii) a whole 3D model corresponding to the object, wherein the partial 3D scan and the partial 3D model each includes one or more large holes. The device performs a rough match on the partial 3D model and changes the whole 3D model using the rough match to generate a deformed 3D model. The device refines the deformed 3D model using a deformation graph, reshapes the refined deformed 3D model to have greater detail, and adjusts the whole 3D model according to the reshaped 3D model to generate a closed-form 3D model that closes holes in the scan.
Type: Grant
Filed: February 23, 2017
Date of Patent: January 1, 2019
Assignee: VanGogh Imaging, Inc.
Inventors: Xin Hou, Yasmin Jahir, Jun Yin
-
Patent number: 10062410
Abstract: Techniques and devices for creating an AutoLoop output video include performing pregate operations. The AutoLoop output video is created from a set of frames. Prior to creating the AutoLoop output video, the set of frames are automatically analyzed to identify one or more image features that are indicative of whether the image content in the set of frames is compatible with creating a video loop. Pregate operations assign one or more pregate scores for the set of frames based on the one or more identified image features, where the pregate scores indicate a compatibility to create the video loop based on the identified image features. Pregate operations automatically determine to create the video loop based on the pregate scores and generate an output video loop based on the loop parameters and at least a portion of the set of frames.
Type: Grant
Filed: September 23, 2016
Date of Patent: August 28, 2018
Assignee: Apple Inc.
Inventors: Arwen V. Bradley, Samuel G. Noble, Rudolph van der Merwe, Jason Klivington, Douglas P. Mitchell, Joseph M. Triscari
-
Patent number: 10055888
Abstract: A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprises a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message, and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.
Type: Grant
Filed: April 28, 2015
Date of Patent: August 21, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jonathan Christen, John Charles Howard, Marcus Tanner, Ben Sugden, Robert C. Memmott, Kenneth Charles Ouellette, Alex Kipman, Todd Alan Omotani, James T. Reichert, Jr.
-
Patent number: 10049473
Abstract: Embodiments of the disclosure are systems and methods for providing third party visualizations. In one embodiment, a method is provided that includes receiving, via an API, computer-executable instructions configured to render a visualization using events and a variable field; rendering the visualization using the events; causing displaying of a graphical user interface (GUI) comprising a visualization panel and a variable element; receiving, via the variable element of the GUI, an indication of a first change in the value of the variable field to a first value; re-rendering the visualization using the events and the first value; and causing display of the GUI with an updated visualization panel and the variable element.
Type: Grant
Filed: April 27, 2015
Date of Patent: August 14, 2018
Assignee: SPLUNK INC
Inventors: Nicholas Filippi, Simon Fishel, Siegfried Puchbauer-Schnabel, Mathew Elting, Carl Yestrau
-
Patent number: 9997201
Abstract: The system provides a method and apparatus for writing a unique copy of data associated with each of a plurality of individual users, without the need for storing duplicate copies of the entire data file. The system provides for creating an unusable copy of a portion of the data that is to be shared by all users of the complete data. The system will store and optionally encrypt and/or watermark a unique copy of the remainder portion of the data for each unique user. When accessed from storage, the system will combine the shared portion with the unique remainder to reconstitute the entire file for access by the user. Deleting the unique remainder associated with a particular user makes all of the data useless to that user. In one embodiment, the system first compresses the entire data file using index frames and delta.
Type: Grant
Filed: June 19, 2015
Date of Patent: June 12, 2018
Assignee: PHILO, INC.
Inventors: Christopher Thorpe, Thomer Gil, Christopher Small
-
Patent number: 9977816
Abstract: A system determines ranking scores for objects based on “virtual” links defined for the objects. A link-based ranking score may then be calculated for the objects based on the virtual links. In one implementation, the virtual links are determined based on a metric of content-based similarity between the objects.
Type: Grant
Filed: December 10, 2015
Date of Patent: May 22, 2018
Assignee: Google LLC
Inventors: Yushi Jing, Henry Allan Rowley, Shumeet Baluja
-
Patent number: 9830741
Abstract: Techniques are disclosed for processing graphics objects in a stage of a graphics processing pipeline. The techniques include receiving a graphics primitive associated with the graphics object, and determining a plurality of attributes corresponding to one or more vertices associated with the graphics primitive. The techniques further include determining values for one or more state parameters associated with a downstream stage of the graphics processing pipeline based on a visual effect associated with the graphics primitive. The techniques further include transmitting the state parameter values to the downstream stage of the graphics processing pipeline. One advantage of the disclosed techniques is that visual effects are flexibly and efficiently performed.
Type: Grant
Filed: November 7, 2012
Date of Patent: November 28, 2017
Assignee: NVIDIA Corporation
Inventors: Emmett M. Kilgariff, Morgan McGuire, Yury Y. Uralsky, Ziyad S. Hakura
-
Patent number: 9797802
Abstract: A method for developing a virtual testing model of a subject for use in simulated aerodynamic testing comprises providing a computer generated generic 3D mesh of the subject, identifying a dimension of the subject and at least one reference point on the subject, imaging the subject to develop point cloud data representing at least the subject's outer surface and adapting the generic 3D mesh to the subject. The generic 3D mesh is adapted by modifying it to have a corresponding dimension and at least one corresponding reference point, and applying at least a portion of the point cloud data from the imaged subject's outer surface at selected locations to scale the generic 3D mesh to correspond to the subject, thereby developing the virtual testing model specific to the subject.
Type: Grant
Filed: March 4, 2014
Date of Patent: October 24, 2017
Inventor: Jay White
-
Patent number: 9779484
Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
Type: Grant
Filed: August 4, 2014
Date of Patent: October 3, 2017
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Nathan A. Carr
-
Patent number: 9704288
Abstract: Techniques are disclosed for providing a learning-based clothing model that enables the simultaneous animation of multiple detailed garments in real-time. A simple conditional model learns and preserves key dynamic properties of cloth motions and folding details. Such a conditional model may be generated for each garment worn by a given character. Once generated, the conditional model may be used to determine complex body/cloth interactions in order to render the character and garment from frame-to-frame. The clothing model may be used for a variety of garments worn by male and female human characters (as well as non-human characters) while performing a varied set of motions typically used in video games (e.g., walking, running, jumping, turning, etc.).
Type: Grant
Filed: December 21, 2010
Date of Patent: July 11, 2017
Assignee: Disney Enterprises, Inc.
Inventors: Edilson de Aguiar, Leonid Sigal, Adrien Treuille, Jessica K. Hodgins
-
Patent number: 9681173
Abstract: There are disclosed a method of, and a server for, processing a user request for a web resource, the user request received at a server from an electronic device via a communication network. The method can be executed at the server.
Type: Grant
Filed: October 2, 2015
Date of Patent: June 13, 2017
Assignee: YANDEX EUROPE AG
Inventors: Nina Viktorovna Sapunova, Evgeny Valeryevich Eroshin, Ekaterina Vladimirovna Rubtcova, Maksim Pavlovich Voznin, Grigory Aleksandrovich Matveev, Nikita Alekseevich Smetanin
-
Patent number: 9672646
Abstract: Systems, methods, and computer-readable storage media for performing a visual rewind operation in an image editing application may include capturing, compressing, and storing image data and interaction logs and correlations between them. The stored information may be used in a visual rewind operation, during which a sequence of frames (e.g., an animation) depicting changes in an image during image editing operations is displayed in reverse order. In response to navigating to a point in the animation, data representing the image state at that point may be reconstructed from the stored data and stored as a modified image or a variation thereof. The methods may be employed in an image editing application to provide a partial undo operation, image editing variation previewing, and/or visually-driven editing script creation. The methods may be implemented as stand-alone applications or as program instructions implementing components of a graphics application, executable by a CPU and/or GPU.
Type: Grant
Filed: August 28, 2009
Date of Patent: June 6, 2017
Assignee: Adobe Systems Incorporated
Inventors: Jerry G. Harris, Scott L. Byer, Stephan D. Schaem
-
Patent number: 9646227
Abstract: This disclosure describes techniques for training models from video data and applying the learned models to identify desirable video data. Video data may be labeled to indicate a semantic category and/or a score indicative of desirability. The video data may be processed to extract low and high level features. A classifier and a scoring model may be trained based on the extracted features. The classifier may estimate a probability that the video data belongs to at least one of the categories in a set of semantic categories. The scoring model may determine a desirability score for the video data. New video data may be processed to extract low and high level features, and feature values may be determined based on the extracted features. The learned classifier and scoring model may be applied to the feature values to determine a desirability score associated with the new video data.
Type: Grant
Filed: July 29, 2014
Date of Patent: May 9, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Nitin Suri, Xian-Sheng Hua, Tzong-Jhy Wang, William D. Sproule, Andrew S. Ivory, Jin Li
-
Patent number: 9600160
Abstract: There is provided an image processing device including a moving image generation unit configured to generate a parallelly animated moving image in which a plurality of object images are each parallelly animated, the plurality of the object images having been selected from a series of object images that have been generated by extracting a moving object from frame images of a source moving image, and an image output unit configured to output the parallelly animated moving image.
Type: Grant
Filed: October 29, 2014
Date of Patent: March 21, 2017
Assignee: Sony Corporation
Inventors: Tatsuhiro Iida, Shogo Kimura
-
Patent number: 9478066
Abstract: A system, method, and computer program product are provided for adjusting vertex positions. One or more viewport dimensions are received and a snap spacing is determined based on the one or more viewport dimensions. The vertex positions are adjusted to a grid according to the snap spacing. The precision of the vertex adjustment may increase as at least one dimension of the viewport decreases. The precision of the vertex adjustment may decrease as at least one dimension of the viewport increases.
Type: Grant
Filed: March 14, 2013
Date of Patent: October 25, 2016
Assignee: NVIDIA Corporation
Inventors: Eric Brian Lum, Henry Packard Moreton, Kyle Perry Roden, Walter Robert Steiner, Ziyad Sami Hakura
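The abstract states the relationship (smaller viewport, finer snap spacing, higher precision) without giving a formula. One plausible reading, sketched below under the assumption that a fixed fixed-point range is spread across the viewport dimension (the `fixed_point_range` constant and both function names are hypothetical):

```python
def snap_spacing(viewport_dim, fixed_point_range=1 << 16):
    """Derive a snap spacing from a viewport dimension: a fixed
    fixed-point range spread over a smaller viewport yields a finer
    spacing, i.e. higher precision, as the abstract describes."""
    return viewport_dim / fixed_point_range

def snap_vertex(pos, spacing):
    """Adjust one vertex coordinate to the nearest grid position."""
    return round(pos / spacing) * spacing
```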
-
Patent number: 9373187
Abstract: Method and apparatus for producing a cinemagraph, wherein based on received user input an image from a sequence of images is selected as a baseframe image. The baseframe image is segmented and at least one segment is selected based on user input. A mask is created based on the selected segments, and at least one image most similar to the baseframe is selected from the sequence of images using the mask. The selected images are aligned with the baseframe image, and a first cinemagraph is created from the selected images and the baseframe image using the mask.
Type: Grant
Filed: May 25, 2012
Date of Patent: June 21, 2016
Assignee: Nokia Corporation
Inventors: Kemal Ugur, Ali Karaoglu, Miska Hannuksela, Jani Lainema
-
Patent number: 9355438
Abstract: The geometric distortions of videos and images are corrected, wherein a plurality of geometrically distorted frames are mapped with a plurality of original frames of the video content. Further, one or more features associated with the mapped frames are identified as insensitive to the one or more geometric distortions. One or more features of the mapped frames are further mapped with original frames based on a predefined similarity threshold and thereafter one or more geometric distortion parameters are determined. Furthermore, a frame level average distortion and a video level average distortion of each of the one or more geometric distortion parameters are determined, based on which the one or more geometric distortions of the video content are corrected.
Type: Grant
Filed: January 14, 2015
Date of Patent: May 31, 2016
Assignee: INFOSYS LIMITED
Inventors: Sachin Mehta, Rajarathnam Nallusamy
-
Patent number: 9324376
Abstract: Traditionally, time-lapse videos are constructed from images captured at given time intervals called “temporal points of interests” or “temporal POIs.” Disclosed herein are intelligent systems and methods of capturing and selecting better images around temporal points of interest for the construction of improved time-lapse videos. According to some embodiments, a small “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing a similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previous selected image allows the intelligent systems and methods described herein to improve the quality of the resultant time-lapse video by discarding “outlier” or other undesirable images captured in the burst sequence around a particular temporal point of interest.
Type: Grant
Filed: September 30, 2014
Date of Patent: April 26, 2016
Assignee: Apple Inc.
Inventor: Frank Doepke
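The selection rule the abstract describes — from each burst, keep the frame most similar to the previously selected frame — can be sketched generically, leaving the similarity metric abstract (the function name and the center-frame seeding of the first burst are assumptions for illustration):

```python
def select_timelapse_frames(bursts, similarity):
    """bursts: list of bursts, one per temporal point of interest,
    each a list of candidate frames. similarity(a, b) -> higher means
    more alike. Picks from each burst the frame most similar to the
    previous pick, so outlier frames in a burst are discarded."""
    selected = [bursts[0][len(bursts[0]) // 2]]  # seed with the center frame
    for burst in bursts[1:]:
        best = max(burst, key=lambda f: similarity(selected[-1], f))
        selected.append(best)
    return selected
```

With frames reduced to brightness values and similarity as negative absolute difference, a burst containing a momentary flash or occlusion would lose out to the frame that continues the sequence smoothly.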
-
Patent number: 9305385
Abstract: An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.
Type: Grant
Filed: November 17, 2011
Date of Patent: April 5, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christopher Michael Maloney, Mirza Pasalic, Runzhen Huang
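A motion-path tween of the kind this abstract describes can be sketched as arc-length interpolation along a polyline: the ghost version of the object sits at the path's end (t = 1), and intermediate animation frames sample intermediate t values. This is a simplification; the patent's paths may be curves, and the function name is illustrative:

```python
import math

def tween_motion_path(path, t):
    """Return the (x, y) position a fraction t in [0, 1] of the way
    along a polyline motion path, measured by arc length."""
    lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
    target = t * sum(lengths)
    for (a, b), seg in zip(zip(path, path[1:]), lengths):
        if target <= seg:
            f = target / seg if seg else 0.0
            # Linear interpolation within the containing segment.
            return (a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1]))
        target -= seg
    return path[-1]
```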
-
Patent number: 9292967
Abstract: A novel “contour person” (CP) model of the human body is proposed that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The CP model is learned from a 3D model of the human body that captures natural shape and pose variations. The CP model factors deformations of the body into three components: shape variation, viewpoint change and pose variation. The CP model can be “dressed” with a low-dimensional clothing model. The clothing is represented as a deformation from the underlying CP representation. This deformation is learned from training examples using principal component analysis to produce so-called eigen-clothing. The coefficients of the eigen-clothing can be used to recognize different categories of clothing on dressed people. The parameters of the estimated 2D body can be used to discriminatively predict 3D body shape using a learned mapping approach.
Type: Grant
Filed: June 8, 2011
Date of Patent: March 22, 2016
Assignee: Brown University
Inventors: Michael J. Black, Oren Freifeld, Alexander W. Weiss, Matthew M. Loper, Peng Guan
-
Patent number: 9083814
Abstract: Displaying a lock mode screen of a mobile terminal is disclosed. One embodiment of the present disclosure pertains to a mobile terminal comprising a display module, an input device configured to detect an input for triggering a bouncing animation of a lock mode screen, and a controller configured to cause the display module to display the bouncing animation in response to the input for triggering the bouncing animation, where the bouncing animation comprises the lock mode screen bouncing for a set number of times with respect to an edge of the display module prior to stabilization.
Type: Grant
Filed: October 13, 2010
Date of Patent: July 14, 2015
Assignee: LG ELECTRONICS INC.
Inventors: Jungjoon Lee, Taehun Kim, Taekon Lee, Jeongyoon Rhee, Younhwa Choi, Minhun Kang, Hyunjoo Jeon
-
Patent number: 9041718
Abstract: Techniques are disclosed for generating a bilinear spatiotemporal basis model. A method includes the steps of predefining a trajectory basis for the bilinear spatiotemporal basis model, receiving three-dimensional spatiotemporal data for a training sequence, estimating a shape basis for the bilinear spatiotemporal basis model using the three-dimensional spatiotemporal data, and computing coefficients for the bilinear spatiotemporal basis model using the trajectory basis and the shape basis.
Type: Grant
Filed: March 20, 2012
Date of Patent: May 26, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Iain Matthews, Ijaz Akhter, Tomas Simon, Sohaib Khan, Yaser Sheikh
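In a bilinear model of this kind, spatiotemporal data arranged as a frames-by-coordinates matrix S is approximated as S ≈ T C Kᵀ, where T is the trajectory basis over time and K the shape basis over points; with orthonormal bases the coefficient step the abstract mentions reduces to a projection, C = Tᵀ S K. A minimal plain-Python sketch of that projection (helper names are illustrative, and orthonormality is an assumption):

```python
def transpose(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def bilinear_coefficients(S, T, K):
    """Project data S (frames x 3*points) onto an orthonormal trajectory
    basis T (frames x kt) and shape basis K (3*points x ks):
    C = T^T S K, so that S is approximated by T C K^T."""
    return matmul(matmul(transpose(T), S), K)
```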
-
Patent number: 9041717
Abstract: Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
Type: Grant
Filed: September 12, 2011
Date of Patent: May 26, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Michael Kaschalk, Eric A. Daniels, Brian S. Whited, Kyle D. Odermatt, Patrick T. Osborne
-
Patent number: 9030479
Abstract: Disclosed are a system and a method for motion editing multiple synchronized characters. The motion editing system comprises: a Laplacian motion editor which edits a spatial route of inputted character data according to user conditions, and processes the distortion of the interaction time; and a discrete motion editor which applies a discrete transformation while the character data is processed.
Type: Grant
Filed: June 19, 2009
Date of Patent: May 12, 2015
Assignee: SNU R&DB Foundation
Inventors: Jehee Lee, Manmyung Kim
-
Patent number: 9019279
Abstract: System and method for rendering a sequence of orthographic approximation images corresponding to camera poses to generate an animation moving between an initial view and a final view of a target area are provided. An initial image corresponding to an initial camera pose directed at the target area is identified. A final image and an associated depthmap corresponding to a final camera pose directed at the target area is further identified. A plurality of intermediate images corresponding to a plurality of camera poses directed at the target area is produced by performing interpolation on the initial image, the final image, and the associated depthmap. Each intermediate image is associated with a point along a navigational path between the initial camera pose and the final camera pose. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
Type: Grant
Filed: March 21, 2012
Date of Patent: April 28, 2015
Assignee: Google Inc.
Inventors: Jeffrey Thomas Prouty, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Robert Simpson
-
Patent number: 9019278
Abstract: Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described.
Type: Grant
Filed: December 2, 2013
Date of Patent: April 28, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Jessica Kate Hodgins, Katsu Yamane, Yuka Ariki
-
Patent number: 9007381
Abstract: An exemplary method includes a transition animation system detecting a screen size of a display screen associated with a computing device executing an application, automatically generating, based on the detected screen size, a plurality of animation step values each corresponding to a different animation step included in a plurality of animation steps that are to be involved in an animation of a transition of a user interface associated with the application into the display screen, and directing the computing device to perform the plurality of animation steps in accordance with the generated animation step values. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: September 2, 2011
Date of Patent: April 14, 2015
Assignee: Verizon Patent and Licensing Inc.
Inventors: Jian Huang, Jack J. Hao
-
Patent number: 9001129
Abstract: A processing apparatus for creating an avatar is provided. The processing apparatus calculates skeleton sizes of joints of the avatar and local coordinates corresponding to sensors attached to a target user, by minimizing a sum of a difference function and a skeleton prior function, the difference function representing a difference between a forward kinematics function regarding the joints with respect to reference poses of the target user and positions of the sensors, and the skeleton prior function based on statistics of skeleton sizes with respect to reference poses of a plurality of users.
Type: Grant
Filed: October 19, 2011
Date of Patent: April 7, 2015
Assignees: Samsung Electronics Co., Ltd., Texas A&M University System
Inventors: Taehyun Rhee, Inwoo Ha, Dokyoon Kim, Xiaolin Wei, Jinxiang Chai, Huajun Liu
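The objective above is a data term (forward kinematics vs. sensor positions) plus a statistical prior on skeleton sizes. A toy one-dimensional version makes the trade-off visible; the single-bone setup and the weight `lam` are illustrative assumptions, not the patent's formulation.

```python
# Sketch: fit a bone length by minimizing a sensor-difference term plus
# a skeleton prior term (toy 1-D version of the abstract's objective).

def fit_bone_length(sensor_pos, prior_mean, lam):
    """Minimize (s - sensor_pos)**2 + lam * (s - prior_mean)**2.

    With one bone whose tip position equals its length, the forward
    kinematics term reduces to (s - sensor_pos)**2; setting the
    derivative to zero gives the closed form below.
    """
    return (sensor_pos + lam * prior_mean) / (1 + lam)

# A noisy sensor reads 0.9 m; population statistics say 0.3 m is typical.
# A strong prior (lam=2) pulls the estimate toward plausible sizes.
print(fit_bone_length(sensor_pos=0.9, prior_mean=0.3, lam=2.0))
```

With `lam=0` the estimate trusts the sensor entirely; as `lam` grows it converges to the population mean, which is the regularizing role the prior plays in the full multi-joint problem.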
-
Patent number: 8994738
Abstract: System and method for rendering a sequence of images corresponding to a sequence of camera poses of a target area to generate an animation representative of a progression of camera poses are provided. An initial image and an associated initial depthmap of a target area captured from an initial camera pose, and a final image and an associated final depthmap of the target area captured from a final camera pose are identified. A plurality of intermediate images representing a plurality of intermediate camera poses directed at the target area are produced by performing interpolation on the initial image, the initial depthmap, the final image and the final depthmap. Each intermediate image is associated with a point along the navigational path between the initial and the final camera poses. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
Type: Grant
Filed: March 21, 2012
Date of Patent: March 31, 2015
Assignee: Google Inc.
Inventors: Carlos Hernandez Esteban, Steven Maxwell Seitz, Matthew Robert Simpson
-
Patent number: 8988439
Abstract: A method or apparatus to provide motion-based display effects in a mobile device is described. The method comprises determining a motion of the mobile device using an accelerometer. The method further comprises utilizing the motion of the mobile device to overlay a motion-based display effect on the display of the mobile device, in one embodiment to enhance the three-dimensional effect of the image.
Type: Grant
Filed: June 6, 2008
Date of Patent: March 24, 2015
Assignee: DP Technologies, Inc.
Inventors: Philippe Kahn, Arthur Kinsolving, Colin McClarin Cooper, John Michael Fitzgibbons
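A common way to turn accelerometer motion into a three-dimensional display effect is parallax: layers nearer the viewer shift further against the device tilt. The gain constant and layer-depth encoding below are assumptions; the abstract does not specify them.

```python
# Sketch: accelerometer-driven parallax overlay (illustrative gain and
# depth encoding; nearer layers shift more, which reads as depth).

def parallax_offsets(tilt_x, tilt_y, layer_depths, gain=10.0):
    """Per-layer (dx, dy) pixel offsets from the device tilt.

    tilt_x / tilt_y come from the accelerometer; layer_depths holds
    0.0 (background, stays put) .. 1.0 (foreground, shifts most).
    """
    return [(-gain * d * tilt_x, -gain * d * tilt_y) for d in layer_depths]

# Tilting the device 0.5 units rightward slides the foreground layer
# 5 px left while the background stays fixed.
print(parallax_offsets(0.5, 0.0, [0.0, 0.5, 1.0]))
```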
-
Patent number: 8988422
Abstract: Techniques are disclosed for augmenting hand-drawn animation of human characters with three-dimensional (3D) physical effects to create secondary motion. Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These 3D effects are largely passive and tend to be time consuming to animate by hand, yet most are very effectively simulated in current animation software. The techniques enable hand-drawn characters to interact with simulated objects such as cloth and clothing, balls and particles, and fluids. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects.
Type: Grant
Filed: December 17, 2010
Date of Patent: March 24, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Jessica Kate Hodgins, Eakta Jain, Yaser Sheikh
-
Patent number: 8988437
Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take-back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
Type: Grant
Filed: March 20, 2009
Date of Patent: March 24, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
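The blending at a transition point can be sketched as a crossfade between the two animation types over a short window. Pose-as-joint-angle-list and the window length are illustrative assumptions.

```python
# Sketch: crossfade from a pre-canned animation into captured motion
# over a short transition window (illustrative pose representation).

def blend_poses(precanned, captured, t):
    """Linear crossfade between two poses; t runs 0 -> 1 over the window."""
    return [(1 - t) * a + t * b for a, b in zip(precanned, captured)]

def transition_frames(precanned, captured, window=5):
    """Poses for the blend window between the two animation types."""
    return [blend_poses(precanned, captured, i / (window - 1))
            for i in range(window)]

frames = transition_frames([0.0, 10.0], [10.0, 20.0], window=5)
print(frames[2])  # halfway through the transition
```

The first frame matches the pre-canned pose exactly and the last matches the captured pose, so neither animation type pops at the boundary.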
-
Patent number: 8982132
Abstract: Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element.
Type: Grant
Filed: February 28, 2011
Date of Patent: March 17, 2015
Assignee: Adobe Systems Incorporated
Inventors: Joaquin Cruz Blas, Jr., James W. Doubek
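The value-template mechanism can be sketched minimally: a timeline command returns only a number, and the template combines it with the string the rendering function needs (e.g. a CSS-style property value). The `@@0@@` placeholder syntax is an assumption for illustration.

```python
# Sketch: combine an animated numerical value with a string template to
# produce the property string a rendering function consumes.

def apply_value_template(template, numeric_value, placeholder="@@0@@"):
    """Combine the string template with the animated numerical value."""
    return template.replace(placeholder, str(numeric_value))

# A timeline command returning 42 becomes a usable property string.
print(apply_value_template("translateX(@@0@@px)", 42))
```

Keeping the numeric value separate from the string lets the timeline interpolate a plain number while the template supplies units and syntax at render time.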
-
Patent number: 8982122
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user-defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and is configured to generate a 3D mesh based upon the user-defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 17, 2015
Assignee: Mixamo, Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20150070362
Abstract: A transition path determinator 4 determines, with reference to a hierarchical structure stored in a transition table storage 3, a transition path leading from the current screen which an output unit 9 is displaying to a transition destination screen for which a shortcut operation is accepted from a user via an input unit 1. An animation-during-transition acquiring unit 5 acquires each animation during transition included in the transition path from an animation-during-transition table storage 6, an animation-during-transition controller 7 controls the playback speed according to the number of hierarchical layers transitioned in the transition path, and the output unit 9 displays the animations during transition in order at that playback speed, so that a transition to the transition destination screen is made.
Type: Application
Filed: July 20, 2012
Publication date: March 12, 2015
Applicant: Mitsubishi Electric Corporation
Inventor: Masato Hirai
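The playback-speed control above can be sketched simply: when a shortcut jumps across several hierarchical layers, each layer's transition animation plays faster so the whole sequence stays snappy. The fixed 600 ms overall budget is an assumption; the publication only says speed depends on the layer count.

```python
# Sketch: scale per-layer animation duration by the number of
# hierarchical layers transitioned (illustrative fixed total budget).

def per_layer_duration_ms(num_layers, total_budget_ms=600.0):
    """How long each animation-during-transition in the path may play."""
    return total_budget_ms / num_layers

# A one-layer transition gets the full budget; a three-layer shortcut
# plays each animation three times as fast.
print(per_layer_duration_ms(1), per_layer_duration_ms(3))
```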
-
Patent number: 8976184
Abstract: A game developer can "tag" an item in the game environment. When an animated character walks near the "tagged" item, the animation engine can cause the character's head to turn toward the item and mathematically compute what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest.
Type: Grant
Filed: October 9, 2013
Date of Patent: March 10, 2015
Assignee: Nintendo Co., Ltd.
Inventors: Henry Sterchi, Jeff Kalles, Shigeru Miyamoto, Denis Dyack, Carey Murray
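The head-turn computation can be sketched as a clamped look-at: yaw the head toward the tagged item, limited to a plausible neck range so the motion looks natural. Positions here are 2-D top-down coordinates, and the 60-degree limit is an illustrative assumption.

```python
# Sketch: compute how far a character's head should yaw toward a tagged
# item, clamped to a neck limit (illustrative 2-D top-down version).
import math

def head_yaw_toward(char_pos, facing_deg, item_pos, max_turn_deg=60.0):
    """Head yaw in degrees relative to the body's facing, clamped."""
    dx = item_pos[0] - char_pos[0]
    dy = item_pos[1] - char_pos[1]
    target = math.degrees(math.atan2(dy, dx))
    delta = (target - facing_deg + 180.0) % 360.0 - 180.0  # shortest turn
    return max(-max_turn_deg, min(max_turn_deg, delta))

# Item up-and-right of a character facing along +x: a 45-degree glance.
print(head_yaw_toward((0.0, 0.0), 0.0, (1.0, 1.0)))
```

An item far to the side would demand a 90-degree turn, but the clamp caps it at the neck limit, which is part of making the action "look real and normal".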
-
Patent number: 8970592
Abstract: A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes obtaining first data corresponding to a first simulation of matter in a space domain. The method also includes performing, using the first data, a second simulation that produces second data representative of particles in the space domain. The method also includes rasterizing the second data representative of the particles as defined by cells of a grid, wherein each cell has a common depth-to-size ratio, and rendering an image of the particles from the rasterized second data.
Type: Grant
Filed: April 19, 2011
Date of Patent: March 3, 2015
Assignee: Lucasfilm Entertainment Company LLC
Inventor: Frank Losasso Petterson
-
Patent number: 8957900
Abstract: Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications, and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
Type: Grant
Filed: December 13, 2010
Date of Patent: February 17, 2015
Assignee: Microsoft Corporation
Inventors: Bonny Lau, Song Zou, Wei Zhang, Brian Beck, Jonathan Gleasman, Pai-Hung Chen
-
Patent number: 8957899
Abstract: The present invention includes an image processing apparatus having a slide show function of displaying a plurality of images while sequentially and automatically switching the images. The apparatus includes an adding unit which adds a transition effect at the time of switching display from a first image to a second image, an obtaining unit which obtains characteristic values indicative of the luminance of the first and second images, and a control unit which controls the adding unit to add a transition effect when the difference between the characteristic value of the first image and the characteristic value of the second image is equal to or larger than a predetermined threshold.
Type: Grant
Filed: August 2, 2010
Date of Patent: February 17, 2015
Assignee: Canon Kabushiki Kaisha
Inventor: Hirofumi Takei
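The luminance check above can be sketched as comparing the mean luma of two slides against a threshold. The Rec.601 luma weights and the threshold value are conventional choices, not taken from the patent.

```python
# Sketch: add a transition effect only when the brightness jump between
# consecutive slides exceeds a threshold (illustrative luma measure).

def mean_luma(pixels):
    """Mean Rec.601 luma of an image given as (r, g, b) tuples."""
    return sum(0.299 * r + 0.587 * g + 0.114 * b
               for r, g, b in pixels) / len(pixels)

def needs_transition_effect(img_a, img_b, threshold=50.0):
    """True when the luminance difference warrants a transition effect."""
    return abs(mean_luma(img_a) - mean_luma(img_b)) >= threshold

dark = [(10, 10, 10)] * 4
bright = [(200, 200, 200)] * 4
print(needs_transition_effect(dark, bright),  # large jump -> add a fade
      needs_transition_effect(dark, dark))    # no jump -> hard cut is fine
```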
-
Patent number: 8941667
Abstract: The invention generally provides a method and apparatus for up-converting the frame rate of a digital video signal, the method comprising: receiving a digital video signal containing a first frame and a second frame; finding, in one of the received frames, matches for objects in the other of the received frames; utilizing 3-dimensional position data in respect of the objects within the frames to determine 3-dimensional movement matrices for the matched objects; and, using the 3-dimensional movement matrices, determining the position of the objects in a temporally intermediate frame, thereby generating an interpolated frame temporally between the first and second frames.
Type: Grant
Filed: January 29, 2010
Date of Patent: January 27, 2015
Assignee: Vestel Elektronik Sanayi ve Ticaret A.S.
Inventors: Osman Serdar Gedik, Abdullah Aydin Alatan
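Placing a matched object in the temporally intermediate frame can be sketched as follows. Full 3-dimensional movement matrices also carry rotation; the translation-only case below is an illustrative simplification.

```python
# Sketch: position a matched object in the interpolated frame, halfway
# along its recovered 3-D motion (translation-only simplification).

def intermediate_position(p1, p2, t=0.5):
    """Object position at temporal fraction t between two input frames."""
    return tuple((1 - t) * a + t * b for a, b in zip(p1, p2))

# An object moving from (0, 0, 5) to (2, 0, 3) between the received
# frames sits halfway along that path in the up-converted middle frame.
print(intermediate_position((0.0, 0.0, 5.0), (2.0, 0.0, 3.0)))
```

Repeating this for every matched object, then rendering, yields the interpolated frame that doubles the effective frame rate.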
-
Patent number: RE45422
Abstract: Annotation techniques are provided. In one aspect, a method for processing a computer-based material is provided. The method comprises the following steps. The computer-based material is presented. One or more portions of the computer-based material are determined to be of interest to a user. The one or more portions are annotated to permit return to the one or more portions at a later time. In another aspect, a user interface is provided. The user interface comprises a computer-based material; a viewing focal area encompassing a portion of the computer-based material; and one or more indicia associated with and annotating the portion of the computer-based material.
Type: Grant
Filed: December 27, 2012
Date of Patent: March 17, 2015
Assignee: Loughton Technology, L.L.C.
Inventor: Christopher Vance Beckman