Motion Planning Or Control Patents (Class 345/474)
  • Patent number: 9269178
    Abstract: Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which when executed on a device by at least one processing unit provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time. Based on the identified first set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the first instance in time. The mapping application identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time. Based on the identified second set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the second instance in time.
    Type: Grant
    Filed: September 30, 2012
    Date of Patent: February 23, 2016
    Assignee: APPLE INC.
    Inventors: Patrick S. Piemonte, Aroon Pahwa, Christopher D. Moore
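To illustrate the idea of deriving a virtual-camera position from a set of attributes, here is a minimal sketch. The specific attribute set (a focus point on the route, a bearing, a pitch, and a distance) is an assumption for illustration, not the patent's claimed parameterization.

```python
import math

def camera_position(focus_x, focus_y, bearing_deg, pitch_deg, distance):
    """Place the camera behind the focus point along the bearing,
    raised according to the pitch. Illustrative attribute set only."""
    b = math.radians(bearing_deg)
    p = math.radians(pitch_deg)
    horizontal = distance * math.cos(p)
    return (focus_x - horizontal * math.sin(b),
            focus_y - horizontal * math.cos(b),
            distance * math.sin(p))
```

Evaluating this at two instances in time (two attribute sets) yields the two camera positions the abstract describes; the animation interpolates between them.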
  • Patent number: 9262855
    Abstract: An animation system is described herein that uses a transfer function on the progress of an animation that realistically simulates a bounce behavior. The transfer function maps normalized time and allows a user to specify both a number of bounces and a bounciness factor. Given a normalized time input, the animation system maps the time input onto a unit space where a single unit is the duration of the first bounce. In this coordinate space, the system can find the corresponding bounce and compute the start unit and end unit of this bounce. The system projects the start and end units back onto a normalized time scale and fits these points to a quadratic curve. The quadratic curve can be directly evaluated at the normalized time input to produce a particular output.
    Type: Grant
    Filed: March 18, 2010
    Date of Patent: February 16, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brandon C. Furtwangler, Saied Khanahmadi
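The abstract's transfer function can be sketched as follows. This version is modeled on the publicly documented behavior of the .NET `BounceEase` easing function (which matches this description); the default parameter values and the clamp on bounciness are assumptions.

```python
import math

def bounce_ease(t, bounces=3, bounciness=2.0):
    """Map normalized time t in [0, 1] to a bounce easing output:
    map t into unit space (one unit = first bounce's duration), find
    the enclosing bounce, fit its endpoints to a quadratic, evaluate."""
    bounciness = max(bounciness, 1.001)   # must exceed 1 so bounces decay
    pow_b = bounciness ** bounces
    one_minus = 1.0 - bounciness
    # total duration in unit space: geometric series plus a final half bounce
    total_units = (1.0 - pow_b) / one_minus + pow_b * 0.5
    unit_at_t = t * total_units
    # which bounce contains this unit-space time
    bounce = math.floor(math.log(-unit_at_t * one_minus + 1.0, bounciness))
    # project the bounce's start/end units back onto normalized time
    start = (1.0 - bounciness ** bounce) / (one_minus * total_units)
    end = (1.0 - bounciness ** (bounce + 1)) / (one_minus * total_units)
    # downward quadratic through (start, 0) and (end, 0), peak at the midpoint
    mid = 0.5 * (start + end)
    radius = mid - start
    amplitude = (1.0 / bounciness) ** (bounces - bounce)
    return (-amplitude / radius ** 2) * (t - mid - radius) * (t - mid + radius)
```

The output starts at 0, bounces with geometrically decaying amplitude, and ends at 1.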
  • Patent number: 9258168
    Abstract: One exemplary embodiment can describe a method for communicating. The method for communicating can include a step for identifying characteristics of a communications channel, a step for identifying a set of nonlinear functions used to generate waveforms, a step for assigning a unique numeric code to each waveform, a step for transmitting a numeric sequence as a series of waveforms, a step for receiving the series of waveforms, and a step for decoding the series of waveforms.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: February 9, 2016
    Assignee: ASTRAPI CORPORATION
    Inventor: Jerrold D. Prothero
  • Patent number: 9251618
    Abstract: The movement of skin on an animated target, such as a character or other object, is simulated via a simulation software application. The software application creates a finite element model (FEM) comprising a plurality of finite elements based on an animated target. The software application attaches a first constraint force to a node associated with a first finite element in the plurality of finite elements. The software application attaches a second constraint force to the node. The software application detects a movement of the first finite element that results in a corresponding movement of the node. The software application determines a new position for the node based on the movement of at least one of the first finite element, the first constraint force, and the second constraint force.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 2, 2016
    Assignee: PIXAR
    Inventors: Ryan Kautzman, Jiayi Chong, Patrick Coleman
  • Patent number: 9251617
    Abstract: A method for creating a computer simulation of an actor having a first foot, a second foot and a body including the steps of planting the first foot as a support foot along a space time-varying path. There is the step of stopping time regarding placement of the first foot. There is the step of changing posture of the first foot while the first foot is planted. There is the step of moving time into the future for the second foot as a lifted foot and changing posture for the lifted foot. An apparatus for creating a computer simulation of an actor having a first foot, a second foot and a body. A software program for creating a computer simulation of an actor having a first foot, a second foot and a body that performs the steps of planting the first foot as a support foot along a space time-varying path.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: February 2, 2016
    Inventor: Kenneth Perlin
  • Patent number: 9240067
    Abstract: An image animation method implementable in software includes: fitting a fitting model to at least an object in the image, and animating the object in accordance with a corresponding animation model, where the fitting model is at least as rigid as the animation model, and the animation model is no more rigid than the fitting model.
    Type: Grant
    Filed: October 14, 2010
    Date of Patent: January 19, 2016
    Assignee: Yeda Research & Development Co. Ltd.
    Inventors: Yosef Yomdin, Grigory Dinkin
  • Patent number: 9237332
    Abstract: Apparatus and method for reconstructing a high-density three-dimensional (3D) image are provided. The method includes: generating an initial 3D image by matching a first image captured using a first camera and a second image captured using a second camera; searching for a first area and a second area from the initial 3D image by using a number of characteristic points included in the initial 3D image; detecting a plane from a divided first area; filtering a divided second area; and synthesizing the detected plane and the filtered second area.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: January 12, 2016
    Assignee: Hanwha Techwin Co., Ltd.
    Inventor: Soon-Min Bae
  • Patent number: 9230292
    Abstract: A method for requesting an on-demand service on a computing device is provided. One or more processors determine the current location of the computing device. A multistate selection feature of a plurality of service options for providing the on-demand service is presented on the display of the computing device. The multistate selection feature enables a user to select a service option that is available within a region that includes the current location to provide the on-demand service. In response to the user selecting one of the plurality of service options, a summary user interface is presented on the display to provide region-specific information about the on-demand service based on the selected service option.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: January 5, 2016
    Assignee: Uber Technologies, Inc.
    Inventors: Shalin Amin, Mina Radhakrishnan, Paul-Phillip Holden, Curtis Chambers
  • Patent number: 9215272
    Abstract: Distributing meeting data from an interactive whiteboard projector to at least one computer of a meeting member, comprises: in the interactive whiteboard projector, using a processor to save meeting data, generate a password, display a screen with a URL of the projector and the password and use a web server to transfer the meeting data from the interactive whiteboard projector to the at least one computer.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: December 15, 2015
    Assignee: Seiko Epson Corporation
    Inventors: Steve Nelson, Victor Ivashin
  • Patent number: 9208613
    Abstract: Actions of a player character in a virtual three-dimensional space are determined for each of the extremities. The position of the end portion of each extremity is determined in the local coordinates of the extremity, with the fixed portion being the reference position. The position of the end portion is determined within a pre-defined range of access, then the joint angle of the intermediate portion is fixed, and the position of the intermediate portion is determined from the angle of rotation about the axis connecting the fixed portion and the end portion. The position of the intermediate portion is determined as angular information within the range of motion of the intermediate portion, which is defined in accordance with the position of the end portion. The shape of each extremity of the character is determined by the positional information about the end portion and the angular information about the intermediate portion.
    Type: Grant
    Filed: February 13, 2012
    Date of Patent: December 8, 2015
    Assignee: KABUSHIKI KAISHA SQUARE ENIX
    Inventor: Tomohiko Mukai
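A simplified planar analogue of the scheme above: once the end portion's position is fixed, the intermediate joint's interior angle is pinned by the segment lengths (in 3D a swivel angle about the root-to-end axis would remain free, as the abstract notes). The segment lengths and 2D restriction are assumptions for illustration.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Solve a planar two-segment limb rooted at the origin for target
    (tx, ty). Returns (shoulder_angle, elbow_interior_angle) in radians;
    targets outside the reachable annulus are clamped."""
    def clamp(v):
        return max(-1.0, min(1.0, v))
    d = math.hypot(tx, ty)
    d = max(abs(l1 - l2), min(l1 + l2, d))
    # law of cosines: interior angle at the intermediate joint
    elbow = math.acos(clamp((l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)))
    shoulder = math.atan2(ty, tx) - math.acos(
        clamp((l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)))
    return shoulder, elbow

def end_effector(l1, l2, shoulder, elbow):
    """Forward kinematics for the chain above (sanity check)."""
    jx, jy = l1 * math.cos(shoulder), l1 * math.sin(shoulder)
    a = shoulder + math.pi - elbow
    return jx + l2 * math.cos(a), jy + l2 * math.sin(a)
```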
  • Patent number: 9203880
    Abstract: Positions of avatars in a virtual world may be communicated to clients using multiple bitcode resolutions to minimize required communication bandwidth between a virtual world server and virtual world clients, thereby allowing transmission of all avatars' positions to every other player. Lower resolution bitcodes may be based on a lower resolution grid overlaid on the virtual world, whereas higher resolution bitcodes may be based on a higher resolution grid overlaid on the virtual world. In one example, a virtual world server may determine the bitcode resolution to use based on a distance between an avatar to which the position information is to be sent and other avatars in the virtual world. Resolution may include spatial resolution, where nearer avatars' locations are provided with higher resolution bitcodes, or temporal resolution, where the transmission frequency of position information is greater for nearer avatars.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: December 1, 2015
    Assignee: Jagex Ltd.
    Inventor: Andrew Gower
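The dual-resolution bitcode idea can be sketched as grid quantization at two bit widths, with the width chosen by distance. The specific bit counts and distance cutoff below are assumptions, not values from the patent.

```python
def encode(x, y, world, bits):
    """Quantize a position onto a 2**bits x 2**bits grid over the world."""
    cells = 1 << bits
    qx = min(int(x / world * cells), cells - 1)
    qy = min(int(y / world * cells), cells - 1)
    return (qx << bits) | qy

def decode(code, world, bits):
    """Return the center of the grid cell the bitcode names."""
    cells = 1 << bits
    qx, qy = code >> bits, code & (cells - 1)
    return ((qx + 0.5) * world / cells, (qy + 0.5) * world / cells)

def bits_for_distance(dist, near=64.0):
    """Nearby avatars get the fine grid; far ones the coarse grid."""
    return 10 if dist < near else 5
```

A 10-bit code costs 20 bits per avatar versus 10 for the coarse code, so distant avatars' positions can be broadcast at half the bandwidth.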
  • Patent number: 9196016
    Abstract: Systems and methods for improving video stutter in high resolution progressive video captured with fast exposure times. In a first approach, digital video is captured with fast shutter speeds that cause objects moving within the frame to appear motionless. The video codec generates motion information that may be utilized to add an artificial motion blur to each frame of the digital video during processing in a digital video pipeline. The motion blur creates the appearance that an object is moving in the frame. In a second approach, the lens assembly of the digital camera includes an electronically controlled filter that attenuates the light reaching an image sensor such that the shutter speeds may be decreased in order to capture motion blur. The electronically controlled filter may be a liquid crystal display (LCD) device that is set to a plurality of different transparency levels based on a target exposure value.
    Type: Grant
    Filed: January 18, 2012
    Date of Patent: November 24, 2015
    Assignee: LinkedIn Corporation
    Inventor: John Furlan
  • Patent number: 9191579
    Abstract: The invention concerns a computer-implemented method for tracking and reshaping a human-shaped figure in a digital video comprising the steps: acquiring a body model of the figure from the digital video, adapting a shape of the body model, modifying frames of the digital video based on the adapted body model, and outputting the digital video.
    Type: Grant
    Filed: November 29, 2011
    Date of Patent: November 17, 2015
    Assignee: MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN
    Inventors: Hans-Peter Seidel, Christian Theobalt, Thorsten Thormaehlen, Arjun Jain
  • Patent number: 9177409
    Abstract: A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, wherein for a first object part there is one or more associated second object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to the first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part, wherein applying control to the first object part comprises moving the one or more second object parts within the virtual world to compensate for the movement of the first object part in the virtual world caused by applying the control to the first object part.
    Type: Grant
    Filed: April 29, 2010
    Date of Patent: November 3, 2015
    Assignee: NaturalMotion Ltd
    Inventors: Antoine Félix Robert Rennuit, Thomas Daniel Lowe
  • Patent number: 9170161
    Abstract: An image capture device and method for correcting output levels thereof, made up of an image shifter for moving an image position on a light receiving surface, having a differential calculating section for calculating the differential between the output levels of pixels receiving light from the same part of an image on a light-receiving surface before and after moving the image position; an offset amount calculating section for calculating the amount of offset to the output levels of pixels in a pixel line by sequentially adding the differences in output levels in pixel lines arrayed in the direction of image position movement; and a correcting section for correcting pixel output levels by equalizing the variability in output levels between pixels in a pixel line based on the amount of pixel offset in the pixel line.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: October 27, 2015
    Assignee: Nippon Avionics Co., Ltd.
    Inventor: Yukiko Shibata
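The offset-accumulation step can be sketched directly: per-line level differences (measured from pixels that saw the same image part before and after the shift) are prefix-summed into offsets, which are then subtracted to equalize the lines. A minimal 1-D sketch, not the device's actual pipeline.

```python
def line_offsets(diffs):
    """Accumulate per-line output-level differences into offsets.
    diffs[i] is the measured difference between pixel line i+1 and line i."""
    offsets = [0.0]
    for d in diffs:
        offsets.append(offsets[-1] + d)
    return offsets

def correct(levels, diffs):
    """Equalize line-to-line variability by subtracting each line's offset."""
    return [v - o for v, o in zip(levels, line_offsets(diffs))]
```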
  • Patent number: 9159140
    Abstract: Techniques described herein use signal analysis to detect and analyze repetitive user motion that is captured in a 3D image. The repetitive motion could be the user exercising. One embodiment includes analyzing image data that tracks a user performing a repetitive motion to determine data points for a parameter that is associated with the repetitive motion. The different data points are for different points in time. A parameter signal of the parameter versus time that tracks the repetitive motion is formed. The parameter signal is divided into brackets that delineate one repetition of the repetitive motion from other repetitions of the repetitive motion. A repetition in the parameter signal is analyzed using a signal processing technique. Curve fitting and/or autocorrelation may be used to analyze the repetition.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 13, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan R. Hoof, Daniel G. Kennett, Anis Ahmad
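The autocorrelation step can be sketched as follows: mean-center the parameter signal, autocorrelate, skip past the main lobe, and take the lag with the strongest correlation as one repetition's length. The lobe-skipping heuristic is an assumption; the abstract only names autocorrelation and curve fitting.

```python
def estimate_period(samples):
    """Estimate the repetition period (in samples) of a parameter signal."""
    n = len(samples)
    mean = sum(samples) / n
    x = [v - mean for v in samples]
    # autocorrelation for lags up to half the signal length
    ac = [sum(x[i] * x[i + lag] for i in range(n - lag))
          for lag in range(n // 2)]
    lag = 1
    while lag < len(ac) and ac[lag] > 0:   # move past the zero-lag lobe
        lag += 1
    return max(range(lag, len(ac)), key=lambda k: ac[k])
```

The returned lag delineates one repetition from the next, i.e. the bracket width the abstract describes.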
  • Patent number: 9153068
    Abstract: A method for reducing the number of samples tested for rendering a screen space region of an image includes constructing a trajectory of a primitive extending in an image which is to be rendered. A bounding volume is constructed for a screen space region of the image, the bounding volume characterized as having a bound in a non-screen space dimension which is defined as a function of the primitive's trajectory. The bounding volume is further characterized as overlapping a portion of the screen space region which is to be rendered. One or more sample points which are located within the screen space region, and which are not overlapped by the bounding volume are excluded from testing.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: October 6, 2015
    Assignee: NVIDIA CORPORATION
    Inventors: Samuli Laine, Tero Karras, Jaakko Lehtinen, Timo Aila
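A toy version of a non-screen-space (time) bound: for a primitive whose bounding circle moves linearly over the frame interval, intersect per-axis intervals to get the time range in which it can overlap a tile; samples with times outside that range need not be tested. Linear motion and the 2D setting are simplifying assumptions.

```python
def time_bounds(p0, p1, radius, lo, hi):
    """Time interval in [0, 1] during which a circle of the given radius,
    moving linearly from center p0 to p1, can overlap the axis-aligned
    tile with corners lo and hi. Returns None if it never overlaps."""
    t0, t1 = 0.0, 1.0
    for axis in range(2):
        a, b = p0[axis], p1[axis]
        lo_a, hi_a = lo[axis] - radius, hi[axis] + radius
        v = b - a
        if abs(v) < 1e-12:
            if not (lo_a <= a <= hi_a):
                return None          # stationary on this axis, never inside
            continue
        ta, tb = (lo_a - a) / v, (hi_a - a) / v
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
    return (t0, t1) if t0 <= t1 else None
```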
  • Patent number: 9147270
    Abstract: A method for reducing the number of samples tested for rendering a screen space region of an image includes constructing a trajectory of a primitive in a three dimensional coordinate system, the coordinate system including a screen space dimension, a lens dimension and a time dimension. A bounding volume is constructed for a screen space region which is to be rendered, the bounding volume overlapping a portion of the screen space region. The bounding volume is defined according to a plurality of bounding planes which extend in the three dimensional coordinate system, whereby the bounding planes are determined as a function of the trajectory of the primitive. One or more sample points which are located within the screen space region, and which are not overlapped by the bounding volume are excluded from testing.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: September 29, 2015
    Assignee: NVIDIA CORPORATION
    Inventors: Jaakko Lehtinen, Timo Aila, Samuli Laine
  • Patent number: 9142043
    Abstract: A method for reducing the number of samples tested for rendering a screen space region of an image includes constructing a trajectory of a primitive extending within an image which is to be rendered. A bounding volume is constructed for a screen space region of the image, the bounding volume characterized as having a bound in a non-screen space dimension which is defined as a function of the primitive's trajectory. The bounding volume is further characterized as overlapping a portion of the screen space region which is to be rendered. One or more sample points which are located within the screen space region, and which are not overlapped by the bounding volume are excluded from testing.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: September 22, 2015
    Assignee: NVIDIA CORPORATION
    Inventors: Timo Aila, Samuli Laine, Tero Karras, Jaakko Lehtinen, Peter Shirley
  • Patent number: 9142259
    Abstract: Provided is an editing device including an input material timeline area display control unit that executes control such that an input material timeline in which an event is arranged is displayed, using a material which is an element of selected content as the event, and an output material timeline area display control unit that executes control such that an output material timeline in which an event which is being edited or has been edited is arranged is displayed. The input material timeline and the output material timeline have a same time axis, and the input material timeline area display control unit controls a display of the input material timeline such that the event arranged in the input material timeline is expressed by the same time axis as the event arranged in the output material timeline.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: September 22, 2015
    Assignee: SONY CORPORATION
    Inventors: Ryoichi Sakuragi, Yukiko Nishimura
  • Patent number: 9050538
    Abstract: An object placement managing unit (44) controls whether or not to change the position or direction of a first object according to a positional relationship between a passive area determined based on the position of the first object, and a judging area determined based on the position of a second object. The object placement managing unit (44) controls whether or not to change the position or direction of the second object according to a positional relationship between an active area determined based on the position of the first object and differing from the passive area, and the judging area.
    Type: Grant
    Filed: April 3, 2012
    Date of Patent: June 9, 2015
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventors: Hajime Sugiyama, Hitoshi Ishikawa, Chihiro Kanno, Tomohisa Kano
  • Patent number: 9046919
    Abstract: Provided herein is a wearable interface for providing information from a user to a control unit comprising at least one wearable patch in communication with the control unit, wherein the patch is adaptable to detect object data and transmit the object data to the control unit. Further provided herein is a parameter determining patch for detecting object data from an object, the parameter determining sensor comprising at least one wearable data obtaining patch adaptable to obtain data from an object; and at least one transmitter for transmitting object data. Also included herein are a system and method of use of the invention.
    Type: Grant
    Filed: August 19, 2008
    Date of Patent: June 2, 2015
    Assignee: HMicro, Inc.
    Inventor: Ali Niknejad
  • Publication number: 20150145870
    Abstract: A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 28, 2015
    Inventors: Tovi GROSSMAN, George FITZMAURICE, Rubaiat Habib KAZI, Fanny CHEVALIER, Shengdong ZHAO
  • Patent number: 9041717
    Abstract: Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame so that it appears in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
    Type: Grant
    Filed: September 12, 2011
    Date of Patent: May 26, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Kaschalk, Eric A. Daniels, Brian S. Whited, Kyle D. Odermatt, Patrick T. Osborne
  • Patent number: 9030479
    Abstract: Disclosed are a system and a method for motion editing multiple synchronized characters. The motion editing system comprises: a Laplacian motion editor which edits a spatial route of inputted character data according to user conditions, and processes the distortion of the interaction time; and a discrete motion editor which applies a discrete transformation while the character data is processed.
    Type: Grant
    Filed: June 19, 2009
    Date of Patent: May 12, 2015
    Assignee: SNU R&DB Foundation
    Inventors: Jehee Lee, Manmyung Kim
  • Patent number: 9019278
    Abstract: Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: April 28, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Jessica Kate Hodgins, Katsu Yamane, Yuka Ariki
  • Patent number: 9007362
    Abstract: A virtual environment is generated in a server where at least one object representative of an entity interacts with other objects or attributes of the environment. A desired frame rate for rendering the virtual environment is identified, and compared to a maximum achievable frame rate at a client device. If the maximum achievable frame rate is slower than the desired frame rate, the number of objects displayed within the virtual environment is modified, in accordance with one or more rule sets, until the maximum achievable frame rate is at or near the desired frame rate. In addition, a server may provide and synchronize output for clients participating in the virtual environment using different target frame rates.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: April 14, 2015
    Inventors: Brian Mark Shuster, Aaron Burch, Gary S. Shuster
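One plausible rule set for the object-count adjustment reads as a small control loop: cut the count proportionally when the achievable rate falls short of the desired rate, and grow it gently when there is headroom. The proportional rule and the 1.1 growth factor are assumptions for illustration.

```python
def adjust_object_count(count, achievable_fps, desired_fps, min_count=1):
    """Scale the number of displayed objects toward the desired frame rate."""
    if achievable_fps < desired_fps:
        # shortfall: shed objects in proportion to the rate deficit
        return max(min_count, int(count * achievable_fps / desired_fps))
    # headroom: restore objects gradually
    return int(count * 1.1)
```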
  • Patent number: 9007381
    Abstract: An exemplary method includes a transition animation system detecting a screen size of a display screen associated with a computing device executing an application, automatically generating, based on the detected screen size, a plurality of animation step values each corresponding to a different animation step included in a plurality of animation steps that are to be involved in an animation of a transition of a user interface associated with the application into the display screen, and directing the computing device to perform the plurality of animation steps in accordance with the generated animation step values. Corresponding methods and systems are also disclosed.
    Type: Grant
    Filed: September 2, 2011
    Date of Patent: April 14, 2015
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jian Huang, Jack J. Hao
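Generating animation step values from a detected screen size can be sketched as below. The quadratic ease-out shape and the step count are assumptions; the point is only that the same step generator yields different value sets for different screen widths.

```python
def animation_steps(screen_width, n_steps):
    """Per-step x-offsets for sliding a UI in from off-screen:
    step 0 is fully off-screen (offset = screen_width), the last is 0."""
    return [round(screen_width * (1 - i / n_steps) ** 2)
            for i in range(n_steps + 1)]
```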
  • Patent number: 9007364
    Abstract: Disclosed are examples of methods, apparatus, systems, and computer program products for providing an augmented reality display of an image with record data. In one example, image data is received at one or more processors. A request message is sent requesting record data associated with the image data from one or more of a plurality of records stored in a database system. In some implementations, when the requested record data is received, a graphical display of the record data in combination with the image can be provided on a display device.
    Type: Grant
    Filed: August 24, 2012
    Date of Patent: April 14, 2015
    Assignee: salesforce.com, inc.
    Inventor: Samuel W. Bailey
  • Patent number: 9001132
    Abstract: A computer-implemented method for providing retargeting of actor motion includes: receiving, in a computer system, an input generated by user selection of at least one of multiple real-time constraint scenarios, each real-time constraint scenario corresponding to a relationship between motion capture information from an actor and a position or movement of a character driven by the motion capture information; activating, based on the selected real-time constraint scenario, one or more of multiple limb solvers for the character which determines at least a first joint angle for a corresponding character limb; registering, for at least one character limb where the corresponding limb solver is not activated, a corresponding joint angle from the motion capture information as a second joint angle; and determining a root location for the character based on at least the first and second joint angles.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: April 7, 2015
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Kevin Wooley, Kiran S. Bhat, Michael Sanders
  • Patent number: 9001129
    Abstract: A processing apparatus for creating an avatar is provided. The processing apparatus calculates skeleton sizes of joints of the avatar and local coordinates corresponding to sensors attached to a target user, by minimizing a sum of a difference function and a skeleton prior function, the difference function representing a difference between a forward kinematics function regarding the joints with respect to reference poses of the target user and positions of the sensors, and the skeleton prior function based on statistics of skeleton sizes with respect to reference poses of a plurality of users.
    Type: Grant
    Filed: October 19, 2011
    Date of Patent: April 7, 2015
    Assignees: Samsung Electronics Co., Ltd., Texas A&M University System
    Inventors: Taehyun Rhee, Inwoo Ha, Dokyoon Kim, Xiaolin Wei, Jinxiang Chai, Huajun Liu
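A 1-D toy of the abstract's objective: minimizing the sum of a data term, sum over i of (l - d_i)^2, and a skeleton prior term, w * (l - mean)^2, gives a weighted-average closed form. In the patent the data term is a full forward-kinematics residual over all joints and sensors; the scalar reduction is an assumption for illustration.

```python
def estimate_bone_length(measured, prior_mean, prior_weight):
    """Closed-form minimizer of sum((l - d_i)^2) + w * (l - prior_mean)^2:
    a weighted average of the measurements and the prior."""
    n = len(measured)
    return (sum(measured) + prior_weight * prior_mean) / (n + prior_weight)
```

With no prior weight the estimate is the plain measurement mean; a heavier prior pulls the segment length toward the population statistics.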
  • Patent number: 8988437
    Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
    Type: Grant
    Filed: March 20, 2009
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
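The transition between a pre-canned animation and captured motion can be sketched as a per-joint crossfade over a blend window. The dict-of-joints pose representation and the smoothstep weight are assumptions; the abstract speaks only of blending techniques at transition points.

```python
def smoothstep(u):
    """C1-continuous blend weight on [0, 1]."""
    u = max(0.0, min(1.0, u))
    return u * u * (3.0 - 2.0 * u)

def blend_pose(canned, captured, t, t_start, t_end):
    """Crossfade per-joint positions from a pre-canned pose toward captured
    motion between t_start and t_end; outside the window one source wins.
    Poses map joint name -> (x, y, z)."""
    w = smoothstep((t - t_start) / (t_end - t_start))
    return {j: tuple(a + w * (b - a) for a, b in zip(canned[j], captured[j]))
            for j in canned}
```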
  • Patent number: 8990057
    Abstract: In an embodiment, an element, that represents an entity in a system, is generated. The generated element may be incorporated in a network that represents the system. The generated element may include geometry information about a geometry of the entity. The geometry information may be used in one or more computations associated with a simulation or an analysis of the system. The element may have a frame port that exposes a frame. The frame may represent at least a position and an orientation in a two dimensional or three dimensional space with respect to another frame in the network. The other frame in the network may be a reference frame that may be defined by a “world” that the system resides in. The generated element may be incorporated into the network by connecting the frame port to the network.
    Type: Grant
    Filed: February 12, 2013
    Date of Patent: March 24, 2015
    Assignee: The MathWorks, Inc.
    Inventors: Brian Mirtich, Jeffrey Wendlandt
  • Patent number: 8988438
    Abstract: Provided are an apparatus and a method of effectively creating real-time movements of a three dimensional virtual character by use of a small number of sensors. More specifically, the motion capture method, which maps movements of a human body into a skeleton model to generate movements of a three-dimensional (3D) virtual character, includes measuring a distance between a portion of a human body to which a measurement sensor is positioned and a reference position and rotation angles of the portion, and estimating relative rotation angles and position coordinates of each portion of the human body by use of the measured distance and rotation angles.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: March 24, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Won-chul Bang, Hyong-euk Lee
  • Patent number: 8988422
    Abstract: Techniques are disclosed for augmenting hand-drawn animation of human characters with three-dimensional (3D) physical effects to create secondary motion. Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These 3D effects are largely passive and tend to be time consuming to animate by hand, yet most are very effectively simulated in current animation software. The techniques enable hand-drawn characters to interact with simulated objects such as cloth and clothing, balls and particles, and fluids. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects.
    Type: Grant
    Filed: December 17, 2010
    Date of Patent: March 24, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Jessica Kate Hodgins, Eakta Jain, Yaser Sheikh
  • Patent number: 8988202
    Abstract: In an apparatus and method for processing a virtual world, haptic information regarding a virtual object in the virtual world, the haptic information corresponding to sensed information, is extracted and transmitted to a haptic feedback device. Accordingly, interaction between a real world and the virtual world is achieved. The processing speed of the haptic information with respect to the virtual object may be increased by varying data structures according to types of the virtual object.
    Type: Grant
    Filed: April 8, 2011
    Date of Patent: March 24, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Joon Han, Seung Ju Han, Won Chul Bang, Do Kyoon Kim
  • Patent number: 8982053
    Abstract: A second user interface screen is presented to a user in response to detection of a predefined user motion associated with a first user interface screen. In one embodiment, a method includes: presenting, via a display of an end-user device, a first user interface screen; detecting, via a user input device of the end-user device, a predefined user motion associated with the first screen, wherein the user motion is substantially in a direction of movement in a plane parallel to the display; and in response to detecting the user motion, presenting a second user interface screen via the display, the second screen including information corresponding to the first screen.
    Type: Grant
    Filed: May 26, 2011
    Date of Patent: March 17, 2015
    Assignee: Yahoo! Inc.
    Inventors: Michael Holzer, Jeffrey Bonforte
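The "predefined user motion substantially in a plane parallel to the display" in the abstract above is, in practice, a swipe classifier. A minimal sketch, assuming a trace of touch samples in screen coordinates; the thresholds and function name are illustrative, not from the patent.

```python
def classify_swipe(points, min_dist=50.0, axis_ratio=2.0):
    """Classify a touch trace as a directional swipe.

    points: list of (x, y) samples in screen coordinates.
    Returns 'left', 'right', 'up', 'down', or None when the motion is
    too short or too diagonal to count. Thresholds are illustrative."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    # Dominantly horizontal motion of sufficient length
    if abs(dx) >= min_dist and abs(dx) >= axis_ratio * abs(dy):
        return 'right' if dx > 0 else 'left'
    # Dominantly vertical motion of sufficient length
    if abs(dy) >= min_dist and abs(dy) >= axis_ratio * abs(dx):
        return 'down' if dy > 0 else 'up'
    return None
```

On detecting, say, `'right'`, the UI layer would present the second screen with information corresponding to the first.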
  • Patent number: 8982122
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: March 17, 2015
    Assignee: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 8982132
    Abstract: Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: March 17, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Joaquin Cruz Blas, Jr., James W. Doubek
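The value-template mechanism in the abstract above — a function that combines a string with a numerical value to render an animation — can be sketched as follows. The names (`make_value_template`, `animate`) and the CSS-style template strings are assumptions for illustration.

```python
def make_value_template(template):
    """Return a renderer that splices a numeric value into a string
    template such as 'translateX({0}px)' or 'rotate({0}deg)' --
    the abstract's 'combination of a string with a numerical value'."""
    def render(value):
        return template.format(round(value, 3))
    return render

def animate(element, prop, template, value_at, t):
    """Set element[prop] from the templated numeric value at
    normalized time t (element is modeled as a plain dict here)."""
    element[prop] = template(value_at(t))
    return element

# Move an element 200px across the timeline; sample the midpoint.
tmpl = make_value_template('translateX({0}px)')
el = animate({}, 'transform', tmpl, lambda t: 200 * t, 0.5)
```

A timeline data structure would hold many such commands and evaluate each at the current playhead time.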
  • Patent number: 8979652
    Abstract: Some aspects discussed herein may provide for a dynamic and real-time analysis of a virtual environment around a player as the player traverses the virtual environment. The analysis may determine candidate hooks (interaction points) for player movement in the virtual environment, and a best hook may be selected for performing a movement action. The movement action may be selected based on the selected hook, providing fluid and natural movement through the virtual environment. The candidate hooks may be determined based on a plurality of three dimensional ray traces originating from points along a left side and a right side of the player and at a range of heights. Collision points from the ray traces may be analyzed to determine whether they may support movement interaction, and valid points may be used to create the candidate hooks.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: March 17, 2015
    Assignee: TECHLAND Sp. z o.o.
    Inventors: Adrian Tomasz Ciszewski, Bartosz Andrzej Kulon, Mikolaj Filip Kulikowski
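The hook-finding pass in the abstract above — rays cast from points along the player's left and right sides at a range of heights, with collision points becoming candidate hooks — can be sketched against a toy scene. The wall representation (x-planes) and all names are simplifying assumptions; a real engine would ray-trace against full level geometry.

```python
def cast_ray(origin, dx, walls):
    """Intersect a ray fired along the x-axis with vertical walls, each
    given as an x-plane. Returns the nearest hit point or None.
    A toy stand-in for a real 3-D scene ray trace."""
    ox, oy, oz = origin
    hits = []
    for wall_x in walls:
        if dx == 0:
            continue
        t = (wall_x - ox) / dx
        if t > 0:                      # only hits in front of the origin
            hits.append((t, (wall_x, oy, oz)))
    return min(hits)[1] if hits else None

def candidate_hooks(player_pos, walls, heights, side_offset=0.4):
    """Cast rays from points on the player's left and right sides at a
    range of heights; each collision point becomes a candidate hook."""
    px, py, pz = player_pos
    hooks = []
    for h in heights:
        for side in (-side_offset, side_offset):
            hit = cast_ray((px, py + side, pz + h), 1.0, walls)
            if hit is not None:
                hooks.append(hit)
    return hooks
```

A selection step would then score these candidates (reachability, height, alignment with input direction) and pick the best hook to drive the movement action.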
  • Patent number: 8976180
    Abstract: A 3D graphics rendering method, medium and system that provide a motion blur effect. Clone objects of an object are generated based on animation information of the object to which a motion blur effect is to be applied and 3D graphics data including the generated clone objects is rendered, thereby providing a motion blur effect without using accumulation buffers.
    Type: Grant
    Filed: February 24, 2014
    Date of Patent: March 10, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sang-oak Woo
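The clone-object approach in the abstract above amounts to sampling the object's animated position at several instants inside the shutter interval and rendering one clone per sample, so the smear emerges from overdraw rather than an accumulation buffer. A minimal sketch; the function name and parameters are illustrative.

```python
def clone_positions(position_at, frame_time, shutter, num_clones):
    """Sample an object's animated position at evenly spaced instants
    across the shutter interval ending at frame_time. Rendering one
    clone per sampled position approximates motion blur without
    accumulation buffers.

    position_at: function mapping time -> position (any type).
    """
    if num_clones < 2:
        return [position_at(frame_time)]
    step = shutter / (num_clones - 1)
    return [position_at(frame_time - shutter + i * step)
            for i in range(num_clones)]
```

In a renderer, each clone would typically also get reduced opacity so the composite reads as a blur rather than discrete copies.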
  • Patent number: 8976184
    Abstract: A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or mild interest.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: March 10, 2015
    Assignee: Nintendo Co., Ltd.
    Inventors: Henry Sterchi, Jeff Kalles, Shigeru Miyamoto, Denis Dyack, Carey Murray
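The head-turn computation the abstract above alludes to can be sketched as a clamped look-at: find the yaw from the character's facing direction to the tagged item and limit it to a plausible neck range. The 70-degree limit and all names are assumptions for illustration.

```python
import math

def head_yaw_toward(char_pos, char_facing, item_pos,
                    max_turn=math.radians(70)):
    """Yaw the character's head toward a tagged item, clamped to a neck
    limit. char_pos/item_pos are (x, y); char_facing is the body's yaw
    in radians. Returns the head yaw relative to the body."""
    dx = item_pos[0] - char_pos[0]
    dy = item_pos[1] - char_pos[1]
    desired = math.atan2(dy, dx) - char_facing
    # Wrap the relative angle into [-pi, pi]
    desired = (desired + math.pi) % (2 * math.pi) - math.pi
    return max(-max_turn, min(max_turn, desired))
```

An emotional tag would then modulate the result — e.g. a feared item might also trigger a recoil animation, while an "indifferent" tag might skip the turn entirely.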
  • Patent number: 8963927
    Abstract: A method for controlling presentation of three dimensional (3D) animation includes rendering a 3D animation sequence including a 3D vertex-baked model which is derived from a 3D animation file including vertex data of every vertex for every 3D image frame in the 3D animation sequence. The 3D vertex-baked model includes a control surface that provides a best-fit 3D shape to vertices of the 3D vertex-baked model.
    Type: Grant
    Filed: December 15, 2010
    Date of Patent: February 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rudy Poot, Robert Crocco, Jr., Chris Miles
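The "control surface that provides a best-fit 3D shape to vertices" in the abstract above can be illustrated with the crudest possible fit: a sphere centered at the vertex centroid with radius equal to the mean distance to it. This is a stand-in chosen for brevity, not the patent's actual fitting method.

```python
def best_fit_sphere(vertices):
    """Fit a crude control sphere to baked vertices: center at the
    centroid, radius = mean distance from vertices to the centroid.
    vertices: list of (x, y, z) tuples."""
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    r = sum(((v[0] - cx) ** 2 + (v[1] - cy) ** 2 + (v[2] - cz) ** 2) ** 0.5
            for v in vertices) / n
    return (cx, cy, cz), r
```

A production pipeline would fit a richer parametric surface per frame so the control shape tracks the baked animation.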
  • Patent number: 8963914
    Abstract: A computer system and related method for analysis of medical symptoms on a body of a living being is provided. The system comprises a first portion of computer readable medium which stores digital representation of a plurality of tissue layers in the body. The system also includes a processor unit and a second portion of computer readable medium. The second portion of the computer readable medium stores instructions executable by the processor unit to perform the steps of displaying at least one tissue layer from the plurality of the tissue layers on a display device, receiving a first input indicating a locale on the displayed at least one tissue layer, receiving a second input associating a medical symptom description to the indicated locale, and including information about the medical symptom description and the associated locale in the digital representation of the body. Related methods are also provided.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: February 24, 2015
    Inventors: Rishi Rawat, Radhika Rawat, Rachna Rawat, Rajeev Rawat
  • Patent number: 8963951
    Abstract: The aim is to allow a viewer to easily understand the details of a moving image shot by an image capturing apparatus when the moving image is browsed. A camerawork detecting unit 120 detects the amount of movement of an image capturing apparatus at the time of shooting a moving image input from a moving-image input unit 110, and, on the basis of the amount of movement of the image capturing apparatus, calculates affine transformation parameters for transforming an image on a frame-by-frame basis. An image transforming unit 160 performs an affine transformation of at least one of the captured image and a history image held in an image memory 170, on the basis of the calculated affine transformation parameters. An image combining unit 180 combines, on a frame-by-frame basis, the captured image and the history image, at least one of which has been transformed, and causes the image memory 170 to hold a composite image.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: February 24, 2015
    Assignee: Sony Corporation
    Inventor: Shingo Tsurumi
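The per-frame affine parameters in the abstract above compose by matrix multiplication: chaining each frame's transform yields the cumulative camera motion used to align a captured frame against the history image. A minimal sketch with 3x3 homogeneous matrices as nested lists; names are illustrative.

```python
def mat_mul(a, b):
    """Multiply two 3x3 affine matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_affine(m, point):
    """Transform a 2-D point by a 3x3 affine matrix."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def accumulate(per_frame):
    """Compose per-frame affine parameters into one cumulative
    camera-motion transform, as when aligning each captured frame
    with the history image."""
    acc = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for m in per_frame:
        acc = mat_mul(acc, m)
    return acc

def translation(tx, ty):
    """Affine matrix for a pure translation (a degenerate special case
    of the patent's full affine parameters)."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
```

With the cumulative transform in hand, the combining step warps either the new frame or the held history image before compositing.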
  • Patent number: 8957907
    Abstract: A surface definition module of a hair/fur pipeline may be used to generate a shape defining a surface and an associated volume. A control hair module may be used to fill the volume with control hairs and an interpolation module may be used to interpolate final hair strands from the control hairs.
    Type: Grant
    Filed: May 11, 2007
    Date of Patent: February 17, 2015
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
    Inventors: Armin Walter Bruderlin, Francois Chardavoine, Clint Chun, Gustav Melich
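The interpolation step in the abstract above — final hair strands derived from control hairs — can be sketched as an inverse-distance blend of the nearest control hairs, assuming each control hair is a polyline with the same vertex count. All names and the weighting scheme are assumptions for illustration.

```python
def interpolate_strand(root, control_hairs, k=2):
    """Blend the k nearest control hairs into a final strand rooted at
    `root`, weighting by inverse distance of the control-hair roots.

    root: (x, y) or (x, y, z) root position of the final strand.
    control_hairs: list of polylines (lists of points, equal length).
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    nearest = sorted(control_hairs, key=lambda h: dist(h[0], root))[:k]
    weights = [1.0 / (dist(h[0], root) + 1e-6) for h in nearest]
    total = sum(weights)
    strand = []
    for i in range(len(nearest[0])):
        # Weighted average of the i-th vertex of each control hair
        pt = tuple(sum(w * h[i][d] for w, h in zip(weights, nearest)) / total
                   for d in range(len(root)))
        strand.append(pt)
    return strand
```

In a pipeline, this runs once per final hair across the surface volume, so a few hundred control hairs can drive hundreds of thousands of rendered strands.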
  • Patent number: 8957900
    Abstract: Animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
    Type: Grant
    Filed: December 13, 2010
    Date of Patent: February 17, 2015
    Assignee: Microsoft Corporation
    Inventors: Bonny Lau, Song Zou, Wei Zhang, Brian Beck, Jonathan Gleasman, Pai-Hung Chen
  • Patent number: 8957914
    Abstract: A method for providing virtual world functionality to a user of a base virtual world having base virtual world functionality and a base world list of base virtual world users includes providing a virtual world layer, communicating to the base virtual world that the virtual world layer will overlay the base virtual world, and adding the virtual world layer to the base world list in order to register the virtual world layer with the base virtual world.
    Type: Grant
    Filed: July 25, 2008
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: George R. Dolbier, Rick A. Hamilton, II, Neil A. Katz, Brian M. O'Connell
  • Publication number: 20150042663
    Abstract: A user may create an avatar and/or animated sequence illustrating a particular object or living being performing a certain activity, using images of portions of the object or living being extracted from a still image or set of still images of the object or living being. A mathematical model used to represent the avatar may be animated according to user-selected motion information and may be modified according to various parameters including explicit end-user adjustments and information representative of a human emotion, mood, or feeling that may be derived from an image of the user or information from a news source or social network.
    Type: Application
    Filed: August 9, 2013
    Publication date: February 12, 2015
    Inventor: David Mandel
  • Patent number: RE45422
    Abstract: Annotation techniques are provided. In one aspect, a method for processing a computer-based material is provided. The method comprises the following steps. The computer-based material is presented. One or more portions of the computer-based material are determined to be of interest to a user. The one or more portions are annotated to permit return to the one or more portions at a later time. In another aspect, a user interface is provided. The user interface comprises a computer-based material; a viewing focal area encompassing a portion of the computer-based material; and one or more indicia associated with and annotating the portion of the computer-based material.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: March 17, 2015
    Assignee: Loughton Technology, L.L.C.
    Inventor: Christopher Vance Beckman