Motion Planning Or Control Patents (Class 345/474)
  • Patent number: 10136242
    Abstract: A cloud computing system is provided to support a consumer in programming a smart phone/touch pad. A set of accessory members is provided to enable said consumer to build and program a consumer-designed article that comprises said consumer-programmed smart phone/touch pad.
    Type: Grant
    Filed: September 24, 2011
    Date of Patent: November 20, 2018
    Inventor: Peter Ar-Fu Lam
  • Patent number: 10134168
    Abstract: One embodiment of the present invention includes a double solve unit that configures a kinematic chain representing an animated character. The double solve unit generates a first solution for the kinematic chain based on a first solving order. While generating the first solution, the double solve unit determines the recursion depth of each output connector included in the kinematic chain. Subsequently, the double solve unit identifies any output connectors for which the recursion depth exceeds a corresponding expected recursion depth, indicating that a custom recursive dependency exists that is not reflected in the first solution. For these custom recursive output connectors, the double solve unit creates a second solving order and generates a more accurate solution.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: November 20, 2018
    Assignee: AUTODESK, INC.
    Inventor: Krystian Ligenza
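    The recursion-depth bookkeeping this abstract describes can be pictured with a small dependency-graph sketch. This is not Autodesk's implementation: the connector names, the expected-depth table, and the "re-solve shallowest first" rule below are hypothetical, chosen only to show how an observed depth exceeding an expected depth flags a custom dependency and triggers a second solving order.
```python
def recursion_depth(connector, deps, memo):
    """Length of the longest dependency chain beneath an output connector (leaves are 0)."""
    if connector not in memo:
        children = deps.get(connector, [])
        memo[connector] = 0 if not children else 1 + max(
            recursion_depth(c, deps, memo) for c in children)
    return memo[connector]

def second_solving_order(flagged, deps, memo):
    """Re-solve the flagged connectors and everything they depend on, shallowest first."""
    needed, stack = set(), list(flagged)
    while stack:
        c = stack.pop()
        if c not in needed:
            needed.add(c)
            stack.extend(deps.get(c, []))
    return sorted(needed, key=lambda c: memo[c])

if __name__ == "__main__":
    # Hypothetical rig: the extra "knee.out" edge on "toe.out" is the custom dependency.
    deps = {"hip.out": [], "knee.out": ["hip.out"],
            "ankle.out": ["knee.out"], "toe.out": ["ankle.out", "knee.out"]}
    expected = {"hip.out": 0, "knee.out": 1, "ankle.out": 2, "toe.out": 2}
    memo = {}
    observed = {c: recursion_depth(c, deps, memo) for c in deps}
    flagged = [c for c, d in observed.items() if d > expected[c]]
    print(flagged)                                    # ['toe.out']
    print(second_solving_order(flagged, deps, memo))  # ['hip.out', 'knee.out', 'ankle.out', 'toe.out']
```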
  • Patent number: 10123164
    Abstract: In a server system, a computer-implemented method initiates a proximity-based communication protocol involving a first client device and one or more second client devices. For each of a plurality of candidate second devices, location coordinates are retrieved and an associated axis-aligned bounding box (AABB) is calculated. When the AABB of a candidate second device overlaps with the AABB for the first device, the candidate is presented to the user of the first device. Next, a selection of one or more candidate second devices is received from the first device, causing the protocol to be initiated between the first device and the one or more selected candidate second devices.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: November 6, 2018
    Assignee: Bunq B.V.
    Inventors: Ali Niknam, Stijn Van Drongelen, Andreas Verhoeven, Menno Arnold Den Hollander, Robert-Jan Mahieu
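    A minimal sketch of the overlap test described in this abstract: each device's reported location is expanded into an axis-aligned bounding box, and candidates whose box intersects the first device's box are surfaced for selection. The coordinate format, the box half-size, and the field names are assumptions, not taken from the patent.
```python
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    x: float          # e.g. projected map coordinates, metres (assumed)
    y: float

def aabb(device: Device, half_size: float = 25.0):
    """Axis-aligned bounding box centred on a device's reported location."""
    return (device.x - half_size, device.y - half_size,
            device.x + half_size, device.y + half_size)

def overlaps(a, b) -> bool:
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def candidates_for(first: Device, others):
    """Candidate second devices whose AABB overlaps the first device's AABB."""
    box = aabb(first)
    return [d for d in others if overlaps(box, aabb(d))]

if __name__ == "__main__":
    me = Device("first", 0.0, 0.0)
    nearby = candidates_for(me, [Device("a", 30.0, 10.0), Device("b", 400.0, 0.0)])
    print([d.device_id for d in nearby])   # ['a']
```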
  • Patent number: 10122982
    Abstract: Recording images, including: receiving an optical effects selection, which indicates a selected optical effect to apply to raw image data capturing the images; receiving an optical effects parameter, which indicates how to apply the selected optical effect to the raw image data; storing the optical effects selection and the optical effects parameter as effects metadata; recording the raw image data using a sensor of the digital camera; marking the effects metadata with time information to associate the effects metadata with the recorded raw image data over time; applying the selected optical effect to the raw image data according to the optical effects parameter to create processed image data while preserving the recorded raw image data; and displaying the processed image data on a display of the digital camera. Key words include raw image data and effects metadata.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: November 6, 2018
    Assignees: SONY CORPORATION, SONY PICTURES ENTERTAINMENT INC
    Inventors: Spencer Stephens, Chris Cookson, Scot Barbour
  • Patent number: 10120539
    Abstract: The disclosure provides a method for setting a User Interface (UI). The method comprises the following steps: acquiring and storing image data from a file of a selected background image on a UI management interface; marking the spatial coordinates of regions of different shapes cut from the background image, performing display effect processing on the cut regions, and outputting a display effect processing result; and recording a preset directory name and a corresponding menu linking path for each icon. The disclosure also discloses a device for setting a UI. By adopting the scheme, a personalized UI can be obtained conveniently and quickly, and user experience is improved.
    Type: Grant
    Filed: February 15, 2011
    Date of Patent: November 6, 2018
    Assignee: ZTE Corporation
    Inventor: Qiang Wang
  • Patent number: 10097807
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating access to a plurality of source multimedia content, wherein at least one source multimedia of the plurality of source multimedia content comprises corresponding depth information. The method further includes generating a blend map by defining a plurality of depth layers. At least one depth layer of the plurality of depth layers is associated with a respective depth limit. Defining the at least one depth layer comprises selecting pixels of the at least one depth layer from the at least one source multimedia content of the plurality of source multimedia content based on the respective depth limit associated with the at least one depth layer and the corresponding depth information of the at least one source multimedia content. The method also includes blending the plurality of source multimedia content based on the blend map.
    Type: Grant
    Filed: October 6, 2014
    Date of Patent: October 9, 2018
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Tobias Karlsson, Tor Andrae, Amer Mustajbasic
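    One way to read the blend-map construction in this abstract: each depth layer claims the pixels of a source whose depth values fall inside that layer's depth limit, and the resulting per-pixel source index drives the final blend. The layer limits, the array shapes, and the "nearest claiming layer wins" rule below are illustrative assumptions.
```python
import numpy as np

def build_blend_map(depths, layer_limits):
    """
    depths: list of (H, W) depth maps, one per source multimedia item.
    layer_limits: per-source (near, far) depth range defining that source's layer.
    Returns an (H, W) blend map of source indices (-1 where no layer claims a pixel).
    """
    h, w = depths[0].shape
    blend_map = np.full((h, w), -1, dtype=np.int32)
    claimed_depth = np.full((h, w), np.inf)
    for src, (depth, (near, far)) in enumerate(zip(depths, layer_limits)):
        in_layer = (depth >= near) & (depth < far) & (depth < claimed_depth)
        blend_map[in_layer] = src
        claimed_depth[in_layer] = depth[in_layer]
    return blend_map

def blend(images, blend_map):
    """Compose the output frame by taking each pixel from the source the map selects."""
    out = np.zeros_like(images[0])
    for src, image in enumerate(images):
        out[blend_map == src] = image[blend_map == src]
    return out

if __name__ == "__main__":
    d0, d1 = np.full((2, 2), 1.0), np.full((2, 2), 3.0)
    print(build_blend_map([d0, d1], [(0.0, 2.0), (2.0, 4.0)]))  # all pixels claimed by source 0
```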
  • Patent number: 10089909
    Abstract: A display control method includes: inputting a user's image that includes a hand-drawn portion and is a display target image; and performing image control that causes the input user's image to emerge from either the left end or the right end of a predetermined display region on which the user's image is to be displayed, and moves the user's image that has emerged.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: October 2, 2018
    Assignee: Ricoh Company, Limited
    Inventors: Atsushi Itoh, Aiko Ohtsuka, Tetsuya Sakayori, Hidekazu Suzuki, Takanobu Tanaka
  • Patent number: 10083007
    Abstract: Devices and methods for filtering data include calculating intermediate input values from input elements using a transformation function. The transformation function is based at least in part on a size of the filter and a number of filter outputs. Intermediate filter values are calculated from filter elements of the filter using the transformation function. Each intermediate input value is multiplied with a respective intermediate filter value to form intermediate values. These intermediate values are combined with each other using the transformation function to determine one or more output values.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: September 25, 2018
    Assignee: ALTERA CORPORATION
    Inventors: Utku Aydonat, Andrew Chaang Ling, Gordon Raymond Chiu, Shane O'Connell
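    The transform-then-elementwise-multiply structure this abstract describes (transform the input tile and the filter, multiply the intermediates, combine them back into outputs) matches the well-known Winograd convolution pattern. The F(2,3) matrices below are a standard textbook instance used purely for illustration; the patent does not specify these particular values.
```python
import numpy as np

# Winograd F(2,3): 2 filter outputs from a 3-tap filter applied to a 4-sample input tile.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G  = np.array([[1,    0,    0],
               [0.5,  0.5,  0.5],
               [0.5, -0.5,  0.5],
               [0,    0,    1]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(input_tile, filt):
    """Two convolution outputs using 4 multiplications instead of 6."""
    u = G @ filt                 # intermediate filter values
    v = BT @ input_tile          # intermediate input values
    return AT @ (u * v)          # combine the elementwise products into outputs

if __name__ == "__main__":
    d = np.array([1.0, 2.0, 3.0, 4.0])
    g = np.array([0.5, 0.25, 0.25])
    direct = np.array([d[0:3] @ g, d[1:4] @ g])
    print(np.allclose(winograd_f23(d, g), direct))   # True
```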
  • Patent number: 10037709
    Abstract: A decision support method for use by an operator surrounded by adverse entities in a battlefield environment comprises generating a layered representation of the physical environment surrounding the operator from sensor information by mapping the spherical physical environment of the operator into a geometrical representation suitable for display on a screen, the representation being segmented into a plurality of layers having respective sizes, each layer being associated with a respective category of tactical actions. The representation further comprises visual elements representing adverse entities in the surrounding physical environment of the operator, each visual element being represented so as to be superposed with a given layer.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: July 31, 2018
    Assignee: THALES NEDERLAND B.V.
    Inventors: Jan-Egbert Hamming, Frank Koudijs, Frank Colijn, Pim Van Wensveen
  • Patent number: 10032305
    Abstract: A system includes hardware processor(s), an HMD, an input device, and an onion skin animation module. The animation module is configured to receive a character rig of a 3D character, receive a first 3D animation of the 3D character, the first 3D animation defining a motion sequence of the 3D character based on the character rig, create a virtual time bar within the virtual environment, the virtual time bar displaying a timeline associated with the first 3D animation, identify a first animation time within the first 3D animation, the first animation time being a point in time during the motion sequence, create a first pose object of the 3D character in the virtual environment, pose the first pose object based on the first 3D animation at the animation time, and position the first pose object within the virtual environment proximate the first animation time on the virtual time bar.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: July 24, 2018
    Assignee: Unity IPR ApS
    Inventor: Timoni West
  • Patent number: 10022628
    Abstract: Embodiments of the systems and processes disclosed herein can use procedural techniques to calculate reactionary forces between character models. In some embodiments, the system can calculate a change in momentum of the character at the time of impact and simulate the reaction of the character model, using momentum-based inverse kinematic analysis. Procedural animation can be used to dynamically generate a target pose for the character model based on the inverse kinematic analysis for each rendered frame.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: July 17, 2018
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Masatoshi Matsumiya, Paolo Rigiroli
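    A toy version of the momentum bookkeeping at impact, assuming point masses: the change in momentum gives an impulse, and a scaled copy of that impulse nudges the hit character's pose target, which a procedural/IK pass would then refine for each rendered frame. The masses, the stiffness factor, and the data layout are illustrative; the momentum-based inverse kinematic analysis in the patent is more involved than this.
```python
import numpy as np

def impact_impulse(mass, velocity_before, velocity_after):
    """Impulse J = delta p = m * (v_after - v_before) for the impacted body part."""
    return mass * (np.asarray(velocity_after) - np.asarray(velocity_before))

def reaction_target(rest_pose, impulse, stiffness=0.02):
    """
    Displace the target pose in the direction of the impulse; an IK solver would then
    resolve the rest of the skeleton toward this target each rendered frame.
    rest_pose: (J, 3) joint positions; the impulse is applied to every joint here for brevity.
    """
    return np.asarray(rest_pose) + stiffness * np.asarray(impulse)

if __name__ == "__main__":
    j = impact_impulse(mass=75.0, velocity_before=[2.0, 0.0, 0.0], velocity_after=[-0.5, 0.0, 0.0])
    torso = np.array([[0.0, 1.0, 0.0], [0.0, 1.5, 0.0]])
    print(j)                          # [-187.5, 0, 0]
    print(reaction_target(torso, j))  # joints pushed opposite the pre-impact motion
```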
  • Patent number: 10019828
    Abstract: An image generating apparatus, including a memory storing avatar data representing a motion of an avatar in a virtual space and a processor coupled to the memory, the processor configured to obtain sensor information that represents a motion of a person in a real space acquired from at least one sensor, determine a first value that indicates an impression of the person based on the obtained sensor information, determine a type of the motion based on the obtained sensor information, select at least one candidate data set corresponding to the type of the motion from the memory, determine a second value that indicates an impression of the avatar for each of the at least one selected candidate data set, select a representative data set based on the determined first value and the determined second value, and generate an avatar image based on the representative data set.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: July 10, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Naoko Hayashida
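    A tiny sketch of the selection step in this abstract: candidate avatar data sets are filtered by the detected motion type, and the representative is the candidate whose impression value (the second value) lies closest to the impression measured for the person (the first value). The candidate structure and the "closest value wins" rule are assumptions for illustration only.
```python
def select_representative(candidates, motion_type, person_impression):
    """Among candidates matching the motion type, pick the data set whose
    impression value best matches the person's measured impression."""
    pool = [c for c in candidates if c["motion_type"] == motion_type]
    if not pool:
        return None
    return min(pool, key=lambda c: abs(c["impression"] - person_impression))

if __name__ == "__main__":
    candidates = [
        {"name": "brisk_nod", "motion_type": "nod", "impression": 0.8},
        {"name": "slow_nod",  "motion_type": "nod", "impression": 0.3},
        {"name": "wave",      "motion_type": "wave", "impression": 0.5},
    ]
    # Sensors suggest an energetic person (first value 0.75) performing a nod.
    print(select_representative(candidates, "nod", 0.75)["name"])   # brisk_nod
```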
  • Patent number: 10008020
    Abstract: There is presented a method for interactive, real-time animation of soft body dynamics, comprising the steps of: providing a 3D model of a soft body, the model comprising a set of vertices connected by edges; defining a set of physical constraints between vertices in the 3D model, the set of constraints forming a system of linear equations comprising a set of unknowns representing the positions of the vertices; applying a Brooks-Vizing node coloring algorithm in order to partition the system of linear equations into a set of partitions each including an independent subset of unknowns; for each partition, applying a Gauss-Seidel based solver in parallel in order to determine an approximation of the unknowns; and using the determined approximation of the unknowns to update the 3D model. There is also presented an animation system configured to perform the above-described method.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: June 26, 2018
    Assignee: CHALMERS TEKNISKA HÖGSKOLA AB
    Inventor: Marco Fratarcangeli
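    A compact sketch of the two steps named in this abstract: color the constraint graph so that no two connected unknowns share a color, then run Gauss-Seidel sweeps color class by color class; every unknown inside one color class is independent of the others in that class and could be updated in parallel. A greedy coloring stands in for the Brooks-Vizing scheme, and the small diagonally dominant system is made up for the demo.
```python
import numpy as np

def greedy_coloring(n, edges):
    """Assign colors so no two vertices joined by an edge share one (stand-in for Brooks-Vizing)."""
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    colors = [-1] * n
    for v in range(n):
        used = {colors[u] for u in adj[v] if colors[u] != -1}
        colors[v] = next(c for c in range(n) if c not in used)
    return colors

def colored_gauss_seidel(A, b, colors, iters=50):
    """Gauss-Seidel where each sweep visits one color class (partition) at a time."""
    x = np.zeros(len(b))
    groups = [[v for v, c in enumerate(colors) if c == k] for k in range(max(colors) + 1)]
    for _ in range(iters):
        for group in groups:
            # Unknowns in `group` do not constrain each other, so these updates
            # could run in parallel within one partition.
            for i in group:
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # a 4-vertex constraint ring
    A = np.array([[4.0, -1, 0, -1], [-1, 4, -1, 0], [0, -1, 4, -1], [-1, 0, -1, 4]])
    b = np.array([1.0, 2.0, 3.0, 4.0])
    x = colored_gauss_seidel(A, b, greedy_coloring(4, edges))
    print(np.allclose(A @ x, b, atol=1e-6))            # True
```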
  • Patent number: 10009550
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for synthetic imaging. In one aspect, a method includes receiving from each digital camera respective imaging data, each digital camera having a viewpoint that is different from the viewpoints of each other digital camera and having a field of view that is overlapping with at least one other digital camera; for a synthetic viewpoint that is a viewpoint that is within a geometry defined by the viewpoints of the digital cameras, selecting respective imaging data that each has a field of view that overlaps a field of view of the synthetic viewpoint and generating, from the selected respective imaging data, synthetic imaging data that depicts an image captured from a virtual camera positioned at the synthetic viewpoint.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: June 26, 2018
    Assignee: X Development LLC
    Inventor: Thomas Peter Hunt
  • Patent number: 9996940
    Abstract: Methods, devices, and systems for expression transfer are disclosed. The disclosure includes capturing a first image of a face of a person. The disclosure includes generating an avatar based on the first image of the face of the person, with the avatar approximating the first image of the face of the person. The disclosure includes transmitting the avatar to a destination device. The disclosure includes capturing a second image of the face of the person on a source device. The disclosure includes calculating expression information based on the second image of the face of the person, with the expression information approximating an expression on the face of the person as captured in the second image. The disclosure includes transmitting the expression information from the source device to the destination device. The disclosure includes animating the avatar on a display component of the destination device using the expression information.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: June 12, 2018
    Assignee: Connectivity Labs Inc.
    Inventors: Thomas Yamasaki, Rocky Chau-Hsiung Lin, Koichiro Kanda
  • Patent number: 9972123
    Abstract: Systems and methods for generating a model of an object that includes the surface reflectance details of the object are disclosed. The surface reflectance properties of the object, comprising at least separate components for the object diffuse data and the object specular data, are received. A 3D model of the object is generated wherein the reflectance properties of the model are configured based on the reflectance properties of the object surface. The object diffuse data determines the color to be used in generating the model, and the object specular data determines either the attributes of the coating for the model or the material to be used for generating the model.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: May 15, 2018
    Assignee: OTOY, INC.
    Inventor: Clay Sparks
  • Patent number: 9959039
    Abstract: Operating a touch-screen device includes displaying at least a portion of a keyboard on a touch-screen, detecting a touch on the touch-screen, and detecting movement of the touch on the touch-screen. Operating the touch-screen device also includes moving the displayed keyboard in response to the detected movement of the touch on the touch-screen, detecting a release of the touch from the touch-screen, and assigning a character according to a final location of the touch relative to a location of the displayed keyboard.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: May 1, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Olivier Artigue, Jean-Michel Douliez, Francois Trible
  • Patent number: 9959634
    Abstract: Methods and systems for identifying depth data associated with an object are disclosed. The method includes capturing, with an image capturing device, a plurality of source images of the object. The image capturing device has a sensor that is tilted at a known angle with respect to an object plane of the object such that the image capturing device has a depth of field associated with each source image, the depth of field defining a plane that is angled with respect to the object plane. An image processor analyzes the plurality of source images to identify segments of the source images that satisfy an image quality metric. Position data is assigned to the identified segments of the source images, the position data including depth positions based on the plane defined by the depth of field.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: May 1, 2018
    Assignee: Google LLC
    Inventors: Peter Gregory Brueckner, Iain Richard Tyrone McClatchie, Matthew Thomas Valente
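    A heavily simplified geometric reading of this abstract: with the sensor tilted at a known angle, the in-focus plane cuts through object space at that angle, so the image column at which a segment appears sharp maps (approximately linearly) to a depth position. The sharpness metric (local gradient variance), the patch scan, and the "focal plane passes through depth z0 at column 0" assumption are mine, not the patent's.
```python
import numpy as np

def sharpness(patch):
    """Simple image-quality metric: variance of finite-difference gradients."""
    gx = np.diff(patch.astype(float), axis=1)
    gy = np.diff(patch.astype(float), axis=0)
    return gx.var() + gy.var()

def segment_depths(image, tilt_deg, pixel_size_mm, z0_mm=0.0, patch=32, threshold=50.0):
    """
    Scan the source image in patches; patches that pass the sharpness threshold are
    assigned a depth from the tilted focal plane: z = z0 + column * pixel_size * tan(tilt).
    Returns a list of (row, col, depth_mm) for the sharp segments.
    """
    slope = np.tan(np.radians(tilt_deg))
    hits = []
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            if sharpness(image[r:r + patch, c:c + patch]) > threshold:
                hits.append((r, c, z0_mm + c * pixel_size_mm * slope))
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((64, 128))
    img[:, 64:] = rng.normal(0, 20, (64, 64))   # only the right half is textured ("in focus")
    for row, col, z in segment_depths(img, tilt_deg=30.0, pixel_size_mm=0.05):
        print(row, col, round(z, 2))
```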
  • Patent number: 9946436
    Abstract: An example method includes receiving, at a mobile device, one or more user selections by a user of the mobile device, where each user selection indicates a respective type of data item to be presented on the mobile device. The method also includes receiving, at the mobile device, one or more data items. The method also includes identifying data items that are associated with the types of data items to be presented on the mobile device, and responsive to identifying data items that are associated with the types of data items to be presented on the mobile device, presenting, on the mobile device, a dynamic icon to present the identified data items.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: April 17, 2018
    Assignee: Appelago Inc.
    Inventor: Peter Rolih
  • Patent number: 9940753
    Abstract: A method of augmenting a target object with projected light is disclosed. The method includes determining a blend of component attributes to define visual characteristics of the target object, modifying an input image based, at least in part, on an image of the target object, wherein the modified input image defines an augmented visual characteristic of the target object, determining a present location of one or more landmarks on the target object based, at least in part, on the image of the target object, predicting a future location of the one or more landmarks, deforming a model of the target object based on the future location of the one or more landmarks, generating an augmentation image based on the deformed model and the modified input image, and transmitting for projection the augmentation image.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: April 10, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Anselm Grundhöfer, Amit Bermano
  • Patent number: 9933923
    Abstract: A graphical user interface for a display apparatus comprises an addressable window, which is related to a first formation action of a first start object to a target object and related to a second formation action of a second start object to the same target object, wherein each formation action comprises a transit from the respective start object to the target object using elements which can be selected from a plurality of element types, wherein assigned to each transit is an arrival time resulting from the speed of the corresponding formation, wherein the window includes a synchronisation button, by means of which the second formation action can be synchronised with the first formation action by delaying the second arrival time to the first arrival time.
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: April 3, 2018
    Assignee: XYRALITY GMBH
    Inventor: Alexander Spohr
  • Patent number: 9911227
    Abstract: A method and system for providing access to and control of parameters within a scenegraph includes redefining the semantics of components or nodes within a scenegraph. The set of components or nodes (depending on the scenegraph structure) is required to enable access from the Application User Interface to selected scenegraph information. In one embodiment, a user interface is generated for controlling the scenegraph parameters. In addition, constraints can be implemented that allow or disallow access to certain scenegraph parameters and restrict their range of values.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: March 6, 2018
    Assignee: GVBB HOLDINGS S.A.R.L.
    Inventors: Ralph Andrew Silberstein, David Sahuc, Donald Johnson Childers
  • Patent number: 9892539
    Abstract: A method is disclosed for applying physics-based simulation to an animator provided rig. The disclosure presents equations of motions for simulations performed in the subspace of deformations defined by an animator's rig. The method receives an input rig with a plurality of deformation parameters, and the dynamics of the character are simulated in the subspace of deformations described by the character's rig. Stiffness values defined on rig parameters are transformed to a non-homogeneous distribution of material parameters for the underlying rig.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: February 13, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Bernhard Thomaszewski, Robert Sumner, Fabian Hahn, Stelian Coros, Markus Gross, Sebastian Martin
  • Patent number: 9893974
    Abstract: On a server, a collision handler is called by a physics simulation engine to categorize a plurality of rigid bodies in some simulation data as either colliding or not colliding. The simulation data relates to a triggering event involving the plurality of rigid bodies and is generated by a simulation of both gravitational trajectories and collisions of rigid bodies. Based on the categorization and the simulation data, a synchronization engine generates synchronization packets for the colliding bodies only and transmits the packets to one or more client computing devices configured to perform a reduced simulation function.
    Type: Grant
    Filed: October 11, 2015
    Date of Patent: February 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marco Anastasi, Maurizio Sciglio
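    A sketch of the server-side split this abstract describes: the physics step labels each rigid body as colliding or not colliding, and synchronization packets are built only for the colliding set, leaving the clients' reduced simulation (gravity-only trajectories) to cover the rest. The packet fields and the contact-list test are placeholders, not the patent's wire format.
```python
from dataclasses import dataclass, field

@dataclass
class RigidBody:
    body_id: int
    position: tuple
    velocity: tuple
    contacts: list = field(default_factory=list)   # filled by the physics step

def categorize(bodies):
    """Collision handler: split bodies into colliding / not colliding."""
    colliding = [b for b in bodies if b.contacts]
    free_flight = [b for b in bodies if not b.contacts]
    return colliding, free_flight

def build_sync_packets(colliding, tick):
    """Synchronization engine: packets only for bodies whose motion the clients'
    reduced (gravity-only) simulation cannot reproduce on its own."""
    return [{"tick": tick, "body": b.body_id,
             "position": b.position, "velocity": b.velocity} for b in colliding]

if __name__ == "__main__":
    bodies = [RigidBody(1, (0, 5, 0), (0, -2, 0), contacts=[2]),
              RigidBody(2, (0, 0, 0), (0, 0, 0), contacts=[1]),
              RigidBody(3, (9, 9, 0), (1, 0, 0))]
    colliding, _ = categorize(bodies)
    print(build_sync_packets(colliding, tick=42))   # packets for bodies 1 and 2 only
```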
  • Patent number: 9892556
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, and explore 2D or 3D modeled worlds of scenes in a video. The system may leverage network-based computation resources to render and stream new video content from the models to clients with low latency. A user may pause a video, step into a scene, and interactively change viewing positions and angles in the model to move through or explore the scene. The user may resume playback of the recorded video when done exploring the scene. Thus, rather than just viewing a pre-rendered scene in a movie from a pre-determined perspective, a user may step into and explore the scene from different angles, and may wander around the scene at will within the scope of the model to discover parts of the scene that are not visible in the original video.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: February 13, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, Jr.
  • Patent number: 9894405
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, move through, and explore 2D or 3D modeled worlds of scenes in a video. The RVE system may allow users to discover, select, explore, and manipulate objects within the modeled worlds used to generate video content. The RVE system may implement methods that allow users to view and explore in more detail the features, components, and/or accessories of selected objects that are being manipulated and explored. The RVE system may also implement methods that allow users to interact with interfaces of selected objects or interfaces of components of selected objects.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: February 13, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, Jr.
  • Patent number: 9887112
    Abstract: An inkjet coating device comprises a support board and a sprinkler head. The support board is provided for placing a glass plate, and the sprinkler head comprises a plurality of spray nozzles. Each spray nozzle comprises an ink entrance port and an ink exit port; ink is poured into the ink entrance port and onto the glass plate from the ink exit port, and the internal diameter of the ink entrance port is larger than that of the ink exit port. The spray nozzles are trumpet-shaped and are closed together without gaps, which not only increases the spraying capacity but also disperses the trumpet-shaped ink drops more uniformly in each coating interval belt to form an ink coating layer of uniform thickness.
    Type: Grant
    Filed: May 12, 2014
    Date of Patent: February 6, 2018
    Assignee: SHENZHEN CHINA STAR OPTOELECTRONICS TECHNOLOGY CO., LTD
    Inventor: Maocheng Yan
  • Patent number: 9870622
    Abstract: A method for determining posture-related information of a subject using the subject's image is provided. The method comprises: determining, from a first image, first positions of a first pair of joints and a first body segment length of a first body segment associated with the first pair of joints; determining, from a second image, second positions of a second pair of joints and a second body segment length of a second body segment associated with the second pair of joints; determining, based on an algorithm that reduces a difference between the first and second body segment lengths, whether the first and second pairs of joints correspond to a pair of joints; if the first and second pairs of joints are determined to correspond to a pair of joints, determining, based on the second positions, information of a posture of the subject; and providing an indication regarding the information.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: January 16, 2018
    Assignee: DYACO INTERNATIONAL, INC.
    Inventors: Tung-Wu Lu, Hsuan-Lun Lu, Cheng-Kai Lin, Hao Chiang
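    A small sketch of the correspondence step in this abstract: each candidate pair of joints in the two images gets a body-segment length, and a pairing is accepted when the candidate that minimizes the length difference falls within a tolerance. The joint layout, the tolerance, and the greedy best-match rule are assumptions, not the patent's algorithm.
```python
import math

def segment_length(p, q):
    return math.dist(p, q)

def match_joint_pairs(first_pairs, second_pairs, tolerance=0.1):
    """
    first_pairs / second_pairs: {name: (joint_a_xy, joint_b_xy)} measured in two images.
    Returns pairs judged to be the same body segment, i.e. those whose segment lengths
    differ by less than `tolerance` after picking the closest candidate.
    """
    matches = {}
    for name1, (a1, b1) in first_pairs.items():
        len1 = segment_length(a1, b1)
        name2, (a2, b2) = min(second_pairs.items(),
                              key=lambda kv: abs(segment_length(*kv[1]) - len1))
        if abs(segment_length(a2, b2) - len1) < tolerance:
            matches[name1] = name2
    return matches

if __name__ == "__main__":
    first = {"l_forearm": ((0.0, 0.0), (0.30, 0.0)), "l_upper_arm": ((0.30, 0.0), (0.90, 0.0))}
    second = {"seg_a": ((1.0, 1.0), (1.0, 1.31)), "seg_b": ((1.0, 1.31), (1.0, 1.92))}
    print(match_joint_pairs(first, second))   # {'l_forearm': 'seg_a', 'l_upper_arm': 'seg_b'}
```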
  • Patent number: 9852237
    Abstract: An object testing system and method for testing an object. A three-dimensional environment is displayed with a model of an object and an avatar from a viewpoint relative to the avatar on a display system viewed by a human operator. The object is under testing in a live environment. Information about motions of the human operator that are detected is generated. Live information about the object that is under testing in the live environment is received. A change in the object from applying the live information to the model of the object is identified. The change in the model of the object is displayed on the display system as seen from the viewpoint relative to the avatar in the three-dimensional environment.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: December 26, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Jonathan Wayne Gabrys, David William Bowen, Anthony Mathew Montalbano, Chong Choi
  • Patent number: 9852327
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: December 26, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9819711
    Abstract: A method of establishing a collaborative platform comprising performing a collaborative interactive session for a plurality of members, and analyzing affect and/or cognitive features of some or all of the plurality of members, wherein some or all of the plurality of members from different human interaction platforms interact via the collaborative platform, wherein the affect comprises an experience of feeling or emotion, and wherein the cognitive features comprise features in a cognitive state, the cognitive state comprising a state of an internal mental process.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: November 14, 2017
    Inventors: Neil S. Davey, Sonya Davey, Abhishek Biswas
  • Patent number: 9814982
    Abstract: To mitigate collisions in a physical space during gaming, a set of physical objects and a user situated in the 3D space are mapped to determine a spacing between an object in the set of physical objects and the user, where the user moves in the 3D space to cause a motion in a virtual environment of a game. A prediction is computed that the user will make a motion in the 3D space during the gaming. For the motion in the 3D space, a motion trajectory of the user is computed using a measurement parameter corresponding to the user stored in a user profile. A detection is made that the motion trajectory of the user violates a spacing threshold between the user and the object, and the user is alerted about a risk of collision between the user and the object in the 3D space.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: November 14, 2017
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Prach Jerry Chuaypradit, Wendy Chong, Ronald C. Geiger, Jr., Janani Janakiraman, Joefon Jann, Jenny S. Li, Anuradha Rao, Tai-chi Su, Singpui Zee
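    A minimal sketch of the spacing check in this abstract: the predicted motion is extrapolated as a straight-line trajectory scaled by a measurement parameter from the user's profile, the closest approach to each mapped object is computed, and an alert fires when that clearance violates the spacing threshold. The linear trajectory and the point-object model are simplifications.
```python
import numpy as np

def closest_approach(start, direction, length, obstacle):
    """Minimum distance between a finite straight-line motion and a point obstacle."""
    start, direction, obstacle = map(np.asarray, (start, direction, obstacle))
    d = direction / np.linalg.norm(direction)
    t = np.clip(np.dot(obstacle - start, d), 0.0, length)
    return np.linalg.norm(start + t * d - obstacle)

def collision_alerts(user_pos, motion_dir, reach_m, obstacles, spacing_threshold=0.5):
    """Objects whose clearance along the predicted motion falls below the spacing threshold."""
    return [name for name, pos in obstacles.items()
            if closest_approach(user_pos, motion_dir, reach_m, pos) < spacing_threshold]

if __name__ == "__main__":
    obstacles = {"coffee_table": (1.0, 0.0, 0.8), "sofa": (0.0, 3.0, 0.5)}
    # The user profile says this user's forward lunge covers about 1.2 m (assumed parameter).
    print(collision_alerts((0, 0, 1.0), (1, 0, 0), 1.2, obstacles))  # ['coffee_table']
```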
  • Patent number: 9811237
    Abstract: A computer system and method of operation thereof are provided that allow interactive navigation and exploration of logical processes. The computer system employs a data architecture comprising a network of nodes connected by branches. Each node in the network represents a decision point in the process that allows the user to select the next step in the process and each branch in the network represents a step or a sequence of steps in the logical process. The network is constructed directly from the target logical process. Navigation data such as image frame sequences, stages in the logical process, and other related information are associated with the elements of the network. This establishes a direct relationship between steps in the process and the data that represent them. From such an organization, the user may tour the process, viewing the image sequences associated with each step and choosing among different steps at will.
    Type: Grant
    Filed: April 30, 2003
    Date of Patent: November 7, 2017
    Assignee: III HOLDINGS 2, LLC
    Inventor: Rodica Schileru
  • Patent number: 9811555
    Abstract: A user performs a gesture with a hand-held or wearable device capable of sensing its own orientation. Orientation data, in the form of a sequence of rotation vectors, is collected throughout the duration of the gesture. To construct a trace representing the shape of the gesture and the direction of device motion, the orientation data is processed by a robotic chain model with four or fewer degrees of freedom, simulating a set of joints moved by the user to perform the gesture (e.g., a shoulder and an elbow). To classify the gesture, a trace is compared to contents of a training database including many different users' versions of the gesture and analyzed by a learning module such as a support vector machine.
    Type: Grant
    Filed: September 27, 2014
    Date of Patent: November 7, 2017
    Assignee: Intel Corporation
    Inventors: Nicholas G. Mitri, Christopher B. Wilkerson, Mariette Awad
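    A one-link simplification of the robotic-chain idea in this abstract: each orientation sample (a rotation vector) is turned into a rotation matrix with Rodrigues' formula and applied to a fixed "forearm" vector, and the sequence of endpoint positions is the trace handed to the classifier. The patent's chain has up to four degrees of freedom; this sketch collapses it to one rotating link of assumed length.
```python
import numpy as np

def rotvec_to_matrix(rotvec):
    """Rodrigues' formula: rotation vector (axis * angle, radians) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rotvec) / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def gesture_trace(rotvecs, forearm_length=0.3):
    """Endpoint positions of a single rotating link driven by the device orientation samples."""
    rest = np.array([forearm_length, 0.0, 0.0])      # forearm pointing forward at rest
    return np.array([rotvec_to_matrix(rv) @ rest for rv in rotvecs])

if __name__ == "__main__":
    # Quarter-circle sweep about the vertical axis, sampled in ten steps.
    sweep = [np.array([0.0, 0.0, a]) for a in np.linspace(0.0, np.pi / 2, 10)]
    print(gesture_trace(sweep).round(3))
```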
  • Patent number: 9802119
    Abstract: A multi-user virtual reality universe (VRU) process receives input from multiple remote clients to manipulate avatars through a modeled 3-D environment. A VRU host models movement of avatars in the VRU environment in response to client input, with each user providing input for control of a corresponding avatar. The modeled VRU data is provided by the host to client workstations for display of a simulated environment visible to all participants. The host maintains personalized data for selected modeled objects or areas that is personalized for specific users in response to client input. The host includes personalized data in modeling the VRU environment. The host may segregate VRU data provided to different clients participating in the same VRU environment so as to limit personalized data to authorized users, while all users receive common data.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: October 31, 2017
    Inventor: Brian Mark Shuster
  • Patent number: 9805491
    Abstract: The disclosed implementations describe techniques and workflows for a computer graphics (CG) animation system. In some implementations, systems and methods are disclosed for representing scene composition and performing underlying computations within a unified generalized expression graph with cycles. Disclosed are natural mechanisms for level-of-detail control, adaptive caching, minimal re-compute, lazy evaluation, predictive computation and progressive refinement. The disclosed implementations provide real-time guarantees for minimum graphics frame rates and support automatic tradeoffs between rendering quality, accuracy and speed. The disclosed implementations also support new workflow paradigms, including layered animation and motion-path manipulation of articulated bodies.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: October 31, 2017
    Assignee: DIGITALFISH, INC.
    Inventors: Daniel Lawrence Herman, Mark J. Oftedal
  • Patent number: 9800859
    Abstract: Systems and methods for stereo imaging with camera arrays in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating depth information for an object using two or more array cameras that each include a plurality of imagers includes obtaining a first set of image data captured from a first set of viewpoints, identifying an object in the first set of image data, determining a first depth measurement, determining whether the first depth measurement is above a threshold, and when the depth is above the threshold: obtaining a second set of image data of the same scene from a second set of viewpoints located known distances from one viewpoint in the first set of viewpoints, identifying the object in the second set of image data, and determining a second depth measurement using the first set of image data and the second set of image data.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: October 24, 2017
    Assignee: FotoNation Cayman Limited
    Inventors: Kartik Venkataraman, Paul Gallagher, Ankit Jain, Semyon Nisenzon
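    A back-of-the-envelope sketch of the threshold logic in this abstract: one array's short baseline gives a first depth estimate; when that estimate exceeds the range where the short baseline is reliable, a disparity measured across the known spacing between the two arrays (a much wider baseline) is used instead. The pinhole relation z = f·B/d and all numbers are illustrative, and the patent's actual combination of the two image sets is more involved.
```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

def object_depth(disp_within_array_px, disp_across_arrays_px,
                 focal_px, baseline_within_m, baseline_across_m, threshold_m=5.0):
    """Use the wide cross-array baseline only when the first estimate exceeds the threshold."""
    first = depth_from_disparity(focal_px, baseline_within_m, disp_within_array_px)
    if first <= threshold_m:
        return first
    return depth_from_disparity(focal_px, baseline_across_m, disp_across_arrays_px)

if __name__ == "__main__":
    # 1000 px focal length, 2 cm baseline inside one array, 20 cm between arrays (all assumed).
    print(object_depth(disp_within_array_px=2.5, disp_across_arrays_px=25.0,
                       focal_px=1000.0, baseline_within_m=0.02, baseline_across_m=0.20))  # 8.0 m
```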
  • Patent number: 9786087
    Abstract: Systems, devices, and techniques are provided for management of animation collisions. An animation that may collide with another animation is represented with a sequence of one or more animation states, wherein each animation state in the sequence is associated with or otherwise corresponds to a portion of the animation. In order to manage animation collisions, a state machine can be configured to include a group of states that comprises animation states from a group of animations that may collide and states that can control implementation of an animation in response to an animation collision. In one aspect, a state machine manager can implement the group of states in order to implement an animation and manage animation collisions.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: October 10, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Xiangyu Liu, Andrew Dean Christian
  • Patent number: 9755940
    Abstract: On a server, a collision handler is called by a physics simulation engine to categorize a plurality of rigid bodies in some simulation data as either colliding or not colliding. The simulation data relates to a triggering event involving the plurality of rigid bodies and is generated by a simulation of both gravitational trajectories and collisions of rigid bodies. Based on the categorization and the simulation data, a synchronization engine generates synchronization packets for the colliding bodies only and transmits the packets to one or more client computing devices configured to perform a reduced simulation function.
    Type: Grant
    Filed: October 11, 2015
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marco Anastasi, Maurizio Sciglio
  • Patent number: 9741094
    Abstract: A system and method for morphing a design element, which precisely and efficiently morphs the design element within a data file to new target parameters by changing its general proportions, dimensions, or shape. The present invention is generally a computer software program which loads an existing data file that includes one or more design elements, such as parts or an assembly of parts, and then automatically morphs the design element's dimensions, proportions and/or shapes to meet target parameters input by a user. The present invention will create several groups of points corresponding to each surface and associated bounding curves of the existing design. It will then morph each group into a new shape per the user's input requirements, fit the morphed group into an infinite surface, create boundary curves for each morphed group, and then trim the infinite surface to create the new, morphed design element.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: August 22, 2017
    Assignee: Detroit Engineered Products, Inc.
    Inventors: Radhakrishnan Mariappasamy, Radha Damodaran
  • Patent number: 9724605
    Abstract: A recorded experience in a virtual worlds system may be played back by one or more servers instantiating a new instance of a scene using one or more processors of the one or more servers and playing back the recorded experience in the new instance by modeling objects of a recorded initial scene state of the recorded experience in the new instance and updating the recorded initial scene state based on subsequent recorded changes over a time period. A recorded experience file includes the recorded initial scene state and the subsequent recorded changes and is stored in one or more memories of the one or more servers. One or more client devices are in communication with the one or more servers to participate in the new instance.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: August 8, 2017
    Inventors: Brian Shuster, Aaron Burch
  • Patent number: 9691179
    Abstract: In an example system, a computer is caused to function as: a feature detection unit which detects a feature arranged in a real space; an image generation unit which generates an image of a virtual space including a virtual object arranged based on the feature; a display control unit which causes a display apparatus to display an image in such a manner that a user perceives the image of the virtual space superimposed on the real space; a processing specification unit which specifies processing that can be executed in relation to the virtual space, based on the feature; and a menu output unit which outputs a menu for a user to instruct the processing specified by the processing specification unit, in such a manner that the menu can be operated by the user.
    Type: Grant
    Filed: July 3, 2013
    Date of Patent: June 27, 2017
    Assignee: Nintendo Co., Ltd.
    Inventor: Takeshi Hayakawa
  • Patent number: 9679400
    Abstract: Methods and devices provide a quick and intuitive method to launch a specific application, dial a number or send a message by drawing a pictorial key, symbol or shape on a computing device touchscreen, touchpad or other touchsurface. A shape drawn on a touchsurface is compared to one or more code shapes stored in memory to determine if there is a match or correlation. If the entered shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented. The methods also enable communication involving sending a shape or parameters defining a shape from one computing device to another where the shape is compared to code shapes in memory of the receiving computing device. If the received shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 13, 2017
    Assignee: QUALCOMM Incorporated
    Inventor: Mong Suan Yee
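    A sketch of the shape-correlation step in this abstract: the drawn stroke is resampled to a fixed number of points, translated and scaled into a canonical frame, and compared against each stored code shape by mean point distance; if the best score clears a threshold, the linked action fires. The normalization, the distance metric, the threshold, and the "launch_dialer" mapping are illustrative choices, not the patent's.
```python
import numpy as np

def normalize(stroke, samples=32):
    """Resample a stroke to `samples` points by arc length, then center and scale it to unit size."""
    pts = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    even = np.linspace(0.0, t[-1], samples)
    resampled = np.column_stack([np.interp(even, t, pts[:, 0]),
                                 np.interp(even, t, pts[:, 1])])
    resampled -= resampled.mean(axis=0)
    scale = np.abs(resampled).max()
    return resampled / scale if scale > 0 else resampled

def match_code_shape(stroke, code_shapes, threshold=0.25):
    """Return the action linked to the best-correlating stored code shape, if any."""
    drawn = normalize(stroke)
    best_action, best_score = None, np.inf
    for shape, action in code_shapes:
        score = np.mean(np.linalg.norm(drawn - normalize(shape), axis=1))
        if score < best_score:
            best_action, best_score = action, score
    return best_action if best_score < threshold else None

if __name__ == "__main__":
    # Stored "V" code shape linked to launching the dialer (hypothetical mapping).
    code_shapes = [([(0, 0), (1, -2), (2, 0)], "launch_dialer")]
    drawn = [(0.1, 0.0), (1.0, -1.9), (2.1, 0.1)]      # user's slightly wobbly "V"
    print(match_code_shape(drawn, code_shapes))         # launch_dialer
```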
  • Patent number: 9672411
    Abstract: An information processing apparatus may generate resource information used for playing back image content that can be divided into a plurality of zones. The information processing apparatus may include an image generator generating a still image from each of the plurality of zones, a face processor setting each of the plurality of zones to be a target zone and determining whether a face of a specific character which is determined to continuously appear in at least one zone before the target zone is contained in the still image generated from the target zone, and an information generator specifying, on the basis of a determination result obtained for each of the plurality of zones by the face processor, at least one zone in which the face of the specific character continuously appears as a face zone, and generating information concerning the face zone as one item of the resource information.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: June 6, 2017
    Assignee: Sony Corporation
    Inventors: Kaname Ogawa, Hiroshi Jinno, Makoto Yamada, Keiji Kanota
  • Patent number: 9639974
    Abstract: Systems, methods, apparatuses, and computer readable media are provided that cause a two-dimensional image to appear three-dimensional and also create dynamic or animated illustrated images. The systems, methods, apparatuses, and computer readable media implement displacement maps in a number of novel ways in conjunction with, among other software, facial feature recognition software to recognize the areas of the face and allow users to customize those recognized areas. Furthermore, the created displacement maps are used to create all of the dynamic effects of an image in motion.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: May 2, 2017
    Assignee: Facecake Technologies, Inc.
    Inventors: Linda Smith, Clayton Nicholas Graff, John Szeder
  • Patent number: 9632800
    Abstract: A method for accessing information in a software application using a computing device, the computing device comprising one or more processors, the one or more processors for executing a plurality of computer readable instructions, the computer readable instructions for implementing the method for accessing information, the method comprising the steps of determining that a pointer is hovering over an icon, the icon associated with icon specific information, displaying a Tooltip including a heading, a display window and an action button, the action button for launching an action in the application, displaying the icon specific information in the display window, detecting that a user has selected the action button, and launching the action.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: April 25, 2017
    Assignee: ALLSCRIPTS SOFTWARE, LLC
    Inventors: Mary Drechsler Chorley, Leo Benson, Melpakkam Sundar, John Lusk, Cassio Nishiguchi
  • Patent number: 9626878
    Abstract: An information processing apparatus includes a posture estimation unit, an abnormality determination unit, and a presentation unit. The posture estimation unit is configured to estimate a neck posture of a user. The abnormality determination unit is configured to determine whether a posture is abnormal based on the neck posture estimated by the posture estimation unit. The presentation unit is configured to present an abnormality of the posture to the user, when the abnormality determination unit determines that the posture is abnormal.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: April 18, 2017
    Assignee: SONY Corporation
    Inventor: Junichi Rekimoto
  • Patent number: 9626836
    Abstract: Systems for enhanced head-to-head hybrid gaming are provided.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: April 18, 2017
    Assignee: Gamblit Gaming, LLC
    Inventors: Miles Arnone, Frank Cire, Caitlyn Ross
  • Patent number: 9607573
    Abstract: A method, system and computer program for modifying avatar motion. The method includes receiving an input motion, determining an input motion model for the input motion sequence, and modifying an avatar motion model associated with the stored avatar to approximate the input motion model for the input motion sequence when the avatar motion model does not approximate the input motion model. The stored avatar is presented after the avatar motion model associated with the stored avatar is modified to approximate the input motion model for the input motion sequence.
    Type: Grant
    Filed: September 17, 2014
    Date of Patent: March 28, 2017
    Assignee: International Business Machines Corporation
    Inventors: Dimitri Kanevsky, James R. Kozloski, Clifford A. Pickover
  • Patent number: 9600133
    Abstract: Techniques for displaying object animations on a slide are disclosed. In accordance with these techniques, objects on a slide may be assigned actions when generating or editing the slide. The effects of the actions on the slide are depicted using one or more respective representations which represent the slide as it will appear after implementation of one or more corresponding actions.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: March 21, 2017
    Assignee: APPLE INC.
    Inventors: Paul Bradford Vaughan, James Eric Tilton, Christopher Morgan Connors, Ralph Lynn Melton, Jay Christopher Capela, Ted Stephen Boda