Motion Planning Or Control Patents (Class 345/474)
  • Patent number: 9819711
    Abstract: A method of establishing a collaborative platform comprising performing a collaborative interactive session for a plurality of members, and analyzing affect and/or cognitive features of some or all of the plurality of members, wherein some or all of the plurality of members from different human interaction platforms interact via the collaborative platform, wherein the affect comprises an experience of feeling or emotion, and wherein the cognitive features comprise features in a cognitive state, the cognitive state comprising a state of an internal mental process.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: November 14, 2017
    Inventors: Neil S. Davey, Sonya Davey, Abhishek Biswas
  • Patent number: 9811237
    Abstract: A computer system and method of operation thereof are provided that allow interactive navigation and exploration of logical processes. The computer system employs a data architecture comprising a network of nodes connected by branches. Each node in the network represents a decision point in the process that allows the user to select the next step in the process and each branch in the network represents a step or a sequence of steps in the logical process. The network is constructed directly from the target logical process. Navigation data such as image frame sequences, stages in the logical process, and other related information are associated with the elements of the network. This establishes a direct relationship between steps in the process and the data that represent them. From such an organization, the user may tour the process, viewing the image sequences associated with each step and choosing among different steps at will.
    Type: Grant
    Filed: April 30, 2003
    Date of Patent: November 7, 2017
    Assignee: III HOLDINGS 2, LLC
    Inventor: Rodica Schileru
  • Patent number: 9811555
    Abstract: A user performs a gesture with a hand-held or wearable device capable of sensing its own orientation. Orientation data, in the form of a sequence of rotation vectors, is collected throughout the duration of the gesture. To construct a trace representing the shape of the gesture and the direction of device motion, the orientation data is processed by a robotic chain model with four or fewer degrees of freedom, simulating a set of joints moved by the user to perform the gesture (e.g., a shoulder and an elbow). To classify the gesture, the trace is compared to the contents of a training database including many different users' versions of the gesture and is analyzed by a learning module such as a support vector machine.
    Type: Grant
    Filed: September 27, 2014
    Date of Patent: November 7, 2017
    Assignee: Intel Corporation
    Inventors: Nicholas G. Mitri, Christopher B. Wilkerson, Mariette Awad
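
A minimal Python sketch of the idea in patent 9811555 above: device orientation samples drive a reduced arm chain to produce a wrist trace, which a support-vector classifier then labels. The fixed shoulder, segment lengths, axis convention, and function names are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R
from sklearn.svm import SVC

UPPER_ARM = 0.30   # metres, assumed shoulder-to-elbow length
FOREARM = 0.25     # metres, assumed elbow-to-wrist length

def trace_from_orientations(quats):
    """Map device rotation vectors (quaternions in x, y, z, w order) to a 3-D
    wrist trace using a coarse chain: fixed shoulder, upper arm hanging straight
    down, and a forearm whose direction follows the device orientation."""
    elbow = np.array([0.0, 0.0, -UPPER_ARM])          # shoulder at the origin
    trace = []
    for q in quats:
        forearm_dir = R.from_quat(q).apply([1.0, 0.0, 0.0])
        trace.append(elbow + FOREARM * forearm_dir)
    return np.asarray(trace)

def train_classifier(traces, labels):
    """Fit a support-vector classifier on flattened, equal-length traces
    gathered from many users' versions of each gesture."""
    X = np.stack([t.ravel() for t in traces])
    return SVC(kernel="rbf").fit(X, labels)
```
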
  • Patent number: 9802119
    Abstract: A multi-user virtual reality universe (VRU) process receives input from multiple remote clients to manipulate avatars through a modeled 3-D environment. A VRU host models movement of avatars in the VRU environment in response to client input, with each user providing input for control of a corresponding avatar. The modeled VRU data is provided by the host to client workstations for display of a simulated environment visible to all participants. The host maintains data for selected modeled objects or areas that is personalized for specific users in response to client input. The host includes the personalized data when modeling the VRU environment. The host may segregate the VRU data provided to different clients participating in the same VRU environment so as to limit personalized data to authorized users, while all users receive common data.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: October 31, 2017
    Inventor: Brian Mark Shuster
  • Patent number: 9805491
    Abstract: The disclosed implementations describe techniques and workflows for a computer graphics (CG) animation system. In some implementations, systems and methods are disclosed for representing scene composition and performing underlying computations within a unified generalized expression graph with cycles. Disclosed are natural mechanisms for level-of-detail control, adaptive caching, minimal re-compute, lazy evaluation, predictive computation and progressive refinement. The disclosed implementations provide real-time guarantees for minimum graphics frame rates and support automatic tradeoffs between rendering quality, accuracy and speed. The disclosed implementations also support new workflow paradigms, including layered animation and motion-path manipulation of articulated bodies.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: October 31, 2017
    Assignee: DIGITALFISH, INC.
    Inventors: Daniel Lawrence Herman, Mark J. Oftedal
  • Patent number: 9800859
    Abstract: Systems and methods for stereo imaging with camera arrays in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating depth information for an object using two or more array cameras that each include a plurality of imagers includes obtaining a first set of image data captured from a first set of viewpoints, identifying an object in the first set of image data, determining a first depth measurement, determining whether the first depth measurement is above a threshold, and when the depth is above the threshold: obtaining a second set of image data of the same scene from a second set of viewpoints located known distances from one viewpoint in the first set of viewpoints, identifying the object in the second set of image data, and determining a second depth measurement using the first set of image data and the second set of image data.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: October 24, 2017
    Assignee: FotoNation Cayman Limited
    Inventors: Kartik Venkataraman, Paul Gallagher, Ankit Jain, Semyon Nisenzon
  • Patent number: 9786087
    Abstract: Systems, devices, and techniques are provided for management of animation collisions. An animation that may collide with another animation is represented with a sequence of one or more animation states, wherein each animation state in the sequence is associated with or otherwise corresponds to a portion of the animation. In order to manage animation collisions, a state machine can be configured to include a group of states that comprises animation states from a group of animations that may collide and states that can control implementation of an animation in response to an animation collision. In one aspect, a state machine manager can implement the group of states in order to implement an animation and manage animation collisions.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: October 10, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Xiangyu Liu, Andrew Dean Christian
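
A hedged sketch of the collision-management idea in patent 9786087 above: one state machine whose states come from the animations that may collide, plus control states entered when a second animation is requested mid-way. The animation names and transition choices are invented for illustration.

```python
class AnimationStateMachine:
    """Group of states covering colliding animations plus collision-control states."""

    def __init__(self):
        self.current = "idle"
        # Map (current state, requested animation) -> state actually entered.
        self.transitions = {
            ("idle", "wave"): "wave_playing",
            ("idle", "jump"): "jump_playing",
            # Collision cases: a second animation requested mid-animation is
            # handled by a control state instead of cutting the first one off.
            ("wave_playing", "jump"): "blend_wave_to_jump",
            ("jump_playing", "wave"): "queue_wave_after_jump",
        }

    def request(self, animation):
        """Advance the machine for a requested animation; unknown combinations
        leave the current state unchanged (the new request is ignored)."""
        self.current = self.transitions.get((self.current, animation), self.current)
        return self.current
```
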
  • Patent number: 9755940
    Abstract: On a server, a collision handler is called by a physics simulation engine to categorize a plurality of rigid bodies in some simulation data as either colliding or not colliding. The simulation data relates to a triggering event involving the plurality of rigid bodies and is generated by a simulation of both gravitational trajectories and collisions of rigid bodies. Based on the categorization and the simulation data, a synchronization engine generates synchronization packets for the colliding bodies only and transmits the packets to one or more client computing devices configured to perform a reduced simulation function.
    Type: Grant
    Filed: October 11, 2015
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marco Anastasi, Maurizio Sciglio
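
A small sketch, under assumed field names, of the server-side step described in patent 9755940 above: after the physics simulation categorizes rigid bodies, only the colliding ones are serialized into synchronization packets for clients running a reduced simulation.

```python
from dataclasses import dataclass

@dataclass
class RigidBody:
    body_id: int
    position: tuple
    velocity: tuple
    colliding: bool  # set by the collision handler during the simulation step

def build_sync_packets(bodies):
    """Return one packet per colliding body; non-colliding bodies are left to
    the clients' own gravity-only trajectory integration."""
    return [
        {"id": b.body_id, "pos": b.position, "vel": b.velocity}
        for b in bodies
        if b.colliding
    ]
```
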
  • Patent number: 9741094
    Abstract: A system and method for morphing a design element which precisely and efficiently morphs a design element within a data file to new target parameters by changing its general proportions, dimensions or shape. The present invention is generally a computer software program which loads an existing data file which includes one or more design elements, such as parts or an assembly of parts, and then automatically morphs the design element's dimensions, proportions and/or shapes to meet target parameters input by a user. The present invention will create several groups of points corresponding to each surface and associated bounding curves of the existing design. It will then morph each group into a new shape as per the input requirements by the user, fit the morphed group into an infinite surface, create boundary curves for each morphed group and then trim the infinite surface to create the new, morphed design element.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: August 22, 2017
    Assignee: Detroit Engineered Products, Inc.
    Inventors: Radhakrishnan Mariappasamy, Radha Damodaran
  • Patent number: 9724605
    Abstract: A recorded experience in a virtual worlds system may be played back by one or more servers instantiating a new instance of a scene using one or more processors of the one or more servers and playing back the recorded experience in the new instance by modeling objects of a recorded initial scene state of the recorded experience in the new instance and updating the recorded initial scene state based on subsequent recorded changes over a time period. A recorded experience file includes the recorded initial scene state and the subsequent recorded changes and is stored in one or more memories of the one or more servers. One or more client devices are in communication with the one or more servers to participate in the new instance.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: August 8, 2017
    Inventors: Brian Shuster, Aaron Burch
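
A rough sketch of the playback scheme in patent 9724605 above: a recorded experience file carries an initial scene state plus time-stamped changes, and a new instance is seeded with that state and updated change by change. The file layout and helper names shown here are assumptions.

```python
def play_back(recorded_file, apply_change, until_time):
    """recorded_file: {"initial_state": dict, "changes": [(t, change), ...]}.
    apply_change(scene, change) mutates the new instance's scene state."""
    scene = dict(recorded_file["initial_state"])      # model the recorded objects
    for t, change in sorted(recorded_file["changes"], key=lambda tc: tc[0]):
        if t > until_time:                            # stop at the requested playback time
            break
        apply_change(scene, change)
    return scene
```
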
  • Patent number: 9691179
    Abstract: In an example system, a computer is caused to function as: a feature detection unit which detects a feature arranged in a real space; an image generation unit which generates an image of a virtual space including a virtual object arranged based on the feature; a display control unit which causes a display apparatus to display an image in such a manner that a user perceives the image of the virtual space superimposed on the real space; a processing specification unit which specifies processing that can be executed in relation to the virtual space, based on the feature; and a menu output unit which outputs a menu for a user to instruct the processing specified by the processing specification unit, in such a manner that the menu can be operated by the user.
    Type: Grant
    Filed: July 3, 2013
    Date of Patent: June 27, 2017
    Assignee: Nintendo Co., Ltd.
    Inventor: Takeshi Hayakawa
  • Patent number: 9679400
    Abstract: Methods and devices provide a quick and intuitive method to launch a specific application, dial a number or send a message by drawing a pictorial key, symbol or shape on a computing device touchscreen, touchpad or other touchsurface. A shape drawn on a touchsurface is compared to one or more code shapes stored in memory to determine if there is a match or correlation. If the entered shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented. The methods also enable communication involving sending a shape or parameters defining a shape from one computing device to another where the shape is compared to code shapes in memory of the receiving computing device. If the received shape correlates to a stored code shape, an application, file, function or keystroke sequence linked to the correlated code shape is implemented.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 13, 2017
    Assignee: QUALCOMM Incorporated
    Inventor: Mong Suan Yee
  • Patent number: 9672411
    Abstract: An information processing apparatus may generate resource information used for playing back image content that can be divided into a plurality of zones. The information processing apparatus may include an image generator generating a still image from each of the plurality of zones, a face processor setting each of the plurality of zones to be a target zone and determining whether a face of a specific character which is determined to continuously appear in at least one zone before the target zone is contained in the still image generated from the target zone, and an information generator specifying, on the basis of a determination result obtained for each of the plurality of zones by the face processor, at least one zone in which the face of the specific character continuously appears as a face zone, and generating information concerning the face zone as one item of the resource information.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: June 6, 2017
    Assignee: Sony Corporation
    Inventors: Kaname Ogawa, Hiroshi Jinno, Makoto Yamada, Keiji Kanota
  • Patent number: 9639974
    Abstract: Systems, methods, apparatuses, and computer readable media are provided that cause a two-dimensional image to appear three-dimensional and also create dynamic or animated illustrated images. The systems, methods, apparatuses, and computer readable media implement displacement maps in a number of novel ways in conjunction with, among other software, facial feature recognition software to recognize the areas of the face and allow the users to customize those areas that are recognized. Furthermore, the created displacement maps are used to create all of the dynamic effects of an image in motion.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: May 2, 2017
    Assignee: Facecake Technologies, Inc.
    Inventors: Linda Smith, Clayton Nicholas Graff, John Szeder
  • Patent number: 9632800
    Abstract: A method for accessing information in a software application using a computing device, the computing device comprising one or more processors, the one or more processors for executing a plurality of computer readable instructions, the computer readable instructions for implementing the method for accessing information, the method comprising the steps of determining that a pointer is hovering over an icon, the icon associated with icon specific information, displaying a Tooltip including a heading, a display window and an action button, the action button for launching an action in the application, displaying the icon specific information in the display window, detecting that a user has selected the action button, and launching the action.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: April 25, 2017
    Assignee: ALLSCRIPTS SOFTWARE, LLC
    Inventors: Mary Drechsler Chorley, Leo Benson, Melpakkam Sundar, John Lusk, Cassio Nishiguchi
  • Patent number: 9626878
    Abstract: An information processing apparatus includes a posture estimation unit, an abnormality determination unit, and a presentation unit. The posture estimation unit is configured to estimate a neck posture of a user. The abnormality determination unit is configured to determine whether a posture is abnormal based on the neck posture estimated by the posture estimation unit. The presentation unit is configured to present an abnormality of the posture to the user, when the abnormality determination unit determines that the posture is abnormal.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: April 18, 2017
    Assignee: SONY Corporation
    Inventor: Junichi Rekimoto
  • Patent number: 9626836
    Abstract: Systems for enhanced head-to-head hybrid gaming are provided.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: April 18, 2017
    Assignee: Gamblit Gaming, LLC
    Inventors: Miles Arnone, Frank Cire, Caitlyn Ross
  • Patent number: 9607573
    Abstract: A method, system and computer program for modifying avatar motion. The method includes receiving an input motion, determining an input motion model for the input motion sequence, and modifying an avatar motion model associated with the stored avatar to approximate the input motion model for the input motion sequence when the avatar motion model does not approximate the input motion model. The stored avatar is presented after the avatar motion model associated with the stored avatar is modified to approximate the input motion model for the input motion sequence.
    Type: Grant
    Filed: September 17, 2014
    Date of Patent: March 28, 2017
    Assignee: International Business Machines Corporation
    Inventors: Dimitri Kanevsky, James R. Kozloski, Clifford A. Pickover
  • Patent number: 9600133
    Abstract: Techniques for displaying object animations on a slide are disclosed. In accordance with these techniques, objects on a slide may be assigned actions when generating or editing the slide. The effects of the actions on the slide are depicted using one or more respective representations which represent the slide as it will appear after implementation of one or more corresponding actions.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: March 21, 2017
    Assignee: APPLE INC.
    Inventors: Paul Bradford Vaughan, James Eric Tilton, Christopher Morgan Connors, Ralph Lynn Melton, Jay Christopher Capela, Ted Stephen Boda
  • Patent number: 9589000
    Abstract: A machine-implemented method includes establishing a virtual or augmented reality entity, and establishing a state for the entity having a state time and state properties including a state spatial arrangement. The data entity and state are stored, and are subsequently received and outputted at a time other than the state time so as to exhibit a “virtual history machine” functionality. An apparatus includes a processor, a data store, and an output. A data entity establisher, a state establisher, a storer, a data entity receiver, a state receiver, and an outputter are instantiated on the processor.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: March 7, 2017
    Assignee: ATHEER, INC.
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
  • Patent number: 9575594
    Abstract: A virtual object can be controlled using one or more touch interfaces. A location for a first touch input can be determined on a first touch interface. A location for a second touch input can be determined on a second touch interface. A three-dimensional segment can be generated using the location of the first touch input, the location of the second touch input, and a pre-determined spatial relationship between the first touch interface and the second touch interface. The virtual object can be manipulated using the three-dimensional segment as a control input.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: February 21, 2017
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Ruxin Chen
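
A brief sketch of the control scheme in patent 9575594 above: touches on two touch interfaces with a known spatial relationship define a three-dimensional segment used as a control input. The front/rear layout and device thickness are illustrative assumptions.

```python
import numpy as np

DEVICE_THICKNESS = 0.012  # metres between front and rear touch surfaces (assumed)

def touch_segment(front_xy, rear_xy, thickness=DEVICE_THICKNESS):
    """Build the 3-D segment from a front-surface touch (z = 0) to a
    rear-surface touch (z = -thickness)."""
    p_front = np.array([front_xy[0], front_xy[1], 0.0])
    p_rear = np.array([rear_xy[0], rear_xy[1], -thickness])
    return p_front, p_rear

def segment_direction(p_front, p_rear):
    """Unit direction of the segment, usable e.g. as an axis for manipulating
    the virtual object."""
    d = p_rear - p_front
    return d / np.linalg.norm(d)
```
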
  • Patent number: 9547379
    Abstract: A method for controlling an air mouse is disclosed. The method includes receiving a control mode of an air mouse sent by a set top box; acquiring angular velocities and moving time of the air mouse at various directions; determining speeds of the air mouse at various directions according to the control mode and the angular velocities of the air mouse at various directions; and calculating displacements of the air mouse at various directions according to the moving time and the speeds of the air mouse at various directions, and sending the displacements of the air mouse at various directions to the set top box, so as to control movement of a screen cursor.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: January 17, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Rao Fu, Jun Lu
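
A minimal sketch of the displacement computation described in patent 9547379 above, with invented control-mode gains: angular velocity per axis is converted to a cursor speed according to the control mode and integrated over the movement time before being sent back to the set top box.

```python
MODE_GAIN = {"precise": 0.4, "normal": 1.0, "fast": 2.5}   # illustrative gains

def cursor_displacement(angular_velocity, move_time, control_mode):
    """angular_velocity: dict of axis -> rad/s; move_time: seconds.
    Returns axis -> displacement in screen units."""
    gain = MODE_GAIN[control_mode]
    speeds = {axis: gain * w for axis, w in angular_velocity.items()}
    return {axis: v * move_time for axis, v in speeds.items()}
```
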
  • Patent number: 9542975
    Abstract: Methods and systems for a centralized database for 3-D and other information in videos are presented. A centralized database contains video metadata such as camera, lighting, sound, object, depth, and annotation data that may be queried for and used in the editing of videos, including the addition and removal of objects and sounds. The metadata stored in the centralized database may be open to the public and admit contributor metadata.
    Type: Grant
    Filed: October 25, 2010
    Date of Patent: January 10, 2017
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Steven Osman, Vlad Stamate
  • Patent number: 9536386
    Abstract: A system for personalizable hybrid games including a gambling game and an entertainment game is provided.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: January 3, 2017
    Assignee: Gamblit Gaming, LLC
    Inventors: Miles Arnone, Eric Meyerhofer, Frank Cire
  • Patent number: 9536251
    Abstract: A computer-implemented method for providing advertisements in an augmented reality environment to a user includes receiving data related to a marker, the marker placed amongst one or more physical objects captured by the video camera. The computer-implemented method also includes retrieving dynamic digital content associated with the marker. Further, the computer-implemented method includes displaying the dynamic digital content amongst the one or more physical objects. Furthermore, the computer-implemented method includes receiving a user interaction with the dynamic digital content. Moreover, the computer-implemented method includes performing an action based on the user interaction.
    Type: Grant
    Filed: November 15, 2011
    Date of Patent: January 3, 2017
    Assignee: Excalibur IP, LLC
    Inventors: Wyatt (Ling-Wei) Huang, Balduran (Chia-Chun) Chang, Connie (Shih-Ting) Huang
  • Patent number: 9519274
    Abstract: In a method for an electronic device to adjust fool-proofing functions of operations, an algorithm corresponding to each of the operations, and ranges for triggering the fool-proofing functions of the operations are preset. When an operation inputted by an operator is obtained, the method calculates a skilled value of the operation according to reference parameters of the operator and an algorithm corresponding to the operation. The method further determines a fool-proofing function of the operation that is triggered by the electronic device according to the skilled value and the ranges for triggering the fool-proofing functions, and adjusts the electronic device to execute the determined fool-proofing function.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: December 13, 2016
    Assignee: Shenzhen Airdrawing Technology Service Co., Ltd
    Inventors: Ke-Fei Lin, Shan-Chuan Jeng, Chien-Fa Yeh, Chung-I Lee
  • Patent number: 9513766
    Abstract: Embodiments relate to a graphical user interface to be displayed on a display apparatus, the graphical user interface comprising an addressable window which is assigned to a selectable object, where the window has a list with a plurality of buttons, where there is assigned to each button an action of a particular type in relation to the object assigned to the window, where there is assigned to one button a formation action which can be performed with elements from a plurality of element types, where the performance of the formation action requires the selection of the number of elements to be used for the formation action from a maximum number of elements for at least one element type. A further formation action can be assigned to a further button, where the element types and the number of elements to be used for the further formation action are determined by the element types and elements used in the last performed formation action.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: December 6, 2016
    Assignee: XYRALITY GMBH
    Inventor: Alexander Spohr
  • Patent number: 9463386
    Abstract: A gaming environment may be established, by executing a game engine module to provide an interactive game instance, and instantiating a state machine instance using one or both of a state machine client module or a state machine server module. In an example, during execution of the game engine module, scripting commands within a state machine definition may be parsed and executed to obtain information indicative of one or more of a state of an in-game object or a state transition of an in-game object. An in-game object may be controlled within the game instance via the state machine using at least a portion of the information obtained from parsing and executing the scripting commands. Use of the state machine definitions in conjunction with the scripting commands may enable representation of complex scenarios for virtual objects and events in the gaming environment in a simplified format.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: October 11, 2016
    Assignee: Zynga Inc.
    Inventors: Peter Chapman, Andrew Foster, Michael Capps
  • Patent number: 9454925
    Abstract: According to an aspect, an image degradation prevention module for reducing image degradation includes a screen region monitor configured to derive light information for each of a plurality of regions of a display screen, an element movement detector configured to derive element motion information for a plurality of display elements displayed in the plurality of regions, and a decision engine configured to select a corrective action among a plurality of corrective actions for at least one display element of the plurality of display elements to reduce image degradation based on the light information and the element motion information. The light information may include light intensity information indicating a rate of change in light intensity of pixels within each region. The element motion information may include a rate of movement for each display element within the display screen.
    Type: Grant
    Filed: September 10, 2014
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: James Grafton, James Kent
  • Patent number: 9449416
    Abstract: The invention relates to a method and system of forming an animation of a virtual object within a virtual environment, and a storage medium storing a computer program for carrying out such a method. The virtual object comprises a plurality of object parts, and one or more predetermined object part groups each being a sequence of linked object parts. The method includes generating a target configuration for the parts of the object part group, using a scale factor to scale the target configuration.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: September 20, 2016
    Assignee: Zynga Inc.
    Inventors: Danny Chapman, Thomas Lowe
  • Patent number: 9448840
    Abstract: A runtime management system is described herein that allows a hosting layer to dynamically control an underlying runtime to selectively turn on and off various subsystems of the runtime to save power and extend battery life of devices on which the system operates. The hosting layer has information about usage of the runtime that is not available within the runtime, and can do a more effective job of disabling parts of the runtime that will not be needed without negatively affecting application performance or device responsiveness. The runtime management system includes a protocol of communication between arbitrary hosts and underlying platforms to expose a set of options to allow the host to selectively turn parts of a runtime on and off depending on varying environmental pressures. Thus, the runtime management system provides more effective use of potentially scarce power resources available on mobile platforms.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: September 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Akhilesh Kaza, Gunjan A. Shah, Shawn T. Oster, Jonathan D. Sheller, Alan C. T. Liu, Nimesh I. Amin, Randal J. Ramig
  • Patent number: 9443137
    Abstract: Provided is an apparatus and method for detecting body parts, the method including identifying a group of sub-images relevant to a body part in an image to be detected, assigning a reliability coefficient for the body part to the sub-images in the group of sub-images based on a basic vision feature of the sub-images and an extension feature of the sub-images to neighboring regions, and detecting a location of the body part by overlaying sub-images having reliability coefficients higher than a threshold value.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: September 13, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Liu Rong, Zhang Fan, Chen Maolin, Chang Kyu Choi, Ji Yeun Kim, Kee Chang Lee
  • Patent number: 9424463
    Abstract: A system for image manipulation enables an improved video conferencing experience. The system includes a camera; a display screen adjacent to the camera; a processor coupled to the camera and the display screen; and a memory coupled to the processor. Instructions executable by the processor enable receiving a source image from the camera and generating a synthetic image based upon the source image. The synthetic image corresponds to a view of a virtual camera located at the display screen.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: August 23, 2016
    Assignee: Commonwealth Scientific and Industrial Research Organisation
    Inventor: Simon Lucey
  • Patent number: 9426606
    Abstract: An electronic apparatus is provided. The electronic apparatus includes a communication unit configured to transmit an altered apparatus Identification (ID) and a controller configured to detect a user's action in a pairing mode of the electronic apparatus, to determine additional information according to the user's action, and to transmit the altered apparatus ID including an apparatus ID of the electronic apparatus and the additional information to another electronic apparatus that can be paired with the electronic apparatus.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: August 23, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ki-Wan Lee, Jong-Hyun Ahn
  • Patent number: 9377901
    Abstract: The present invention discloses a display method for determining a user's operations and thereby reducing the rate of erroneous responses by electronic devices. The method includes determining a first image to be projected; projecting the first image, the first image being capable of forming on the touch sensing unit when the second electronic device is in the projection area and the touch sensing unit faces towards the first electronic device; receiving first operation information transmitted by the second electronic device, the first operation information being information acquired by the second electronic device in response to a first operation performed by a user on the touch sensing unit; determining a second image based on the first image and the first operation information; and projecting the second image. The present invention also discloses a display control method, and electronic devices for implementing the two methods described above.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: June 28, 2016
    Assignees: BEIJING LENOVO SOFTWARE LTD., LENOVO (BEIJING) CO., LTD.
    Inventor: Zhiqiang He
  • Patent number: 9373191
    Abstract: The disclosed subject matter relates to computer implemented methods for generating an exterior geometry of a building based on a corresponding collection of interior geometry. In one aspect, a method includes receiving a collection of interior geometry data of a building. The interior geometry data of the building corresponds to one or more levels. Each of the level(s) is associated with a corresponding vertical span, and to one or more 2-D section polygons. The method further includes extruding the 2-D section polygons into 2.5-D section polygons, by assigning to each of the 2-D section polygons, the vertical span associated with the level(s) to which the 2-D section polygons correspond. The method further includes constructing a 2.5-D merged polygon set based on the extruded 2.5-D section polygons. The outer shell of the 2.5-D merged polygon set corresponds to an exterior geometry corresponding to the building.
    Type: Grant
    Filed: January 14, 2013
    Date of Patent: June 21, 2016
    Assignee: Google Inc.
    Inventors: Sascha Benjamin Brawer, Andrew Lookingbill, Brian Edmond Brewington, Michael Edward Goss
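
A sketch of the extrusion step in patent 9373191 above: each 2-D section polygon is paired with the vertical span of its level to form a 2.5-D prism. The data classes are assumptions; the subsequent polygon merge and outer-shell extraction would need a geometry library and are omitted.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Level:
    floor_z: float
    ceiling_z: float
    section_polygons: List[List[Tuple[float, float]]]  # each polygon as an (x, y) ring

@dataclass
class Prism:                      # a "2.5-D" section polygon
    polygon: List[Tuple[float, float]]
    z_min: float
    z_max: float

def extrude_levels(levels: List[Level]) -> List[Prism]:
    """Assign each 2-D section polygon the vertical span of the level it belongs to."""
    return [
        Prism(polygon=poly, z_min=lvl.floor_z, z_max=lvl.ceiling_z)
        for lvl in levels
        for poly in lvl.section_polygons
    ]
```
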
  • Patent number: 9348430
    Abstract: A method and apparatus that incorporate teachings of the present disclosure may include, for example, receiving at a mobile communication device a video stream from a computing device. The video stream is associated with images generated by a software application and is transmitted by the computing device responsive to a request to redirect control of the software application to the mobile communication device. The method may also include presenting the streamed video at the mobile communication device and transmitting to the computing device a stimulation of a remote user input function associated with the mobile communication device, where the transmitted stimulation corresponds to at least one action of a plurality of associable actions that can be executed by the software application. Additional embodiments are disclosed.
    Type: Grant
    Filed: February 6, 2012
    Date of Patent: May 24, 2016
    Assignee: STEELSERIES ApS
    Inventors: Bruce Hawver, Jacob Wolff-Petersen
  • Patent number: 9345967
    Abstract: A method, device, and system for interacting with a virtual character in a smart terminal are provided. The method can capture and display a user reality scene image on a screen of the smart terminal and superimpose the virtual character on the user reality scene image; acquire the position of the smart terminal during movement; and determine whether the change in the position of the smart terminal exceeds a preset threshold value. If the change exceeds the preset threshold value, the method moves the virtual character in the user reality scene image according to the current position of the smart terminal. The interaction scene between the user and the virtual character is not a virtual scene but a true reality scene, neither the number nor the content of the scenes is limited, and interaction in this manner is more efficient.
    Type: Grant
    Filed: March 21, 2014
    Date of Patent: May 24, 2016
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Zhenwei Zhang, Ling Wang, Fen Xiao, Zhehui Wu
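
A minimal sketch of the interaction loop in patent 9345967 above: the virtual character superimposed on the reality scene image moves only when the smart terminal's position has changed by more than a preset threshold. The distance metric (horizontal plane only) and the threshold value are illustrative assumptions.

```python
import math

THRESHOLD = 0.15  # metres of device movement before the character reacts (assumed)

def maybe_move_character(character_pos, last_device_pos, current_device_pos):
    """Return the (possibly updated) character position and the reference
    device position used for the next comparison."""
    dx = current_device_pos[0] - last_device_pos[0]
    dy = current_device_pos[1] - last_device_pos[1]
    if math.hypot(dx, dy) <= THRESHOLD:
        return character_pos, last_device_pos           # change too small: no update
    # Shift the character in the reality-scene image by the device motion.
    new_pos = (character_pos[0] + dx, character_pos[1] + dy)
    return new_pos, current_device_pos
```
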
  • Patent number: 9344707
    Abstract: A depth sensor obtains images of articulated portions of a user's body such as the hand. A predefined model of the articulated body portions is provided. Representative attract points of the model are matched to centroids of the depth sensor data, and a rigid transform of the model is performed, in an initial, relatively coarse matching process. This matching process is then refined in a non-rigid transform of the model, using attract point-to-centroid matching. In a further refinement, an iterative process rasterizes the model to provide depth pixels of the model, and compares the depth pixels of the model to the depth pixels of the depth sensor. The refinement is guided by whether the depth pixels of the model are overlapping or non-overlapping with the depth pixels of the depth sensor. Collision, distance and angle constraints are also imposed on the model.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: May 17, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kyungsuk David Lee, Alexandru Balan
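
The coarse alignment stage in patent 9344707 above matches model attract points to depth-data centroids with a rigid transform. The sketch below uses a standard Kabsch/SVD least-squares fit as one plausible way to compute such a transform; the patent's own matching procedure may differ, and the rows are assumed to be pre-matched.

```python
import numpy as np

def rigid_transform(attract_points, centroids):
    """Both arrays are (N, 3) with rows already matched. Returns (R, t) such
    that R @ p + t best maps the attract points onto the centroids (least squares)."""
    p_mean = attract_points.mean(axis=0)
    c_mean = centroids.mean(axis=0)
    H = (attract_points - p_mean).T @ (centroids - c_mean)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])            # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_mean - R @ p_mean
    return R, t
```
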
  • Patent number: 9338437
    Abstract: Apparatus and method for reconstructing a high-density three-dimensional (3D) image are provided. The method includes: generating an initial 3D image by matching a first image captured using a first camera and a second image captured using a second camera; searching for a first area and a second area from the initial 3D image by using a number of characteristic points included in the initial 3D image; detecting a plane from a divided first area; filtering a divided second area; and synthesizing the detected plane and the filtered second area.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: May 10, 2016
    Assignee: Hanwha Techwin Co., Ltd.
    Inventor: Soon-Min Bae
  • Patent number: 9330464
    Abstract: Embodiments are disclosed that relate to controlling a depth camera. In one example, a method comprises emitting light from an illumination source toward a scene through an optical window, selectively routing at least a portion of the light emitted from the illumination source to an image sensor such that the portion of the light is not transmitted through the optical window, receiving an output signal generated by the image sensor based on light reflected by the scene, the output signal including at least one depth value of the scene, and adjusting the output signal based on the selectively routed portion of the light.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: May 3, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Nathan Ackerman, Andrew C. Goris, Amir Nevet, David Mandelboum, Asaf Pellman
  • Patent number: 9329598
    Abstract: A method of localizing a mobile robot includes receiving sensor data of a scene about the robot and executing a particle filter having a set of particles. Each particle has associated maps representing a robot location hypothesis. The method further includes updating the maps associated with each particle based on the received sensor data, assessing a weight for each particle based on the received sensor data, selecting a particle based on its weight, and determining a location of the robot based on the selected particle.
    Type: Grant
    Filed: April 13, 2015
    Date of Patent: May 3, 2016
    Assignee: iRobot Corporation
    Inventors: Robert Todd Pack, Scott R. Lenser, Justin H. Kearns, Orjeta Taka
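
A much-simplified sketch of one particle-filter localization step in the spirit of patent 9329598 above. Each particle here is only a pose hypothesis (the per-particle maps the patent associates with each particle are omitted), and the motion noise and scoring hook are assumptions.

```python
import random

def localize_step(particles, sensor_data, motion, score_fn):
    """particles: list of (x, y, heading); motion: (dx, dy, dtheta);
    score_fn(pose, sensor_data) -> likelihood of the observation given the pose."""
    # 1. Motion update with a little noise so hypotheses stay diverse.
    moved = [(x + motion[0] + random.gauss(0, 0.01),
              y + motion[1] + random.gauss(0, 0.01),
              th + motion[2] + random.gauss(0, 0.005))
             for x, y, th in particles]
    # 2. Weight each particle against the received sensor data.
    weights = [score_fn(p, sensor_data) for p in moved]
    total = sum(weights)
    if total == 0:                         # degenerate case: fall back to uniform
        weights = [1.0 / len(moved)] * len(moved)
    else:
        weights = [w / total for w in weights]
    # 3. Resample in proportion to weight, then report the best-weighted hypothesis.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    best = max(zip(moved, weights), key=lambda pw: pw[1])[0]
    return resampled, best
```
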
  • Patent number: 9317175
    Abstract: Systems and approaches provide for a user interface (UI) that is based on the position of a user's head with respect to a computing device. In particular, a three-dimensional (3D) rendering engine that is independent of a particular operating system can be integrated with the UI framework of the operating system such that a window or view into a fully 3D world can be drawn using the independent renderer. This window or view can then be laid out and manipulated in a manner similar to other elements of the UI framework. Further, the 3D window or view can be configured to monitor head tracking data as input events to the UI framework. The contents of the window or view can be redrawn or rendered based on the head tracking data to simulate three-dimensionality of the content.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: April 19, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Christopher Wayne Lockhart
  • Patent number: 9292954
    Abstract: Systems and methods can be used to render an animated scene using a temporal voxel buffer. A voxel buffer including a plurality of voxel arrays is received. A voxel array includes at least one time value associated with a voxel and at least one parameter value associated with each time value. For each pixel of an image to be rendered, a plurality of rays are cast through the voxel grid. A time value is associated with each ray. A parameter value is sampled at each voxel along a ray at the time associated with the ray. A pixel value is determined based on the sampled parameter values for the plurality of rays.
    Type: Grant
    Filed: January 17, 2014
    Date of Patent: March 22, 2016
    Assignee: PIXAR
    Inventor: Carl Magnus Wrenninge
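
A hedged sketch of sampling a temporal voxel buffer as described in patent 9292954 above: every voxel stores time-stamped parameter values, each ray cast for a pixel carries its own time, and voxels along the ray are evaluated at that time. The linear interpolation and the simple additive accumulation are illustrative choices, not the patent's rendering model.

```python
import bisect

def sample_voxel(voxel_samples, t):
    """voxel_samples: list of (time, density) sorted by time.
    Returns the density linearly interpolated at time t."""
    times = [s[0] for s in voxel_samples]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return voxel_samples[0][1]
    if i == len(times):
        return voxel_samples[-1][1]
    (t0, v0), (t1, v1) = voxel_samples[i - 1], voxel_samples[i]
    a = (t - t0) / (t1 - t0)
    return v0 + a * (v1 - v0)

def shade_pixel(ray_times, voxels_along):
    """ray_times: one shutter time per ray cast for this pixel.
    voxels_along(ray_index) -> the voxel sample lists that ray crosses.
    Averages the accumulated density over all rays for the pixel."""
    total = 0.0
    for i, ray_time in enumerate(ray_times):
        total += sum(sample_voxel(v, ray_time) for v in voxels_along(i))
    return total / len(ray_times)
```
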
  • Patent number: 9286690
    Abstract: A method for moving object detection based on a Fisher's Linear Discriminant-based Radial Basis Function Network (FLD-based RBF network) includes the following steps. A sequence of incoming frames of a fixed location delivered over a network are received. A plurality of discriminant patterns are generated from the sequence of incoming frames based on a Fisher's Linear Discriminant (FLD) model. A background model is constructed from the sequence of incoming frames based on a Radial Basis Function (RBF) network model. A current incoming frame is received and divided into a plurality of current incoming blocks. Each of the current incoming blocks is classified as either a background block or a moving object block according to the discriminant patterns. Whether a current incoming pixel of the moving object blocks among the current incoming blocks is a moving object pixel or a background pixel is determined according to the background model.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: March 15, 2016
    Assignee: National Taipei University of Technology
    Inventors: Shih-Chia Huang, Bo-Hao Chen
  • Patent number: 9275490
    Abstract: A method of applying a post-render motion blur to an object may include receiving a first image of the object. The first image need not be motion blurred, and the first image may include a first pixel and rendered color information for the first pixel. The method may also include receiving a second image of the object. The second image may be motion blurred, and the second image may include a second pixel and a location of the second pixel before the second image was motion blurred. Areas that are occluded in the second image may be identified and colored using a third image rendering only those areas. Unoccluded areas of the second image may be colored using information from the first image.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: March 1, 2016
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Victor Schutz, Patrick Conran
  • Patent number: 9269178
    Abstract: Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which when executed on a device by at least one processing unit provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time. Based on the identified first set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the first instance in time. The mapping application identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time. Based on the identified second set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the second instance in time.
    Type: Grant
    Filed: September 30, 2012
    Date of Patent: February 23, 2016
    Assignee: APPLE INC.
    Inventors: Patrick S. Piemonte, Aroon Pahwa, Christopher D. Moore
  • Patent number: 9262855
    Abstract: An animation system is described herein that uses a transfer function on the progress of an animation that realistically simulates a bounce behavior. The transfer function maps normalized time and allows a user to specify both a number of bounces and a bounciness factor. Given a normalized time input, the animation system maps the time input onto a unit space where a single unit is the duration of the first bounce. In this coordinate space, the system can find the corresponding bounce and compute the start unit and end unit of this bounce. The system projects the start and end units back onto a normalized time scale and fits these points to a quadratic curve. The quadratic curve can be directly evaluated at the normalized time input to produce a particular output.
    Type: Grant
    Filed: March 18, 2010
    Date of Patent: February 16, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brandon C. Furtwangler, Saied Khanahmadi
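
A compact sketch in the spirit of patent 9262855 above: normalized time is projected into a unit space in which the first bounce has width one and each later bounce shrinks by the bounciness factor, and a parabola inside the selected bounce yields the bounce height. The patent's quadratic curve fitting is simplified here to an explicit parabola, and the function returns an offset from the rest value rather than the patent's exact output.

```python
def bounce_offset(t, bounces=3, bounciness=0.5):
    """t in [0, 1] -> offset from the rest value; each successive bounce peaks lower."""
    widths = [bounciness ** i for i in range(bounces)]   # bounce durations in unit space
    total = sum(widths)
    u = t * total                                        # project normalized time into unit space
    start = 0.0
    for w in widths:                                     # locate the containing bounce
        if u <= start + w:
            local = (u - start) / w                      # 0..1 within this bounce
            return w * 4.0 * local * (1.0 - local)       # parabola, peak height = w
        start += w
    return 0.0                                           # settled after the last bounce
```
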
  • Patent number: 9258168
    Abstract: One exemplary embodiment can describe a method for communicating. The method for communicating can include a step for identifying characteristics of a communications channel, a step for identifying a set of nonlinear functions used to generate waveforms, a step for assigning a unique numeric code to each waveform, a step for transmitting a numeric sequence as a series of waveforms, a step for receiving the series of waveforms, and a step for decoding the series of waveforms.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: February 9, 2016
    Assignee: ASTRAPI CORPORATION
    Inventor: Jerrold D. Prothero
  • Patent number: 9251618
    Abstract: The movement of skin on an animated target, such as a character or other object, is simulated via a simulation software application. The software application creates a finite element model (FEM) comprising a plurality of finite elements based on an animated target. The software application attaches a first constraint force to a node associated with a first finite element in the plurality of finite elements. The software application attaches a second constraint force to the node. The software application detects a movement of the first finite element that results in a corresponding movement of the node. The software application determines a new position for the node based on the movement of at least one of the first finite element, the first constraint force, and the second constraint force.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 2, 2016
    Assignee: PIXAR
    Inventors: Ryan Kautzman, Jiayi Chong, Patrick Coleman