Abstract: Systems and methods for generating and facilitating access to a personalized augmented rendering of a user to be presented in an augmented reality environment are discussed herein. The augmented rendering of a user may be personalized by the user to comprise a desired representation of the user in an augmented reality environment. When a second user is detected within the field of view of a first user, the second user may be identified and virtual content (e.g., an augmented rendering) for the second user may be obtained. The virtual content obtained may differ based on one or more subscriptions of the first user and/or permissions associated with the virtual content of the second user. The virtual content obtained may be rendered so that it appears superimposed over, or in conjunction with, a view of the second user in the augmented reality environment.
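The subscription- and permission-gated content lookup described in this abstract can be sketched as follows. All names here (`User`, `VirtualContent`, `content_visible`, the tier strings) are illustrative assumptions, not identifiers from the patent; this is a minimal model of one plausible gating rule, not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualContent:
    owner_id: str
    tier: str  # e.g. "public", "friends", "premium" (assumed tiers)

@dataclass
class User:
    user_id: str
    subscriptions: set = field(default_factory=set)

def content_visible(viewer: User, content: VirtualContent,
                    permissions: dict) -> bool:
    """Return True if the viewer may see this augmented rendering.

    permissions maps an owner's id to the set of tiers that owner has
    permitted others to view.
    """
    allowed = permissions.get(content.owner_id, {"public"})
    if content.tier == "public":
        return True
    # The viewer must both hold the matching subscription and be
    # permitted by the content owner's settings.
    return content.tier in viewer.subscriptions and content.tier in allowed
```

Under this sketch, the same detected second user can yield different renderings for different viewers, matching the abstract's point that obtained content varies by subscription and permission.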
Abstract: An inclination detection unit detects an inclination of the terminal device. A screen image rotation unit rotates a screen image when an inclination state of the terminal device detected by the inclination detection unit becomes a predetermined state. A captured image obtaining unit obtains a captured image of an image capturing unit for capturing an image of a user seeing the screen image. A comparison unit compares a captured image of the image capturing unit before the inclination state of the terminal device becomes the predetermined state with a captured image of the image capturing unit when or after the inclination state of the terminal device becomes the predetermined state. A suppression unit suppresses a rotation of the screen image by the screen image rotation unit, based on a result of comparison by the comparison unit.
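The comparison-then-suppression flow in this abstract can be sketched with a single decision function. The face angles stand in for the comparison unit's before/after captured images of the user; the function name, parameters, and threshold are assumptions for illustration, not the patent's actual units.

```python
def should_rotate_screen(device_angle_before, device_angle_after,
                         face_angle_before, face_angle_after,
                         threshold_deg=60.0):
    """Rotate the screen image only when the device tilted but the
    user's face did NOT tilt with it.

    If the face rotated together with the device (e.g. the user lay
    down while holding the terminal), the user's orientation relative
    to the screen is unchanged, so rotation is suppressed.
    """
    device_tilted = abs(device_angle_after - device_angle_before) >= threshold_deg
    face_followed = abs(face_angle_after - face_angle_before) >= threshold_deg
    if not device_tilted:
        return False  # inclination never reached the predetermined state
    return not face_followed  # suppress when the face moved with the device
```

For example, a 90° device tilt with a stationary face rotates the image, while the same tilt with the face rotating alongside suppresses it.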
Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
Type:
Grant
Filed:
February 26, 2015
Date of Patent:
November 8, 2016
Assignee:
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors:
Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
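The hand-joint translation in the abstract above can be sketched as mapping the hand's position relative to a reference joint into a directional control. The reference joint choice (shoulder), the dead zone, and the control vocabulary are all assumptions for illustration; the patent does not specify them here.

```python
def hand_gesture(hand_pos, shoulder_pos, dead_zone=0.05):
    """Translate the relative 2-D position of a hand joint (versus a
    reference joint such as the shoulder) into a directional control."""
    dx = hand_pos[0] - shoulder_pos[0]
    dy = hand_pos[1] - shoulder_pos[1]
    # Small offsets are treated as no gesture at all.
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "idle"
    # Dominant axis wins; the virtual world is then driven by this control.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```

A game loop would read skeleton frames from the depth camera, call this per frame, and feed the returned control into the virtual-world update.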
Abstract: A system for reproducing virtual objects includes a detector device that carries a known tracking pattern or tracking feature; and a host device configured for virtually reproducing a template pattern to a surface and producing an image combining the tracking pattern and the template pattern. The template pattern corresponds to a virtual object. The host device is configured to process the image and thereby transmit information regarding the geometrical relationship between the tracking pattern and the template pattern to a user so that the user can reproduce the virtual object on the surface based on the information.
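The geometric relationship that the host device transmits to the user can be sketched as an alignment offset between the detected tracking pattern and the projected template pattern. The function name, 2-D simplification, and tolerance are illustrative assumptions only.

```python
import math

def reproduction_guidance(tracking_pos, template_pos, tolerance=0.01):
    """Express the geometric relationship between the detected tracking
    pattern and the virtually projected template pattern as the offset
    the user should move toward; None means the two are aligned."""
    dx = template_pos[0] - tracking_pos[0]
    dy = template_pos[1] - tracking_pos[1]
    if math.hypot(dx, dy) <= tolerance:
        return None  # tracking pattern is on target
    return (dx, dy)
```

Repeatedly reporting this offset as the user moves the detector device would let them trace the virtual object onto the surface.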
Abstract: An image processing apparatus comprises a simulation unit which performs a simulation of operation of an object and a display unit which generates, based on a result of the simulation, an image in which a virtual object is operating and displays the image. The apparatus further comprises a calculating unit which calculates a position and orientation of a predetermined part of an observer who observes the virtual object being displayed, and a generating unit which generates a parameter for use in the simulation based on the position and orientation of the predetermined part of the observer and the position and orientation of the virtual object. This structure makes it possible to set parameters of simulation in a virtual-reality or mixed-reality space through manipulation performed by the observer.
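The generating unit described in this abstract can be sketched as a function that derives a simulation parameter from the observer's tracked part and the virtual object's pose. Treating the parameter as a 2-D push force with distance falloff, along with all names and the reach constant, is an assumption for illustration.

```python
import math

def interaction_parameter(observer_part_pos, observer_part_dir,
                          object_pos, reach=0.5):
    """Derive a simulation parameter (here, a push-force vector) from
    the position/orientation of the observer's predetermined part
    relative to the virtual object's position."""
    dx = object_pos[0] - observer_part_pos[0]
    dy = object_pos[1] - observer_part_pos[1]
    dist = math.hypot(dx, dy)
    if dist > reach or dist == 0.0:
        return None  # out of reach: no parameter generated this frame
    # Magnitude falls off with distance; direction follows the part's
    # orientation, so the observer steers the simulation by pose alone.
    magnitude = (reach - dist) / reach
    return (magnitude * observer_part_dir[0], magnitude * observer_part_dir[1])
```

The simulation unit would consume the returned vector each frame, closing the loop the abstract describes between observer manipulation and simulation parameters.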
Abstract: At an animation authoring component, an inputted movement of an object displayed in a graphical user interface is received. Further, at a physics animation rule engine, a set of physics animation rules is applied to the inputted movement to produce a physics generated movement of the object. In addition, at the graphical user interface, the inputted movement of the object is displayed in addition to the physics generated movement of the object. At the animation authoring component, the physics generated movement of the object in addition to the inputted movement of the object is recorded.
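The authoring flow in this abstract can be sketched with a single gravity rule standing in for the physics animation rule engine. The class name, the explicit-Euler step, and the time step are assumptions for illustration, not the patent's design.

```python
class AnimationAuthoring:
    """Each inputted keyframe is kept as-is, a gravity rule generates
    an additional physics movement, and both movements are recorded
    together, mirroring the display-and-record flow described above."""
    GRAVITY = -9.8  # m/s^2, assumed rule constant

    def __init__(self, dt=0.1):
        self.dt = dt
        self.recorded = []

    def add_keyframe(self, x, y, vy=0.0):
        # Physics animation rule: one explicit-Euler gravity step
        # applied to the inputted position.
        vy_next = vy + self.GRAVITY * self.dt
        y_phys = y + vy_next * self.dt
        frame = {"input": (x, y), "physics": (x, y_phys)}
        self.recorded.append(frame)  # record both movements together
        return frame
```

An author dragging an object would see both the dragged path and the gravity-adjusted path, with the recording preserving each for later playback.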