Patents by Inventor Otmar Hilliges

Otmar Hilliges has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9053571
    Abstract: Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image are tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model because they do not move in sync with the object due to re-gripping.
    Type: Grant
    Filed: June 6, 2011
    Date of Patent: June 9, 2015
    Assignee: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges
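A minimal sketch of the background-removal step described in the abstract above: depth pixels beyond a working-volume threshold are treated as background and cleared, leaving a foreground depth image of the hand-held object. The threshold value, units, and nested-list image layout are illustrative assumptions, not the patented method.

```python
def extract_foreground(depth, max_depth_mm=600):
    """Zero out pixels beyond the working volume (0 = no data)."""
    return [
        [d if 0 < d < max_depth_mm else 0 for d in row]
        for row in depth
    ]

depth_image = [
    [450, 480, 1200],   # 1200 mm: background wall behind the object
    [460, 470, 1500],
]
foreground = extract_foreground(depth_image)
# foreground == [[450, 480, 0], [460, 470, 0]]
```

Only the surviving foreground pixels would then be tracked and integrated into the volume.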
  • Patent number: 8971612
    Abstract: Learning image processing tasks from scene reconstructions is described, where the tasks may include, but are not limited to, image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments training data is generated from a 2- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: March 3, 2015
    Assignee: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Pushmeet Kohli, Stefan Johannes Josef Holzer, Shahram Izadi, Carsten Curt Eckard Rother, Sebastian Nowozin, David Kim, David Molyneaux, Otmar Hilliges
  • Publication number: 20140184749
    Abstract: Detecting material properties such as reflectivity, true color and other properties of surfaces in a real world environment is described in various examples using a single hand-held device. For example, the detected material properties are calculated using a photometric stereo system which exploits known relationships between lighting conditions, surface normals, true color and image intensity. In examples, a user moves around in an environment capturing color images of surfaces in the scene from different orientations under known lighting conditions. In various examples, surface normals of patches of surfaces are calculated using the captured data to enable fine detail such as human hair, netting and textured surfaces to be modeled. In examples, the modeled data is used to render images depicting the scene with realism or to superimpose virtual graphics on the real world in a realistic manner.
    Type: Application
    Filed: December 28, 2012
    Publication date: July 3, 2014
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, Malte Hanno Weiss, Shahram Izadi, David Kim, Carsten Curt Eckard Rother
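To illustrate the photometric-stereo relationship the abstract above relies on: under a Lambertian model, each observed pixel intensity satisfies I_i = albedo · dot(L_i, n), so three images under known, linearly independent light directions determine the surface normal and albedo for a pixel. The light directions and intensities below are invented example values; a 3×3 Cramer's-rule solve stands in for a proper linear-algebra library.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    return [det([[A[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d
            for j in range(3)]

def pixel_normal(lights, intensities):
    g = solve3(lights, intensities)           # g = albedo * normal
    albedo = math.sqrt(sum(x*x for x in g))
    normal = [x / albedo for x in g]
    return normal, albedo

lights = [[0, 0, 1], [1, 0, 1], [0, 1, 1]]    # one light direction per image
intensities = [0.8, 0.8, 0.8]                 # observed pixel brightness
n, rho = pixel_normal(lights, intensities)
# n is approximately [0, 0, 1]: this patch faces the camera; rho ~ 0.8
```

Repeating this per patch of surface yields the dense normal maps the abstract uses to model fine detail.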
  • Patent number: 8711206
    Abstract: Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: April 29, 2014
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20140104274
    Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments 3D positions of a first one of the types of particles, kinematic particles, are updated according to the tracked real object; and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples a real-time optic flow process is used to track motion of the real object.
    Type: Application
    Filed: October 17, 2012
    Publication date: April 17, 2014
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, Malte Hanno Weiss
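A hedged sketch of the two particle types in the abstract above: kinematic particles are pinned to the tracked real object, while linked passive particles move with them but may not penetrate virtual objects. The 1D scene with a virtual cube top at y = 0 is an invented toy example, not the patented simulation.

```python
CUBE_TOP_Y = 0.0    # upper surface of a virtual cube (hypothetical scene)

def update_kinematic(tracked_y):
    """Kinematic particles take the tracked object's position directly."""
    return tracked_y

def update_passive(kinematic_y):
    """A linked passive particle follows its kinematic partner but is
    clamped so it never sinks below the virtual cube's surface."""
    return max(kinematic_y, CUBE_TOP_Y)

k = update_kinematic(-0.2)   # the tracked hand pushed below the cube surface
p = update_passive(k)
# k == -0.2 (tracking stays faithful), p == 0.0 (no penetration)
```

The gap between a passive particle and its kinematic partner is what lets simulated contact forces act on the virtual objects.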
  • Publication number: 20140098018
    Abstract: A wearable sensor for tracking articulated body parts is described, such as a wrist-worn device which enables 3D tracking of fingers and optionally also the arm and hand without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of a body of a wearer of the device and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part.
    Type: Application
    Filed: October 4, 2012
    Publication date: April 10, 2014
    Applicant: Microsoft Corporation
    Inventors: David Kim, Shahram Izadi, Otmar Hilliges, David Alexander Butler, Stephen Hodges, Patrick Luke Olivier, Jiawen Chen, Iason Oikonomidis
  • Patent number: 8660303
    Abstract: A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system.
    Type: Grant
    Filed: December 20, 2010
    Date of Patent: February 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, Jamie Shotton, John Winn, Antonio Criminisi, Otmar Hilliges, Mat Cook, David Molyneaux
  • Patent number: 8638985
    Abstract: Techniques for human body pose estimation are disclosed herein. Images such as depth images, silhouette images, or volumetric images may be generated and pixels or voxels of the images may be identified. The techniques may process the pixels or voxels to determine a probability that each pixel or voxel is associated with a segment of a body captured in the image or to determine a three-dimensional representation for each pixel or voxel that is associated with a location on a canonical body. These probabilities or three-dimensional representations may then be utilized along with the images to construct a posed model of the body captured in the image.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: January 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Shahram Izadi, Otmar Hilliges, David Kim, David Geoffrey Molyneaux, Matthew Darius Cook, Pushmeet Kohli, Antonio Criminisi, Ross Brook Girshick, Andrew William Fitzgibbon
  • Patent number: 8587583
    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: November 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
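A minimal single-voxel version of the per-thread update described in the abstract above: each voxel keeps a running average of a truncated signed distance from the voxel to the observed surface, folded in observation by observation. The truncation constant and weighting scheme are illustrative assumptions.

```python
TRUNCATION = 0.1   # metres; distances are clamped to +/- this value

def update_voxel(stored_value, stored_weight, voxel_to_surface_dist):
    """Fold one truncated signed-distance observation into a voxel."""
    d = max(-TRUNCATION, min(TRUNCATION, voxel_to_surface_dist))
    new_weight = stored_weight + 1
    new_value = (stored_value * stored_weight + d) / new_weight
    return new_value, new_weight

v, w = 0.0, 0
for observation in [0.05, 0.03, 0.04]:   # depth-derived distances per frame
    v, w = update_voxel(v, w, observation)
# v is the running mean, approximately 0.04; w == 3
```

In the patented arrangement each GPU thread performs this update for its voxel, then steps to the equivalent voxel in the next plane of the volume.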
  • Patent number: 8570320
    Abstract: Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: October 29, 2013
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20130244782
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: February 23, 2013
    Publication date: September 19, 2013
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
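The point-to-plane error metric named in the abstract above can be sketched as follows: for each associated point pair, the displacement from the source point to the destination point is projected onto the destination point's normal, and these plane distances are squared and summed. The point data below are invented.

```python
def point_to_plane_error(src_pts, dst_pts, dst_normals):
    """Sum of squared distances from src points to dst tangent planes."""
    total = 0.0
    for s, d, n in zip(src_pts, dst_pts, dst_normals):
        diff = [si - di for si, di in zip(s, d)]
        dist = sum(x * ni for x, ni in zip(diff, n))  # project onto normal
        total += dist * dist
    return total

src = [(0.0, 0.0, 1.1)]       # point from the current depth map
dst = [(0.0, 0.0, 1.0)]       # associated model point
normals = [(0.0, 0.0, 1.0)]   # model surface normal at that point
err = point_to_plane_error(src, dst, normals)
# err is about 0.01, i.e. a 0.1 m offset along the normal, squared
```

The registration parameters are then chosen to minimize this error, which in the patent is done on a GPU to keep tracking real-time.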
  • Patent number: 8502795
    Abstract: The claimed subject matter provides a system and/or a method that facilitates enhancing interactive surface technologies for data manipulation. A surface detection component can employ a multiple contact surfacing technology to detect a surface input, wherein the detected surface input enables a physical interaction with a portion of displayed data that represents a corporeal object. A physics engine can integrate a portion of Newtonian physics into the interaction with the portion of displayed data in order to model at least one quantity associated with the corporeal object, where the quantity is at least one of a force, a mass, a velocity, or a friction.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: August 6, 2013
    Assignee: Microsoft Corporation
    Inventors: Andrew David Wilson, Shahram Izadi, Armando Garcia-Mendoza, David Kirk, Otmar Hilliges
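A toy illustration of the Newtonian quantities the abstract above lists (force, mass, velocity, friction) applied to a displayed object dragged on a surface. The explicit-Euler step and viscous friction model are assumptions for the sketch, not the patented physics engine.

```python
def step(velocity, force, mass, friction, dt):
    """One explicit-Euler velocity update with simple viscous friction."""
    accel = force / mass - friction * velocity
    return velocity + accel * dt

v = 0.0
for _ in range(3):   # contact force applied over three frames at 60 Hz
    v = step(v, force=2.0, mass=0.5, friction=0.8, dt=1/60)
# v > 0: the object picks up speed while the contact force is applied
```

Friction opposing the current velocity is what lets flicked objects coast and settle naturally once contact ends.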
  • Patent number: 8502816
    Abstract: A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop with a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed.
    Type: Grant
    Filed: December 2, 2010
    Date of Patent: August 6, 2013
    Assignee: Microsoft Corporation
    Inventors: David Alexander Butler, Stephen Edward Hodges, Shahram Izadi, Nicolas Villar, Stuart Taylor, David Molyneaux, Otmar Hilliges
  • Publication number: 20130156297
    Abstract: Learning image processing tasks from scene reconstructions is described, where the tasks may include, but are not limited to, image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments training data is generated from a 2- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Pushmeet Kohli, Stefan Johannes Josef Holzer, Shahram Izadi, Carsten Curt Eckard Rother, Sebastian Nowozin, David Kim, David Molyneaux, Otmar Hilliges
  • Patent number: 8401225
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
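The segmentation idea in the abstract above can be sketched simply: point pairs whose correspondence error between the aligned current and previous depth images exceeds a threshold are the ICP outliers, and their image elements are labeled as belonging to the moving object rather than the static background. The threshold and depth values are illustrative.

```python
def label_moving(current, previous, threshold=0.05):
    """Label each image element True (moving object) when the depth
    change between aligned frames exceeds the correspondence threshold."""
    return [abs(c - p) > threshold for c, p in zip(current, previous)]

prev_depth = [1.00, 1.00, 0.80, 0.80]
curr_depth = [1.00, 1.01, 0.60, 0.62]   # the last two points moved
labels = label_moving(curr_depth, prev_depth)
# labels == [False, False, True, True]
```

Because these outliers are already computed during camera tracking, the labeling adds almost no extra work, which is the abstract's point about computational complexity.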
  • Patent number: 8401242
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120306850
    Abstract: A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations.
    Type: Application
    Filed: June 2, 2011
    Publication date: December 6, 2012
    Applicant: Microsoft Corporation
    Inventors: Alexandru Balan, Jason Flaks, Steve Hodges, Michael Isard, Oliver Williams, Paul Barham, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim
  • Publication number: 20120306876
    Abstract: Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image are tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model because they do not move in sync with the object due to re-gripping.
    Type: Application
    Filed: June 6, 2011
    Publication date: December 6, 2012
    Applicant: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges
  • Publication number: 20120196679
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120195471
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler