Patents by Inventor David Molyneaux

David Molyneaux has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130290910
    Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
    Type: Application
    Filed: June 24, 2013
    Publication date: October 31, 2013
    Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
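    The abstract above describes mapping key-press locations to a movement path and then to user-interface coordinates. The Python sketch below illustrates that idea; the layout table, key coordinates, and function names are illustrative assumptions, not taken from the patent.

      # Hypothetical key-position table: each key maps to an (x, y)
      # location on the keyboard surface.
      KEY_POSITIONS = {
          "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
          "a": (0.3, 1.0), "s": (1.3, 1.0), "d": (2.3, 1.0),
          "z": (0.6, 2.0), "x": (1.6, 2.0), "c": (2.6, 2.0),
      }

      def movement_path(key_presses):
          """Turn a sequence of key-presses into physical keyboard locations."""
          return [KEY_POSITIONS[k] for k in key_presses if k in KEY_POSITIONS]

      def to_ui_coordinates(path, ui_w, ui_h, kb_w=3.0, kb_h=2.0):
          """Map keyboard-space locations to user-interface coordinates."""
          return [(x / kb_w * ui_w, y / kb_h * ui_h) for x, y in path]

      # A finger sliding along the home row becomes horizontal cursor motion.
      print(to_ui_coordinates(movement_path(["a", "s", "d"]), 1920, 1080))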
  • Patent number: 8570320
    Abstract: Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: October 29, 2013
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
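    The interactive phase described above amounts to detecting user motion from depth frames once the camera is static. A minimal Python sketch of that step follows, with illustrative thresholds (the patent does not specify these values):

      import numpy as np

      def detect_motion(prev_depth, curr_depth, region, min_change_m=0.05,
                        min_pixels=200):
          """True if enough pixels in `region` changed depth significantly."""
          y0, y1, x0, x1 = region
          diff = np.abs(curr_depth[y0:y1, x0:x1] - prev_depth[y0:y1, x0:x1])
          return int((diff > min_change_m).sum()) >= min_pixels

      prev = np.full((480, 640), 2.0)   # empty scene, 2 m from the camera
      curr = prev.copy()
      curr[200:280, 300:380] = 1.2      # a user steps into the region
      print(detect_motion(prev, curr, region=(150, 330, 250, 430)))  # True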
  • Publication number: 20130244782
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: February 23, 2013
    Publication date: September 19, 2013
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
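    The registration step named in the abstract (iterative closest point with a point-to-plane error metric) linearizes to a small least-squares problem per iteration. A condensed numpy sketch, assuming correspondences have already been found (in the patent, by projective data association) and a small-angle rotation:

      import numpy as np

      def point_to_plane_step(src, dst, dst_normals):
          """One linearized solve for [rx, ry, rz, tx, ty, tz] minimizing
          sum(((R @ s + t) - d) . n)^2 over corresponding points."""
          A = np.hstack([np.cross(src, dst_normals), dst_normals])  # (N, 6)
          b = np.einsum("ij,ij->i", dst - src, dst_normals)         # (N,)
          x, *_ = np.linalg.lstsq(A, b, rcond=None)
          return x

      # A frame shifted 1 cm along x recovers tx ~ 0.01 and ~zero rotation.
      rng = np.random.default_rng(0)
      dst = rng.uniform(-1, 1, (500, 3))
      normals = dst / np.linalg.norm(dst, axis=1, keepdims=True)
      src = dst - np.array([0.01, 0.0, 0.0])
      print(point_to_plane_step(src, dst, normals).round(4))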
  • Patent number: 8502816
    Abstract: A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop comprising a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed.
    Type: Grant
    Filed: December 2, 2010
    Date of Patent: August 6, 2013
    Assignee: Microsoft Corporation
    Inventors: David Alexander Butler, Stephen Edward Hodges, Shahram Izadi, Nicolas Villar, Stuart Taylor, David Molyneaux, Otmar Hilliges
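    The synchronization the abstract describes pairs each instant of the filter's rotation with the image intended for the viewing direction the filter currently serves. A toy Python sketch of that pairing (angles, sector layout, and frame sources are all assumptions):

      def frame_for_angle(filter_angle_deg, views):
          """Pick the per-viewer image for the direction currently passed."""
          sector = 360.0 / len(views)
          return views[int(filter_angle_deg % 360.0 // sector)]

      views = ["image_for_viewer_A", "image_for_viewer_B", "image_for_viewer_C"]
      for angle in (10, 130, 250):
          print(angle, "->", frame_for_angle(angle, views))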
  • Publication number: 20130169626
    Abstract: A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations.
    Type: Application
    Filed: November 28, 2012
    Publication date: July 4, 2013
    Inventors: Alexandru Balan, Jason Flaks, Steve Hodges, Michael Isard, Oliver Williams, Paul Barham, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim
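    The decoupling described above can be pictured as one shared mapping service updating the environment map while each device runs its own asynchronous localize-and-render loop against the latest snapshot. A minimal Python sketch under that assumption (the map is reduced to a version counter; all names are illustrative):

      import threading, time

      class MapService:
          """Central mapping system; heavy reconstruction happens once, here."""
          def __init__(self):
              self._lock = threading.Lock()
              self.version = 0
          def update_map(self):
              with self._lock:
                  self.version += 1
          def latest(self):
              with self._lock:
                  return self.version

      def device_loop(name, service, steps=3):
          for _ in range(steps):           # lightweight per-device pipeline
              snapshot = service.latest()
              print(f"{name}: localize against map v{snapshot}, render virtual objects")
              time.sleep(0.01)

      def mapping_loop(service, steps=10):
          for _ in range(steps):
              service.update_map()
              time.sleep(0.005)

      service = MapService()
      threads = [threading.Thread(target=mapping_loop, args=(service,))]
      threads += [threading.Thread(target=device_loop, args=(f"device{i}", service))
                  for i in range(2)]
      for t in threads: t.start()
      for t in threads: t.join()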
  • Patent number: 8471814
    Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
    Type: Grant
    Filed: February 26, 2010
    Date of Patent: June 25, 2013
    Assignee: Microsoft Corporation
    Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
  • Publication number: 20130156297
    Abstract: Learning image processing tasks from scene reconstructions is described, where the tasks may include, but are not limited to: image de-noising, image in-painting, optical flow detection, and interest point detection. In various embodiments, training data is generated from a 2- or higher-dimensional reconstruction of a scene and from empirical images of the same scene. In an example, a machine learning system learns at least one parameter of a function for performing the image processing task by using the training data. In an example, the machine learning system comprises a random decision forest. In an example, the scene reconstruction is obtained by moving an image capture apparatus in an environment where the image capture apparatus has an associated dense reconstruction and camera tracking system.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Pushmeet Kohli, Stefan Johannes Josef Holzer, Shahram Izadi, Carsten Curt Eckard Rother, Sebastian Nowozin, David Kim, David Molyneaux, Otmar Hilliges
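    As a concrete instance of the training setup above, "clean" targets can come from a render of the scene reconstruction and noisy inputs from empirical images of the same scene, with a random decision forest learning the de-noising function. A sketch using scikit-learn; the synthetic data below stands in for a real reconstruction:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      clean = rng.uniform(0, 1, (64, 64))              # rendered reconstruction
      noisy = clean + rng.normal(0, 0.1, clean.shape)  # empirical, sensor noise

      def patches(img, r=1):
          """Flatten each pixel's (2r+1) x (2r+1) neighborhood into a row."""
          h, w = img.shape
          return np.array([img[y - r:y + r + 1, x - r:x + r + 1].ravel()
                           for y in range(r, h - r) for x in range(r, w - r)])

      X = patches(noisy)                   # features: noisy neighborhoods
      y = clean[1:-1, 1:-1].ravel()        # targets: clean center pixels
      forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
      print("noisy error: ", np.abs(noisy[1:-1, 1:-1].ravel() - y).mean())
      print("forest error:", np.abs(forest.predict(X) - y).mean())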
  • Patent number: 8401242
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Patent number: 8401225
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
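    The labeling rule in the abstract reduces, per image element, to testing whether its correspondence in the previous frame is compatible in depth; incompatible (outlying) elements are labeled as the moving object. A simplified numpy sketch assuming a static camera, so correspondence is pixel-to-pixel, with an illustrative threshold:

      import numpy as np

      def segment_moving(prev_depth, curr_depth, max_dist_m=0.07):
          """Boolean mask: True where the element is an ICP outlier (moving)."""
          return np.abs(curr_depth - prev_depth) > max_dist_m

      prev = np.full((480, 640), 3.0)
      curr = prev.copy()
      curr[100:200, 100:220] = 1.5     # an object moves into the foreground
      print("moving pixels:", int(segment_moving(prev, curr).sum()))  # 12000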
  • Publication number: 20120306876
    Abstract: Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image is tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model as they do not move in sync with the object due to re-gripping.
    Type: Application
    Filed: June 6, 2011
    Publication date: December 6, 2012
    Applicant: Microsoft Corporation
    Inventors: Jamie Daniel Joseph Shotton, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges
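    The background-removal step above can be approximated by discarding pixels outside an assumed working volume in front of the static camera, leaving the foreground depth image that is then tracked and integrated. A small sketch with illustrative volume bounds:

      import numpy as np

      def foreground_depth(depth, near_m=0.3, far_m=0.8):
          """Zero out pixels outside the near/far working volume."""
          fg = depth.copy()
          fg[(depth < near_m) | (depth > far_m)] = 0.0
          return fg

      depth = np.full((480, 640), 2.5)   # room background
      depth[180:300, 260:380] = 0.5      # object held ~0.5 m from the camera
      print("foreground pixels:", int((foreground_depth(depth) > 0).sum()))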
  • Publication number: 20120306850
    Abstract: A system and method for providing an augmented reality environment in which the environmental mapping process is decoupled from the localization processes performed by one or more mobile devices is described. In some embodiments, an augmented reality system includes a mapping system with independent sensing devices for mapping a particular real-world environment and one or more mobile devices. Each of the one or more mobile devices utilizes a separate asynchronous computing pipeline for localizing the mobile device and rendering virtual objects from a point of view of the mobile device. This distributed approach provides an efficient way for supporting mapping and localization processes for a large number of mobile devices, which are typically constrained by form factor and battery life limitations.
    Type: Application
    Filed: June 2, 2011
    Publication date: December 6, 2012
    Applicant: Microsoft Corporation
    Inventors: Alexandru Balan, Jason Flaks, Steve Hodges, Michael Isard, Oliver Williams, Paul Barham, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim
  • Publication number: 20120194650
    Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
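    The simplest of the listed schemes, cycling between sources, time-multiplexes the emitters so no two structured light patterns overlap. A Python sketch follows; the controller interface is an illustrative stand-in for whatever hardware API drives the emitters:

      import itertools, time

      def cycle_sources(sources, dwell_s=0.01, rounds=2):
          """Enable one infra-red source at a time so patterns never overlap."""
          for src in itertools.islice(itertools.cycle(sources),
                                      rounds * len(sources)):
              print(f"enable {src}; all others off")  # capture this camera now
              time.sleep(dwell_s)

      cycle_sources(["emitter_A", "emitter_B", "emitter_C"])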
  • Publication number: 20120196679
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120195471
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120194516
    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
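    The per-voxel update described above (one GPU thread per voxel column in the abstract) is sketched below serially in numpy: each voxel is projected into the depth image, a truncated signed-distance factor is computed, and the stored value is blended with a running weight. An identity camera pose and illustrative intrinsics and volume placement are assumed:

      import numpy as np

      def update_volume(tsdf, weights, depth, fx, fy, cx, cy,
                        voxel_size, origin, trunc_m=0.05):
          nx, ny, nz = tsdf.shape
          for ix in range(nx):
              for iy in range(ny):
                  for iz in range(nz):            # iterate down the column
                      p = origin + voxel_size * np.array([ix, iy, iz])
                      if p[2] <= 0:
                          continue
                      u = int(fx * p[0] / p[2] + cx)   # project into image
                      v = int(fy * p[1] / p[2] + cy)
                      if not (0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]):
                          continue
                      d = depth[v, u]
                      if d <= 0 or d - p[2] < -trunc_m:
                          continue                # no data / behind the surface
                      sdf = min(1.0, (d - p[2]) / trunc_m)  # distance factor
                      w = weights[ix, iy, iz]
                      tsdf[ix, iy, iz] = (tsdf[ix, iy, iz] * w + sdf) / (w + 1)
                      weights[ix, iy, iz] = w + 1

      tsdf = np.ones((32, 32, 32)); weights = np.zeros_like(tsdf)
      depth = np.full((120, 160), 1.0)            # flat wall 1 m away
      update_volume(tsdf, weights, depth, fx=100, fy=100, cx=80, cy=60,
                    voxel_size=0.05, origin=np.array([-0.8, -0.6, 0.2]))
      print("near-surface voxels:", int((np.abs(tsdf) < 0.5).sum()))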
  • Publication number: 20120194517
    Abstract: Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120194644
    Abstract: Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
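    The relocalization path mentioned above can be sketched as matching a coarse descriptor of the current depth map against descriptors stored with previously gathered keyframes, then reusing the best match's pose to restart tracking. The grid-of-mean-depths descriptor below is an illustrative choice, not the patent's:

      import numpy as np

      def descriptor(depth, grid=(4, 5)):
          """Coarse descriptor: mean depth over a grid of image cells."""
          h, w = depth.shape
          gh, gw = grid
          return np.array([depth[y:y + h // gh, x:x + w // gw].mean()
                           for y in range(0, h, h // gh)
                           for x in range(0, w, w // gw)])

      def relocalize(curr_depth, keyframes):
          """Return the stored pose of the most similar keyframe."""
          d = descriptor(curr_depth)
          errs = [np.linalg.norm(d - kf["descriptor"]) for kf in keyframes]
          return keyframes[int(np.argmin(errs))]["pose"]

      rng = np.random.default_rng(1)
      frames = [rng.uniform(0.5, 4.0, (480, 640)) for _ in range(3)]
      keyframes = [{"descriptor": descriptor(f), "pose": f"pose_{i}"}
                   for i, f in enumerate(frames)]
      print(relocalize(frames[1] + rng.normal(0, 0.01, (480, 640)), keyframes))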
  • Publication number: 20120139897
    Abstract: A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop comprising a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed.
    Type: Application
    Filed: December 2, 2010
    Publication date: June 7, 2012
    Applicant: Microsoft Corporation
    Inventors: David Alexander Butler, Stephen Edward Hodges, Shahram Izadi, Nicolas Villar, Stuart Taylor, David Molyneaux, Otmar Hilliges
  • Publication number: 20120113140
    Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
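    The core loop above tracks the real object and applies its movement to the virtual environment. A minimal sketch using centroid tracking and a 1:1 motion mapping, both illustrative simplifications:

      import numpy as np

      def object_centroid(mask):
          ys, xs = np.nonzero(mask)
          return np.array([xs.mean(), ys.mean()])

      def update_virtual(virtual_pos, prev_mask, curr_mask, scale=1.0):
          """Mirror the real object's frame-to-frame motion on a virtual object."""
          delta = object_centroid(curr_mask) - object_centroid(prev_mask)
          return virtual_pos + scale * delta

      prev = np.zeros((480, 640), bool); prev[200:240, 100:140] = True
      curr = np.zeros((480, 640), bool); curr[200:240, 160:200] = True
      print(update_virtual(np.array([0.0, 0.0]), prev, curr))  # [60. 0.]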
  • Publication number: 20120113223
    Abstract: Techniques for user interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a user's real hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees of freedom, and enables interaction between the hand and the virtual object.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
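    The gating rule in this abstract, enabling interaction only while the two tracked real objects touch, reduces to a proximity test on the tracked positions. A toy sketch with an illustrative contact threshold:

      import numpy as np

      def interaction_enabled(first_pos, second_pos, touch_dist_m=0.01):
          """Enable interaction only while the tracked objects are in contact."""
          gap = np.linalg.norm(np.asarray(first_pos) - np.asarray(second_pos))
          return gap <= touch_dist_m

      print(interaction_enabled([0.20, 0.10, 0.50], [0.20, 0.105, 0.50]))  # True
      print(interaction_enabled([0.20, 0.10, 0.50], [0.30, 0.10, 0.50]))   # False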