Patents by Inventor David Alexander Butler

David Alexander Butler has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8391786
    Abstract: Methods of controlling the transfer of data between devices are described in which the manner of control is determined by a movement experienced by at least one of the devices. The method involves detecting a triggering movement and determining a characteristic of this movement. The transfer of data is then controlled based on the characteristic which has been identified.
    Type: Grant
    Filed: January 25, 2007
    Date of Patent: March 5, 2013
    Inventors: Stephen Hodges, Shahram Izadi, David Alexander Butler
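
The control scheme in the abstract above maps a detected movement onto a transfer action. Below is a minimal Python sketch of that idea, assuming a hypothetical accelerometer sample format and a stub Transfer object; the threshold, axis classification and action mapping are illustrative, not taken from the patent.

```python
import math

TRIGGER_THRESHOLD = 2.5  # acceleration magnitude (in g) treated as a triggering movement; assumed value

class Transfer:
    """Stub for a data-transfer session between two devices (illustrative only)."""
    def start(self): print("transfer started")
    def pause(self): print("transfer paused")
    def cancel(self): print("transfer cancelled")

def movement_characteristic(sample):
    """Classify a movement by its dominant axis, standing in for the
    'characteristic' that the patent determines from the triggering movement."""
    return ("tilt-x", "tilt-y", "shake-z")[max(range(3), key=lambda i: abs(sample[i]))]

def control_transfer(sample, transfer):
    """Detect a triggering movement and control the transfer based on its characteristic."""
    if math.sqrt(sum(a * a for a in sample)) <= TRIGGER_THRESHOLD:
        return  # no triggering movement detected
    {"tilt-x": transfer.start,
     "tilt-y": transfer.pause,
     "shake-z": transfer.cancel}[movement_characteristic(sample)]()

control_transfer((3.1, 0.4, 0.2), Transfer())  # dominant x-axis movement -> transfer started
```
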
  • Patent number: 8368663
    Abstract: A touch panel is described which uses at least one infrared source and an array of infrared sensors to detect objects which are in contact with, or close to, the touchable surface of the panel. The panel may be operated in both reflective and shadow modes, in arbitrary per-pixel combinations which change over time. For example, the level of ambient infrared is detected and, if that level exceeds a threshold, shadow mode is used for detection of touch events over some or all of the display. If the threshold is not exceeded, reflective mode is used to detect touch events. The touch panel includes an infrared source and an array of infrared sensors.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: February 5, 2013
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, Stephen Hodges, David Alexander Butler, Alban Rrustemi
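
The mode selection in the entry above reduces to a per-pixel threshold test. Here is a minimal numpy sketch, assuming normalized sensor readings captured with the panel's own IR source on and off; the threshold values are illustrative, not from the patent.

```python
import numpy as np

AMBIENT_THRESHOLD = 0.6  # normalized ambient-IR level above which shadow mode is used (assumed)

def detect_touches(ambient_ir, source_on, source_off, touch_delta=0.2):
    """Per-pixel touch detection combining the two modes from the abstract.

    All inputs are 2-D arrays of normalized sensor readings; source_on and
    source_off are captured with the panel's IR source on and off respectively."""
    # Shadow mode: a touching object blocks ambient IR, so the reading drops.
    shadow_touch = source_off < (ambient_ir - touch_delta)
    # Reflective mode: a touching object reflects the panel's IR back, so the reading rises.
    reflective_touch = (source_on - source_off) > touch_delta
    # Arbitrary per-pixel combination: shadow mode where ambient IR is strong, reflective elsewhere.
    return np.where(ambient_ir > AMBIENT_THRESHOLD, shadow_touch, reflective_touch)

ambient = np.array([[0.8, 0.1]])
on, off = np.array([[0.3, 0.5]]), np.array([[0.3, 0.2]])
print(detect_touches(ambient, on, off))  # [[ True  True]]
```
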
  • Patent number: 8325020
    Abstract: Methods and apparatus for uniquely identifying wireless devices in close physical proximity are described. When two wireless devices are brought into close proximity, one of the devices displays an optical indicator, such as a light pattern. This device then sends messages to other devices which are within wireless range to cause them to use any light sensor to detect a signal. In an embodiment, the light sensor is a camera and the detected signal is an image captured by the camera. Each device then sends data identifying what was detected back to the device displaying the pattern. By analyzing this data, the first device can determine which other device detected the indicator that it displayed and therefore determine that this device is within close physical proximity. In an example, the first device is an interactive surface arranged to identify the wireless addresses of devices which are placed on the surface.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: December 4, 2012
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, Malcolm Hall, Stephen Hodges, William A. S. Buxton, David Alexander Butler
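
The identification protocol in the entry above is a round trip: display an optical indicator, ask each in-range device what its light sensor detected, and match the responses. A toy sketch of that handshake follows, with the light pattern reduced to a random integer and all names invented for illustration; a real system compares captured camera images rather than integers (and this toy model can, rarely, collide).

```python
import random

class Device:
    """A wireless device with a light sensor (an illustrative stand-in)."""
    def __init__(self, address, on_surface):
        self.address = address
        self.on_surface = on_surface  # only devices on the surface can see the pattern

    def capture(self, displayed_pattern):
        # A device on the surface detects the displayed pattern; others see unrelated light.
        return displayed_pattern if self.on_surface else random.getrandbits(16)

def identify_nearby(devices):
    """Display a pattern, collect each device's detection over wireless, and match."""
    pattern = random.getrandbits(16)  # stands in for the displayed light pattern
    responses = {d.address: d.capture(pattern) for d in devices}
    return [addr for addr, seen in responses.items() if seen == pattern]

devices = [Device("aa:01", True), Device("aa:02", False), Device("aa:03", True)]
print(identify_nearby(devices))  # addresses of the devices in close physical proximity
```
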
  • Patent number: 8287281
    Abstract: A system that can enhance cognitive ability through the viewing of sequences of images captured during an event is disclosed. For example, the innovation can employ captured event sequences to improve failing memories in patients with a diagnosed memory condition such as acquired brain injury or a neurodegenerative disease such as Alzheimer's disease. These event sequences can be captured from the point of view of a user (e.g., first person) as well as from a third-person or other monitoring location (e.g., a car).
    Type: Grant
    Filed: December 6, 2006
    Date of Patent: October 16, 2012
    Assignee: Microsoft Corporation
    Inventors: Chris Demetrios Karkanias, Stephen E. Hodges, Emma L. Berry, Georgina E. Browne, Hilary Lyndsay Williams, Kenneth R. Wood, Samuel Gavin Smyth, David Alexander Butler
  • Patent number: 8272743
    Abstract: The techniques described herein provide a surface computing device that includes a surface layer configured to be in a transparent state and a diffuse state. In the diffuse state, an image can be projected onto the surface. In the transparent state, an image can be projected through the surface.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: September 25, 2012
    Assignee: Microsoft Corporation
    Inventors: Stuart Taylor, Shahram Izadi, Daniel A. Rosenfeld, Stephen Hodges, David Alexander Butler, James Scott, Nicolas Villar
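
The core of the surface computing design above is alternating the surface layer between its two states. A minimal scheduling sketch with print-only stand-ins for the surface, projector and camera; the strict frame-by-frame alternation is an assumption, not something the abstract prescribes.

```python
import itertools

class Stub:
    """Print-only stand-in for the surface, projector or camera (illustrative)."""
    def __init__(self, name):
        self.name = name
    def __getattr__(self, action):
        return lambda *args: print(self.name, action, *args)

def run_surface(frames=4):
    surface, projector, camera = Stub("surface"), Stub("projector"), Stub("camera")
    for _, state in zip(range(frames), itertools.cycle(("diffuse", "transparent"))):
        surface.set_state(state)
        if state == "diffuse":
            projector.project_onto_surface()     # image formed on the diffuse layer
        else:
            projector.project_through_surface()  # image passes through the surface
            camera.capture_beyond_surface()      # imaging through the transparent layer

run_surface()
```
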
  • Patent number: 8269746
    Abstract: A touch panel is arranged to enable communication with nearby devices using infrared signals. The touch panel includes an array of infrared sensors arranged parallel to the touchable surface of the panel, and at least one of the sensors is capable of detecting an infrared signal received from a nearby device.
    Type: Grant
    Filed: March 29, 2007
    Date of Patent: September 18, 2012
    Assignee: Microsoft Corporation
    Inventors: Stephen Hodges, Shahram Izadi, David Alexander Butler, Alban Rrustemi
  • Publication number: 20120194650
    Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
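
The first interference-reduction scheme listed in the entry above, cycling between the sources, is simple to illustrate. Below is a sketch of a fixed round-robin cycle with an invented IRSource stub; the slot timing is arbitrary, and the adaptive, wavelength-based and motion-based variants are left out.

```python
import itertools
import time

class IRSource:
    """Stand-in for one controllable structured-light projector (illustrative)."""
    def __init__(self, name):
        self.name = name
    def set_emitting(self, on):
        print(f"{self.name}: {'on' if on else 'off'}")

def cycle_sources(sources, slot_seconds=0.033, slots=6):
    """Fixed round-robin: exactly one source emits per slot, so the projected
    patterns never overlap. The paired depth camera captures during its slot."""
    for _, active in zip(range(slots), itertools.cycle(sources)):
        for source in sources:
            source.set_emitting(source is active)
        time.sleep(slot_seconds)

cycle_sources([IRSource("cam-A"), IRSource("cam-B"), IRSource("cam-C")])
```
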
  • Publication number: 20120195471
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
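
The segmentation step in the entry above reduces to labelling image elements where correspondence between the two depth images fails. A simplified numpy sketch follows, assuming the previous depth image has already been reprojected into the current camera pose by the tracking step, whereas the patent folds this test into the iterative closest point correspondence search itself; the tolerance is illustrative.

```python
import numpy as np

def segment_moving(current_depth, previous_depth_aligned, tolerance=0.05):
    """Return a boolean mask of image elements belonging to the moving object.

    current_depth, previous_depth_aligned: 2-D depth maps in metres, with 0
    marking invalid measurements. Points whose depths disagree by more than
    the tolerance are the 'outlying points' labelled as the moving object."""
    valid = (current_depth > 0) & (previous_depth_aligned > 0)
    outliers = np.abs(current_depth - previous_depth_aligned) > tolerance
    return valid & outliers

cur = np.array([[1.00, 1.50], [2.00, 0.00]])
prev = np.array([[1.01, 1.10], [2.00, 2.00]])
print(segment_moving(cur, prev))  # only the element that moved is True
```
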
  • Publication number: 20120196679
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
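
The point-to-plane error metric named in the entry above measures, for each correspondence, the distance from a source point to the tangent plane at its destination point. Here is a small numpy sketch of the metric itself, with the camera transform fixed at the identity for brevity; the patent optimizes this metric over the registration parameters, on a GPU, inside the iterative closest point loop.

```python
import numpy as np

def point_to_plane_error(src_points, dst_points, dst_normals):
    """Sum over correspondences of ((s - d) . n)^2.

    src_points, dst_points: (N, 3) corresponding 3-D points from the current
    and previous depth maps; dst_normals: (N, 3) unit surface normals at the
    destination points. ICP seeks the camera motion that minimizes this sum."""
    residuals = np.einsum("ij,ij->i", src_points - dst_points, dst_normals)
    return float(np.sum(residuals ** 2))

src = np.array([[0.0, 0.0, 1.1]])
dst = np.array([[0.0, 0.0, 1.0]])
normals = np.array([[0.0, 0.0, 1.0]])
print(point_to_plane_error(src, dst, normals))  # ~0.01: 0.1 m off the plane, squared
```
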
  • Publication number: 20120198103
    Abstract: A modular development platform is described which enables the creation of reliable, compact, physically robust and power-efficient embedded device prototypes. The platform consists of a base module, which holds a processor, and one or more peripheral modules, each having an interface element. The base module and the peripheral modules may be electrically and/or physically connected together. The base module communicates with peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet.
    Type: Application
    Filed: April 4, 2012
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Stephen E. Hodges, David Alexander Butler, Shahram Izadi, Chih-Chieh Han
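
The addressed-packet scheme in the entry above can be illustrated with a tiny framing format. The sketch below assumes a 1-byte module address, 1-byte length and additive checksum; the abstract does not specify a wire format, so this layout is invented purely for illustration.

```python
import struct

def encode_packet(module_address, payload):
    """Frame a data packet for the bus: address byte, length byte, payload,
    then a 1-byte additive checksum over everything before it."""
    header = struct.pack("BB", module_address, len(payload))
    checksum = sum(header + payload) & 0xFF
    return header + payload + struct.pack("B", checksum)

def decode_packet(frame, my_address):
    """Return the payload if the packet is addressed to this module and intact."""
    address, length = struct.unpack_from("BB", frame)
    payload, checksum = frame[2:2 + length], frame[2 + length]
    if address != my_address or (sum(frame[:-1]) & 0xFF) != checksum:
        return None  # not the intended recipient, or corrupted in transit
    return payload

frame = encode_packet(0x07, b"\x01\x64")  # e.g. a command for peripheral module 0x07
print(decode_packet(frame, 0x07))         # b'\x01\x64'
```
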
  • Publication number: 20120194516
    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
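
The update described in the entry above is naturally expressed as a sweep through each (x, y) column of voxels; on a GPU, one execution thread handles each column, as the abstract describes. The plain-Python sketch below shows the same logic with a simplified truncated-signed-distance running-average update; the parameters and exact update rule are illustrative, not the patented kernel.

```python
import numpy as np

def update_volume(tsdf, weights, depth_image, world_to_cam, intrinsics,
                  voxel_size, trunc=0.03):
    """Update a (X, Y, Z) voxel volume from one depth image and camera pose.

    world_to_cam: 4x4 matrix; intrinsics: (fx, fy, cx, cy); depths in metres."""
    fx, fy, cx, cy = intrinsics
    X, Y, Z = tsdf.shape
    H, W = depth_image.shape
    for x in range(X):                # on a GPU, one thread per (x, y) column
        for y in range(Y):
            for z in range(Z):        # each thread iterates through its column
                world = np.array([x * voxel_size, y * voxel_size, z * voxel_size, 1.0])
                cam = world_to_cam @ world
                if cam[2] <= 0:
                    continue          # voxel is behind the camera
                u = int(fx * cam[0] / cam[2] + cx)   # corresponding depth image location
                v = int(fy * cam[1] / cam[2] + cy)
                if not (0 <= u < W and 0 <= v < H) or depth_image[v, u] <= 0:
                    continue
                sdf = depth_image[v, u] - cam[2]     # the distance 'factor'
                if sdf < -trunc:
                    continue          # voxel is far behind the observed surface
                value = min(1.0, sdf / trunc)        # truncated signed distance
                w = weights[x, y, z]
                tsdf[x, y, z] = (tsdf[x, y, z] * w + value) / (w + 1)  # running average
                weights[x, y, z] = w + 1

tsdf, weights = np.zeros((2, 2, 4)), np.zeros((2, 2, 4))
depth = np.full((8, 8), 0.10)  # flat surface 10 cm in front of the camera
update_volume(tsdf, weights, depth, np.eye(4), (8.0, 8.0, 4.0, 4.0), voxel_size=0.05)
print(tsdf[0, 0])  # stored values along one voxel column
```
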
  • Publication number: 20120194644
    Abstract: Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
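
Relocalization from the "previously gathered keyframes" mentioned in the entry above can be sketched as a nearest-keyframe search. The toy numpy version below uses mean absolute depth difference as the similarity score; a real system would match more robust features and then refine the recovered pose.

```python
import numpy as np

def relocalize(current_depth, keyframes, max_error=0.15):
    """Recover the camera pose after tracking failure by matching keyframes.

    keyframes: list of (depth_map, pose) pairs gathered while tracking was
    healthy. The error threshold and scoring function are illustrative."""
    best_pose, best_error = None, np.inf
    for depth_map, pose in keyframes:
        valid = (current_depth > 0) & (depth_map > 0)
        if not valid.any():
            continue
        error = np.mean(np.abs(current_depth[valid] - depth_map[valid]))
        if error < best_error:
            best_pose, best_error = pose, error
    return best_pose if best_error < max_error else None  # None: keep searching

kf = [(np.full((4, 4), 1.0), "pose-A"), (np.full((4, 4), 2.0), "pose-B")]
print(relocalize(np.full((4, 4), 1.02), kf))  # pose-A
```
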
  • Publication number: 20120194517
    Abstract: Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120139841
    Abstract: A user interface device with actuated buttons is described. In an embodiment, the user interface device comprises two or more buttons and the motion of the buttons is controlled by actuators under software control such that their motion is inter-related. The position or motion of the buttons may provide a user with feedback about the current state of a software program they are using or provide them with enhanced user input functionality. In another embodiment, the ability to move the buttons is used to reconfigure the user interface buttons and this may be performed dynamically, based on the current state of the software program, or may be performed dependent upon the software program being used. The user interface device may be a peripheral device, such as a mouse or keyboard, or may be integrated within a computing device such as a games device.
    Type: Application
    Filed: December 1, 2010
    Publication date: June 7, 2012
    Applicant: Microsoft Corporation
    Inventors: Stuart Taylor, Jonathan Hook, David Alexander Butler, Shahram Izadi, Nicolas Villar, Stephen Edward Hodges
  • Publication number: 20120139897
    Abstract: A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state, when images are displayed, and a transparent state, when imaging beyond the surface can be performed. The device may form part of a tabletop comprising a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed.
    Type: Application
    Filed: December 2, 2010
    Publication date: June 7, 2012
    Applicant: Microsoft Corporation
    Inventors: David Alexander Butler, Stephen Edward Hodges, Shahram Izadi, Nicolas Villar, Stuart Taylor, David Molyneaux, Otmar Hilliges
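
Synchronizing the displayed sequence in the entry above with the filter's rotation comes down to selecting a view by the filter's current angle. A toy sketch dividing one rotation into one sector per viewer; the sector layout and the source of the angle reading are assumptions.

```python
def view_for_angle(filter_angle_deg, views):
    """Pick which viewer's image to display for the current filter angle.

    views: mapping of viewing-angle sector -> image, e.g. one sector per seat
    around the table. Assumes the view-angle restrictive filter passes light
    only near its current angle, so each viewer sees only their own frames."""
    sector_size = 360 / len(views)
    sector = int(filter_angle_deg % 360 // sector_size)
    return list(views.values())[sector]

views = {"north": "scene-for-north", "east": "scene-for-east",
         "south": "scene-for-south", "west": "scene-for-west"}
for angle in (10, 100, 190, 280):  # sampled as the filter spins
    print(angle, view_for_angle(angle, views))
```
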
  • Publication number: 20120113223
    Abstract: Techniques for user-interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees-of-freedom, and enables interaction between the hand and the virtual object.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120113140
    Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
  • Patent number: 8175099
    Abstract: A modular development platform is described which enables the creation of reliable, compact, physically robust and power-efficient embedded device prototypes. The platform consists of a base module, which holds the processor, and one or more peripheral modules, each having a peripheral device and an interface element. The modules can be electrically and physically connected together. The base module communicates with peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet.
    Type: Grant
    Filed: May 14, 2007
    Date of Patent: May 8, 2012
    Assignee: Microsoft Corporation
    Inventors: Stephen E. Hodges, David Alexander Butler, Shahram Izadi, Chih-Chieh Han
  • Publication number: 20120075256
    Abstract: A touch panel is described which uses at least one infrared source and an array of infrared sensors to detect objects which are in contact with, or close to, the touchable surface of the panel. The panel may be operated in both reflective and shadow modes, in arbitrary per-pixel combinations which change over time. For example, the level of ambient infrared is detected and, if that level exceeds a threshold, shadow mode is used for detection of touch events over some or all of the display. If the threshold is not exceeded, reflective mode is used to detect touch events. The touch panel includes an infrared source and an array of infrared sensors.
    Type: Application
    Filed: December 7, 2011
    Publication date: March 29, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, Stephen Hodges, David Alexander Butler, Alban Rrustemi
  • Publication number: 20120038891
    Abstract: The techniques described herein provide a surface computing device that includes a surface layer configured to be in a transparent state and a diffuse state. In the diffuse state, an image can be projected onto the surface. In the transparent state, an image can be projected through the surface.
    Type: Application
    Filed: October 24, 2011
    Publication date: February 16, 2012
    Applicant: Microsoft Corporation
    Inventors: Stuart Taylor, Shahram Izadi, Daniel A. Rosenfeld, Stephen Hodges, David Alexander Butler, James Scott, Nicolas Villar