Patents by Inventor Stephen Edward Hodges

Stephen Edward Hodges has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8401225
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, Otmar Hilliges, David Kim, David Molyneaux, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
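The outlier test at the heart of this abstract can be illustrated with a minimal NumPy sketch. It assumes, unlike the real system, that the camera motion between frames has already been compensated, so corresponding points share pixel indices; the 0.5 m threshold and the tiny array sizes are illustrative only.

```python
import numpy as np

def segment_moving_object(prev_depth, curr_depth, threshold=0.5):
    """Label pixels whose depth residual between two frames exceeds a
    threshold as belonging to the moving object (the 'outlying points'
    of the correspondence test). Simplified: assumes camera pose change
    is already compensated, so corresponding pixels share indices."""
    residual = np.abs(curr_depth - prev_depth)
    moving_mask = residual > threshold  # outliers -> moving object
    return moving_mask

# Example: a small object moves in an otherwise static scene.
prev = np.full((4, 4), 2.0)   # flat wall 2 m away
curr = prev.copy()
curr[1:3, 1:3] = 1.0          # object now 1 m away in the centre
mask = segment_moving_object(prev, curr)
print(mask.sum())  # 4 pixels labelled as moving
```

In the real system the correspondence test runs inside the iterative closest point loop already used for camera tracking, which is why the segmentation comes almost for free.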
  • Patent number: 8401242
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: March 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
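The point-to-plane error metric named in the abstract measures, for each correspondence, the distance of a source point from the tangent plane at its matched destination point. A minimal sketch follows; the full system minimizes this metric over the registration parameters on a GPU, whereas here it is only evaluated for given correspondences:

```python
import numpy as np

def point_to_plane_error(src_pts, dst_pts, dst_normals):
    """Point-to-plane error used in ICP-style registration: each
    residual is the projection of (source - destination) onto the
    destination surface normal; the metric is the sum of squares."""
    residuals = np.einsum('ij,ij->i', src_pts - dst_pts, dst_normals)
    return float(np.sum(residuals ** 2))

# Two points offset 0.1 m along the surface normal -> error = 2 * 0.1**2.
dst = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
src = dst + np.array([0.0, 0.0, 0.1])
print(round(point_to_plane_error(src, dst, normals), 6))  # 0.02
```

The point-to-plane form converges faster than point-to-point for mostly planar indoor scenes, which suits the over-20-fps budget the abstract targets.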
  • Publication number: 20130007192
    Abstract: An embedded device sensor and actuation web page access system and method for providing a web application (such as a web page) access to sensor data about an embedded device and access to actuation mechanisms (such as vibration) associated with the device. The system and method can use the sensor data to obtain context information about the embedded device and understand what a user of the device is doing at any given moment. The sensor data can be used by the web application to influence how content is served up to the user. In some embodiments, the sensor data is provided to the web server using the headers in HTTP requests. Moreover, actuation commands for actuation mechanisms on the embedded device are provided using the headers of HTTP responses. Embodiments of the system and method provide a website access to sensor data and actuation commands without changing website operation.
    Type: Application
    Filed: June 28, 2011
    Publication date: January 3, 2013
    Applicant: Microsoft Corporation
    Inventors: Albrecht Schmidt, Nicolas Villar, James Scott, Stephen Edward Hodges
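The header-based transport described in the abstract can be sketched as below. The header names `X-Sensor-*` and `X-Actuate` are hypothetical illustrations, not taken from the patent; the point is only that sensor readings ride on request headers and actuation commands ride back on response headers, with no change to the page itself.

```python
def pack_sensor_headers(readings):
    """Encode sensor readings as custom HTTP request headers.
    (Header names are illustrative, not from the patent.)"""
    return {f"X-Sensor-{name.title()}": str(value)
            for name, value in readings.items()}

def parse_actuation_header(response_headers):
    """Read an actuation command (e.g. a vibration request) from a
    hypothetical response header set by the web server."""
    return response_headers.get("X-Actuate")

# Device attaches accelerometer and light readings to an HTTP request...
headers = pack_sensor_headers({"accel": 0.98, "light": 412})
print(headers["X-Sensor-Accel"])  # 0.98

# ...and the server's response headers can carry an actuation command.
cmd = parse_actuation_header({"X-Actuate": "vibrate:200ms"})
print(cmd)  # vibrate:200ms
```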
  • Publication number: 20120310376
    Abstract: Methods and systems for occupancy prediction using historical occupancy patterns are described. In an embodiment, an occupancy probability is computed by comparing a recent occupancy pattern to historic occupancy patterns. Sensor data for a room, or other space, is used to generate a table of past occupancy which comprises these historic occupancy patterns. The comparison which is performed identifies a number of similar historic occupancy patterns and data from these similar historic occupancy patterns is combined to generate an occupancy probability for a time in the future. In an example, time may be divided into discrete slots and binary values may be used to indicate occupancy or non-occupancy in each slot. An occupancy probability for a defined future time slot then comprises a combination of the binary values for corresponding time slots from each of the identified similar occupancy patterns.
    Type: Application
    Filed: June 2, 2011
    Publication date: December 6, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: John Charles Krumm, James W. Scott, Alice Jane Bernheim Brush, Brian R. Meyers, Stephen Edward Hodges
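The slot-and-similarity scheme in the abstract can be sketched as: binary occupancy values per time slot, Hamming-distance matching of the recent window against historic patterns, and averaging the matched patterns' values at the future slot. Pattern lengths, the value of k, and the data are illustrative.

```python
def predict_occupancy(history, recent, future_offset, k=3):
    """Probability the space is occupied `future_offset` slots after the
    recent window ends: find the k historic patterns most similar to the
    recent window (Hamming distance) and average their occupancy bits
    at the corresponding future slot."""
    n = len(recent)
    scored = []
    for day in history:
        dist = sum(a != b for a, b in zip(day[:n], recent))
        scored.append((dist, day))
    scored.sort(key=lambda t: t[0])  # most similar first
    future_vals = [day[n - 1 + future_offset] for _, day in scored[:k]]
    return sum(future_vals) / len(future_vals)

history = [
    [0, 0, 1, 1, 1, 1],  # similar morning, occupied later
    [0, 0, 1, 1, 1, 0],  # similar morning, empty later
    [0, 0, 1, 1, 0, 1],  # similar morning, occupied later
    [1, 1, 0, 0, 0, 0],  # dissimilar day, excluded by k=3
]
p = predict_occupancy(history, recent=[0, 0, 1, 1], future_offset=2)
print(p)  # 2 of the 3 similar days were occupied two slots ahead
```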
  • Publication number: 20120306876
    Abstract: Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image is tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model as they do not move in sync with the object due to re-gripping.
    Type: Application
    Filed: June 6, 2011
    Publication date: December 6, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jamie Daniel Joseph SHOTTON, Shahram IZADI, Otmar HILLIGES, David KIM, David MOLYNEAUX, Pushmeet KOHLI, Andrew FITZGIBBON, Stephen Edward HODGES
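The background-removal step can be illustrated with a depth-threshold sketch. This is a simplification: the margin value is illustrative, and a pre-captured background frame is assumed here, whereas the patent determines background portions from the depth images themselves.

```python
import numpy as np

def extract_foreground(depth, background_depth, margin=0.05):
    """Remove image elements belonging to the static background: keep
    only pixels noticeably closer than the background depth. Removed
    pixels are set to 0.0, used here to mean 'no data'."""
    fg_mask = depth < (background_depth - margin)
    fg = np.where(fg_mask, depth, 0.0)
    return fg, fg_mask

background = np.full((3, 3), 1.5)  # empty desk, 1.5 m away
frame = background.copy()
frame[1, 1] = 0.6                  # hand-held object in front of the camera
fg, mask = extract_foreground(frame, background)
print(int(mask.sum()))  # 1 foreground pixel survives
```

Only the surviving foreground depth image is then tracked and integrated into the volume, which is what lets the user rotate the object by hand in front of a static camera.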
  • Publication number: 20120309531
    Abstract: A sensing floor to locate people and devices is described. In an embodiment, the sensing floor (or sensing surface), is formed from a flexible substrate on which a number of distributed sensing elements and connections between sensing elements are formed in a conductive material. In an example, these elements and connections may be printed onto the flexible substrate. The sensing floor operates in one or more modes in order to detect people in proximity to the floor. In passive mode, the floor detects signals from the environment, such as electric hum, which are coupled into a sensing element when a person stands on the sensing element. In active mode, one sensing element transmits a signal which is detected in another sensing element when a person bridges those two elements. In hybrid mode, the floor switches between passive and active mode, for example, on detection of a person in passive mode.
    Type: Application
    Filed: June 6, 2011
    Publication date: December 6, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Nan-Wei GONG, Stephen Edward HODGES, Nicolas VILLAR, Joseph A. PARADISO
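The passive/active/hybrid operation can be sketched as a small two-state machine: stay passive (listening for coupled mains hum) until someone is detected, then go active to confirm and locate. Signal values and thresholds below are illustrative only.

```python
def hybrid_mode_step(passive_signal, active_coupling,
                     passive_threshold=0.5, active_threshold=0.5,
                     mode="passive"):
    """One step of a hybrid-mode sensing floor. Passive mode: detect
    environmental signals (e.g. electric hum) coupled into an element
    by a person standing on it. Active mode: one element transmits and
    detection occurs when a person bridges it to a receiving element.
    Returns (person_detected, next_mode)."""
    if mode == "passive":
        detected = passive_signal > passive_threshold
        return detected, ("active" if detected else "passive")
    detected = active_coupling > active_threshold
    return detected, ("active" if detected else "passive")

# A person steps on an element: hum coupling rises, floor switches mode.
detected, mode = hybrid_mode_step(passive_signal=0.9, active_coupling=0.0)
print(detected, mode)  # True active
```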
  • Publication number: 20120294117
    Abstract: A location system comprises a plurality of transponders whose locations are detectable by a base system. The base system interrogates (51-55) the transponders one at a time in accordance with a schedule of consecutive time slots. In response to a priority request received (53) from one of the transponders, the base system interrupts the schedule and interrogates substantially immediately (56, 57, 55) the signaling transponder so as to determine its location with minimal latency.
    Type: Application
    Filed: July 31, 2012
    Publication date: November 22, 2012
    Applicant: AT&T Corp.
    Inventors: Andrew Martin Robert Ward, Stephen Edward Hodges, Peter Joseph Steggles, Rupert William Meldrum Curwen, Joseph Francis Newman
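The interrupt-on-priority scheduling can be sketched as a round-robin simulation: each slot normally interrogates the next transponder in the cycle, but a pending priority request pre-empts the slot so the requesting transponder is located with minimal latency. Transponder names and slot numbering are illustrative.

```python
from collections import deque

def run_schedule(transponders, steps, priority_requests):
    """Simulate a round-robin interrogation schedule that is interrupted
    whenever a transponder raises a priority request: that transponder
    is interrogated in the very next slot, then the cycle resumes.
    `priority_requests` maps slot index -> requesting transponder."""
    cycle = deque(transponders)
    log = []
    for slot in range(steps):
        if slot in priority_requests:
            log.append(priority_requests[slot])  # interrogate immediately
        else:
            log.append(cycle[0])
            cycle.rotate(-1)  # advance the normal schedule
    return log

# Transponder 'C' raises a priority request before slot 2.
order = run_schedule(['A', 'B', 'C', 'D'], steps=5,
                     priority_requests={2: 'C'})
print(order)  # ['A', 'B', 'C', 'C', 'D']
```

Note the regular cycle is paused, not skipped: 'C' is still interrogated in its normal turn after the priority interrupt.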
  • Patent number: 8285259
    Abstract: A wireless opportunistic network that can facilitate resource aggregation by way of interconnected devices is disclosed. In accordance with this opportunistic network, a mobile device can effectively ‘dock’ into the network thereby enabling resources to be shared between devices within the network. In this manner, the docked mobile device can leverage resources available in each of the individual devices of the network. This functionality can be used in many scenarios related to health, from monitoring patients and analyzing basic diagnostic data to identifying bioterrorism by way of collaborating resources between devices within the network.
    Type: Grant
    Filed: May 29, 2007
    Date of Patent: October 9, 2012
    Assignee: Microsoft Corporation
    Inventors: Chris Demetrios Karkanias, Stephen Edward Hodges
  • Patent number: 8264319
    Abstract: A location system comprises a plurality of transponders whose locations are detectable by a base system. The base system interrogates (51-55) the transponders one at a time in accordance with a schedule of consecutive time slots. In response to a priority request received (53) from one of the transponders, the base system interrupts the schedule and interrogates substantially immediately (56, 57, 55) the signaling transponder so as to determine its location with minimal latency.
    Type: Grant
    Filed: May 15, 2008
    Date of Patent: September 11, 2012
    Assignee: AT&T Intellectual Property II, L.P.
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrew Martin Robert Ward, Stephen Edward Hodges, Peter Joseph Steggles, Rupert William Meldrum Curwen, Joseph Francis Newman
  • Patent number: 8260272
    Abstract: A wireless opportunistic network that can facilitate data transfer by way of interconnected devices is disclosed. In accordance with this opportunistic network, each of the devices effectively contributes to the transfer of the information thereby obviating the need for an external carrier. In this manner, the carrier infrastructure is embodied and distributed throughout the individual devices of the network. In a particular aspect, the opportunistic network is employed to transfer and make available health-related data. This functionality can be used in many scenarios related to health, from monitoring patients and conveying basic diagnostic data to identifying bioterrorism by way of collaborating data between a number of devices within the network. Essentially, the innovation provides for at least two core functional ideas: the opportunistic network infrastructure and the use of the network in health-related scenarios.
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: September 4, 2012
    Assignee: Microsoft Corporation
    Inventors: Chris Demetrios Karkanias, Stephen Edward Hodges, James William Scott
  • Publication number: 20120195471
    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard NEWCOMBE, Shahram IZADI, Otmar HILLIGES, David KIM, David MOLYNEAUX, Jamie Daniel Joseph SHOTTON, Pushmeet KOHLI, Andrew FITZGIBBON, Stephen Edward HODGES, David Alexander BUTLER
  • Publication number: 20120194650
    Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
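The simplest of the listed methods, cycling between the sources, can be sketched as a fixed round-robin activation schedule in which exactly one source emits per frame, so structured-light patterns never overlap in time. The patent also covers dynamic cycles, wavelength separation, moving source-camera pairs, and pattern shaping, which are not shown here.

```python
def source_cycle(num_sources, num_frames):
    """Time-multiplex the infra-red structured-light sources: in each
    frame exactly one source is active, eliminating interference from
    overlapping patterns. Returns, per frame, a list of on/off flags."""
    schedule = []
    for frame in range(num_frames):
        active = frame % num_sources  # fixed round-robin cycle
        schedule.append([s == active for s in range(num_sources)])
    return schedule

sched = source_cycle(num_sources=3, num_frames=4)
print(sched[0])  # [True, False, False] -- only source 0 emits in frame 0
```

The cost of time-multiplexing is a lower effective frame rate per camera, which is why the abstract also describes wavelength separation as an alternative that keeps all sources on simultaneously.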
  • Publication number: 20120196679
    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120194516
    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
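The per-voxel update can be illustrated for a single voxel as a truncated signed-distance running average, a common formulation consistent with the abstract's "factor relating to the distance" and "updates a stored value". The truncation distance and weight cap are illustrative, and the real system runs one GPU thread per voxel column rather than this scalar loop.

```python
def update_voxel(stored_sdf, stored_weight, measured_dist, voxel_dist,
                 truncation=0.1, max_weight=100):
    """Weighted running-average update of one voxel's truncated
    signed-distance value. `measured_dist` is the depth measured along
    the ray through the voxel's image location; `voxel_dist` is the
    voxel's own distance from the camera along that ray."""
    sdf = measured_dist - voxel_dist              # signed distance to surface
    sdf = max(-truncation, min(truncation, sdf))  # truncate far values
    new_weight = min(stored_weight + 1, max_weight)
    new_sdf = (stored_sdf * stored_weight + sdf) / new_weight
    return new_sdf, new_weight

# First observation of a voxel 0.05 m in front of the measured surface.
sdf, w = update_voxel(stored_sdf=0.0, stored_weight=0,
                      measured_dist=2.0, voxel_dist=1.95)
print(round(sdf, 6), w)  # 0.05 1
```

Averaging over many frames is what smooths sensor noise: the reconstructed surface is the zero crossing of the stored signed-distance field.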
  • Publication number: 20120194644
    Abstract: Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120194517
    Abstract: Use of a 3D environment model in gameplay is described. In an embodiment, a mobile depth camera is used to capture a series of depth images as it is moved around and a dense 3D model of the environment is generated from this series of depth images. This dense 3D model is incorporated within an interactive application, such as a game. The mobile depth camera is then placed in a static position for an interactive phase, which in some examples is gameplay, and the system detects motion of a user within a part of the environment from a second series of depth images captured by the camera. This motion provides a user input to the interactive application, such as a game. In further embodiments, automatic recognition and identification of objects within the 3D model may be performed and these identified objects then change the way that the interactive application operates.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
  • Publication number: 20120139841
    Abstract: A user interface device with actuated buttons is described. In an embodiment, the user interface device comprises two or more buttons and the motion of the buttons is controlled by actuators under software control such that their motion is inter-related. The position or motion of the buttons may provide a user with feedback about the current state of a software program they are using or provide them with enhanced user input functionality. In another embodiment, the ability to move the buttons is used to reconfigure the user interface buttons and this may be performed dynamically, based on the current state of the software program, or may be performed dependent upon the software program being used. The user interface device may be a peripheral device, such as a mouse or keyboard, or may be integrated within a computing device such as a games device.
    Type: Application
    Filed: December 1, 2010
    Publication date: June 7, 2012
    Applicant: Microsoft Corporation
    Inventors: Stuart Taylor, Jonathan Hook, David Alexander Butler, Shahram Izadi, Nicolas Villar, Stephen Edward Hodges
  • Publication number: 20120139897
    Abstract: A tabletop display providing multiple views to users is described. In an embodiment the display comprises a rotatable view-angle restrictive filter and a display system. The display system displays a sequence of images synchronized with the rotation of the filter to provide multiple views according to viewing angle. These multiple views provide a user with a 3D display or with personalized content which is not visible to a user at a sufficiently different viewing angle. In some embodiments, the display comprises a diffuser layer on which the sequence of images is displayed. In further embodiments, the diffuser is switchable between a diffuse state when images are displayed and a transparent state when imaging beyond the surface can be performed. The device may form part of a tabletop comprising a touch-sensitive surface. Detected touch events and images captured through the surface may be used to modify the images being displayed.
    Type: Application
    Filed: December 2, 2010
    Publication date: June 7, 2012
    Applicant: Microsoft Corporation
    Inventors: David Alexander Butler, Stephen Edward Hodges, Shahram Izadi, Nicolas Villar, Stuart Taylor, David Molyneaux, Otmar Hilliges
  • Publication number: 20120117514
    Abstract: Three-dimensional user interaction is described. In one example, a virtual environment having virtual objects and a virtual representation of a user's hand with digits formed from jointed portions is generated, a point on each digit of the user's hand is tracked, and the virtual representation's digits controlled to correspond to those of the user. An algorithm is used to calculate positions for the jointed portions, and the physical forces acting between the virtual representation and objects are simulated. In another example, an interactive computer graphics system comprises a processor that generates the virtual environment, a display device that displays the virtual objects, and a camera that captures images of the user's hand. The processor uses the images to track the user's digits, executes the algorithm, and controls the display device to update the virtual objects on the display device by simulating the physical forces.
    Type: Application
    Filed: November 4, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: David Kim, Otmar Hilliges, Shahram Izadi, David Molyneaux, Stephen Edward Hodges
  • Publication number: 20120113140
    Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler