Patents by Inventor Robert M. Craig
Robert M. Craig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10552666
Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
Type: Grant
Filed: August 1, 2017
Date of Patent: February 4, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
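The pipeline this abstract describes can be sketched minimally as follows. This is an illustration, not the claimed embodiment: a toy nearest-pixel rule stands in for the trained head detector, and the focal length and head radius are assumed constants.

```python
import numpy as np

def find_head_candidate(depth):
    """Toy stand-in for a trained head detector: pick the nearest pixel
    in the depth frame as the candidate head location."""
    r, c = np.unravel_index(np.argmin(depth), depth.shape)
    return int(r), int(c), float(depth[r, c])

def resolve_head_region(row, col, z, focal=100.0, head_radius_m=0.12):
    """Spatially resolve the head region in the light-intensity image:
    a 3-D head at depth z projects to a crop whose half-size shrinks
    with distance (pinhole model; focal and radius are illustrative)."""
    half = round(focal * head_radius_m / z)
    return row - half, row + half, col - half, col + half

depth = np.full((100, 100), 3.0)   # background at 3 m
depth[40:50, 40:50] = 1.0          # nearer blob: the candidate head
r, c, z = find_head_candidate(depth)
region = resolve_head_region(r, c, z)  # crop to hand to a face recognizer
```

The crop bounds would then index the registered color frame before running any off-the-shelf face recognizer on the result.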
-
Publication number: 20180075288
Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
Type: Application
Filed: August 1, 2017
Publication date: March 15, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
-
Patent number: 9761054
Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment-capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
Type: Grant
Filed: May 4, 2015
Date of Patent: September 12, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
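One common way to realize this kind of inertial/vision fusion (a sketch, not necessarily the claimed embodiment) is a complementary filter: integrate the gyro for fast response, then pull the estimate toward the slower but drift-free vision pose of a stationary object. All rates and gains below are illustrative.

```python
def fuse_orientation(prev_angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate for fast response,
    then blend toward the drift-free vision estimate."""
    integrated = prev_angle + gyro_rate * dt
    return alpha * integrated + (1.0 - alpha) * vision_angle

# The camera watches a stationary object (vision says angle = 0) while
# the gyro carries a small constant bias; fusion bounds the drift that
# pure integration would accumulate without limit.
fused, integrated_only = 0.0, 0.0
for _ in range(1000):
    fused = fuse_orientation(fused, gyro_rate=0.01, vision_angle=0.0, dt=0.01)
    integrated_only += 0.01 * 0.01
```

After a thousand steps the fused estimate settles near zero while the gyro-only integral has drifted by the full accumulated bias.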
-
Patent number: 9754154
Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
Type: Grant
Filed: December 3, 2014
Date of Patent: September 5, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
-
Patent number: 9749619
Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
Type: Grant
Filed: October 31, 2012
Date of Patent: August 29, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph C. Bertolami
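A minimal sketch of the stereo part of this idea, under a simple pinhole parallax model: derive two eye positions from the head position the user-facing detector reports, then re-center the scene-facing image once per eye. The focal length, interocular distance, and image size are assumed values for illustration.

```python
def eye_positions(head_xyz, ipd=0.064):
    """Left/right eye positions from a tracked head position, assuming
    the eyes sit ipd/2 to either side of the head center."""
    x, y, z = head_xyz
    return (x - ipd / 2, y, z), (x + ipd / 2, y, z)

def crop_center(eye_xyz, focal=800.0, image_center=(320, 240)):
    """Where the scene-facing image should be re-centered so the scene
    is seen from this eye's perspective (pinhole parallax model)."""
    x, y, z = eye_xyz
    return (image_center[0] - focal * x / z,
            image_center[1] - focal * y / z)

left_eye, right_eye = eye_positions((0.0, 0.0, 0.5))
left_view, right_view = crop_center(left_eye), crop_center(right_eye)
```

The two re-centered crops form the stereo pair; the offset between them grows as the user approaches the display, matching the parallax a real window would show.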
-
Patent number: 9639166
Abstract: A computer system comprises a hardware interface, a computer-memory device, an update engine, and a posture-recognition engine. The hardware interface is configured to receive depth video of an environment from a camera. The computer-memory device stores a background model of the environment preservable over a reboot of the computer system, the background model including a plurality of trusted coordinates derived from the depth video. The update engine is configured to update the background model, including moving a trusted coordinate to greater depth if an observed pixel is behind the trusted coordinate over a first duration, but retaining the trusted coordinate if the observed pixel depth is in front of the trusted coordinate over the first duration. The posture-recognition engine is configured to recognize posture of a user in front of a background portion of the video, which is bounded by the trusted coordinates of the background model.
Type: Grant
Filed: March 11, 2015
Date of Patent: May 2, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Casey Meekhof, Robert M. Craig, Craig Peeper, Patrick O. Cook, Ketan Dalal, Vladimir Tankovich, Anton Rakovchuk
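The asymmetric update rule in this abstract can be sketched per pixel as below. The frame count standing in for the "first duration" is an assumed parameter.

```python
def update_trusted(trusted, observed, behind_frames, patience=30):
    """Update one trusted background coordinate: commit to greater depth
    only after the observed pixel has stayed behind it for `patience`
    consecutive frames; a pixel in front (e.g., a user) never pulls the
    background closer and simply resets the counter."""
    if observed > trusted:                 # observed is behind the model
        behind_frames += 1
        if behind_frames >= patience:
            return observed, 0             # accept the deeper background
        return trusted, behind_frames
    return trusted, 0                      # in front: retain, reset

trusted, count = 2.0, 0
for _ in range(30):                        # furniture moved: depth now 3.0
    trusted, count = update_trusted(trusted, 3.0, count)
after_behind = trusted                     # background pushed back to 3.0
for _ in range(100):                       # a user standing at 1.0 m
    trusted, count = update_trusted(trusted, 1.0, count)
```

The asymmetry is the point: transient foreground (the user being tracked) can never be absorbed into the background model, while genuine background changes eventually are.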
-
Publication number: 20160266650
Abstract: A computer system comprises a hardware interface, a computer-memory device, an update engine, and a posture-recognition engine. The hardware interface is configured to receive depth video of an environment from a camera. The computer-memory device stores a background model of the environment preservable over a reboot of the computer system, the background model including a plurality of trusted coordinates derived from the depth video. The update engine is configured to update the background model, including moving a trusted coordinate to greater depth if an observed pixel is behind the trusted coordinate over a first duration, but retaining the trusted coordinate if the observed pixel depth is in front of the trusted coordinate over the first duration. The posture-recognition engine is configured to recognize posture of a user in front of a background portion of the video, which is bounded by the trusted coordinates of the background model.
Type: Application
Filed: March 11, 2015
Publication date: September 15, 2016
Inventors: Casey Meekhof, Robert M. Craig, Craig Peeper, Patrick O. Cook, Ketan Dalal, Vladimir Tankovich, Anton Rakovchuk
-
Publication number: 20150235432
Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment-capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
Type: Application
Filed: May 4, 2015
Publication date: August 20, 2015
Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
-
Patent number: 9024972
Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment-capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
Type: Grant
Filed: April 1, 2009
Date of Patent: May 5, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
-
Publication number: 20150086108
Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
Type: Application
Filed: December 3, 2014
Publication date: March 26, 2015
Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
-
Patent number: 8839121
Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the coordinate system.
Type: Grant
Filed: May 6, 2009
Date of Patent: September 16, 2014
Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
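In the simplest two-device case, agreeing on a common coordinate system reduces to each device publishing a transform from its local frame into the agreed frame. A 2-D rigid-transform sketch (the device poses below are made up for illustration):

```python
import math

def rigid_transform(theta, tx, ty):
    """3x3 row-major 2-D rigid transform: rotate by theta, then
    translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def apply(m, p):
    """Apply a 3x3 transform to a 2-D point in homogeneous form."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Suppose device A's local frame is agreed on as the common frame, and
# device B sits at (2, 0) in that frame with no rotation; B then shares
# this matrix as its coordinate system data.
b_to_common = rigid_transform(0.0, 2.0, 0.0)
landmark_in_b = (1.0, 1.0)            # where B sees a shared landmark
landmark_common = apply(b_to_common, landmark_in_b)
```

Once every device can map its local observations through such a matrix, augmentations placed by one user land at the same spot in every other user's view.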
-
Patent number: 8797321
Abstract: A method and apparatus for rendering the lighting of virtual objects in an augmented reality display. The method includes determining local and ambient light sources based on data provided by one or more light sensors. The light in the physical lighting environment is accounted for by attributing the light to local light sources and/or ambient light sources. A synthesized physical lighting environment is constructed based on the light characteristics of the local and/or ambient light sources, and is used in properly rendering virtual objects in the augmented reality display.
Type: Grant
Filed: April 1, 2009
Date of Patent: August 5, 2014
Assignee: Microsoft Corporation
Inventors: Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig
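Once the sensed light is split into ambient and local terms, shading a virtual surface from that synthesized environment can be as simple as a Lambert term per local light plus the ambient floor. A sketch with assumed sensor-derived values:

```python
def shade(normal, light_dir, ambient, local_intensity):
    """Lambert shading of a virtual surface under one local light plus
    an ambient term; both intensities would come from the synthesized
    lighting environment rather than being authored by hand."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return min(1.0, ambient + local_intensity * ndotl)

# Surface facing the local light vs. facing away from it.
lit = shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), ambient=0.2, local_intensity=0.6)
back = shade((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), ambient=0.2, local_intensity=0.6)
```

Because the back-facing side still receives the ambient term, the virtual object darkens plausibly rather than going black, which is what makes it sit convincingly in the real scene.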
-
Patent number: 8379057
Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
Type: Grant
Filed: May 14, 2012
Date of Patent: February 19, 2013
Assignee: Microsoft Corporation
Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
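A simple way to modify the scene-facing image by user position is to slide a crop window opposite the user's offset from the display axis, so the screen behaves like a window onto the scene. The frame size, crop size, and gain below are illustrative, not taken from the patent.

```python
def crop_window(user_xy, frame=(640, 480), crop=(320, 240), gain=0.5):
    """Slide a crop of the scene-facing frame opposite the user's
    offset, clamped so the crop stays inside the frame."""
    max_dx = (frame[0] - crop[0]) / 2.0
    max_dy = (frame[1] - crop[1]) / 2.0
    dx = max(-max_dx, min(max_dx, -gain * user_xy[0]))
    dy = max(-max_dy, min(max_dy, -gain * user_xy[1]))
    left = int((frame[0] - crop[0]) / 2.0 + dx)
    top = int((frame[1] - crop[1]) / 2.0 + dy)
    return left, top, left + crop[0], top + crop[1]

centered = crop_window((0.0, 0.0))    # user on axis: centered crop
shifted = crop_window((100.0, 0.0))   # user steps right: crop slides left
```

Any augmentation (graphics, labels) would be composited onto the cropped region after this step, so it moves with the user's perspective too.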
-
Patent number: 8314832
Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
Type: Grant
Filed: April 1, 2009
Date of Patent: November 20, 2012
Assignee: Microsoft Corporation
Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami
-
Publication number: 20120223967
Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
Type: Application
Filed: May 14, 2012
Publication date: September 6, 2012
Applicant: Microsoft Corporation
Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael Dougherty, Robert M. Craig, John A. Tardif
-
Patent number: 8194101
Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
Type: Grant
Filed: April 1, 2009
Date of Patent: June 5, 2012
Assignee: Microsoft Corporation
Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
-
Patent number: 7884823
Abstract: Game data is rendered in three dimensions in the GPU of a game console. A left camera view and a right camera view are generated from a single camera view. The left and right camera positions are derived as an offset from a default camera. The focal distance of the left and right cameras is infinity. A game developer does not have to encode dual images into a specific hardware format. When a viewer sees the two slightly offset images, the user's brain combines the two offset images into a single 3D image to give the illusion that objects either pop out from or recede into the display screen. In another embodiment, individual, private video is rendered, on a single display screen, for different viewers. Rather than rendering two similar offset images, two completely different images are rendered, allowing each player to view only one of the images.
Type: Grant
Filed: June 12, 2007
Date of Patent: February 8, 2011
Assignee: Microsoft Corporation
Inventors: Joe Bertolami, Robert M. Craig, Dax Hawkins, Sing Bing Kang, Jonathan E. Lange
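The camera derivation in this abstract is a small computation: offset the single default camera laterally by half an eye separation in each direction. A sketch (the eye separation and camera pose are illustrative values):

```python
def stereo_cameras(cam_pos, right_vec, eye_sep=0.064):
    """Derive left/right camera positions as a lateral offset from the
    single default camera along its right vector; with focal distance
    at infinity, both view directions stay parallel to the default."""
    half = eye_sep / 2.0
    left = tuple(p - half * r for p, r in zip(cam_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vec))
    return left, right

# Default camera at eye height, looking down -z, with +x as its right.
left_cam, right_cam = stereo_cameras((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

The scene is then rendered twice, once per derived camera, and the hardware interleaves the two images; the developer never touches the stereo encoding format directly.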
-
Publication number: 20100287485
Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the coordinate system.
Type: Application
Filed: May 6, 2009
Publication date: November 11, 2010
Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
-
Publication number: 20100253766
Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
Type: Application
Filed: April 1, 2009
Publication date: October 7, 2010
Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami
-
Publication number: 20100257252
Abstract: Example embodiments of the present disclosure provide techniques for capturing and analyzing information gathered by a mobile device equipped with one or more sensors. Recognition and tracking software and localization techniques may be used to extrapolate pertinent information about the surrounding environment and transmit it to a service for analysis. In one embodiment, when a user views a particular object or landmark on a device with image capture capability, the device may be provided with information over a wireless connection from a database, giving the user rich metadata regarding the objects in view. Information may be presented through rendering means such as a web browser, as a 2D overlay on top of the live image, or in augmented reality.
Type: Application
Filed: April 1, 2009
Publication date: October 7, 2010
Applicant: Microsoft Corporation
Inventors: Michael A. Dougherty, Samuel A. Mann, Matthew L. Bronder, Joseph Bertolami, Robert M. Craig
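The metadata-lookup step this abstract describes reduces to joining recognized objects against a remote database. A sketch in which the service is mocked as an in-memory dict (a real system would query it over the device's wireless connection):

```python
def annotate_view(objects_in_view, metadata_db):
    """Return rich metadata for each recognized object in view;
    objects missing from the database are simply left unannotated."""
    return {name: metadata_db[name] for name in objects_in_view
            if name in metadata_db}

# Hypothetical recognizer output joined against a mocked metadata store.
db = {"Space Needle": "Observation tower, Seattle, built 1962"}
overlay = annotate_view(["Space Needle", "unknown blob"], db)
```

The resulting mapping is what the renderer would draw, whether as a browser page, a 2D overlay on the live image, or in-scene augmented reality labels.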