Patents by Inventor Robert M. Craig

Robert M. Craig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10552666
    Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: February 4, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
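    A minimal Python sketch of the spatial-resolution step described in the abstract above: the head's 3-D location from the depth camera is projected into the light intensity camera's image plane and the head region is cropped. The calibration matrices, the assumed head radius, and the function names are illustrative assumptions, not details from the patent.
      import numpy as np

      def project_to_color(xyz, K_color, R, t):
          # Depth-camera point (meters) -> color-camera pixel via extrinsics + pinhole model.
          p = R @ xyz + t
          uvw = K_color @ p
          return uvw[:2] / uvw[2]

      def head_region(intensity_frame, head_xyz, K_color, R, t, head_radius_m=0.12):
          # Crop the patch of the light intensity frame covering the 3-D head location.
          u, v = project_to_color(head_xyz, K_color, R, t)
          r_px = int(K_color[0, 0] * head_radius_m / head_xyz[2])  # projected head radius
          h, w = intensity_frame.shape[:2]
          u, v = int(u), int(v)
          return intensity_frame[max(0, v - r_px):min(h, v + r_px),
                                 max(0, u - r_px):min(w, u + r_px)]
    The returned crop is what would be handed to whatever face recognizer the pipeline uses.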
  • Publication number: 20180075288
    Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
    Type: Application
    Filed: August 1, 2017
    Publication date: March 15, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
  • Patent number: 9761054
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (e.g., accelerometers and angular rate gyros) attached to a device with a camera or other environment-capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms running on real-time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher-accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
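    A sketch of one common way to combine fast gyro rates with slower vision-based pose estimates: a complementary filter. The abstract does not name the fusion method, so the filter choice and the blending constant are assumptions.
      def fuse_step(theta, gyro_rate, dt, vision_theta=None, alpha=0.98):
          # Integrate the angular rate: responsive, but drifts over time.
          theta = theta + gyro_rate * dt
          # When a vision-based estimate arrives (typically less often than
          # inertial samples), pull the result toward it to cancel the drift.
          if vision_theta is not None:
              theta = alpha * theta + (1.0 - alpha) * vision_theta
          return theta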
  • Patent number: 9754154
    Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
  • Patent number: 9749619
    Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and on the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: August 29, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph C. Bertolami
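    A sketch of the per-eye step implied by the abstract above: the tracked face position from the user-facing detector is split into two eye positions, and the scene-facing image is shifted to approximate each eye's viewpoint. The planar-scene shift and the default interpupillary distance are simplifying assumptions beyond the abstract.
      import numpy as np

      def eye_positions(face_center, ipd=0.063):
          # Split a tracked face position (meters, display frame, x = right)
          # into left/right eye positions using an assumed interpupillary distance.
          half = np.array([0.5 * ipd, 0.0, 0.0])
          return face_center - half, face_center + half

      def view_for_eye(scene_img, eye_x_m, scene_depth_m, focal_px):
          # Approximate the scene from a laterally offset eye with a horizontal
          # image shift; valid only for a roughly planar, distant scene.
          shift_px = int(round(focal_px * eye_x_m / scene_depth_m))
          return np.roll(scene_img, -shift_px, axis=1)
    Feeding the left and right results to a stereoscopic display yields the two offset views; np.roll wraps pixels at the border, which a real renderer would crop or inpaint instead.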
  • Patent number: 9639166
    Abstract: A computer system comprises a hardware interface, a computer-memory device, an update engine, and a posture-recognition engine. The hardware interface is configured to receive depth video of an environment from a camera. The computer-memory device stores a background model of the environment preservable over a reboot of the computer system, the background model including a plurality of trusted coordinates derived from the depth video. The update engine is configured to update the background model, including moving a trusted coordinate to greater depth if an observed pixel is behind the trusted coordinate over a first duration, but retaining the trusted coordinate if the observed pixel depth is in front of the trusted coordinate over the first duration. The posture-recognition engine is configured to recognize posture of a user in front of a background portion of the video, which is bounded by the trusted coordinates of the background model.
    Type: Grant
    Filed: March 11, 2015
    Date of Patent: May 2, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Casey Meekhof, Robert M. Craig, Craig Peeper, Patrick O. Cook, Ketan Dalal, Vladimir Tankovich, Anton Rakovchuk
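    The update rule in the abstract above is concrete enough to sketch directly. Here the "first duration" is modeled as a window of T recent depth frames and the trusted coordinates as a per-pixel depth map; the array shapes and the margin constant are our assumptions.
      import numpy as np

      def update_trusted(trusted, window):
          # trusted: (H, W) trusted background depths; window: (T, H, W) recent frames.
          # A pixel observed behind the trusted depth for the whole window pushes
          # the trusted coordinate to greater depth; a pixel in front of it
          # (e.g., a user standing before the background) leaves it unchanged.
          behind = (window > trusted).all(axis=0)
          updated = trusted.copy()
          updated[behind] = window[:, behind].min(axis=0)
          return updated

      def foreground_mask(depth_frame, trusted, margin=0.05):
          # Anything measurably in front of the trusted background bounds the
          # candidate user region handed to the posture-recognition engine.
          return depth_frame < trusted - margin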
  • Publication number: 20160266650
    Abstract: A computer system comprises a hardware interface, a computer-memory device, an update engine, and a posture-recognition engine. The hardware interface is configured to receive depth video of an environment from a camera. The computer-memory device stores a background model of the environment preservable over a reboot of the computer system, the background model including a plurality of trusted coordinates derived from the depth video. The update engine is configured to update the background model, including moving a trusted coordinate to greater depth if an observed pixel is behind the trusted coordinate over a first duration, but retaining the trusted coordinate if the observed pixel depth is in front of the trusted coordinate over the first duration. The posture-recognition engine is configured to recognize posture of a user in front of a background portion of the video, which is bounded by the trusted coordinates of the background model.
    Type: Application
    Filed: March 11, 2015
    Publication date: September 15, 2016
    Inventors: Casey Meekhof, Robert M. Craig, Craig Peeper, Patrick O. Cook, Ketan Dalal, Vladimir Tankovich, Anton Rakovchuk
  • Publication number: 20150235432
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (i.e. accelerometer and angular rate gyros) attached to a device with a camera or other environment capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing with real time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
  • Patent number: 9024972
    Abstract: Example embodiments of the present disclosure provide techniques for receiving measurements from one or more inertial sensors (i.e. accelerometer and angular rate gyros) attached to a device with a camera or other environment capture capability. In one embodiment, the inertial measurements may be combined with pose estimates obtained from computer vision algorithms executing with real time camera images. Using such inertial measurements, a system may more quickly and efficiently obtain higher accuracy orientation estimates of the device with respect to an object known to be stationary in the environment.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: May 5, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew L. Bronder, Michael A. Dougherty, Adam Green, Joseph Bertolami, Robert M. Craig
  • Publication number: 20150086108
    Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
    Type: Application
    Filed: December 3, 2014
    Publication date: March 26, 2015
    Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
  • Patent number: 8839121
    Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene, and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the common coordinate system.
    Type: Grant
    Filed: May 6, 2009
    Date of Patent: September 16, 2014
    Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
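    A sketch of the transform algebra behind agreeing on a common coordinate system: if each device can locate the same scene anchor, each can derive its own device-to-common transform. The 4x4 homogeneous-matrix convention is an assumption; the abstract leaves the representation open.
      import numpy as np

      def device_to_common(anchor_in_device, anchor_in_common):
          # Both frames must place the shared anchor at the same pose, so
          # common_T_device = common_T_anchor @ inv(device_T_anchor).
          return anchor_in_common @ np.linalg.inv(anchor_in_device)

      def to_common(pose_in_device, T_device_to_common):
          # Re-express any locally tracked pose in the agreed common frame.
          return T_device_to_common @ pose_in_device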
  • Patent number: 8797321
    Abstract: A method and apparatus for rendering the lighting of virtual objects in an augmented reality display. The method includes determining local and ambient light sources based on data provided by one or more light sensors. The light in the physical lighting environment is accounted for by attributing the light to local light sources and/or ambient light sources. A synthesized physical lighting environment is constructed based on the light characteristics of the local and/or ambient light sources, and is used in properly rendering virtual objects in the augmented reality display.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: August 5, 2014
    Assignee: Microsoft Corporation
    Inventors: Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig
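    A sketch of shading a virtual surface point under the synthesized lighting environment the abstract describes: one ambient term plus the estimated local point lights. The Lambertian model and the inverse-square falloff are our assumptions; the abstract does not fix a shading model.
      import numpy as np

      def shade(position, normal, albedo, ambient, local_lights):
          # ambient: rgb intensity attributed to ambient sources;
          # local_lights: list of (light_position, rgb_intensity) pairs.
          color = albedo * ambient
          for light_pos, intensity in local_lights:
              to_light = light_pos - position
              dist2 = float(to_light @ to_light)
              n_dot_l = max(float(normal @ (to_light / np.sqrt(dist2))), 0.0)
              color = color + albedo * intensity * n_dot_l / dist2  # assumed falloff
          return np.clip(color, 0.0, 1.0)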
  • Patent number: 8379057
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: February 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
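    A sketch of one way to modify the scene-facing image for the user's position: a plane-induced homography that re-renders an approximately planar scene for a translated viewpoint. Treating the scene as a plane at distance d, and this particular matrix form, are assumptions beyond the abstract.
      import numpy as np

      def viewpoint_homography(K, t, n=np.array([0.0, 0.0, 1.0]), d=3.0):
          # K: 3x3 camera intrinsics; t: viewpoint translation (meters);
          # n, d: normal and distance of the assumed scene plane.
          H = K @ (np.eye(3) - np.outer(t, n) / d) @ np.linalg.inv(K)
          return H / H[2, 2]
    The resulting H can be applied with any image-warping routine (for example, OpenCV's cv2.warpPerspective) before the augmentation pass adds graphics or other data.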
  • Patent number: 8314832
    Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and on the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: November 20, 2012
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami
  • Publication number: 20120223967
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Application
    Filed: May 14, 2012
    Publication date: September 6, 2012
    Applicant: Microsoft Corporation
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
  • Patent number: 8194101
    Abstract: Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: June 5, 2012
    Assignee: Microsoft Corporation
    Inventors: Samuel A. Mann, Joseph Bertolami, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, John A. Tardif
  • Patent number: 7884823
    Abstract: Game data is rendered in three dimensions in the GPU of a game console. A left camera view and a right camera view are generated from a single camera view. The left and right camera positions are derived as an offset from a default camera. The focal distance of the left and right cameras is infinity. A game developer does not have to encode dual images into a specific hardware format. When a viewer sees the two slightly offset images, the viewer's brain combines them into a single 3D image, giving the illusion that objects either pop out from or recede into the display screen. In another embodiment, individual private video is rendered on a single display screen for different viewers. Rather than rendering two similar offset images, two completely different images are rendered, allowing each player to view only one of the images.
    Type: Grant
    Filed: June 12, 2007
    Date of Patent: February 8, 2011
    Assignee: Microsoft Corporation
    Inventors: Joe Bertolami, Robert M. Craig, Dax Hawkins, Sing Bing Kang, Jonathan E. Lange
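    The abstract above is explicit about the camera setup, so a sketch is direct: each eye's view matrix is the default view translated along the camera's x axis, with parallel (not toed-in) view axes matching the stated infinite focal distance. The 4x4 world-to-camera convention and the eye separation value are assumptions.
      import numpy as np

      def stereo_views(default_view, eye_separation=0.064):
          # default_view: 4x4 world-to-camera matrix of the single game camera.
          def shifted(eye_offset_x):
              shift = np.eye(4)
              shift[0, 3] = -eye_offset_x  # moving the eye right shifts the world left
              return shift @ default_view
          half = 0.5 * eye_separation
          return shifted(-half), shifted(+half)  # (left eye, right eye)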
  • Publication number: 20100287485
    Abstract: Systems and methods for unifying coordinate systems in an augmented reality application or system are disclosed. User devices capture an image of a scene, and determine a location based on the scene image. The scene image may be compared to cartography data or images to determine the location. User devices may propose an origin and orientation or transformation data for a common coordinate system and exchange proposed coordinate system data to agree on a common coordinate system. User devices may also transmit location information to an augmented reality system that then determines a common coordinate system and transmits coordinate system data, such as transformation matrices, to the user devices. Images presented to users may be adjusted based on user device locations relative to the common coordinate system.
    Type: Application
    Filed: May 6, 2009
    Publication date: November 11, 2010
    Inventors: Joseph Bertolami, Samuel A. Mann, Matthew L. Bronder, Michael A. Dougherty, Robert M. Craig, Matthew W. Lee
  • Publication number: 20100253766
    Abstract: Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and on the position of the user's eyes or other parts relative to a component of the system, as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts.
    Type: Application
    Filed: April 1, 2009
    Publication date: October 7, 2010
    Inventors: Samuel A. Mann, Robert M. Craig, John A. Tardif, Joseph Bertolami
  • Publication number: 20100257252
    Abstract: Example embodiments of the present disclosure provide techniques for capturing and analyzing information gathered by a mobile device equipped with one or more sensors. Recognition and tracking software and localization techniques may be used to extrapolate pertinent information about the surrounding environment and transmit the information to a service that can analyze the transmitted information. In one embodiment, when a user views a particular object or landmark on a device with image capture capability, the device may be provided with information through a wireless connection via a database that provides the user with rich metadata regarding the objects in view. Information may be presented through a rendering surface such as a web browser, as a 2D overlay on top of the live image, or in an augmented reality view.
    Type: Application
    Filed: April 1, 2009
    Publication date: October 7, 2010
    Applicant: Microsoft Corporation
    Inventors: Michael A. Dougherty, Samuel A. Mann, Matthew L. Bronder, Joseph Bertolami, Robert M. Craig
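    A sketch of the capture-and-query round trip the abstract outlines. Every concrete name here (the endpoint URL, payload fields, and response shape) is hypothetical; the patent specifies the flow, not a wire format.
      import json
      from dataclasses import dataclass

      @dataclass
      class Annotation:
          label: str   # e.g., a recognized landmark name
          box: tuple   # (x, y, w, h) region in the captured image

      def lookup(image_bytes, lat, lon, post):
          # post: any callable(url, payload_dict) -> JSON string, standing in
          # for the device's wireless transport to the analysis service.
          reply = post("https://example.invalid/recognize",
                       {"image": image_bytes, "lat": lat, "lon": lon})
          return [Annotation(a["label"], tuple(a["box"])) for a in json.loads(reply)]
    The returned annotations would then be drawn as the 2D overlay on the live image or placed in the augmented reality view.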