Patents by Inventor Christopher Douglas Edmonds

Christopher Douglas Edmonds has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210208390
    Abstract: A head-mounted display (HMD) is presented. The head-mounted display includes an inertial measurement unit (IMU), one or more displays, and a controller. The controller is configured to establish a reprojection plane for displaying imagery on the one or more displays. Based on output from the IMU, the position and orientation of the HMD relative to the reprojection plane are determined. Image data is then reprojected to the displays. A depth treatment is applied to each of a plurality of locations on the displays based on the determined position and orientation of the HMD relative to the reprojection plane.
    Type: Application
    Filed: January 7, 2020
    Publication date: July 8, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk PRICE, Michael BLEYER, Christopher Douglas EDMONDS, Brent Michael WILSON
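
As a rough illustration of the reprojection-plane idea in publication 20210208390, the Python sketch below computes the HMD's distance to an established reprojection plane and derives a per-pixel depth treatment from it. The linear falloff rule and all numeric values are assumptions made for illustration, not details from the application.

```python
import numpy as np

def distance_to_plane(hmd_position, plane_point, plane_normal):
    """Signed distance from the HMD position to the established reprojection plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(hmd_position - plane_point, n))

def depth_treatment(pixel_depths, plane_distance, falloff=0.25):
    """Illustrative per-pixel depth treatment: weight 1.0 where rendered depth
    matches the plane distance, falling off linearly over `falloff` metres."""
    return np.clip(1.0 - np.abs(pixel_depths - plane_distance) / falloff, 0.0, 1.0)

# Example: HMD located 1.4 m from a reprojection plane placed 2.0 m ahead of the origin.
hmd_position = np.array([0.0, 1.6, 0.6])
plane_point = np.array([0.0, 1.6, 2.0])
plane_normal = np.array([0.0, 0.0, -1.0])
d = abs(distance_to_plane(hmd_position, plane_point, plane_normal))
print(depth_treatment(np.array([1.2, 1.4, 1.8]), d))  # -> [0.2, 1.0, 0.0]
```
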
  • Patent number: 11049277
    Abstract: Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: June 29, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
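
The core step in patent 11049277 is reprojecting the detached camera's image into the integrated camera's perspective using depth and both 6 DOF poses. Below is a minimal sketch of that per-pixel mapping, assuming pinhole intrinsics and world-from-camera 4x4 pose matrices; these conventions and the function name are assumptions, not the patent's implementation.

```python
import numpy as np

def reproject_pixel(uv, depth, K_detached, pose_detached, K_hmd, pose_hmd):
    """Map one pixel of the detached camera's image into the integrated HMD
    camera's image, using the pixel's depth and both cameras' 6 DOF poses
    (world-from-camera 4x4 matrices)."""
    u, v = uv
    ray = np.linalg.inv(K_detached) @ np.array([u, v, 1.0])
    point_cam = ray * depth                               # 3D point in detached-camera frame
    point_world = pose_detached @ np.append(point_cam, 1.0)
    point_hmd = np.linalg.inv(pose_hmd) @ point_world     # into the HMD camera frame
    projected = K_hmd @ point_hmd[:3]
    return projected[:2] / projected[2]
```

Applying this mapping to every pixel, then overlaying the result on the integrated camera's image, produces the overlaid image the abstract describes.
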
  • Patent number: 11037359
    Abstract: Improved techniques are disclosed for providing passthrough images in the form of a stylized image embodying a novel perspective. A raw texture image of an environment is generated. A depth map is acquired for the environment. A stylized image is generated by applying a stylization filter to the raw texture image. Subsequent to acquiring the depth map and generating the stylized image, a stylized parallax-corrected image is generated by reprojecting the stylized image to a new perspective using depth data.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: June 15, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
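
The distinguishing point of patent 11037359 is ordering: stylize the raw texture image first, then reproject the stylized result using the depth map. The sketch below uses a simple posterize filter as a stand-in stylization and takes the reprojection routine as a supplied callable; both are assumptions, not the patented implementation.

```python
import numpy as np

def posterize(image, levels=4):
    """Stand-in stylization filter (the patent does not prescribe a specific filter)."""
    return np.floor(image * levels) / levels

def stylized_parallax_corrected(raw_texture, depth_map, reproject):
    """Stylize first, then reproject the stylized image to the novel
    perspective using the depth data, matching the order in the abstract."""
    stylized = posterize(raw_texture)
    return reproject(stylized, depth_map)
```
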
  • Publication number: 20210174474
    Abstract: Techniques for de-aliasing depth ambiguities included within infrared phase depth images are described herein. An illuminator emits reference light towards a target object. Some of this light is reflected back and detected. A phase image is generated based on phase differences between the reference light and the reflected light. The phase differences represent changes in depth within overlapping sinusoidal periods of the reference and reflected light. The phase image also includes ambiguities because multiple different depths within the phase image share the same phase difference value, even though these depths actually correspond to different real-world depths. The phase image is fed as input to a machine learning (“ML”) component, which is configured to de-alias the ambiguities by determining, for each pixel in the phase image, a corresponding de-aliasing interval. A depth map is generated based on the phase image and any de-aliasing intervals generated by the ML component.
    Type: Application
    Filed: February 5, 2021
    Publication date: June 10, 2021
    Inventors: Michael BLEYER, Christopher Douglas EDMONDS, Raymond Kirk PRICE
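
Below is a sketch of how a per-pixel de-aliasing interval, such as one predicted by the ML component described in publication 20210174474, turns a wrapped phase image into unambiguous depth. The modulation-frequency relationship is standard time-of-flight math rather than text from the application, and the function and parameter names are assumptions.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def dealiased_depth(phase_image, dealiasing_intervals, modulation_freq_hz):
    """Combine a wrapped phase image with per-pixel de-aliasing intervals k
    (supplied externally here; the application obtains them from an ML
    component) to recover unambiguous depth in metres."""
    unambiguous_range = SPEED_OF_LIGHT / (2.0 * modulation_freq_hz)  # metres per phase wrap
    return (phase_image / (2.0 * np.pi) + dealiasing_intervals) * unambiguous_range
```
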
  • Publication number: 20210174570
    Abstract: Systems and methods for providing a mixed-reality pass-through experience include acts of obtaining a texture map of a real-world environment, obtaining a depth map of the real-world environment, obtaining an updated texture map of the real-world environment subsequent to obtaining the depth map and the texture map, and rendering a virtual representation of the real-world environment utilizing both the depth map and the updated texture map that was obtained subsequent to the depth map. The texture map and the depth map may be based on a same image pair obtained from a pair of stereo cameras, the depth map being obtained by performing stereo matching on the same image pair. Additionally, the acts may further include detecting a predicted pose of a user and reprojecting a portion of the depth map to conform to a user perspective associated with the predicted pose.
    Type: Application
    Filed: December 4, 2019
    Publication date: June 10, 2021
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Donald John Patrick O'Neil, Raymond Kirk Price
  • Patent number: 11032530
    Abstract: Improved techniques for generating depth maps are disclosed. A stereo pair of images of an environment is accessed. This stereo pair of images includes first and second texture images. A signal to noise ratio (SNR) is identified within one or both of those images. Based on the SNR, which may reflect the texture image quality or the quality of the stereo match, a smoothness penalty is selectively computed and imposed on a smoothness term of a cost function used by a stereo depth matching algorithm. A depth map is generated by using the stereo depth matching algorithm to perform stereo depth matching on the stereo pair of images. The stereo depth matching algorithm performs the stereo depth matching using the smoothness penalty.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: June 8, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
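
A simplified sketch of the SNR-dependent smoothness penalty described in patent 11032530: when the signal-to-noise ratio is low, the smoothness term of the stereo matching cost is weighted more heavily, so noisy or low-texture regions prefer smooth disparities. The weighting rule and threshold values are illustrative assumptions.

```python
def smoothness_weight(snr, base_weight=10.0, low_snr_threshold=5.0, penalty_scale=3.0):
    """Illustrative rule: strengthen the smoothness term when the SNR is poor."""
    return base_weight * penalty_scale if snr < low_snr_threshold else base_weight

def pixel_cost(data_cost, disparity, neighbour_disparity, snr):
    """Simplified per-pixel stereo cost: data term plus an SNR-weighted
    penalty on disparity differences between neighbouring pixels."""
    return data_cost + smoothness_weight(snr) * abs(disparity - neighbour_disparity)
```
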
  • Publication number: 20210158080
    Abstract: Techniques are described for mapping common features between images that represent the same environment using different light spectrum data. A first image having first light spectrum data is accessed, and a second image having second light spectrum data is accessed. These images are fed as input to a deep neural network (DNN), which then identifies feature points that are common between the two images. A generated mapping lists the feature points and lists the coordinates of the feature points from both of the images. Differences between the coordinates of the feature points in the two images are determined. Based on these differences, the second image is warped to cause the coordinates of the feature points in the second image to correspond to the coordinates of the feature points in the first image.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
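
Publication 20210158080 warps the second image so that its feature coordinates line up with the first image's. The sketch below fits a 2x3 affine warp to matched coordinates by least squares; the affine model is an assumption made for illustration, since the abstract does not specify the warp model.

```python
import numpy as np

def estimate_affine_warp(coords_second, coords_first):
    """Least-squares 2x3 affine transform that moves feature coordinates in the
    second (different-spectrum) image onto the matching coordinates in the first."""
    n = len(coords_second)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((xs, ys), (xf, yf)) in enumerate(zip(coords_second, coords_first)):
        A[2 * i]     = [xs, ys, 1.0, 0.0, 0.0, 0.0]
        A[2 * i + 1] = [0.0, 0.0, 0.0, xs, ys, 1.0]
        b[2 * i], b[2 * i + 1] = xf, yf
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```
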
  • Publication number: 20210158548
    Abstract: Modifications are performed to cause a style of an image to match a different style. A first image is accessed, where the first image has a first style. A second image is also accessed, where the second image has a second style. Subsequent to a deep neural network (DNN) learning these styles, a copy of the first image is fed as input to the DNN. The DNN modifies the first image copy by transitioning the first image copy from being of the first style to subsequently being of the second style. As a consequence, a modified style of the transitioned first image copy bilaterally matches the second style.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
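
The abstract of publication 20210158548 does not say how the DNN measures whether two styles match. The sketch below shows one widely used proxy, Gram-matrix comparison of feature maps, purely as an illustration of style matching and not as the publication's method.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map, a common style
    representation in neural style transfer."""
    return features @ features.T

def style_distance(features_generated, features_target_style):
    """Mean squared difference between Gram matrices: small when the
    generated image's style matches the target style."""
    diff = gram_matrix(features_generated) - gram_matrix(features_target_style)
    return float(np.mean(diff ** 2))
```
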
  • Publication number: 20210160440
    Abstract: Enhanced passthrough images are generated and displayed. A current visibility condition of an environment is determined. Based on the current visibility condition, a first camera or a second camera, which detect light spanning different ranges of illuminance, is selected to generate a passthrough image of the environment. The selected camera is then caused to generate the passthrough image. Additionally, a third camera, which is structured to detect long wave infrared radiation, is caused to generate a thermal image of the environment. Parallax correction is performed by aligning coordinates of the thermal image with corresponding coordinates identified within the passthrough image. Subsequently, the parallax-corrected thermal image is overlaid onto the passthrough image to generate a composite passthrough image, which is then displayed.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
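
A compact sketch of the two decisions described in publications 20210160440 and 20210160441: pick the passthrough camera that suits the current visibility condition, then blend the parallax-corrected thermal image on top. The lux threshold, camera names, and blend weight are illustrative assumptions.

```python
def select_passthrough_camera(ambient_lux, low_light_threshold=10.0):
    """Pick the camera whose illuminance range suits the current visibility condition."""
    return "low_light_camera" if ambient_lux < low_light_threshold else "visible_light_camera"

def composite_passthrough(passthrough_image, aligned_thermal_image, alpha=0.35):
    """Overlay the parallax-corrected long-wave-infrared image onto the
    selected passthrough image with a fixed blend weight."""
    return (1.0 - alpha) * passthrough_image + alpha * aligned_thermal_image
```
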
  • Publication number: 20210160441
    Abstract: Enhanced passthrough images are generated and displayed. A current visibility condition of an environment is determined. Based on the current visibility condition, a first camera or a second camera, which detect light spanning different ranges of illuminance, is selected to generate a passthrough image of the environment. The selected camera is then caused to generate the passthrough image. Additionally, a third camera, which is structured to detect long wave infrared radiation, is caused to generate a thermal image of the environment. Parallax correction is performed by aligning coordinates of the thermal image with corresponding coordinates identified within the passthrough image. Subsequently, the parallax-corrected thermal image is overlaid onto the passthrough image to generate a composite passthrough image, which is then displayed.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 27, 2021
    Inventors: Michael BLEYER, Christopher Douglas EDMONDS, Raymond Kirk PRICE
  • Patent number: 11012677
    Abstract: Systems having rolling shutter sensors with a plurality of sensor rows are configured for compensating for rolling shutter artifacts that result from different sensor rows in the plurality of sensor rows outputting sensor data at different times. The systems compensate for the rolling shutter artifacts by identifying readout timepoints for the plurality of sensor rows of the rolling shutter sensor while the rolling shutter sensor captures an image of an environment, identifying a readout pose for each readout timepoint, and obtaining a depth map based on the image. The depth map includes a plurality of different rows of depth data that correspond to the different sensor rows. The systems further compensate for the rolling shutter artifacts by generating a 3D representation of the environment while unprojecting the rows of depth data into 3D space using the readout poses.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: May 18, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
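
A sketch of the rolling-shutter compensation in patent 11012677: each row of the depth map is unprojected with the readout pose identified for that row's readout timepoint, so rows captured at different times land in a consistent 3D representation. The pinhole-camera and world-from-camera matrix conventions are assumptions.

```python
import numpy as np

def unproject_with_row_poses(depth_map, K, row_poses):
    """Unproject each depth-map row using that row's readout pose
    (world-from-camera 4x4), compensating for rolling shutter."""
    K_inv = np.linalg.inv(K)
    height, width = depth_map.shape
    points = []
    for row in range(height):
        us = np.arange(width, dtype=float)
        pixels = np.stack([us, np.full(width, float(row)), np.ones(width)])
        rays = (K_inv @ pixels).T                        # (width, 3) camera-frame rays
        cam_points = rays * depth_map[row][:, None]      # scale rays by per-pixel depth
        homogeneous = np.hstack([cam_points, np.ones((width, 1))])
        points.append((row_poses[row] @ homogeneous.T).T[:, :3])
    return np.vstack(points)                             # (height*width, 3) world-space points
```
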
  • Patent number: 10929956
    Abstract: Techniques for de-aliasing depth ambiguities included within infrared phase depth images are described herein. An illuminator emits reference light towards a target object. Some of this light is reflected back and detected. A phase image is generated based on phase differences between the reference light and the reflected light. The phase differences represent changes in depth within overlapping sinusoidal periods of the reference and reflected light. The phase image also includes ambiguities because multiple different depths within the phase image share the same phase difference value, even though these depths actually correspond to different real-world depths. The phase image is fed as input to a machine learning (“ML”) component, which is configured to de-alias the ambiguities by determining, for each pixel in the phase image, a corresponding de-aliasing interval. A depth map is generated based on the phase image and any de-aliasing intervals generated by the ML component.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: February 23, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
  • Publication number: 20210004937
    Abstract: Techniques for de-aliasing depth ambiguities included within infrared phase depth images are described herein. An illuminator emits reference light towards a target object. Some of this light is reflected back and detected. A phase image is generated based on phase differences between the reference light and the reflected light. The phase differences represent changes in depth within overlapping sinusoidal periods of the reference and reflected light. The phase image also includes ambiguities because multiple different depths within the phase image share the same phase difference value, even though these depths actually correspond to different real-world depths. The phase image is fed as input to a machine learning (“ML”) component, which is configured to de-alias the ambiguities by determining, for each pixel in the phase image, a corresponding de-aliasing interval. A depth map is generated based on the phase image and any de-aliasing intervals generated by the ML component.
    Type: Application
    Filed: July 2, 2019
    Publication date: January 7, 2021
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
  • Patent number: 10866425
    Abstract: A head-mounted display comprises a stereo pair of outward-facing cameras that are separated by a baseline distance when the head-mounted display is being worn by the user, and a controller. The controller is configured to receive an inter-pupil distance (IPD) of the user and to compare the IPD to the baseline distance of the stereo pair of outward-facing cameras. Image data of an environment is received from the stereo pair of outward-facing cameras. If the difference between the IPD and the baseline distance is less than a first threshold, the image data is passed through to the head-mounted display without correcting for the IPD. If the difference between the IPD and the baseline distance is greater than the first threshold, the image data is reprojected based on the IPD prior to displaying the image data on the head-mounted display.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
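
The decision rule in patent 10866425 reduces to a threshold comparison between the user's IPD and the stereo camera baseline, as sketched below. The 2 mm threshold is an assumed, purely illustrative value.

```python
def needs_ipd_reprojection(ipd_mm, camera_baseline_mm, threshold_mm=2.0):
    """Pass stereo camera images straight through when the user's IPD is within
    a small threshold of the camera baseline; otherwise reproject for the IPD."""
    return abs(ipd_mm - camera_baseline_mm) > threshold_mm

print(needs_ipd_reprojection(63.0, 64.0))  # False: pass through unchanged
print(needs_ipd_reprojection(58.0, 64.0))  # True: reproject based on the IPD
```
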
  • Patent number: 10867441
    Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Joseph Cashman, Andrew William Fitzgibbon, Erroll William Wood, Federica Bogo, Paul Malcolm McIlroy, Christopher Douglas Edmonds
  • Patent number: 10861165
    Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera; it comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Erroll William Wood, Michael Bleyer, Christopher Douglas Edmonds, Michael Scott Fenton, Mark James Finocchio, John Albert Judnich
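
For patent 10861165, the aliasing uncertainty means one measurement maps to a periodic series of depth values {Zk}. Below is a tiny sketch of enumerating that series, assuming the sensor's unambiguous range is known; the example numbers are illustrative.

```python
def candidate_depths(wrapped_depth, unambiguous_range, max_wraps=5):
    """The periodic series {Zk} of real-world depths consistent with one
    aliased measurement at a given (X, Y) coordinate."""
    return [wrapped_depth + k * unambiguous_range for k in range(max_wraps + 1)]

# A 1.3 m reading with a 4.5 m unambiguous range could be 1.3, 5.8, 10.3, or 14.8 m.
print(candidate_depths(1.3, 4.5, max_wraps=3))
```
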
  • Publication number: 20200265641
    Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 20, 2020
    Inventors: Thomas Joseph CASHMAN, Andrew William FITZGIBBON, Erroll William WOOD, Federica BOGO, Paul Malcolm MCILROY, Christopher Douglas EDMONDS
  • Publication number: 20200226765
    Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera; it comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Erroll William WOOD, Michael BLEYER, Christopher Douglas EDMONDS, Michael Scott FENTON, Mark James FINOCCHIO, John Albert JUDNICH
  • Patent number: 10620717
    Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visual light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and receive the images of the environment from the at least two visual light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: April 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
  • Patent number: 10521026
    Abstract: Systems are provided that include a wireless hand-held inertial controller with passive optical and inertial tracking in a slim form-factor. These systems are configured for use with a head mounted virtual or augmented reality display device (HMD) that operates with six degrees of freedom by fusing (i) data related to the position of the controller derived from a forward-facing optical sensor located in the HMD with (ii) data relating to the orientation of the controller derived from an inertial measurement unit located in the controller.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: December 31, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandru Octavian Balan, Constantin Dulu, Christopher Douglas Edmonds, Mark James Finocchio
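
Finally, a minimal sketch of the sensor fusion split described in patent 10521026: the controller's position comes from the HMD's forward-facing optical sensor and its orientation from the controller's inertial measurement unit. The data layout, units, and quaternion convention are assumptions made for illustration.

```python
import numpy as np

def fuse_controller_pose(optical_position_m, imu_orientation_quat):
    """Form a 6 DOF controller pose by taking position from the HMD's
    forward-facing optical sensor and orientation from the controller's IMU."""
    return {
        "position": np.asarray(optical_position_m, dtype=float),       # metres, HMD/world frame
        "orientation": np.asarray(imu_orientation_quat, dtype=float),  # unit quaternion (x, y, z, w)
    }

pose = fuse_controller_pose([0.12, -0.25, 0.40], [0.0, 0.0, 0.0, 1.0])
print(pose)
```
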