Patents by Inventor Nathan Ackerman

Nathan Ackerman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9529246
    Abstract: A camera module is disclosed including an image sensor having an associated optical filter configured to receive a first set of one or more wavelengths of light, and a housing around the image sensor. The housing has an associated optical filter configured to allow a second set of one or more wavelengths of light to pass through the housing and to block the first set of one or more wavelengths of light from passing through the housing. In examples, the second set of one or more wavelengths of light may be light in the visible range of wavelengths, and the housing may be transparent.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: December 27, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nathan Ackerman, Barry Corlett
  • Patent number: 9514571
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: December 6, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
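
The abstract above (patent 9514571) describes what is often called late-stage reprojection: render against a predicted pose, then warp the pre-rendered image with a homographic transformation or a pixel offset once an updated pose is available, so the display can refresh faster than the renderer. The sketch below illustrates only that correction step under a pure-rotation assumption; the intrinsics, pose values, and function names are illustrative and not taken from the patent.

```python
import numpy as np

def rotation_yaw_pitch(yaw, pitch):
    """Small helper: rotation matrix from yaw/pitch in radians (roll omitted for brevity)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return ry @ rx

def reprojection_homography(K, r_predicted, r_updated):
    """Homography that warps an image rendered at the predicted orientation
    so it matches the updated orientation (pure-rotation approximation)."""
    r_delta = r_updated @ r_predicted.T          # rotation between the two poses
    return K @ r_delta @ np.linalg.inv(K)

def pixel_offset(K, r_predicted, r_updated):
    """Cheaper alternative: approximate the correction as a uniform 2D shift,
    taken from where the image center moves under the homography."""
    h = reprojection_homography(K, r_predicted, r_updated)
    cx, cy = K[0, 2], K[1, 2]
    p = h @ np.array([cx, cy, 1.0])
    return (p[0] / p[2] - cx, p[1] / p[2] - cy)

# Usage: render at the predicted pose, then correct with the updated pose sampled just before display.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])    # assumed display intrinsics
r_pred = rotation_yaw_pitch(np.radians(5.0), 0.0)              # pose predicted before rendering
r_upd = rotation_yaw_pitch(np.radians(5.6), np.radians(0.2))   # pose observed after rendering
H = reprojection_homography(K, r_pred, r_upd)
dx, dy = pixel_offset(K, r_pred, r_upd)
print(H)
print(dx, dy)
```
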
  • Publication number: 20160343172
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Application
    Filed: August 3, 2016
    Publication date: November 24, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
  • Patent number: 9442293
    Abstract: An optical stack includes a first variable element and a second variable element. The first variable element is configured to vary light transmission through the first variable element as a function of a first control signal. The second variable element is in series with the first variable element and is configured to vary light transmission through the second variable element as a function of a second control signal. A controller dynamically supplies the first control signal to the first variable element and supplies the second control signal to the second variable element.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: September 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. Alton, Nathan Ackerman
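
As a rough illustration of the series arrangement in patent 9442293 above, the sketch below splits a desired overall transmittance across two stacked variable elements (transmittances multiply in series) and derives one control signal per element. The linear signal-to-transmittance model, the minimum-transmittance values, and the even log-space split are assumptions for illustration, not the patented control scheme.

```python
import math

class TwoElementDimmer:
    """Toy controller for two variable-transmission elements stacked in series.
    Each element is assumed (for illustration) to map a 0..1 control signal
    linearly onto a transmittance between t_min and 1.0."""

    def __init__(self, t_min_first=0.3, t_min_second=0.05):
        self.t_min_first = t_min_first
        self.t_min_second = t_min_second

    def _signal_for(self, element_t_min, target_t):
        # Invert the assumed linear mapping t = t_min + signal * (1 - t_min).
        target_t = min(1.0, max(element_t_min, target_t))
        return (target_t - element_t_min) / (1.0 - element_t_min)

    def signals_for(self, target_transmittance):
        """Return (first_signal, second_signal) for a desired overall transmittance.
        Transmittances multiply in series, so split the target evenly in log space."""
        target = min(1.0, max(self.t_min_first * self.t_min_second, target_transmittance))
        per_element = math.sqrt(target)           # even split: t1 * t2 = target
        t1 = max(per_element, self.t_min_first)   # clamp element 1, give the remainder to element 2
        t2 = target / t1
        return self._signal_for(self.t_min_first, t1), self._signal_for(self.t_min_second, t2)

controller = TwoElementDimmer()
print(controller.signals_for(0.10))   # e.g. dim the stack to 10% overall transmission
```
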
  • Publication number: 20160209730
    Abstract: A camera module is disclosed including an image sensor having an associated optical filter configured to receive a first set of one or more wavelengths of light, and a housing around the image sensor. The housing has an associated optical filter configured to allow a second set of one or more wavelengths of light to pass through the housing and to block the first set of one or more wavelengths of light from passing through the housing. In examples, the second set of one or more wavelengths of light may be light in the visible range of wavelengths, and the housing may be transparent.
    Type: Application
    Filed: January 20, 2015
    Publication date: July 21, 2016
    Inventors: Nathan Ackerman, Barry Corlett
  • Patent number: 9330464
    Abstract: Embodiments are disclosed that relate to controlling a depth camera. In one example, a method comprises emitting light from an illumination source toward a scene through an optical window, selectively routing at least a portion of the light emitted from the illumination source to an image sensor such that the portion of the light is not transmitted through the optical window, receiving an output signal generated by the image sensor based on light reflected by the scene, the output signal including at least one depth value of the scene, and adjusting the output signal based on the selectively routed portion of the light.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nathan Ackerman, Andrew C. Goris, Amir Nevet, David Mandelboum, Asaf Pellman
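
The adjustment step in patent 9330464 above can be read as a reference-path correction: the internally routed light travels a fixed, known distance, so anything the sensor reports beyond that distance can be treated as drift and removed from the scene depths. The sketch below uses a simple additive-offset model under that reading; the constant, values, and names are illustrative assumptions.

```python
import numpy as np

# Assumed: the internally routed light travels a fixed, known path length (in meters).
REFERENCE_PATH_LENGTH_M = 0.012

def corrected_depth(scene_depth_m, reference_depth_m):
    """Adjust scene depth values using the reading from the internally routed
    reference light (simple additive-offset model, for illustration only)."""
    drift = reference_depth_m - REFERENCE_PATH_LENGTH_M   # measured minus true path length
    return scene_depth_m - drift

# Usage: a depth frame plus the depth reported for the internal reference path.
frame = np.array([[1.203, 1.198], [2.410, 2.405]])   # meters, made-up values
reference_reading = 0.019                             # sensor reports 19 mm for a 12 mm path
print(corrected_depth(frame, reference_reading))      # every value shifted down by 7 mm
```
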
  • Patent number: 9304594
    Abstract: Methods for recognizing gestures within a near-field environment are described. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may capture a first image of an environment while illuminating the environment using an IR light source with a first range (e.g., due to the exponential decay of light intensity) and capture a second image of the environment without illumination. The mobile device may generate a difference image based on the first image and the second image in order to eliminate background noise due to other sources of IR light within the environment (e.g., due to sunlight or artificial light sources). In some cases, object and gesture recognition techniques may be applied to the difference image in order to detect the performance of hand and/or finger gestures by an end user of the mobile device within a near-field environment of the mobile device.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: April 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Finocchio, Alexandru Balan, Nathan Ackerman, Jeffrey Margolis
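
The core of patent 9304594 above is a background-subtraction step: one IR frame is captured with the near-range illuminator on and one with it off, and their difference suppresses ambient IR (sunlight, other emitters) while keeping nearby, strongly lit objects such as hands. A minimal sketch of that differencing step follows; the frame sizes and threshold are assumptions.

```python
import numpy as np

def near_field_mask(lit_frame, unlit_frame, threshold=40):
    """Difference an IR frame captured with the illuminator on against one captured
    with it off; ambient IR cancels, while close objects (lit strongly by the
    near-range source) remain. Returns a boolean mask of candidate hand pixels."""
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    return diff > threshold

# Usage with synthetic frames: a bright "hand" region on top of uniform ambient IR.
ambient = np.full((120, 160), 60, dtype=np.uint8)
lit = ambient.copy()
lit[40:80, 60:100] += 120              # near-field object reflects the illuminator strongly
mask = near_field_mask(lit, ambient)
print(mask.sum(), "candidate pixels")  # gesture recognition would run on this mask
```
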
  • Patent number: 9256987
    Abstract: Methods for tracking the head position of an end user of a head-mounted display device (HMD) relative to the HMD are described. In some embodiments, the HMD may determine an initial head tracking vector associated with an initial head position of the end user relative to the HMD, determine one or more head tracking vectors corresponding with one or more subsequent head positions of the end user relative to the HMD, track head movements of the end user over time based on the initial head tracking vector and the one or more head tracking vectors, and adjust positions of virtual objects displayed to the end user based on the head movements. In some embodiments, the resolution and/or number of virtual objects generated and displayed to the end user may be modified based on a degree of head movement of the end user relative to the HMD.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: February 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nathan Ackerman, Drew Steedly, Andy Hodge, Alex Aben-Athar Kipman
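
One way to read the "degree of head movement" in patent 9256987 above is as the angle between the initial head tracking vector and the most recent one, with virtual-object detail reduced as that angle grows. The sketch below follows that reading; the angle thresholds and detail levels are hypothetical.

```python
import numpy as np

def head_movement_degrees(initial_vec, current_vec):
    """Angle (in degrees) between the initial head tracking vector and the current one."""
    a = initial_vec / np.linalg.norm(initial_vec)
    b = current_vec / np.linalg.norm(current_vec)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def detail_level(movement_deg):
    """Hypothetical rule: reduce virtual-object detail as head movement grows."""
    if movement_deg < 2.0:
        return "full"      # head essentially still relative to the HMD
    if movement_deg < 10.0:
        return "reduced"
    return "minimal"

initial = np.array([0.0, 0.0, 1.0])        # initial head position relative to the HMD
current = np.array([0.15, 0.02, 0.99])     # slight shift of the head within the HMD
angle = head_movement_degrees(initial, current)
print(round(angle, 1), detail_level(angle))
```
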
  • Publication number: 20150370071
    Abstract: A see-through, near-eye mixed reality head mounted display (HMD) device includes left and right see-through display regions within which virtual images are displayable. The left and right see-through display regions each have a transmittance that is less than one hundred percent. The see-through, near-eye mixed reality HMD device also includes a see-through transmittance compensation mask that includes a left window through which the left see-through display region is visible and a right window through which the right see-through display region is visible. In accordance with various embodiments, the see-through transmittance compensation mask is used to provide a substantially uniform transmittance across the field-of-view of a user wearing the HMD device.
    Type: Application
    Filed: June 24, 2014
    Publication date: December 24, 2015
    Inventors: Daniel James Alton, Nathan Ackerman, Philip Andrew Frank, Andrew Hodge, Barry Corlett
  • Publication number: 20150355521
    Abstract: A see-through dimming panel includes first and second transparent substrate layers and a suspended-particle-device (SPD) layer therebetween. A first transparent conductor layer is between the first transparent substrate layer and the SPD layer, and a second transparent conductor layer is between the second transparent substrate layer and the SPD layer. A first electrode is electrically coupled to the first transparent conductor layer. Second and third electrodes are electrically coupled to opposite ends of the second transparent conductor layer. An electric potential difference applied between the first and second electrodes controls a transmittance level of the SPD layer. An electric potential difference applied between the second and third electrodes, which results in a transverse electric field, controls a speed at which the transmittance level of the SPD layer decreases when the electric potential difference applied between the first and second electrodes is decreased.
    Type: Application
    Filed: June 5, 2014
    Publication date: December 10, 2015
    Inventors: Daniel James Alton, Nathan Ackerman
  • Publication number: 20150323795
    Abstract: An optical stack includes a first variable element and a second variable element. The first variable element is configured to vary light transmission through the first variable element as a function of a first control signal. The second variable element is in series with the first variable element and is configured to vary light transmission through the second variable element as a function of a second control signal. A controller dynamically supplies the first control signal to the first variable element and supplies the second control signal to the second variable element.
    Type: Application
    Filed: May 6, 2014
    Publication date: November 12, 2015
    Inventors: Daniel J. Alton, Nathan Ackerman
  • Publication number: 20150309312
    Abstract: Described herein are display devices, and methods for use therewith. Such a device can be used to display one or more virtual images within a first see-through portion of the device, adjacent to which is a second see-through portion that does not overlap with the first see-through portion. The first and second see-through portions of the device collectively cover a substantially entire field-of-view (FOV) of a user. A transmittance (and/or other optical characteristic(s)) corresponding to the first see-through portion of the device and a transmittance (and/or other optical characteristic(s)) corresponding to the second see-through portion of the device can be caused (e.g., controlled) to be substantially the same to provide a substantially uniform transmittance (and/or other optical characteristic(s)) across the substantially entire FOV of a user. More generally, optical characteristics of see-through portions of the device can be controlled, e.g., by a user and/or through feedback.
    Type: Application
    Filed: April 25, 2014
    Publication date: October 29, 2015
    Inventors: Daniel James Alton, Nathan Ackerman, Andrew Hodge, Philip Andrew Frank
  • Patent number: 8988345
    Abstract: A system and related methods for adaptive event recognition are provided. In one example, a selected sensor of a head-mounted display device is operated at a first polling rate corresponding to a higher potential latency. Initial user-related information is received. Where the initial user-related information matches a pre-event, the selected sensor is operated at a second polling rate faster than the first polling rate and corresponding to a lower potential latency. Subsequent user-related information is received. Where the subsequent user-related information matches a selected target event, feedback associated with the selected target event is provided to the user via the head-mounted display device.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nathan Ackerman, Mark J. Finocchio, Andrew Bert Hodge
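
The flow in patent 8988345 above is a two-stage poll: sample the selected sensor slowly until its output matches a pre-event, then switch to a faster polling rate so the target event is caught with lower latency. The loop below sketches only that rate-switching logic; the rates, callback names, and the return to the slow rate after the target event are assumptions.

```python
import time

SLOW_POLL_HZ = 5      # assumed low-power rate; higher potential latency
FAST_POLL_HZ = 120    # assumed burst rate; lower potential latency

def adaptive_event_loop(read_sensor, matches_pre_event, matches_target_event, on_target):
    """Poll slowly until a pre-event is seen, then poll quickly for the target event.
    `read_sensor`, the two matchers, and `on_target` (feedback to the user) are
    supplied by the caller; only the rate-switching logic is shown here."""
    rate = SLOW_POLL_HZ
    while True:
        sample = read_sensor()
        if rate == SLOW_POLL_HZ and matches_pre_event(sample):
            rate = FAST_POLL_HZ                     # pre-event seen: drop latency
        elif rate == FAST_POLL_HZ and matches_target_event(sample):
            on_target(sample)                       # provide feedback for the target event
            rate = SLOW_POLL_HZ                     # return to the low-power rate
        time.sleep(1.0 / rate)

# Example wiring (all callbacks hypothetical):
# adaptive_event_loop(imu.read, looks_like_hand_raise, is_tap_gesture, play_click_feedback)
```
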
  • Publication number: 20150035744
    Abstract: Embodiments are disclosed for adjusting alignment of a near-eye optic of a see-through head-mounted display system. In one embodiment, a method of detecting eye location for a head-mounted display system includes directing positioning light to an eye of a user and detecting the positioning light reflected from the eye of the user. The method further includes determining a distance between the eye and a near-eye optic of the head-mounted display system based on attributes of the detected positioning light, and providing feedback for adjusting the distance between the eye and the near-eye optic.
    Type: Application
    Filed: August 22, 2013
    Publication date: February 5, 2015
    Inventors: Steve Robbins, Scott C. McEldowney, Xinye Lou, David D. Bohn, Quentin Simon Charles Miller, David Nister, Gerhard Schneider, Christopher Maurice Mei, Nathan Ackerman
  • Publication number: 20150029218
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Application
    Filed: July 25, 2013
    Publication date: January 29, 2015
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
  • Publication number: 20150003819
    Abstract: Technology disclosed herein automatically focuses a camera based on eye tracking. Techniques include tracking the gaze of a user's eyes to determine a location at which the user is focusing; a camera lens may then be focused on that location. In one aspect, a first vector that corresponds to a first direction in which a first eye of a user is gazing at a point in time is determined. A second vector that corresponds to a second direction in which a second eye of the user is gazing at the point in time is determined. A location of an intersection of the first vector and the second vector is determined. A distance between the location of intersection and a location of a lens of the camera is determined. The lens is focused based on the distance. Alternatively, the lens could be focused based on a single eye vector and a depth image.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Nathan Ackerman, Andrew C. Goris, Bruno Silva
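
The distance computation in publication 20150003819 above can be sketched as a vergence calculation: find where the two gaze rays (nearly) intersect, then focus the lens at the distance from that point to the lens. The least-squares midpoint below handles rays that do not meet exactly; the eye positions, lens position, and function names are illustrative.

```python
import numpy as np

def gaze_focus_distance(left_origin, left_dir, right_origin, right_dir, lens_position):
    """Estimate where the two gaze rays (nearly) intersect and return the distance
    from that point to the camera lens. Uses the midpoint of the shortest segment
    between the rays, since real gaze vectors rarely intersect exactly."""
    d1 = left_dir / np.linalg.norm(left_dir)
    d2 = right_dir / np.linalg.norm(right_dir)
    w0 = left_origin - right_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                          # near-parallel gaze: fall back to far focus
        return float("inf")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = left_origin + s * d1
    p2 = right_origin + t * d2
    gaze_point = (p1 + p2) / 2.0
    return float(np.linalg.norm(gaze_point - lens_position))

# Usage: eyes ~6 cm apart, both converging on a point roughly 1 m ahead.
left_eye = np.array([-0.03, 0.0, 0.0])
right_eye = np.array([0.03, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
lens = np.array([0.0, -0.02, 0.02])               # assumed lens position near the eyes
dist = gaze_focus_distance(left_eye, target - left_eye, right_eye, target - right_eye, lens)
print(round(dist, 3), "m")                        # the lens would be focused at this distance
```
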
  • Publication number: 20140375681
    Abstract: A system and method are disclosed for detecting angular displacement of a display element relative to a reference position on a head mounted display device for presenting a mixed reality or virtual reality experience. Once the displacement is detected, it may be corrected for to maintain the proper binocular disparity of virtual images displayed to the left and right display elements of the head mounted display device. In one example, the detection system uses an optical assembly including collimated LEDs and a camera which together are insensitive to linear displacement. Such a system provides a true measure of angular displacement of one or both display elements on the head mounted display device.
    Type: Application
    Filed: June 24, 2013
    Publication date: December 25, 2014
    Inventors: Steven John Robbins, Drew Steedly, Nathan Ackerman, Quentin Simon Charles Miller, Andrew C. Goris
  • Publication number: 20140375540
    Abstract: A system and method are disclosed for sensing a position and/or angular orientation of a head mounted display device with respect to a wearer's eyes, and for providing feedback for adjusting the position and/or angular orientation of the head mounted display device so that it is optimally centered and oriented with respect to the wearer's eyes.
    Type: Application
    Filed: June 24, 2013
    Publication date: December 25, 2014
    Inventors: Nathan Ackerman, Andy Hodge, David Nister
  • Publication number: 20140375545
    Abstract: A system and related methods for adaptive event recognition are provided. In one example, a selected sensor of a head-mounted display device is operated at a first polling rate corresponding to a higher potential latency. Initial user-related information is received. Where the initial user-related information matches a pre-event, the selected sensor is operated at a second polling rate faster than the first polling rate and corresponding to a lower potential latency. Subsequent user-related information is received. Where the subsequent user-related information matches a selected target event, feedback associated with the selected target event is provided to the user via the head-mounted display device.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: Nathan Ackerman, Mark J. Finocchio, Andrew Bert Hodge
  • Publication number: 20140375790
    Abstract: Embodiments are disclosed for a see-through head-mounted display system. In one embodiment, the see-through head-mounted display system comprises a freeform prism, and a display device configured to emit display light through the freeform prism to an eye of a user. The see-through head-mounted display system may also comprise an imaging device having an entrance pupil positioned at a back focal plane of the freeform prism, the imaging device configured to receive gaze-detection light reflected from the eye and directed through the freeform prism.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: Steve Robbins, Scott McEldowney, Xinye Lou, David Nister, Drew Steedly, Quentin Simon Charles Miller, David D. Bohn, James Peele Terrell, JR., Andrew C. Goris, Nathan Ackerman