Patents by Inventor Steven Paul Lansel

Steven Paul Lansel has filed patent applications to protect the following inventions. This listing includes pending applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11132056
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
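The predictive foveated rendering this patent family describes can be illustrated with a minimal, hypothetical sketch (all function and variable names are assumptions, not from the patent): extrapolate the current gaze position by the measured eye velocity over the expected render latency, then mark a high-resolution region centered on the predicted gaze point.

```python
import numpy as np

def predict_gaze(gaze_now, eye_velocity, latency_s):
    """Extrapolate a future gaze point (normalized screen coordinates)
    from the current gaze position and eye velocity over the render
    latency -- a linear stand-in for the patent's prediction step."""
    return gaze_now + eye_velocity * latency_s

def foveation_map(shape, gaze, fovea_radius):
    """Boolean mask marking the high-resolution region of a frame,
    centered on the predicted gaze location."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = gaze[0] * w, gaze[1] * h
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= fovea_radius ** 2

# Example: eye moving right at 0.5 screen-widths/s, 20 ms latency.
gaze = predict_gaze(np.array([0.5, 0.5]), np.array([0.5, 0.0]), 0.020)
mask = foveation_map((1080, 1920), gaze, fovea_radius=200)
```

The same predicted quantity could instead be a vergence plane, used to reposition the display's image plane, as the abstract notes.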
  • Patent number: 11048091
Abstract: An image generator is configured to generate display light. A first waveguide is configured to generate wide-field image light from a first portion of the display light. A first outcoupling element of the first waveguide extends to a boundary of the frame to provide the wide-field image light to substantially all of the augmented field of view (FOV) of the user. A second waveguide is configured to generate inset image light from a second portion of the display light received from the image generator.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: June 29, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Steven Paul Lansel, Sebastian Sztuk, Kirk Eric Burgess, Brian Wheelwright
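This patent covers waveguide optics, but the wide-field/inset split it describes can be loosely illustrated in software (the compositing step and all names here are illustrative assumptions, not the patented optical mechanism): a low-resolution wide-field frame carries the full FOV, and a high-resolution inset is overlaid within it.

```python
import numpy as np

def composite_inset(wide_field, inset, top_left):
    """Overlay a high-resolution inset region (analogous to the
    second waveguide's image light) onto a wide-field frame
    (analogous to the first waveguide's) at (row, col)."""
    out = wide_field.copy()
    r, c = top_left
    h, w = inset.shape[:2]
    out[r:r + h, c:c + w] = inset
    return out

wide = np.zeros((480, 640, 3), dtype=np.uint8)       # full-FOV frame
inset = np.full((120, 160, 3), 255, dtype=np.uint8)  # high-res inset
frame = composite_inset(wide, inset, (180, 240))
```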
  • Publication number: 20210173474
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Application
    Filed: December 4, 2019
    Publication date: June 10, 2021
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Publication number: 20210150358
    Abstract: A method for evaluating an external machine learning program while limiting access to internal training data includes providing labeled training data from a first source, receiving, by the first source, a machine learning program from a second source different from the first source, blocking, by the first source, access by the second source to the labeled training data, and training, by the first source, the machine learning program according to a supervised machine learning process using the labeled training data. The method further includes generating a first set of metrics from the supervised machine learning process that provide feedback about training of the neural network model, analyzing the first set of metrics to identify subset data therein, and, in order to permit evaluation of the neural network model, transmitting, to the second source, those metrics from the first set of metrics that do not include the subset data.
    Type: Application
    Filed: January 28, 2021
    Publication date: May 20, 2021
Inventor: Steven Paul Lansel
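The gatekeeping step in this abstract — train on private data, then share only metrics that cannot leak it — can be sketched minimally (key names are hypothetical, not from the patent):

```python
def filter_metrics(metrics, sensitive_keys):
    """Return only those training metrics that do not expose the
    protected subset data, so the model's owner can evaluate it
    without seeing the first source's labeled training data."""
    return {k: v for k, v in metrics.items() if k not in sensitive_keys}

# Metrics produced by the supervised training run (hypothetical keys).
metrics = {
    "val_accuracy": 0.91,
    "val_loss": 0.24,
    "per_example_losses": [0.1, 3.2, 0.05],  # could leak training examples
}
shared = filter_metrics(metrics, sensitive_keys={"per_example_losses"})
```

In practice the "analyze the first set of metrics to identify subset data" step would be more involved than a key blocklist; the sketch only shows the transmit-after-filtering shape.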
  • Patent number: 10871825
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: December 22, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Patent number: 10706572
    Abstract: Systems and methods for performing depth estimation may comprise: an illuminator capable of illuminating a scene from at least a first position and a second position, an image sensor to capture (i) a first image of the scene while the illuminator illuminates the scene from the first position and (ii) a second image of the scene while the illuminator illuminates the scene from the second position, and an image processor to receive the first and second images from the image sensor and estimate a depth of at least one feature that appears in the first and second images. The depth is estimated based on the relative intensity of the first image and the second image, a distance between the first illumination position and the second illumination position, and a position of the at least one feature within at least one of the first and second images.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: July 7, 2020
    Assignees: OLYMPUS CORPORATION, THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Steven Paul Lansel, Brian A. Wandell, Andy Lai Lin
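The relative-intensity depth cue in this abstract can be sketched under a simplifying assumption (inverse-square falloff from a point light, with the two illumination positions separated along the viewing axis; function names are illustrative): a feature at depth d lit from the nearer position appears brighter by the factor ((d + b) / d)^2, which can be inverted for d.

```python
import math

def depth_from_relative_intensity(i_near, i_far, baseline):
    """Estimate the depth of a feature from its brightness in two
    images lit from positions `baseline` apart along the viewing
    axis, assuming inverse-square falloff:
        i_near / i_far = ((d + baseline) / d) ** 2
    Solving for d gives d = baseline / (sqrt(i_near / i_far) - 1)."""
    ratio = math.sqrt(i_near / i_far)
    return baseline / (ratio - 1.0)

# A feature 2.0 m away, lit from positions 0.1 m apart, has
# intensities proportional to 1/2.0**2 and 1/2.1**2.
d = depth_from_relative_intensity((1 / 2.0) ** 2, (1 / 2.1) ** 2, 0.1)
```

The patent's full method also uses the feature's position within the images; the sketch covers only the intensity-ratio term.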
  • Publication number: 20190178628
    Abstract: Depth estimation may be performed by a movable illumination unit, a movable image sensing unit having a fixed position relative to the illumination unit, a memory, and one or more processors coupled to the memory. The processors read instructions from the memory to perform operations including receiving a reference image and a non-reference image from the image sensing unit and estimating a depth of a point of interest that appears in the reference and non-reference images. The reference image is captured when the image sensing unit and the illumination unit are located at a first position. The non-reference image is captured when the image sensing unit and the illumination unit are located at a second position. The first and second positions are separated by at least a translation along an optical axis of the image sensing unit. Estimating the depth of the point is based on the translation.
    Type: Application
    Filed: May 11, 2017
    Publication date: June 13, 2019
    Inventor: Steven Paul Lansel
  • Publication number: 20180232899
    Abstract: Systems and methods for performing depth estimation may comprise: an illuminator capable of illuminating a scene from at least a first position and a second position, an image sensor to capture (i) a first image of the scene while the illuminator illuminates the scene from the first position and (ii) a second image of the scene while the illuminator illuminates the scene from the second position, and an image processor to receive the first and second images from the image sensor and estimate a depth of at least one feature that appears in the first and second images. The depth is estimated based on the relative intensity of the first image and the second image, a distance between the first illumination position and the second illumination position, and a position of the at least one feature within at least one of the first and second images.
    Type: Application
    Filed: August 26, 2016
    Publication date: August 16, 2018
Inventors: Steven Paul Lansel, Brian A. Wandell, Andy Lai Lin