Patents by Inventor Steven Paul LANSEL

Steven Paul LANSEL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11734808
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
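
To make the layered-compositing flow above concrete, here is a minimal Python sketch. It is not the patented implementation: the SourceImage container, the build_distortion_mesh rule, and the placeholder warp and blend steps are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SourceImage:
    """Hypothetical container for one source image and its parameters."""
    pixels: object   # placeholder for image data
    layer: int       # position in the composition layering order
    params: dict     # per-source parameters (pose, field of view, ...)

def build_distortion_mesh(image, preceding):
    # Assumption: the mesh for an image depends on its own parameters and
    # on the parameters of every image on a preceding (lower) layer.
    return {"own": image.params, "preceding": [p.params for p in preceding]}

def apply_mesh(image, mesh):
    # Placeholder: a real implementation would warp image.pixels by the mesh.
    return image

def composite_frame(sources):
    # Order source images by layer, distort each with its customized mesh,
    # then hand the modified images to the compositor.
    ordered = sorted(sources, key=lambda s: s.layer)
    modified = [apply_mesh(src, build_distortion_mesh(src, ordered[:i]))
                for i, src in enumerate(ordered)]
    return modified  # a real compositor would blend these in layer order

frame = composite_frame([SourceImage(None, layer=1, params={"fov": 90}),
                         SourceImage(None, layer=0, params={"fov": 110})])
```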
  • Publication number: 20230245260
    Abstract: In one embodiment, a method includes, by a computing system, rendering an image as multiple tiles using a tile-based graphics processing unit (GPU); determining a gaze location of a user wearing a head-mounted device; using the gaze location to select, from the multiple tiles, central tiles in which the user's gaze location is located, periphery tiles outside of the central tiles, and border tiles located between the central tiles and the periphery tiles; instructing the GPU to render (a) the central tiles in a first pixel-density, (b) the periphery tiles in a second pixel-density, and (c) the border tiles in both the first pixel-density and the second pixel-density; blending the border tiles rendered in the first pixel-density with the border tiles rendered in the second pixel-density to create blended border tiles; and outputting the central tiles, the periphery tiles, and the blended border tiles using a display of the head-mounted device.
    Type: Application
    Filed: January 31, 2023
    Publication date: August 3, 2023
    Inventors: Weihua Gao, Todd Douglas Keeler, Steven Paul Lansel, Jian Zhang, Tianxin Ning
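
The tile selection and border blending described above can be sketched in a few lines of Python. The grid coordinates, the Chebyshev-distance fovea test, and the linear blend are illustrative assumptions, not the claimed method.

```python
def classify_tiles(gaze_tile, tile_grid, fovea_radius=1, border_width=1):
    """Hypothetical tile classification: tiles around the gaze are 'central',
    a ring around them is 'border', and everything else is 'periphery'."""
    gx, gy = gaze_tile
    labels = {}
    for (tx, ty) in tile_grid:
        d = max(abs(tx - gx), abs(ty - gy))   # distance in whole tiles
        if d <= fovea_radius:
            labels[(tx, ty)] = "central"      # render at the first (high) density
        elif d <= fovea_radius + border_width:
            labels[(tx, ty)] = "border"       # render at both densities, then blend
        else:
            labels[(tx, ty)] = "periphery"    # render at the second (low) density
    return labels

def blend_border(tile_hi, tile_lo, alpha=0.5):
    # Linear blend of the two renders of a border tile to hide the seam.
    return alpha * tile_hi + (1 - alpha) * tile_lo

grid = [(x, y) for x in range(8) for y in range(8)]
labels = classify_tiles(gaze_tile=(3, 4), tile_grid=grid)
print(labels[(3, 4)], labels[(5, 4)], labels[(7, 7)])  # central border periphery
```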
  • Patent number: 11669160
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
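
As a rough illustration of predicting a future gaze location "based on the current motion of one or both of the user's eyes", here is a constant-velocity extrapolation in Python. The patent does not specify its predictor; the function names and the latency figure are assumptions.

```python
def predict_gaze(gaze_now, gaze_prev, dt_past, dt_future):
    """Constant-velocity extrapolation of the 2D gaze point: estimate eye
    velocity from the last two tracker samples and project it forward by
    the display pipeline's latency."""
    vx = (gaze_now[0] - gaze_prev[0]) / dt_past
    vy = (gaze_now[1] - gaze_prev[1]) / dt_past
    return (gaze_now[0] + vx * dt_future, gaze_now[1] + vy * dt_future)

# Predict the gaze one frame (~11 ms at 90 Hz) ahead, then center the
# high-resolution region of the foveated frame at the predicted point.
print(predict_gaze(gaze_now=(0.52, 0.48), gaze_prev=(0.50, 0.48),
                   dt_past=0.011, dt_future=0.011))  # ~(0.54, 0.48)
```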
  • Publication number: 20230134355
    Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, rendering a first area of a second image corresponding to a second frame of the video stream, generating a second area of the second image corresponding to the second frame of the video stream by re-projecting the second area of the first image according to one or more warping parameters, and constructing the second image corresponding to the second frame by compositing the rendered first area and the generated second area of the second image. In another embodiment, a method includes an operating system receiving a set of data associated with an object from a first application, storing the set of data on the operating system, receiving a command to share the object with a second application, and allowing the second application to access the portion of the stored data associated with the object that the second application needs.
    Type: Application
    Filed: October 27, 2022
    Publication date: May 4, 2023
    Inventors: Steven Paul Lansel, Todd Douglas Keeler, Rohit Rao Padebettu, Alexander Michael Louie, Michal Hlavac, Wai Leong Chak, Yeliz Karadayi
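
A minimal sketch of the first embodiment above (re-projecting part of the previous frame and compositing it with a freshly rendered area), assuming a pure-translation warp and NumPy arrays standing in for images; all names are illustrative.

```python
import numpy as np

def reproject(image, warp):
    """Placeholder re-projection: shift the previous frame by an integer
    translation, standing in for the abstract's warping parameters."""
    dx, dy = warp
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def build_second_frame(first_image, rendered_area, rendered_mask, warp):
    # Composite: keep the freshly rendered first area (rendered_mask == True)
    # and fill the second area by re-projecting the previous frame.
    frame = reproject(first_image, warp)
    frame[rendered_mask] = rendered_area[rendered_mask]
    return frame

prev = np.zeros((4, 4)); prev[1, 1] = 1.0                 # frame N-1
new = np.full((4, 4), 0.5)                                # newly rendered pixels
mask = np.zeros((4, 4), dtype=bool); mask[:, :2] = True   # area rendered anew
print(build_second_frame(prev, new, mask, warp=(1, 0)))
```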
  • Publication number: 20230136662
    Abstract: In one embodiment, a method includes obtaining a first frame rendered for a first head pose and a second frame rendered for a second head pose, generating first motion vectors based on a first comparison between the first frame and the second frame, determining a first positional displacement vector based on the first head pose and the second head pose, determining a second positional displacement vector based on the second head pose and a subsequent head pose, generating a positional extrapolation for the subsequent head pose by projecting the second positional displacement vector onto the first positional displacement vector, generating a scaling factor based on the positional extrapolation, updating the second frame based on the scaling factor and the first motion vectors, and rendering a subsequent frame for the subsequent head pose based on the updated second frame.
    Type: Application
    Filed: October 19, 2022
    Publication date: May 4, 2023
    Inventors: Todd Douglas Keeler, Steven Paul Lansel
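
The projection-and-scaling step lends itself to a short sketch. The Python below reads the abstract as a vector projection of the second head displacement onto the first; that reading is an assumption, and extrapolate_frame is a placeholder for the real warp.

```python
import numpy as np

def extrapolation_scale(pose1, pose2, pose_next):
    """Project the newest head displacement onto the previous one; the
    result scales the motion vectors when extrapolating the next frame."""
    d1 = np.asarray(pose2, float) - np.asarray(pose1, float)      # first displacement
    d2 = np.asarray(pose_next, float) - np.asarray(pose2, float)  # second displacement
    denom = float(np.dot(d1, d1))
    return 0.0 if denom == 0.0 else float(np.dot(d2, d1)) / denom

def extrapolate_frame(frame, motion_vectors, scale):
    # Placeholder warp: push the last rendered frame along its per-block
    # motion vectors, scaled by the head-motion extrapolation.
    return frame + scale * motion_vectors

s = extrapolation_scale([0, 0, 0], [1, 0, 0], [2, 0.1, 0])
print(s)  # 1.0: the head kept moving at the same rate along the prior direction
```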
  • Publication number: 20230128288
    Abstract: In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
    Type: Application
    Filed: October 21, 2022
    Publication date: April 27, 2023
    Inventors: Steven Paul Lansel, Guodong Rong, Jian Zhang
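
A minimal sketch of extrapolating the layer frame's pose from the current and previously submitted poses, assuming constant velocity and position-only poses; a real system would also extrapolate orientation (e.g., with quaternions).

```python
def extrapolate_pose(pose_now, pose_prev, frames_ahead=1):
    """Constant-velocity pose extrapolation for a composited layer frame.
    Poses are (x, y, z) tuples here for brevity; names are assumptions."""
    return tuple(p + frames_ahead * (p - q) for p, q in zip(pose_now, pose_prev))

# The layer was submitted at z = -2.0 and then z = -1.9; if the app skips
# a frame, render the layer at the extrapolated pose instead of a stale one.
print(extrapolate_pose((0.0, 0.0, -1.9), (0.0, 0.0, -2.0)))  # ~(0.0, 0.0, -1.8)
```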
  • Publication number: 20220392037
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Application
    Filed: June 23, 2022
    Publication date: December 8, 2022
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Patent number: 11435593
    Abstract: The disclosed computer-implemented method may include (1) displaying one or more images to a user via a display comprising multiple display regions, (2) switching each of the display regions to a blocking state in which a view of the user's real-world environment in a corresponding region of the user's field of view is blocked from the user, (3) detecting a pass-through triggering event involving one or more objects in the user's real-world environment, (4) identifying one or more display regions corresponding to a region of the user's field of view occupied by the object, and (5) switching each of the one or more display regions to a pass-through state in which the view of the user's real-world environment in the corresponding region of the user's field of view is passed through to the user. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: September 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sebastian Sztuk, Steven Paul Lansel
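
The region-switching logic above can be sketched as follows, assuming rectangular display regions and detected-object boxes in normalized view coordinates; the overlap test and the state names are illustrative, not the claimed mechanism.

```python
def update_display_regions(regions, objects, triggered):
    """Hypothetical per-region state update: regions default to 'blocking';
    a region switches to 'pass-through' when a triggering object occupies
    the corresponding part of the user's field of view."""
    states = {}
    for region_id, region_box in regions.items():
        occupied = triggered and any(overlaps(region_box, o) for o in objects)
        states[region_id] = "pass-through" if occupied else "blocking"
    return states

def overlaps(a, b):
    # Axis-aligned overlap test on (x0, y0, x1, y1) boxes in view space.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

regions = {"left": (0.0, 0.0, 0.5, 1.0), "right": (0.5, 0.0, 1.0, 1.0)}
person = (0.6, 0.2, 0.8, 0.9)   # detected object in the right half of view
print(update_display_regions(regions, [person], triggered=True))
# {'left': 'blocking', 'right': 'pass-through'}
```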
  • Patent number: 11398020
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: July 26, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Publication number: 20220198627
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Publication number: 20220201271
    Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, where the first image has complete pixel information, rendering a provisional image corresponding to a second frame of the video stream subsequent to the first frame, where the provisional image has a first area with complete pixel information and a second area with incomplete pixel information, generating a predicted image corresponding to the second frame by re-projecting at least an area of the first image according to one or more warping parameters, and generating a second image corresponding to the second frame by compositing the rendered provisional image and the predicted image.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: Behnam Bastani, Steven Paul Lansel, Todd Douglas Keeler
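
A minimal sketch of the compositing step above, assuming a boolean mask marks the provisional image's complete pixels and the predicted (re-projected) image supplies the rest; the mask convention is an assumption for illustration.

```python
import numpy as np

def composite_second_frame(provisional, complete_mask, predicted):
    """Keep the provisional pixels where they are complete and fill the
    incomplete area from the predicted (re-projected) image."""
    out = predicted.copy()
    out[complete_mask] = provisional[complete_mask]
    return out

provisional = np.full((2, 4), 0.8)                 # partially rendered frame N
mask = np.array([[True, True, False, False]] * 2)  # right half never rendered
predicted = np.full((2, 4), 0.3)                   # frame N-1 re-projected
print(composite_second_frame(provisional, mask, predicted))
```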
  • Publication number: 20220004256
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Patent number: 11132056
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Patent number: 11048091
    Abstract: An image generator is configured to generate display light. A first waveguide is configured to generate wide-field image light from a first portion of the display light. A first outcoupling element of the first waveguide extends to a boundary of the frame to provide the wide-field image light to substantially all of the user's augmented field of view (FOV). A second waveguide is configured to generate inset image light from a second portion of the display light received from the image generator.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: June 29, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Steven Paul Lansel, Sebastian Sztuk, Kirk Eric Burgess, Brian Wheelwright
  • Publication number: 20210173474
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Application
    Filed: December 4, 2019
    Publication date: June 10, 2021
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Publication number: 20210150358
    Abstract: A method for evaluating an external machine learning program while limiting access to internal training data includes providing labeled training data from a first source, receiving, by the first source, a machine learning program from a second source different from the first source, blocking, by the first source, access by the second source to the labeled training data, and training, by the first source, the machine learning program according to a supervised machine learning process using the labeled training data. The method further includes generating a first set of metrics from the supervised machine learning process that provide feedback about training of the machine learning program, analyzing the first set of metrics to identify subset data therein, and, in order to permit evaluation of the trained program, transmitting, to the second source, those metrics from the first set of metrics that do not include the subset data.
    Type: Application
    Filed: January 28, 2021
    Publication date: May 20, 2021
    Inventor: Steven Paul LANSEL
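
The gatekeeping flow above (train the external program locally, then return only metrics that do not expose the training data) might look like the sketch below; the metric names, the TinyModel stand-in, and the sensitivity test are invented for illustration.

```python
def train_and_report(model_fn, labeled_data, is_sensitive):
    """Sketch of the gatekeeping flow: the data holder (first source) trains
    the externally supplied model locally and returns only metrics that do
    not reveal the labeled training data."""
    model = model_fn()                 # program supplied by the second source
    metrics = model.fit(labeled_data)  # training data never leaves the first source
    return {k: v for k, v in metrics.items() if not is_sensitive(k)}

class TinyModel:
    def fit(self, data):
        # A real trainer would run supervised learning here; we just
        # fabricate a metrics dict of the kind the method would filter.
        return {"loss": 0.12, "accuracy": 0.95, "per_example_losses": [0.1, 0.2]}

# Per-example values could leak the labeled data, so withhold them.
report = train_and_report(TinyModel, labeled_data=[("x", "y")],
                          is_sensitive=lambda k: k.startswith("per_example"))
print(report)  # {'loss': 0.12, 'accuracy': 0.95}
```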
  • Patent number: 10871825
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: December 22, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Patent number: 10706572
    Abstract: Systems and methods for performing depth estimation may comprise: an illuminator capable of illuminating a scene from at least a first position and a second position, an image sensor to capture (i) a first image of the scene while the illuminator illuminates the scene from the first position and (ii) a second image of the scene while the illuminator illuminates the scene from the second position, and an image processor to receive the first and second images from the image sensor and estimate a depth of at least one feature that appears in the first and second images. The depth is estimated based on the relative intensity of the first image and the second image, a distance between the first illumination position and the second illumination position, and a position of the at least one feature within at least one of the first and second images.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: July 7, 2020
    Assignees: OLYMPUS CORPORATION, THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Steven Paul Lansel, Brian A. Wandell, Andy Lai Lin
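
Under a pure inverse-square illumination model, the depth of a point along a pixel ray can be recovered from the intensity ratio of the two differently lit captures. The bisection solver below is an illustrative simplification of this idea, not the patented estimator; the geometry and names are assumptions.

```python
import math

def depth_from_flash_ratio(ratio, ray_dir, p1, p2, d_lo=0.05, d_hi=10.0):
    """Solve for depth d along a pixel ray such that inverse-square falloff
    from two known illuminator positions matches the measured ratio I1/I2.
    Ignores surface orientation and albedo for simplicity."""
    def f(d):
        point = [d * c for c in ray_dir]   # candidate surface point on the ray
        r1 = math.dist(point, p1)
        r2 = math.dist(point, p2)
        return (r2 / r1) ** 2 - ratio      # zero where the model matches the data
    for _ in range(60):                    # bisection on [d_lo, d_hi]
        mid = 0.5 * (d_lo + d_hi)
        if f(d_lo) * f(mid) <= 0:
            d_hi = mid
        else:
            d_lo = mid
    return 0.5 * (d_lo + d_hi)

# Illuminator fired from the lens (origin) and from 10 cm behind it.
d = depth_from_flash_ratio(ratio=1.21, ray_dir=(0, 0, 1),
                           p1=(0, 0, 0), p2=(0, 0, -0.1))
print(round(d, 3))  # ~1.0 m, since (1.1 / 1.0)^2 = 1.21
```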
  • Publication number: 20190178628
    Abstract: Depth estimation may be performed by a movable illumination unit, a movable image sensing unit having a fixed position relative to the illumination unit, a memory, and one or more processors coupled to the memory. The processors read instructions from the memory to perform operations including receiving a reference image and a non-reference image from the image sensing unit and estimating a depth of a point of interest that appears in the reference and non-reference images. The reference image is captured when the image sensing unit and the illumination unit are located at a first position. The non-reference image is captured when the image sensing unit and the illumination unit are located at a second position. The first and second positions are separated by at least a translation along an optical axis of the image sensing unit. Estimating the depth of the point is based on the translation.
    Type: Application
    Filed: May 11, 2017
    Publication date: June 13, 2019
    Inventor: Steven Paul Lansel
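
When the only motion is a translation t along the optical axis, the inverse-square model gives a closed form: I_near/I_far = ((d + t)/d)^2, so d = t/(sqrt(I_near/I_far) - 1). A sketch under that simplified model (not the publication's full estimator):

```python
import math

def depth_from_axial_translation(i_near, i_far, t):
    """Closed-form depth when the camera/illuminator rig retreats by t
    along its optical axis, assuming pure inverse-square falloff."""
    ratio = math.sqrt(i_near / i_far)
    if ratio <= 1.0:
        raise ValueError("the nearer capture must be brighter")
    return t / (ratio - 1.0)

# The rig moves 5 cm back along the optical axis and the point of
# interest dims from 100 to 82.6 (arbitrary units).
print(round(depth_from_axial_translation(100.0, 82.6, t=0.05), 3))  # ~0.5 m
```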
  • Publication number: 20180232899
    Abstract: Systems and methods for performing depth estimation may comprise: an illuminator capable of illuminating a scene from at least a first position and a second position, an image sensor to capture (i) a first image of the scene while the illuminator illuminates the scene from the first position and (ii) a second image of the scene while the illuminator illuminates the scene from the second position, and an image processor to receive the first and second images from the image sensor and estimate a depth of at least one feature that appears in the first and second images. The depth is estimated based on the relative intensity of the first image and the second image, a distance between the first illumination position and the second illumination position, and a position of the at least one feature within at least one of the first and second images.
    Type: Application
    Filed: August 26, 2016
    Publication date: August 16, 2018
    Inventors: Steven Paul LANSEL, Brian A. WANDELL, Andy Lai LIN