Patents by Inventor Steven Paul LANSEL

Steven Paul LANSEL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250116864
    Abstract: A method implemented by a computing device includes rendering on a first display and a second display of the computing device an extended reality (XR) environment, and determining a context of the XR environment with respect to a user. Determining the context includes determining characteristics associated with each eye of the user with respect to virtual content displayed within the XR environment. The method includes determining, based on the characteristics, a mapping between a first set of coordinates of the virtual content as displayed on the first display and a second set of coordinates of the virtual content as displayed on the second display, generating, based on the mapping of the first set of coordinates and the second set of coordinates, composite virtual content to be rendered on the first display and the second display, and re-rendering on the first display and the second display the composite virtual content.
    Type: Application
    Filed: October 10, 2023
    Publication date: April 10, 2025
    Inventor: Steven Paul Lansel
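The per-eye coordinate mapping described in the entry above can be illustrated with a standard pinhole stereo-disparity model; this is a minimal sketch, and the focal length, interpupillary distance (IPD), and depth values are illustrative assumptions, not values from the patent:

```python
def map_left_to_right(x_left_px, depth_m, focal_px=500.0, ipd_m=0.063):
    """Map a horizontal pixel coordinate on the left display to the
    corresponding coordinate on the right display for content at a
    given depth, using the classic stereo relation
    disparity = focal * IPD / depth (a hypothetical stand-in for the
    patent's characteristic-based mapping)."""
    disparity_px = focal_px * ipd_m / depth_m
    return x_left_px - disparity_px

# Content at 2 m shifts little between displays; content at 0.5 m shifts more.
far_x = map_left_to_right(400.0, depth_m=2.0)
near_x = map_left_to_right(400.0, depth_m=0.5)
assert far_x > near_x  # nearer content has larger disparity
```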
  • Patent number: 12271995
    Abstract: In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
    Type: Grant
    Filed: October 21, 2022
    Date of Patent: April 8, 2025
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Steven Paul Lansel, Guodong Rong, Jian Zhang
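The layer-frame pose extrapolation in the entry above can be sketched as a constant-velocity step from the two most recently submitted poses; poses are simplified here to 3D translations for illustration (the patent's actual pose representation is not specified in the abstract):

```python
import numpy as np

def extrapolate_pose(current_pose, previous_pose):
    """Constant-velocity extrapolation: advance the layer's pose by the
    displacement observed between the two most recently submitted frames."""
    velocity = current_pose - previous_pose
    return current_pose + velocity

prev = np.array([0.0, 0.0, -1.0])    # layer pose at frame N-1
curr = np.array([0.1, 0.0, -1.0])    # layer pose at frame N
pred = extrapolate_pose(curr, prev)  # predicted pose for frame N+1
assert np.allclose(pred, [0.2, 0.0, -1.0])
```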
  • Publication number: 20240354894
    Abstract: A filter system can effectively filter images with low computation cost by, instead of using large 2D filters, applying a series of smaller filters that require less compute and memory. In some implementations, the larger filter function, ƒ, is replaced by ƒ1, ƒ2, …, ƒN, where ƒ(x) ≈ ƒ1(x) + ƒ2(x) + … + ƒN(x). The filter system can apply these filters sequentially across multiple frames in time. The time integration of information by the human visual system results in the perception of a single higher quality filtering result, while using only the compute and memory footprint necessary to implement the filters. The number of frames across which a filter can be split without introducing flicker artifacts is dependent on the refresh rate of the display.
    Type: Application
    Filed: February 9, 2024
    Publication date: October 24, 2024
    Inventors: Grant Kaijuin Yang, Steven Paul Lansel, Irad Ratmansky, Bruce Zitelli
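Because convolution is linear in the kernel, partitioning ƒ into sub-filters and summing the per-frame results reproduces the full-filter output exactly; a minimal 1D NumPy sketch (the kernel size, the three-way partition, and the random signal are illustrative choices, not from the patent):

```python
import numpy as np

def conv1d(signal, kernel):
    # Same-size linear convolution, standing in for a display filter pass.
    return np.convolve(signal, kernel, mode="same")

# Hypothetical large filter: a 9-tap averaging kernel f.
f = np.ones(9) / 9.0

# Split f into N = 3 sub-kernels f1..f3 whose sum reconstructs f.
# Any partition works because convolution is linear in the kernel.
parts = [np.where((np.arange(9) // 3) == i, f, 0.0) for i in range(3)]
assert np.allclose(sum(parts), f)

signal = np.random.default_rng(0).random(64)

# Full-filter result, as if computed in a single frame:
full = conv1d(signal, f)

# Per-frame results with the small filters; the eye's temporal
# integration corresponds to summing the frames:
frames = [conv1d(signal, p) for p in parts]
integrated = sum(frames)

assert np.allclose(integrated, full)
```

Each frame only pays for a 9/3-tap sub-filter's compute and memory, which is the cost saving the abstract describes.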
  • Patent number: 12069230
    Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, where the first image has complete pixel information, rendering a provisional image corresponding to a second frame of the video stream subsequent to the first frame, where the provisional image has a first area with complete pixel information and a second area with incomplete pixel information, generating a predicted image corresponding to the second frame by re-projecting at least an area of the first image according to one or more warping parameters, and generating a second image corresponding to the second frame by compositing the rendered provisional image and the predicted image.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: August 20, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Behnam Bastani, Steven Paul Lansel, Todd Douglas Keeler
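The compositing step in the entry above reduces to a mask-driven selection between freshly rendered pixels and pixels predicted from the re-projected previous frame; this sketch omits the warping itself, and the image sizes and valid-region layout are illustrative:

```python
import numpy as np

def composite(provisional, valid_mask, predicted):
    """Keep freshly rendered pixels where the provisional frame has
    complete pixel information; fall back to the re-projected previous
    frame everywhere else."""
    return np.where(valid_mask, provisional, predicted)

h, w = 4, 6
provisional = np.full((h, w), 1.0)   # newly rendered content
valid = np.zeros((h, w), dtype=bool)
valid[:, :3] = True                  # only the left half was rendered
predicted = np.full((h, w), 2.0)     # warped previous frame

out = composite(provisional, valid, predicted)
assert np.all(out[:, :3] == 1.0) and np.all(out[:, 3:] == 2.0)
```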
  • Publication number: 20240249478
    Abstract: A method implemented by a computing device includes rendering on displays of a computing device an extended reality (XR) environment, and determining a context of the XR environment with respect to a user. Determining the context includes determining characteristics associated with an eye of the user with respect to content displayed. The method includes generating, based on the characteristics associated with the eye, a foveated map including a plurality of foveal regions. The plurality of foveal regions includes a plurality of zones each corresponding to a low-resolution area of the content for the respective zone. The method includes inputting one or more of the plurality of zones into a machine-learning model trained to generate a super-resolution reconstruction of the foveated map based on regions of interest identified within the one or more of the plurality of zones, and outputting, by the machine-learning model, the super-resolution reconstruction of the foveated map.
    Type: Application
    Filed: December 28, 2023
    Publication date: July 25, 2024
    Inventors: Sebastian Sztuk, Ilya Brailovskiy, Steven Paul Lansel, Grant Kaijuin Yang
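A foveated map's zones can be sketched as concentric distance bands around the gaze point; the radii below are illustrative assumptions, and the machine-learning super-resolution step is omitted:

```python
import numpy as np

def foveal_zone_map(h, w, gaze_yx, radii=(20, 50)):
    """Label each pixel with a zone index by distance from the gaze
    point: 0 = foveal, 1 = mid-periphery, 2 = far periphery.
    The radii are illustrative thresholds, not values from the patent."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze_yx[0], xs - gaze_yx[1])
    return np.digitize(dist, radii)  # 0, 1, or 2 per pixel

zones = foveal_zone_map(100, 100, gaze_yx=(50, 50))
assert zones[50, 50] == 0   # at the gaze point: foveal zone
assert zones[0, 0] == 2     # far corner: periphery
```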
  • Patent number: 12039695
    Abstract: In particular embodiments, the disclosure provides a method comprising: rendering, on a graphics processing unit (GPU), a low-resolution image associated with a scene, the low-resolution image having a resolution that is lower than a target resolution; transmitting a version of the low-resolution image to a neural accelerator; processing, on the neural accelerator, the version of the low-resolution image using a trained machine-learning model, thereby outputting a plurality of control parameters; transmitting the control parameters from the neural accelerator to the GPU; processing, on the GPU, the low-resolution image and the control parameters to construct a high-resolution image having the target resolution, wherein the GPU is programmed to determine a plurality of pixel weights for performing an interpolation using the control parameters; and outputting the high-resolution image.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: July 16, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Haomiao Jiang, Todd Douglas Keeler, Grant Kaijuin Yang, Rohit Rao Padebettu, Steven Paul Lansel, Behnam Bastani
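One way to read the GPU-side step in the entry above: the neural accelerator emits per-pixel control parameters, and the GPU turns them into normalized pixel weights for interpolating neighbors. The sketch below is a loose stand-in, not the patented design — the softmax weighting, the 2x2 neighborhood, and the nearest-style 2x expansion are all assumptions for illustration:

```python
import numpy as np

def upscale_with_control(lowres, control):
    """Blend each low-res pixel's 2x2 neighborhood using weights derived
    from predicted control parameters, then expand 2x. A hypothetical
    stand-in for the patent's weight-generation and interpolation."""
    h, w = lowres.shape
    # Softmax over the 4 control parameters per low-res location,
    # yielding normalized interpolation weights.
    e = np.exp(control - control.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)         # (h, w, 4)
    # Gather each pixel's 2x2 neighborhood (clamped at the borders).
    yy = np.minimum(np.arange(h)[:, None] + np.array([0, 0, 1, 1]), h - 1)
    xx = np.minimum(np.arange(w)[:, None] + np.array([0, 1, 0, 1]), w - 1)
    neigh = lowres[yy[:, None, :], xx[None, :, :]]      # (h, w, 4)
    blended = (weights * neigh).sum(axis=-1)            # (h, w)
    # Nearest-style expansion of the blended result to 2x resolution.
    return np.repeat(np.repeat(blended, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(1)
low = rng.random((8, 8))
ctrl = rng.random((8, 8, 4))
high = upscale_with_control(low, ctrl)
assert high.shape == (16, 16)
```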
  • Patent number: 12032711
    Abstract: A method for evaluating an external machine learning program while limiting access to internal training data includes providing labeled training data from a first source, receiving, by the first source, a machine learning program from a second source different from the first source, blocking, by the first source, access by the second source to the labeled training data, and training, by the first source, the machine learning program according to a supervised machine learning process using the labeled training data. The method further includes generating a first set of metrics from the supervised machine learning process that provide feedback about training of the machine learning program, analyzing the first set of metrics to identify subset data therein, and, in order to permit evaluation of the machine learning program, transmitting, to the second source, those metrics from the first set of metrics that do not include the subset data.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: July 9, 2024
    Assignee: Olympus Corporation
    Inventor: Steven Paul Lansel
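The metric-filtering step in the entry above can be sketched as withholding any metric keyed as data-revealing before it is transmitted to the external party; the metric names here are hypothetical, not from the patent:

```python
def filter_metrics(metrics, sensitive_keys):
    """Return only the training metrics that may be shared with the
    second source; metrics that could expose the training data are
    withheld by the first source."""
    return {k: v for k, v in metrics.items() if k not in sensitive_keys}

metrics = {
    "train_loss": 0.31,
    "val_accuracy": 0.92,
    "per_sample_gradients": [...],   # could leak individual training examples
}
shared = filter_metrics(metrics, sensitive_keys={"per_sample_gradients"})
assert "per_sample_gradients" not in shared
assert shared["val_accuracy"] == 0.92
```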
  • Patent number: 11734808
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
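The back-to-front composition in layer order described above can be sketched as follows; the customized distortion mesh is reduced to a trivial horizontal-shift warp, and the alpha values are illustrative:

```python
import numpy as np

def composite_layers(sources):
    """Composite source images back-to-front in layer order. Each
    source's per-layer distortion is stood in for by a simple
    horizontal shift; a real mesh would remap vertices instead."""
    out = np.zeros_like(sources[0]["image"])
    for src in sorted(sources, key=lambda s: s["layer"]):
        warped = np.roll(src["image"], src["shift"], axis=1)  # stand-in warp
        a = src["alpha"]
        out = a * warped + (1 - a) * out                      # "over" compositing
    return out

base = {"image": np.full((4, 4), 0.2), "layer": 0, "shift": 0, "alpha": 1.0}
overlay = {"image": np.full((4, 4), 0.8), "layer": 1, "shift": 1, "alpha": 0.5}
frame = composite_layers([overlay, base])  # order of the input list is irrelevant
assert np.allclose(frame, 0.5 * 0.8 + 0.5 * 0.2)
```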
  • Publication number: 20230245260
    Abstract: In one embodiment, a method includes, by a computing system, rendering an image as multiple tiles using a tile-based graphics processing unit (GPU), determining a gaze location of a user wearing a head-mounted device, and using the gaze location to select, from the multiple tiles, central tiles in which the user's gaze location is located, periphery tiles outside of the central tiles, and border tiles located between the central tiles and the periphery tiles. The method includes instructing the GPU to render (a) the central tiles in a first pixel-density, (b) the periphery tiles in a second pixel-density, and (c) the border tiles in both the first pixel-density and the second pixel-density, and then blending the border tiles rendered in the first pixel-density with the border tiles rendered in the second pixel-density to create blended border tiles. The method then outputs the central tiles, the periphery tiles, and the blended border tiles using a display of the head-mounted device.
    Type: Application
    Filed: January 31, 2023
    Publication date: August 3, 2023
    Inventors: Weihua Gao, Todd Douglas Keeler, Steven Paul Lansel, Jian Zhang, Tianxin Ning
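The tile classification and border blending above can be sketched as distance bands in tile coordinates; the radii, the Chebyshev distance metric, and the fixed 50/50 blend weight are illustrative assumptions:

```python
import numpy as np

def classify_tiles(grid_h, grid_w, gaze_tile, central_r=1, border_r=2):
    """Label tiles by Chebyshev distance from the tile containing the
    gaze: 'central', 'border', or 'periphery' (radii are illustrative)."""
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    d = np.maximum(np.abs(ys - gaze_tile[0]), np.abs(xs - gaze_tile[1]))
    labels = np.full((grid_h, grid_w), "periphery", dtype=object)
    labels[d <= border_r] = "border"
    labels[d <= central_r] = "central"
    return labels

labels = classify_tiles(8, 8, gaze_tile=(4, 4))
assert labels[4, 4] == "central"
assert labels[4, 6] == "border"      # distance 2 from the gaze tile
assert labels[0, 0] == "periphery"   # distance 4 from the gaze tile

# A border tile rendered at both pixel densities is then blended:
hi_density_px, lo_density_px, alpha = 1.0, 0.0, 0.5
blended = alpha * hi_density_px + (1 - alpha) * lo_density_px
assert blended == 0.5
```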
  • Patent number: 11669160
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
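Predicting a future gaze location from current eye motion, as described above, can be sketched as a constant-velocity extrapolation of the two most recent gaze samples; this is a stand-in for whatever predictor the patent actually claims, and the timings are illustrative:

```python
import numpy as np

def predict_gaze(gaze_now, gaze_prev, dt_past, dt_future):
    """Constant-velocity prediction of a future gaze location from the
    two most recent gaze samples. The predicted location could seed the
    high-resolution region of a pre-rendered foveated frame."""
    velocity = (gaze_now - gaze_prev) / dt_past
    return gaze_now + velocity * dt_future

prev = np.array([100.0, 200.0])   # gaze 10 ms ago (pixels)
now = np.array([110.0, 200.0])    # gaze now: eye moving right
future = predict_gaze(now, prev, dt_past=0.010, dt_future=0.020)
assert np.allclose(future, [130.0, 200.0])
```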
  • Publication number: 20230134355
    Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, rendering a first area of a second image corresponding to a second frame of the video stream, generating a second area of the second image corresponding to the second frame of the video stream by re-projecting the second area of the first image according to one or more warping parameters, and constructing the second image corresponding to the second frame by compositing the rendered first area and the generated second area of the second image. In another embodiment, a method includes an operating system receiving a set of data associated with an object from a first application, storing the set of data on the operating system, receiving a command to share the object with a second application, and allowing the second application to access the portion of the data associated with the object that it needs.
    Type: Application
    Filed: October 27, 2022
    Publication date: May 4, 2023
    Inventors: Steven Paul Lansel, Todd Douglas Keeler, Rohit Rao Padebettu, Alexander Michael Louie, Michal Hlavac, Wai Leong Chak, Yeliz Karadayi
  • Publication number: 20230136662
    Abstract: In one embodiment, a method includes obtaining a first frame rendered for a first head pose and a second frame rendered for a second head pose, generating first motion vectors based on a first comparison between the first frame and the second frame, determining a first positional displacement vector based on the first head pose and the second head pose, determining a second positional displacement vector based on the second head pose and a subsequent head pose, generating a positional extrapolation for the subsequent head pose by projecting the second positional displacement vector onto the first positional displacement vector, generating a scaling factor based on the positional extrapolation, updating the second frame based on the scaling factor and the first motion vectors, and rendering a subsequent frame for the subsequent head pose based on the updated second frame.
    Type: Application
    Filed: October 19, 2022
    Publication date: May 4, 2023
    Inventors: Todd Douglas Keeler, Steven Paul Lansel
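The projection of the second positional displacement vector onto the first, from which the scaling factor is derived, can be sketched with a scalar projection; the vectors are illustrative, and the patent's exact use of the factor is simplified away here:

```python
import numpy as np

def project(v, onto):
    """Scalar projection of displacement v onto another displacement,
    expressed as a multiple of that displacement's length."""
    return np.dot(v, onto) / np.dot(onto, onto)

# Head displacement between frames 1 and 2, and between frame 2 and
# the subsequent head pose:
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([0.5, 0.5, 0.0])

# The projection measures how much of the new motion continues along
# the previous direction; a scaling factor for updating the frame is
# derived from this positional extrapolation.
scale = project(d2, d1)
assert np.isclose(scale, 0.5)
```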
  • Publication number: 20230128288
    Abstract: In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
    Type: Application
    Filed: October 21, 2022
    Publication date: April 27, 2023
    Inventors: Steven Paul Lansel, Guodong Rong, Jian Zhang
  • Publication number: 20220392037
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Application
    Filed: June 23, 2022
    Publication date: December 8, 2022
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Patent number: 11435593
    Abstract: The disclosed computer-implemented method may include (1) displaying one or more images to a user via a display comprising multiple display regions, (2) switching each of the display regions to a blocking state in which a view of the user's real-world environment in a corresponding region of the user's field of view is blocked from the user, (3) detecting a pass-through triggering event involving one or more objects in the user's real-world environment, (4) identifying one or more display regions corresponding to a region of the user's field of view occupied by the object, and (5) switching each of the one or more display regions to a pass-through state in which the view of the user's real-world environment in the corresponding region of the user's field of view is passed through to the user. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: September 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sebastian Sztuk, Steven Paul Lansel
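Step (4) above, identifying the display regions occupied by a detected object, can be sketched as a box-overlap test; the region layout and coordinates are illustrative:

```python
def regions_to_pass_through(object_box, region_boxes):
    """Return indices of display regions whose area overlaps the
    detected real-world object; those regions would be switched from
    the blocking state to the pass-through state.
    Boxes are (x0, y0, x1, y1) in field-of-view coordinates."""
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
    return [i for i, r in enumerate(region_boxes) if overlaps(object_box, r)]

# A 2x2 grid of display regions and an object straddling their corner:
regions = [(0, 0, 1, 1), (1, 0, 2, 1), (0, 1, 1, 2), (1, 1, 2, 2)]
obj = (0.5, 0.5, 1.5, 1.5)
assert regions_to_pass_through(obj, regions) == [0, 1, 2, 3]
```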
  • Patent number: 11398020
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: July 26, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Publication number: 20220201271
    Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, where the first image has complete pixel information, rendering a provisional image corresponding to a second frame of the video stream subsequent to the first frame, where the provisional image has a first area with complete pixel information and a second area with incomplete pixel information, generating a predicted image corresponding to the second frame by re-projecting at least an area of the first image according to one or more warping parameters, and generating a second image corresponding to the second frame by compositing the rendered provisional image and the predicted image.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: Behnam Bastani, Steven Paul Lansel, Todd Douglas Keeler
  • Publication number: 20220198627
    Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
  • Publication number: 20220004256
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
  • Patent number: 11132056
    Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods, using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel