Patents by Inventor Steven Paul LANSEL
Steven Paul LANSEL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250116864
Abstract: A method implemented by a computing device includes rendering on a first display and a second display of the computing device an extended reality (XR) environment, and determining a context of the XR environment with respect to a user. Determining the context includes determining characteristics associated with each eye of the user with respect to virtual content displayed within the XR environment. The method includes determining, based on the characteristics, a mapping of a first set of coordinates of the virtual content as displayed on the first display and a second set of coordinates of the virtual content as displayed on the second display; generating, based on the mapping of the first set of coordinates and the second set of coordinates, composite virtual content to be rendered on the first display and the second display; and re-rendering on the first display and the second display the composite virtual content.
Type: Application
Filed: October 10, 2023
Publication date: April 10, 2025
Inventor: Steven Paul Lansel
-
Patent number: 12271995
Abstract: In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
Type: Grant
Filed: October 21, 2022
Date of Patent: April 8, 2025
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Steven Paul Lansel, Guodong Rong, Jian Zhang
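The pose extrapolation described in the abstract above can be pictured as linear prediction from the last two submitted layer poses. A minimal sketch, assuming poses reduce to (x, y, angle) tuples rather than the full 6-DoF transforms a real compositor would track:

```python
def extrapolate_pose(current_pose, previous_pose):
    """Linearly extrapolate the next pose from the two most recent poses.

    Poses are (x, y, angle) tuples -- a hypothetical simplification of
    full 6-DoF transforms, used only for illustration.
    """
    return tuple(c + (c - p) for c, p in zip(current_pose, previous_pose))

# If the layer moved from x=0.0 to x=0.1 between submitted frames,
# predict x=0.2 for the next display frame.
predicted = extrapolate_pose((0.1, 0.0, 5.0), (0.0, 0.0, 0.0))
```

Rendering the layer at the extrapolated pose lets the compositor show plausible motion even when the application has not yet submitted a new frame.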
-
Publication number: 20240354894
Abstract: A filter system can effectively filter images with low computation cost by, instead of using large 2D filters, applying a series of smaller filters that require less compute and memory. In some implementations, the larger filter function, ƒ, is replaced by ƒ1, ƒ2, …, ƒN, where ƒ(x) ≈ ƒ1(x) + ƒ2(x) + … + ƒN(x). The filter system can apply these filters sequentially across multiple frames in time. The time integration of information by the human visual system results in the perception of a single higher-quality filtering result, while using only the compute and memory footprint necessary to implement the smaller filters. The number of frames across which a filter can be split without introducing flicker artifacts depends on the refresh rate of the display.
Type: Application
Filed: February 9, 2024
Publication date: October 24, 2024
Inventors: Grant Kaijuin Yang, Steven Paul Lansel, Irad Ratmansky, Bruce Zitelli
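Because convolution is linear in the kernel, splitting a filter into parts and showing each part on a successive frame sums (under the eye's temporal integration) to the full filter result. A 1-D sketch with an invented 5-tap kernel decomposition:

```python
def apply_kernel(signal, kernel):
    """Naive 1-D convolution, same-size output, zero-padded edges."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

# Hypothetical decomposition: one 5-tap blur split into two cheaper parts
# whose taps sum to the full kernel.
full_kernel = [0.1, 0.2, 0.4, 0.2, 0.1]
part_a      = [0.1, 0.2, 0.0, 0.0, 0.0]
part_b      = [0.0, 0.0, 0.4, 0.2, 0.1]

signal = [1.0, 2.0, 3.0, 4.0]
frame1 = apply_kernel(signal, part_a)      # shown on frame t
frame2 = apply_kernel(signal, part_b)      # shown on frame t+1
full   = apply_kernel(signal, full_kernel) # the expensive reference filter

# The eye's temporal integration perceives roughly frame1 + frame2,
# which equals the full filter result because convolution is linear.
perceived = [a + b for a, b in zip(frame1, frame2)]
```

Each per-frame pass touches fewer nonzero taps, which is where the compute and memory savings come from.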
-
Patent number: 12069230
Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, where the first image has complete pixel information, rendering a provisional image corresponding to a second frame of the video stream subsequent to the first frame, where the provisional image has a first area with complete pixel information and a second area with incomplete pixel information, generating a predicted image corresponding to the second frame by re-projecting at least an area of the first image according to one or more warping parameters, and generating a second image corresponding to the second frame by compositing the rendered provisional image and the predicted image.
Type: Grant
Filed: December 23, 2020
Date of Patent: August 20, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Behnam Bastani, Steven Paul Lansel, Todd Douglas Keeler
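The compositing step above can be pictured as filling the provisional frame's unrendered pixels from the reprojected prediction. A toy sketch, using None to mark incomplete pixels (an illustrative convention, not the patent's representation):

```python
def composite(provisional, predicted):
    """Combine a partially rendered frame with a reprojected prediction.

    `provisional` uses None to mark pixels that were not rendered this
    frame; those are filled from `predicted`.
    """
    return [
        [pv if pv is not None else pr for pv, pr in zip(prow, qrow)]
        for prow, qrow in zip(provisional, predicted)
    ]

provisional = [[1, 2, None],
               [4, None, None]]
predicted   = [[9, 9, 9],
               [9, 9, 9]]
result = composite(provisional, predicted)  # -> [[1, 2, 9], [4, 9, 9]]
```

Only the first area needs to be freshly rendered; the rest is reused from the warped previous frame, which is the source of the rendering savings.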
-
Publication number: 20240249478
Abstract: A method implemented by a computing device includes rendering on displays of the computing device an extended reality (XR) environment, and determining a context of the XR environment with respect to a user. Determining the context includes determining characteristics associated with an eye of the user with respect to content displayed. The method includes generating, based on the characteristics associated with the eye, a foveated map including a plurality of foveal regions. The plurality of foveal regions includes a plurality of zones each corresponding to a low-resolution area of the content for the respective zone. The method includes inputting one or more of the plurality of zones into a machine-learning model trained to generate a super-resolution reconstruction of the foveated map based on regions of interest identified within the one or more of the plurality of zones, and outputting, by the machine-learning model, the super-resolution reconstruction of the foveated map.
Type: Application
Filed: December 28, 2023
Publication date: July 25, 2024
Inventors: Sebastian Sztuk, Ilya Brailovskiy, Steven Paul Lansel, Grant Kaijuin Yang
-
Patent number: 12039695
Abstract: In particular embodiments, the disclosure provides a method comprising: rendering, on a graphics processing unit (GPU), a low-resolution image associated with a scene, the low-resolution image having a resolution that is lower than a target resolution; transmitting a version of the low-resolution image to a neural accelerator; processing, on the neural accelerator, the version of the low-resolution image using a trained machine-learning model, thereby outputting a plurality of control parameters; transmitting the control parameters from the neural accelerator to the GPU; processing, on the GPU, the low-resolution image and the control parameters to construct a high-resolution image having the target resolution, wherein the GPU is programmed to determine a plurality of pixel weights for performing an interpolation using the control parameters; and outputting the high-resolution image.
Type: Grant
Filed: February 7, 2022
Date of Patent: July 16, 2024
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Haomiao Jiang, Todd Douglas Keeler, Grant Kaijuin Yang, Rohit Rao Padebettu, Steven Paul Lansel, Behnam Bastani
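The idea of steering cheap GPU interpolation with model-produced control parameters can be sketched in one dimension. The 2x upsampling and one-blend-weight-per-inserted-sample scheme below are assumptions for illustration, not the patented network or GPU path:

```python
def upsample_1d(low, weights):
    """Construct a 2x upsampled signal.

    Each inserted sample is a weighted blend of its two low-res
    neighbors; `weights` plays the role of the control parameters
    (one blend factor per inserted sample -- a convention invented
    for this sketch).
    """
    out = []
    for i in range(len(low) - 1):
        out.append(low[i])
        w = weights[i]
        out.append((1 - w) * low[i] + w * low[i + 1])
    out.append(low[-1])
    return out

# Control parameters 0.5 and 0.25 place the inserted samples at the
# midpoint and the quarter point between their neighbors.
high = upsample_1d([0.0, 2.0, 4.0], [0.5, 0.25])  # -> [0.0, 1.0, 2.0, 2.5, 4.0]
```

The division of labor matters: the accelerator runs the expensive model once to emit a few parameters, and the GPU does only the lightweight weighted interpolation per pixel.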
-
Patent number: 12032711
Abstract: A method for evaluating an external machine learning program while limiting access to internal training data includes providing labeled training data from a first source, receiving, by the first source, a machine learning program from a second source different from the first source, blocking, by the first source, access by the second source to the labeled training data, and training, by the first source, the machine learning program according to a supervised machine learning process using the labeled training data. The method further includes generating a first set of metrics from the supervised machine learning process that provide feedback about training of the neural network model, analyzing the first set of metrics to identify subset data therein, and, in order to permit evaluation of the neural network model, transmitting, to the second source, those metrics from the first set of metrics that do not include the subset data.
Type: Grant
Filed: January 28, 2021
Date of Patent: July 9, 2024
Assignee: OLYMPUS CORPORATION
Inventor: Steven Paul Lansel
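The final transmission step amounts to withholding any metric that could leak the protected training data while still returning enough feedback to evaluate the model. A minimal sketch (the metric names are hypothetical):

```python
def shareable_metrics(metrics, restricted_keys):
    """Return only the metrics safe to send back to the model's author,
    withholding entries flagged as containing training-data-derived
    'subset data' (key names here are illustrative, not from the patent).
    """
    return {k: v for k, v in metrics.items() if k not in restricted_keys}

metrics = {
    "val_accuracy": 0.93,
    "val_loss": 0.21,
    "per_sample_gradients": [0.12, -0.34],  # could reveal individual training items
}
safe = shareable_metrics(metrics, restricted_keys={"per_sample_gradients"})
```

The second source can judge whether its model trained well from the aggregate metrics without ever seeing the first source's labeled data.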
-
Patent number: 11734808
Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
Type: Grant
Filed: June 23, 2022
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
-
Publication number: 20230245260
Abstract: In one embodiment, a method includes, by a computing system, rendering an image as multiple tiles using a tile-based graphics processing unit (GPU); determining a gaze location of a user wearing a head-mounted device; using the gaze location to select, from the multiple tiles, central tiles in which the user's gaze location is located, periphery tiles outside of the central tiles, and border tiles located between the central tiles and the periphery tiles; instructing the GPU to render (a) the central tiles in a first pixel density, (b) the periphery tiles in a second pixel density, and (c) the border tiles in both the first and the second pixel density; blending the border tiles rendered in the first pixel density with the border tiles rendered in the second pixel density to create blended border tiles; and outputting the central tiles, the periphery tiles, and the blended border tiles using a display of the head-mounted device.
Type: Application
Filed: January 31, 2023
Publication date: August 3, 2023
Inventors: Weihua Gao, Todd Douglas Keeler, Steven Paul Lansel, Jian Zhang, Tianxin Ning
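The border-tile blend is a cross-fade between the two renderings of the same tile. A sketch assuming a per-row alpha ramp from the full-density edge to the reduced-density edge (the ramp shape is an assumption, not specified in the abstract):

```python
def blend_border(hi_tile, lo_tile, alpha):
    """Blend a border tile rendered at both pixel densities.

    `alpha` gives one blend factor per row: 1.0 at the edge touching the
    central (full-density) region, 0.0 at the edge touching the
    periphery, hiding the resolution seam.
    """
    return [[a * h + (1 - a) * l for h, l in zip(hrow, lrow)]
            for a, hrow, lrow in zip(alpha, hi_tile, lo_tile)]

hi = [[1.0, 1.0], [1.0, 1.0]]   # border tile at full pixel density
lo = [[0.0, 0.0], [0.0, 0.0]]   # same tile upsampled from reduced density
blended = blend_border(hi, lo, alpha=[1.0, 0.0])  # row 0 fully hi, row 1 fully lo
```

Rendering border tiles twice costs a little extra, but the gradual transition avoids a visible hard boundary between foveal and peripheral resolution.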
-
Patent number: 11669160
Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
Type: Grant
Filed: September 20, 2021
Date of Patent: June 6, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
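Gaze prediction from current eye motion can be reduced, in its simplest form, to constant-velocity extrapolation from recent eye-tracker samples. This is a deliberate simplification for illustration; real saccade prediction uses nonlinear eye-movement models:

```python
def predict_gaze(samples, dt_ahead):
    """Extrapolate a future gaze point from the two most recent samples.

    `samples` is a list of (timestamp_s, x_deg, y_deg) tuples; constant
    angular velocity is assumed between now and `dt_ahead` seconds ahead.
    """
    (t0, x0, y0), (t1, x1, y1) = samples[-2:]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)

# Gaze moved 2 degrees right in 10 ms; predict 10 ms further ahead.
future = predict_gaze([(0.000, 0.0, 0.0), (0.010, 2.0, 0.0)], dt_ahead=0.010)
```

Pre-rendering the high-resolution foveal region at the predicted location hides the latency between an eye movement and the display update.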
-
Publication number: 20230134355
Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, rendering a first area of a second image corresponding to a second frame of the video stream, generating a second area of the second image corresponding to the second frame of the video stream by re-projecting the second area of the first image according to one or more warping parameters, and constructing the second image corresponding to the second frame by compositing the rendered first area and the generated second area of the second image. In another embodiment, a method includes an operating system receiving a set of data associated with an object from a first application, storing the set of data on the operating system, receiving a command to share the object with a second application, and allowing the second application to access the portion of the data associated with the object that it needs.
Type: Application
Filed: October 27, 2022
Publication date: May 4, 2023
Inventors: Steven Paul Lansel, Todd Douglas Keeler, Rohit Rao Padebettu, Alexander Michael Louie, Michal Hlavac, Wai Leong Chak, Yeliz Karadayi
-
Publication number: 20230136662
Abstract: In one embodiment, a method includes obtaining a first frame rendered for a first head pose and a second frame rendered for a second head pose, generating first motion vectors based on a first comparison between the first frame and the second frame, determining a first positional displacement vector based on the first head pose and the second head pose, determining a second positional displacement vector based on the second head pose and a subsequent head pose, generating a positional extrapolation for the subsequent head pose by projecting the second positional displacement vector onto the first positional displacement vector, generating a scaling factor based on the positional extrapolation, updating the second frame based on the scaling factor and the first motion vectors, and rendering a subsequent frame for the subsequent head pose based on the updated second frame.
Type: Application
Filed: October 19, 2022
Publication date: May 4, 2023
Inventors: Todd Douglas Keeler, Steven Paul Lansel
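The projection step above is a scalar projection of the newest head displacement onto the previous one, which yields a factor for scaling the motion vectors. A sketch treating head poses as 3-D positions only (orientation handling is omitted):

```python
def project_scale(prev_disp, next_disp):
    """Scalar projection of the newest head displacement onto the
    previous one, usable as a scaling factor for extrapolated motion
    vectors. Displacements are (dx, dy, dz) tuples."""
    dot = sum(a * b for a, b in zip(next_disp, prev_disp))
    mag2 = sum(a * a for a in prev_disp)
    return dot / mag2 if mag2 else 0.0

# Head kept moving in the same direction but half as fast,
# so the previous frame's motion vectors should be scaled by ~0.5.
factor = project_scale(prev_disp=(0.2, 0.0, 0.0), next_disp=(0.1, 0.0, 0.0))
```

Because the factor comes from a projection, motion perpendicular to the previous displacement contributes nothing, and reversed motion yields a negative factor, flipping the extrapolated vectors.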
-
Publication number: 20230128288
Abstract: In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
Type: Application
Filed: October 21, 2022
Publication date: April 27, 2023
Inventors: Steven Paul Lansel, Guodong Rong, Jian Zhang
-
Publication number: 20220392037
Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
Type: Application
Filed: June 23, 2022
Publication date: December 8, 2022
Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
-
Patent number: 11435593
Abstract: The disclosed computer-implemented method may include (1) displaying one or more images to a user via a display comprising multiple display regions, (2) switching each of the display regions to a blocking state in which a view of the user's real-world environment in a corresponding region of the user's field of view is blocked from the user, (3) detecting a pass-through triggering event involving one or more objects in the user's real-world environment, (4) identifying one or more display regions corresponding to a region of the user's field of view occupied by the object, and (5) switching each of the one or more display regions to a pass-through state in which the view of the user's real-world environment in the corresponding region of the user's field of view is passed through to the user. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: May 6, 2019
Date of Patent: September 6, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Sebastian Sztuk, Steven Paul Lansel
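Steps (4) and (5) above reduce to mapping a detected object's footprint onto the grid of display regions and toggling the overlapping ones. A sketch where regions and object bounds are axis-aligned rectangles (the overlap test stands in for real object tracking):

```python
def update_regions(regions, object_bounds):
    """Switch regions overlapping a detected real-world object to
    pass-through and all others to blocking.

    `regions` maps region names to (x0, y0, x1, y1) rectangles in the
    user's field of view; `object_bounds` is the object's rectangle.
    """
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    return {name: ("pass-through" if overlaps(rect, object_bounds) else "blocking")
            for name, rect in regions.items()}

regions = {"left": (0, 0, 1, 1), "right": (1, 0, 2, 1)}
states = update_regions(regions, object_bounds=(0.5, 0.0, 0.9, 0.5))
```

Only the regions the object actually occupies reveal the real world; the rest of the display keeps showing the virtual scene.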
-
Patent number: 11398020
Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
Type: Grant
Filed: December 23, 2020
Date of Patent: July 26, 2022
Assignee: Facebook Technologies, LLC
Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
-
Publication number: 20220201271
Abstract: In one embodiment, a method includes accessing a first image corresponding to a first frame of a video stream, where the first image has complete pixel information, rendering a provisional image corresponding to a second frame of the video stream subsequent to the first frame, where the provisional image has a first area with complete pixel information and a second area with incomplete pixel information, generating a predicted image corresponding to the second frame by re-projecting at least an area of the first image according to one or more warping parameters, and generating a second image corresponding to the second frame by compositing the rendered provisional image and the predicted image.
Type: Application
Filed: December 23, 2020
Publication date: June 23, 2022
Inventors: Behnam Bastani, Steven Paul Lansel, Todd Douglas Keeler
-
Publication number: 20220198627
Abstract: In one embodiment, a method includes receiving a source image and its associated parameters from each of multiple image sources, associating each of the source images with a layer in a range of layers based on the parameters associated with the source images, the range of layers specifying a composition layering order of the source images, generating a corresponding customized distortion mesh for each particular source image in the source images based on the parameters associated with the particular source image and at least a portion of the parameters associated with each of the source images that is associated with any layer preceding a layer associated with the particular source image, modifying each of the source images using the corresponding customized distortion mesh, generating a composite image using the modified source images, and displaying the composite image as a frame in a video.
Type: Application
Filed: December 23, 2020
Publication date: June 23, 2022
Inventors: Rohit Rao Padebettu, Steven Paul Lansel, Todd Douglas Keeler
-
Publication number: 20220004256
Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
Type: Application
Filed: September 20, 2021
Publication date: January 6, 2022
Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
-
Patent number: 11132056
Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device. Predictive foveated display systems and methods using the predicted eye movements are also disclosed. Predictive variable focus display systems and methods using the predicted eye movements are also disclosed. Predicting eye movements may include predicting a future gaze location and/or predicting a future vergence plane for the user's eyes, based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
Type: Grant
Filed: December 4, 2019
Date of Patent: September 28, 2021
Assignee: Facebook Technologies, LLC
Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel