Patents by Inventor Anton S. Kaplanyan
Anton S. Kaplanyan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210241519
Abstract: In one embodiment, a computing system accesses a three-dimensional (3D) model of an environment, the 3D model comprising a virtual representation of an object in the environment. The computing system accesses an image of the object captured by a camera from a camera pose. The computing system accesses light source parameters associated with a virtual representation of a light source in the environment. The computing system renders, using the 3D model, pixels associated with the virtual representation of the object based on the light source parameters, the pixels being rendered from a virtual perspective corresponding to the camera pose. The computing system determines updated light source parameters based on a comparison of the rendered pixels to corresponding pixels located in the image of the object.
Type: Application
Filed: February 17, 2021
Publication date: August 5, 2021
Inventors: Anton S. Kaplanyan, Dejan Azinovic, Matthias Niessner, Tzu-Mao Li
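The render-and-compare loop this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a toy Lambertian renderer, a single directional light with one scalar intensity parameter, and an analytic gradient of a mean-squared-error photometric loss.

```python
import numpy as np

def render(intensity, normals, light_dir):
    """Toy Lambertian renderer: per-pixel brightness from one directional light."""
    ndotl = np.clip(normals @ light_dir, 0.0, None)
    return intensity * ndotl

def fit_light_intensity(observed, normals, light_dir, lr=0.5, steps=200):
    """Recover the light intensity by repeatedly rendering and comparing
    the rendered pixels to the corresponding photographed pixels."""
    intensity = 0.1  # initial guess
    for _ in range(steps):
        rendered = render(intensity, normals, light_dir)
        # d(loss)/d(intensity) for loss = mean((rendered - observed)^2)
        grad = 2.0 * np.mean((rendered - observed) * (normals @ light_dir).clip(0.0))
        intensity -= lr * grad
    return intensity

# Synthetic "photo" produced with a known ground-truth intensity of 3.0
rng = np.random.default_rng(0)
normals = rng.normal(size=(64, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])
observed = render(3.0, normals, light_dir)
estimate = fit_light_intensity(observed, normals, light_dir)
```

Gradient descent drives the rendered pixels toward the image pixels, so the estimated intensity converges to the value that produced the photo.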
-
Patent number: 11069124
Abstract: In one embodiment, a computing system may determine a first orientation of a viewer in a three-dimensional (3D) space based on first sensor data associated with a first time. The system may render one or more first lines of pixels based on the first orientation of the viewer and display the one or more first lines. The system may determine a second orientation of the viewer in the 3D space based on second sensor data associated with a second time that is subsequent to the first time. The system may render one or more second lines of pixels based on the second orientation of the viewer and display the one or more second lines of pixels. The one or more second lines of pixels associated with the second orientation are displayed concurrently with the one or more first lines of pixels associated with the first orientation.
Type: Grant
Filed: January 21, 2020
Date of Patent: July 20, 2021
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
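The idea above — later scanlines of the same frame use a fresher head pose than earlier ones — can be sketched with a simple scan-out loop. The timing values and the scalar "pose" are illustrative assumptions; a real system would use full 6-DoF tracking and display timing.

```python
import numpy as np

def latest_pose(poses, t):
    """Return the most recent sensor sample at or before time t."""
    times = sorted(poses)
    idx = np.searchsorted(times, t, side="right") - 1
    return poses[times[idx]]

def scan_out(height, line_time, poses):
    """Render each line of pixels with the freshest pose available, so
    lines displayed later in the frame reflect a newer orientation than
    lines displayed earlier -- both are on screen concurrently."""
    frame = []
    for line in range(height):
        t = line * line_time
        frame.append((line, latest_pose(poses, t)))
    return frame

# Two sensor samples arriving mid-frame: yaw 0.0 at t=0, yaw 0.1 at t=5 ms
poses = {0.0: 0.0, 0.005: 0.1}
frame = scan_out(height=10, line_time=0.001, poses=poses)
```

Lines 0-4 are rendered with the first orientation and lines 5-9 with the second, yet all ten lines belong to one displayed frame.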
-
Patent number: 11037531
Abstract: In one embodiment, a computing system configured to generate a current frame may access a current sample dataset having incomplete pixel information of a current frame in a sequence of frames. The system may access a previous frame in the sequence of frames with complete pixel information. The system may further access a motion representation indicating pixel relationships between the current frame and the previous frame. The previous frame may then be transformed according to the motion representation. The system may generate the current frame having complete pixel information by processing the current sample dataset and the transformed previous frame using a first machine-learning model.
Type: Grant
Filed: October 24, 2019
Date of Patent: June 15, 2021
Assignee: Facebook Technologies, LLC
Inventors: Anton S. Kaplanyan, Jiahao Lin, Mikhail Okunev
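The warp-then-combine structure can be sketched without the machine-learning model: below, a nearest-neighbor warp transforms the previous frame by the motion representation, and a trivial mask-based merge stands in for the learned reconstruction network (that substitution is the key assumption here).

```python
import numpy as np

def warp(prev_frame, motion):
    """Reproject the previous frame: motion[y, x] gives the (dy, dx) offset
    of the source pixel in the previous frame for each current pixel."""
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    sy = np.clip(ys + motion[..., 0], 0, h - 1)
    sx = np.clip(xs + motion[..., 1], 0, w - 1)
    return prev_frame[sy, sx]

def reconstruct(sparse_samples, mask, prev_frame, motion):
    """Keep the freshly rendered sparse samples where they exist; elsewhere
    fall back to the warped previous frame (the ML model's job in the patent)."""
    warped = warp(prev_frame, motion)
    return np.where(mask, sparse_samples, warped)

prev_frame = np.arange(16.0).reshape(4, 4)
motion = np.zeros((4, 4, 2), dtype=int)   # static scene: zero motion
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                          # only one pixel freshly sampled
sparse = np.zeros((4, 4))
sparse[0, 0] = 99.0
out = reconstruct(sparse, mask, prev_frame, motion)
```

The sampled pixel takes its new value while every other pixel is carried over from the reprojected history.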
-
Publication number: 20210125583
Abstract: In one embodiment, a computing system configured to generate a current frame may access a current sample dataset having incomplete pixel information of a current frame in a sequence of frames. The system may access a previous frame in the sequence of frames with complete pixel information. The system may further access a motion representation indicating pixel relationships between the current frame and the previous frame. The previous frame may then be transformed according to the motion representation. The system may generate the current frame having complete pixel information by processing the current sample dataset and the transformed previous frame using a first machine-learning model.
Type: Application
Filed: October 24, 2019
Publication date: April 29, 2021
Inventors: Anton S. Kaplanyan, Jiahao Lin, Mikhail Okunev
-
Patent number: 10970911
Abstract: Embodiments disclosed herein relate to a graphics processing chip for rendering computer graphics. The graphics processing chip may include a controller configured to manage operations of the graphics processing chip in accordance with a graphics-rendering pipeline. The operations may include geometry-processing operations, rasterization operations, and shading operations. The chip may further include programmable memory components configured to store a machine-learning model configured to perform at least a portion of the shading operations. The chip may also include a plurality of processing units configured to be selectively used to perform the shading operations in accordance with the machine-learning model. The chip may also include at least one output memory configured to store image data generated using the shading operations.
Type: Grant
Filed: February 21, 2019
Date of Patent: April 6, 2021
Assignee: Facebook Technologies, LLC
Inventors: Christoph Herman Schied, Anton S. Kaplanyan
-
Patent number: 10964098
Abstract: In one embodiment, a computing system accesses a three-dimensional (3D) model of an environment, the 3D model comprising a virtual representation of an object in the environment. The computing system accesses an image of the object captured by a camera from a camera pose. The computing system accesses material parameters associated with a material property for the virtual representation of the object. The computing system renders, using the 3D model, pixels associated with the virtual representation of the object based on the material parameters associated with the virtual representation of the object, the pixels being rendered from a virtual perspective corresponding to the camera pose. The computing system determines updated material parameters associated with the material property for the virtual representation of the object based on a comparison of the rendered pixels to corresponding pixels located in the image of the object.
Type: Grant
Filed: March 5, 2019
Date of Patent: March 30, 2021
Assignee: Facebook Technologies, LLC
Inventors: Anton S. Kaplanyan, Dejan Azinovic, Matthias Niessner, Tzu-Mao Li
-
Patent number: 10902670
Abstract: Embodiments described herein pertain to a machine-learning approach for shading. A system may determine, for each of a plurality of pixels, object visibility information based on one or more objects in a virtual environment. The system may select, for each pixel, a light source from a plurality of light sources in the virtual environment. The system may determine, for each pixel, lighting information associated with the light source selected for that pixel based on the associated object visibility information. The system may generate a first latent representation of the lighting information associated with the plurality of pixels. The system may generate a second latent representation by processing the first latent representation using a first machine-learning model trained to denoise latent light representations. The system may then generate color values for the plurality of pixels by processing at least the second latent representation using a second machine-learning model.
Type: Grant
Filed: November 12, 2019
Date of Patent: January 26, 2021
Assignee: Facebook Technologies, LLC
Inventors: Christoph Hermann Schied, Anton S. Kaplanyan
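The "select one light source per pixel" step is the noisy front end that the downstream denoisers clean up. A standard way to make that one-sample estimate well behaved is importance sampling: pick each light with probability proportional to some weight such as its intensity. The weighting choice here is an assumption for illustration; the patent does not prescribe a particular selection distribution.

```python
import numpy as np

def select_lights(weights, n_pixels, rng):
    """Pick one light index per pixel, with probability proportional to
    each light's weight (e.g. its intensity) -- one lighting sample per
    pixel, as in the abstract's per-pixel light selection."""
    p = np.asarray(weights, dtype=float)
    p /= p.sum()
    return rng.choice(len(p), size=n_pixels, p=p)

rng = np.random.default_rng(0)
# Two lights, the second three times as bright as the first
choices = select_lights([1.0, 3.0], n_pixels=100_000, rng=rng)
frac_bright = np.mean(choices == 1)
```

Roughly three quarters of the pixels sample the brighter light, so the per-pixel estimates concentrate effort where the energy is before denoising.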
-
Patent number: 10846888
Abstract: In one embodiment, a method for generating completed frames from sparse data may access sample datasets associated with a sequence of frames, respectively. Each sample dataset may comprise incomplete pixel information of the associated frame. The system may generate, using a first machine-learning model, the sequence of frames, each having complete pixel information, based on the sample datasets. The first machine-learning model is configured to retain spatio-temporal representations associated with the generated frames. The system may then access a next sample dataset comprising incomplete pixel information of a next frame after the sequence of frames. The system may generate, using the first machine-learning model, the next frame based on the next sample dataset.
Type: Grant
Filed: November 15, 2018
Date of Patent: November 24, 2020
Assignee: Facebook Technologies, LLC
Inventors: Anton S. Kaplanyan, Anton Sochenov, Thomas Sebastian Leimkuhler, Warren Andrew Hunt
-
Publication number: 20200311881
Abstract: In one embodiment, a computing system may receive current eye-tracking data associated with a user of a head-mounted display. The system may dynamically adjust a focal length of the head-mounted display based on the current eye-tracking data. The system may generate an in-focus image of a scene and a corresponding depth map of the scene. The system may generate a circle-of-confusion map for the scene based on the depth map. The circle-of-confusion map encodes a desired focal surface in the scene. The system may generate, using a machine-learning model, an output image with a synthesized defocus-blur effect by processing the in-focus image, the corresponding depth map, and the circle-of-confusion map of the scene. The system may display the output image with the synthesized defocus-blur effect to the user via the head-mounted display having the adjusted focal length.
Type: Application
Filed: June 16, 2020
Publication date: October 1, 2020
Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
-
Publication number: 20200273231
Abstract: Embodiments disclosed herein relate to a graphics processing chip for rendering computer graphics. The graphics processing chip may include a controller configured to manage operations of the graphics processing chip in accordance with a graphics-rendering pipeline. The operations may include geometry-processing operations, rasterization operations, and shading operations. The chip may further include programmable memory components configured to store a machine-learning model configured to perform at least a portion of the shading operations. The chip may also include a plurality of processing units configured to be selectively used to perform the shading operations in accordance with the machine-learning model. The chip may also include at least one output memory configured to store image data generated using the shading operations.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Inventors: Christoph Herman Schied, Anton S. Kaplanyan
-
Patent number: 10740876
Abstract: In one embodiment, a system may access a training sample from a training dataset, including a training image of a scene and a corresponding depth map. The system may access a circle-of-confusion map for the scene, which is generated based on the depth map and encodes a desired focal surface in the scene. The system may generate an output image by processing the training image, the corresponding depth map, and the corresponding circle-of-confusion map using a machine-learning model. The system may update the machine-learning model based on a comparison between the generated output image and a target image depicting the scene with a desired defocus-blur effect. The updated machine-learning model is configured to generate images with defocus-blur effect based on input images and corresponding depth maps.
Type: Grant
Filed: July 19, 2018
Date of Patent: August 11, 2020
Assignee: Facebook Technologies, LLC
Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
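A circle-of-confusion map like the one this entry describes can be derived from a depth map with the standard thin-lens formula. The formula below is the textbook version, not taken from the patent, and the numeric parameters (a 50 mm lens focused at 1 m with a 10 mm aperture) are illustrative assumptions.

```python
import numpy as np

def circle_of_confusion(depth, focus_depth, focal_length, aperture):
    """Thin-lens circle-of-confusion diameter per pixel, in the same units
    as focal_length. Zero at the focus depth; grows with defocus."""
    return np.abs(aperture * focal_length * (depth - focus_depth)
                  / (depth * (focus_depth - focal_length)))

depth = np.array([0.5, 1.0, 2.0, 4.0])      # scene depths in metres
coc = circle_of_confusion(depth, focus_depth=1.0,
                          focal_length=0.05, aperture=0.01)
```

Pixels at the focus depth get a zero-diameter circle (sharp), and the diameter grows monotonically with distance from the focal surface, which is exactly the per-pixel blur target the network is trained against.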
-
Patent number: 10699467
Abstract: In one embodiment, a method for determining visibility may perform intersection tests using block beams, tile beams, and rays. First, a computing system may project a block beam to test for intersection with a first bounding volume (BV) in a bounding volume hierarchy. If the beam fully contains the first BV, the system may test for more granular intersections with the first BV by projecting smaller tile beams contained within the block beam. Upon determining that the first BV partially intersects a tile beam, the system may project the tile beam against a second BV contained within the first BV. If the tile beam fully contains the second BV, the system may test for intersection using rays contained within the tile beam. The system may project procedurally-generated rays to test whether they intersect with objects contained within the second BV. Information associated with intersections may be used to render a computer-generated scene.
Type: Grant
Filed: April 16, 2018
Date of Patent: June 30, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
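The coarse-to-fine beam traversal can be sketched in one dimension. As a simplifying assumption, a beam is modeled as an interval of parallel rays and a bounding volume as its interval footprint, so "miss / partial / contains" reduces to interval tests; the real method works with 3D beams and a bounding volume hierarchy.

```python
def classify(beam, bv):
    """beam and bv are (x0, x1) interval footprints of an orthographic
    beam and a bounding volume; returns the overlap class."""
    if bv[1] < beam[0] or bv[0] > beam[1]:
        return "miss"
    if beam[0] <= bv[0] and bv[1] <= beam[1]:
        return "contains"
    return "partial"

def traverse(block_beam, bv, n_tiles=4):
    """Coarse-to-fine visibility: test the wide block beam first and, when
    it fully contains the volume, refine into narrower tile beams (the
    full algorithm then refines surviving tiles into individual rays)."""
    verdict = classify(block_beam, bv)
    if verdict != "contains":
        return [(block_beam, verdict)]
    x0, x1 = block_beam
    step = (x1 - x0) / n_tiles
    tiles = [(x0 + i * step, x0 + (i + 1) * step) for i in range(n_tiles)]
    return [(t, classify(t, bv)) for t in tiles]

result = traverse((0.0, 4.0), (1.2, 1.8))
verdicts = [v for _, v in result]
```

One cheap block-beam test culls three of the four tile beams, which is the point of the hierarchy: most fine-grained ray tests are never performed.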
-
Patent number: 10664953
Abstract: In one embodiment, a system may access a training sample from a training dataset. The training sample may include a training image of a scene and a corresponding depth map of the scene. The system may generate a plurality of decomposition images by processing the training image and the corresponding depth map using a machine-learning model. The system may generate a focal stack based on the plurality of decomposition images and update the machine-learning model based on a comparison between the generated focal stack and a target focal stack associated with the training sample. The updated machine-learning model is configured to generate decomposition images with defocus-blur effect based on input images and corresponding depth maps.
Type: Grant
Filed: July 19, 2018
Date of Patent: May 26, 2020
Assignee: Facebook Technologies, LLC
Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
-
Publication number: 20200160587
Abstract: In one embodiment, a computing system may determine a first orientation of a viewer in a three-dimensional (3D) space based on first sensor data associated with a first time. The system may render one or more first lines of pixels based on the first orientation of the viewer and display the one or more first lines. The system may determine a second orientation of the viewer in the 3D space based on second sensor data associated with a second time that is subsequent to the first time. The system may render one or more second lines of pixels based on the second orientation of the viewer and display the one or more second lines of pixels. The one or more second lines of pixels associated with the second orientation are displayed concurrently with the one or more first lines of pixels associated with the first orientation.
Type: Application
Filed: January 21, 2020
Publication date: May 21, 2020
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20200098139
Abstract: In one embodiment, a method for generating completed frames from sparse data may access sample datasets associated with a sequence of frames, respectively. Each sample dataset may comprise incomplete pixel information of the associated frame. The system may generate, using a first machine-learning model, the sequence of frames, each having complete pixel information, based on the sample datasets. The first machine-learning model is configured to retain spatio-temporal representations associated with the generated frames. The system may then access a next sample dataset comprising incomplete pixel information of a next frame after the sequence of frames. The system may generate, using the first machine-learning model, the next frame based on the next sample dataset.
Type: Application
Filed: November 15, 2018
Publication date: March 26, 2020
Inventors: Anton S. Kaplanyan, Anton Sochenov, Thomas Sebastian Leimkuhler, Warren Andrew Hunt
-
Patent number: 10600167
Abstract: A method, computer readable medium, and system are disclosed for performing spatiotemporal filtering. The method includes the steps of applying, utilizing a processor, a temporal filter of a filtering pipeline to a current image frame, using a temporal reprojection, to obtain a color and auxiliary information for each pixel within the current image frame, providing the auxiliary information for each pixel within the current image frame to one or more subsequent filters of the filtering pipeline, and creating a reconstructed image for the current image frame, utilizing the one or more subsequent filters of the filtering pipeline.
Type: Grant
Filed: January 18, 2018
Date of Patent: March 24, 2020
Assignee: NVIDIA CORPORATION
Inventors: Christoph H. Schied, Marco Salvi, Anton S. Kaplanyan, Aaron Eliot Lefohn, John Matthew Burgess, Anjul Patney, Christopher Ryan Wyman
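The temporal-reprojection step can be sketched as an exponentially weighted blend of the current noisy frame with warped history, carrying a per-pixel sample count as the auxiliary information handed to later filters. The specific blend-weight rule (1/N clamped to a floor of 0.2) and integer motion vectors are illustrative assumptions, not a restatement of the claims.

```python
import numpy as np

def temporal_filter(history, history_len, current, motion, alpha_min=0.2):
    """Blend the current noisy frame with the reprojected history and
    update per-pixel sample counts (auxiliary info for later filters)."""
    h, w = current.shape
    ys, xs = np.indices((h, w))
    sy = np.clip(ys + motion[..., 0], 0, h - 1)
    sx = np.clip(xs + motion[..., 1], 0, w - 1)
    reproj = history[sy, sx]
    new_len = history_len[sy, sx] + 1
    # Blend weight 1/N, clamped so stale history cannot dominate forever
    alpha = np.maximum(1.0 / new_len, alpha_min)
    out = alpha * current + (1.0 - alpha) * reproj
    return out, new_len

rng = np.random.default_rng(1)
history = np.zeros((8, 8))
history_len = np.zeros((8, 8))
motion = np.zeros((8, 8, 2), dtype=int)     # static scene
for _ in range(50):                          # noisy signal with true mean 1.0
    noisy = 1.0 + rng.normal(0.0, 0.3, size=(8, 8))
    history, history_len = temporal_filter(history, history_len, noisy, motion)
```

After a few dozen frames the accumulated color settles near the true signal while the noise variance is strongly reduced, and the sample-count buffer tells downstream spatial filters how trustworthy each pixel already is.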
-
Publication number: 20200043219
Abstract: In one embodiment, a computing system may determine an orientation in a three-dimensional (3D) space and generate a plurality of coordinates in the 3D space based on the determined orientation. The system may access pre-determined ray trajectory definitions associated with the plurality of coordinates. The system may determine visibility information of one or more objects defined within the 3D space by projecting rays through the plurality of coordinates, wherein trajectories of the rays from the plurality of coordinates are determined based on the pre-determined ray trajectory definitions. The system may then generate an image of the one or more objects based on the determined visibility information of the one or more objects.
Type: Application
Filed: October 7, 2019
Publication date: February 6, 2020
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Patent number: 10553012
Abstract: In one embodiment, a computer system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate ray footprints in the 3D space based on the determined orientation. For at least one of the ray footprints, the system may identify a corresponding number of subsamples to generate for that ray footprint and generate one or more coordinates in the ray footprint based on the corresponding number of subsamples. The system may determine visibility of one or more objects defined within the 3D space by projecting a ray from each of the one or more coordinates to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Grant
Filed: April 16, 2018
Date of Patent: February 4, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
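Varying the subsample count per ray footprint is what enables foveated ray casting: dense sampling near the gaze point, sparse sampling in the periphery. The thresholds and counts below are invented for illustration; the patent leaves the per-footprint count policy open.

```python
import numpy as np

def subsample_count(dist_from_gaze, max_samples=4):
    """More rays per footprint near the gaze point, fewer in the
    periphery -- an illustrative foveation schedule."""
    if dist_from_gaze < 0.2:
        return max_samples
    if dist_from_gaze < 0.5:
        return max_samples // 2
    return 1

def footprint_coords(center, n, rng):
    """Jittered subsample coordinates inside a unit-square ray footprint;
    one ray is projected from each coordinate."""
    return center + (rng.random((n, 2)) - 0.5)

rng = np.random.default_rng(0)
counts = [subsample_count(d) for d in (0.1, 0.3, 0.9)]
foveal_pts = footprint_coords(np.array([0.5, 0.5]), counts[0], rng)
```

A footprint at the fovea gets four jittered ray origins while a peripheral footprint gets one, so ray budget tracks visual acuity.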
-
Patent number: 10553013
Abstract: In one embodiment, a computing system may determine a first orientation in a 3D space based on first sensor data generated at a first time. The system may determine a first visibility of an object in the 3D space by projecting rays based on the first orientation to test for intersection. The system may generate first lines of pixels based on the determined first visibility and output the first lines of pixels for display. The system may determine a second orientation based on second sensor data generated at a second time. The system may determine a second visibility of the object by projecting rays based on the second orientation to test for intersection. The system may generate second lines of pixels based on the determined second visibility and output the second lines of pixels for display. The second lines of pixels are displayed concurrently with the first lines of pixels.
Type: Grant
Filed: April 16, 2018
Date of Patent: February 4, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Patent number: 10529117
Abstract: In one embodiment, a computing system may receive a focal surface map, which may be specified by an application. The system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate first coordinates in the 3D space based on the determined orientation and generate second coordinates using the first coordinates and the focal surface map. Each of the first coordinates is associated with one of the second coordinates. For each of the first coordinates, the system may determine visibility of one or more objects defined within the 3D space by projecting a ray from the first coordinate through the associated second coordinate to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Grant
Filed: April 16, 2018
Date of Patent: January 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
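The pairing of first and second coordinates can be sketched as a lookup: each image-plane coordinate fetches a depth from the application-specified focal surface map, and the ray is the segment from the first coordinate through the resulting second coordinate. Representing the map as a per-coordinate depth dictionary is a simplifying assumption for illustration.

```python
import numpy as np

def focal_surface_rays(first_coords, focal_surface):
    """For each first (image-plane) coordinate, look up the focal-surface
    depth to build its associated second coordinate, then form the ray
    direction from the first coordinate through the second."""
    second, rays = [], []
    for (x, y) in first_coords:
        z = focal_surface[(x, y)]            # app-specified focal surface map
        second.append((x, y, z))
        rays.append(np.array([x, y, z]) - np.array([x, y, 0.0]))
    return second, rays

# Toy focal surface: depth 1.0 under one pixel, depth 2.0 under its neighbor
surface = {(0, 0): 1.0, (1, 0): 2.0}
second, rays = focal_surface_rays([(0, 0), (1, 0)], surface)
```

Each ray is then intersection-tested against the scene; because the second coordinates follow the focal surface, the ray distribution bends with it rather than forming a single planar frustum.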