Patents by Inventor Fabian Langguth
Fabian Langguth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240119568
Abstract: A processor accesses a depth map and a first image of a scene generated using one or more sensors of an artificial reality device. The processor generates, based on the first image, segmentation masks respectively associated with a plurality of object types. The segmentation masks segment the depth map into a plurality of segmented depth maps respectively associated with the object types. The processor generates meshes using, respectively, the segmented depth maps. For each eye of the user, the processor captures a second image and generates, based on the second image, segmentation information. The processor warps the plurality of meshes to generate warped meshes for the eye, and then generates an eye-specific mesh for the eye by compositing the warped meshes according to the segmentation information. The processor renders an output image for the eye using the second image and the eye-specific mesh.
Type: Application
Filed: October 10, 2023
Publication date: April 11, 2024
Inventors: Andrey Tovchigrechko, Fabian Langguth, Alexander Sorkine Hornung, Oskar Linde, Christian Forster
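The segment-then-composite idea in this abstract can be illustrated with a toy sketch. This is not the patented implementation; the function names and the NaN "no data" convention are illustrative assumptions. Boolean masks split one depth map into per-object-type depth maps, and compositing recombines them:

```python
import numpy as np

def segment_depth_map(depth, masks):
    """Split an (H, W) depth map into one masked copy per object type."""
    segmented = []
    for mask in masks:
        seg = np.full_like(depth, np.nan)   # NaN marks "not this object type"
        seg[mask] = depth[mask]
        segmented.append(seg)
    return segmented

def composite(segmented, masks):
    """Recombine per-type depth maps according to the same masks."""
    out = np.full_like(segmented[0], np.nan)
    for seg, mask in zip(segmented, masks):
        out[mask] = seg[mask]
    return out
```

With masks that partition the image, compositing the segmented maps recovers the original depth map exactly.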
-
Publication number: 20230252599
Abstract: Each image in a sequence of images includes three-dimensional locations of object features depicted in the image and a first camera position of the camera when the image is captured. A gap is detected between the first camera positions associated with a first continuous subset and the first camera positions associated with a second continuous subset, and the first camera positions associated with the second continuous subset are adjusted to close the gap. A view path for a virtual camera is determined based on the first camera positions and the adjusted first camera positions. Second camera positions are determined for the virtual camera. For each of the second camera positions, one of the images in the sequence is selected and warped using its first camera position, the second camera position, and the three-dimensional locations of object features depicted in the selected image. A sequence of the warped images is outputted.
Type: Application
Filed: April 11, 2023
Publication date: August 10, 2023
Inventors: Andrei Viktorovich Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
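One simple way to close a gap between two continuous subsets of camera positions is to translate the second subset so that it continues the first at a typical inter-frame step. This is an illustrative sketch only; the threshold and step heuristic are assumptions, not the patent's method:

```python
import numpy as np

def close_gap(first_subset, second_subset, max_step=0.5):
    """Translate the second subset of camera positions to close a large gap."""
    a = np.asarray(first_subset, dtype=float)
    b = np.asarray(second_subset, dtype=float)
    gap = b[0] - a[-1]                  # jump between the two subsets
    if np.linalg.norm(gap) > max_step:
        typical = a[-1] - a[-2]         # reuse the last step of the first subset
        b = b - gap + typical           # shift so b continues a
    return b
```

The relative motion within the second subset is preserved; only its absolute placement changes.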
-
Publication number: 20230245404
Abstract: In one embodiment, a computing system may monitor one or more regions of a first 3D model of an environment to determine a frequency at which each region changes. The computing system may determine that a first region of the one or more regions is static based on the frequency at which that region changes. The computing system may delay updating the first 3D model from a first time period to a second time period based on comparisons between a second 3D model and first depth measurements of the first region. The computing system may detect whether the first region changed after the second time period based on comparisons between the second 3D model and the first depth measurements of the first region. The computing system may, in response to detecting a change in the first region, update the first 3D model of the first region.
Type: Application
Filed: February 13, 2023
Publication date: August 3, 2023
Inventors: Fabian Langguth, Alexander Sorkine Hornung
-
Publication number: 20230169737
Abstract: In one embodiment, a method includes receiving sensor data of a scene captured using one or more sensors, generating (1) a number of virtual surfaces representing a number of detected planar surfaces in the scene and (2) a point cloud representing detected features of objects in the scene based on the sensor data, assigning each point in the point cloud to one or more of the number of virtual surfaces, generating occupancy volumes for each of the number of virtual surfaces based on the points assigned to the virtual surface, generating a datastore including the number of virtual surfaces, the occupancy volumes of each of the number of virtual surfaces, and a spatial relationship between the number of virtual surfaces, receiving a query, and sending a response to the query, the response including an identified subset of the plurality of virtual surfaces in the datastore that satisfy the query.
Type: Application
Filed: November 30, 2022
Publication date: June 1, 2023
Inventors: Yuichi Taguchi, Fabian Langguth
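The point-to-surface assignment step can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the plane representation (unit normal plus offset, with plane equation n·x = d) and the distance threshold are assumptions. A point is assigned to every plane it lies within `max_dist` of, matching the abstract's "one or more" surfaces per point:

```python
import numpy as np

def assign_points_to_planes(points, planes, max_dist=0.05):
    """Assign each point index to every plane within `max_dist` of it.

    planes: list of (unit_normal, offset) pairs for the plane n·x = d.
    Returns {plane_index: [point indices]}.
    """
    assignments = {i: [] for i in range(len(planes))}
    for idx, p in enumerate(points):
        for i, (n, d) in enumerate(planes):
            if abs(np.dot(n, p) - d) <= max_dist:
                assignments[i].append(idx)
    return assignments
```

Occupancy volumes could then be built per plane from only the points assigned to it.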
-
Patent number: 11651473
Abstract: In one embodiment, a method includes generating an outputted sequence of warped images from a captured sequence of images. Using this captured sequence of images, a computing system may determine one or more three-dimensional locations of object features and a corresponding camera position for each image in the captured sequence of images. Utilizing the camera positions for each image, the computing system may determine a view path representing the perspective of a virtual camera. The computing system may identify one or more virtual camera positions for the virtual camera located on the view path, and subsequently warp one or more images from the sequence of captured images to represent the perspective of the virtual camera at each of the respective virtual camera positions. This results in a sequence of warped images that may be outputted for viewing and interaction on a client device.
Type: Grant
Filed: May 22, 2020
Date of Patent: May 16, 2023
Assignee: Meta Platforms, Inc.
Inventors: Andrei Viktorovich Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
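A minimal sketch of the view-path idea, with assumed smoothing and selection heuristics (the patent does not specify either): the virtual camera's path is a smoothed version of the captured camera positions, and each virtual position borrows the closest captured image as the source for warping:

```python
import numpy as np

def smooth_view_path(cam_positions, window=3):
    """Moving-average smoothing of captured camera positions per coordinate."""
    pos = np.asarray(cam_positions, dtype=float)
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(pos[:, k], kernel, mode='same')
                            for k in range(pos.shape[1])])

def nearest_source_image(virtual_pos, cam_positions):
    """Index of the captured image whose camera is closest to the virtual one."""
    d = np.linalg.norm(np.asarray(cam_positions, dtype=float) - virtual_pos, axis=1)
    return int(np.argmin(d))
```

The actual warp would then use the selected image's camera position, the virtual position, and the 3D feature locations; that step is omitted here.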
-
Patent number: 11580703
Abstract: In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. The computing system may, in response to determining that the region is static, detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. The computing system may, in response to detecting a change in the region, update the first 3D model of the region.
Type: Grant
Filed: April 1, 2021
Date of Patent: February 14, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Fabian Langguth, Alexander Sorkine Hornung
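A toy sketch of the low-resolution change check (an assumption, not the patented pipeline; the tolerance values and block-averaging downsampling are illustrative): once a region is deemed static, new depth measurements are block-averaged down to the low-resolution model's grid and compared cheaply against it:

```python
import numpy as np

def is_static(model_depth, depth_samples, tol=0.01):
    """Static if all measurements stay within `tol` of the model's depth."""
    return bool(np.all(np.abs(depth_samples - model_depth) < tol))

def changed_low_res(low_res_model, new_depth, factor, tol=0.05):
    """Detect change by comparing block-averaged depth to a low-res model."""
    h, w = low_res_model.shape
    blocks = new_depth.reshape(h, factor, w, factor).mean(axis=(1, 3))
    return bool(np.any(np.abs(blocks - low_res_model) >= tol))
```

The comparison touches only `h * w` cells instead of the full-resolution grid, which is the point of keeping a lower-resolution second model.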
-
Patent number: 11521361
Abstract: In one embodiment, a method includes receiving sensor data of a scene captured using one or more sensors, generating (1) a number of virtual surfaces representing a number of detected planar surfaces in the scene and (2) a point cloud representing detected features of objects in the scene based on the sensor data, assigning each point in the point cloud to one or more of the number of virtual surfaces, generating occupancy volumes for each of the number of virtual surfaces based on the points assigned to the virtual surface, generating a datastore including the number of virtual surfaces, the occupancy volumes of each of the number of virtual surfaces, and a spatial relationship between the number of virtual surfaces, receiving a query, and sending a response to the query, the response including an identified subset of the plurality of virtual surfaces in the datastore that satisfy the query.
Type: Grant
Filed: July 1, 2021
Date of Patent: December 6, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Yuichi Taguchi, Fabian Langguth
-
Patent number: 11315329
Abstract: In one embodiment, a method includes accessing a plurality of points, wherein each point (1) corresponds to a spatial location associated with an observed feature of a physical environment and (2) is associated with a patch representing the observed feature, determining a density associated with each of the plurality of points based on the spatial locations of the plurality of points, scaling the patch associated with each of the plurality of points based on the density associated with the point, and reconstructing a scene of the physical environment based on at least the scaled patches.
Type: Grant
Filed: February 25, 2020
Date of Patent: April 26, 2022
Assignee: Facebook Technologies, LLC
Inventors: Alexander Sorkine Hornung, Alessia Marra, Fabian Langguth, Matthew James Alderman
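The density-based patch scaling can be illustrated with a short sketch. The density estimate and scaling rule here are assumptions, not the patent's: density is proxied by the distance to the k-th nearest neighbour, so each point's patch radius grows where points are sparse and the reconstructed surface stays hole-free:

```python
import numpy as np

def patch_radii(points, k=3, scale=1.0):
    """Patch radius per point, proportional to its k-th neighbour distance."""
    pts = np.asarray(points, dtype=float)
    # pairwise distance matrix (fine for small point sets; a k-d tree
    # would be used at scale)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)   # column 0 is each point's distance to itself
    return scale * d_sorted[:, k]   # k-th neighbour distance per point
```

Points in a dense cluster get small patches; an isolated point gets a much larger one.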
-
Publication number: 20210366075
Abstract: In one embodiment, a method includes generating an outputted sequence of warped images from a captured sequence of images. Using this captured sequence of images, a computing system may determine one or more three-dimensional locations of object features and a corresponding camera position for each image in the captured sequence of images. Utilizing the camera positions for each image, the computing system may determine a view path representing the perspective of a virtual camera. The computing system may identify one or more virtual camera positions for the virtual camera located on the view path, and subsequently warp one or more images from the sequence of captured images to represent the perspective of the virtual camera at each of the respective virtual camera positions. This results in a sequence of warped images that may be outputted for viewing and interaction on a client device.
Type: Application
Filed: May 22, 2020
Publication date: November 25, 2021
Inventors: Andrei Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
-
Patent number: 9396544
Abstract: Techniques are disclosed for reconstructing the surface geometry of an object using a single image. A computing device is configured to reconstruct a surface for a colored object from a single image using surface integrability as an additional constraint. The image is captured under an illumination of three fixed colored lights that correspond to three color channels, such as red, green and blue (RGB). The RGB image can be separated into three grayscale images, with different lighting for each image, and the geometry can be reconstructed by computing the surface normals of these separate images. Depth can be estimated by integrating the surface normals.
Type: Grant
Filed: January 8, 2014
Date of Patent: July 19, 2016
Assignee: Adobe Systems Corporation
Inventors: Fabian Langguth, Kalyan Krishna Sunkavalli, Sunil Hadap
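The normal-recovery step the abstract describes is classic three-light Lambertian photometric stereo; a hedged sketch follows (variable names are illustrative, and the integrability constraint and depth integration are omitted). Per pixel, I_i = albedo * (l_i · n), so stacking the three light directions as rows of a matrix L gives L (albedo * n) = I, solvable by inverting L:

```python
import numpy as np

def photometric_normals(intensities, light_dirs):
    """Recover per-pixel surface normals and albedo from three lights.

    intensities: (H, W, 3) array, one grayscale image per coloured light.
    light_dirs:  (3, 3) array, row i is the direction of light i.
    """
    L_inv = np.linalg.inv(np.asarray(light_dirs, dtype=float))
    b = np.einsum('ij,hwj->hwi', L_inv, intensities)   # b = albedo * normal
    albedo = np.linalg.norm(b, axis=-1, keepdims=True)
    normals = np.where(albedo > 0, b / np.maximum(albedo, 1e-9), 0.0)
    return normals, albedo[..., 0]
```

Depth would then be estimated by integrating the recovered normal field, with integrability constraining the solution as the abstract notes.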
-
Patent number: 9373189
Abstract: This document describes techniques and apparatuses for constructing three dimensional (3D) surfaces for multi-colored objects. In some aspects, these techniques determine, from a color image and coarse depth information, an illumination model and albedo for a multi-color object. The coarse depth information may then be refined based on the illumination model and combined with the albedo to provide a 3D surface of the multi-color object.
Type: Grant
Filed: November 13, 2014
Date of Patent: June 21, 2016
Assignee: Adobe Systems Incorporated
Inventors: Fabian Langguth, Kalyan K. Sunkavalli, Sunil Hadap
-
Publication number: 20160140753
Abstract: This document describes techniques and apparatuses for constructing three dimensional (3D) surfaces for multi-colored objects. In some aspects, these techniques determine, from a color image and coarse depth information, an illumination model and albedo for a multi-color object. The coarse depth information may then be refined based on the illumination model and combined with the albedo to provide a 3D surface of the multi-color object.
Type: Application
Filed: November 13, 2014
Publication date: May 19, 2016
Inventors: Fabian Langguth, Kalyan K. Sunkavalli, Sunil Hadap
-
Publication number: 20150193973
Abstract: Techniques are disclosed for reconstructing the surface geometry of an object using a single image. A computing device is configured to reconstruct a surface for a colored object from a single image using surface integrability as an additional constraint. The image is captured under an illumination of three fixed colored lights that correspond to three color channels, such as red, green and blue (RGB). The RGB image can be separated into three grayscale images, with different lighting for each image, and the geometry can be reconstructed by computing the surface normals of these separate images. Depth can be estimated by integrating the surface normals.
Type: Application
Filed: January 8, 2014
Publication date: July 9, 2015
Applicant: Adobe Systems Incorporated
Inventors: Fabian Langguth, Kalyan Krishna Sunkavalli, Sunil Hadap