Patents by Inventor Gian Diego Tipaldi
Gian Diego Tipaldi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20250124648
Abstract: In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.
Type: Application
Filed: August 16, 2024
Publication date: April 17, 2025
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

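To make the padded-boundary idea concrete, here is a minimal NumPy sketch: the object's pixels get one representative depth taken from the measurements, a dilated rim around them gets a slightly farther depth, and both are compared against the virtual object's depth buffer. The function names, the median statistic, the square dilation window, and the fixed padding margin are illustrative assumptions, not the method claimed in the filing.

```python
import numpy as np

def dilate(mask: np.ndarray, radius: int) -> np.ndarray:
    """Binary dilation by a square window, done with shifts (edges wrap around,
    which is good enough for a sketch)."""
    out = mask.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def passthrough_occlusion(object_mask, depth_map, virtual_depth,
                          pad_radius=4, pad_margin=0.25):
    """Decide, per pixel, whether the physical object (or its padded rim)
    occludes the virtual object.

    object_mask   -- HxW boolean mask of the "first pixels" (the physical object)
    depth_map     -- HxW depth measurements of the environment, in metres
    virtual_depth -- HxW depth buffer of the rendered virtual object (inf = empty)
    """
    # One representative depth for the whole object, taken from its measurements.
    object_depth = float(np.median(depth_map[object_mask]))
    # "Second pixels": a padded boundary around the object, placed farther away.
    boundary_mask = dilate(object_mask, pad_radius) & ~object_mask
    boundary_depth = object_depth + pad_margin
    # Per-pixel depth used on the physical side of the occlusion test.
    physical_depth = np.full(depth_map.shape, np.inf)
    physical_depth[object_mask] = object_depth
    physical_depth[boundary_mask] = boundary_depth
    # The physical content wins wherever it is nearer than the virtual content.
    return physical_depth < virtual_depth
```
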
Patent number: 12067673
Abstract: In one embodiment, a computing system determines one or more depth measurements associated with a first physical object. The system captures an image including image data associated with the first physical object. The system identifies and associates a plurality of first pixels with a first representative depth value based on the one or more depth measurements. The system determines, for an output pixel of an output image, that (1) a portion of a virtual object is visible from a viewpoint and (2) the output pixel overlaps with a portion of the first physical object. The system determines that the portion of the first physical object is associated with the plurality of first pixels and renders the output image from the viewpoint. Occlusion at the output pixel is determined based on a comparison between the first representative depth value and a depth value associated with the portion of the virtual object.
Type: Grant
Filed: October 13, 2022
Date of Patent: August 20, 2024
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

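The per-pixel decision described here can be pictured as a compositing step between the passthrough image and the virtual render. Below is a small sketch under assumed inputs (an RGB passthrough frame, an RGB virtual render with a depth buffer, a mask of the physical object's pixels, and its single representative depth); the names are hypothetical, not taken from the patent.

```python
import numpy as np

def composite(passthrough_rgb, virtual_rgb, virtual_depth,
              object_mask, representative_depth):
    """Per output pixel, decide whether the virtual object or the physical
    object seen in the passthrough feed is in front.

    virtual_depth        -- HxW depth buffer of the virtual render (inf = nothing drawn)
    object_mask          -- HxW boolean mask of the physical object's pixels
    representative_depth -- one depth value standing in for the whole object
    """
    out = passthrough_rgb.copy()
    virtual_visible = np.isfinite(virtual_depth)        # virtual content exists here
    # Pixels where only the virtual object is present: always show it.
    virtual_only = virtual_visible & ~object_mask
    out[virtual_only] = virtual_rgb[virtual_only]
    # Pixels where both overlap: compare the virtual depth against the single
    # representative depth of the physical object.
    overlap = virtual_visible & object_mask
    virtual_in_front = overlap & (virtual_depth < representative_depth)
    out[virtual_in_front] = virtual_rgb[virtual_in_front]
    return out
```
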
Publication number: 20230260228
Abstract: One example method for adaptive mixed reality includes estimating ambient lighting conditions using one or more sensors; calibrating each of the one or more sensors spatially or spectrally; determining a color point based on the ambient lighting conditions; and modifying a mixed reality display based on the color point, wherein the mixed reality display comprises virtual reality (VR) content and pass-through (PT) content.
Type: Application
Filed: February 15, 2023
Publication date: August 17, 2023
Inventors: Zhang Jia, Jan Oberländer, Stefan Rüffer, Gian Diego Tipaldi

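One way to read this pipeline: a calibrated ambient-light reading is reduced to a chromaticity color point, and that point drives a tint applied to the VR content so it sits comfortably next to the pass-through content. The sketch below assumes a three-channel sensor reporting linear RGB; the sRGB-to-XYZ matrix and the von-Kries-style gains are standard colour-science stand-ins, not necessarily the calibration used in the filing.

```python
import numpy as np

# Linear sRGB (D65) -> CIE XYZ; stands in for the sensor's spectral calibration.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def ambient_color_point(sensor_rgb):
    """Chromaticity (x, y) of the ambient light from a calibrated linear-RGB reading."""
    X, Y, Z = RGB_TO_XYZ @ np.asarray(sensor_rgb, dtype=float)
    total = X + Y + Z
    return X / total, Y / total

def vr_tint_gains(sensor_rgb):
    """Von-Kries-style per-channel gains that pull rendered VR content toward
    the ambient colour so it blends with the pass-through content."""
    rgb = np.asarray(sensor_rgb, dtype=float)
    return rgb / rgb.max()            # strongest channel stays at 1.0

if __name__ == "__main__":
    reading = [0.82, 0.74, 0.55]      # hypothetical warm indoor lighting
    print("colour point (x, y):", ambient_color_point(reading))
    print("VR tint gains (R, G, B):", vr_tint_gains(reading))
```
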
Publication number: 20230215084
Abstract: In one embodiment, a computing system determines one or more depth measurements associated with a first physical object. The system captures an image including image data associated with the first physical object. The system identifies and associates a plurality of first pixels with a first representative depth value based on the one or more depth measurements. The system determines, for an output pixel of an output image, that (1) a portion of a virtual object is visible from a viewpoint and (2) the output pixel overlaps with a portion of the first physical object. The system determines that the portion of the first physical object is associated with the plurality of first pixels and renders the output image from the viewpoint. Occlusion at the output pixel is determined based on a comparison between the first representative depth value and a depth value associated with the portion of the virtual object.
Type: Application
Filed: October 13, 2022
Publication date: July 6, 2023
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

Patent number: 11501488
Abstract: In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.
Type: Grant
Filed: February 16, 2021
Date of Patent: November 15, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

Patent number: 11410387
Abstract: In one embodiment for generating passthrough, a computing system may access images of an environment captured by cameras of a device worn by a user. The system may generate, based on the images, depth measurements of objects in the environment. The system may generate a mesh covering a field of view of the user and then update the mesh based on the depth measurements to represent a contour of the objects in the environment. The system may determine a first viewpoint of a first eye of the user and render a first output image based on the first viewpoint and the updated mesh. The system may then display the first output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
Type: Grant
Filed: January 17, 2020
Date of Patent: August 9, 2022
Assignee: Facebook Technologies, LLC
Inventors: Matthew James Alderman, Gaurav Chaurasia, Paul Timothy Furgale, Lingwen Gan, Alexander Sorkine Hornung, Alexandru-Eugen Ichim, Arthur Nieuwoudt, Jan Oberländer, Gian Diego Tipaldi

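The geometry here can be sketched as a regular grid of view rays spanning the field of view, pushed out to the measured depths so the mesh follows the contours of the scene; each eye then renders the same mesh from its own viewpoint. In the sketch below the grid resolution, the pinhole-style ray construction, and the flat toy depth map are assumptions made for illustration, not the construction used in the patent.

```python
import numpy as np

def make_fov_mesh(rows, cols, fov_x_deg, fov_y_deg):
    """Regular grid of unit view rays covering the user's field of view."""
    ax = np.radians(np.linspace(-fov_x_deg / 2, fov_x_deg / 2, cols))
    ay = np.radians(np.linspace(-fov_y_deg / 2, fov_y_deg / 2, rows))
    yaw, pitch = np.meshgrid(ax, ay)
    rays = np.stack([np.tan(yaw), np.tan(pitch), np.ones_like(yaw)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def update_mesh_with_depth(rays, depth_grid):
    """Push every vertex out along its ray to the measured depth, so the mesh
    follows the contour of the objects in the environment."""
    return rays * depth_grid[..., None]

def grid_triangles(rows, cols):
    """Two triangles per grid cell; the same faces are rendered for each eye."""
    idx = np.arange(rows * cols).reshape(rows, cols)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    return np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])

if __name__ == "__main__":
    rays = make_fov_mesh(rows=48, cols=64, fov_x_deg=90, fov_y_deg=70)
    depth = np.full((48, 64), 2.0)            # toy data: a flat wall 2 m away
    vertices = update_mesh_with_depth(rays, depth)
    faces = grid_triangles(48, 64)
    print(vertices.shape, faces.shape)        # (48, 64, 3) (5922, 3)
```
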
Patent number: 11335077
Abstract: A method includes receiving an image of a real environment using a camera worn by a user, and determining a portion of the image that comprises an object of interest. Based on the portion of the image that comprises the object of interest, a surface representing the object of interest is generated. Depth measurements of the real environment corresponding to the portion of the image comprising the object of interest are received and used to determine a depth of the surface representing the object of interest. The surface is posed in a coordinate system corresponding to the real environment based on the depth of the surface and a visibility of a virtual object is determined relative to the object of interest by comparing a model of the virtual object with the surface. The output image is generated based on the determined visibility of the virtual object.
Type: Grant
Filed: March 19, 2021
Date of Patent: May 17, 2022
Assignee: Facebook Technologies, LLC
Inventors: Mahdi Salmani Rahimi, Gregory Mayo Daly, Gian Diego Tipaldi, Alexander Sorkine Hornung, Mark David Strachan

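The surface in this abstract can be pictured as a quad ("billboard") placed at the detected object's depth, with visibility decided by whether virtual content falls in front of or behind that quad. A rough sketch follows, assuming a pinhole camera with known intrinsics, a 2D bounding box for the object of interest, and a 4x4 camera-to-world pose; the median-depth choice and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) at a given depth -> 3D point in the camera frame."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def object_surface(bbox, region_depths, intrinsics, cam_to_world):
    """Quad standing in for the object of interest, posed in world coordinates."""
    fx, fy, cx, cy = intrinsics
    u0, v0, u1, v1 = bbox
    depth = float(np.median(region_depths))                 # depth of the surface
    corners_cam = np.array([backproject(u, v, depth, fx, fy, cx, cy)
                            for u, v in [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]])
    corners_h = np.hstack([corners_cam, np.ones((4, 1))])   # homogeneous coordinates
    return (cam_to_world @ corners_h.T).T[:, :3], depth     # world corners + depth

def virtual_point_visible(point_world, surface_depth, world_to_cam):
    """Visible if the virtual point is nearer to the camera than the surface.
    (Assumes the point projects inside the surface's footprint.)"""
    p_cam = world_to_cam @ np.append(point_world, 1.0)
    return p_cam[2] < surface_depth
```
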
Publication number: 20210233305
Abstract: In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.
Type: Application
Filed: February 16, 2021
Publication date: July 29, 2021
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

Patent number: 10950034
Abstract: In one embodiment for generating passthrough, a computing system may compute, based on an image of a physical environment, depth measurements of at least one physical object. The system may generate a first model of the physical object using the depth measurements. The system may identify first pixels in the image that depict the physical object and associate them with a first representative depth value computed using the first model. The system may determine, for a pixel of an output image, that a portion of the first model and a portion of a second model of a virtual object are visible. The system may determine that the portion of the first model is associated with the plurality of first pixels and determine occlusion at the pixel based on a comparison between the first representative depth value and a depth value associated with the portion of the second model.
Type: Grant
Filed: January 27, 2020
Date of Patent: March 16, 2021
Assignee: Facebook Technologies, LLC
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi

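In this filing the representative depth comes from a model generated from the depth measurements rather than from raw pixels. The sketch below assumes that model is a least-squares plane fitted to the object's 3D points and uses the plane's depth at the centroid as the single value for the occlusion comparison; the plane fit is an assumed stand-in for the "first model", not a detail of the patent.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                   # smallest singular direction = normal

def representative_depth(object_points):
    """Single depth standing in for the whole object: the fitted plane's depth
    at the object's centroid (camera looks along +z)."""
    centroid, _ = fit_plane(object_points)
    return float(centroid[2])

def physical_occludes_virtual(rep_depth, virtual_depth_at_pixel):
    """The physical object hides the virtual one at a pixel if its
    representative depth is closer to the camera."""
    return rep_depth < virtual_depth_at_pixel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy cloud: a roughly flat object about 1.5 m in front of the camera.
    pts = np.column_stack([rng.uniform(-0.3, 0.3, 200),
                           rng.uniform(-0.2, 0.2, 200),
                           1.5 + rng.normal(0, 0.01, 200)])
    rep = representative_depth(pts)
    print(rep, physical_occludes_virtual(rep, virtual_depth_at_pixel=2.0))
```
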
Patent number: 10444332
Abstract: A method and a system for calibrating a network of multiple range finders are disclosed. In an embodiment, the method includes performing a measurement with the range finders during a movement of an object through an area covered by the network, identifying, for each range finder, data sets of the measurement associated with the moving object, based on a static analysis of the respective range finder, determining, for each pair of overlapping range finders, an estimated relative transformation between respective poses of the pair, based on the identified data sets of the pair, determining an initial maximum likelihood configuration of poses of all of the range finders based on the estimated relative transformations of each pair and iteratively determining sets of point correspondences of the identified data sets and if a distance of the points in the pair is below a threshold, the threshold being redefined decreasingly for each iteration.
Type: Grant
Filed: March 17, 2016
Date of Patent: October 15, 2019
Assignee: ALBERT-LUDWIGS-UNIVERSITÄT FREIBURG
Inventors: Jörg Röwekämper, Michael Ruhnke, Bastian Steder, Gian Diego Tipaldi, Wolfram Burgard

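The distinctive step in this abstract is the correspondence search whose acceptance threshold shrinks on every iteration. The full method also builds an initial maximum-likelihood configuration of all sensor poses from the pairwise estimates; the sketch below covers only the pairwise refinement between two overlapping range finders, in 2D, with a brute-force nearest-neighbour search. Function names, the gate schedule, and the Kabsch-style alignment are illustrative assumptions.

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def refine_pair(src, dst, R=np.eye(2), t=np.zeros(2),
                start_gate=1.0, shrink=0.7, iters=10):
    """ICP-style refinement of the relative pose between two overlapping range
    finders, starting from an initial estimate (R, t). Only correspondences
    closer than a gate are used, and the gate is decreased after every
    iteration, mirroring the shrinking threshold in the abstract."""
    gate = start_gate
    for _ in range(iters):
        moved = src @ R.T + t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = d2.argmin(axis=1)                # brute-force nearest neighbours
        keep = np.sqrt(d2[np.arange(len(src)), nn]) < gate
        if keep.sum() >= 3:                   # need enough inliers to re-fit
            R, t = rigid_2d(src[keep], dst[nn[keep]])
        gate *= shrink                        # threshold redefined decreasingly
    return R, t
```
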
Publication number: 20180067199
Abstract: A method and a system for calibrating a network of multiple range finders are disclosed. In an embodiment, the method includes performing a measurement with the range finders during a movement of an object through an area covered by the network, identifying, for each range finder, data sets of the measurement associated with the moving object, based on a static analysis of the respective range finder, determining, for each pair of overlapping range finders, an estimated relative transformation between respective poses of the pair, based on the identified data sets of the pair, determining an initial maximum likelihood configuration of poses of all of the range finders based on the estimated relative transformations of each pair and iteratively determining sets of point correspondences of the identified data sets and if a distance of the points in the pair is below a threshold, the threshold being redefined decreasingly for each iteration.
Type: Application
Filed: March 17, 2016
Publication date: March 8, 2018
Inventors: Jörg Röwekämper, Michael Ruhnke, Bastian Steder, Gian Diego Tipaldi, Wolfram Burgard
