Patents by Inventor Gioacchino Noris

Gioacchino Noris has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11501488
    Abstract: In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: November 15, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi
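
A minimal sketch of the occlusion logic this abstract describes, assuming a median-based representative depth for the object and a fixed offset for the padded boundary; all names and values are illustrative, not the patented implementation:

```python
# Hedged sketch of the occlusion test described in the abstract of US 11,501,488.
# The median depth, the boundary offset, and all names are illustrative assumptions.
import numpy as np

def composite_passthrough(physical_rgb, virtual_rgb, virtual_depth,
                          object_mask, boundary_mask, depth_samples,
                          boundary_offset=0.25):
    """Blend a virtual object with a physical object using one representative
    depth per region instead of per-pixel depth."""
    # Representative depth for the physical object: median of the (possibly
    # sparse or noisy) depth measurements that fall on it.
    object_depth = np.median(depth_samples)
    # The padded boundary around the object is pushed farther away so the
    # virtual content wins ties near the silhouette.
    boundary_depth = object_depth + boundary_offset

    out = virtual_rgb.copy()
    # The physical object occludes virtual content where it is closer.
    occludes = object_mask & (object_depth < virtual_depth)
    out[occludes] = physical_rgb[occludes]
    # Boundary pixels are tested against the farther representative depth.
    boundary_occludes = boundary_mask & (boundary_depth < virtual_depth)
    out[boundary_occludes] = physical_rgb[boundary_occludes]
    return out

if __name__ == "__main__":
    h, w = 4, 4
    physical = np.zeros((h, w, 3)); virtual = np.ones((h, w, 3))
    vdepth = np.full((h, w), 1.5)                    # virtual object at 1.5 m
    obj = np.zeros((h, w), bool); obj[1:3, 1:3] = True
    bnd = np.zeros((h, w), bool); bnd[0, :] = True
    print(composite_passthrough(physical, virtual, vdepth, obj, bnd,
                                depth_samples=np.array([1.0, 1.1, 0.9])))
```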
  • Publication number: 20220358715
    Abstract: In one embodiment, a method includes displaying, for one or more displays of a virtual reality (VR) display device, a first output image comprising a passthrough view of a real-world environment. The method includes identifying, using one or more images captured by one or more cameras of the VR display device, a real-world object in the real-world environment. The method includes receiving a user input indicating a first dimension corresponding to the real-world object. The method includes automatically determining, based on the first dimension, second and third dimensions corresponding to the real-world object. The method includes rendering, for the one or more displays of the VR display device, a second output image of a VR environment. The VR environment includes a mixed reality (MR) object that corresponds to the real-world object. The MR object is defined by the determined first, second, and third dimensions.
    Type: Application
    Filed: June 30, 2022
    Publication date: November 10, 2022
    Inventors: Christopher Richard Tanner, Amir Mesguich Havilio, Michelle Pujals, Gioacchino Noris, Alessia Marra, Nicholas Wallen
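
The abstract leaves open how the second and third dimensions are derived from the first. Below is a hedged sketch of one plausible approach, scaling a detected 2D bounding box by the user-supplied height; the bounding box, the fixed depth ratio, and all names are assumptions for illustration only:

```python
# Hedged sketch for US publication 2022/0358715 / US 11,417,054: derive the
# remaining two dimensions of a mixed-reality stand-in object from a single
# user-supplied dimension. The bounding-box aspect ratio and depth prior are
# illustrative assumptions, not the patented method.
from dataclasses import dataclass

@dataclass
class MRObject:
    height_m: float
    width_m: float
    depth_m: float

def infer_dimensions(user_height_m, bbox_px=(200, 400), depth_ratio=0.5):
    """Scale a detected bounding box (width, height in pixels) by the
    user-provided real-world height; depth falls back to a fixed ratio."""
    bbox_w, bbox_h = bbox_px
    width_m = user_height_m * (bbox_w / bbox_h)   # preserve the image aspect
    depth_m = width_m * depth_ratio               # assumed prior
    return MRObject(height_m=user_height_m, width_m=width_m, depth_m=depth_m)

if __name__ == "__main__":
    desk = infer_dimensions(user_height_m=0.75, bbox_px=(640, 320))
    print(desk)   # MRObject(height_m=0.75, width_m=1.5, depth_m=0.75)
```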
  • Patent number: 11481960
    Abstract: A method includes a computing system tracking motions performed by a hand of a user, determining one or more anchor locations in a three-dimensional space, and generating a virtual surface anchored in the three-dimensional space. An image of a real environment is captured using a camera worn by the user, and a pose of the camera when the image is captured is determined. The computing system determines a first viewpoint of a first eye of the user and a region in the image that, as viewed from the camera, corresponds to the virtual surface. The computing system renders an output image based on (1) the first viewpoint relative to the virtual surface and (2) the image region corresponding to the virtual surface, and displays the output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 25, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alessia Marra, Gioacchino Noris, Panya Inversin
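
A small geometric sketch of the projection step described above, using a plain pinhole camera model; the intrinsics, the eye offset, and the quad placement are illustrative assumptions rather than the patented rendering pipeline:

```python
# Hedged sketch for US 11,481,960 / 2022/0207816: project an anchored virtual
# quad into the worn camera's image to find the pixel region that should be
# re-rendered for each eye. Simple pinhole model; all numbers are illustrative.
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points with rotation R, translation t, intrinsics K."""
    cam = (R @ points_world.T).T + t          # world -> camera frame
    px = (K @ cam.T).T
    return px[:, :2] / px[:, 2:3]             # perspective divide

# A virtual surface anchored as a 1 m x 1 m quad, 2 m in front of the user.
surface = np.array([[-0.5, -0.5, 2.0], [0.5, -0.5, 2.0],
                    [0.5,  0.5, 2.0], [-0.5,  0.5, 2.0]])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])

# Region of the captured image covered by the surface, as seen from the camera.
camera_px = project_points(surface, np.eye(3), np.zeros(3), K)
# The eye viewpoint sits a few centimetres from the camera; re-projecting from
# there tells the renderer where the sampled camera region should appear.
eye_px = project_points(surface, np.eye(3), np.array([0.03, 0.0, 0.0]), K)

print("camera-image region corners:\n", camera_px)
print("eye-view corners:\n", eye_px)
```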
  • Patent number: 11475639
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it relative to the user's perspective in the artificial reality environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movements, providing more accurate self representations.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: October 18, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allen Booth, Mahdi Salmani Rahimi, Gioacchino Noris
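
A hedged sketch of the compositing step: the self portion of a camera frame is pasted into the rendered frame using a segmentation mask. The mask here comes from a placeholder function standing in for the trained classifier the abstract describes:

```python
# Hedged sketch of the compositing step in US 11,475,639. The segmentation mask
# would come from the trained classifier described in the abstract; it is faked
# here with a placeholder so the example stays self-contained.
import numpy as np

def fake_self_segmentation(camera_frame):
    """Placeholder for the ML model: mark the bottom-centre of the frame, where
    the user's own hands and arms typically appear from a self-perspective."""
    h, w = camera_frame.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 2:, w // 4: 3 * w // 4] = True
    return mask

def composite_self_representation(ar_frame, camera_frame, mask):
    """Overlay the masked self portion onto the artificial reality frame."""
    out = ar_frame.copy()
    out[mask] = camera_frame[mask]
    return out

if __name__ == "__main__":
    cam = np.random.rand(480, 640, 3)
    ar = np.zeros((480, 640, 3))
    frame = composite_self_representation(ar, cam, fake_self_segmentation(cam))
    print(frame.shape, frame[400, 320])   # self pixels copied through
```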
  • Patent number: 11436790
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: September 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
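
A simplified sketch of the deformation step, treating the update as a rigid transform computed from the sensor poses at the two times; the real method may deform the model non-rigidly, and all names and numbers are illustrative:

```python
# Hedged sketch for US 11,436,790: move a body-part model built at time t1 by
# the relative motion a worn motion sensor reports between t1 and t2.
import numpy as np

def relative_transform(R1, t1, R2, t2):
    """Rigid transform taking the sensor pose at time 1 to the pose at time 2."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

def deform_model(vertices, R_rel, t_rel):
    """Apply the relative sensor motion to every model vertex."""
    return (R_rel @ vertices.T).T + t_rel

if __name__ == "__main__":
    hand_model = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.5], [0.0, 0.1, 0.5]])
    R1, t1 = np.eye(3), np.zeros(3)                      # sensor pose at t1
    theta = np.deg2rad(10)                               # 10 degree wrist turn
    R2 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    t2 = np.array([0.02, 0.0, 0.0])                      # 2 cm translation
    R_rel, t_rel = relative_transform(R1, t1, R2, t2)
    print(deform_model(hand_model, R_rel, t_rel))
```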
  • Patent number: 11417054
    Abstract: In one embodiment, a method includes displaying, for one or more displays of a virtual reality (VR) display device, a first output image comprising a passthrough view of a real-world environment. The method includes identifying, using one or more images captured by one or more cameras of the VR display device, a real-world object in the real-world environment. The method includes receiving a user input indicating a first dimension corresponding to the real-world object. The method includes automatically determining, based on the first dimension, second and third dimensions corresponding to the real-world object. The method includes rendering, for the one or more displays of the VR display device, a second output image of a VR environment. The VR environment includes a mixed reality (MR) object that corresponds to the real-world object. The MR object is defined by the determined first, second, and third dimensions.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: August 16, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher Richard Tanner, Amir Mesguich Havilio, Michelle Pujals, Gioacchino Noris, Alessia Marra, Nicholas Wallen
  • Publication number: 20220207816
    Abstract: A method includes a computing system tracking motions performed by a hand of a user, determining one or more anchor locations in a three-dimensional space, and generating a virtual surface anchored in the three-dimensional space. An image of a real environment is captured using a camera worn by the user, and a pose of the camera when the image is captured is determined. The computing system determines a first viewpoint of a first eye of the user and a region in the image that, as viewed from the camera, corresponds to the virtual surface. The computing system renders an output image based on (1) the first viewpoint relative to the virtual surface and (2) the image region corresponding to the virtual surface, and displays the output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 30, 2022
    Inventors: Alessia Marra, Gioacchino Noris, Panya Inversin
  • Patent number: 11361512
    Abstract: In one embodiment, a method includes displaying, through a head-mounted display (HMD), virtual objects to a user wearing the HMD. The method then accesses a boundary definition that corresponds to a boundary within a physical space surrounding the user and generates a plurality of spatial points based on depth measurements of physical objects within the physical space. Based on the spatial points, a location at which a physical object is likely to exist is determined. The method determines whether the location of the physical object is inside the boundary definition and, in response to this determination, issues an alert to the user.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: June 14, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Gaurav Chaurasia, Alexandru-Eugen Ichim, Eldad Yitzhak, Arthur Benjamin Nieuwoudt, Gioacchino Noris, Alexander Sorkine Hornung
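
A hedged sketch of the boundary check: depth points are collapsed to a likely object location on the floor and tested against the boundary polygon. The centroid heuristic and the ray-casting containment test are illustrative choices, not the patented method:

```python
# Hedged sketch for US 11,361,512 / 2021/0304502: treat the user-defined boundary
# as a polygon on the floor and alert when a detected physical object falls inside.
import numpy as np

BOUNDARY = [(0.0, 0.0), (3.0, 0.0), (3.0, 3.0), (0.0, 3.0)]   # 3 m x 3 m play area

def point_in_polygon(pt, poly):
    """Even-odd ray-casting test of a 2D point against a closed polygon."""
    x, y = pt
    inside = False
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 > y) != (y1 > y) and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            inside = not inside
    return inside

def check_intrusion(spatial_points):
    """Collapse depth points to a floor-plane centroid and test the boundary."""
    loc = np.mean(np.asarray(spatial_points)[:, :2], axis=0)
    if point_in_polygon(loc, BOUNDARY):
        print(f"ALERT: physical object near {loc.round(2)} is inside the boundary")
        return True
    return False

check_intrusion([(1.4, 1.5, 0.2), (1.6, 1.5, 0.3), (1.5, 1.6, 0.25)])  # alerts
check_intrusion([(5.0, 5.0, 0.2), (5.1, 5.1, 0.3)])                    # silent
```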
  • Publication number: 20220172444
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: February 18, 2022
    Publication date: June 2, 2022
    Applicant: Facebook Technologies, LLC
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Björn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Patent number: 11302085
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 12, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20220086205
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 17, 2022
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20220084288
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 17, 2022
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20220086167
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 17, 2022
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20220004766
    Abstract: A system generates a plurality of spatial points based on depth measurements of physical objects. The system determines, based on the plurality of spatial points, an occupancy score for each voxel within a plurality of voxels. The system identifies, based on a gaze of the user, a first set of occupied voxels that are in a field of view of the user and a second set of occupied voxels that are outside the field of view of the user. The system updates the occupancy scores of the first set of occupied voxels by temporally decaying one or more of the plurality of spatial points within the first set of occupied voxels. The system maintains the occupancy scores of the second set of occupied voxels. The system detects intrusions in a predefined subspace within a physical space based on the updated occupancy scores of the first set of occupied voxels.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Inventors: Alexandru-Eugen Ichim, Sarthak Ray, Alexander Sorkine Hornung, Gioacchino Noris, Gaurav Chaurasia, Jan Oberländer
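
A toy sketch of the occupancy bookkeeping this abstract describes: scores accumulate per voxel, decay only inside the field of view, and are compared against a threshold inside a protected subspace. The grid size, decay rate, threshold, and gaze model are all made-up values:

```python
# Hedged sketch for US publication 2022/0004766: keep per-voxel occupancy scores,
# decay only the voxels the user can currently see (so stale evidence fades where
# it can be re-observed), and flag intrusions into a protected subspace.
import numpy as np

VOXEL = 0.1          # 10 cm voxels
DECAY = 0.9          # per-update decay inside the field of view
THRESHOLD = 0.5      # score above which a voxel counts as occupied

def voxel_key(p):
    return tuple(np.floor(np.asarray(p) / VOXEL).astype(int))

def update(scores, spatial_points, in_fov):
    """Accumulate new depth points, then decay the voxels visible to the user."""
    for p in spatial_points:
        k = voxel_key(p)
        scores[k] = scores.get(k, 0.0) + 1.0
    for k in list(scores):
        if in_fov(k):                      # outside-FOV voxels keep their scores
            scores[k] *= DECAY
    return scores

def intrusions(scores, subspace_min, subspace_max):
    lo, hi = voxel_key(subspace_min), voxel_key(subspace_max)
    return [k for k, s in scores.items()
            if s > THRESHOLD and all(lo[i] <= k[i] <= hi[i] for i in range(3))]

scores = {}
in_fov = lambda k: k[0] >= 0               # toy gaze model: user looks toward +x
update(scores, [(0.55, 0.2, 1.0), (-0.3, 0.1, 1.0)], in_fov)
print(intrusions(scores, (0.5, 0.0, 0.8), (0.7, 0.4, 1.2)))
```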
  • Patent number: 11210860
    Abstract: A computing system may compute estimated depth measurements of at least one physical object in a physical environment surrounding a user. The system may generate, based on the estimated depth measurements, a first model of the at least one physical object. The system may render, based on the first model and a second model of a virtual object, an image depicting the physical object and the virtual object from a perspective of the user. At least one pixel of the image has a blended color corresponding to a portion of the physical object and a portion of the virtual object. The blended color is computed in response to a determination that a relative depth between a portion of the first model corresponding to the portion of the physical object and a portion of the second model corresponding to the portion of the virtual object is within a threshold.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: December 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventor: Gioacchino Noris
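
A hedged sketch of the blending rule: where the physical and virtual depths nearly coincide, the pixel gets a mix of both colors instead of a hard occlusion decision. The 50/50 blend and the 5 cm threshold are illustrative assumptions:

```python
# Hedged sketch for US 11,210,860: blend physical and virtual colours where their
# depths nearly coincide, avoiding flickering hard occlusion with noisy depth.
import numpy as np

def resolve_pixels(phys_rgb, phys_depth, virt_rgb, virt_depth, thresh=0.05):
    close = np.abs(phys_depth - virt_depth) < thresh       # ambiguous pixels
    phys_wins = (phys_depth < virt_depth) & ~close
    out = virt_rgb.copy()
    out[phys_wins] = phys_rgb[phys_wins]
    out[close] = 0.5 * phys_rgb[close] + 0.5 * virt_rgb[close]
    return out

if __name__ == "__main__":
    phys = np.zeros((2, 2, 3)); virt = np.ones((2, 2, 3))
    pd = np.array([[1.00, 1.00], [2.00, 0.50]])
    vd = np.array([[1.02, 1.50], [1.00, 1.00]])
    print(resolve_pixels(phys, pd, virt, vd))
    # [0,0]: depths within 5 cm -> blended grey; [1,1]: physical nearer -> black
```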
  • Patent number: 11195320
    Abstract: An artificial reality system includes a head-mounted display (HMD) and a physical overlay engine that generates overlay image data, referred to herein as a physical overlay image, corresponding to the physical objects in a three-dimensional (3D) environment. In response to an activation condition, a rendering engine of the artificial reality system renders the overlay image data to overlay artificial reality content for display on the HMD, thereby apprising a user of the HMD of their position with respect to the physical objects in the 3D environment.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: December 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jeng-Weei Lin, Gioacchino Noris, Alessia Marra, Alexander Sorkine Hornung
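
A compact sketch of the overlay behavior, using a hypothetical proximity trigger as the activation condition and a fixed alpha blend; both are assumptions for illustration:

```python
# Hedged sketch for US 11,195,320: composite a passthrough overlay of nearby
# physical objects over the artificial reality content when a trigger fires.
import numpy as np

def activation_condition(user_pos, obstacle_pos, radius=1.0):
    """Illustrative trigger: user within `radius` metres of a physical object."""
    return np.linalg.norm(np.asarray(user_pos) - np.asarray(obstacle_pos)) < radius

def render_frame(ar_content, physical_overlay, active, alpha=0.6):
    """Blend the physical overlay over the AR content only while active."""
    return alpha * physical_overlay + (1 - alpha) * ar_content if active else ar_content

ar = np.zeros((2, 2, 3)); overlay = np.ones((2, 2, 3))
print(render_frame(ar, overlay, activation_condition((0, 0, 0), (0.5, 0, 0))))
```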
  • Patent number: 11170575
    Abstract: In one embodiment, a method includes segmenting a layout of a physical space surrounding a user into physical segments; generating, based on the physical segments, virtual paths for a virtual environment through which the user can navigate by traveling the physical segments; identifying, based on a current location of the user with respect to the physical space, a portion of the physical segments for which to enable an intrusion detection feature; detecting a physical object in the portion of the physical segments that corresponds to a particular virtual path of the virtual paths; and in response to the detecting, displaying a representation of the physical object in the particular virtual path.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: November 9, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, Matthew James Alderman, Alexandru-Eugen Ichim
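
A hedged sketch of segment-scoped intrusion detection: only segments near the user are checked, and a detection is reported against the virtual path mapped to that segment. The rectangular segments, the 2 m enable radius, and the path names are illustrative:

```python
# Hedged sketch for US 11,170,575: enable intrusion detection only on the physical
# floor segments near the user, and report a detected object against the virtual
# path mapped to that segment.
import numpy as np

segments = {                       # segment id -> (x_min, y_min, x_max, y_max)
    "hall":   (0.0, 0.0, 1.0, 4.0),
    "room_a": (1.0, 0.0, 4.0, 2.0),
}
virtual_path_for = {"hall": "forest trail", "room_a": "castle bridge"}

def enabled_segments(user_xy, radius=2.0):
    """Enable detection only for segments whose centre is near the user."""
    def centre(b): return np.array([(b[0] + b[2]) / 2, (b[1] + b[3]) / 2])
    return [sid for sid, b in segments.items()
            if np.linalg.norm(centre(b) - np.asarray(user_xy)) < radius]

def detect(object_xy, user_xy):
    x, y = object_xy
    for sid in enabled_segments(user_xy):
        x0, y0, x1, y1 = segments[sid]
        if x0 <= x <= x1 and y0 <= y <= y1:
            print(f"show object at {object_xy} on the '{virtual_path_for[sid]}' path")
            return sid
    return None

detect(object_xy=(0.5, 1.0), user_xy=(0.5, 0.5))
```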
  • Publication number: 20210319220
    Abstract: In one embodiment, a method includes generating a plurality of spatial points based on depth measurements of physical objects within a physical space surrounding a user and determining, based on the spatial points, a location at which a physical object is likely to exist. The method then renders, based on the location of the physical object, a virtual space representing the physical space. This virtual space may include a virtual object representing the physical object. The method displays the virtual space to the user, and, while displaying the virtual space, receives input from the user indicating a boundary of a subspace within the virtual space, and detects that at least a portion of the virtual object is within the subspace. Finally, the method updates the virtual space to indicate that the portion of the virtual object is within the subspace.
    Type: Application
    Filed: April 9, 2020
    Publication date: October 14, 2021
    Inventors: Alexandru-Eugen Ichim, Sarthak Ray, Alexander Sorkine Hornung, Gioacchino Noris, Gaurav Chaurasia, Jan Oberländer
  • Publication number: 20210304502
    Abstract: In one embodiment, a method includes displaying, through a head-mounted display (HMD), virtual objects to a user wearing the HMD. The method then accesses a boundary definition that corresponds to a boundary within a physical space surrounding the user and generates a plurality of spatial points based on depth measurements of physical objects within the physical space. Based on the spatial points, a location at which a physical object is likely to exist is determined. The method determines whether the location of the physical object is inside the boundary definition and, in response to this determination, issues an alert to the user.
    Type: Application
    Filed: March 26, 2020
    Publication date: September 30, 2021
    Inventors: Gaurav Chaurasia, Alexandru-Eugen Ichim, Eldad Yitzhak, Arthur Benjamin Nieuwoudt, Gioacchino Noris, Alexander Sorkine Hornung
  • Patent number: 11126850
    Abstract: In one embodiment, a method includes generating a plurality of spatial points based on depth measurements of physical objects within a physical space surrounding a user and determining, based on the spatial points, a location at which a physical object is likely to exist. The method then renders, based on the location of the physical object, a virtual space representing the physical space. This virtual space may include a virtual object representing the physical object. The method displays the virtual space to the user, and, while displaying the virtual space, receives input from the user indicating a boundary of a subspace within the virtual space, and detects that at least a portion of the virtual object is within the subspace. Finally, the method updates the virtual space to indicate that the portion of the virtual object is within the subspace.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: September 21, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Alexandru-Eugen Ichim, Sarthak Ray, Alexander Sorkine Hornung, Gioacchino Noris, Gaurav Chaurasia, Jan Oberländer
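
A hedged sketch of the subspace check from this abstract (and the corresponding publication 2021/0319220 above): the virtual stand-in for a detected physical object is tested against the user-marked subspace and flagged for highlighting if they overlap. Axis-aligned boxes are an assumption made for illustration:

```python
# Hedged sketch for US 11,126,850 / 2021/0319220: test whether the virtual
# stand-in for a detected physical object overlaps a user-marked subspace.
import numpy as np

def boxes_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned box intersection test."""
    a_min, a_max = np.asarray(a_min), np.asarray(a_max)
    b_min, b_max = np.asarray(b_min), np.asarray(b_max)
    return bool(np.all(a_min <= b_max) and np.all(b_min <= a_max))

# Virtual object derived from depth points of a physical chair.
chair_min, chair_max = (1.2, 0.0, 1.0), (1.7, 0.9, 1.5)
# Subspace the user marked out while viewing the virtual space.
zone_min, zone_max = (1.0, 0.0, 0.5), (2.0, 2.0, 1.2)

if boxes_overlap(chair_min, chair_max, zone_min, zone_max):
    print("highlight: part of the chair lies inside the marked subspace")
```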