Patents by Inventor Gioacchino Noris

Gioacchino Noris has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11113891
    Abstract: Particular embodiments are directed to a passthrough feature. A computing system may display a virtual-reality scene on a device worn by a user. The system may receive a request to display a visual representation of at least a portion of a physical environment surrounding the user. The system may access data associated with the physical environment captured by camera(s) of the device. The system may generate, based on the data, depth measurements of one or more objects in the physical environment. The system may generate, based on the depth measurements, one or more models of the one or more objects in the physical environment. The system may render an image based on a viewpoint of the user and the one or more models and, based on the image, generate the visual representation requested by the user. The visual representation may then be displayed with the virtual-reality scene to the user.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: September 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, Jeng-Weei Lin
  • Publication number: 20210233313
    Abstract: A computing system may compute estimated depth measurements of at least one physical object in a physical environment surrounding a user. The system may generate, based on the estimated depth measurements, a first model of the at least one physical object. The system may render, based on the first model and a second model of a virtual object, an image depicting the physical object and the virtual object from a perspective of the user. At least one pixel of the image has a blended color corresponding to a portion of the physical object and a portion of the virtual object. The blended color is computed in response to a determination that a relative depth between a portion of the first model corresponding to the portion of the physical object and a portion of the second model corresponding to the portion of the virtual object is within a threshold.
    Type: Application
    Filed: January 27, 2020
    Publication date: July 29, 2021
    Inventor: Gioacchino Noris
  • Publication number: 20210233305
    Abstract: In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.
    Type: Application
    Filed: February 16, 2021
    Publication date: July 29, 2021
    Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi
  • Publication number: 20210233312
    Abstract: Particular embodiments are directed to a passthrough feature. A computing system may display a virtual-reality scene on a device worn by a user. The system may receive a request to display a visual representation of at least a portion of a physical environment surrounding the user. The system may access data associated with the physical environment captured by camera(s) of the device. The system may generate, based on the data, depth measurements of one or more objects in the physical environment. The system may generate, based on the depth measurements, one or more models of the one or more objects in the physical environment. The system may render an image based on a viewpoint of the user and the one or more models and, based on the image, generate the visual representation requested by the user. The visual representation may then be displayed with the virtual-reality scene to the user.
    Type: Application
    Filed: January 27, 2020
    Publication date: July 29, 2021
    Inventors: Gioacchino Noris, Jeng-Weei Lin
  • Publication number: 20210232210
    Abstract: In one embodiment, a method includes segmenting a layout of a physical space surrounding a user into physical segments; generating, based on the physical segments, virtual paths for a virtual environment through which the user can navigate by traveling the physical segments; displaying a particular virtual path based on a location of the user in the virtual environment; determining that a forward direction of travel of the user is proximate to a boundary condition of a particular physical segment corresponding to the particular virtual path; notifying the user that a physical rotation of the user is needed in order for the user to travel beyond a point in the particular virtual path; and detecting that the physical rotation of the user is complete, and in response, updating the display to show the particular virtual path and allowing the user to travel beyond the point in the particular virtual path.
    Type: Application
    Filed: January 28, 2020
    Publication date: July 29, 2021
    Inventors: Gioacchino Noris, Matthew James Alderman, Alessia Marra
  • Publication number: 20210209854
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it in the artificial reality environment relative to the user's perspective in the artificial reality environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Application
    Filed: January 3, 2020
    Publication date: July 8, 2021
    Inventors: James Allen Booth, Mahdi Salmani Rahimi, Gioacchino Noris
  • Publication number: 20210183135
    Abstract: An artificial reality system includes a head mounted display (HMD) and a physical overlay engine that generates overlay image data, referred to herein as a physical overlay image, corresponding to the physical objects in a three-dimensional (3D) environment. In response to an activation condition, a rendering engine of the artificial reality system renders the overlay image data to overlay artificial reality content for display on the HMD, thereby apprising a user of the HMD of their position with respect to the physical objects in the 3D environment.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Inventors: Jeng-Weei Lin, Gioacchino Noris, Alessia Marra, Alexander Sorkine Hornung
  • Patent number: 11024074
    Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 1, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20210082176
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Application
    Filed: November 6, 2020
    Publication date: March 18, 2021
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
  • Patent number: 10950034
    Abstract: In one embodiment for generating passthrough, a computing system may compute, based on an image of a physical environment, depth measurements of at least one physical object. The system may generate a first model of the physical object using the depth measurements. The system may identify first pixels in the image that depict the physical object and associate them with a first representative depth value computed using the first model. The system may determine, for a pixel of an output image, that a portion of the first model and a portion of a second model of a virtual object are visible. The system may determine that the portion of the first model is associated with the plurality of first pixels and determine occlusion at the pixel based on a comparison between the first representative depth value and a depth value associated with the portion of the second model.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: March 16, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi
  • Patent number: 10921878
    Abstract: In one embodiment, a method includes receiving, from the first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 16, 2021
    Assignee: Facebook, Inc.
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20210012571
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Application
    Filed: July 30, 2020
    Publication date: January 14, 2021
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Patent number: 10861223
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: December 8, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
  • Patent number: 10733800
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: August 4, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Publication number: 20200210137
    Abstract: In one embodiment, a method includes receiving a request to share a display of a first interactive item with one or more users, generating a first virtual item as a copy of the first interactive item, and displaying the first virtual item in a virtual reality environment to a subset of the one or more users, wherein if changes made to the first interactive item are received, the display of the first virtual item in the virtual reality environment is updated to include the same changes as the first interactive item.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20200211251
    Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20200209949
    Abstract: In one embodiment, a method includes receiving, from the first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20200143584
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Application
    Filed: December 7, 2018
    Publication date: May 7, 2020
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
  • Publication number: 20200090406
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 19, 2020
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Patent number: 10359906
    Abstract: The disclosure provides an approach for populating a virtual environment with objects. In one embodiment, an editing application may track a handheld device using sensor data from a camera, by following an image displayed on the handheld device's screen. The editing application then updates the position of an object in the virtual environment according to the tracked position of the handheld device. Initially, the handheld device may be placed at a fixed location for calibration purposes, during which the editing application initializes a mapping between the virtual and physical environments. To add an object to the virtual environment, a user may select the object on the handheld device. The user may then place the object at a desired location and orientation in the virtual environment by moving the handheld device in the physical environment.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: July 23, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Robert Sumner, Gioacchino Noris
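Several of the passthrough patents above (e.g., patent 10950034 and publications 20210233313 and 20210233305) describe resolving occlusion between a physical object and a virtual object by comparing a single representative depth for the physical object's pixels against the virtual object's depth, blending colors when the two depths are within a threshold. The sketch below is purely illustrative of that idea and is not the patented implementation; the function name, the 50/50 blend rule, and the threshold value are all assumptions.

```python
# Illustrative sketch (assumed names and values, not the patented method):
# per-pixel compositing of a passthrough physical object and a virtual
# object, where the physical object's pixels share one representative
# depth and near-equal depths trigger a color blend instead of a hard cut.

BLEND_THRESHOLD = 0.05  # metres; hypothetical tuning value


def composite_pixel(phys_color, phys_repr_depth,
                    virt_color, virt_depth,
                    threshold=BLEND_THRESHOLD):
    """Return the output RGB color for one pixel where both the physical
    object's model and the virtual object's model are visible."""
    if abs(phys_repr_depth - virt_depth) <= threshold:
        # Relative depth within the threshold: blend the two colors to
        # avoid a flickering occlusion boundary at near-equal depths.
        return tuple((p + v) / 2 for p, v in zip(phys_color, virt_color))
    if phys_repr_depth < virt_depth:
        return phys_color  # physical object is closer, so it occludes
    return virt_color      # virtual object is closer, so it occludes
```

For example, with the physical object at 1.0 m and the virtual object at 2.0 m, the physical color wins; at 1.0 m versus 1.02 m the depths fall within the threshold and the colors are averaged.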