Patents by Inventor James Allan Booth

James Allan Booth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078754
    Abstract: In one embodiment, a computing system may access a first image of a first portion of a face of a user captured by a first camera from a first viewpoint and a second image of a second portion of the face captured by a second camera from a second viewpoint. The system may generate, using a machine-learning model and the first and second images, a synthesized image corresponding to a third portion of the face of the user as viewed from a third viewpoint. The system may access a three-dimensional (3D) facial model representative of the face and generate a texture image for the face by projecting at least the synthesized image onto the 3D facial model from a predetermined camera pose corresponding to the third viewpoint. The system may cause an output image to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: October 31, 2023
    Publication date: March 7, 2024
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
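The texture-generation step this abstract describes — projecting a synthesized image onto a 3D facial model from a predetermined camera pose — can be illustrated with a minimal pinhole-camera sketch. This is not the patented implementation: the function names, the single-image case, and the nearest-neighbour colour sampling are all simplifying assumptions for illustration.

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project 3D model vertices into a camera with intrinsics K and
    pose (R, t): pixel = perspective_divide(K @ (R @ v + t))."""
    cam = vertices @ R.T + t          # (N, 3) points in the camera frame
    uv = cam @ K.T                    # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]     # perspective divide -> (N, 2)

def sample_texture(image, uv):
    """Nearest-neighbour sample of per-vertex colours from the image,
    yielding a (vertex-colour) texture for the projected model."""
    h, w = image.shape[:2]
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return image[rows, cols]
```

With a known camera pose, each model vertex lands at a pixel of the synthesized image, and the colour gathered there becomes that vertex's texture sample.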
  • Publication number: 20240062492
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation by positioning it in the artificial reality environment relative to the user's perspective. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
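The display step of this invention — overlaying the classified self portion onto the artificial reality view — can be sketched as a simple masked composite. The segmentation mask that the patent's trained machine-learning model would produce is taken here as a given input; the function name is illustrative, not from the patent.

```python
import numpy as np

def composite_self(ar_frame, camera_frame, self_mask):
    """Overlay the 'self portion' of a camera frame onto an
    artificial-reality frame. self_mask is a boolean (H, W) array
    marking pixels classified as depicting the user from a
    self-perspective; pixels outside the mask keep the AR content."""
    out = ar_frame.copy()
    out[self_mask] = camera_frame[self_mask]
    return out
```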
  • Publication number: 20240046590
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
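One step of this pipeline — triangulating matched 2D polyline vertices from the right- and left-side images into 3D polylines — can be sketched for the rectified-stereo special case, where disparity alone determines depth. The patent is not limited to rectified cameras, so this is an illustrative simplification with assumed parameter names.

```python
import numpy as np

def triangulate_rectified(left_pts, right_pts, f, baseline, cx, cy):
    """Triangulate matched 2D vertices from a rectified stereo pair.
    Disparity d = x_left - x_right gives depth Z = f * B / d; X and Y
    follow by back-projecting through the left camera."""
    left_pts = np.asarray(left_pts, float)
    right_pts = np.asarray(right_pts, float)
    disparity = left_pts[:, 0] - right_pts[:, 0]
    Z = f * baseline / disparity
    X = (left_pts[:, 0] - cx) * Z / f
    Y = (left_pts[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)   # (N, 3) polyline vertices
```

Running this over every matched vertex of a 2D polyline pair yields the 3D polyline; interpolating between the resulting vertices densifies the reconstruction, as the abstract describes.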
  • Patent number: 11861757
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation by positioning it in the artificial reality environment relative to the user's perspective. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: January 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
  • Patent number: 11842442
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11830148
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: November 28, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Publication number: 20230206560
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: December 22, 2022
    Publication date: June 29, 2023
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11562535
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: January 24, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
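The pose-determination step in this abstract — relating facial features detected in the image to predetermined landmark locations on the 3D facial model — would in practice use a full perspective pose solver (e.g. a PnP method). As a simplified, self-contained stand-in, the sketch below fits an affine camera matrix by least squares; the function name and the affine assumption are illustrative, not from the patent.

```python
import numpy as np

def fit_affine_camera(model_pts, image_pts):
    """Least-squares fit of a 2x4 affine camera P such that
    image_pts ~= P @ [model_pt; 1] for each 3D landmark. model_pts are
    the predetermined feature locations on the 3D facial model;
    image_pts are the matching features detected in the image."""
    model_pts = np.asarray(model_pts, float)
    image_pts = np.asarray(image_pts, float)
    A = np.hstack([model_pts, np.ones((len(model_pts), 1))])  # (N, 4)
    P, *_ = np.linalg.lstsq(A, image_pts, rcond=None)
    return P.T                                                # (2, 4)
```

Given at least four non-coplanar correspondences, the recovered matrix encodes the camera's orientation and position relative to the model, which is what the projection and mapping steps then consume.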
  • Publication number: 20220415000
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation by positioning it in the artificial reality environment relative to the user's perspective. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Application
    Filed: September 7, 2022
    Publication date: December 29, 2022
    Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
  • Patent number: 11436790
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: September 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
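The deformation step this abstract describes — moving the body-part model according to the change between two motion-sensor measurements — can be sketched as applying the rigid motion between the two tracking poses to the model's vertices. This is a minimal illustration assuming a single sensor with a rotation-plus-translation pose; the real method may deform the model non-rigidly.

```python
import numpy as np

def deform_model(vertices, R1, t1, R2, t2):
    """Move model vertices by the rigid motion measured between two
    tracking samples (R1, t1) at the first time and (R2, t2) at the
    second: v' = R2 @ R1.T @ (v - t1) + t2."""
    vertices = np.asarray(vertices, float)
    delta_R = R2 @ R1.T               # relative rotation between samples
    return (vertices - t1) @ delta_R.T + t2
```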
  • Publication number: 20220092853
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Application
    Filed: September 22, 2020
    Publication date: March 24, 2022
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11024074
    Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 1, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
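The two stages in this abstract — inferring which virtual content the user intends to interact with, then adjusting the configuration of the other content — can be sketched with a gaze-based heuristic. The cosine-similarity threshold and the opacity adjustment are illustrative assumptions; the patent covers inference from user actions or contextual information generally.

```python
import numpy as np

def infer_focus(gaze_dir, item_dirs, threshold=0.95):
    """Infer intent to interact: return the index of the virtual item
    whose direction best aligns with the gaze direction (cosine
    similarity), or None if no item passes the threshold."""
    gaze = np.asarray(gaze_dir, float)
    gaze = gaze / np.linalg.norm(gaze)
    dirs = np.asarray(item_dirs, float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    scores = dirs @ gaze
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

def adjust_opacity(n_items, focused, dim=0.3):
    """Adjust a configuration of the other content: dim every item
    except the one the user intends to interact with."""
    opacity = np.full(n_items, dim)
    if focused is not None:
        opacity[focused] = 1.0
    return opacity
```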
  • Publication number: 20210082176
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Application
    Filed: November 6, 2020
    Publication date: March 18, 2021
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
  • Patent number: 10921878
    Abstract: In one embodiment, a method includes receiving, from the first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 16, 2021
    Assignee: Facebook, Inc.
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
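The core geometric idea here — deriving a joint virtual space from each participant's usable room area — can be sketched as an axis-aligned rectangle intersection. Representing each user's free floor area as a single `(x_min, y_min, x_max, y_max)` rectangle is an illustrative simplification of the room information the patent describes.

```python
def joint_space(areas):
    """Intersect each participant's free floor rectangle
    (x_min, y_min, x_max, y_max) to get a shared area that fits inside
    every room; return None if the rooms share no common area."""
    x0 = max(a[0] for a in areas)
    y0 = max(a[1] for a in areas)
    x1 = min(a[2] for a in areas)
    y1 = min(a[3] for a in areas)
    if x1 <= x0 or y1 <= y0:
        return None
    return (x0, y0, x1, y1)
```

In practice the rooms would first be aligned to a common origin (e.g. each user's standing position), so that the intersection is meaningful across physically separate spaces.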
  • Publication number: 20210012571
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Application
    Filed: July 30, 2020
    Publication date: January 14, 2021
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Patent number: 10861223
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: December 8, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
  • Patent number: 10733800
    Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: August 4, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
  • Publication number: 20200209949
    Abstract: In one embodiment, a method includes receiving, from the first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20200211251
    Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Publication number: 20200210137
    Abstract: In one embodiment, a method includes receiving a request to share a display of a first interactive item with one or more users, generating a first virtual item as a copy of the first interactive item, and displaying the first virtual item in a virtual reality environment to a subset of the one or more users, wherein, if changes made to the first interactive item are received, the display of the first virtual item in the virtual reality environment is updated to reflect the same changes.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 2, 2020
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
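The update behaviour in this last abstract — virtual copies that mirror changes made to the source interactive item — is essentially an observer pattern. The class and field names below are illustrative assumptions, not taken from the patent.

```python
class SharedItem:
    """A source interactive item that propagates edits to the virtual
    copies displayed to other users (observer-style sketch)."""

    def __init__(self, state):
        self.state = dict(state)
        self._copies = []

    def share_with(self, user):
        """Create and register a virtual copy shown to another user."""
        copy = {"user": user, "state": dict(self.state)}
        self._copies.append(copy)
        return copy

    def update(self, **changes):
        """Apply changes to the source item and mirror them into
        every registered copy, keeping all displays in sync."""
        self.state.update(changes)
        for copy in self._copies:
            copy["state"].update(changes)
```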