Patents by Inventor James Allan Booth
James Allan Booth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240078754
Abstract: In one embodiment, a computing system may access a first image of a first portion of a face of a user captured by a first camera from a first viewpoint and a second image of a second portion of the face captured by a second camera from a second viewpoint. The system may generate, using a machine-learning model and the first and second images, a synthesized image corresponding to a third portion of the face of the user as viewed from a third viewpoint. The system may access a three-dimensional (3D) facial model representative of the face and generate a texture image for the face by projecting at least the synthesized image onto the 3D facial model from a predetermined camera pose corresponding to the third viewpoint. The system may cause an output image to be rendered using at least the 3D facial model and the texture image.
Type: Application
Filed: October 31, 2023
Publication date: March 7, 2024
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
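The projection step described above — mapping a synthesized camera-space image onto a 3D facial model to build a texture — can be sketched roughly as follows. This is an illustrative simplification, not the patented implementation: the function name, the per-vertex nearest-pixel splatting, and the pinhole model with intrinsics `K` and pose `(R, t)` are all assumptions.

```python
import numpy as np

def project_to_texture(vertices, uvs, image, K, R, t, tex_size=256):
    """Project an image onto a 3D face mesh from a known camera pose,
    writing sampled colors into a UV texture (nearest-vertex splat)."""
    texture = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    # Transform mesh vertices into camera space and project with intrinsics K.
    cam = (R @ np.asarray(vertices, float).T + t.reshape(3, 1)).T   # (N, 3)
    pix = (K @ cam.T).T                                             # (N, 3)
    pix = pix[:, :2] / pix[:, 2:3]                                  # perspective divide
    h, w = image.shape[:2]
    for (u, v), (x, y) in zip(uvs, pix):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:        # vertex visible in the image
            tu = min(int(u * (tex_size - 1)), tex_size - 1)
            tv = min(int(v * (tex_size - 1)), tex_size - 1)
            texture[tv, tu] = image[yi, xi]
    return texture
```

A production system would rasterize whole triangles into the texture and handle occlusion; this per-vertex loop only shows the direction of the mapping.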
-
Publication number: 20240062492
Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it relative to the user's perspective in that environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
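The display step above — inserting the classified self portion into the rendered environment — amounts to masked compositing. A minimal sketch, assuming the segmentation model has already produced a boolean per-pixel mask (the function and argument names are illustrative, not from the patent):

```python
import numpy as np

def composite_self_representation(ar_frame, camera_frame, self_mask):
    """Overlay the pixels classified as 'self' (e.g. the user's hands or
    body, as tagged by a trained segmentation model) onto the rendered AR
    frame. `self_mask` is a boolean array, True where the model predicts
    'self'; everything else keeps the rendered environment."""
    out = ar_frame.copy()
    out[self_mask] = camera_frame[self_mask]
    return out
```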
-
Publication number: 20240046590
Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
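The triangulation between corresponding left- and right-image polyline vertices can be illustrated with the standard rectified-stereo relation Z = f·B/d, where d is the horizontal disparity, f the focal length, and B the camera baseline. A minimal numpy sketch; the rectified-camera assumption and function name are mine, not the patent's:

```python
import numpy as np

def triangulate_polyline(left_pts, right_pts, f, baseline, cx, cy):
    """Triangulate matched 2D polyline vertices from rectified left/right
    images into a 3D polyline using Z = f * B / d (d = disparity)."""
    left = np.asarray(left_pts, dtype=float)
    right = np.asarray(right_pts, dtype=float)
    disparity = left[:, 0] - right[:, 0]   # horizontal shift between views
    z = f * baseline / disparity           # depth from disparity
    x = (left[:, 0] - cx) * z / f          # back-project through the pinhole
    y = (left[:, 1] - cy) * z / f
    return np.stack([x, y, z], axis=1)
```

Interpolating additional 3D vertices along the resulting polyline, as the abstract describes, would then be simple linear interpolation between consecutive rows of the returned array.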
-
Patent number: 11861757
Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it relative to the user's perspective in that environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
Type: Grant
Filed: September 7, 2022
Date of Patent: January 2, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
-
Patent number: 11842442
Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
Type: Grant
Filed: December 22, 2022
Date of Patent: December 12, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11830148
Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
Type: Grant
Filed: July 30, 2020
Date of Patent: November 28, 2023
Assignee: Meta Platforms, Inc.
Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
-
Publication number: 20230206560
Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
Type: Application
Filed: December 22, 2022
Publication date: June 29, 2023
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11562535
Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
Type: Grant
Filed: September 22, 2020
Date of Patent: January 24, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
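Determining a camera pose from identified facial features and predetermined landmark locations on the 3D model is a classic 2D-3D registration problem. One textbook way to sketch it is the Direct Linear Transform (DLT), which recovers a 3x4 projection matrix from point correspondences; this is an illustrative stand-in, not necessarily the method the patent claims:

```python
import numpy as np

def estimate_projection_dlt(points_3d, points_2d):
    """Direct Linear Transform: recover the 3x4 camera projection matrix
    mapping predetermined 3D landmark locations on the facial model to 2D
    features detected in the image (needs >= 6 correspondences)."""
    A = []
    for (X, Y, Z), (x, y) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def reproject(P, points_3d):
    """Project 3D points through P and dehomogenize to pixel coordinates."""
    pts = np.hstack([np.asarray(points_3d, float), np.ones((len(points_3d), 1))])
    proj = (P @ pts.T).T
    return proj[:, :2] / proj[:, 2:3]
```

In practice a library routine such as OpenCV's `solvePnP` would be used for the pose itself; the DLT above just makes the 2D-3D correspondence idea concrete.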
-
Publication number: 20220415000
Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it relative to the user's perspective in that environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
Type: Application
Filed: September 7, 2022
Publication date: December 29, 2022
Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
-
Patent number: 11436790
Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
Type: Grant
Filed: November 6, 2020
Date of Patent: September 6, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
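The deformation step above — moving the model by the motion measured between the two tracking samples — can be sketched as applying the relative rigid transform between the two poses. Representing each tracking sample as a rotation/translation pair `(R, t)` is an assumption made for this illustration:

```python
import numpy as np

def deform_model(vertices, pose_t1, pose_t2):
    """Deform a body-part model by the motion measured between two tracking
    samples: apply the rigid transform carrying pose_t1 into pose_t2.
    Each pose is (R, t): a 3x3 rotation and a 3-vector translation."""
    R1, t1 = pose_t1
    R2, t2 = pose_t2
    R_delta = R2 @ R1.T            # relative rotation, time 1 -> time 2
    t_delta = t2 - R_delta @ t1    # relative translation
    return (R_delta @ np.asarray(vertices, float).T).T + t_delta
```

A real system would blend such transforms per-bone with skinning weights rather than moving the whole mesh rigidly.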
-
Publication number: 20220092853
Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
Type: Application
Filed: September 22, 2020
Publication date: March 24, 2022
Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
-
Patent number: 11024074
Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
Type: Grant
Filed: December 27, 2018
Date of Patent: June 1, 2021
Assignee: Facebook Technologies, LLC
Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
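As a toy illustration of the adjustment step, intent might be inferred from gaze dwell time on one item, and the other virtual content de-emphasized in response. The dwell-time signal, the threshold, and the `opacity` configuration field are all invented for this sketch and are not claimed by the patent:

```python
def adjust_for_intent(contents, gaze_target, dwell_seconds, threshold=0.8):
    """Infer intent to interact from gaze dwell time on one virtual content
    item; once inferred, dim the other items so the focused one stands out.
    `contents` maps item id -> configuration dict with an 'opacity' field."""
    if dwell_seconds < threshold:
        return contents                    # no intent inferred; no change
    for item_id, config in contents.items():
        if item_id != gaze_target:
            config["opacity"] = 0.3        # de-emphasize non-focused items
    return contents
```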
-
Publication number: 20210082176
Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
Type: Application
Filed: November 6, 2020
Publication date: March 18, 2021
Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
-
Patent number: 10921878
Abstract: In one embodiment, a method includes receiving, from a first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
Type: Grant
Filed: December 27, 2018
Date of Patent: February 16, 2021
Assignee: Facebook, Inc.
Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
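A heavily simplified sketch of combining the per-room constraints: if each participant's free area is reduced to an axis-aligned rectangle in a common reference frame, the joint virtual space is their intersection. The rectangle representation is an assumption for illustration, not the patented method:

```python
def joint_virtual_space(rooms):
    """Compute a shared area as the intersection of each participant's free
    rectangular area, each given as (x_min, y_min, x_max, y_max) in a common
    reference frame. Returns None when the rooms share no common area."""
    x_min = max(r[0] for r in rooms)
    y_min = max(r[1] for r in rooms)
    x_max = min(r[2] for r in rooms)
    y_max = min(r[3] for r in rooms)
    if x_min >= x_max or y_min >= y_max:
        return None                        # no overlap among all rooms
    return (x_min, y_min, x_max, y_max)
```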
-
Publication number: 20210012571
Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
Type: Application
Filed: July 30, 2020
Publication date: January 14, 2021
Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
-
Patent number: 10861223
Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
Type: Grant
Filed: December 7, 2018
Date of Patent: December 8, 2020
Assignee: Facebook Technologies, LLC
Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
-
Patent number: 10733800
Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
Type: Grant
Filed: September 17, 2018
Date of Patent: August 4, 2020
Assignee: Facebook Technologies, LLC
Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
-
Publication number: 20200209949
Abstract: In one embodiment, a method includes receiving, from a first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
Type: Application
Filed: December 27, 2018
Publication date: July 2, 2020
Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
-
Publication number: 20200210137
Abstract: In one embodiment, a method includes receiving a request to share a display of a first interactive item with one or more users, generating a first virtual item as a copy of the first interactive item, and displaying the first virtual item in a virtual reality environment to a subset of the one or more users, wherein if changes made to the first interactive item are received, the display of the first virtual item in the virtual reality environment is updated to include the same changes as the first interactive item.
Type: Application
Filed: December 27, 2018
Publication date: July 2, 2020
Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
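The share-and-mirror behavior described above — a virtual copy whose display is updated whenever the original interactive item changes — resembles a simple observer pattern. A sketch with invented class and field names, not code from the application:

```python
class SharedVirtualItem:
    """An interactive item that can be shared as virtual copies; changes
    applied to the original are propagated to every registered copy, so
    each viewer's display stays in sync with the source item."""

    def __init__(self, state):
        self.state = dict(state)
        self._copies = []

    def share_with(self, users):
        """Create a virtual copy of the item for a subset of users."""
        copy = {"users": list(users), "state": dict(self.state)}
        self._copies.append(copy)
        return copy

    def apply_change(self, key, value):
        """Change the original item and mirror it into all shared copies."""
        self.state[key] = value
        for copy in self._copies:
            copy["state"][key] = value
```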
-
Publication number: 20200211251
Abstract: In one embodiment, a method includes displaying a first virtual content to a first user in a virtual area, the virtual area comprising one or more second virtual content, inferring an intent of the first user to interact with the first virtual content based on one or more of first user actions or contextual information, and adjusting one or more configurations associated with one or more of the second virtual content based on the inferring of the intent of the first user to interact with the first virtual content.
Type: Application
Filed: December 27, 2018
Publication date: July 2, 2020
Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra