Patents by Inventor Tomas SIMON

Tomas SIMON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12190428
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: January 7, 2025
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
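The abstract above (patent 12190428) describes combining an expression-dependent and a view-dependent texture map and relighting the result with a light selected from the environment. The sketch below is a minimal, illustrative take on that idea in NumPy: it blends the two texture maps and applies a single Lambertian light as a stand-in for the learned relighting model. The function names, blend weight, and shading model are assumptions, not the patented method.
```python
import numpy as np

def shade_relightable_texture(expr_tex, view_tex, normals, light_dir, blend=0.5):
    """Blend expression- and view-dependent texture maps, then relight them
    with one environment light. A simple Lambertian term stands in for the
    learned relighting model described in the abstract (illustrative only)."""
    light_dir = light_dir / np.linalg.norm(light_dir)       # unit light direction
    albedo = blend * expr_tex + (1.0 - blend) * view_tex    # per-texel albedo
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)       # n.l shading, clamped at zero
    return albedo * n_dot_l[..., None]

# Toy usage: 4x4 texture maps with three color channels and per-texel normals.
H, W = 4, 4
expr_tex = np.random.rand(H, W, 3)
view_tex = np.random.rand(H, W, 3)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))     # all texels face +z
lit = shade_relightable_texture(expr_tex, view_tex, normals, np.array([0.3, 0.2, 1.0]))
print(lit.shape)  # (4, 4, 3)
```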
  • Publication number: 20240406371
    Abstract: A device for providing a reverse pass-through view of a user of a headset display to an onlooker includes an eyepiece comprising an optical surface configured to provide an image to a user on a first side of the optical surface. The device also includes a first camera configured to collect an image of a portion of a face of the user reflected from the optical surface in a first field of view, a display adjacent to the optical surface and configured to project forward an image of the face of the user, and a screen configured to receive light from the display and provide the image of the face of the user to an onlooker.
    Type: Application
    Filed: August 12, 2024
    Publication date: December 5, 2024
    Inventors: Nathan Matsuda, Brian Wheelwright, Joel Hegland, Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
  • Patent number: 12131416
    Abstract: A method of forming a pixel-aligned volumetric avatar includes receiving multiple two-dimensional images having at least two or more fields of view of a subject. The method also includes extracting multiple image features from the two-dimensional images using a set of learnable weights, projecting the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer, and providing, to the viewer, an image of the three-dimensional model of the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: October 29, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
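Patent 12131416's abstract centers on projecting image features along the direction between a three-dimensional model and the viewer's observation point, i.e., pixel-aligned feature sampling. The NumPy sketch below shows only that sampling step under a pinhole-camera assumption; the intrinsics, nearest-neighbour lookup, and names are illustrative and not taken from the patent.
```python
import numpy as np

def sample_pixel_aligned_features(points_3d, feature_map, K):
    """Project 3D model points into an image with pinhole intrinsics K and
    gather the feature vector at each projected pixel (nearest-neighbour
    lookup). The camera model and names are illustrative assumptions."""
    proj = (K @ points_3d.T).T                 # (N, 3) homogeneous pixel coordinates
    uv = proj[:, :2] / proj[:, 2:3]            # perspective divide -> (N, 2)
    H, W, _ = feature_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feature_map[v, u]                   # (N, C) per-point features

# Toy usage: 5 points in front of a camera, an 8x8 feature map with 16 channels.
K = np.array([[4.0, 0.0, 4.0], [0.0, 4.0, 4.0], [0.0, 0.0, 1.0]])
points = np.random.rand(5, 3) + np.array([0.0, 0.0, 2.0])   # keep z > 0
features = np.random.rand(8, 8, 16)
print(sample_pixel_aligned_features(points, features, K).shape)  # (5, 16)
```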
  • Publication number: 20240346770
    Abstract: A method for simulating a solid body animation of a subject includes retrieving a first frame that includes a body image of a subject. The method also includes selecting, from the first frame, multiple key points within the body image of the subject that define a hull of a body part and multiple joint points that define a joint between two body parts, identifying a geometry, a speed, and a mass of the body part to include in a dynamic model of the subject, based on the key points and the joint points, determining, based on the dynamic model of the subject, a pose of the subject in a second frame after the first frame in a video stream, and providing the video stream to an immersive reality application running on a client device.
    Type: Application
    Filed: June 13, 2024
    Publication date: October 17, 2024
    Inventors: Jason Saragih, Shih-En Wei, Tomas Simon Kreuz, Kris Makoto Kitani, Ye Yuan
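Application 20240346770 describes building a dynamic model of a subject from key points (a body-part hull) and joint points, then predicting the pose in the next frame. The sketch below is a heavily simplified stand-in: it derives a mass from the hull area via the shoelace formula and advances the part with a constant-velocity step. The density constant, time step, and update rule are assumptions, not the claimed dynamic model.
```python
import numpy as np

def polygon_area(points):
    """Shoelace area of the 2D hull defined by ordered key points."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def step_body_part(key_points, velocity, dt=1.0 / 30.0, density=1.0):
    """Advance one body part from frame t to t+1 with a constant-velocity
    rigid step. Mass is taken as density * hull area, which is only a
    stand-in for the dynamic model the abstract refers to."""
    mass = density * polygon_area(key_points)
    center = key_points.mean(axis=0)
    new_center = center + velocity * dt        # ballistic update of the part
    return key_points + (new_center - center), mass

# Toy usage: a square "body part" moving to the right at 1 unit/s.
part = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
next_part, mass = step_body_part(part, velocity=np.array([1.0, 0.0]))
print(mass, next_part[0])   # 1.0, roughly [0.033, 0.0]
```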
  • Patent number: 12095975
    Abstract: A device for providing a reverse pass-through view of a user of a headset display to an onlooker includes an eyepiece comprising an optical surface configured to provide an image to a user on a first side of the optical surface. The device also includes a first camera configured to collect an image of a portion of a face of the user reflected from the optical surface in a first field of view, a display adjacent to the optical surface and configured to project forward an image of the face of the user, and a screen configured to receive light from the display and provide the image of the face of the user to an onlooker.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: September 17, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Nathan Matsuda, Brian Wheelwright, Joel Hegland, Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
  • Publication number: 20240303951
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: April 16, 2024
    Publication date: September 12, 2024
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
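Application 20240303951 (and granted patent 11989846 below) describes scoring volumetric primitives by their geometric attributes (position, rotation, scale) and payload attributes (per-voxel color and opacity) against ground truth. The NumPy sketch below illustrates one plausible loss of that shape; the individual terms and their weights are assumptions for illustration only.
```python
import numpy as np

def primitive_loss(position, rotation, scale, payload, gt_payload, guide_vertex):
    """Score one volumetric primitive: payload (RGB + opacity voxel grid)
    against a ground-truth grid, plus soft constraints on its geometric
    attributes. Terms and weights are illustrative, not the patent's."""
    payload_term = np.mean((payload - gt_payload) ** 2)            # color/opacity error
    pos_term = np.sum((position - guide_vertex) ** 2)              # stay near the guide mesh
    scale_term = np.mean(np.maximum(scale - 1.0, 0.0) ** 2)        # discourage oversized boxes
    ortho_term = np.sum((rotation @ rotation.T - np.eye(3)) ** 2)  # keep rotation near-orthonormal
    return payload_term + 0.1 * pos_term + 0.01 * scale_term + 0.1 * ortho_term

# Toy usage: one 8x8x8 primitive with 4 channels (RGB + alpha).
rng = np.random.default_rng(0)
payload, gt = rng.random((8, 8, 8, 4)), rng.random((8, 8, 8, 4))
loss = primitive_loss(np.zeros(3), np.eye(3), np.ones(3), payload, gt, np.zeros(3))
print(float(loss))
```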
  • Patent number: 12056824
    Abstract: A method for simulating a solid body animation of a subject includes retrieving a first frame that includes a body image of a subject. The method also includes selecting, from the first frame, multiple key points within the body image of the subject that define a hull of a body part and multiple joint points that define a joint between two body parts, identifying a geometry, a speed, and a mass of the body part to include in a dynamic model of the subject, based on the key points and the joint points, determining, based on the dynamic model of the subject, a pose of the subject in a second frame after the first frame in a video stream, and providing the video stream to an immersive reality application running on a client device.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: August 6, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jason Saragih, Shih-En Wei, Tomas Simon Kreuz, Kris Makoto Kitani, Ye Yuan
  • Publication number: 20240242455
    Abstract: As disclosed herein, a computer-implemented method is provided. In one aspect, the computer-implemented method may include receiving, from a client device, images of a user. The computer-implemented method may include determining a target appearance of an avatar of the user. The computer-implemented method may include generating, based on the images and the target appearance, renders of the avatar. The computer-implemented method may include determining, based on a difference between first and second renders, a first adjustment to a weight associated with a first parameter for generating the renders and a second adjustment to a weight associated with a second parameter for generating the renders. The computer-implemented method may include generating, based on the adjustments, a third render of the avatar, the third render appearing more similar to the target appearance relative to the first render and the second render. A system and a non-transitory computer-readable storage medium are also disclosed.
    Type: Application
    Filed: January 12, 2024
    Publication date: July 18, 2024
    Inventors: Albert Pumarola Peris, Thu Nguyen Phuoc, Chen Cao, Artsiom Sanakoyeu, Tao Xu, Tomas Simon Kreuz, Ali Thabet, Juan Camilo Perez
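Application 20240242455 describes adjusting the weights of two rendering parameters based on differences between renders so that a later render moves closer to a target appearance. The sketch below is a toy finite-difference version of that loop: `render` is a hypothetical stand-in for the avatar renderer, and the probe/step logic is an illustrative assumption rather than the claimed procedure.
```python
import numpy as np

def render(weights):
    """Hypothetical renderer: a fixed linear map from two parameter weights
    to a tiny three-pixel 'image', standing in for the avatar renderer."""
    basis = np.array([[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]])
    return basis @ weights

def adjust_weights(weights, target, step=0.1):
    """Nudge each parameter weight by comparing a probe render against the
    current render, moving toward whichever is closer to the target."""
    new_weights = weights.copy()
    base_err = np.sum((render(weights) - target) ** 2)
    for i in range(len(weights)):
        probe = weights.copy()
        probe[i] += step
        probe_err = np.sum((render(probe) - target) ** 2)
        new_weights[i] += step if probe_err < base_err else -step * 0.5
    return new_weights

# Toy usage: the target corresponds to weights of roughly [0.7, 0.3].
target = np.array([0.38, 0.66, 0.5])
w = np.zeros(2)
for _ in range(20):
    w = adjust_weights(w, target)
print(render(w), target)   # the final render hovers near the target
```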
  • Patent number: 12032737
    Abstract: A method for updating a gaze direction for a transmitter avatar in a receiver headset is provided. The method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar comprises estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in the direction of the fixation point. A headset, a memory in the headset storing instructions, and a processor configured to execute the instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: July 9, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Stephen Anthony Lombardi, Gabriel Bailowitz Schwartz, Shih-En Wei
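Patent 12032737 describes estimating where the fixation point will be at a later time in the receiver frame and rotating the avatar's two eyeballs toward it. Below is a minimal sketch of those two steps, assuming constant-velocity extrapolation and simple look-at vectors; both assumptions are illustrative rather than the patented estimation.
```python
import numpy as np

def predict_fixation(fixation_prev, fixation_now, dt_ahead):
    """Constant-velocity extrapolation of the fixation point to a later
    time in the receiver frame (a stand-in for the estimation step)."""
    velocity = fixation_now - fixation_prev
    return fixation_now + velocity * dt_ahead

def eye_gaze_directions(left_eye_center, right_eye_center, fixation):
    """Unit vectors each eyeball should be rotated to so that both eyes
    point at the fixation point."""
    def unit(v):
        return v / np.linalg.norm(v)
    return unit(fixation - left_eye_center), unit(fixation - right_eye_center)

# Toy usage: fixation point drifting to the right, eyes 6 cm apart.
fix = predict_fixation(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0]), dt_ahead=0.5)
left, right = eye_gaze_directions(np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0]), fix)
print(fix, left, right)
```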
  • Publication number: 20240177419
    Abstract: Methods, systems, and storage media for modeling subjects in a virtual environment are disclosed. Exemplary implementations may include: receiving, from a client device, image data including at least one subject; extracting, from the image data, a face of the at least one subject and an object interacting with the face, wherein the object may be glasses worn by the subject; generating a set of face primitives based on the face, the set of face primitives comprising geometry and appearance information; generating a set of object primitives based on a set of latent codes for the object; generating an appearance model of photometric interactions between the face and the object; and rendering an avatar in the virtual environment based on the appearance model, the set of face primitives, and the set of object primitives.
    Type: Application
    Filed: November 29, 2023
    Publication date: May 30, 2024
    Inventors: Shunsuke Saito, Junxuan Li, Tomas Simon Kreuz, Jason Saragih, Shun Iwase, Timur Bagautdinov, Rohan Joshi, Fabian Andres Prada Nino, Takaaki Shiratori, Yaser Sheikh, Stephen Anthony Lombardi
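Application 20240177419 describes separate face and object (glasses) primitives plus an appearance model of their photometric interactions. The sketch below composites two RGBA layers and darkens the face where the glasses overlap it, as a crude placeholder for that interaction model; the compositing order and shadow gain are assumptions, not the claimed appearance model.
```python
import numpy as np

def composite_face_and_glasses(face_rgba, glasses_rgba, shadow_gain=0.3):
    """Alpha-composite glasses primitives over face primitives, with a crude
    darkening term where the glasses overlap the face as a stand-in for the
    learned photometric-interaction model."""
    g_rgb, g_a = glasses_rgba[..., :3], glasses_rgba[..., 3:4]
    f_rgb, f_a = face_rgba[..., :3], face_rgba[..., 3:4]
    shadowed_face = f_rgb * (1.0 - shadow_gain * g_a)      # glasses dim the skin under them
    out_rgb = g_rgb * g_a + shadowed_face * (1.0 - g_a)
    out_a = g_a + f_a * (1.0 - g_a)
    return np.concatenate([out_rgb, out_a], axis=-1)

# Toy usage: 2x2 RGBA layers, with glasses covering one pixel.
face = np.full((2, 2, 4), 0.8)
glasses = np.zeros((2, 2, 4))
glasses[0, 0] = [0.1, 0.1, 0.1, 0.9]
print(composite_face_and_glasses(face, glasses)[0, 0])
```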
  • Patent number: 11989846
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: May 21, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Publication number: 20240061499
    Abstract: A method for updating a gaze direction for a transmitter avatar in a receiver headset is provided. The method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar comprises estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in the direction of the fixation point. A headset, a memory in the headset storing instructions, and a processor configured to execute the instructions to perform the above method are also provided.
    Type: Application
    Filed: December 16, 2022
    Publication date: February 22, 2024
    Inventors: Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Stephen Anthony Lombardi, Gabriel Bailowitz Schwartz, Shih-En Wei
  • Publication number: 20240064413
    Abstract: A method to generate relightable avatars with an arbitrary illumination configuration is provided. The method includes selecting a lighting configuration for collecting a sequence of pictures of a subject, the lighting configuration including a spatial pattern with multiple lights surrounding the subject and a time lapse pattern with multiple camera exposure windows, modifying the spatial pattern and the time lapse pattern based on an average illumination intensity provided to the subject, activating the lights in a sequence based on the spatial pattern and the time lapse pattern, and collecting multiple pictures from multiple cameras surrounding the subject at each of the camera exposure windows. A memory storing instructions, a processor configured to execute the instructions, and a system which, upon executing the instructions, is caused to perform the above method are also provided.
    Type: Application
    Filed: December 16, 2022
    Publication date: February 22, 2024
    Inventors: Jason Saragih, Tomas Simon Kreuz, Kevyn Alex Anthony McPhail, María Murcia López
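Application 20240064413 describes choosing a spatial light pattern and a time-lapse pattern of camera exposure windows, then adjusting both so the subject receives a target average illumination. The sketch below builds such an on/off schedule and rescales it to a target mean intensity; the grouping rule and the scaling are illustrative assumptions, not the patented capture configuration.
```python
import numpy as np

def build_light_schedule(n_lights, n_exposures, group_size, target_mean_intensity):
    """Build an on/off schedule (exposure windows x lights): each exposure
    turns on one contiguous group of lights around the rig, and intensities
    are rescaled so the average illumination matches a target."""
    schedule = np.zeros((n_exposures, n_lights))
    for t in range(n_exposures):
        start = (t * group_size) % n_lights
        idx = (np.arange(group_size) + start) % n_lights     # wrap around the rig
        schedule[t, idx] = 1.0
    scale = target_mean_intensity / schedule.mean()          # match average intensity
    return schedule * scale

# Toy usage: 12 lights, 6 exposure windows, 4 lights on per window.
schedule = build_light_schedule(n_lights=12, n_exposures=6, group_size=4, target_mean_intensity=0.5)
print(schedule.shape, round(schedule.mean(), 3))             # (6, 12) 0.5
```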
  • Publication number: 20230326112
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Application
    Filed: June 13, 2023
    Publication date: October 12, 2023
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
  • Patent number: 11734888
    Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
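Patent 11734888 describes fitting a three-dimensional face model to video by comparing selected points in a test image against the model's rendition and updating the model from that loss. The sketch below shows one plausible form of such a term, a 2D landmark reprojection loss under a pinhole camera; the camera model and the L2 loss are assumptions, not the patented loss factor.
```python
import numpy as np

def landmark_loss(mesh_vertices, landmark_idx, detected_2d, K, R, t):
    """Project selected mesh vertices (the model's landmark points) with a
    pinhole camera and compare them with landmarks detected in a test image.
    The camera model and loss choice are illustrative."""
    pts_cam = (R @ mesh_vertices[landmark_idx].T).T + t      # world -> camera
    proj = (K @ pts_cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]                          # perspective divide
    return np.mean(np.sum((uv - detected_2d) ** 2, axis=1))

# Toy usage: 3 landmark vertices of a mesh sitting 2 units in front of the camera.
mesh = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.0]])
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
detected = np.array([[64.0, 64.0], [69.0, 64.0], [64.0, 69.0]])
print(landmark_loss(mesh, np.array([0, 1, 2]), detected, K, np.eye(3), np.zeros(3)))  # 0.0
```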
  • Publication number: 20230254300
    Abstract: A method for authenticating an avatar for use in a virtual reality/augmented reality (VR/AR) application is provided. The method includes receiving, from a client device, a request for authenticating an identity of a user of an immersive reality application running in the client device, wherein the user is associated with a subject-based avatar in the immersive reality application. The computer-implemented method also includes verifying, in a server, a public key provided by the client device against a private key stored in the server, the private key associated with the subject-based avatar, providing, to the client device, a certificate of validity of the identity of the user when the public key matches the private key, and storing an encrypted version of the certificate of validity in a memory. A memory storing instructions and a system to perform the above method are also provided.
    Type: Application
    Filed: February 4, 2022
    Publication date: August 10, 2023
    Inventors: Barry David Silverstein, Tomas Simon Kreuz, Jason Saragih
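Application 20230254300 describes verifying a client-supplied public key against a server-held private key, issuing a certificate of validity, and storing an encrypted copy of it. The sketch below mimics that flow with standard-library primitives only: the hash-based key check and the XOR "encryption" are deliberately simplistic placeholders for a real asymmetric-key scheme and real encryption, and every name is hypothetical.
```python
import hashlib
import hmac
import json
import time

def matches_keypair(public_key: bytes, private_key: bytes) -> bool:
    """Placeholder check that the client's public key belongs to the key pair
    the server stores. A real system would verify an asymmetric signature;
    deriving the 'public' key as a hash of the private key is illustrative."""
    return hmac.compare_digest(hashlib.sha256(private_key).digest(), public_key)

def issue_certificate(user_id: str, public_key: bytes, private_key: bytes, storage: dict):
    """If the keys match, return a certificate of validity for the user's
    avatar and keep an 'encrypted' copy server-side (XOR with a hash here,
    standing in for real encryption)."""
    if not matches_keypair(public_key, private_key):
        return None
    cert = json.dumps({"user": user_id, "valid_until": int(time.time()) + 3600}).encode()
    pad = hashlib.sha256(b"storage-key").digest() * (len(cert) // 32 + 1)
    storage[user_id] = bytes(c ^ p for c, p in zip(cert, pad))   # encrypted-at-rest copy
    return cert

# Toy usage.
priv = b"server-held-private-key"
pub = hashlib.sha256(priv).digest()
vault = {}
print(issue_certificate("avatar-123", pub, priv, vault) is not None, "avatar-123" in vault)
```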
  • Publication number: 20230245365
    Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: December 2, 2022
    Publication date: August 3, 2023
    Inventors: Chen Cao, Stuart Anderson, Tomas Simon Kreuz, Jin Kyu Kim, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Anthony Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, Jason Saragih
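Application 20230245365 describes extracting features from phone-scan images with learnable weights and inferring the first subject's three-dimensional model with help from an existing model of a second subject. The sketch below is a linear toy version: channel-mean features feed a small identity code that deforms a template mesh; the encoder, basis, and dimensions are illustrative assumptions, not the claimed networks.
```python
import numpy as np

def encode_identity(images, weights):
    """Pool simple per-image statistics and map them through learnable
    weights to an identity code. The feature choice (channel means) and the
    single linear layer are stand-ins for the learned encoder."""
    feats = np.stack([img.mean(axis=(0, 1)) for img in images])   # (n_images, 3)
    return np.tanh(feats.mean(axis=0) @ weights)                  # (code_dim,)

def personalize_template(template_vertices, identity_basis, identity_code):
    """Deform an existing template mesh (the 'second subject') by a linear
    combination of identity basis offsets predicted from the phone scan."""
    offsets = identity_basis @ identity_code                      # (n_vertices * 3,)
    return template_vertices + offsets.reshape(template_vertices.shape)

# Toy usage: three phone images, a 4-vertex template, a 2-D identity code.
rng = np.random.default_rng(1)
images = [rng.random((16, 16, 3)) for _ in range(3)]
weights = rng.random((3, 2))
template = rng.random((4, 3))
basis = 0.01 * rng.random((12, 2))
code = encode_identity(images, weights)
print(personalize_template(template, basis, code).shape)          # (4, 3)
```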
  • Patent number: 11715248
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: August 1, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
  • Publication number: 20220358719
    Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: August 6, 2021
    Publication date: November 10, 2022
    Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
  • Patent number: 11423616
    Abstract: In one embodiment, a system may access an input image of an object captured by cameras, where the input image depicts appearance information associated with the object. The system may generate a first mesh of the object based on features identified from the input image of the object. The system may generate, by processing the first mesh using a machine-learning model, a position map that defines a contour of the object. Each pixel in the position map corresponds to a three-dimensional coordinate. The system may further generate a second mesh based on the position map, wherein the second mesh has a higher resolution than the first mesh. The system may render an output image of the object based on the second mesh. The system disclosed in the present application can thus render a dense, higher-resolution mesh that provides details which cannot be recovered from texture information alone.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: August 23, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Tomas Simon Kreuz, Jason Saragih, Stephen Anthony Lombardi, Shugao Ma, Gabriel Bailowitz Schwartz
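Patent 11423616 describes predicting a position map from a coarse first mesh, where each pixel stores a three-dimensional coordinate, and building a higher-resolution second mesh from it. The sketch below covers only the last step, turning a position map into a dense grid mesh; the grid triangulation and the synthetic position map are illustrative assumptions, not the patented pipeline.
```python
import numpy as np

def position_map_to_dense_mesh(position_map):
    """Turn an H x W position map (each pixel stores a 3D coordinate) into a
    dense triangle mesh by treating the pixel grid as vertex connectivity."""
    H, W, _ = position_map.shape
    vertices = position_map.reshape(-1, 3)
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            i = y * W + x
            faces.append([i, i + 1, i + W])          # upper-left triangle of the quad
            faces.append([i + 1, i + W + 1, i + W])  # lower-right triangle
    return vertices, np.array(faces)

# Toy usage: a 16x16 position map yields a mesh far denser than a coarse base mesh.
u, v = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
pos_map = np.stack([u, v, 0.1 * np.sin(4 * u)], axis=-1)
verts, faces = position_map_to_dense_mesh(pos_map)
print(verts.shape, faces.shape)   # (256, 3) (450, 3)
```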