Patents by Inventor Michael ZOLLHOEFER

Michael ZOLLHOEFER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11989846
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for the volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: May 21, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
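The geometric and payload attributes this abstract describes can be illustrated with a minimal NumPy sketch. This is not the patented implementation; the names, grid size, and loss below are illustrative stand-ins for the structure the abstract lays out (a position/rotation/scale geometry plus a per-voxel RGBA payload, scored against a ground truth).

```python
import numpy as np

rng = np.random.default_rng(0)

# One volumetric primitive: geometric attributes plus an RGBA voxel payload.
# Names, shapes, and the grid size are illustrative, not taken from the patent.
geometry = {
    "position": np.zeros(3),   # center of the primitive in world space
    "rotation": np.eye(3),     # orientation as a rotation matrix
    "scale": np.ones(3),       # per-axis scale factor
}
V = 8                           # voxel grid resolution per axis
payload = {
    "color": rng.random((V, V, V, 3)),    # RGB value per voxel
    "opacity": rng.random((V, V, V, 1)),  # opacity value per voxel
}

def payload_loss(payload, ground_truth_rgba):
    """Mean squared error between the primitive's RGBA payload and a
    ground-truth grid -- a stand-in for the per-point loss factor."""
    pred = np.concatenate([payload["color"], payload["opacity"]], axis=-1)
    return float(np.mean((pred - ground_truth_rgba) ** 2))

loss = payload_loss(payload, np.zeros((V, V, V, 4)))
```

In a real system the loss would drive an optimizer that updates both the geometry and the payload; here it only shows how the two attribute groups combine into one differentiable score.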
  • Publication number: 20230419579
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Application
    Filed: September 6, 2023
    Publication date: December 28, 2023
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
  • Patent number: 11756250
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: September 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
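The threshold-gated update in this abstract can be sketched with a toy stand-in: a "mesh" that is simply a feature array scaled by the correlation value, updated by a gradient step only when its distance to the ground truth exceeds the threshold. The shapes, learning rate, and helper names are illustrative, not from the patent.

```python
import numpy as np

def mesh_from_feature(correlation, feature):
    """Toy stand-in for generating a lower-face mesh (vertices in 3D)
    from an audio-derived facial feature scaled by a correlation value."""
    return correlation * feature

def update_correlation(correlation, feature, ground_truth,
                       threshold=0.1, lr=0.01):
    """Update the correlation value only when the mesh-to-ground-truth
    difference exceeds a pre-selected threshold, as the abstract states."""
    diff = float(np.linalg.norm(mesh_from_feature(correlation, feature)
                                - ground_truth))
    if diff > threshold:
        # Gradient step on 0.5 * ||c*f - g||^2 with respect to c.
        grad = float(np.sum((correlation * feature - ground_truth) * feature))
        correlation -= lr * grad
    return correlation, diff

feature = np.ones((10, 3))            # placeholder facial feature
ground_truth = 2.0 * feature          # the "true" correlation here is 2.0
c = 0.0
for _ in range(50):
    c, diff = update_correlation(c, feature, ground_truth)
```

The loop stops improving once the difference falls below the threshold, which is the behavior the claim language gates on.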
  • Publication number: 20230245365
    Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method, are also provided.
    Type: Application
    Filed: December 2, 2022
    Publication date: August 3, 2023
    Inventors: Chen Cao, Stuart Anderson, Tomas Simon Kreuz, Jin Kyu Kim, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Anthony Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, Jason Saragih
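The pipeline in this abstract has two stages that a small sketch can make concrete: pooling features extracted from several phone images with one shared set of learnable weights, then inferring the new subject's model conditioned on an existing model of a second subject. All sizes and weight matrices below are illustrative placeholders, not the patented networks.

```python
import numpy as np

rng = np.random.default_rng(4)

D_IMG, D_FEAT, D_MODEL = 12, 6, 9      # illustrative sizes

W_extract = rng.standard_normal((D_FEAT, D_IMG)) * 0.1   # learnable weights
W_infer = rng.standard_normal((D_MODEL, D_FEAT + D_MODEL)) * 0.1

def extract_features(images, W_extract):
    """Pool per-image features extracted with one shared set of weights."""
    return np.mean([W_extract @ img for img in images], axis=0)

def infer_model(features, donor_model, W_infer):
    """Infer the first subject's 3D model from image features, conditioned
    on an existing model of a second subject (the 'donor')."""
    return W_infer @ np.concatenate([features, donor_model])

images = [rng.standard_normal(D_IMG) for _ in range(3)]   # phone-scan frames
donor = rng.standard_normal(D_MODEL)                      # second subject
features = extract_features(images, W_extract)
model = infer_model(features, donor, W_infer)
```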
  • Publication number: 20220309724
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Application
    Filed: February 10, 2022
    Publication date: September 29, 2022
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
  • Publication number: 20220245910
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for the volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: December 17, 2021
    Publication date: August 4, 2022
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Publication number: 20220239844
    Abstract: In one embodiment, a method includes initializing latent codes, each associated with the time of a frame in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying a neural radiance field (NeRF) using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 28, 2022
    Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
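The joint optimization of a shared network and per-frame latent codes can be sketched with a drastically simplified "NeRF": a dot product between a weight vector and the latent code, ignoring viewpoint and ray direction entirely. A real NeRF is an MLP queried per ray; the sizes, rate, and target pixel values here are illustrative. What the sketch preserves is the abstract's step (2): each comparison between rendered and original pixels updates both the network and that frame's code.

```python
import numpy as np

D = 4                                             # latent code size (illustrative)
n_frames = 3
latents = [np.zeros(D) for _ in range(n_frames)]  # one code per frame time

# Toy stand-in for the NeRF: rendered pixel value = w . latent_code.
w = np.array([0.5, -0.5, 0.5, -0.5])
targets = np.array([1.0, -0.5, 0.25])             # original pixel values per frame

lr = 0.1
for _ in range(500):
    for f in range(n_frames):
        err = w @ latents[f] - targets[f]         # rendered vs. original pixel
        # Update both the shared network and this frame's latent code.
        grad_w = err * latents[f]
        latents[f] = latents[f] - lr * err * w
        w = w - lr * grad_w

errors = [abs(w @ latents[f] - targets[f]) for f in range(n_frames)]
```

After training, rendering at a chosen time amounts to querying the shared model with that time's updated latent code, which is how the abstract's output video is produced.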
  • Publication number: 20220201273
    Abstract: A device for providing a reverse pass-through view of a user of a headset display to an onlooker includes an eyepiece comprising an optical surface configured to provide an image to a user on a first side of the optical surface. The device also includes a first camera configured to collect an image of a portion of a face of the user reflected from the optical surface in a first field of view, a display adjacent to the optical surface and configured to project forward an image of the face of the user, and a screen configured to receive light from the display and provide the image of the face of the user to an onlooker.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 23, 2022
    Inventors: Nathan Matsuda, Brian Wheelwright, Joel Hegland, Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
  • Publication number: 20220198731
    Abstract: A method of forming a pixel-aligned volumetric avatar includes receiving multiple two-dimensional images having two or more fields of view of a subject. The method also includes extracting multiple image features from the two-dimensional images using a set of learnable weights, projecting the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer, and providing, to the viewer, an image of the three-dimensional model of the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method, are also provided.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 23, 2022
    Inventors: Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
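The "pixel-aligned" part of this abstract — associating a 3D point on the model with the image feature at its 2D projection — can be shown with a deterministic toy example. The pinhole intrinsics, feature map, and nearest-neighbor sampling below are illustrative assumptions; published pixel-aligned methods typically use bilinear sampling of CNN feature maps.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal length 100, principal point at 32,32)
K = np.array([[100.0,   0.0, 32.0],
              [  0.0, 100.0, 32.0],
              [  0.0,   0.0,  1.0]])
# Illustrative 64x64 single-channel "feature map" extracted from one view.
feature_map = np.arange(64 * 64, dtype=float).reshape(64, 64)

def pixel_aligned_feature(point_3d, K, feature_map):
    """Project a 3D point with intrinsics K and sample the feature map
    at the resulting pixel (nearest-neighbor, for simplicity)."""
    uvw = K @ point_3d
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective divide
    return feature_map[int(round(v)), int(round(u))]

# A point on the camera axis at depth 1 projects to the principal point.
f = pixel_aligned_feature(np.array([0.0, 0.0, 1.0]), K, feature_map)
```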
  • Publication number: 20180068178
    Abstract: A computer-implemented method for tracking a human face in a target video includes obtaining target video data of a human face; and estimating parameters of a target human face model, based on the target video data. A first subset of the parameters represents a geometric shape and a second subset of the parameters represents an expression of the human face. At least one of the estimated parameters is modified in order to obtain new parameters of the target human face model, and output video data are generated based on the new parameters of the target human face model and the target video data.
    Type: Application
    Filed: September 5, 2016
    Publication date: March 8, 2018
    Inventors: Christian THEOBALT, Michael ZOLLHOEFER, Marc STAMMINGER, Justus THIES, Matthias NIESSNER
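The parameter split this abstract describes — one subset for geometric shape, another for expression — matches the structure of a linear morphable face model, which can be sketched directly. The bases here are random placeholders, not a fitted model; the point is that editing only the expression subset regenerates the face while leaving identity untouched.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 5                                            # vertices (illustrative)
mean_face = rng.standard_normal(3 * N)           # mean vertex positions
shape_basis = rng.standard_normal((3 * N, 4))    # geometric-shape subset
expr_basis = rng.standard_normal((3 * N, 3))     # expression subset

def face(shape_params, expr_params):
    """Linear face model: mean plus shape and expression offsets."""
    return mean_face + shape_basis @ shape_params + expr_basis @ expr_params

shape = np.array([0.5, -0.2, 0.1, 0.0])          # estimated identity
neutral = face(shape, np.zeros(3))
# Modify one estimated expression parameter and regenerate the output face.
smile = face(shape, np.array([1.0, 0.0, 0.0]))
```

Since the model is linear, the edit changes the mesh by exactly one expression basis vector — the kind of targeted reanimation the abstract's output-video step relies on.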