Patents by Inventor Yaser Sheikh

Yaser Sheikh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240303951
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: April 16, 2024
    Publication date: September 12, 2024
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
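    A rough reading of this abstract (shared by patent 11989846 and publication 20220245910 below): each primitive carries geometric attributes (position, rotation, scale) plus a payload of per-voxel color and opacity, and training compares the payload against ground truth through a loss. The sketch below illustrates only that data layout with a plain L2 loss; the names, grid size, and loss choice are invented for the example and are not the patented training method.
    ```python
    import numpy as np

    class VolumetricPrimitive:
        """Hypothetical container for the attributes named in the abstract."""
        def __init__(self, grid_size=8):
            self.position = np.zeros(3)        # geometric attribute: position
            self.rotation = np.eye(3)          # geometric attribute: rotation
            self.scale = 1.0                   # geometric attribute: scale factor
            # payload attribute: RGB color + opacity for every voxel in the grid
            self.rgba = np.zeros((grid_size, grid_size, grid_size, 4))

    def reconstruction_loss(primitive, ground_truth_rgba):
        """L2 loss between payload and ground truth, standing in for the
        per-point loss factor the abstract describes."""
        return float(np.mean((primitive.rgba - ground_truth_rgba) ** 2))

    prim = VolumetricPrimitive(grid_size=8)
    target = np.random.rand(8, 8, 8, 4)        # fake ground-truth voxel grid
    print(f"loss: {reconstruction_loss(prim, target):.4f}")
    ```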
  • Publication number: 20240177419
    Abstract: Methods, systems, and storage media for modeling subjects in a virtual environment are disclosed. Exemplary implementations may include: receiving, from a client device, image data including at least one subject; extracting, from the image data, a face of the at least one subject and an object interacting with the face, wherein the object may be glasses worn by the subject; generating a set of face primitives based on the face, the set of face primitives comprising geometry and appearance information; generating a set of object primitives based on a set of latent codes for the object; generating an appearance model of photometric interactions between the face and the object; and rendering an avatar in the virtual environment based on the appearance model, the set of face primitives, and the set of object primitives.
    Type: Application
    Filed: November 29, 2023
    Publication date: May 30, 2024
    Inventors: Shunsuke Saito, Junxuan Li, Tomas Simon Kreuz, Jason Saragih, Shun Iwase, Timur Bagautdinov, Rohan Joshi, Fabian Andres Prada Nino, Takaaki Shiratori, Yaser Sheikh, Stephen Anthony Lombardi
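    This entry decodes object primitives (the glasses) from latent codes and composes them with face primitives before rendering. The sketch below only illustrates that composition step with an invented linear decoder (decode_object_primitives) and primitive "centers" as plain 3D points; the appearance model of photometric interactions is not represented.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def decode_object_primitives(latent_code, n_primitives=16):
        """Hypothetical decoder: a fixed random linear map from the object's
        latent code to primitive centers."""
        weights = rng.standard_normal((n_primitives * 3, latent_code.size))
        return (weights @ latent_code).reshape(n_primitives, 3)

    def compose_avatar(face_primitives, object_primitives):
        """Concatenate face and object primitives into one renderable set."""
        return np.concatenate([face_primitives, object_primitives], axis=0)

    face = rng.standard_normal((64, 3))         # face primitive centers
    glasses = decode_object_primitives(rng.standard_normal(8))
    print(compose_avatar(face, glasses).shape)  # (80, 3)
    ```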
  • Patent number: 11989846
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: May 21, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Publication number: 20230419579
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face, based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Application
    Filed: September 6, 2023
    Publication date: December 28, 2023
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
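    The abstract (also covering patent 11756250 and publication 20220309724 below) outlines a loop: derive a correlation value from audio, generate a lower-face mesh from it, and update the value while the mesh differs from ground truth by more than a threshold. The toy below reproduces that control flow with an invented RMS "feature" and a finite-difference update; it shows the structure only, not the patented model.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def audio_to_correlation(waveform):
        """Toy stand-in for the audio-derived correlation value: RMS energy."""
        return float(np.sqrt(np.mean(waveform ** 2)))

    def lower_face_mesh(base, correlation):
        """Displace lower-face vertices in proportion to the correlation."""
        mesh = base.copy()
        mesh[:, 1] -= 0.1 * correlation        # jaw opens along the y-axis
        return mesh

    base = rng.standard_normal((100, 3))       # hypothetical vertex positions
    target = lower_face_mesh(base, 0.5)        # ground truth for one frame

    def mesh_loss(c):
        return float(np.mean((lower_face_mesh(base, c) - target) ** 2))

    c = audio_to_correlation(rng.standard_normal(16000))   # 1 s of fake audio
    while mesh_loss(c) > 1e-8:                 # the pre-selected threshold
        grad = (mesh_loss(c + 1e-4) - mesh_loss(c - 1e-4)) / 2e-4
        c -= 50.0 * grad                       # update the correlation value
    print(f"final correlation: {c:.3f}, loss: {mesh_loss(c):.2e}")
    ```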
  • Publication number: 20230326112
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Application
    Filed: June 13, 2023
    Publication date: October 12, 2023
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
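    Relighting here means shading texture maps by a light source chosen from the environment (the same abstract appears under patent 11715248 and publication 20220237843 below). The sketch blends the two maps and applies simple Lambertian N·L shading; the blend, the flat normals, and the shading model are all assumptions for illustration.
    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def relight(expression_tex, view_tex, light_dir, normals):
        """Blend the two texture maps and apply Lambertian N·L shading
        from the chosen light direction."""
        albedo = 0.5 * (expression_tex + view_tex)        # combine the maps
        shading = np.clip(normals @ light_dir, 0.0, 1.0)  # per-texel N·L
        return albedo * shading[..., None]

    H = W = 64
    expression_tex = rng.random((H, W, 3))   # expression-dependent texture map
    view_tex = rng.random((H, W, 3))         # view-dependent texture map
    normals = np.zeros((H, W, 3))
    normals[..., 2] = 1.0                    # flat, camera-facing surface
    light = np.array([0.0, 0.0, 1.0])        # light source from the environment

    lit = relight(expression_tex, view_tex, light, normals)
    print(lit.shape, float(lit.max()) <= 1.0)
    ```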
  • Patent number: 11756250
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face, based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: September 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
  • Patent number: 11734888
    Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
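    The core of this abstract (repeated under publication 20220358719 below) is a loss over selected points, compared between a test image and the model's rendition, used to update the model. The toy below fits a single illumination-dependent "texture gain" by finite differences; the renderer, the single parameter, and the learning rate are invented for the example.
    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def render_points(mesh, texture_gain):
        """Toy renderer: orthographically project the mesh to 2D and scale
        by an illumination-dependent texture gain."""
        return mesh[:, :2] * texture_gain

    def point_loss(test_points, rendered):
        """Mean squared distance over selected points (the 'loss factor')."""
        return float(np.mean(np.sum((test_points - rendered) ** 2, axis=1)))

    mesh = rng.standard_normal((50, 3))     # mesh from expression + head pose
    test_points = render_points(mesh, 1.0)  # selected points in the test image
    gain = 1.2                              # illumination parameter, bad guess

    def g(x):
        return point_loss(test_points, render_points(mesh, x))

    for _ in range(100):                    # update the model from the loss
        grad = (g(gain + 1e-4) - g(gain - 1e-4)) / 2e-4
        gain -= 0.01 * grad
    print(f"texture gain: {gain:.3f}, loss: {g(gain):.2e}")
    ```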
  • Publication number: 20230245365
    Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: December 2, 2022
    Publication date: August 3, 2023
    Inventors: Chen Cao, Stuart Anderson, Tomas Simon Kreuz, Jin Kyu Kim, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Anthony Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, Jason Saragih
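    Two steps from this abstract lend themselves to a sketch: extracting image features through learnable weights, and inferring a new model from those features plus an existing ("donor") model of another subject. Everything below, from the linear feature map to the 0.01 deformation scale, is a hypothetical stand-in, not the patented pipeline.
    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def extract_features(images, weights):
        """Hypothetical extractor: flatten each image and apply a learnable
        linear map (the 'set of learnable weights')."""
        flat = images.reshape(images.shape[0], -1)
        return flat @ weights

    def infer_avatar(features, donor_model):
        """Blend pooled image features into an existing donor 3D model,
        standing in for model transfer from the second subject."""
        offset = features.mean(axis=0)[: donor_model.shape[1]]
        return donor_model + 0.01 * offset   # small identity-specific change

    images = rng.random((12, 16, 16))            # 12 phone-scan frames
    weights = rng.standard_normal((16 * 16, 32)) # learnable weights (random)
    donor = rng.standard_normal((500, 3))        # model of another subject

    avatar = infer_avatar(extract_features(images, weights), donor)
    print(avatar.shape)   # (500, 3)
    ```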
  • Patent number: 11715248
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: August 1, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
  • Publication number: 20220358719
    Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: August 6, 2021
    Publication date: November 10, 2022
    Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
  • Publication number: 20220309724
    Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face, based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system, are also provided.
    Type: Application
    Filed: February 10, 2022
    Publication date: September 29, 2022
    Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
  • Publication number: 20220245910
    Abstract: A method for training a real-time model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: December 17, 2021
    Publication date: August 4, 2022
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Publication number: 20220237843
    Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 28, 2022
    Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
  • Patent number: 11087521
    Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: August 10, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
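    The pipeline here (shared with patent 10586370 below) is: jointly encode texture and geometry into a latent vector, then decode an inferred geometry and an inferred view-dependent texture for a predicted viewpoint. The sketch uses plain linear maps in place of the autoencoder, with invented dimensions; only the data flow matches the abstract.
    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def encode(texture, geometry, W_enc):
        """Jointly encode texture and geometry into one latent vector."""
        joint = np.concatenate([texture.ravel(), geometry.ravel()])
        return W_enc @ joint

    def decode(latent, view_dir, W_geo, W_tex):
        """Infer geometry and a view-dependent texture from the latent vector;
        the predicted viewpoint is appended so the texture can depend on it."""
        z = np.concatenate([latent, view_dir])
        return (W_geo @ latent).reshape(-1, 3), (W_tex @ z).reshape(8, 8, 3)

    tex = rng.random((8, 8, 3))                 # view-dependent texture map
    geo = rng.standard_normal((20, 3))          # mesh geometry for one frame
    d_in, d_lat = tex.size + geo.size, 16

    W_enc = 0.01 * rng.standard_normal((d_lat, d_in))
    W_geo = 0.01 * rng.standard_normal((geo.size, d_lat))
    W_tex = 0.01 * rng.standard_normal((tex.size, d_lat + 3))

    latent = encode(tex, geo, W_enc)
    geo_hat, tex_hat = decode(latent, np.array([0.0, 0.0, 1.0]), W_geo, W_tex)
    print(latent.shape, geo_hat.shape, tex_hat.shape)  # (16,) (20, 3) (8, 8, 3)
    ```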
  • Patent number: 10789777
    Abstract: A virtual reality (VR) head mounted display (HMD) includes a light field camera on an outside surface of the HMD facing away from the wearer. Light rays and intensities captured by the light field camera are communicated to a console that identifies a virtual position of the light field camera based on a relative position of the light field camera to the user's eye when wearing the HMD. Based on the virtual position, the console selects rays of light captured by the light field camera projected to intersect a field of view of the light field camera if located at the virtual position. From the selected rays and corresponding intensities, the console generates one or more images representing the environment surrounding the HMD, providing a representation of portions of the environment surrounding the HMD visible from the position of the user's eye.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: September 29, 2020
    Assignee: Facebook Technologies, LLC
    Inventor: Yaser Sheikh
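    The selection step, keeping only the captured rays that would fall inside the field of view of a virtual camera at the eye position, is the part sketched below. The cosine-threshold test and all geometry are deliberately simplified stand-ins for the projection described in the abstract.
    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def select_rays(origins, directions, eye_pos, fov_cos=0.866):
        """Keep captured rays that point roughly toward a virtual camera at
        the eye position (cosine threshold of about 30 degrees)."""
        to_eye = eye_pos - origins
        to_eye /= np.linalg.norm(to_eye, axis=1, keepdims=True)
        alignment = np.sum(directions * to_eye, axis=1)   # cos of the angle
        return alignment > fov_cos

    origins = rng.random((1000, 3))                  # where rays hit the sensor
    directions = rng.standard_normal((1000, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    eye = np.array([0.5, 0.5, -0.03])                # virtual eye behind the HMD

    mask = select_rays(origins, directions, eye)
    print(f"{mask.sum()} of {len(mask)} rays selected for the eye-view image")
    ```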
  • Patent number: 10656710
    Abstract: An interactive system may include (1) a facial coupling subsystem configured to conduct a biopotential signal generated by a user's body, (2) a receiving subsystem electrically connected to the facial coupling subsystem and configured to receive, from the user's body via a compliant electrode of the facial coupling subsystem, the biopotential signal, and (3) a detection subsystem electrically connected to the receiving subsystem and configured to (a) determine a characteristic of the biopotential signal and (b) use the characteristic of the biopotential signal to determine a gaze direction of an eye of the user and/or a facial gesture of the user. In some examples, the facial coupling subsystem may include a plurality of compliant electrodes that each are configured to comply in a direction normal to a surface of the user's face. Various other apparatus, systems, and methods are also disclosed.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: May 19, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Mohsen Shahmohammadi, Ying Yang, Yaser Sheikh, Hernan Badino, James David White
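    The detection subsystem "determines a characteristic of the biopotential signal" and maps it to a gaze direction or facial gesture. As one loose illustration, the sketch computes two invented features (low-band spectral energy, amplitude) and does a nearest-prototype lookup; the features, prototypes, and gesture names are all hypothetical.
    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def signal_characteristic(signal):
        """Two assumed features of the biopotential signal: low-frequency
        spectral energy and overall amplitude."""
        spectrum = np.abs(np.fft.rfft(signal))
        return np.array([spectrum[:10].sum(), signal.std()])

    def classify_gesture(features, prototypes):
        """Nearest-prototype lookup from signal features to a gesture label."""
        names = list(prototypes)
        dists = [np.linalg.norm(features - prototypes[n]) for n in names]
        return names[int(np.argmin(dists))]

    prototypes = {                       # hypothetical calibration per gesture
        "look_left": np.array([40.0, 1.0]),
        "look_right": np.array([80.0, 1.5]),
        "smile": np.array([20.0, 3.0]),
    }
    signal = rng.standard_normal(256)    # one window of electrode samples
    print(classify_gesture(signal_characteristic(signal), prototypes))
    ```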
  • Patent number: 10636193
    Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various image capture devices that capture images of portions of the user's face and body. Through image analysis, points of each portion of the user's face and body are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face and to a three dimensional model of a body. From the identified points, animation parameters describing positioning of various points of the user's face and body are determined for each captured image. From the animation parameters and transforms mapping the captured images to three dimensions, the three dimensional model of the face and the three dimensional model of the body are altered to render movement of the user's face and body.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: April 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yaser Sheikh, Hernan Badino, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch, Jason Saragih, Shih-En Wei
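    The chain here is: tracked 2D points yield animation parameters per frame, which a transform applies to the 3D models. The sketch below uses per-point displacements as the parameters and a fixed point-to-vertex index as the transform; both are invented simplifications of whatever mapping the patent describes.
    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def animation_parameters(tracked_points, rest_points):
        """Per-point displacement of the tracked points from a neutral frame."""
        return tracked_points - rest_points

    def animate(model_vertices, mapping, params):
        """Apply the parameters to a 3D model through a fixed point-to-vertex
        mapping (a stand-in for the learned transform)."""
        disp_3d = np.pad(params[mapping], ((0, 0), (0, 1)))  # lift 2D -> 3D
        return model_vertices + 0.1 * disp_3d

    rest = rng.random((30, 2))                 # identified points, neutral frame
    tracked = rest + 0.05 * rng.standard_normal((30, 2))   # points this frame
    model = rng.standard_normal((200, 3))      # 3D face/body model vertices
    mapping = rng.integers(0, 30, size=200)    # nearest tracked point per vertex

    posed = animate(model, mapping, animation_parameters(tracked, rest))
    print(posed.shape)   # (200, 3)
    ```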
  • Patent number: 10586370
    Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: March 10, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
  • Patent number: 10535155
    Abstract: Systems and methods for articulated pose estimation are provided. Some embodiments include training a convolutional neural network for object pose estimation, which includes receiving a two-dimensional training image of an articulated object that has a plurality of components and identifying, from the two-dimensional training image, at least one key point for each of the plurality of components. Some embodiments also include testing the accuracy of the object pose estimation, which includes visualizing a three-or-more-dimensional pose of each of the plurality of components of the articulated object from a two-dimensional testing image and providing data related to the visualization for output.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: January 14, 2020
    Assignees: Toyota Motor Engineering & Manufacturing North America, Inc., Carnegie Mellon University
    Inventors: Zhe Cao, Qi Zhu, Yaser Sheikh, Suhas E. Chelian
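    Identifying "at least one key point for each component" is commonly trained with per-component Gaussian heatmap targets; the sketch below builds such targets and reads a key point back as an argmax. This is the conventional technique, assumed here for illustration, not necessarily the patent's specific formulation.
    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def keypoint_heatmap(height, width, keypoint, sigma=2.0):
        """Gaussian heatmap centered on one component's key point, a common
        training target for CNN-based pose estimation."""
        ys, xs = np.mgrid[0:height, 0:width]
        d2 = (xs - keypoint[0]) ** 2 + (ys - keypoint[1]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # one key point per articulated component (e.g. links of a robot arm)
    components = {"base": (10, 12), "elbow": (30, 28), "gripper": (50, 44)}
    heatmaps = np.stack([keypoint_heatmap(64, 64, kp)
                         for kp in components.values()])
    print(heatmaps.shape)            # (3, 64, 64), one channel per component

    # recover the estimated key point as the heatmap argmax
    est = np.unravel_index(heatmaps[1].argmax(), heatmaps[1].shape)
    print(est)                       # (28, 30) -> (row, col) of the elbow
    ```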
  • Patent number: 10529113
    Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various image capture devices that capture images of portions of the user's face. Through image analysis, points of each portion of the user's face are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face. From the identified points, a blendshape vector is determined for each captured image, resulting in various vectors indicating the user's facial expressions. A direct expression model that directly maps images to blendshape coefficients for a set of facial expressions based on captured information from a set of users may augment the blendshape vector in various embodiments. From the blendshape vectors and transforms mapping the captured images to three dimensions, the three dimensional model of the face is altered to render the user's facial expressions.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: January 7, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yaser Sheikh, Hernan Badino, Jason Saragih, Shih-En Wei, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch
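    The blendshape vector this abstract determines per frame feeds the standard blendshape model: a neutral mesh deformed by a weighted sum of shape offsets. The sketch below shows that standard formula with invented shapes and coefficients; the direct expression model that predicts the coefficients is not represented.
    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def apply_blendshapes(neutral, blendshapes, coefficients):
        """Deform the neutral face by a weighted sum of blendshape offsets,
        the model that the abstract's blendshape vectors feed into."""
        return neutral + np.tensordot(coefficients, blendshapes, axes=1)

    n_vertices, n_shapes = 500, 6
    neutral = rng.standard_normal((n_vertices, 3))       # neutral 3D face mesh
    blendshapes = 0.05 * rng.standard_normal((n_shapes, n_vertices, 3))

    # blendshape vector for one captured frame (hypothetical weights)
    coeffs = np.array([0.8, 0.0, 0.2, 0.0, 0.0, 0.1])
    expressive = apply_blendshapes(neutral, blendshapes, coeffs)
    print(expressive.shape)   # (500, 3)
    ```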