Patents by Inventor Chenglei Wu
Chenglei Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240320917
Abstract: A method and system for cloth registration that improve clothing modeling by providing, for example, wrinkle-accurate cloth registration. The method includes obtaining an input scan of clothing in motion. The method includes generating a mesh representing the clothing in the scan based on a diffusion-based shape prior. The method includes registering a model of the clothing from the scan using a guidance process including at least: guiding deformation of the clothing based on a coarse registration signal based on the mesh, and guiding the deformation of the clothing based on a distance between points in the mesh and a template mesh.
Type: Application
Filed: March 19, 2024
Publication date: September 26, 2024
Inventors: Shunsuke Saito, Jingfan Guo, Chenglei Wu, Fabian Andres Prada, Donglai Xiang, Javier Romero, Takaaki Shiratori, Hyun Soo Park
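The abstract names two guidance terms: a coarse registration signal derived from the prior-generated mesh, and a point distance between that mesh and a template mesh. The sketch below shows one plausible way to combine such terms into a single registration energy; it is not the patented method, and the function names, the given correspondence array `corr`, and the weights are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_dists(src, dst):
    """For each point in `src`, distance to its closest point in `dst`."""
    # (N, M) pairwise distances; fine for small meshes in a sketch.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1)

def guidance_energy(template_verts, prior_verts, corr,
                    coarse_weight=1.0, dist_weight=0.1):
    """Combine a coarse correspondence term with a nearest-point term.

    template_verts: (N, 3) deforming template vertices
    prior_verts:    (M, 3) mesh generated by the diffusion shape prior
    corr:           (N,) index of the prior vertex each template vertex
                    is coarsely registered to (assumed given)
    """
    coarse = np.sum((template_verts - prior_verts[corr]) ** 2)  # coarse signal
    fine = np.sum(nearest_neighbor_dists(prior_verts, template_verts) ** 2)
    return coarse_weight * coarse + dist_weight * fine
```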
-
Publication number: 20220237879
Abstract: A method for training a real-time, direct clothing model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject, forming a three-dimensional clothing mesh and a three-dimensional body mesh based on the images of the subject, and aligning the three-dimensional clothing mesh to the three-dimensional body mesh to form a skin-clothing boundary and a garment texture. The method also includes determining a loss factor based on a predicted cloth position and garment texture and an interpolated position and garment texture from the images of the subject, and updating a three-dimensional model including the three-dimensional clothing mesh and the three-dimensional body mesh according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to cause the system to execute the above method are also provided.
Type: Application
Filed: January 14, 2022
Publication date: July 28, 2022
Inventors: Chenglei Wu, Fabian Andres Prada Nino, Timur Bagautdinov, Weipeng Xu, Jessica Hodgins, Donglai Xiang
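The loss factor described above compares a predicted cloth position and texture against the position and texture interpolated from the captured images. A minimal sketch of such a combined loss follows; the choice of L2 for geometry, L1 for texture, and the weights are illustrative assumptions, not terms taken from the application.

```python
import numpy as np

def clothing_loss(pred_verts, pred_texture, target_verts, target_texture,
                  geom_weight=1.0, tex_weight=1.0):
    """Position term (L2 over mesh vertices) plus texture term (L1 over
    texel values); targets are the values interpolated from the images."""
    geom = np.mean((pred_verts - target_verts) ** 2)
    tex = np.mean(np.abs(pred_texture - target_texture))
    return geom_weight * geom + tex_weight * tex
```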
-
Patent number: 11182947
Abstract: In one embodiment, a system may access a codec that encodes an appearance associated with a subject and comprises codec portions that respectively correspond to body parts of the subject. The system may generate a training codec that comprises a first subset of the codec portions (a first set of body parts) and a modified second subset of the codec portions (muted body parts). The system may decode the training codec using a machine-learning model to generate a mesh of the subject. The system may transform the mesh of the subject based on a predetermined pose. The system may update the machine-learning model based on a comparison between the transformed mesh and a target mesh of the subject having the predetermined pose. The system in the present application can train a machine-learning model to render an avatar with a pose using uncorrelated codec portions corresponding to different body parts.
Type: Grant
Filed: April 17, 2020
Date of Patent: November 23, 2021
Assignee: Facebook Technologies, LLC
Inventors: Chenglei Wu, Jason Saragih, Tomas Simon Kreuz, Takaaki Shiratori
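Constructing the training codec amounts to keeping one subset of per-body-part codec portions and replacing the rest with "muted" values. The abstract does not say how the second subset is modified; the sketch below assumes overwriting with a constant, and the dictionary layout and names are illustrative.

```python
import numpy as np

def make_training_codec(codec_parts, keep, mute_value=0.0):
    """codec_parts: dict of body-part name -> latent vector.
    Parts listed in `keep` pass through unchanged; every other part is
    muted by overwriting it with a constant vector (one plausible
    reading of 'modified')."""
    return {name: (vec if name in keep else np.full_like(vec, mute_value))
            for name, vec in codec_parts.items()}

# Example: keep the face portion, mute torso and hands.
codec = {p: np.random.randn(64) for p in ("face", "torso", "hands")}
training_codec = make_training_codec(codec, keep={"face"})
```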
-
Patent number: 11087521
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Grant
Filed: January 29, 2020
Date of Patent: August 10, 2021
Assignee: Facebook Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
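The pipeline this abstract describes (and the two related filings below repeat) jointly encodes texture and geometry into one latent vector, then decodes geometry plus a view-dependent texture for a target viewpoint. It can be sketched as a small autoencoder; the layer sizes, MLP structure, and conditioning on a 3-vector view direction are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Jointly encodes a texture map and mesh geometry into one latent
    vector, then decodes geometry and a view-dependent texture."""

    def __init__(self, tex_dim, geo_dim, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(tex_dim + geo_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        self.geo_decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, geo_dim))
        # Texture decoder is conditioned on the viewpoint (a 3-vector here).
        self.tex_decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, tex_dim))

    def forward(self, texture, geometry, view_dir):
        z = self.encoder(torch.cat([texture, geometry], dim=-1))  # latent vector
        geo = self.geo_decoder(z)                                 # inferred geometry
        tex = self.tex_decoder(torch.cat([z, view_dir], dim=-1))  # view-dependent texture
        return geo, tex, z
```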
-
Patent number: 10616550
Abstract: Multiple cameras with different orientations capture images of an object positioned at a target position relative to the cameras. Images from each camera are processed in parallel to determine depth information from correspondences between different regions within each captured image. Depth information for the images from each camera is modified in parallel based on shading information for the images and stereoscopic information from the images. In various embodiments, the depth information is refined by minimizing a total energy combining the intensities of image portions having a common depth with shading information from images captured by multiple cameras. The modified depth information from the multiple images is combined to generate a reconstruction of the object positioned at the target position.
Type: Grant
Filed: September 14, 2018
Date of Patent: April 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: Chenglei Wu, Shoou-I Yu
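The refinement step minimizes an energy with a stereo (photo-consistency) term and a shading term per candidate depth. A minimal per-point sketch follows, assuming the stereo term is the variance of the intensities the cameras observe at the 3-D point a candidate depth implies, and the shading term compares their mean against a shading model's prediction; both readings, and the weight `lam`, are assumptions.

```python
import numpy as np

def stereo_shading_energy(cam_intensities, shaded_intensity, lam=0.5):
    """cam_intensities: (K,) intensity each of K cameras observes at the
    3-D point implied by a candidate depth; shaded_intensity: intensity a
    shading model predicts there. Depth is refined by minimizing this."""
    photo = np.var(cam_intensities)                         # stereo agreement
    shade = (np.mean(cam_intensities) - shaded_intensity) ** 2
    return photo + lam * shade
```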
-
Patent number: 10586370
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Grant
Filed: July 31, 2018
Date of Patent: March 10, 2020
Assignee: Facebook Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
-
Patent number: 10483004
Abstract: A system and method for non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient) are provided. A teeth statistical model defining individual teeth in a teeth row can be developed. The teeth statistical model can jointly describe shape and pose variations per tooth, as well as the placement of the individual teeth in the teeth row. In some embodiments, the teeth statistical model can be trained using teeth information from 3D scan data of different sample subjects. The 3D scan data can be used to establish a database of teeth of various shapes and poses. Geometry information regarding the individual teeth can be extracted from the 3D scan data. The teeth statistical model can be trained using the geometry information regarding the individual teeth.
Type: Grant
Filed: September 29, 2016
Date of Patent: November 19, 2019
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
Inventors: Chenglei Wu, Derek Bradley, Thabo Beeler, Markus Gross
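A statistical model that jointly captures per-tooth shape and pose variation is commonly realized with PCA over vectors stacked from scan data. The sketch below uses that standard construction (an assumption; the patent does not specify PCA) to fit such a model for one tooth from a scan database and to reconstruct a tooth from model coefficients.

```python
import numpy as np

def fit_tooth_model(samples):
    """samples: (S, D) stacked shape+pose vectors for one tooth across S
    scanned subjects. Returns the mean, principal components, and
    per-component standard deviations of a PCA shape model."""
    mean = samples.mean(axis=0)
    u, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
    stds = s / np.sqrt(max(len(samples) - 1, 1))
    return mean, vt, stds

def synthesize_tooth(mean, components, stds, coeffs):
    """Reconstruct a tooth's shape+pose vector from model coefficients,
    scaled by the per-component standard deviations."""
    return mean + (coeffs * stds) @ components
```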
-
Publication number: 20190213772
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Application
Filed: July 31, 2018
Publication date: July 11, 2019
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
-
Publication number: 20180085201
Abstract: A system and method for non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient) are provided. A teeth statistical model defining individual teeth in a teeth row can be developed. The teeth statistical model can jointly describe shape and pose variations per tooth, as well as the placement of the individual teeth in the teeth row. In some embodiments, the teeth statistical model can be trained using teeth information from 3D scan data of different sample subjects. The 3D scan data can be used to establish a database of teeth of various shapes and poses. Geometry information regarding the individual teeth can be extracted from the 3D scan data. The teeth statistical model can be trained using the geometry information regarding the individual teeth.
Type: Application
Filed: September 29, 2016
Publication date: March 29, 2018
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Chenglei Wu, Derek Bradley, Thabo Beeler, Markus Gross
-
Patent number: 9652890
Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization.
Type: Grant
Filed: September 29, 2015
Date of Patent: May 16, 2017
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
Inventors: Thabo Beeler, Derek Bradley, Chenglei Wu
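The optimization this abstract describes (and the three related filings below repeat) fits local patch deformations to observed motion data while the anatomical subspace constrains the patches relative to the bone structure. The sketch below shows one plausible per-patch energy with a data term and an anatomical term; the centroid-to-bone distance constraint and all names are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def patch_energy(patch_verts, observed_verts, bone_point, rest_dist, w_anat=1.0):
    """One patch's fitting energy: a data term pulling the patch toward
    the observed motion data, plus an anatomical term keeping the patch
    centroid at its rest distance from the underlying bone point."""
    data = np.sum((patch_verts - observed_verts) ** 2)
    d = np.linalg.norm(patch_verts.mean(axis=0) - bone_point)
    anat = (d - rest_dist) ** 2
    return data + w_anat * anat
```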
-
Patent number: 9639737
Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization.
Type: Grant
Filed: September 29, 2015
Date of Patent: May 2, 2017
Assignees: ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH), DISNEY ENTERPRISES, INC.
Inventors: Thabo Beeler, Derek Bradley, Chenglei Wu
-
Publication number: 20170091994
Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization.
Type: Application
Filed: September 29, 2015
Publication date: March 30, 2017
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Thabo Beeler, Derek Bradley, Chenglei Wu
-
Publication number: 20170091529
Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization.
Type: Application
Filed: September 29, 2015
Publication date: March 30, 2017
Applicants: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Thabo Beeler, Derek Bradley, Chenglei Wu