Patents by Inventor Menglei Chai
Menglei Chai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250252660
Abstract: Three-dimensional object representation and re-rendering systems and methods for producing a 3D representation of an object from 2D images that include the object, enabling object-centric rendering. A modular approach first optimizes a Neural Radiance Field (NeRF) model to estimate object geometry and refine camera parameters, and then infers surface material properties and per-image lighting conditions that fit the 2D images.
Type: Application
Filed: April 28, 2025
Publication date: August 7, 2025
Inventors: Kyle Olszewski, Sergey Tulyakov, Zhengfei Kuang, Menglei Chai
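The two-stage, modular structure the abstract describes can be illustrated in a few lines. Below is a minimal sketch, not the patented method: stage 1 optimizes a toy geometry network jointly with per-image camera refinements, and stage 2 freezes geometry and fits global material plus per-image lighting. All tensors, sizes, and the shading model are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_images, rays = 8, 128
geometry = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))  # point -> density
camera_deltas = nn.Parameter(torch.zeros(n_images, 3))  # per-image pose refinement (translation only here)
material = nn.Parameter(torch.rand(3))                  # global albedo (assumed material model)
lighting = nn.Parameter(torch.rand(n_images, 3))        # per-image light color

points = torch.rand(n_images, rays, 3)      # stand-in ray sample positions
silhouette = torch.rand(n_images, rays, 1)  # stand-in geometry supervision
colors = torch.rand(n_images, rays, 3)      # stand-in observed pixel colors

# Stage 1: optimize geometry jointly with camera refinements.
opt1 = torch.optim.Adam(list(geometry.parameters()) + [camera_deltas], lr=1e-3)
for _ in range(100):
    opt1.zero_grad()
    density = torch.sigmoid(geometry(points + camera_deltas[:, None, :]))
    loss = ((density - silhouette) ** 2).mean()  # placeholder geometry loss
    loss.backward()
    opt1.step()

# Stage 2: freeze geometry; fit material and per-image lighting to the photos.
for p in geometry.parameters():
    p.requires_grad_(False)
frozen_deltas = camera_deltas.detach()
opt2 = torch.optim.Adam([material, lighting], lr=1e-2)
for _ in range(100):
    opt2.zero_grad()
    density = torch.sigmoid(geometry(points + frozen_deltas[:, None, :]))
    rendered = density * material * lighting[:, None, :]  # toy shading, not a BRDF
    loss = ((rendered - colors) ** 2).mean()
    loss.backward()
    opt2.step()
```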
-
Publication number: 20250209710
Abstract: The subject technology generates a first image of a face using a GAN model. The subject technology applies 3D virtual hair to the first image to generate a second image with 3D virtual hair. The subject technology projects the second image with 3D virtual hair into a GAN latent space to generate a third image with realistic virtual hair. The subject technology blends the realistic virtual hair with the first image of the face to generate a new image with new realistic hair that corresponds to the 3D virtual hair. The subject technology trains a neural network that receives the second image with the 3D virtual hair and provides an output image with realistic virtual hair. The subject technology generates, using the trained neural network, a particular output image with realistic hair based on a particular input image with 3D virtual hair.
Type: Application
Filed: March 7, 2025
Publication date: June 26, 2025
Inventors: Aleksandr Belskikh, Menglei Chai, Antoine Chassang, Anna Kovalenko, Pavel Savchenkov
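The projection step is standard GAN inversion, which the following minimal sketch shows with a toy generator standing in for the GAN: a latent code is optimized until the generator reproduces the hair composite, and the recovered hair region is then blended back onto the face. All names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

latent_dim, img_pixels = 32, 16 * 16 * 3
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_pixels), nn.Sigmoid())  # toy generator

face = torch.rand(img_pixels)          # step 1: GAN-generated face (stand-in)
hair_overlay = torch.rand(img_pixels)  # step 2: face composited with rasterized 3D hair
hair_mask = (torch.rand(img_pixels) > 0.5).float()  # hair region (stand-in)

# Step 3: project the composite into latent space (standard GAN-inversion loop).
z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((G(z) - hair_overlay) ** 2).mean()
    loss.backward()
    opt.step()

# Step 4: blend the "realistic" hair from the projection back onto the face.
realistic = G(z).detach()
blended = hair_mask * realistic + (1 - hair_mask) * face
```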
-
Patent number: 12322027
Abstract: Domain adaptation frameworks for producing a 3D avatar generative adversarial network (GAN) capable of generating an avatar based on a single photographic image. The 3D avatar GAN is produced by training a target domain using an artistic dataset. Each artistic dataset includes a plurality of source images, each associated with a style type, such as caricature, cartoon, and comic. The domain adaptation framework in some implementations starts with a source domain that has been trained according to a 3D GAN and a target domain trained with a 2D GAN. The framework fine-tunes the 2D GAN by training it with the artistic datasets. The resulting 3D avatar GAN generates a 3D artistic avatar and provides an editing module for performing semantic and geometric edits.
Type: Grant
Filed: December 29, 2022
Date of Patent: June 3, 2025
Assignee: Snap Inc.
Inventors: Rameen Abdal, Menglei Chai, Hsin-Ying Lee, Aliaksandr Siarohin, Sergey Tulyakov, Peihao Zhu
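The fine-tuning step can be sketched as ordinary adversarial training of a pretrained generator against a discriminator that sees only the artistic target images. The real framework adapts a 3D-aware GAN; the toy MLPs and random dataset below are stand-ins.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 48), nn.Tanh())  # "pretrained" source generator
D = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1))              # target-domain critic
artistic_dataset = torch.rand(256, 48)  # caricature/cartoon/comic images, flattened stand-ins

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(100):
    real = artistic_dataset[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, 16))
    # Discriminator: artistic images are real, generator samples are fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()
    # Generator: drift the pretrained generator toward the artistic domain.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```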
-
Patent number: 12315075
Abstract: Three-dimensional object representation and re-rendering systems and methods for producing a 3D representation of an object from 2D images that include the object, enabling object-centric rendering. A modular approach first optimizes a Neural Radiance Field (NeRF) model to estimate object geometry and refine camera parameters, and then infers surface material properties and per-image lighting conditions that fit the 2D images.
Type: Grant
Filed: December 28, 2022
Date of Patent: May 27, 2025
Assignee: Snap Inc.
Inventors: Kyle Olszewski, Sergey Tulyakov, Zhengfei Kuang, Menglei Chai
-
Patent number: 12299810
Abstract: A method for applying lighting conditions to a virtual object in an augmented reality (AR) device is described. In one aspect, the method includes generating, using a camera of a mobile device, an image; accessing a virtual object corresponding to an object in the image; identifying lighting parameters of the virtual object based on a machine learning model that is pre-trained with a paired dataset, where the paired dataset includes synthetic source data and synthetic target data, the synthetic source data includes environment maps and 3D scans of items depicted in the environment maps, and the synthetic target data includes a synthetic sphere image rendered in the same environment map; applying the lighting parameters to the virtual object; and displaying, in a display of the mobile device, the shaded virtual object as a layer over the image.
Type: Grant
Filed: June 22, 2022
Date of Patent: May 13, 2025
Assignee: Snap Inc.
Inventors: Menglei Chai, Sergey Demyanov, Yunqing Hu, Istvan Marton, Daniil Ostashev, Aleksei Podkin
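The supervised setup lends itself to a short sketch: a small network is trained on synthetic (image, lighting) pairs and then queried at runtime on a camera frame. The nine outputs below stand in for something like spherical-harmonics lighting coefficients; the architecture and data are assumptions, not the patented model.

```python
import torch
import torch.nn as nn

# Toy regressor: image -> lighting parameters (9 values as an SH-like stand-in).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 9),
)

images = torch.rand(64, 3, 32, 32)  # stand-in for spheres rendered in environment maps
lighting = torch.rand(64, 9)        # stand-in target lighting parameters

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = ((model(images) - lighting) ** 2).mean()
    loss.backward()
    opt.step()

# At runtime: estimate lighting from a camera frame, then shade the virtual object.
params = model(torch.rand(1, 3, 32, 32))
```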
-
Patent number: 12299905
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
Type: Grant
Filed: August 18, 2023
Date of Patent: May 13, 2025
Assignee: Snap Inc.
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
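The per-region weighting lends itself to a least-squares view: each region contributes a small deformation basis over its vertices, adjacent regions share boundary vertices, and a per-frame solve yields the region coefficients. A minimal sketch with random bases and observations (all sizes are assumptions):

```python
import numpy as np

n_vertices, n_regions, coeffs_per_region = 60, 4, 5
rng = np.random.default_rng(0)

# Region membership: consecutive vertex blocks that overlap by one shared vertex.
regions = [np.arange(i * 15, min(i * 15 + 16, n_vertices)) for i in range(n_regions)]

# Stack each region's deformation basis into one global system over all
# vertex coordinates; shared vertices accumulate contributions from both regions.
A = np.zeros((n_vertices * 3, n_regions * coeffs_per_region))
for r, verts in enumerate(regions):
    basis = rng.normal(size=(len(verts) * 3, coeffs_per_region))
    rows = np.repeat(verts * 3, 3) + np.tile([0, 1, 2], len(verts))
    A[rows[:, None], r * coeffs_per_region + np.arange(coeffs_per_region)] += basis

observed = rng.normal(size=n_vertices * 3)      # tracked landmark offsets this frame
weights, *_ = np.linalg.lstsq(A, observed, rcond=None)  # region coefficients
tracked = (A @ weights).reshape(n_vertices, 3)  # reconstructed face deformation
```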
-
Publication number: 20250131571
Abstract: An image segmentation system to perform operations that include: causing display of an image within a graphical user interface of a client device; receiving a set of user inputs that identify portions of a background and foreground of the image; identifying a boundary of an object depicted within the image based on the set of user inputs; cropping the object from the image based on the boundary; and generating a media item based on the cropped object, wherein properties of the media item, such as its size and shape, are based on the boundary of the object.
Type: Application
Filed: December 19, 2024
Publication date: April 24, 2025
Inventors: Menglei Chai, David LeMieux, Shubham Vij, Ian Wehrman
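The described flow (user scribbles marking background and foreground, boundary extraction, crop) matches classic scribble-based segmentation. As a stand-in for whatever model the application actually covers, here is a minimal sketch using OpenCV's GrabCut on a synthetic image:

```python
import numpy as np
import cv2

# Synthetic image with an "object" to segment.
img = np.full((100, 100, 3), 40, np.uint8)
cv2.circle(img, (50, 50), 25, (200, 180, 160), -1)

# User inputs: everything starts as probable background, with one foreground
# scribble on the object and one background scribble along the top edge.
mask = np.full((100, 100), cv2.GC_PR_BGD, np.uint8)
mask[45:55, 45:55] = cv2.GC_FGD
mask[0:10, :] = cv2.GC_BGD

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# Boundary of the object, then a crop sized and shaped by that boundary.
fg = np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD]).astype(np.uint8)
ys, xs = np.nonzero(fg)
crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # the "media item"
```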
-
Patent number: 12277639
Abstract: Embodiments enable virtual hair generation. The virtual hair generation can be performed by generating a first image of a face using a GAN model; applying 3D virtual hair to the first image to generate a second image with 3D virtual hair; projecting the second image with 3D virtual hair into a GAN latent space to generate a third image with virtual hair; blending the virtual hair with the first image of the face to generate a new image with new virtual hair that corresponds to the 3D virtual hair; training a neural network that receives the second image with the 3D virtual hair and provides an output image with virtual hair; and generating, using the trained neural network, a particular output image with hair based on a particular input image with 3D virtual hair.
Type: Grant
Filed: December 30, 2022
Date of Patent: April 15, 2025
Assignee: Snap Inc.
Inventors: Aleksandr Belskikh, Menglei Chai, Antoine Chassang, Anna Kovalenko, Pavel Savchenkov
-
Patent number: 12272015
Abstract: A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
Type: Grant
Filed: May 2, 2024
Date of Patent: April 8, 2025
Assignee: Snap Inc.
Inventors: Artem Bondich, Menglei Chai, Oleksandr Pyshchenko, Jian Ren, Sergey Tulyakov
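The abstract enumerates the subnetworks explicitly, so their wiring can be sketched directly. The toy convolutional modules below show only the data flow; all sizes and architectures are assumptions.

```python
import torch
import torch.nn as nn

def conv(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

fake_hair_encoder = conv(1, 16)   # encodes structure maps from the 3D fake-hair model
structure_embed = conv(16, 16)    # shared cross-domain structure embedding
appearance_encoder = conv(3, 16)  # encodes the second person's real hair appearance
renderer = nn.Conv2d(32, 3, 3, padding=1)  # real appearance renderer

fake_structure = torch.rand(1, 1, 32, 32)  # from the 3D hair model + first image
real_image = torch.rand(1, 3, 32, 32)      # second person's photo

coded = fake_hair_encoder(fake_structure)            # coded fake hair structure
shared = structure_embed(coded)                      # "fake and real" hair structure
appearance = appearance_encoder(real_image)          # appearance map
synthesized = renderer(torch.cat([shared, appearance], dim=1))  # synthesized real image
```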
-
Publication number: 20250054199
Abstract: System and methods for compressing image-to-image models. Generative Adversarial Networks (GANs) have achieved success in generating high-fidelity images. An image compression system and method adds a novel variant of class-dependent parameters (CLADE), referred to as CLADE-Avg, which recovers the image quality without introducing extra computational cost. An extra layer of average smoothing is performed between the parameter and normalization layers. Compared to CLADE, this image compression system and method smooths abrupt boundaries and introduces more possible values for the scaling and shift. In addition, the kernel size for the average smoothing can be selected as a hyperparameter, such as a 3×3 kernel size. This method does not introduce extra multiplications but only additions, and thus does not introduce much computational overhead, as the division can be absorbed into the parameters after training.
Type: Application
Filed: October 22, 2024
Publication date: February 13, 2025
Inventors: Jian Ren, Menglei Chai, Sergey Tulyakov, Qing Jin
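The abstract is concrete enough to sketch: per-class scale and shift are looked up from the segmentation map (CLADE), and a 3×3 average smoothing is then applied to the resulting parameter maps before modulation (CLADE-Avg). A minimal sketch with toy shapes:

```python
import torch
import torch.nn.functional as F

n_classes, channels = 5, 8
gamma = torch.rand(n_classes, channels)  # per-class scale
beta = torch.rand(n_classes, channels)   # per-class shift

seg = torch.randint(0, n_classes, (1, 16, 16))  # semantic layout
x = torch.randn(1, channels, 16, 16)            # normalized activations

# CLADE: broadcast class parameters over the layout -> (1, C, H, W) maps.
gamma_map = gamma[seg].permute(0, 3, 1, 2)
beta_map = beta[seg].permute(0, 3, 1, 2)

# CLADE-Avg: 3x3 average smoothing of the parameter maps; the kernel size is
# a hyperparameter per the abstract, and averaging adds no multiplications
# beyond a constant divide that can be folded into the parameters.
gamma_map = F.avg_pool2d(gamma_map, 3, stride=1, padding=1)
beta_map = F.avg_pool2d(beta_map, 3, stride=1, padding=1)

out = x * gamma_map + beta_map
```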
-
Patent number: 12223657
Abstract: An image segmentation system to perform operations that include: causing display of an image within a graphical user interface of a client device; receiving a set of user inputs that identify portions of a background and foreground of the image; identifying a boundary of an object depicted within the image based on the set of user inputs; cropping the object from the image based on the boundary; and generating a media item based on the cropped object, wherein properties of the media item, such as its size and shape, are based on the boundary of the object.
Type: Grant
Filed: April 18, 2023
Date of Patent: February 11, 2025
Assignee: Snap Inc.
Inventors: Menglei Chai, David LeMieux, Shubham Vij, Ian Wehrman
-
Patent number: 12154303
Abstract: System and methods for compressing image-to-image models. Generative Adversarial Networks (GANs) have achieved success in generating high-fidelity images. An image compression system and method adds a novel variant of class-dependent parameters (CLADE), referred to as CLADE-Avg, which recovers the image quality without introducing extra computational cost. An extra layer of average smoothing is performed between the parameter and normalization layers. Compared to CLADE, this image compression system and method smooths abrupt boundaries and introduces more possible values for the scaling and shift. In addition, the kernel size for the average smoothing can be selected as a hyperparameter, such as a 3×3 kernel size. This method does not introduce extra multiplications but only additions, and thus does not introduce much computational overhead, as the division can be absorbed into the parameters after training.
Type: Grant
Filed: August 28, 2023
Date of Patent: November 26, 2024
Assignee: Snap Inc.
Inventors: Jian Ren, Menglei Chai, Sergey Tulyakov, Qing Jin
-
Patent number: 12141922
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Grant
Filed: June 29, 2023
Date of Patent: November 12, 2024
Assignee: Snap Inc.
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Soumyadip Sengupta
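The abstract summarizes the light-cone projection only at a high level. The sketch below shows just the underlying geometric primitive, not the patented construction: unprojecting a pixel grid into per-pixel viewing rays (assuming pinhole intrinsics) and projecting per-pixel feature vectors onto those rays.

```python
import numpy as np

h, w, f = 48, 64, 60.0
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]])  # assumed pinhole intrinsics

# One viewing ray per pixel: unproject homogeneous pixel coordinates.
u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
pixels = np.stack([u, v, np.ones_like(u)], axis=-1)
rays = pixels @ np.linalg.inv(K).T
rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

# Project per-pixel feature vectors onto their rays (vector projection).
features = np.random.rand(h, w, 3)  # stand-in for image-derived vectors
projection = (features * rays).sum(-1, keepdims=True) * rays
```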
-
Patent number: 12094073
Abstract: Systems, computer readable media, and methods herein describe an editing system where a three-dimensional (3D) object can be edited by editing a 2D sketch or 2D RGB views of the 3D object. The editing system uses multi-modal (MM) variational auto-decoders (VADs) (MM-VADs) that are trained with a shared latent space that enables editing 3D objects by editing 2D sketches of the 3D objects. The system determines a latent code that corresponds to an edited or sketched 2D sketch. The latent code is then used to generate a 3D object using the MM-VADs with the latent code as input. The latent space is divided into a latent space for shapes and a latent space for colors. The MM-VADs are trained with variational auto-encoders (VAEs) and ground truth data.
Type: Grant
Filed: July 22, 2022
Date of Patent: September 17, 2024
Assignee: Snap Inc.
Inventors: Menglei Chai, Sergey Tulyakov, Jian Ren, Hsin-Ying Lee, Kyle Olszewski, Zeng Huang, Zezhou Cheng
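The editing loop is an auto-decoder search: optimize a latent code until a 2D-sketch decoder reproduces the user's edited sketch, then feed the same code to the 3D decoder. A minimal sketch with toy decoders and the shape/color latent split the abstract mentions; everything here is an illustrative assumption.

```python
import torch
import torch.nn as nn

shape_dim, color_dim = 16, 8
sketch_decoder = nn.Sequential(nn.Linear(shape_dim, 128), nn.ReLU(),
                               nn.Linear(128, 64))   # shape latent -> flattened 2D sketch
object_decoder = nn.Sequential(nn.Linear(shape_dim + color_dim, 128), nn.ReLU(),
                               nn.Linear(128, 256))  # full latent -> flattened 3D object

edited_sketch = torch.rand(64)  # the user's edited 2D sketch (stand-in)
z_shape = torch.zeros(shape_dim, requires_grad=True)
z_color = torch.zeros(color_dim)

# Find the latent that explains the edited sketch (only shape is constrained).
opt = torch.optim.Adam([z_shape], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((sketch_decoder(z_shape) - edited_sketch) ** 2).mean()
    loss.backward()
    opt.step()

# Decode the edited 3D object from the recovered latent code.
obj = object_decoder(torch.cat([z_shape.detach(), z_color]))
```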
-
Publication number: 20240282066
Abstract: A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
Type: Application
Filed: May 2, 2024
Publication date: August 22, 2024
Inventors: Artem Bondich, Menglei Chai, Oleksandr Pyshchenko, Jian Ren, Sergey Tulyakov
-
Publication number: 20240273809
Abstract: Methods and systems are disclosed for performing operations for generating a 3D model of a scene. The operations include: receiving a set of two-dimensional (2D) images representing a first view of a real-world environment; applying a machine learning model comprising a neural light field network to the set of 2D images to predict pixel values of a target image representing a second view of the real-world environment, the machine learning model being trained to map a ray origin and direction directly to a given pixel value; and generating a three-dimensional (3D) model of the real-world environment based on the set of 2D images and the predicted target image.
Type: Application
Filed: April 24, 2024
Publication date: August 15, 2024
Inventors: Zeng Huang, Jian Ren, Sergey Tulyakov, Menglei Chai, Kyle Olszewski, Huan Wang
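The defining property of a neural light field, a direct map from ray origin and direction to a pixel value with no per-ray volume sampling, fits in a few lines. The training data below is random; a real setup would derive rays and colors from the posed input images, and the architecture is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# MLP: (origin, direction) -> RGB, one forward pass per ray, no ray marching.
mlp = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3), nn.Sigmoid())

origins = torch.rand(1024, 3)
directions = F.normalize(torch.randn(1024, 3), dim=-1)
colors = torch.rand(1024, 3)  # stand-in ground-truth pixel values

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    pred = mlp(torch.cat([origins, directions], dim=-1))
    loss = ((pred - colors) ** 2).mean()
    loss.backward()
    opt.step()

# Novel view: query the rays of the target camera directly.
novel_pixel = mlp(torch.cat([torch.zeros(1, 3), torch.tensor([[0.0, 0.0, 1.0]])], dim=-1))
```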
-
Patent number: 12056792
Abstract: Systems and methods herein describe a motion retargeting system. The motion retargeting system accesses a plurality of two-dimensional images comprising a person performing a plurality of body poses, extracts a plurality of implicit volumetric representations from the plurality of body poses, generates a three-dimensional warping field, the three-dimensional warping field configured to warp the plurality of implicit volumetric representations from a canonical pose to a target pose, and based on the three-dimensional warping field, generates a two-dimensional image of an artificial person performing the target pose.
Type: Grant
Filed: December 21, 2021
Date of Patent: August 6, 2024
Assignee: Snap Inc.
Inventors: Jian Ren, Menglei Chai, Oliver Woodford, Kyle Olszewski, Sergey Tulyakov
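The canonical-to-target warp can be illustrated with a dense 3D flow applied to a feature volume via grid sampling. The volume and warping field below are random stand-ins for what a real system would predict from the input images.

```python
import torch
import torch.nn.functional as F

canonical = torch.rand(1, 8, 16, 16, 16)  # implicit features in the canonical pose

# Identity sampling grid in [-1, 1]^3, shape (1, D, H, W, 3) with (x, y, z) order.
coords = torch.linspace(-1, 1, 16)
zz, yy, xx = torch.meshgrid(coords, coords, coords, indexing="ij")
grid = torch.stack([xx, yy, zz], dim=-1).unsqueeze(0)

# Predicted canonical -> target offsets (the 3D warping field), random here.
flow = 0.1 * torch.randn(1, 16, 16, 16, 3)
warped = F.grid_sample(canonical, grid + flow, align_corners=True)

# "warped" would then be decoded into a 2D image of the target pose.
```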
-
Publication number: 20240221309
Abstract: An environment synthesis framework generates virtual environments from a synthesized two-dimensional (2D) satellite map of a geographic area, a three-dimensional (3D) voxel environment, and a voxel-based neural rendering framework. In an example implementation, the synthesized 2D satellite map is generated by a map synthesis generative adversarial network (GAN) which is trained using sample city datasets. The multi-stage framework lifts the 2D map into a set of 3D octrees, generates an octree-based 3D voxel environment, and then converts it into a texturized 3D virtual environment using a neural rendering GAN and a set of pseudo ground truth images. The resulting 3D virtual environment is texturized, lifelike, editable, traversable in virtual reality (VR) and augmented reality (AR) experiences, and very large in scale.
Type: Application
Filed: December 29, 2022
Publication date: July 4, 2024
Inventors: Menglei Chai, Hsin-Ying Lee, Chieh Lin, Willi Menapace, Aliaksandr Siarohin, Sergey Tulyakov
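The lifting step can be sketched as extruding a 2D semantic map (plus a height channel) into a labeled voxel grid, the structure the framework then converts to octrees and texturizes with the neural rendering GAN. The map below is random; in the framework it would come from the map-synthesis GAN, and the label scheme is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
size, max_height = 64, 16

# Synthesized 2D map: a semantic class per cell, plus a height for buildings.
semantic = rng.integers(0, 4, (size, size))  # 0 road, 1 grass, 2 water, 3 building (assumed)
height = np.where(semantic == 3, rng.integers(3, max_height, (size, size)), 1)

# Lift: extrude each map cell up to its height; store a label per voxel.
voxels = np.zeros((size, size, max_height), dtype=np.int8)
z = np.arange(max_height)
voxels[z[None, None, :] < height[:, :, None]] = 1
labels = np.where(voxels, semantic[:, :, None] + 1, 0)  # 0 = empty space
```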
-
Publication number: 20240221314
Abstract: Invertible Neural Networks (INNs) are used to build an Invertible Neural Skinning (INS) pipeline for reposing characters during animation. A Pose-conditioned Invertible Network (PIN) is built to learn pose-conditioned deformations. The end-to-end Invertible Neural Skinning (INS) pipeline is produced by placing two PINs around a differentiable Linear Blend Skinning (LBS) module using a pose-free canonical representation. The PINs help capture the non-linear surface deformations of clothes across poses and alleviate the volume loss suffered from the LBS operation. Since the canonical representation remains pose-free, the expensive mesh extraction is performed exactly once, and the mesh is reposed by warping it with the learned LBS during an inverse pass through the INS pipeline.
Type: Application
Filed: December 29, 2022
Publication date: July 4, 2024
Inventors: Menglei Chai, Riza Alp Guler, Yash Mukund Kant, Jian Ren, Aliaksandr Siarohin, Sergey Tulyakov
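A minimal sketch of the layout: a pose-conditioned affine coupling layer (invertible by construction, standing in for a PIN) wraps a differentiable LBS step. Only the forward canonical-to-posed pass and the coupling layer's exact invertibility are shown; the actual INS pipeline is substantially more involved, and every module here is a toy assumption.

```python
import torch
import torch.nn as nn

class PoseCoupling(nn.Module):
    """Invertible affine coupling over (x, y, z), conditioned on a pose code."""
    def __init__(self, pose_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + pose_dim, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, p, pose, inverse=False):
        a, b = p[:, :1], p[:, 1:]  # split coordinates: a passes through, b is transformed
        s, t = self.net(torch.cat([a, pose], dim=1)).split(2, dim=1)
        b = (b - t) * torch.exp(-s) if inverse else b * torch.exp(s) + t
        return torch.cat([a, b], dim=1)

def lbs(p, weights, rotations, translations):
    """Differentiable linear blend skinning: weighted blend of bone transforms."""
    return torch.einsum("nb,bij,nj->ni", weights, rotations, p) + weights @ translations

n_pts, n_bones = 64, 3
pts = torch.randn(n_pts, 3)                       # canonical (pose-free) surface points
pose = torch.randn(n_pts, 8)                      # pose conditioning code
w = torch.softmax(torch.randn(n_pts, n_bones), dim=1)
R = torch.eye(3).expand(n_bones, 3, 3).clone()    # toy bone rotations
t = torch.randn(n_bones, 3)                       # toy bone translations

pin_in, pin_out = PoseCoupling(), PoseCoupling()
posed = pin_out(lbs(pin_in(pts, pose), w, R, t), pose)  # PIN -> LBS -> PIN

# The coupling layer inverts exactly, which is what makes the pipeline reversible.
recon = pin_in(pin_in(pts, pose), pose, inverse=True)
assert torch.allclose(recon, pts, atol=1e-4)
```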
-
Publication number: 20240221259
Abstract: The subject technology generates a first image of a face using a GAN model. The subject technology applies 3D virtual hair to the first image to generate a second image with 3D virtual hair. The subject technology projects the second image with 3D virtual hair into a GAN latent space to generate a third image with realistic virtual hair. The subject technology blends the realistic virtual hair with the first image of the face to generate a new image with new realistic hair that corresponds to the 3D virtual hair. The subject technology trains a neural network that receives the second image with the 3D virtual hair and provides an output image with realistic virtual hair. The subject technology generates, using the trained neural network, a particular output image with realistic hair based on a particular input image with 3D virtual hair.
Type: Application
Filed: December 30, 2022
Publication date: July 4, 2024
Inventors: Aleksandr Belskikh, Menglei Chai, Antoine Chassang, Anna Kovalenko, Pavel Savchenkov