Patents by Inventor Menglei Chai

Menglei Chai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220292724
    Abstract: Systems and methods for compressing image-to-image models. Generative Adversarial Networks (GANs) have achieved success in generating high-fidelity images. The image compression system and method introduces a novel variant of class-adaptive normalization (CLADE), referred to as CLADE-Avg, which recovers image quality without introducing extra computational cost. An extra layer of average smoothing is performed between the parameter layer and the normalization layer. Compared to CLADE, this image compression system and method smooths abrupt boundaries and introduces more possible values for the scaling and shift. In addition, the kernel size for the average smoothing can be selected as a hyperparameter, such as a 3×3 kernel size. The method introduces only additions rather than extra multiplications, and thus adds little computational overhead, as the division can be absorbed into the parameters after training.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 15, 2022
    Inventors: Jian Ren, Menglei Chai, Sergey Tulyakov, Qing Jin
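The CLADE-Avg idea described in the abstract above lends itself to a short sketch: per-class scale and shift parameters are looked up from a segmentation map and average-smoothed before modulating normalized features. The module name, tensor shapes, kernel size, and the use of instance normalization below are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch of class-adaptive normalization with average smoothing
# (CLADE-Avg-style). Shapes, names, and the use of InstanceNorm are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAdaptiveNormAvg(nn.Module):
    def __init__(self, num_classes: int, num_channels: int, kernel_size: int = 3):
        super().__init__()
        # One scale/shift pair per semantic class and channel.
        self.gamma = nn.Parameter(torch.ones(num_classes, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_classes, num_channels))
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor, label_map: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features; label_map: (B, H, W) integer class ids.
        normalized = self.norm(x)
        # Look up per-pixel scale and shift from the class of each pixel.
        gamma = self.gamma[label_map].permute(0, 3, 1, 2)  # (B, C, H, W)
        beta = self.beta[label_map].permute(0, 3, 1, 2)
        # Average smoothing between the parameter layer and the modulation:
        # softens abrupt class boundaries using only additions (the division by
        # the kernel area can be folded into the parameters after training).
        pad = self.kernel_size // 2
        gamma = F.avg_pool2d(gamma, self.kernel_size, stride=1, padding=pad)
        beta = F.avg_pool2d(beta, self.kernel_size, stride=1, padding=pad)
        return normalized * gamma + beta

# Example usage with random features and a random segmentation map.
if __name__ == "__main__":
    layer = ClassAdaptiveNormAvg(num_classes=5, num_channels=8)
    feats = torch.randn(2, 8, 32, 32)
    labels = torch.randint(0, 5, (2, 32, 32))
    print(layer(feats, labels).shape)  # torch.Size([2, 8, 32, 32])
```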
  • Publication number: 20220207786
    Abstract: Systems and methods herein describe a motion retargeting system. The motion retargeting system accesses a plurality of two-dimensional images comprising a person performing a plurality of body poses, extracts a plurality of implicit volumetric representations from the plurality of body poses, generates a three-dimensional warping field, the three-dimensional warping field configured to warp the plurality of implicit volumetric representations from a canonical pose to a target pose, and based on the three-dimensional warping field, generates a two-dimensional image of an artificial person performing the target pose.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 30, 2022
    Inventors: Jian Ren, Menglei Chai, Oliver Woodford, Kyle Olszewski, Sergey Tulyakov
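A minimal sketch of the warping step described above: an implicit volumetric feature in a canonical pose is resampled toward a target pose by a 3D warping field. The tensor shapes, the identity-grid construction, and the toy flow are assumptions; the networks that predict the warping field and decode the final 2D image are not shown.

```python
# Minimal sketch of warping an implicit volumetric feature with a 3D warping
# field, in the spirit of the abstract above.
import torch
import torch.nn.functional as F

def identity_grid(depth, height, width, device="cpu"):
    """Normalized [-1, 1] sampling grid of shape (1, D, H, W, 3)."""
    zs = torch.linspace(-1, 1, depth, device=device)
    ys = torch.linspace(-1, 1, height, device=device)
    xs = torch.linspace(-1, 1, width, device=device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    return torch.stack((x, y, z), dim=-1).unsqueeze(0)

def warp_volume(canonical_feats, flow):
    """Warp canonical volumetric features toward a target pose.

    canonical_feats: (B, C, D, H, W) implicit volumetric representation.
    flow: (B, D, H, W, 3) offsets (in normalized coordinates) predicted by a
          warping-field network (not shown here).
    """
    b, _, d, h, w = canonical_feats.shape
    grid = identity_grid(d, h, w, canonical_feats.device).expand(b, -1, -1, -1, -1)
    return F.grid_sample(canonical_feats, grid + flow, align_corners=True)

if __name__ == "__main__":
    feats = torch.randn(1, 16, 8, 32, 32)       # canonical-pose features
    flow = 0.05 * torch.randn(1, 8, 32, 32, 3)  # small random warping field
    warped = warp_volume(feats, flow)
    print(warped.shape)  # (1, 16, 8, 32, 32); a 2D image would then be decoded
```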
  • Publication number: 20220101104
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for video synthesis. The program and method provide for accessing a primary generative adversarial network (GAN) comprising a pre-trained image generator, a motion generator comprising a plurality of neural networks, and a video discriminator; generating an updated GAN based on the primary GAN, by performing operations comprising identifying input data of the updated GAN, the input data comprising an initial latent code and a motion domain dataset, training the motion generator based on the input data, and adjusting weights of the plurality of neural networks of the primary GAN based on an output of the video discriminator; and generating a synthesized video based on the primary GAN and the input data.
    Type: Application
    Filed: September 30, 2021
    Publication date: March 31, 2022
    Inventors: Menglei Chai, Kyle Olszewski, Jian Ren, Yu Tian, Sergey Tulyakov
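The setup described above, a frozen pre-trained image generator driven by a trainable motion generator that unrolls a trajectory of latent codes, can be sketched roughly as follows. The GRU-based motion model and the placeholder generator are assumptions, not the patented architecture; training against a video discriminator is only noted in a comment.

```python
# Toy sketch: a frozen, pre-trained image generator plus a trainable motion
# generator that turns an initial latent code into a latent trajectory.
import torch
import torch.nn as nn

class MotionGenerator(nn.Module):
    """Predicts a sequence of latent codes from an initial latent code."""
    def __init__(self, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.to_residual = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z0: torch.Tensor, num_frames: int) -> torch.Tensor:
        codes, z, h = [z0], z0, None
        for _ in range(num_frames - 1):
            out, h = self.rnn(z.unsqueeze(1), h)
            z = z + self.to_residual(out.squeeze(1))  # small motion update
            codes.append(z)
        return torch.stack(codes, dim=1)  # (B, T, latent_dim)

if __name__ == "__main__":
    latent_dim, num_frames = 64, 8
    # Stand-in for a frozen, pre-trained image generator (e.g. a GAN generator).
    image_generator = nn.Sequential(nn.Linear(latent_dim, 3 * 16 * 16), nn.Tanh())
    for p in image_generator.parameters():
        p.requires_grad_(False)

    motion_generator = MotionGenerator(latent_dim)
    z0 = torch.randn(2, latent_dim)                # initial latent code
    trajectory = motion_generator(z0, num_frames)  # (2, T, 64)
    frames = image_generator(trajectory).view(2, num_frames, 3, 16, 16)
    print(frames.shape)  # synthesized video; a video discriminator would score it
```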
  • Publication number: 20220058880
    Abstract: A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
    Type: Application
    Filed: August 20, 2021
    Publication date: February 24, 2022
    Inventors: Artem Bondich, Menglei Chai, Oleksandr Pyshchenko, Jian Ren, Sergey Tulyakov
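A schematic sketch of how the subnetworks named in the abstract above compose: a fake-hair structure encoder, a cross-domain structure embedding, an appearance encoder, and a renderer that consumes both. The layer sizes and simple convolutional blocks are placeholder assumptions.

```python
# Schematic composition of the encoders and renderer described above; the
# real subnetworks are not specified here.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class NeuralHairRenderer(nn.Module):
    def __init__(self):
        super().__init__()
        self.fake_hair_encoder = conv_block(1, 32)        # encodes fake hair structure
        self.cross_domain_embedding = conv_block(32, 32)  # shared structure space
        self.appearance_encoder = conv_block(3, 32)       # encodes real hair appearance
        self.renderer = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 3, 1))

    def forward(self, fake_hair_structure, real_hair_image):
        coded = self.fake_hair_encoder(fake_hair_structure)
        structure = self.cross_domain_embedding(coded)          # fake-and-real structure
        appearance = self.appearance_encoder(real_hair_image)   # appearance map
        return self.renderer(torch.cat([structure, appearance], dim=1))

if __name__ == "__main__":
    model = NeuralHairRenderer()
    structure = torch.rand(1, 1, 64, 64)   # e.g. rasterized from a 3D fake-hair model
    appearance = torch.rand(1, 3, 64, 64)  # real hair image of the target person
    print(model(structure, appearance).shape)  # (1, 3, 64, 64) synthesized image
```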
  • Publication number: 20220036647
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Application
    Filed: October 12, 2021
    Publication date: February 3, 2022
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Publication number: 20210407163
    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data, identify driving image data to modify image feature data in the source image data, generate, using an image transformation neural network, modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data, the image transformation neural network being trained to identify, for each image in the source image data, a driving image from the driving image data, the identified driving image being used by the image transformation neural network to modify a corresponding source image in the source image data using motion estimation differences between the identified driving image and the corresponding source image, and store the modified source image data.
    Type: Application
    Filed: June 30, 2021
    Publication date: December 30, 2021
    Inventors: Menglei Chai, Jian Ren, Aliaksandr Siarohin, Sergey Tulyakov, Oliver Woodford
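As a rough illustration of driving-to-source motion differences, the sketch below turns per-keypoint displacements between a driving frame and a source frame into a dense flow and warps the source image with it. This zeroth-order, keypoint-based warp is a stand-in assumption, not the patented motion representation.

```python
# Simplified keypoint-displacement warp: differences between driving and source
# keypoints become a dense flow that warps the source image.
import torch
import torch.nn.functional as F

def dense_flow_from_keypoints(src_kp, drv_kp, height, width, sigma=0.1):
    """src_kp, drv_kp: (K, 2) keypoints in normalized [-1, 1] (x, y) coords."""
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((gx, gy), dim=-1)              # (H, W, 2)
    disp = src_kp - drv_kp                            # move toward source content
    # Gaussian weights around each driving keypoint select the part it controls.
    d2 = ((grid.unsqueeze(2) - drv_kp) ** 2).sum(-1)  # (H, W, K)
    w = torch.softmax(-d2 / (2 * sigma ** 2), dim=-1)
    flow = (w.unsqueeze(-1) * disp).sum(dim=2)        # (H, W, 2)
    return grid + flow

def warp(source_image, sampling_grid):
    # source_image: (1, C, H, W); sampling_grid: (H, W, 2) in [-1, 1].
    return F.grid_sample(source_image, sampling_grid.unsqueeze(0), align_corners=True)

if __name__ == "__main__":
    src = torch.rand(1, 3, 64, 64)
    src_kp = torch.tensor([[-0.5, 0.0], [0.5, 0.0]])
    drv_kp = torch.tensor([[-0.4, 0.1], [0.6, -0.1]])
    grid = dense_flow_from_keypoints(src_kp, drv_kp, 64, 64)
    print(warp(src, grid).shape)  # (1, 3, 64, 64) source warped toward the driving pose
```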
  • Patent number: 11164376
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 2, 2021
    Assignee: Snap Inc.
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Publication number: 20210319540
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 14, 2021
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
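A rough sketch of the texture-generation flow in the abstract above: locate an iris area inside an eye region, treat the remaining bright pixels as sclera, and compose a flat texture from the two. The synthetic test image, intensity thresholds, and hole-filling rule are illustrative assumptions.

```python
# Numpy sketch: segment iris and sclera areas inside an eye region and build a
# simple texture from them.
import numpy as np

def make_synthetic_eye(size=64):
    """White sclera with a dark circular iris in the middle."""
    img = np.full((size, size, 3), 230, dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    iris = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 < (size // 5) ** 2
    img[iris] = (70, 40, 20)
    return img

def extract_eye_texture(eye_region):
    gray = eye_region.mean(axis=2)
    iris_mask = gray < 100     # dark pixels -> iris area (assumed threshold)
    sclera_mask = gray >= 180  # bright pixels -> sclera area
    texture = np.zeros_like(eye_region)
    texture[sclera_mask] = eye_region[sclera_mask]
    # Fill the iris area with its mean color so the texture has no holes.
    texture[iris_mask] = eye_region[iris_mask].mean(axis=0).astype(np.uint8)
    return texture, iris_mask, sclera_mask

if __name__ == "__main__":
    eye = make_synthetic_eye()
    texture, iris, sclera = extract_eye_texture(eye)
    print(texture.shape, int(iris.sum()), int(sclera.sum()))
```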
  • Patent number: 11074675
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: July 27, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
  • Publication number: 20210192744
    Abstract: An image segmentation system performs operations that include causing display of an image within a graphical user interface of a client device, receiving a set of user inputs that identify portions of the background and foreground of the image, identifying a boundary of an object depicted within the image based on the set of user inputs, cropping the object from the image based on the boundary, and generating a media item based on the cropped object, wherein properties of the media item, such as its size and shape, are based on the boundary of the object.
    Type: Application
    Filed: March 2, 2021
    Publication date: June 24, 2021
    Inventors: Shubham Vij, Menglei Chai, David LeMieux, Ian Wehrman
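The scribble-driven segmentation and cropping described above can be approximated with OpenCV's GrabCut as a stand-in for the patented segmentation; the synthetic image, stroke locations, and iteration count below are assumptions.

```python
# Sketch: seed GrabCut with user foreground/background strokes, then crop the
# resulting object to its boundary.
import numpy as np
import cv2

def segment_and_crop(image, fg_strokes, bg_strokes, iterations=5):
    """fg_strokes / bg_strokes: boolean masks of user-marked pixels."""
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[bg_strokes] = cv2.GC_BGD  # user-marked background
    mask[fg_strokes] = cv2.GC_FGD  # user-marked foreground
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
    ys, xs = np.where(fg)
    # Crop the media item to the boundary of the segmented object.
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    cropped = image[y0:y1, x0:x1].copy()
    cropped[~fg[y0:y1, x0:x1]] = 0
    return cropped

if __name__ == "__main__":
    img = np.full((80, 80, 3), 40, np.uint8)
    img[20:60, 20:60] = (0, 200, 0)                         # a green "object"
    fg = np.zeros((80, 80), bool); fg[38:42, 38:42] = True  # foreground scribble
    bg = np.zeros((80, 80), bool); bg[0:5, :] = True        # background scribble
    print(segment_and_crop(img, fg, bg).shape)
```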
  • Publication number: 20210165998
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Application
    Filed: February 12, 2021
    Publication date: June 3, 2021
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
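A schematic sketch of blending per-region estimates at shared facial vertices using weights formed from region coefficients, as described in the abstract above. The specific weight rule (a softmax over summed coefficients) and the toy two-region data are assumptions.

```python
# Numpy sketch: blend overlapping facial-region vertex estimates with
# per-region weights formed from region coefficients.
import numpy as np

def blend_region_vertices(region_vertex_positions, region_coefficients):
    """region_vertex_positions: dict region -> (V, 3) positions over the full
    vertex set (NaN rows where the region does not cover a vertex).
    region_coefficients: dict region -> 1-D array of coefficients."""
    regions = list(region_vertex_positions)
    # One scalar weight per region, formed from its coefficients.
    raw = np.array([region_coefficients[r].sum() for r in regions])
    weights = np.exp(raw - raw.max())
    weights /= weights.sum()

    num_vertices = next(iter(region_vertex_positions.values())).shape[0]
    blended = np.zeros((num_vertices, 3))
    total = np.zeros((num_vertices, 1))
    for r, w in zip(regions, weights):
        pos = region_vertex_positions[r]
        covered = ~np.isnan(pos[:, 0:1])  # vertices this region covers
        blended += np.where(covered, pos, 0.0) * (w * covered)
        total += w * covered
    return blended / np.maximum(total, 1e-8)

if __name__ == "__main__":
    nan = np.nan
    # Two overlapping regions sharing vertex 1 (a common facial vertex).
    positions = {
        "left_cheek":  np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [nan, nan, nan]]),
        "right_cheek": np.array([[nan, nan, nan], [1.2, 0.8, 0.0], [2.0, 0.0, 0.0]]),
    }
    coeffs = {"left_cheek": np.array([0.3, 0.1]), "right_cheek": np.array([0.5])}
    print(blend_region_vertices(positions, coeffs))
```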
  • Patent number: 10964023
    Abstract: An image segmentation system performs operations that include causing display of an image within a graphical user interface of a client device, receiving a set of user inputs that identify portions of the background and foreground of the image, identifying a boundary of an object depicted within the image based on the set of user inputs, cropping the object from the image based on the boundary, and generating a media item based on the cropped object, wherein properties of the media item, such as its size and shape, are based on the boundary of the object.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: March 30, 2021
    Assignee: Snap Inc.
    Inventors: Shubham Vij, Menglei Chai, David LeMieux, Ian Wehrman
  • Patent number: 10949648
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex which is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: March 16, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
  • Patent number: 10665013
    Abstract: Provided is a fully automatic, single-image-based three-dimensional (3D) hair modeling method. The method mainly includes four steps: generation of hair image training data, hair segmentation and growth direction estimation based on a hierarchical deep neural network, generation and organization of 3D hair exemplars, and data-driven 3D hair modeling. The method can automatically and robustly generate a complete, high-quality 3D hair model whose quality reaches the level of the most advanced user-interaction-based techniques. The method can be used in a series of applications, such as hairstyle editing in portrait images, browsing of hairstyle spaces, and searching for Internet images with similar hairstyles.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: May 26, 2020
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Kun Zhou, Menglei Chai
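The growth-direction-estimation step can be illustrated with a classical baseline: a per-pixel orientation field computed from a bank of Gabor filters. The patent describes a hierarchical deep network; this OpenCV sketch is only a stand-in to show what a direction field is.

```python
# Classical orientation-field baseline: per-pixel dominant orientation from a
# bank of Gabor filters.
import numpy as np
import cv2

def hair_orientation_field(gray, num_orientations=16, ksize=15):
    """Return, per pixel, the filter orientation (radians) with maximal response."""
    responses = []
    thetas = np.linspace(0, np.pi, num_orientations, endpoint=False)
    for theta in thetas:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=3.0, theta=theta,
                                    lambd=7.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, kernel))
    responses = np.stack(responses, axis=0)              # (num_orientations, H, W)
    return thetas[np.argmax(np.abs(responses), axis=0)]  # (H, W) orientation field

if __name__ == "__main__":
    # Synthetic "hair" image: diagonal stripes.
    yy, xx = np.mgrid[0:64, 0:64]
    stripes = ((np.sin((xx + yy) * 0.5) > 0) * 255).astype(np.uint8)
    field = hair_orientation_field(stripes)
    print(field.shape, float(field.mean()))
```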
  • Publication number: 20200043145
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
  • Patent number: 10311623
    Abstract: Disclosed is a real-time motion simulation method for hair-object collisions. Based on a small amount of precomputed training data, the method generates a self-adaptive simplified model of the virtual hairstyle for real-time selection, interpolation, and collision correction, thereby achieving real-time, high-quality motion simulation of hair-object collisions. The method comprises the following steps: 1) reduced-model precomputation: based on precomputed simulation data, selecting representative hairs and generating a reduced model; 2) real-time animation and interpolation: clustering the representative hairs simulated in real time, then selecting the reduced model and interpolating; and 3) collision correction: detecting collisions and applying a correction force on the representative hairs to correct them. The present invention proposes a real-time simulation method for hair-object collisions that achieves an effect similar to off-line simulation while reducing computation time.
    Type: Grant
    Filed: February 15, 2015
    Date of Patent: June 4, 2019
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Kun Zhou, Menglei Chai, Changxi Zheng
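A toy sketch of the three steps listed in the abstract above: pick representative hairs, interpolate the remaining hairs from them, and correct collisions against an obstacle (here a sphere). The nearest-root interpolation and the sphere collider are illustrative assumptions, not the patented reduced model.

```python
# Toy reduced hair simulation: representatives, interpolation, collision correction.
import numpy as np

rng = np.random.default_rng(0)
NUM_HAIRS, NUM_REPS, POINTS_PER_HAIR = 200, 10, 12

# Strands hang from random roots on a scalp patch; shape (N, P, 3).
roots = rng.uniform(-1, 1, size=(NUM_HAIRS, 3))
roots[:, 1] = 1.0
rest = roots[:, None, :] + np.stack(
    [np.zeros((NUM_HAIRS, POINTS_PER_HAIR)),
     -np.linspace(0, 1, POINTS_PER_HAIR)[None, :].repeat(NUM_HAIRS, 0),
     np.zeros((NUM_HAIRS, POINTS_PER_HAIR))], axis=-1)

# 1) Reduced model: choose representative hairs (here: a random subset).
rep_ids = rng.choice(NUM_HAIRS, NUM_REPS, replace=False)

# 2) "Simulate" the representatives (toy sway), then interpolate every other
#    hair from its nearest representative root.
sway = np.array([0.3, 0.0, 0.0]) * np.linspace(0, 1, POINTS_PER_HAIR)[:, None]
simulated_reps = rest[rep_ids] + sway[None]
nearest = np.argmin(
    np.linalg.norm(roots[:, None, :] - roots[rep_ids][None, :, :], axis=-1), axis=1)
hairs = rest + (simulated_reps[nearest] - rest[rep_ids][nearest])

# 3) Collision correction: push penetrating points out of a sphere obstacle.
center, radius = np.array([0.0, 0.3, 0.0]), 0.4
offsets = hairs - center
dist = np.linalg.norm(offsets, axis=-1, keepdims=True)
inside = dist < radius
hairs = np.where(inside, center + offsets / np.maximum(dist, 1e-8) * radius, hairs)

print(hairs.shape, int(inside.sum()), "points corrected")
```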
  • Publication number: 20190051048
    Abstract: Provided is a fully automatic, single-image-based three-dimensional (3D) hair modeling method. The method mainly includes four steps: generation of hair image training data, hair segmentation and growth direction estimation based on a hierarchical deep neural network, generation and organization of 3D hair exemplars, and data-driven 3D hair modeling. The method can automatically and robustly generate a complete, high-quality 3D hair model whose quality reaches the level of the most advanced user-interaction-based techniques. The method can be used in a series of applications, such as hairstyle editing in portrait images, browsing of hairstyle spaces, and searching for Internet images with similar hairstyles.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: Kun Zhou, Menglei Chai
  • Publication number: 20180268591
    Abstract: Disclosed is a real-time motion simulation method for hair-object collisions. Based on a small amount of precomputed training data, the method generates a self-adaptive simplified model of the virtual hairstyle for real-time selection, interpolation, and collision correction, thereby achieving real-time, high-quality motion simulation of hair-object collisions. The method comprises the following steps: 1) reduced-model precomputation: based on precomputed simulation data, selecting representative hairs and generating a reduced model; 2) real-time animation and interpolation: clustering the representative hairs simulated in real time, then selecting the reduced model and interpolating; and 3) collision correction: detecting collisions and applying a correction force on the representative hairs to correct them. The present invention proposes a real-time simulation method for hair-object collisions that achieves an effect similar to off-line simulation while reducing computation time.
    Type: Application
    Filed: February 15, 2015
    Publication date: September 20, 2018
    Applicant: Zhejiang University
    Inventors: Kun Zhou, Menglei Chai, Changxi Zheng
  • Patent number: 9792725
    Abstract: The invention discloses a method for image- and video-based virtual hairstyle modeling, including: performing data acquisition of a target subject with a digital device and obtaining a hairstyle region from an image by segmentation; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image hairstyle orientation field; calculating the movement of the hairstyle in a video by tracking the movement of a head model and estimating non-rigid deformation; and generating a dynamic hairstyle model at every moment of the motion, so that the dynamic hairstyle model naturally fits the real movement of the hairstyle in the video. The method performs physically plausible virtual 3D model reconstruction of individual hairstyles from single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle-editing applications for images and videos.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: October 17, 2017
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
  • Patent number: 9679192
    Abstract: Systems and methods are disclosed herein for 3-Dimensional portrait reconstruction from a single photo. A face portion of a person depicted in a portrait photo is detected and a 3-Dimensional model of the person depicted in the portrait photo constructed. In one embodiment, constructing the 3-Dimensional model involves fitting hair portions of the portrait photo to one or more helices. In another embodiment, constructing the 3-Dimensional model involves applying positional and normal boundary conditions determined based on one or more relationships between face portion shape and hair portion shape. In yet another embodiment, constructing the 3-Dimensional model involves using shape from shading to capture fine-scale details in a form of surface normals, the shape from shading based on an adaptive albedo model and/or a lighting condition estimated based on shape fitting the face portion.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: June 13, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Linjie Luo, Sunil Hadap, Nathan Carr, Kalyan Sunkavalli, Menglei Chai
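The "fitting hair portions to helices" idea in the last entry can be illustrated by fitting a helix with a fixed vertical axis to a 3D strand polyline via least squares. The fixed-axis simplification, the parameterization, and the use of scipy are assumptions; the patent does not specify this procedure.

```python
# Fit a vertical-axis helix (cx, cy, radius, pitch, phase) to a 3D strand.
import numpy as np
from scipy.optimize import least_squares

def helix_points(params, ts):
    """Helix with vertical axis: params = (cx, cy, radius, pitch, phase)."""
    cx, cy, radius, pitch, phase = params
    x = cx + radius * np.cos(ts + phase)
    y = cy + radius * np.sin(ts + phase)
    z = pitch * ts / (2 * np.pi)
    return np.stack([x, y, z], axis=1)

def fit_helix(strand):
    ts = np.linspace(0, 4 * np.pi, len(strand))
    residual = lambda p: (helix_points(p, ts) - strand).ravel()
    x0 = np.array([strand[:, 0].mean(), strand[:, 1].mean(), 1.0, 1.0, 0.0])
    return least_squares(residual, x0).x

if __name__ == "__main__":
    ts = np.linspace(0, 4 * np.pi, 50)
    true = np.array([0.2, -0.1, 0.8, 1.5, 0.3])
    noise = 0.01 * np.random.default_rng(1).normal(size=(50, 3))
    strand = helix_points(true, ts) + noise
    print(np.round(fit_helix(strand), 2))  # should be close to the true parameters
```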