Patents by Inventor Menglei Chai
Menglei Chai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220292724
Abstract: Systems and methods for compressing image-to-image models. Generative Adversarial Networks (GANs) have achieved success in generating high-fidelity images. An image compression system and method adds a novel variant of class-adaptive normalization (CLADE), referred to as CLADE-Avg, which recovers image quality without introducing extra computational cost. An extra layer of average smoothing is performed between the parameter and normalization layers. Compared to CLADE, this image compression system and method smooths abrupt boundaries and introduces more possible values for the scaling and shift. In addition, the kernel size for the average smoothing can be selected as a hyperparameter, such as a 3×3 kernel size. This method introduces no extra multiplications, only additions, and thus adds little computational overhead, as the division can be absorbed into the parameters after training.
Type: Application
Filed: March 4, 2021
Publication date: September 15, 2022
Inventors: Jian Ren, Menglei Chai, Sergey Tulyakov, Qing Jin
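The average-smoothing step described in this abstract can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions, not the patented code; all names are hypothetical, and a real system would apply the filter to tensors inside a generator network. A k×k box filter is run over a spatial map of per-class modulation parameters, softening abrupt class boundaries:

```python
# Hypothetical sketch of CLADE-Avg-style smoothing: a 3x3 average filter
# applied to a 2D map of per-class scaling parameters (zero-padded edges).

def average_smooth(param_map, k=3):
    """Apply a k x k average filter to a 2D list of floats."""
    h, w = len(param_map), len(param_map[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += param_map[ny][nx]
            # The division by k*k can be folded into the learned parameters
            # after training, leaving only additions at inference time.
            out[y][x] = total / (k * k)
    return out

# A sharp class boundary (0.0 vs. 1.0) becomes a smooth transition.
scale_map = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
]
smoothed = average_smooth(scale_map)
```

The interior values now step gradually across the former hard boundary, which is the "more possible values for the scaling and shift" effect the abstract describes.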
-
Publication number: 20220207786
Abstract: Systems and methods herein describe a motion retargeting system. The motion retargeting system accesses a plurality of two-dimensional images comprising a person performing a plurality of body poses, extracts a plurality of implicit volumetric representations from the plurality of body poses, generates a three-dimensional warping field, the three-dimensional warping field configured to warp the plurality of implicit volumetric representations from a canonical pose to a target pose, and based on the three-dimensional warping field, generates a two-dimensional image of an artificial person performing the target pose.
Type: Application
Filed: December 21, 2021
Publication date: June 30, 2022
Inventors: Jian Ren, Menglei Chai, Oliver Woodford, Kyle Olszewski, Sergey Tulyakov
-
Publication number: 20220101104
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for video synthesis. The program and method provide for accessing a primary generative adversarial network (GAN) comprising a pre-trained image generator, a motion generator comprising a plurality of neural networks, and a video discriminator; generating an updated GAN based on the primary GAN, by performing operations comprising identifying input data of the updated GAN, the input data comprising an initial latent code and a motion domain dataset, training the motion generator based on the input data, and adjusting weights of the plurality of neural networks of the primary GAN based on an output of the video discriminator; and generating a synthesized video based on the primary GAN and the input data.
Type: Application
Filed: September 30, 2021
Publication date: March 31, 2022
Inventors: Menglei Chai, Kyle Olszewski, Jian Ren, Yu Tian, Sergey Tulyakov
-
Publication number: 20220058880
Abstract: A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
Type: Application
Filed: August 20, 2021
Publication date: February 24, 2022
Inventors: Artem Bondich, Menglei Chai, Oleksandr Pyshchenko, Jian Ren, Sergey Tulyakov
-
Publication number: 20220036647
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Application
Filed: October 12, 2021
Publication date: February 3, 2022
Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
-
Publication number: 20210407163
Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data; identify driving image data to modify image feature data in the source image sequence data; generate, using an image transformation neural network, modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data, the image transformation neural network being trained to identify, for each image in the source image data, a driving image from the driving image data, the identified driving image being used by the image transformation neural network to modify a corresponding source image in the source image data using motion estimation differences between the identified driving image and the corresponding source image; and store the modified source image data.
Type: Application
Filed: June 30, 2021
Publication date: December 30, 2021
Inventors: Menglei Chai, Jian Ren, Aliaksandr Siarohin, Sergey Tulyakov, Oliver Woodford
-
Patent number: 11164376
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Grant
Filed: August 29, 2018
Date of Patent: November 2, 2021
Assignee: Snap Inc.
Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
-
Publication number: 20210319540
Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
Type: Application
Filed: June 23, 2021
Publication date: October 14, 2021
Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
-
Patent number: 11074675
Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
Type: Grant
Filed: July 31, 2018
Date of Patent: July 27, 2021
Assignee: Snap Inc.
Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
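As a rough illustration of the final step this abstract describes, the extracted iris area and segmented sclera area can be combined into a single eye texture. This is a hypothetical sketch with invented names and a toy data layout, not the patented implementation; real systems operate on captured video frames:

```python
# Hypothetical sketch: compose an eye texture from iris and sclera masks.
# Pixels are plain ints; masks mark which region each pixel belongs to.

def build_eye_texture(image, iris_mask, sclera_mask, fill=0):
    """Keep pixels covered by either mask; replace everything else with fill."""
    texture = []
    for row_img, row_iris, row_sclera in zip(image, iris_mask, sclera_mask):
        texture.append([
            px if (in_iris or in_sclera) else fill
            for px, in_iris, in_sclera in zip(row_img, row_iris, row_sclera)
        ])
    return texture

image       = [[10, 20, 30],
               [40, 50, 60]]
iris_mask   = [[0, 1, 0],
               [0, 1, 0]]
sclera_mask = [[1, 0, 0],
               [1, 0, 0]]
texture = build_eye_texture(image, iris_mask, sclera_mask)
```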
-
Publication number: 20210192744
Abstract: An image segmentation system performs operations that include causing display of an image within a graphical user interface of a client device, receiving a set of user inputs that identify portions of a background and foreground of the image, identifying a boundary of an object depicted within the image based on the set of user inputs, cropping the object from the image based on the boundary, and generating a media item based on the cropped object, wherein properties of the media item, such as a size and a shape, are based on the boundary of the object.
Type: Application
Filed: March 2, 2021
Publication date: June 24, 2021
Inventors: Shubham Vij, Menglei Chai, David LeMieux, Ian Wehrman
-
Publication number: 20210165998
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
Type: Application
Filed: February 12, 2021
Publication date: June 3, 2021
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
-
Patent number: 10964023
Abstract: An image segmentation system performs operations that include causing display of an image within a graphical user interface of a client device, receiving a set of user inputs that identify portions of a background and foreground of the image, identifying a boundary of an object depicted within the image based on the set of user inputs, cropping the object from the image based on the boundary, and generating a media item based on the cropped object, wherein properties of the media item, such as a size and a shape, are based on the boundary of the object.
Type: Grant
Filed: March 26, 2019
Date of Patent: March 30, 2021
Assignee: Snap Inc.
Inventors: Shubham Vij, Menglei Chai, David LeMieux, Ian Wehrman
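The crop-from-boundary step could be sketched as follows, assuming the identified boundary is available as a set of (x, y) points. All names are hypothetical; this is an illustration of the idea that the media item's size derives from the object boundary, not the patented implementation:

```python
# Hypothetical sketch: crop the axis-aligned bounding box of an object
# boundary out of an image represented as a 2D list of pixel values.

def crop_to_boundary(image, boundary):
    """Crop the bounding box of the boundary points from the image."""
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]

image = [
    [0, 0, 0, 0],
    [0, 1, 2, 0],
    [0, 3, 4, 0],
    [0, 0, 0, 0],
]
boundary = {(1, 1), (2, 1), (2, 2), (1, 2)}
cropped = crop_to_boundary(image, boundary)
# The cropped media item's dimensions (2 x 2 here) follow the boundary.
```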
-
Patent number: 10949648
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex which is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
Type: Grant
Filed: October 25, 2018
Date of Patent: March 16, 2021
Assignee: Snap Inc.
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
-
Patent number: 10665013
Abstract: Provided is a single-image-based, fully automatic three-dimensional (3D) hair modeling method. The method mainly includes four steps: generation of hair image training data, hair segmentation and growth direction estimation based on a hierarchical deep neural network, generation and organization of 3D hair exemplars, and data-driven 3D hair modeling. The method can automatically and robustly generate a complete, high-quality 3D model whose quality reaches the level of the most advanced current user-interaction-based techniques. The method can be used in a series of applications, such as hair style editing in portrait images, browsing of hair style spaces, and searching for Internet images of similar hair styles.
Type: Grant
Filed: October 17, 2018
Date of Patent: May 26, 2020
Assignee: ZHEJIANG UNIVERSITY
Inventors: Kun Zhou, Menglei Chai
-
Publication number: 20200043145
Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
-
Patent number: 10311623
Abstract: Disclosed is a real-time motion simulation method for hair-object collisions that, based on a small amount of pre-computed training data, generates a self-adaptive simplified model of a virtual hairstyle for real-time selection, interpolation, and collision correction, thereby realizing real-time, high-quality motion simulation of hair-object collisions. The method comprises the following steps: 1) reduced-model pre-computation: based on pre-computed simulation data, selecting representative hairs and generating a reduced model; 2) real-time animation and interpolation: clustering the representative hairs simulated in real time, then selecting the reduced model and interpolating; and 3) collision correction: detecting collisions and applying a correction force to the representative hairs to resolve them. The method achieves an effect similar to off-line simulation while reducing computation time.
Type: Grant
Filed: February 15, 2015
Date of Patent: June 4, 2019
Assignee: ZHEJIANG UNIVERSITY
Inventors: Kun Zhou, Menglei Chai, Changxi Zheng
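The real-time interpolation step, in which ordinary strands are reconstructed as blends of the simulated representative (guide) strands, might be sketched like this. Names and the weighting scheme are hypothetical, the weights are assumed precomputed, and the collision-correction force is omitted for brevity:

```python
# Hypothetical sketch: blend guide strands vertex-by-vertex with fixed
# weights to reconstruct an ordinary hair strand. Strands are lists of
# (x, y, z) vertex positions.

def interpolate_strand(guides, weights):
    """Return the weighted blend of several guide strands."""
    n_verts = len(guides[0])
    blended = []
    for i in range(n_verts):
        x = sum(w * g[i][0] for g, w in zip(guides, weights))
        y = sum(w * g[i][1] for g, w in zip(guides, weights))
        z = sum(w * g[i][2] for g, w in zip(guides, weights))
        blended.append((x, y, z))
    return blended

# An ordinary strand halfway between two simulated guide strands.
guide_a = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
guide_b = [(2.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
strand = interpolate_strand([guide_a, guide_b], [0.5, 0.5])
```

Only the small set of guide strands is simulated each frame; every other strand is recovered by blends like this one, which is what makes the method real-time.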
-
Publication number: 20190051048
Abstract: Provided is a single-image-based, fully automatic three-dimensional (3D) hair modeling method. The method mainly includes four steps: generation of hair image training data, hair segmentation and growth direction estimation based on a hierarchical deep neural network, generation and organization of 3D hair exemplars, and data-driven 3D hair modeling. The method can automatically and robustly generate a complete, high-quality 3D model whose quality reaches the level of the most advanced current user-interaction-based techniques. The method can be used in a series of applications, such as hair style editing in portrait images, browsing of hair style spaces, and searching for Internet images of similar hair styles.
Type: Application
Filed: October 17, 2018
Publication date: February 14, 2019
Inventors: Kun Zhou, Menglei Chai
-
Publication number: 20180268591
Abstract: Disclosed is a real-time motion simulation method for hair-object collisions that, based on a small amount of pre-computed training data, generates a self-adaptive simplified model of a virtual hairstyle for real-time selection, interpolation, and collision correction, thereby realizing real-time, high-quality motion simulation of hair-object collisions. The method comprises the following steps: 1) reduced-model pre-computation: based on pre-computed simulation data, selecting representative hairs and generating a reduced model; 2) real-time animation and interpolation: clustering the representative hairs simulated in real time, then selecting the reduced model and interpolating; and 3) collision correction: detecting collisions and applying a correction force to the representative hairs to resolve them. The method achieves an effect similar to off-line simulation while reducing computation time.
Type: Application
Filed: February 15, 2015
Publication date: September 20, 2018
Applicant: Zhejiang University
Inventors: Kun Zhou, Menglei Chai, Changxi Zheng
-
Patent number: 9792725
Abstract: The invention discloses a method for image and video virtual hairstyle modeling, including: performing data acquisition of a target subject using a digital device and obtaining a hairstyle region from an image by segmentation; obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region by resolving the orientation ambiguity of the image hairstyle orientation field; calculating the movement of the hairstyle in a video by tracking the movement of a head model and estimating non-rigid deformation; and generating a dynamic hairstyle model at every moment of the motion, so that the dynamic hairstyle model naturally fits the real movement of the hairstyle in the video. The method performs virtual 3D model reconstruction with physical rationality for individual hairstyles in single views and video sequences, and is widely applicable to creating virtual characters and to many hairstyle editing applications for images and videos.
Type: Grant
Filed: November 7, 2014
Date of Patent: October 17, 2017
Assignee: ZHEJIANG UNIVERSITY
Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
-
Patent number: 9679192
Abstract: Systems and methods are disclosed herein for 3-Dimensional portrait reconstruction from a single photo. A face portion of a person depicted in a portrait photo is detected and a 3-Dimensional model of the person depicted in the portrait photo is constructed. In one embodiment, constructing the 3-Dimensional model involves fitting hair portions of the portrait photo to one or more helices. In another embodiment, constructing the 3-Dimensional model involves applying positional and normal boundary conditions determined based on one or more relationships between face portion shape and hair portion shape. In yet another embodiment, constructing the 3-Dimensional model involves using shape from shading to capture fine-scale details in a form of surface normals, the shape from shading based on an adaptive albedo model and/or a lighting condition estimated based on shape fitting the face portion.
Type: Grant
Filed: April 24, 2015
Date of Patent: June 13, 2017
Assignee: Adobe Systems Incorporated
Inventors: Linjie Luo, Sunil Hadap, Nathan Carr, Kalyan Sunkavalli, Menglei Chai