Patents by Inventor Linjie Luo
Linjie Luo has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12260485
Abstract: A method of generating a style image is described. The method includes receiving an input image of a subject. The method further includes encoding the input image using a first encoder of a generative adversarial network (GAN) to obtain a first latent code. The method further includes decoding the first latent code using a first decoder of the GAN to obtain a normalized style image of the subject, wherein the GAN is trained using a loss function according to semantic regions of the input image and the normalized style image.
Type: Grant
Filed: October 12, 2022
Date of Patent: March 25, 2025
Assignee: Lemon Inc.
Inventors: Guoxian Song, Shen Sang, Tiancheng Zhi, Jing Liu, Linjie Luo
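The encode-decode pipeline the abstract describes can be illustrated with a toy numpy stand-in. Linear maps play the role of the GAN's first encoder and decoder, and two fake semantic region masks drive the region-wise loss; the weights, shapes, and squared-error loss are all illustrative assumptions, not the patented model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the GAN's first encoder/decoder (illustrative only).
W_enc = rng.standard_normal((16, 64)) * 0.1   # image (64,) -> latent (16,)
W_dec = rng.standard_normal((64, 16)) * 0.1   # latent (16,) -> image (64,)

def encode(image):
    return W_enc @ image          # first latent code

def decode(latent):
    return W_dec @ latent         # normalized style image

def region_loss(input_img, style_img, region_masks):
    # Loss accumulated per semantic region, as the abstract describes:
    # each mask selects one region (e.g. hair, skin) of both images.
    return sum(np.mean((input_img[m] - style_img[m]) ** 2) for m in region_masks)

image = rng.standard_normal(64)
latent = encode(image)
stylized = decode(latent)
masks = [np.arange(64) < 32, np.arange(64) >= 32]   # two fake semantic regions
loss = region_loss(image, stylized, masks)
```

The point of the region masks is that reconstruction quality is scored separately per semantic region rather than over the whole image at once.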
-
Patent number: 12243292
Abstract: Systems and methods for multi-task joint training of a neural network including an encoder module and a multi-headed attention mechanism are provided. In one aspect, the system includes a processor configured to receive input data including a first set of labels and a second set of labels. Using the encoder module, features are extracted from the input data. Using a multi-headed attention mechanism, training loss metrics are computed. A first training loss metric is computed using the extracted features and the first set of labels, and a second training loss metric is computed using the extracted features and the second set of labels. A first mask is applied to filter the first training loss metric, and a second mask is applied to filter the second training loss metric. A final training loss metric is computed based on the filtered first and second training loss metrics.
Type: Grant
Filed: September 2, 2022
Date of Patent: March 4, 2025
Assignee: Lemon Inc.
Inventors: Shuo Cheng, Wanchun Ma, Linjie Luo
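The masked joint loss the abstract outlines can be sketched in a few lines of numpy. Each task head produces a per-sample loss over shared features, and a 0/1 mask filters out samples that lack labels for that task before the final sum; the head weights and squared-error losses here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_joint_loss(features, labels_a, labels_b, mask_a, mask_b, head_a, head_b):
    """Masked multi-task training loss (a sketch of the abstract's recipe)."""
    loss_a = np.mean((features @ head_a - labels_a) ** 2, axis=1)  # per-sample
    loss_b = np.mean((features @ head_b - labels_b) ** 2, axis=1)
    # Masks filter each loss so unlabeled samples contribute nothing.
    final = (loss_a * mask_a).sum() / max(mask_a.sum(), 1) + \
            (loss_b * mask_b).sum() / max(mask_b.sum(), 1)
    return final

features = rng.standard_normal((8, 32))        # shared encoder output
head_a = rng.standard_normal((32, 4)) * 0.1
head_b = rng.standard_normal((32, 2)) * 0.1
labels_a = rng.standard_normal((8, 4))
labels_b = rng.standard_normal((8, 2))
mask_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])    # first task labeled on half
mask_b = 1 - mask_a                            # second task on the other half
total = masked_joint_loss(features, labels_a, labels_b, mask_a, mask_b, head_a, head_b)
```

Masking this way lets one network train jointly on datasets where each example carries labels for only one of the tasks.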
-
Patent number: 12238404
Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
Type: Grant
Filed: July 29, 2022
Date of Patent: February 25, 2025
Assignee: Snap Inc.
Inventors: Linjie Luo, Chongyang Ma, Zehao Xue
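The geometry behind a dolly zoom is simple under a pinhole-camera model: on-screen size is proportional to focal length over distance, so as the device physically moves, the detected target can be rescaled to hold its screen size constant while the background perspective keeps changing. This is a generic pinhole sketch, not the patented method:

```python
def projected_size(focal_px, real_height_m, distance_m):
    # Pinhole model: on-screen size shrinks linearly with distance.
    return focal_px * real_height_m / distance_m

def dolly_zoom_scale(focal_px, real_height_m, d_start, d_now):
    """Scale factor to apply to the detected target feature so its
    screen size stays constant as the device moves from d_start to
    d_now (illustrative pinhole-camera sketch only)."""
    return projected_size(focal_px, real_height_m, d_start) / \
           projected_size(focal_px, real_height_m, d_now)

# Device backed away from 1 m to 2 m: draw the target at 2x scale.
scale = dolly_zoom_scale(1000, 0.25, 1.0, 2.0)
```

Because only the target is rescaled, the surrounding scene still exhibits the characteristic perspective shift of the effect.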
-
Patent number: 12217466
Abstract: Systems and methods directed to controlling the similarity between stylized portraits and an original photo are described. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be blended with latent vectors that best represent a face in the original user portrait image. The resulting blended latent vector may be provided to a generative adversarial network (GAN) generator to generate a controlled stylized image. In examples, one or more layers of the stylized GAN generator may be swapped with one or more layers of the original GAN generator. Accordingly, a user can interactively determine how much stylization vs. personalization should be included in a resulting stylized portrait.
Type: Grant
Filed: November 5, 2021
Date of Patent: February 4, 2025
Assignee: Lemon Inc.
Inventors: Jing Liu, Chunpong Lai, Guoxian Song, Linjie Luo
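The latent-blending step can be pictured as a simple interpolation in latent space, with a user-controlled dial between personalization and stylization. The linear form and the `alpha` parameter are illustrative assumptions, not necessarily the patented blending scheme:

```python
import numpy as np

def blend_latents(z_portrait, z_style, alpha):
    """Interpolate between the latent encoding the user's portrait and a
    style latent; alpha = 0 keeps full personalization, alpha = 1 gives
    full stylization (a sketch, not the patented operator)."""
    return (1.0 - alpha) * z_portrait + alpha * z_style

z_portrait = np.zeros(8)   # stand-in for the VAE-encoded portrait latent
z_style = np.ones(8)       # stand-in for the best-matching face latent
half = blend_latents(z_portrait, z_style, 0.5)
```

Feeding differently blended latents to the GAN generator is what makes the similarity control interactive.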
-
Patent number: 12198357
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
Type: Grant
Filed: September 12, 2023
Date of Patent: January 14, 2025
Assignee: Snap Inc.
Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
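The multiply-then-sum combine step at the end of the abstract reduces to one broadcasted operation over a scale axis. The tensor shapes below are illustrative assumptions (per-scale features stacked on a leading axis, attention as a soft distribution over that axis):

```python
import numpy as np

def dense_features(scale_features, scale_attention):
    """Combine per-scale feature maps with the attention net's soft
    distribution over scales: multiply, then sum across the scale axis.
    scale_features: (S, H, W, C); scale_attention: (S, H, W).
    (A sketch of the combine step only.)"""
    return (scale_features * scale_attention[..., None]).sum(axis=0)

S, H, W, C = 3, 4, 4, 8
feats = np.random.default_rng(2).standard_normal((S, H, W, C))
attn = np.zeros((S, H, W))
attn[1] = 1.0                      # one-hot case: attend only to scale 1
out = dense_features(feats, attn)  # (H, W, C) dense features
```

With a one-hot attention map the output reproduces a single scale's features exactly; a soft map yields a texture-dependent mixture.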
-
Patent number: 12190481
Abstract: Methods and systems for enlarging a stylized region of an image are disclosed that include receiving an input image, generating, using a first generative adversarial network (GAN) generator, a first stylized image, based on the input image, normalizing the input image, generating, using a second generative adversarial network (GAN) generator, a second stylized image, based on the normalized input image, blending the first stylized image and the second stylized image to obtain a third stylized image, and providing the third stylized image as an output.
Type: Grant
Filed: June 17, 2022
Date of Patent: January 7, 2025
Assignee: Lemon Inc.
Inventors: Guoxian Song, Jing Liu, Weihong Zeng, Jingna Sun, Xu Wang, Linjie Luo
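The final blend of the two GAN outputs can be sketched as a per-pixel mask blend. The mask-based form is an assumption for illustration (the abstract does not specify the blending operator):

```python
import numpy as np

def blend_stylized(full_style, region_style, mask):
    """Per-pixel blend of the two GAN outputs: where `mask` is 1 the
    second (normalized-input) stylization wins, e.g. over the enlarged
    region; elsewhere the first output is kept. An assumed mask blend,
    not necessarily the patented operator."""
    return full_style * (1.0 - mask) + region_style * mask

a = np.zeros((2, 2, 3))                              # first stylized image
b = np.ones((2, 2, 3))                               # second stylized image
mask = np.array([[1.0, 0.0], [0.0, 1.0]])[..., None] # region selector
out = blend_stylized(a, b, mask)                     # third stylized image
```

A soft (fractional) mask would feather the transition between the two stylizations instead of switching hard at the region boundary.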
-
Patent number: 12169907
Abstract: Methods and systems for generating a texturized image are disclosed. Some examples may include: receiving an input image, receiving an exemplar texture image, generating, using an encoder, a first latent code vector representation based on the input image, generating, using a generative adversarial network (GAN) generator, a second latent code vector representation based on the exemplar texture image, blending the first latent code vector representation and the second latent code vector representation to obtain a blended latent code vector representation, generating, by the GAN generator, a texturized image based on the blended latent code vector representation, and providing the texturized image as an output image.
Type: Grant
Filed: November 24, 2021
Date of Patent: December 17, 2024
Assignee: Lemon Inc.
Inventors: Guoxian Song, Jing Liu, Chunpong Lai, Linjie Luo
-
Patent number: 12165335
Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
Type: Grant
Filed: September 1, 2023
Date of Patent: December 10, 2024
Assignee: Snap Inc.
Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
-
Patent number: 12148095
Abstract: Systems and methods for rendering a translucent object are provided. In one aspect, the system includes a processor coupled to a storage medium that stores instructions, which, upon execution by the processor, cause the processor to receive at least one mesh representing at least one translucent object. For each pixel to be rendered, the processor performs a rasterization-based differentiable rendering of the pixel to be rendered using the at least one mesh and determines a plurality of values for the pixel to be rendered based on the rasterization-based differentiable rendering. The rasterization-based differentiable rendering can include performing a probabilistic rasterization process along with aggregation techniques to compute the plurality of values for the pixel to be rendered. The plurality of values includes a set of color channel values and an opacity channel value. Once values are determined for all pixels, an image can be rendered.
Type: Grant
Filed: September 15, 2022
Date of Patent: November 19, 2024
Assignee: Lemon Inc.
Inventors: Tiancheng Zhi, Shen Sang, Guoxian Song, Chunpong Lai, Jing Liu, Linjie Luo
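One common way to aggregate probabilistic fragment coverage into color and opacity channels is front-to-back alpha compositing, which can stand in for the aggregation step here. This is a simplified generic sketch, not the patented renderer:

```python
import numpy as np

def aggregate_pixel(colors, probs):
    """Aggregate per-fragment contributions into one pixel of a translucent
    object: `probs` are coverage probabilities from a probabilistic
    rasterization, ordered front to back, composited into a set of color
    channel values and an opacity channel value (illustrative only)."""
    rgb = np.zeros(3)
    transmittance = 1.0
    for color, p in zip(colors, probs):
        rgb += transmittance * p * np.asarray(color, dtype=float)
        transmittance *= 1.0 - p          # light surviving past this fragment
    return rgb, 1.0 - transmittance       # color channels, opacity channel

# One fully opaque red fragment, then two half-covering fragments.
rgb_opaque, alpha_opaque = aggregate_pixel([[1.0, 0.0, 0.0]], [1.0])
rgb_two, alpha_two = aggregate_pixel([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [0.5, 0.5])
```

Because every operation is a smooth product or sum (no hard depth test), the aggregation stays differentiable with respect to the fragment probabilities and colors.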
-
Patent number: 12141922
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Grant
Filed: June 29, 2023
Date of Patent: November 12, 2024
Assignee: Snap Inc.
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Soumyadip Sengupta
-
Patent number: 12112573
Abstract: The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
Type: Grant
Filed: August 13, 2021
Date of Patent: October 8, 2024
Assignee: Lemon Inc.
Inventors: Michael Leong Hou Tay, Wanchun Ma, Shuo Cheng, Chao Wang, Linjie Luo
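The "maximum loss function" step reduces to an element-wise maximum of the two loss terms, so whichever objective is currently harder drives each training step. A one-line numpy sketch of that combination (the element-wise reading is an assumption based on the abstract):

```python
import numpy as np

def max_expression_loss(loss_expressions, loss_asymmetric):
    """Combine the general-expression and asymmetric-expression losses by
    taking their element-wise maximum, as the abstract's 'maximum loss
    function' (a sketch of the combination step only)."""
    return np.maximum(loss_expressions, loss_asymmetric)

combined = max_expression_loss(np.array([0.2, 0.9]), np.array([0.5, 0.1]))
```

Training against the maximum prevents the model from minimizing the easy symmetric-expression loss while neglecting the harder asymmetric cases.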
-
Publication number: 20240331245
Abstract: The present disclosure provides a video processing method, a video processing apparatus, and a storage medium. A picture of the video includes a landmark building and a moving subject. The video processing method includes: identifying and tracking the landmark building in the video; extracting and tracking a key point of the moving subject in the video, and determining a posture of the moving subject based on information of the extracted key point of the moving subject; and making the key point of the moving subject correspond to the landmark building, and driving the landmark building to perform a corresponding action based on an action of the key point of the moving subject, so as to make a posture of the landmark building in the picture of the video correspond to the posture of the moving subject. The video processing method may enhance interactions between the user and the shot landmark.
Type: Application
Filed: August 4, 2021
Publication date: October 3, 2024
Inventors: Zhili Chen, Linjie Luo, Jianchao Yang, Guohui Wang
-
Patent number: 12079931
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
Type: Grant
Filed: July 1, 2022
Date of Patent: September 3, 2024
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
-
Publication number: 20240273871
Abstract: A method for generating a multi-dimensional stylized image. The method includes providing input data into a latent space for a style conditioned multi-dimensional generator of a multi-dimensional generative model and generating the multi-dimensional stylized image from the input data by the style conditioned multi-dimensional generator. The method further includes synthesizing content for the multi-dimensional stylized image using a latent code and corresponding camera pose from the latent space to formulate an intermediate code to modulate synthesis convolution layers to generate feature images as multi-planar representations, and synthesizing stylized feature images of the feature images for generating the multi-dimensional stylized image of the input data. The style conditioned multi-dimensional generator is tuned using a guided transfer learning process using a style prior generator.
Type: Application
Filed: February 14, 2023
Publication date: August 15, 2024
Inventors: Guoxian Song, Hongyi Xu, Jing Liu, Tiancheng Zhi, Yichun Shi, Jianfeng Zhang, Zihang Jiang, Jiashi Feng, Shen Sang, Linjie Luo
-
Publication number: 20240265628
Abstract: A three-dimensional generative adversarial network includes a generator, a discriminator, and a renderer. The generator is configured to receive an intermediate latent code mapped from a latent code and a camera pose, generate two-dimensional backgrounds for a set of images, and generate, based on the intermediate latent code, multi-grid representation features. The renderer is configured to synthesize images based on the camera pose, a camera pose offset, and the multi-grid representation features, the camera pose offset being mapped from the latent code and the camera pose, and to render a foreground mask. The discriminator is configured to supervise a training of the foreground mask with an up-sampled image and a super-resolved image.
Type: Application
Filed: February 7, 2023
Publication date: August 8, 2024
Inventors: Hongyi Xu, Sizhe An, Yichun Shi, Guoxian Song, Linjie Luo
-
Publication number: 20240265621
Abstract: Technologies are described and recited herein for producing controllable synthesized images, including a geometry-guided 3D GAN framework for high-quality 3D head synthesis with full control over camera poses, facial expressions, head shape, and articulated neck and jaw poses; and a semantic SDF (signed distance function) formulation that defines volumetric correspondence from observation space to canonical space, allowing full disentanglement of control parameters in 3D GAN training.
Type: Application
Filed: February 7, 2023
Publication date: August 8, 2024
Inventors: Hongyi Xu, Guoxian Song, Zihang Jiang, Jianfeng Zhang, Yichun Shi, Jing Liu, Wanchun Ma, Jiashi Feng, Linjie Luo
-
Patent number: 12051168
Abstract: Systems and methods are provided that include a processor executing an avatar generation program to obtain driving view(s), calculate a skeletal pose of the user, and generate a coarse human mesh based on a template mesh and the skeletal pose of the user. The program further constructs a texture map based on the driving view(s) and the coarse human mesh, extracts a plurality of image features from the texture map, the image features being aligned to a UV map, and constructs a UV positional map based on the coarse human mesh. The program further extracts a plurality of pose features from the UV positional map, the pose features being aligned to the UV map, generates a plurality of pose-image features based on the UV map-aligned image features and UV map-aligned pose features, and renders an avatar based on the plurality of pose-image features.
Type: Grant
Filed: September 15, 2022
Date of Patent: July 30, 2024
Assignees: Lemon Inc., Beijing Zitiao Network Technology Co., Ltd.
Inventors: Hongyi Xu, Tao Hu, Linjie Luo
-
Publication number: 20240242452
Abstract: Three-dimensional (3D) avatars may be produced by stylizing a dataset of images based on a user-input text prompt input to a stable diffusion model, and using the output stylized dataset of images to train an efficient geometry-aware 3D generative adversarial network (EG3D) model.
Type: Application
Filed: January 17, 2023
Publication date: July 18, 2024
Inventors: Tiancheng Zhi, Rushikesh Dudhat, Jing Liu, Linjie Luo
-
Publication number: 20240135627
Abstract: A method of generating a style image is described. The method includes receiving an input image of a subject. The method further includes encoding the input image using a first encoder of a generative adversarial network (GAN) to obtain a first latent code. The method further includes decoding the first latent code using a first decoder of the GAN to obtain a normalized style image of the subject, wherein the GAN is trained using a loss function according to semantic regions of the input image and the normalized style image.
Type: Application
Filed: October 12, 2022
Publication date: April 25, 2024
Inventors: Guoxian Song, Shen Sang, Tiancheng Zhi, Jing Liu, Linjie Luo
-
Publication number: 20240135621
Abstract: A method of generating a stylized 3D avatar is provided. The method includes receiving an input image of a user, generating, using a generative adversarial network (GAN) generator, a stylized image, based on the input image, and providing the stylized image to a first model to generate a first plurality of parameters. The first plurality of parameters include a discrete parameter and a continuous parameter. The method further includes providing the stylized image and the first plurality of parameters to a second model that is trained to generate an avatar image, receiving, from the second model, the avatar image, comparing the stylized image to the avatar image, based on a loss function, to determine an error, updating the first model to generate a second plurality of parameters that correspond to the first plurality of parameters, based on the error, and providing the second plurality of parameters as an output.
Type: Application
Filed: October 12, 2022
Publication date: April 25, 2024
Inventors: Shen Sang, Tiancheng Zhi, Guoxian Song, Jing Liu, Linjie Luo, Chunpong Lai, Weihong Zeng, Jingna Sun, Xu Wang