Patents by Inventor Linjie LUO

Linjie LUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135627
    Abstract: A method of generating a style image is described. The method includes receiving an input image of a subject. The method further includes encoding the input image using a first encoder of a generative adversarial network (GAN) to obtain a first latent code. The method further includes decoding the first latent code using a first decoder of the GAN to obtain a normalized style image of the subject, wherein the GAN is trained using a loss function according to semantic regions of the input image and the normalized style image.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 25, 2024
    Inventors: Guoxian Song, Shen Sang, Tiancheng Zhi, Jing Liu, Linjie Luo
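The key training detail in this abstract is a loss computed "according to semantic regions" of the input image and the normalized style image. A minimal sketch of such a region-weighted reconstruction loss follows; the region names, weights, and flattened-image representation are all illustrative stand-ins, not the patent's actual formulation:

```python
def region_loss(input_img, style_img, region_masks, region_weights):
    """Weighted sum of per-region mean absolute differences.

    input_img, style_img: flattened pixel intensities of equal length.
    region_masks: dict mapping region name -> pixel indices in that region.
    region_weights: dict mapping region name -> loss weight.
    """
    total = 0.0
    for name, idxs in region_masks.items():
        diffs = [abs(input_img[i] - style_img[i]) for i in idxs]
        total += region_weights[name] * sum(diffs) / len(diffs)
    return total

# Example: a 4-pixel "image" split into two semantic regions, with the
# face region weighted more heavily than the background.
inp = [0.9, 0.8, 0.1, 0.2]
sty = [0.7, 0.8, 0.1, 0.0]
masks = {"face": [0, 1], "background": [2, 3]}
weights = {"face": 2.0, "background": 0.5}
loss = region_loss(inp, sty, masks, weights)
```

Weighting regions separately lets training penalize stylization errors on semantically important areas (e.g. the face) more than elsewhere.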
  • Publication number: 20240135621
    Abstract: A method of generating a stylized 3D avatar is provided. The method includes receiving an input image of a user, generating, using a generative adversarial network (GAN) generator, a stylized image, based on the input image, and providing the stylized image to a first model to generate a first plurality of parameters. The first plurality of parameters include a discrete parameter and a continuous parameter. The method further includes providing the stylized image and the first plurality of parameters to a second model that is trained to generate an avatar image, receiving, from the second model, the avatar image, comparing the stylized image to the avatar image, based on a loss function, to determine an error, updating the first model to generate a second plurality of parameters that correspond to the first plurality of parameters, based on the error, and providing the second plurality of parameters as an output.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 25, 2024
    Inventors: Shen Sang, Tiancheng Zhi, Guoxian Song, Jing Liu, Linjie Luo, Chunpong Lai, Weihong Zeng, Jingna Sun, Xu Wang
  • Patent number: 11954828
    Abstract: Systems and method directed to generating a stylized image are disclosed. In particular, the method includes, in a first data path, (a) applying first stylization to an input image and (b) applying enlargement to the stylized image from (a). The method also includes, in a second data path, (c) applying segmentation to the input image to identify a face region of the input image and generate a mask image, and (d) applying second stylization to an entirety of the input image and inpainting to the identified face region of the stylized image. Machine-assisted blending is performed based on (1) the stylized image after the enlargement from the first data path, (2) the inpainted image from the second data path, and (3) the mask image, in order to obtain a final stylized image.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: April 9, 2024
    Assignee: Lemon Inc.
    Inventors: Jing Liu, Chunpong Lai, Guoxian Song, Linjie Luo, Ye Yuan
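The final step of this abstract blends the enlarged stylized image, the inpainted image, and the mask. One plausible reading is a per-pixel convex blend in which the mask selects the inpainted face region; a toy sketch under that assumption (single-channel pixels for brevity):

```python
def blend(enlarged, inpainted, mask):
    """Per-pixel convex blend of the two data paths.

    mask is in [0, 1] per pixel: 1 selects the inpainted face region
    from the second path, 0 keeps the enlarged stylized first path.
    """
    return [m * b + (1.0 - m) * a
            for a, b, m in zip(enlarged, inpainted, mask)]

# Pixel 0 comes from path 1, pixel 1 from path 2, pixel 2 is a 50/50 mix.
enlarged = [1.0, 0.0, 0.5]
inpainted = [0.0, 1.0, 0.5]
mask = [0.0, 1.0, 0.5]
out = blend(enlarged, inpainted, mask)
```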
  • Publication number: 20240096018
    Abstract: Systems and methods for rendering a translucent object are provided. In one aspect, the system includes a processor coupled to a storage medium that stores instructions, which, upon execution by the processor, cause the processor to receive at least one mesh representing at least one translucent object. For each pixel to be rendered, the processor performs a rasterization-based differentiable rendering of the pixel to be rendered using the at least one mesh and determines a plurality of values for the pixel to be rendered based on the rasterization-based differentiable rendering. The rasterization-based differentiable rendering can include performing a probabilistic rasterization process along with aggregation techniques to compute the plurality of values for the pixel to be rendered. The plurality of values includes a set of color channel values and an opacity channel value. Once values are determined for all pixels, an image can be rendered.
    Type: Application
    Filed: September 15, 2022
    Publication date: March 21, 2024
    Inventors: Tiancheng Zhi, Shen Sang, Guoxian Song, Chunpong Lai, Jing Liu, Linjie Luo
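The probabilistic rasterization and aggregation described here can be sketched with a common soft-rasterization scheme: per-face coverage probabilities at a pixel are combined into an opacity via a complement product, and into a color via a probability-weighted average. This is an illustrative stand-in, not the patent's exact formulation:

```python
def aggregate_pixel(face_probs, face_colors):
    """Order-independent soft aggregation for one pixel.

    face_probs: probability each face covers this pixel.
    face_colors: RGB color contributed by each face.
    Returns (color, opacity): opacity is 1 - prod(1 - p_i); color is
    the probability-weighted average of the face colors.
    """
    transparency = 1.0
    for p in face_probs:
        transparency *= (1.0 - p)
    opacity = 1.0 - transparency
    total = sum(face_probs)
    if total == 0.0:
        return [0.0, 0.0, 0.0], 0.0
    color = [sum(p * c[k] for p, c in zip(face_probs, face_colors)) / total
             for k in range(3)]
    return color, opacity

# Two translucent faces, red and green, each half-covering the pixel.
color, opacity = aggregate_pixel(
    [0.5, 0.5], [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Because the complement product is commutative, the result does not depend on face ordering, which keeps the aggregation differentiable and order-independent.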
  • Publication number: 20240096041
    Abstract: Systems and methods are provided that include a processor executing an avatar generation program to obtain driving view(s), calculate a skeletal pose of the user, and generate a coarse human mesh based on a template mesh and the skeletal pose of the user. The program further constructs a texture map based on the driving view(s) and the coarse human mesh, extracts a plurality of image features from the texture map, the image features being aligned to a UV map, and constructs a UV positional map based on the coarse human mesh. The program further extracts a plurality of pose features from the UV positional map, the pose features being aligned to the UV map, generates a plurality of pose-image features based on the UV map-aligned image features and UV map-aligned pose features, and renders an avatar based on the plurality of pose-image features.
    Type: Application
    Filed: September 15, 2022
    Publication date: March 21, 2024
    Inventors: Hongyi Xu, Tao Hu, Linjie Luo
  • Publication number: 20240078792
    Abstract: Systems and methods for multi-task joint training of a neural network including an encoder module and a multi-headed attention mechanism are provided. In one aspect, the system includes a processor configured to receive input data including a first set of labels and a second set of labels. Using the encoder module, features are extracted from the input data. Using a multi-headed attention mechanism, training loss metrics are computed. A first training loss metric is computed using the extracted features and the first set of labels, and a second training loss metric is computed using the extracted features and the second set of labels. A first mask is applied to filter the first training loss metric, and a second mask is applied to filter the second training loss metric. A final training loss metric is computed based on the filtered first and second training loss metrics.
    Type: Application
    Filed: September 2, 2022
    Publication date: March 7, 2024
    Inventors: Shuo Cheng, Wanchun Ma, Linjie Luo
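The masking scheme in this abstract, where each task's loss is filtered so that samples lacking labels for that task do not contribute, can be sketched per sample as follows (a simplified illustrative formulation, not the patent's exact metric):

```python
def masked_multitask_loss(losses_a, losses_b, mask_a, mask_b):
    """Combine two per-sample loss vectors into one training loss.

    mask_a / mask_b flag which samples carry labels for each task;
    unlabeled samples are filtered out before averaging, and the final
    loss is the sum of the two masked means.
    """
    def masked_mean(losses, mask):
        kept = [l for l, m in zip(losses, mask) if m]
        return sum(kept) / len(kept) if kept else 0.0
    return masked_mean(losses_a, mask_a) + masked_mean(losses_b, mask_b)

# Sample 1 has no label for task B, so its (garbage) task-B loss of
# 100.0 is masked out instead of corrupting the gradient signal.
total = masked_multitask_loss(
    losses_a=[1.0, 3.0], losses_b=[4.0, 100.0],
    mask_a=[1, 1], mask_b=[1, 0])
```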
  • Publication number: 20240062390
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames, and may track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: September 1, 2023
    Publication date: February 22, 2024
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
  • Patent number: 11861854
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
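The combination step here, multiplying the per-scale feature data by the attention distribution and summing over scales, reduces to an attention-weighted sum. A toy sketch with scalar per-pixel features (in practice each pixel would carry a feature vector); the shapes and values are illustrative:

```python
def dense_features(scale_features, attention):
    """Attention-weighted sum of features across scales.

    scale_features: one list of per-pixel features per scale.
    attention: per-scale, per-pixel weights forming a soft distribution
    over scales at each pixel (as produced by the attention net).
    """
    n_pixels = len(attention[0])
    out = [0.0] * n_pixels
    for per_scale_attn, feats in zip(attention, scale_features):
        for i in range(n_pixels):
            out[i] += per_scale_attn[i] * feats[i]
    return out

# Two scales, two pixels: pixel 0 leans on the coarse scale,
# pixel 1 splits evenly between scales.
features = dense_features(
    scale_features=[[1.0, 2.0], [3.0, 4.0]],
    attention=[[0.25, 0.5], [0.75, 0.5]])
```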
  • Publication number: 20230419512
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Application
    Filed: September 12, 2023
    Publication date: December 28, 2023
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
  • Publication number: 20230410267
    Abstract: Methods and systems for enlarging a stylized region of an image are disclosed that include receiving an input image, generating, using a first generative adversarial network (GAN) generator, a first stylized image, based on the input image, normalizing the input image, generating, using a second generative adversarial network (GAN) generator, a second stylized image, based on the normalized input image, blending the first stylized image and the second stylized image to obtain a third stylized image, and providing the third stylized image as an output.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 21, 2023
    Inventors: Guoxian Song, Jing Liu, Weihong Zeng, Jingna Sun, Xu Wang, Linjie Luo
  • Publication number: 20230401791
    Abstract: A landmark data collection method includes: determining (S100) a first basic collection point (A), a second basic collection point (B), and a third basic collection point (C) sequentially in an observation area of a landmark building; and collecting (S200) photos of the landmark building based on each of the first basic collection point (A), the second basic collection point (B), and the third basic collection point (C) to obtain landmark data of the landmark building. A photo of the landmark building is taken by a camera at the i-th basic collection point; and a collection point is determined at every predetermined distance interval as the camera is moved in a counterclockwise direction and/or a clockwise direction, and the photo of the landmark building is taken by the camera at the collection point, until the camera is moved out of the observation area, where i=1, 2, 3.
    Type: Application
    Filed: August 4, 2021
    Publication date: December 14, 2023
    Inventors: Zhili CHEN, Linjie LUO, Xiao YANG, Jianchao YANG, Jing LIU, Guohui WANG
  • Publication number: 20230394681
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Application
    Filed: August 18, 2023
    Publication date: December 7, 2023
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
  • Publication number: 20230394756
    Abstract: Provided are a three-dimensional reconstruction method, a three-dimensional reconstruction apparatus, and a non-transitory computer-readable storage medium.
    Type: Application
    Filed: July 29, 2021
    Publication date: December 7, 2023
    Inventors: Zhili CHEN, Linjie LUO
  • Publication number: 20230377368
    Abstract: Methods and systems for generating synthetic images based on an input image are described. The method may include receiving an input image; generating, using an encoder, a first latent code vector representation based on the input image; receiving a latent code corresponding to a feature to be added to the input image; modifying the first latent code vector representation based on the latent code corresponding to the feature to be added; generating, by an image decoder, a synthesized image based on the modified first latent code vector representation; identifying, using a landmark detector, one or more landmarks in the input image; identifying, using the landmark detector, one or more landmarks in the synthesized image; determining a measure of similarity between the landmarks identified in the input image and the landmarks identified in the synthesized image; and discarding the synthesized image based on the determined measure of similarity.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 23, 2023
    Inventors: Shuo CHENG, Guoxian SONG, Wanchun MA, Chao WANG, Linjie LUO
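The filtering step compares landmarks detected in the two images and discards the synthesized image when they diverge too far. A minimal sketch using mean Euclidean landmark displacement against a threshold; both the metric and the threshold are illustrative choices, not specified by the abstract:

```python
import math

def keep_synthesized(input_landmarks, synth_landmarks, threshold):
    """Return True if the synthesized image should be kept.

    Landmarks are (x, y) points in corresponding order; the image is
    discarded when the mean landmark displacement exceeds threshold.
    """
    dists = [math.dist(p, q)
             for p, q in zip(input_landmarks, synth_landmarks)]
    return sum(dists) / len(dists) <= threshold

# Mean displacement here is 0.5: one landmark moved by 1, one stayed put.
base = [(0.0, 0.0), (1.0, 1.0)]
synth = [(0.0, 1.0), (1.0, 1.0)]
keep = keep_synthesized(base, synth, threshold=0.6)
```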
  • Patent number: 11803996
    Abstract: Techniques for face tracking comprise receiving landmark data associated with a plurality of images indicative of at least one facial part. Representative images corresponding to the plurality of images may be generated based on the landmark data. Each representative image may depict a plurality of segments, and each segment may correspond to a region of the at least one facial part. The plurality of images and corresponding representative images may be input into a neural network to train the neural network to predict a feature associated with a subsequently received image comprising a face. An animation associated with a facial expression may be controlled based on output from the trained neural network.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: October 31, 2023
    Assignee: LEMON INC.
    Inventors: Wanchun Ma, Shuo Cheng, Chao Wang, Michael Leong Hou Tay, Linjie Luo
  • Publication number: 20230343033
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Application
    Filed: June 29, 2023
    Publication date: October 26, 2023
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Soumyadip Sengupta
  • Publication number: 20230328197
    Abstract: Embodiments of the present disclosure provide a display method and apparatus based on augmented reality, a device, and a storage medium, the method includes receiving a first video; acquiring a video material by segmenting a target object from the first video; acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus; and displaying the video material at a target position of the real scene image in an augmented manner and playing the video material. Since the video material is acquired by receiving the first video and segmenting the target object from the first video, the video material may be set according to the needs of the user.
    Type: Application
    Filed: June 9, 2023
    Publication date: October 12, 2023
    Inventors: Yaxi GAO, Chenyu SUN, Xiao YANG, Zhili CHEN, Linjie LUO, Jing LIU, Hengkai GUO, Huaxia LI, Hwankyoo Shawn KIM, Jianchao YANG
  • Publication number: 20230325975
    Abstract: A method for training an image processor having a neural network model is described. A first training set of images having a first image resolution is generated. A second training set of images having a second image resolution is generated. The second image resolution is larger than the first image resolution. The neural network model of the image processor is trained using the first training set of images during a first training session. The neural network model of the image processor is trained using the second training set of images during a second training session after the first training session.
    Type: Application
    Filed: June 12, 2023
    Publication date: October 12, 2023
    Inventors: Tiancheng ZHI, Shen SANG, Jing LIU, Linjie LUO
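The two-session curriculum described here, training first at the smaller resolution and only afterwards at the larger one, amounts to a simple epoch schedule. A sketch with illustrative resolutions and epoch counts (the abstract does not specify values):

```python
def resolution_schedule(n_low, n_high, low_res=64, high_res=256):
    """Per-epoch training resolutions: a first session entirely at the
    smaller resolution, then a second session at the larger one."""
    return [low_res] * n_low + [high_res] * n_high

# Two low-resolution epochs followed by three high-resolution epochs.
sched = resolution_schedule(2, 3)
```

Training on small images first lets the network learn coarse structure cheaply before fine detail is introduced in the second session.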
  • Patent number: 11783494
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames, and may track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: October 10, 2023
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
  • Patent number: 11769259
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: September 26, 2023
    Assignee: Snap Inc.
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford