Patents by Inventor Linjie LUO

Linjie LUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230325975
    Abstract: A method for training an image processor having a neural network model is described. A first training set of images having a first image resolution is generated. A second training set of images having a second image resolution is generated. The second image resolution is larger than the first image resolution. The neural network model of the image processor is trained using the first training set of images during a first training session. The neural network model of the image processor is trained using the second training set of images during a second training session after the first training session.
    Type: Application
    Filed: June 12, 2023
    Publication date: October 12, 2023
    Inventors: Tiancheng ZHI, Shen SANG, Jing LIU, Linjie LUO
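A minimal sketch of the two-session, coarse-to-fine schedule described in the abstract above, with a toy per-pixel model standing in for the neural network (the model, the nearest-neighbor upsampling, and all names are illustrative assumptions, not the patented implementation):

```python
import numpy as np

def make_training_set(n_images, resolution, rng):
    """Generate a toy set of square grayscale images at the given resolution."""
    return rng.random((n_images, resolution, resolution))

def train_session(weights, images, lr=0.1, steps=50):
    """Toy 'training': fit a per-pixel mean image by gradient descent on MSE."""
    target = images.mean(axis=0)
    for _ in range(steps):
        weights = weights - lr * (weights - target)  # gradient of 0.5*||w - target||^2
    return weights

rng = np.random.default_rng(0)

# First training session: low-resolution training set.
low_res = make_training_set(16, 8, rng)
w_low = train_session(np.zeros((8, 8)), low_res)

# Second training session: higher-resolution set; upsample the learned
# weights before continuing, mirroring the resolution increase.
w_up = np.kron(w_low, np.ones((4, 4)))   # naive 8x8 -> 32x32 upsampling
high_res = make_training_set(16, 32, rng)
w_high = train_session(w_up, high_res)

print(w_high.shape)  # (32, 32)
```

The point of the schedule is that the second session starts from weights already shaped by the low-resolution data rather than from scratch.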
  • Patent number: 11783494
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: October 10, 2023
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
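The composite network in the abstract above runs a deep CNN on one portion of the video frames and a shallow CNN on another. One plausible reading is a keyframe schedule, sketched here with stand-in functions (the keyframe interval, joint count, and both "networks" are illustrative assumptions):

```python
import numpy as np

NUM_JOINTS = 14  # illustrative joint count

def deep_network(frame):
    """Stand-in for the accurate but slow deep CNN: estimate joints from scratch."""
    return np.full((NUM_JOINTS, 2), frame.mean())

def shallow_network(frame, prev_joints):
    """Stand-in for the fast shallow CNN: refine the previous joint estimate."""
    return 0.5 * prev_joints + 0.5 * np.full((NUM_JOINTS, 2), frame.mean())

def track_poses(frames, keyframe_interval=5):
    """Dispatch the deep network to one portion of the frames (keyframes)
    and the shallow network to the remaining portion."""
    joints, prev = [], None
    for i, frame in enumerate(frames):
        if prev is None or i % keyframe_interval == 0:
            prev = deep_network(frame)
        else:
            prev = shallow_network(frame, prev)
        joints.append(prev)
    return joints

frames = [np.random.default_rng(i).random((64, 64)) for i in range(12)]
trajectory = track_poses(frames)
print(len(trajectory))  # 12
```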
  • Patent number: 11769307
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: September 26, 2023
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
  • Patent number: 11769259
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: September 26, 2023
    Assignee: Snap Inc.
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
  • Publication number: 20230290094
    Abstract: A positioning model optimization method, a positioning method, and a positioning device are provided. The positioning model optimization method includes: inputting a positioning model for a scene, the positioning model including a three-dimensional (3D) point cloud and a plurality of descriptors corresponding to each 3D point in the 3D point cloud; calculating a significance of each 3D point in the 3D point cloud, and if the significance is greater than a predetermined threshold, outputting the 3D point and the plurality of descriptors corresponding to the 3D point to an optimized positioning model for the scene; and outputting the optimized positioning model for the scene.
    Type: Application
    Filed: July 22, 2021
    Publication date: September 14, 2023
    Inventors: Linjie LUO, Jing LIU, Zhili CHEN, Guohui WANG, Xiao YANG, Jianchao YANG, Xiaochen LIAN
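The significance-based pruning step in the abstract above reduces to filtering the point cloud and its descriptors by a threshold. A minimal sketch (array shapes and the significance measure are illustrative assumptions):

```python
import numpy as np

def prune_by_significance(points, descriptors, significance, threshold):
    """Output only the 3D points (and their descriptors) whose significance
    exceeds the predetermined threshold."""
    keep = significance > threshold
    return points[keep], descriptors[keep]

rng = np.random.default_rng(0)
points = rng.random((100, 3))           # 3D point cloud
descriptors = rng.random((100, 4, 32))  # several descriptors per 3D point
significance = rng.random(100)          # e.g. how often a point matches queries

opt_points, opt_desc = prune_by_significance(points, descriptors, significance, 0.5)
```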
  • Patent number: 11756276
    Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium, including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: September 12, 2023
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Yunzhu Li, Jingcong Zhang, Xuchen Song, Jianchao Yang, Guohui Wang, Zhili Chen, Linjie Luo, Xiao Yang, Haoze Li, Jing Liu
  • Publication number: 20230267664
    Abstract: An animation processing method and apparatus, an electronic device and a storage medium, applied to a shader, the method including: acquiring an animation sample of a model unit; acquiring an external interaction parameter of an augmented reality model, where the augmented reality model includes a plurality of model units; and outputting the augmented reality model, and driving the animation sample of the model unit in the output augmented reality model according to the external interaction parameter. The animation sample of the model unit can be driven according to the external interaction parameter, so that animation of the model can be adjusted according to user operation, thereby improving usability.
    Type: Application
    Filed: July 1, 2021
    Publication date: August 24, 2023
    Inventors: Jingcong ZHANG, Beixin HU, Yuanlong CHEN, Zhili CHEN, Linjie LUO, Jing LIU, Xiao YANG, Guohui WANG, Jianchao YANG
  • Patent number: 11727660
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: August 15, 2023
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
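The switch between local and global tracking in the abstract above is essentially a two-state machine keyed on whether the target is inside the boundary area. A toy sketch (state names and the observation sequence are illustrative assumptions):

```python
def update_tracking_mode(mode, target_in_boundary):
    """Switch to global tracking when the target leaves the boundary area,
    and resume local tracking (and sticker rendering) when it returns."""
    if mode == "local" and not target_in_boundary:
        return "global"
    if mode == "global" and target_in_boundary:
        return "local"
    return mode

mode = "local"
observations = [True, True, False, False, True]  # is the target inside the boundary?
history = []
for inside in observations:
    mode = update_tracking_mode(mode, inside)
    history.append(mode)
print(history)  # ['local', 'local', 'global', 'global', 'local']
```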
  • Patent number: 11720994
    Abstract: Systems and methods directed to an inversion-consistent transfer learning framework for generating portrait stylization using only limited exemplars. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be provided to a generative adversarial network (GAN) generator to generate a stylized image. In examples, the variational autoencoder is trained using a plurality of images while keeping the weights of a pre-trained GAN generator fixed, where the pre-trained GAN generator acts as a decoder for the encoder. In other examples, a multi-path attribute aware generator is trained using a plurality of exemplar images and transfer learning using the pre-trained GAN generator.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: August 8, 2023
    Assignee: Lemon Inc.
    Inventors: Linjie Luo, Guoxian Song, Jing Liu, Wanchun Ma
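A toy linear analogue of the key training setup in the abstract above: an encoder is trained to invert a frozen, pre-trained generator, which acts as a fixed decoder (the linear maps, dimensions, and learning rate are illustrative assumptions; the patent uses a variational autoencoder and a GAN):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 2))  # "pre-trained generator": latent -> image (frozen)
E = np.zeros((2, 8))         # trainable encoder: image -> latent

def recon_error(E, n=64):
    """Mean squared error between images and their re-generated versions."""
    z = rng.normal(size=(n, 2))
    x = z @ G.T
    return np.mean((x @ E.T @ G.T - x) ** 2)

before = recon_error(E)
lr = 0.0005
for _ in range(5000):
    z = rng.normal(size=2)
    x = G @ z                   # image produced by the frozen generator
    residual = G @ (E @ x) - x  # re-generated image minus original
    # Gradient of 0.5*||G E x - x||^2 with respect to E only;
    # G is never updated, matching the fixed-decoder training.
    E -= lr * np.outer(G.T @ residual, x)
after = recon_error(E)
print(after < before)  # True
```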
  • Publication number: 20230245398
    Abstract: The embodiments of the present disclosure disclose an image effect implementing method and apparatus, an electronic device, a storage medium, a computer program product and a computer program. The method includes: acquiring a first image, recognizing a set object in the first image, and acquiring an augmented reality model corresponding to the set object; superimposing, according to coordinate information of pixels of the set object, the augmented reality model onto the first image to obtain a second image; and upon detection of a preset deformation event, controlling, according to a set deformation policy, at least one sub-model of the augmented reality model in the second image to deform, and displaying the deformed second image.
    Type: Application
    Filed: June 22, 2021
    Publication date: August 3, 2023
    Inventors: Jingcong ZHANG, Yunzhu LI, Haoze LI, Zhili CHEN, Linjie LUO, Jing LIU, Xiao YANG, Guohui WANG, Jianchao YANG, Xuchen SONG
  • Patent number: 11710275
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: July 25, 2023
    Assignee: Snap Inc.
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Publication number: 20230222749
    Abstract: A positioning model optimization method, an image-based positioning method and positioning device, and a computer-readable storage medium are provided. The positioning model optimization method includes: inputting a positioning model for a scene, the positioning model including a three-dimensional (3D) point cloud and a plurality of descriptors corresponding to each 3D point in the 3D point cloud; determining, for each 3D point in the 3D point cloud, a plurality of neighboring points of the 3D point, and if a distance relationship between each of the plurality of neighboring points and the 3D point is smaller than a predetermined threshold, outputting the 3D point and the plurality of descriptors corresponding to the 3D point to an optimized positioning model for the scene; and outputting the optimized positioning model for the scene.
    Type: Application
    Filed: July 22, 2021
    Publication date: July 13, 2023
    Inventors: Linjie LUO, Jing LIU, Zhili CHEN, Guohui WANG, Xiao YANG, Jianchao YANG, Xiaochen LIAN
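The neighbor-distance criterion in the abstract above keeps points whose nearby neighbors lie within a threshold, i.e. points in locally dense regions. A brute-force sketch (k, the threshold, and the array shapes are illustrative assumptions):

```python
import numpy as np

def prune_by_neighbor_distance(points, descriptors, k, threshold):
    """Keep 3D points whose k nearest neighbors all lie within the threshold
    distance, together with their descriptors."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)          # ignore self-distances
    kth = np.sort(dists, axis=1)[:, k - 1]   # distance to the k-th neighbor
    keep = kth < threshold
    return points[keep], descriptors[keep]

rng = np.random.default_rng(0)
points = rng.random((50, 3))
descriptors = rng.random((50, 2, 32))
dense_pts, dense_desc = prune_by_neighbor_distance(points, descriptors, k=5, threshold=0.3)
```

A real point cloud would use a spatial index (e.g. a k-d tree) instead of the O(n^2) distance matrix.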
  • Publication number: 20230146676
    Abstract: Systems and methods directed to controlling the similarity between stylized portraits and an original photo are described. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be blended with latent vectors that best represent a face in the original user portrait image. The resulting blended latent vector may be provided to a generative adversarial network (GAN) generator to generate a controlled stylized image. In examples, one or more layers of the stylized GAN generator may be swapped with one or more layers of the original GAN generator. Accordingly, a user can interactively determine how much stylization vs. personalization should be included in a resulting stylized portrait.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Jing Liu, Chunpong Lai, Guoxian Song, Linjie Luo
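The blending step in the abstract above can be sketched as a linear interpolation between the encoded latent and the identity-preserving reference latents (the vector size, the mean over references, and the blend parameter are illustrative assumptions):

```python
import numpy as np

def blend_latents(w_input, w_refs, alpha):
    """Blend the encoded latent with the mean of the reference latents.

    alpha = 0 keeps the stylized encoding unchanged; alpha = 1 fully matches
    the reference latents that best represent the original face.
    """
    w_ref = w_refs.mean(axis=0)
    return (1.0 - alpha) * w_input + alpha * w_ref

rng = np.random.default_rng(1)
w_input = rng.normal(size=512)      # latent from the variational autoencoder
w_refs = rng.normal(size=(3, 512))  # latents that best represent the face

w_blended = blend_latents(w_input, w_refs, alpha=0.5)
```

Varying alpha interactively is what lets the user trade stylization against personalization.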
  • Publication number: 20230124252
    Abstract: Systems and methods directed to generating a stylized image are disclosed. In particular, the method includes, in a first data path, (a) applying first stylization to an input image and (b) applying enlargement to the stylized image from (a). The method also includes, in a second data path, (c) applying segmentation to the input image to identify a face region of the input image and generate a mask image, and (d) applying second stylization to an entirety of the input image and inpainting to the identified face region of the stylized image. Machine-assisted blending is performed based on (1) the stylized image after the enlargement from the first data path, (2) the inpainted image from the second data path, and (3) the mask image, in order to obtain a final stylized image.
    Type: Application
    Filed: October 14, 2021
    Publication date: April 20, 2023
    Inventors: Jing Liu, Chunpong Lai, Guoxian Song, Linjie Luo, Ye Yuan
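The final blending step in the abstract above combines the two data paths through the mask image. A minimal alpha-blend sketch (image sizes and the hard mask are illustrative assumptions; the patent's blending is machine-assisted):

```python
import numpy as np

def blend_stylized(path1_img, path2_img, mask):
    """Blend the enlarged first-path stylization with the inpainted
    second-path result using the segmentation mask (1.0 inside the face)."""
    m = mask[..., None]  # broadcast the mask over the color channels
    return m * path2_img + (1.0 - m) * path1_img

rng = np.random.default_rng(0)
path1 = rng.random((4, 4, 3))  # stylized + enlarged image (first data path)
path2 = rng.random((4, 4, 3))  # stylized + inpainted image (second data path)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0           # face region from segmentation

final = blend_stylized(path1, path2, mask)
```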
  • Publication number: 20230061012
    Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium, including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
    Type: Application
    Filed: November 8, 2022
    Publication date: March 2, 2023
    Inventors: Yunzhu LI, Jingcong ZHANG, Xuchen SONG, Jianchao YANG, Guohui WANG, Zhili CHEN, Linjie LUO, Xiao YANG, Haoze LI, Jing LIU
  • Publication number: 20230046286
    Abstract: The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
    Type: Application
    Filed: August 13, 2021
    Publication date: February 16, 2023
    Inventors: Michael Leong Hou Tay, Wanchun Ma, Shuo Cheng, Chao Wang, Linjie Luo
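The maximum loss function in the abstract above takes the larger of the losses on ordinary and asymmetric expressions, so training always optimizes the worse-performing set. A toy sketch with a standard cross-entropy loss (the loss choice, batch sizes, and class count are illustrative assumptions):

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of logits and integer class labels."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
logits_sym = rng.normal(size=(8, 5))    # predictions on ordinary expressions
labels_sym = rng.integers(0, 5, size=8)
logits_asym = rng.normal(size=(8, 5))   # predictions on asymmetric expressions
labels_asym = rng.integers(0, 5, size=8)

loss_a = cross_entropy(logits_sym, labels_sym)
loss_b = cross_entropy(logits_asym, labels_asym)
training_loss = max(loss_a, loss_b)  # optimize whichever set is doing worse
```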
  • Publication number: 20230036903
    Abstract: The present disclosure describes techniques for face tracking. The techniques comprise receiving landmark data associated with a plurality of images indicative of at least one facial part. Representative images corresponding to the plurality of images may be generated based on the landmark data. Each representative image may depict a plurality of segments, and each segment may correspond to a region of the at least one facial part. The plurality of images and corresponding representative images may be input into a neural network to train the neural network to predict a feature associated with a subsequently received image comprising a face. An animation associated with a facial expression may be controlled based on output from the trained neural network.
    Type: Application
    Filed: July 30, 2021
    Publication date: February 2, 2023
    Inventors: Wanchun MA, Shuo CHENG, Chao WANG, Michael Leong Hou TAY, Linjie LUO
  • Publication number: 20230010480
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: April 25, 2022
    Publication date: January 12, 2023
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
  • Publication number: 20220406008
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Application
    Filed: July 1, 2022
    Publication date: December 22, 2022
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
  • Publication number: 20220375024
    Abstract: Systems and methods directed to an inversion-consistent transfer learning framework for generating portrait stylization using only limited exemplars. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be provided to a generative adversarial network (GAN) generator to generate a stylized image. In examples, the variational autoencoder is trained using a plurality of images while keeping the weights of a pre-trained GAN generator fixed, where the pre-trained GAN generator acts as a decoder for the encoder. In other examples, a multi-path attribute aware generator is trained using a plurality of exemplar images and transfer learning using the pre-trained GAN generator.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 24, 2022
    Inventors: Linjie LUO, Guoxian SONG, Jing LIU, Wanchun MA