Patents by Inventor Linjie LUO

Linjie LUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220368824
    Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
    Type: Application
    Filed: July 29, 2022
    Publication date: November 17, 2022
    Inventors: Linjie Luo, Chongyang Ma, Zehao Xue
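
The core of the dolly-zoom manipulation described in this abstract can be illustrated with a short sketch: detect the target feature (here a face, using OpenCV's stock Haar cascade) and rescale each frame so that feature keeps a constant on-screen size while the background shifts with camera motion. This is a minimal illustration under assumptions, not the patented implementation; the helper name and the crop heuristic are hypothetical.

```python
# Illustrative sketch only: detect the target feature with OpenCV's stock Haar
# cascade and rescale each frame so the face keeps a constant on-screen width.
# The crop heuristic and function name are assumptions, not the patented method.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def stabilize_face_scale(frame, reference_width):
    """Rescale `frame` so the largest detected face spans reference_width px."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame  # no target feature detected; leave the frame untouched
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    scale = reference_width / float(w)  # > 1 when the camera is pulling back
    resized = cv2.resize(frame, None, fx=scale, fy=scale)
    # Crop back to the original frame size, centered on the resized face.
    H, W = frame.shape[:2]
    cx, cy = int((x + w / 2) * scale), int((y + h / 2) * scale)
    x0 = min(max(cx - W // 2, 0), max(resized.shape[1] - W, 0))
    y0 = min(max(cy - H // 2, 0), max(resized.shape[0] - H, 0))
    return resized[y0:y0 + H, x0:x0 + W]
```
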
  • Publication number: 20220358738
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Application
    Filed: April 18, 2022
    Publication date: November 10, 2022
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
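
Here is a minimal sketch of the local/global tracking loop this abstract describes, using plain template matching as a stand-in tracker: search a window around the target's last position, and fall back to a whole-frame ("global") search once the target leaves a boundary area. Function names, thresholds, and the boundary heuristic are illustrative assumptions.

```python
# Minimal sketch of the local/global tracking loop, using template matching as
# a stand-in tracker; thresholds and the boundary heuristic are illustrative.
import cv2

def track(frames, template, margin=20):
    """Yield (x, y, visible) for the template's best match in each frame."""
    th, tw = template.shape[:2]
    window = None  # (x0, y0, x1, y1) local search region around the last match
    for frame in frames:
        H, W = frame.shape[:2]
        if window is not None:  # local tracking near the previous position
            x0, y0, x1, y1 = window
            search, off = frame[y0:y1, x0:x1], (x0, y0)
        else:                   # global tracking over the entire frame
            search, off = frame, (0, 0)
        result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (mx, my) = cv2.minMaxLoc(result)
        x, y = mx + off[0], my + off[1]
        inside = (margin <= x <= W - tw - margin and
                  margin <= y <= H - th - margin)
        if score > 0.5 and inside:
            # Target is back inside the boundary area: resume local tracking
            # (this is where the AR sticker would be rendered at (x, y)).
            window = (max(x - tw, 0), max(y - th, 0),
                      min(x + 2 * tw, W), min(y + 2 * th, H))
            yield x, y, True
        else:
            window = None  # target near/outside the boundary: go global
            yield x, y, False
```
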
  • Patent number: 11468544
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: October 11, 2022
    Assignee: Snap Inc.
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
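
A rough sketch of the eye-texture steps named in this abstract, under stated assumptions: a Hough circle detector stands in for the iris extraction and an HSV brightness/saturation threshold for the sclera segmentation, neither of which is the patented method; the two regions are then composed into a per-eye texture.

```python
# Rough sketch under assumptions: Hough circles for the iris and an HSV
# brightness/saturation threshold for the sclera stand in for the patented
# segmentation; the two regions are then composed into a per-eye texture.
import cv2
import numpy as np

def eye_texture(eye_roi):
    """eye_roi: BGR crop of an eye. Returns (iris_mask, sclera_mask, texture)."""
    gray = cv2.medianBlur(cv2.cvtColor(eye_roi, cv2.COLOR_BGR2GRAY), 5)
    h, w = gray.shape
    iris_mask = np.zeros((h, w), np.uint8)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=w,
                               param1=100, param2=30,
                               minRadius=w // 10, maxRadius=w // 3)
    if circles is not None:
        cx, cy, r = (int(v) for v in circles[0, 0])
        cv2.circle(iris_mask, (cx, cy), r, 255, -1)  # filled iris disk
    # Sclera: bright, low-saturation pixels that are not part of the iris.
    hsv = cv2.cvtColor(eye_roi, cv2.COLOR_BGR2HSV)
    sclera_mask = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))
    sclera_mask[iris_mask > 0] = 0
    texture = cv2.bitwise_and(eye_roi, eye_roi,
                              mask=cv2.bitwise_or(iris_mask, sclera_mask))
    return iris_mask, sclera_mask, texture
```
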
  • Publication number: 20220292697
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Application
    Filed: May 26, 2022
    Publication date: September 15, 2022
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
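
The fusion step in this abstract maps naturally onto a few lines of PyTorch: run a shared feature net over several scaled copies of the image, predict a per-pixel soft distribution over scales with an attention net, then multiply and sum over scales. The layer sizes below are illustrative placeholders, not the networks from the patent.

```python
# Schematic PyTorch sketch of the multiply-and-sum fusion described above;
# layer sizes are illustrative placeholders, not the networks from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseScaleFeatures(nn.Module):
    def __init__(self, scales=(0.5, 1.0, 2.0), feat_dim=32):
        super().__init__()
        self.scales = scales
        self.feature_net = nn.Sequential(          # shared across all scales
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1))
        self.attention_net = nn.Sequential(        # one logit per scale
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, len(scales), 3, padding=1))

    def forward(self, image):                      # image: (B, 3, H, W)
        H, W = image.shape[-2:]
        feats = []
        for s in self.scales:
            scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                                   align_corners=False)
            f = self.feature_net(scaled)
            # Resample features back to input resolution so scales can be fused.
            feats.append(F.interpolate(f, size=(H, W), mode="bilinear",
                                       align_corners=False))
        feats = torch.stack(feats, dim=1)                    # (B, S, C, H, W)
        attn = F.softmax(self.attention_net(image), dim=1)   # (B, S, H, W)
        # Soft distribution over scales: multiply, then sum over the scale axis.
        return (feats * attn.unsqueeze(2)).sum(dim=1)        # (B, C, H, W)
```
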
  • Patent number: 11418704
    Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: August 16, 2022
    Assignee: Snap Inc.
    Inventors: Linjie Luo, Chongyang Ma, Zehao Xue
  • Publication number: 20220245907
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
    Type: Application
    Filed: April 25, 2022
    Publication date: August 4, 2022
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
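
An illustrative sketch of the two-stage estimate this abstract describes, with assumed data structures: start from the first position estimate, consider only façades within some radius of it, match each façade template against the captured image, and snap to the best match. The patented method also uses where in the image the façade matched; this sketch omits that refinement.

```python
# Illustrative two-stage estimate with assumed data structures: keep only
# façades near the first fix, template-match each against the image, and snap
# to the best-matching façade's known position.
import cv2
import numpy as np

def geo_distance(a, b):
    """Rough equirectangular distance in meters between (lat, lon) pairs."""
    lat = np.radians((a[0] + b[0]) / 2)
    dy = (a[0] - b[0]) * 111_320.0
    dx = (a[1] - b[1]) * 111_320.0 * np.cos(lat)
    return float(np.hypot(dx, dy))

def refine_position(image, coarse_fix, facade_db, radius_m=100.0):
    """facade_db: iterable of dicts with 'position' (lat, lon) and 'template'
    (a grayscale façade image). Returns a refined (lat, lon)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    best_score, best_pos = 0.0, coarse_fix
    for facade in facade_db:
        if geo_distance(facade["position"], coarse_fix) > radius_m:
            continue  # only façades near the first position estimate
        tmpl = facade["template"]
        if tmpl.shape[0] > gray.shape[0] or tmpl.shape[1] > gray.shape[1]:
            continue  # template must fit inside the captured image
        score = float(cv2.matchTemplate(gray, tmpl,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_score, best_pos = score, facade["position"]
    return best_pos
```
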
  • Patent number: 11380051
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: July 5, 2022
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
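
The point-cloud alignment step in this abstract reduces, in an idealized form, to a perspective-n-point problem: given image points matched to 3D key points, recover the camera pose and project the AR object's anchor into the frame. Below is a hedged sketch using OpenCV's solvePnP; the 2D-3D matching itself is assumed to have happened upstream.

```python
# Hedged sketch: recover camera pose from 2D-3D correspondences with OpenCV's
# solvePnP, then project the AR object's 3D anchor into the frame.
import cv2
import numpy as np

def place_ar_object(points_3d, points_2d, camera_matrix, anchor_3d):
    """points_3d: (N, 3) point-cloud key points matched to points_2d (N, 2),
    N >= 4. Returns the anchor's (u, v) pixel position, or None on failure."""
    dist = np.zeros(4)  # assume an undistorted camera for this sketch
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float32),
                                  points_2d.astype(np.float32),
                                  camera_matrix, dist)
    if not ok:
        return None
    anchor = np.asarray(anchor_3d, np.float32).reshape(1, 3)
    projected, _ = cv2.projectPoints(anchor, rvec, tvec, camera_matrix, dist)
    return projected.reshape(2)  # pixel location to render the AR object at
```
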
  • Patent number: 11367205
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: June 21, 2022
    Assignee: Snap Inc.
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
  • Patent number: 11315259
    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and may track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
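
A schematic PyTorch sketch of the composite idea in this abstract: a deep network produces joint heatmaps on one portion of the frames (here, a keyframe), and a cheaper shallow network, conditioned on the previous heatmaps, handles the rest. All layer sizes are illustrative, and the one-shot learner and convolutional pose machine are omitted.

```python
# Schematic sketch: a deep net produces joint heatmaps on a keyframe; a
# shallow net, conditioned on the previous heatmaps, handles later frames.
# Layer sizes are illustrative; the one-shot tracker and convolutional pose
# machine from the abstract are omitted.
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumed joint count

class CompositePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.deep = nn.Sequential(                 # run on the keyframe
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, NUM_JOINTS, 1))
        self.shallow = nn.Sequential(              # run on the other frames,
            nn.Conv2d(3 + NUM_JOINTS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, NUM_JOINTS, 1))          # conditioned on prior maps

    def forward(self, frames):
        """frames: (T, 3, H, W). Returns (T, NUM_JOINTS, H, W) joint heatmaps."""
        heatmaps = [self.deep(frames[0:1])]        # expensive pass, once
        for t in range(1, frames.shape[0]):        # cheap passes afterwards
            x = torch.cat([frames[t:t + 1], heatmaps[-1]], dim=1)
            heatmaps.append(self.shallow(x))
        return torch.cat(heatmaps, dim=0)
```
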
  • Patent number: 11315331
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Patent number: 11308706
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: April 19, 2022
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
  • Publication number: 20220036647
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Application
    Filed: October 12, 2021
    Publication date: February 3, 2022
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
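
The "light cones" in this abstract correspond to the rays each image element subtends in 3D. As a loose, assumption-laden companion, here is the standard back-projection of pixels to unit rays given camera intrinsics; how those rays constrain the generated 3D model is the inventive part and is not reproduced here.

```python
# Loose sketch: standard back-projection of image elements (pixels) to the
# unit rays they subtend in 3D, given camera intrinsics. How these rays
# constrain the generated 3D model is the inventive part and is not shown.
import numpy as np

def pixel_rays(height, width, camera_matrix):
    """Return unit ray directions, shape (H, W, 3), one per pixel."""
    fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
    cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, float)],
                    axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)
```
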
  • Patent number: 11164376
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 2, 2021
    Assignee: Snap Inc.
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Publication number: 20210319540
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 14, 2021
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
  • Patent number: 11087513
    Abstract: Systems and methods are provided for receiving an image from a camera of a mobile device, analyzing the image to determine a subject of the image, segmenting the subject of the image to generate a mask indicating an area of the image comprising the subject of the image, applying a bokeh effect to a background region of the image to generate a processed background region, generating an output image comprising the subject of the image and the processed background region, and causing the generated output image to display on a display of the mobile device.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: August 10, 2021
    Assignee: Snap Inc.
    Inventors: Kun Duan, Nan Hu, Linjie Luo, Chongyang Ma, Guohui Wang
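
The pipeline in this abstract is straightforward to sketch once a subject mask is available (assumed here to come from a separate segmentation model): blur the background, feather the mask edge, and composite the sharp subject back over the blurred background.

```python
# Minimal sketch assuming the subject mask comes from a separate segmentation
# model: blur the background, feather the mask edge, and composite the sharp
# subject back over the blurred background.
import cv2
import numpy as np

def apply_bokeh(image, subject_mask, blur_ksize=31):
    """image: BGR frame; subject_mask: uint8, 255 where the subject is."""
    background = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    # Feather the mask so the subject/background seam blends smoothly.
    alpha = cv2.GaussianBlur(subject_mask, (15, 15), 0).astype(np.float32) / 255
    alpha = alpha[..., None]  # (H, W, 1) to broadcast over color channels
    out = alpha * image.astype(np.float32) + (1 - alpha) * background
    return out.astype(np.uint8)
```
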
  • Patent number: 11074675
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: July 27, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
  • Publication number: 20210174578
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Application
    Filed: February 10, 2021
    Publication date: June 10, 2021
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Publication number: 20210165998
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex that is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
    Type: Application
    Filed: February 12, 2021
    Publication date: June 3, 2021
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
  • Patent number: 10997783
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: May 4, 2021
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Publication number: 20210125342
    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and may track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: November 5, 2020
    Publication date: April 29, 2021
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang