Patents by Inventor Yuncheng Li

Yuncheng Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11551374
    Abstract: Systems and methods herein describe using a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates and identifying a three-dimensional hand pose based on both the first and second sets of joint location coordinates.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: January 10, 2023
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Jonathan M. Rodriguez, II, Zehao Xue, Yingying Wang
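As a rough illustration of the two-set fusion this abstract describes, the sketch below combines two per-joint coordinate sets into a single 3D hand pose. This is not the patented implementation; it assumes, purely for illustration, that the first set holds (x, y) image-plane coordinates and the second set a relative depth per joint.

```python
# Illustrative sketch only: fuse two sets of per-joint predictions into one
# 3D hand pose. Assumes the first set holds (x, y) image-plane coordinates
# and the second set the relative depth (z) of each joint.

def fuse_joint_sets(xy_coords, z_coords):
    """Combine 2D joint locations with per-joint depths into 3D points."""
    assert len(xy_coords) == len(z_coords)
    return [(x, y, z) for (x, y), z in zip(xy_coords, z_coords)]

# Toy example with 3 joints instead of a full 21-joint hand skeleton.
xy = [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)]
z = [1.0, 1.1, 1.2]
pose_3d = fuse_joint_sets(xy, z)
```

In the patented system a neural network produces both coordinate sets; here they are hard-coded to keep the fusion step visible.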
  • Publication number: 20220414985
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for receiving a monocular image that includes a depiction of a hand and extracting features of the monocular image using a plurality of machine learning techniques. The program and method further include modeling, based on the extracted features, a pose of the hand depicted in the monocular image by adjusting skeletal joint positions of a three-dimensional (3D) hand mesh using a trained graph convolutional neural network (CNN); modeling, based on the extracted features, a shape of the hand in the monocular image by adjusting blend shape values of the 3D hand mesh representing surface features of the hand depicted in the monocular image using the trained graph CNN; and generating, for display, the 3D hand mesh adjusted to model the pose and shape of the hand depicted in the monocular image.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Inventors: Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang
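The blend-shape step in this abstract can be sketched in a few lines. The snippet below is a minimal linear blend-shape model, not Snap's graph-CNN implementation: a base hand mesh is deformed by adding weighted blend-shape offsets, where a trained network would supply the weights.

```python
# Minimal blend-shape sketch (not the patented graph CNN): deform a base
# mesh by weighted per-vertex offsets, v_i = base_i + sum_k w_k * shape_k[i].

def apply_blend_shapes(base_vertices, blend_shapes, weights):
    """Add weighted blend-shape offsets to each base vertex."""
    deformed = []
    for i, v in enumerate(base_vertices):
        offset = [0.0, 0.0, 0.0]
        for w, shape in zip(weights, blend_shapes):
            for axis in range(3):
                offset[axis] += w * shape[i][axis]
        deformed.append(tuple(v[a] + offset[a] for a in range(3)))
    return deformed

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]       # two mesh vertices
shapes = [[(0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]]   # one blend shape
mesh = apply_blend_shapes(base, shapes, weights=[2.0])
```

Pose modeling in the patent analogously adjusts skeletal joint positions; only the shape half is sketched here.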
  • Publication number: 20220345435
    Abstract: Systems, methods, devices, computer-readable instruction media, and other embodiments are described for automated image processing and insight presentation. One embodiment involves receiving a plurality of ephemeral content messages from a plurality of client devices, and processing the messages to identify content associated with at least a first content type. A set of analysis data associated with the first content type is then generated from the messages, and portions of the messages associated with the first content type are processed to generate a first content collection. The first content collection and the set of analysis data are then communicated to a client device configured for a display interface comprising the first content collection and a representation of at least a portion of the set of analysis data.
    Type: Application
    Filed: April 4, 2022
    Publication date: October 27, 2022
    Inventors: Harsh Agrawal, Xuan Huang, Jung Hyun Kim, Yuncheng Li, Yiwei Ma, Tao Ning, Ye Tao
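The pipeline in this abstract reduces to: filter messages by content type, then emit a content collection plus analysis data. The sketch below is hypothetical; the message fields and analysis statistics are invented for illustration.

```python
# Hypothetical sketch of the insight pipeline: group content messages by
# type, then build a content collection and simple analysis data for one
# type. Field names ("type", "content") are illustrative, not from the patent.

def build_collection(messages, content_type):
    """Return (collection, analysis data) for one content type."""
    matching = [m for m in messages if m["type"] == content_type]
    analysis = {"type": content_type, "count": len(matching)}
    collection = [m["content"] for m in matching]
    return collection, analysis

msgs = [
    {"type": "food", "content": "a"},
    {"type": "travel", "content": "b"},
    {"type": "food", "content": "c"},
]
collection, analysis = build_collection(msgs, "food")
```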
  • Patent number: 11468636
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for receiving a monocular image that includes a depiction of a hand and extracting features of the monocular image using a plurality of machine learning techniques. The program and method further include modeling, based on the extracted features, a pose of the hand depicted in the monocular image by adjusting skeletal joint positions of a three-dimensional (3D) hand mesh using a trained graph convolutional neural network (CNN); modeling, based on the extracted features, a shape of the hand in the monocular image by adjusting blend shape values of the 3D hand mesh representing surface features of the hand depicted in the monocular image using the trained graph CNN; and generating, for display, the 3D hand mesh adjusted to model the pose and shape of the hand depicted in the monocular image.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: October 11, 2022
    Assignee: Snap Inc.
    Inventors: Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang
  • Patent number: 11410439
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
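The alignment-then-classification flow in this abstract can be sketched simply. Below, views are ordered into a canonical sequence by camera azimuth and the aggregated features are scored against class prototypes; the patented system instead runs a convolutional network over the aligned sequences, and the angles and prototypes here are assumptions.

```python
# Illustrative only: reorder per-view features by camera arrangement (here,
# sorted by azimuth angle), then classify from the aggregated features.

def align_views(views, azimuths):
    """Order per-view feature vectors by their camera azimuth angle."""
    return [v for _, v in sorted(zip(azimuths, views))]

def classify(aligned_views, prototypes):
    """Score the mean view feature against each class prototype (dot product)."""
    dim = len(aligned_views[0])
    mean = [sum(v[i] for v in aligned_views) / len(aligned_views) for i in range(dim)]
    scores = {label: sum(m * p for m, p in zip(mean, proto))
              for label, proto in prototypes.items()}
    return max(scores, key=scores.get)

views = [[0.0, 1.0], [1.0, 0.0]]          # one feature vector per camera view
aligned = align_views(views, azimuths=[90, 0])
label = classify(aligned, {"cup": [1.0, 1.0], "chair": [-1.0, -1.0]})
```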
  • Publication number: 20220230277
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 21, 2022
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
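The native-to-target conversion described here can be pictured as a precision change keyed to the client environment. The sketch below uses simple int8-style dequantization as a stand-in; the quantization scheme and the `prefers_precision` flag are assumptions, not details from the publication.

```python
# Sketch of per-client model conversion (scheme is an assumption): a native
# model stored with compact quantized weights is converted into a target
# model with more precise float weights when the client environment calls
# for it.

def dequantize(q_weights, scale, zero_point):
    """Map quantized weights back to floats: w = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_weights]

def convert_for_client(native_model, client_env):
    """Return float weights for precision-preferring clients, else keep quantized."""
    if client_env.get("prefers_precision"):
        return dequantize(native_model["weights"],
                          native_model["scale"],
                          native_model["zero_point"])
    return native_model["weights"]

native = {"weights": [0, 128, 255], "scale": 0.01, "zero_point": 128}
target = convert_for_client(native, {"prefers_precision": True})
```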
  • Publication number: 20220172448
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Application
    Filed: February 17, 2022
    Publication date: June 2, 2022
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
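The scale-to-depth step in this abstract follows standard pinhole-camera geometry, which the sketch below illustrates; the focal length, real-world object height, and sizing constant are invented for the example, not taken from the publication.

```python
# Illustrative pinhole-camera sketch: a detected bounding box's pixel height,
# combined with an assumed real-world object height, yields a depth estimate
# that places and scales an inserted graphical element.

def estimate_depth(bbox_height_px, real_height_m, focal_px):
    """Pinhole model: depth = focal * real_height / pixel_height."""
    return focal_px * real_height_m / bbox_height_px

def place_graphic(bbox, real_height_m, focal_px):
    """Position a graphic at the object's estimated 3D location."""
    x, y, w, h = bbox
    depth = estimate_depth(h, real_height_m, focal_px)
    center = (x + w / 2, y + h / 2)
    return {"position": (center[0], center[1], depth),
            "scale": h / 100.0}   # graphic sized relative to the bbox height

graphic = place_graphic(bbox=(10, 20, 40, 50), real_height_m=0.5, focal_px=500)
```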
  • Patent number: 11315259
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
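The composite-network idea in this abstract, a deep network on one portion of the frames and a shallow network on the rest, can be sketched as simple routing. The stand-in "networks" below are trivial functions, not real CNNs, and the keyframe interval is an assumption.

```python
# Routing sketch for the composite network: keyframes go through a deep,
# slower model while intermediate frames use a shallow, faster one, and the
# results form a single stream of joint estimates.

def track_joints(frames, deep_net, shallow_net, keyframe_every=3):
    """Run the deep network on keyframes and the shallow one elsewhere."""
    joints = []
    for i, frame in enumerate(frames):
        net = deep_net if i % keyframe_every == 0 else shallow_net
        joints.append(net(frame))
    return joints

def deep(frame):
    return ("deep", frame)     # placeholder for the deep CNN

def shallow(frame):
    return ("shallow", frame)  # placeholder for the shallow CNN

out = track_joints(["f0", "f1", "f2", "f3"], deep, shallow)
```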
  • Patent number: 11315219
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
  • Patent number: 11297027
    Abstract: Systems, methods, devices, computer-readable instruction media, and other embodiments are described for automated image processing and insight presentation. One embodiment involves receiving a plurality of ephemeral content messages from a plurality of client devices, and processing the messages to identify content associated with at least a first content type. A set of analysis data associated with the first content type is then generated from the messages, and portions of the messages associated with the first content type are processed to generate a first content collection. The first content collection and the set of analysis data are then communicated to a client device configured for a display interface comprising the first content collection and a representation of at least a portion of the set of analysis data.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: April 5, 2022
    Assignee: Snap Inc.
    Inventors: Harsh Agrawal, Xuan Huang, Jung Hyun Kim, Yuncheng Li, Yiwei Ma, Tao Ning, Ye Tao
  • Patent number: 11288879
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: March 29, 2022
    Assignee: Snap Inc.
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
  • Publication number: 20220044010
    Abstract: Segmentation of an image into individual body parts is performed based on a trained model. The model is trained with a plurality of training images, each training image representing a corresponding training figure. The model is also trained with a corresponding plurality of segmentations of the training figures. Each segmentation is generated by positioning body parts between defined positions of joints of the represented figure. The body parts are represented by body part templates obtained from a template library, with the templates defining characteristics of body parts represented by the templates.
    Type: Application
    Filed: October 22, 2021
    Publication date: February 10, 2022
    Inventors: Yuncheng Li, Linjie Yang, Ning Zhang, Zhengyuan Yang
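The template-placement idea in this abstract, positioning body parts between defined joint positions using templates from a library, can be sketched directly. The joint names, skeleton bones, and template widths below are made up for the example.

```python
# Illustrative sketch of template-based part placement: each body part is a
# segment between two joint positions, with a width taken from its template.

def build_part_segments(joints, template_library, skeleton):
    """Place one template per bone: part name, endpoints, and width."""
    segments = []
    for part, (joint_a, joint_b) in skeleton.items():
        width = template_library[part]["width"]
        segments.append({"part": part,
                         "from": joints[joint_a],
                         "to": joints[joint_b],
                         "width": width})
    return segments

joints = {"shoulder": (0, 0), "elbow": (0, 10), "wrist": (0, 18)}
templates = {"upper_arm": {"width": 4}, "forearm": {"width": 3}}
skeleton = {"upper_arm": ("shoulder", "elbow"), "forearm": ("elbow", "wrist")}
parts = build_part_segments(joints, templates, skeleton)
```

In the patent these placements label training images so a segmentation model can be trained; only the placement step is shown.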
  • Publication number: 20210407548
    Abstract: Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for receiving a video comprising a plurality of video segments; selecting a target action sequence that includes a sequence of action phases; receiving features of each of the video segments; computing, based on the received features, for each of the plurality of video segments, a plurality of action phase confidence scores indicating a likelihood that a given video segment includes a given action phase of the sequence of action phases; identifying a set of consecutive video segments of the plurality of video segments that corresponds to the target action sequence, wherein video segments in the set of consecutive video segments are arranged according to the sequence of action phases; and generating a display of the video that includes the set of consecutive video segments and skips other video segments in the video.
    Type: Application
    Filed: September 2, 2021
    Publication date: December 30, 2021
    Inventors: Zhou Ren, Yuncheng Li, Ning Xu, Enxu Yan, Tan Yu
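The segment-selection step in this abstract, finding consecutive video segments whose action phases match a target sequence, is sketched greedily below. The phase names and confidence scores are invented; the patented system computes the scores from learned video features.

```python
# Greedy sketch of action-run selection: given per-segment confidence scores
# over action phases, find a run of consecutive segments whose best-scoring
# phases spell out the target phase sequence in order.

def find_action_run(phase_scores, target_phases):
    """Return (start, end) segment indices of the first matching run, else None."""
    best = [max(scores, key=scores.get) for scores in phase_scores]
    n = len(target_phases)
    for start in range(len(best) - n + 1):
        if best[start:start + n] == list(target_phases):
            return start, start + n - 1
    return None

scores = [
    {"idle": 0.9, "windup": 0.1},
    {"windup": 0.8, "swing": 0.2},
    {"swing": 0.7, "windup": 0.3},
    {"follow_through": 0.6, "swing": 0.4},
]
run = find_action_run(scores, ("windup", "swing", "follow_through"))
```

The returned index range is what a player would keep, skipping the other segments when displaying the video.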
  • Patent number: 11182603
    Abstract: Segmentation of an image into individual body parts is performed based on a trained model. The model is trained with a plurality of training images, each training image representing a corresponding training figure. The model is also trained with a corresponding plurality of segmentations of the training figures. Each segmentation is generated by positioning body parts between defined positions of joints of the represented figure. The body parts are represented by body part templates obtained from a template library, with the templates defining characteristics of body parts represented by the templates.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: November 23, 2021
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Yang, Ning Zhang, Zhengyuan Yang
  • Patent number: 11158351
    Abstract: Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for receiving a video comprising a plurality of video segments; selecting a target action sequence that includes a sequence of action phases; receiving features of each of the video segments; computing, based on the received features, for each of the plurality of video segments, a plurality of action phase confidence scores indicating a likelihood that a given video segment includes a given action phase of the sequence of action phases; identifying a set of consecutive video segments of the plurality of video segments that corresponds to the target action sequence, wherein video segments in the set of consecutive video segments are arranged according to the sequence of action phases; and generating a display of the video that includes the set of consecutive video segments and skips other video segments in the video.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: October 26, 2021
    Assignee: Snap Inc.
    Inventors: Zhou Ren, Yuncheng Li, Ning Xu, Enxu Yan, Tan Yu
  • Publication number: 20210225077
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for receiving a monocular image that includes a depiction of a hand and extracting features of the monocular image using a plurality of machine learning techniques. The program and method further include modeling, based on the extracted features, a pose of the hand depicted in the monocular image by adjusting skeletal joint positions of a three-dimensional (3D) hand mesh using a trained graph convolutional neural network (CNN); modeling, based on the extracted features, a shape of the hand in the monocular image by adjusting blend shape values of the 3D hand mesh representing surface features of the hand depicted in the monocular image using the trained graph CNN; and generating, for display, the 3D hand mesh adjusted to model the pose and shape of the hand depicted in the monocular image.
    Type: Application
    Filed: April 5, 2021
    Publication date: July 22, 2021
    Inventors: Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang
  • Publication number: 20210209825
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body depicted in the monocular image; and determining a pose represented by the body depicted in the monocular image based on the detected plurality of skeletal joints of the body. A pose of an avatar is modified to match the pose represented by the body depicted in the monocular image by adjusting a set of skeletal joints of a rig of an avatar based on the detected plurality of skeletal joints of the body; and the avatar having the modified pose that matches the pose represented by the body depicted in the monocular image is generated for display.
    Type: Application
    Filed: March 25, 2021
    Publication date: July 8, 2021
    Inventors: Avihay Assouline, Itamar Berger, Yuncheng Li
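The rig-adjustment step in this abstract, matching an avatar's pose to the detected body pose, reduces to mapping detected joints onto the corresponding rig joints. The joint names below are illustrative, and real retargeting would adjust joint rotations rather than copy positions.

```python
# Minimal retargeting sketch: detected skeletal joints from the image drive
# the matching joints of an avatar rig so the avatar mirrors the user's pose.
# Rig joints with no detected counterpart keep their current position.

def retarget_pose(detected_joints, rig):
    """Copy each detected joint position onto the matching rig joint."""
    updated = dict(rig)
    for name, position in detected_joints.items():
        if name in updated:
            updated[name] = position
    return updated

rig = {"head": (0, 0), "left_hand": (0, 0), "tail": (5, 5)}  # avatar-only joint
detected = {"head": (1, 2), "left_hand": (3, 4)}
posed = retarget_pose(detected, rig)
```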
  • Patent number: 10997787
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for receiving a monocular image that includes a depiction of a hand and extracting features of the monocular image using a plurality of machine learning techniques. The program and method further include modeling, based on the extracted features, a pose of the hand depicted in the monocular image by adjusting skeletal joint positions of a three-dimensional (3D) hand mesh using a trained graph convolutional neural network (CNN); modeling, based on the extracted features, a shape of the hand in the monocular image by adjusting blend shape values of the 3D hand mesh representing surface features of the hand depicted in the monocular image using the trained graph CNN; and generating, for display, the 3D hand mesh adjusted to model the pose and shape of the hand depicted in the monocular image.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: May 4, 2021
    Assignee: Snap Inc.
    Inventors: Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang
  • Publication number: 20210125342
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: November 5, 2020
    Publication date: April 29, 2021
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
  • Patent number: 10984575
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body depicted in the monocular image; and determining a pose represented by the body depicted in the monocular image based on the detected plurality of skeletal joints of the body. A pose of an avatar is modified to match the pose represented by the body depicted in the monocular image by adjusting a set of skeletal joints of a rig of an avatar based on the detected plurality of skeletal joints of the body; and the avatar having the modified pose that matches the pose represented by the body depicted in the monocular image is generated for display.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: April 20, 2021
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Yuncheng Li