Patents by Inventor Yuncheng Li

Yuncheng Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240108743
    Abstract: A linker peptide for constructing a fusion protein. The linker peptide comprises a flexible peptide and a rigid peptide. The flexible peptide consists of one or more flexible units, and the rigid peptide consists of one or more rigid units. The flexible unit comprises two or more amino acid residues selected from Gly, Ser, Ala, and Thr. The rigid unit comprises a human chorionic gonadotropin β-subunit carboxy-terminal peptide (CTP) bearing a plurality of glycosylation sites. The linker peptide more effectively eliminates mutual steric hindrance between the two fused molecules, reducing the loss of polymerization or activity that results from improper folding of an active protein or from a conformational change. In addition, the negatively charged, highly sialylated CTP can resist renal clearance, further prolonging the half-life of the fused molecule and enhancing the bioavailability of the fused protein.
    Type: Application
    Filed: October 24, 2023
    Publication date: April 4, 2024
    Inventors: Qiang LI, Yuanli LI, Si CHEN, Zhu WANG, Zhao DONG, Zirui LI, Xinlu MA, Lu YANG, Yongjuan GAO, Yuncheng ZHENG, Naichao SUN
  • Publication number: 20240089325
    Abstract: A technique is provided for determining a loss risk assessment score for a vehicle trip. The technique includes, at a vehicle, a computing device receiving first information indicative of operation of the vehicle. The technique also includes, at the vehicle, the computing device receiving second information indicative of an environment at a particular location and time. The computing device correlates the first information and the second information to generate a data set. The technique also includes determining a score for the vehicle trip based at least in part upon the generated data set.
    Type: Application
    Filed: November 14, 2023
    Publication date: March 14, 2024
    Inventors: Krishna Nemmani, Zebediah Robert Black, Corey Casmedes, Hoang Dang, Yuncheng Gao, Tyler Hargreaves, John Steven Kirtzic, Zongzhe Li, Einar Longva, Victor Mao, Trac Nguyen, Sivarama Kirshna Panguluri, Dalton Sherer, Edward Yang
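The claimed flow for 20240089325 can be paraphrased as a short sketch: join on-vehicle telemetry with environment readings by timestamp, then score the combined data set. All field names, weights, and the rain multiplier below are invented for illustration and do not appear in the application.

```python
def correlate(vehicle_events, env_events):
    """Join the two streams on their shared timestamp key."""
    env_by_ts = {e["ts"]: e for e in env_events}
    return [
        {**v, **env_by_ts[v["ts"]]}
        for v in vehicle_events
        if v["ts"] in env_by_ts
    ]

def trip_score(data_set):
    """Toy loss-risk score: hard braking and speeding count more in rain."""
    score = 0.0
    for row in data_set:
        risk = row["hard_brakes"] * 2.0 + row["speed_over_limit"] * 1.5
        if row["weather"] == "rain":
            risk *= 1.5
        score += risk
    return score

vehicle = [{"ts": 1, "hard_brakes": 2, "speed_over_limit": 5},
           {"ts": 2, "hard_brakes": 0, "speed_over_limit": 0}]
env = [{"ts": 1, "weather": "rain"}, {"ts": 2, "weather": "clear"}]
print(trip_score(correlate(vehicle, env)))  # 17.25
```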
  • Publication number: 20240062390
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: September 1, 2023
    Publication date: February 22, 2024
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
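The composite-network split described in 20240062390 — a deep network on one portion of the frames, a shallow network on the rest — can be illustrated with stand-in functions. The routing logic (keyframes to the expensive detector, in-between frames to a cheap tracker) is the point of the sketch; the "networks" and the keyframe interval are invented placeholders.

```python
def deep_detect(frame):
    # Stand-in for the deep CNN: full joint detection on a keyframe.
    return {"joints": [(x, x) for x in frame["pixels"][:3]], "source": "deep"}

def shallow_track(frame, prev_joints):
    # Stand-in for the shallow CNN: cheap refinement of the previous joints.
    return {"joints": [(x + 1, y + 1) for x, y in prev_joints], "source": "shallow"}

def track_pose(frames, keyframe_every=3):
    """Route each frame to the deep or shallow path and collect results."""
    sources, joints = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_every == 0 or joints is None:
            out = deep_detect(frame)
        else:
            out = shallow_track(frame, joints)
        joints = out["joints"]
        sources.append(out["source"])
    return sources

frames = [{"pixels": [0, 1, 2]} for _ in range(5)]
print(track_pose(frames))  # ['deep', 'shallow', 'shallow', 'deep', 'shallow']
```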
  • Publication number: 20240062335
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
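The native-to-target conversion in 20240062335 — efficient parameters on the server, more precise ones for clients whose runtimes need them — resembles dequantizing int8 weights to float. The scale factor, the capability flag, and the weights below are all invented for illustration.

```python
def quantize(weights, scale=0.01):
    """Pack float weights into an efficient int8-style native form."""
    return [max(-128, min(127, round(w / scale))) for w in weights]

def to_target(native_weights, scale=0.01):
    """Convert the native network to the less efficient, more precise float form."""
    return [q * scale for q in native_weights]

def convert_for_client(native_weights, client_env):
    # Ship the native form only where the client runtime supports it;
    # otherwise convert remotely before distribution.
    if client_env.get("supports_int8"):
        return native_weights
    return to_target(native_weights)

native = quantize([0.05, -0.123, 1.0])
print(native)                                            # [5, -12, 100]
print(convert_for_client(native, {"supports_int8": False}))
```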
  • Patent number: 11880509
    Abstract: Systems and methods herein describe using a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates and identifying a three-dimensional hand pose based on both the first and second sets of joint location coordinates.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: January 23, 2024
    Assignee: SNAP INC.
    Inventors: Yuncheng Li, Jonathan M. Rodriguez, II, Zehao Xue, Yingying Wang
  • Patent number: 11847760
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: December 19, 2023
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
  • Patent number: 11830209
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: November 28, 2023
    Assignee: SNAP INC.
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
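The scale-to-depth step in 11830209 — estimating a three-dimensional position from the object's apparent scale, then sizing an overlay to match — follows the standard pinhole-camera relation. The focal length and object width below are assumed constants, not values from the patent.

```python
FOCAL_PX = 500.0          # assumed focal length in pixels
REAL_WIDTH_M = 0.20       # assumed real-world width of the detected object

def estimate_depth(bbox_width_px):
    """Pinhole model: depth = focal * real_width / pixel_width."""
    return FOCAL_PX * REAL_WIDTH_M / bbox_width_px

def overlay_size(bbox_width_px, overlay_real_width_m=0.10):
    """Size a graphical element so it appears at the object's depth."""
    depth = estimate_depth(bbox_width_px)
    return FOCAL_PX * overlay_real_width_m / depth

print(estimate_depth(100))   # ≈ 1.0 metre away
print(overlay_size(100))     # ≈ 50.0 pixels wide
```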
  • Publication number: 20230376757
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Application
    Filed: August 4, 2023
    Publication date: November 23, 2023
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
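The alignment step in 20230376757 — bringing each virtual camera's view sequence into a shared order based on the cameras' arrangement around the object — can be sketched as rotating each sequence by its camera's angular offset. The camera angles, step size, and view labels are invented for illustration.

```python
def align_sequence(views, camera_angle_deg, step_deg=90):
    """Rotate a view sequence so index 0 corresponds to the canonical angle 0."""
    offset = (-(camera_angle_deg // step_deg)) % len(views)
    return views[offset:] + views[:offset]

# Two virtual cameras 90 degrees apart see the same ring of views,
# but starting from different positions.
cams = {0: ["front", "right", "back", "left"],
        90: ["right", "back", "left", "front"]}
aligned = [align_sequence(v, a) for a, v in cams.items()]
print(aligned[0] == aligned[1])  # True: both sequences now start at "front"
```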
  • Patent number: 11783494
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: October 10, 2023
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
  • Publication number: 20230290174
    Abstract: Segmentation of an image into individual body parts is performed based on a trained model. The model is trained with a plurality of training images, each training image representing a corresponding training figure. The model is also trained with a corresponding plurality of segmentations of the training figures. Each segmentation is generated by positioning body parts between defined positions of joints of the represented figure. The body parts are represented by body part templates obtained from a template library, with the templates defining characteristics of body parts represented by the templates.
    Type: Application
    Filed: May 16, 2023
    Publication date: September 14, 2023
    Inventors: Yuncheng Li, Linjie Yang, Ning Zhang, Zhengyuan Yang
  • Patent number: 11755910
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: September 12, 2023
    Assignee: SNAP INC.
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
  • Patent number: 11734844
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for receiving a monocular image that includes a depiction of a hand and extracting features of the monocular image using a plurality of machine learning techniques. The program and method further include modeling, based on the extracted features, a pose of the hand depicted in the monocular image by adjusting skeletal joint positions of a three-dimensional (3D) hand mesh using a trained graph convolutional neural network (CNN); modeling, based on the extracted features, a shape of the hand in the monocular image by adjusting blend shape values of the 3D hand mesh representing surface features of the hand depicted in the monocular image using the trained graph CNN; and generating, for display, the 3D hand mesh adjusted to model the pose and shape of the hand depicted in the monocular image.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: August 22, 2023
    Assignee: Snap Inc.
    Inventors: Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang
  • Patent number: 11727710
    Abstract: Segmentation of an image into individual body parts is performed based on a trained model. The model is trained with a plurality of training images, each training image representing a corresponding training figure. The model is also trained with a corresponding plurality of segmentations of the training figures. Each segmentation is generated by positioning body parts between defined positions of joints of the represented figure. The body parts are represented by body part templates obtained from a template library, with the templates defining characteristics of body parts represented by the templates.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: August 15, 2023
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Linjie Yang, Ning Zhang, Zhengyuan Yang
  • Patent number: 11704893
    Abstract: Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for receiving a video comprising a plurality of video segments; selecting a target action sequence that includes a sequence of action phases; receiving features of each of the video segments; computing, based on the received features, for each of the plurality of video segments, a plurality of action phase confidence scores indicating a likelihood that a given video segment includes a given action phase of the sequence of action phases; identifying a set of consecutive video segments of the plurality of video segments that corresponds to the target action sequence, wherein video segments in the set of consecutive video segments are arranged according to the sequence of action phases; and generating a display of the video that includes the set of consecutive video segments and skips other video segments in the video.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: July 18, 2023
    Assignee: Snap Inc.
    Inventors: Zhou Ren, Yuncheng Li, Ning Xu, Enxu Yan, Tan Yu
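The core of 11704893 — per-segment confidence scores for each action phase, then finding the consecutive segments that match the target phase order — reduces to a best-window search. The scoring matrix and phase names below are illustrative only; the patent does not specify this exact alignment.

```python
def best_window(scores, num_phases):
    """scores[i][p] = confidence that segment i shows phase p.
    Returns (start, total) for the run of num_phases consecutive segments
    best aligned one phase per segment, in order."""
    best = (None, float("-inf"))
    for start in range(len(scores) - num_phases + 1):
        total = sum(scores[start + p][p] for p in range(num_phases))
        if total > best[1]:
            best = (start, total)
    return best

# 5 segments x 3 phases (e.g. wind-up, swing, follow-through)
scores = [
    [0.1, 0.1, 0.1],
    [0.9, 0.2, 0.1],   # segments 1-3 line up with phases 0-2
    [0.1, 0.8, 0.2],
    [0.1, 0.1, 0.9],
    [0.2, 0.1, 0.1],
]
start, total = best_window(scores, 3)
print(start, round(total, 1))  # 1 2.6 — play segments 1-3, skip the rest
```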
  • Publication number: 20230146563
    Abstract: Systems, methods, devices, computer readable instruction media, and other embodiments are described for automated image processing and insight presentation. One embodiment involves receiving a plurality of ephemeral content messages from a plurality of client devices, and processing the messages to identify content associated with at least a first content type. A set of analysis data associated with the first content type is then generated from the messages, and portions of the messages associated with the first content type are processed to generate a first content collection. The first content collection and the set of analysis data are then communicated to a client device configured for a display interface comprising the first content collection and a representation of at least a portion of the set of analysis data.
    Type: Application
    Filed: January 4, 2023
    Publication date: May 11, 2023
    Inventors: Harsh Agrawal, Xuan Huang, Jung Hyun Kim, Yuncheng Li, Yiwei Ma, Tao Ning, Ye Tao
  • Publication number: 20230090086
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body depicted in the monocular image; and determining a pose represented by the body depicted in the monocular image based on the detected plurality of skeletal joints of the body. A pose of an avatar is modified to match the pose represented by the body depicted in the monocular image by adjusting a set of skeletal joints of a rig of an avatar based on the detected plurality of skeletal joints of the body; and the avatar having the modified pose that matches the pose represented by the body depicted in the monocular image is generated for display.
    Type: Application
    Filed: November 30, 2022
    Publication date: March 23, 2023
    Inventors: Avihay Assouline, Itamar Berger, Yuncheng Li
  • Patent number: 11601391
    Abstract: Systems, methods, devices, computer readable instruction media, and other embodiments are described for automated image processing and insight presentation. One embodiment involves receiving a plurality of ephemeral content messages from a plurality of client devices, and processing the messages to identify content associated with at least a first content type. A set of analysis data associated with the first content type is then generated from the messages, and portions of the messages associated with the first content type are processed to generate a first content collection. The first content collection and the set of analysis data are then communicated to a client device configured for a display interface comprising the first content collection and a representation of at least a portion of the set of analysis data.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 7, 2023
    Assignee: Snap Inc.
    Inventors: Harsh Agrawal, Xuan Huang, Jung Hyun Kim, Yuncheng Li, Yiwei Ma, Tao Ning, Ye Tao
  • Publication number: 20230034794
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Application
    Filed: August 1, 2022
    Publication date: February 2, 2023
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
  • Patent number: 11557075
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body depicted in the monocular image; and determining a pose represented by the body depicted in the monocular image based on the detected plurality of skeletal joints of the body. A pose of an avatar is modified to match the pose represented by the body depicted in the monocular image by adjusting a set of skeletal joints of a rig of an avatar based on the detected plurality of skeletal joints of the body; and the avatar having the modified pose that matches the pose represented by the body depicted in the monocular image is generated for display.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: January 17, 2023
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Yuncheng Li
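The rig-adjustment step in 11557075 — modifying an avatar's pose to match detected skeletal joints — amounts to retargeting each detected joint onto the corresponding rig joint. The joint names, the mapping table, and the angle representation are all hypothetical.

```python
# Hypothetical mapping from detected body joints to avatar rig joints.
DETECTED_TO_RIG = {"l_elbow": "LeftForeArm", "r_elbow": "RightForeArm"}

def apply_pose(detected_joints, rig):
    """detected_joints: {name: angle_deg}; rig: {rig_joint: angle_deg}.
    Copies each detected joint's angle onto its mapped rig joint."""
    posed = dict(rig)
    for name, angle in detected_joints.items():
        rig_name = DETECTED_TO_RIG.get(name)
        if rig_name in posed:
            posed[rig_name] = angle
    return posed

rig = {"LeftForeArm": 0.0, "RightForeArm": 0.0, "Spine": 0.0}
print(apply_pose({"l_elbow": 45.0, "r_elbow": -30.0}, rig))
```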
  • Publication number: 20230010480
    Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames; generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames; and track the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
    Type: Application
    Filed: April 25, 2022
    Publication date: January 12, 2023
    Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang