Patents by Inventor Jiancheng LYU

Jiancheng LYU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
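Short, hedged code sketches illustrating several of the listed techniques appear after the listing.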

  • Patent number: 12236614
    Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: February 25, 2025
    Assignee: QUALCOMM Incorporated
    Inventors: Jiancheng Lyu, Dashan Gao, Yingyong Qi, Shuai Zhang, Ning Bi
  • Publication number: 20240378727
    Abstract: Techniques are provided for image processing. For instance, a process can include obtaining an image; extracting a first set of features at a first scale resolution; extracting a second set of features at a second scale resolution (lower than the first scale resolution); performing a self-attention transform to generate similarity scores for the second set of features; adding the similarity scores to the second set of features to generate a first feature extractor output; up-sampling the first feature extractor output to generate a second feature extractor output; adding the second feature extractor output to the first set of features to generate a third feature extractor output; receiving an instance query; performing a cross-attention transform on the instance query and the first feature extractor output to generate a set of weights; and matrix multiplying the set of weights and the third feature extractor output to generate instance masks.
    Type: Application
    Filed: May 12, 2023
    Publication date: November 14, 2024
    Inventors: Xin LI, Jiancheng LYU, Yingyong QI
  • Patent number: 12141981
    Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: November 12, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Shuai Zhang, Xiaowen Ying, Jiancheng Lyu, Yingyong Qi
  • Publication number: 20240233140
    Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
    Type: Application
    Filed: October 25, 2022
    Publication date: July 11, 2024
    Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
  • Publication number: 20240135549
    Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 25, 2024
    Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
  • Publication number: 20230386052
    Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Jiancheng LYU, Dashan GAO, Yingyong QI, Shuai ZHANG, Ning BI
  • Publication number: 20230306600
    Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
    Type: Application
    Filed: February 10, 2022
    Publication date: September 28, 2023
    Inventors: Shuai ZHANG, Xiaowen YING, Jiancheng LYU, Yingyong QI
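
The tracking flow claimed in patent 12236614 (and its published application 20230386052) can be pictured with a small sketch: first-frame features are pooled under the provided mask into foreground and background representations, and second-frame features are scored against those representations to locate the target. The convolutional encoder, feature size, and cosine-similarity matching below are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn.functional as F

# Toy encoder shared across frames; any backbone producing a (C, H, W) map could stand in.
encoder = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

def extract_features(frame: torch.Tensor) -> torch.Tensor:
    return encoder(frame.unsqueeze(0)).squeeze(0)                 # (16, H, W)

def foreground_background_representation(feats, mask):
    """Pool first-frame features into a foreground and a background prototype."""
    m = mask.float()
    fg = (feats * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)
    bg = (feats * (1 - m)).sum(dim=(1, 2)) / (1 - m).sum().clamp(min=1.0)
    return fg, bg                                                 # two (C,) vectors

def locate_target(second_feats, fg, bg):
    """Score every location of the second frame against the two prototypes."""
    fg_sim = F.cosine_similarity(second_feats, fg[:, None, None], dim=0)
    bg_sim = F.cosine_similarity(second_feats, bg[:, None, None], dim=0)
    score = fg_sim - bg_sim                                       # high where the target is likely
    row, col = divmod(torch.argmax(score).item(), score.shape[1])
    return (row, col), score

# First frame with a mask indicating the target object; second frame to localize it in.
first, second = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
mask = torch.zeros(64, 64)
mask[20:40, 20:40] = 1.0
fg, bg = foreground_background_representation(extract_features(first), mask)
location, _ = locate_target(extract_features(second), fg, bg)
print("estimated target location:", location)
```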
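
Publication 20240378727 enumerates its steps explicitly, so a hedged sketch can follow them almost one-for-one: self-attention over the lower-resolution features, a residual add, up-sampling, a second add against the higher-resolution features, cross-attention with an instance query, and a final matrix multiply. The tensor shapes and the use of torch.nn.MultiheadAttention (whose outputs stand in for the abstract's "similarity scores" and "weights") are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

C, H, W = 32, 16, 16                                   # feature channels and high-resolution grid
n_queries = 8                                          # number of instance queries

self_attn = torch.nn.MultiheadAttention(C, num_heads=4, batch_first=True)
cross_attn = torch.nn.MultiheadAttention(C, num_heads=4, batch_first=True)

# Two sets of features at two scale resolutions (the second is the lower resolution).
feats_hi = torch.rand(1, H * W, C)                     # first set, first scale
feats_lo = torch.rand(1, (H // 2) * (W // 2), C)       # second set, lower scale

# Self-attention over the low-resolution features; add the result back to them.
attn_out, _ = self_attn(feats_lo, feats_lo, feats_lo)
first_out = feats_lo + attn_out                        # first feature-extractor output

# Up-sample the first output to the high-resolution grid, then add the first set of features.
grid = first_out.transpose(1, 2).reshape(1, C, H // 2, W // 2)
second_out = F.interpolate(grid, size=(H, W), mode="bilinear", align_corners=False)
second_out = second_out.flatten(2).transpose(1, 2)     # second output, back to (1, H*W, C)
third_out = second_out + feats_hi                      # third feature-extractor output

# Cross-attention between the instance queries and the first output yields per-query weights.
queries = torch.rand(1, n_queries, C)
weights, _ = cross_attn(queries, first_out, first_out) # (1, n_queries, C)

# Matrix-multiplying the weights with the third output gives the instance masks.
masks = (weights @ third_out.transpose(1, 2)).reshape(1, n_queries, H, W)
print(masks.shape)
```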
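
Patent 12141981 (published as 20230306600) fuses depth features into image features with a cross-attention transformer before predicting segmentation masks. The sketch below assumes toy convolutional encoders, a single attention layer standing in for the "first cross-attention transformer network", and a 1x1-convolution mask head; none of these choices come from the patent itself.

```python
import torch

C, H, W, n_classes = 32, 16, 16, 5

image_encoder = torch.nn.Conv2d(3, C, kernel_size=3, padding=1)
depth_encoder = torch.nn.Conv2d(1, C, kernel_size=3, padding=1)
fusion = torch.nn.MultiheadAttention(C, num_heads=4, batch_first=True)  # cross-attention stand-in
mask_head = torch.nn.Conv2d(C, n_classes, kernel_size=1)

image = torch.rand(1, 3, H, W)      # frame of image data
depth = torch.rand(1, 1, H, W)      # frame of depth data

# Input image features and input depth features, flattened to token sequences.
img_feats = image_encoder(image).flatten(2).transpose(1, 2)    # (1, H*W, C)
dep_feats = depth_encoder(depth).flatten(2).transpose(1, 2)    # (1, H*W, C)

# Fuse the depth features into the image features: image tokens query the depth tokens.
fused, _ = fusion(img_feats, dep_feats, dep_feats)
fused = fused + img_feats                                      # fused image features

# Segmentation masks for the image frame, generated from the fused features.
fused_map = fused.transpose(1, 2).reshape(1, C, H, W)
masks = mask_head(fused_map)                                   # (1, n_classes, H, W)
print(masks.argmax(dim=1).shape)                               # per-pixel class labels
```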
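
Publications 20240233140 and 20240135549 share an abstract describing per-object mask features, identifications applied to each frame, and tracking of those objects into consecutive frames. A minimal sketch, assuming a toy encoder, cosine-similarity matching between mask features, and a crude "adjustment" that stamps each object's identification onto its masked pixels:

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)   # stand-in feature extractor

def mask_feature(frame, mask):
    """Pool encoder features under one object's mask into a single descriptor."""
    feats = encoder(frame.unsqueeze(0)).squeeze(0)                    # (16, H, W)
    m = mask.float()
    return (feats * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)       # (16,)

def apply_masks(frame, masks_with_ids):
    """Toy 'adjustment': stamp each object's identification onto its masked pixels."""
    out = frame.clone()
    for obj_id, mask in masks_with_ids:
        out[:, mask.bool()] = float(obj_id)
    return out

# First frame: one initial mask per object, each paired with an identification.
first = torch.rand(3, 32, 32)
m0 = torch.zeros(32, 32)
m0[4:12, 4:12] = 1
m1 = torch.zeros(32, 32)
m1[20:28, 20:28] = 1
tracks = {0: mask_feature(first, m0), 1: mask_feature(first, m1)}
adjusted_first = apply_masks(first, [(0, m0), (1, m1)])              # output the adjusted first frame

# Consecutive frame: match each newly masked feature to the closest existing identification.
nxt = torch.rand(3, 32, 32)
new_masks = [m0.roll(2, dims=1), m1.roll(-2, dims=1)]                # the objects moved slightly
assignments = []
for mask in new_masks:
    feat = mask_feature(nxt, mask)
    scores = {i: F.cosine_similarity(feat, f, dim=0).item() for i, f in tracks.items()}
    assignments.append((max(scores, key=scores.get), mask))          # keep the matching id
adjusted_next = apply_masks(nxt, assignments)                        # output the adjusted consecutive frame
print(adjusted_first.shape, adjusted_next.shape)
```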