Patents by Inventor Zejian YUAN

Zejian YUAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220180543
    Abstract: A method of depth map completion is described. A color map and a sparse depth map of a target scenario can be received. Resolutions of the color map and the sparse depth map are adjusted to generate n pairs of color maps and sparse depth maps of n different resolutions. The n pairs of color maps and sparse depth maps can be processed to generate n prediction result maps using a cascade hourglass network including n levels of hourglass networks. Each of the n pairs is input to a respective one of the n levels to generate a respective one of the n prediction result maps. The n prediction result maps each include a dense depth map of the same resolution as the corresponding pair. A final dense depth map of the target scenario can be generated according to the dense depth maps. (A sketch of such a cascade pipeline appears after this listing.)
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yonggen LING, Wanchao CHI, Chong ZHANG, Shenghao ZHANG, Zhengyou ZHANG, Zejian YUAN, Ang LI, Zidong CAO
  • Publication number: 20220051061
    Abstract: An artificial intelligence-based action recognition method includes: determining, according to video data that includes an interactive object, node sequence information corresponding to the video frames in the video data, where the node sequence information of each video frame includes position information of the nodes in a node sequence, and the nodes in the node sequence are the nodes of the interactive object that move to perform a corresponding interactive action; determining action categories corresponding to the video frames, including determining, according to the node sequence information of N consecutive video frames, the action categories respectively corresponding to those N consecutive frames; and determining, according to the action categories corresponding to the video frames, a target interactive action made by the interactive object in the video data. (A skeleton-based sketch of such a pipeline also appears after this listing.)
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Wanchao CHI, Chong ZHANG, Yonggen LING, Wei LIU, Zhengyou ZHANG, Zejian YUAN, Ziyang SONG, Ziyi YIN
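
The first abstract (publication 20220180543) describes a coarse-to-fine cascade: each hourglass level receives a (color map, sparse depth map) pair at one of n resolutions and predicts a dense depth map at that resolution, and the per-level dense maps are combined into the final dense depth map. The PyTorch sketch below illustrates that data flow only; the tiny hourglass blocks, the number of levels, the upsample-and-add fusion, and all layer sizes are placeholder assumptions, not the patented architecture.

```python
# Minimal sketch of a cascade depth-completion pipeline (illustrative assumptions,
# not the patented implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyHourglass(nn.Module):
    """Illustrative stand-in for one hourglass level: encode, decode, predict depth."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(4, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, color, sparse_depth):
        x = torch.cat([color, sparse_depth], dim=1)  # RGB + sparse depth = 4 channels
        return self.head(self.dec(self.enc(x)))      # dense depth at the input resolution


class CascadeDepthCompletion(nn.Module):
    """n levels, coarse to fine; each level refines the upsampled coarser prediction."""

    def __init__(self, n_levels: int = 3):
        super().__init__()
        self.levels = nn.ModuleList(TinyHourglass() for _ in range(n_levels))

    def forward(self, color, sparse_depth):
        n = len(self.levels)
        preds, prev = [], None
        for i, level in enumerate(self.levels):
            scale = 1 / 2 ** (n - 1 - i)              # coarsest resolution first
            c = F.interpolate(color, scale_factor=scale, mode="bilinear", align_corners=False)
            d = F.interpolate(sparse_depth, scale_factor=scale, mode="nearest")
            pred = level(c, d)                        # dense depth for this (color, sparse) pair
            if prev is not None:                      # fuse with the coarser-level prediction
                pred = pred + F.interpolate(prev, size=pred.shape[-2:], mode="bilinear",
                                            align_corners=False)
            preds.append(pred)
            prev = pred
        return preds[-1], preds                       # final dense depth + per-level maps


color = torch.rand(1, 3, 64, 96)                               # full-resolution color map
sparse = torch.rand(1, 1, 64, 96) * (torch.rand(1, 1, 64, 96) > 0.95)  # ~5% valid depths
final_depth, all_preds = CascadeDepthCompletion()(color, sparse)
print(final_depth.shape)                                       # torch.Size([1, 1, 64, 96])
```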
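
The second abstract (publication 20220051061) follows a common skeleton-based recognition pattern: per-frame node (joint) positions are grouped into windows of N consecutive frames, a per-frame action category is predicted from each window, and the per-frame categories are aggregated into a single target interactive action. The sketch below uses assumed components (a GRU over flattened joint coordinates, majority voting as the aggregation) to show one way such a pipeline can be wired; it is not the patented model.

```python
# Minimal sketch of window-based action recognition from node sequences
# (illustrative assumptions, not the patented method).
import torch
import torch.nn as nn


class WindowActionClassifier(nn.Module):
    """Scores one action category per frame from a window of N frames of joint positions."""

    def __init__(self, num_nodes: int = 17, num_classes: int = 5, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=num_nodes * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, window):                 # window: (batch, N, num_nodes, 2)
        b, n, j, c = window.shape
        feats, _ = self.rnn(window.reshape(b, n, j * c))
        return self.head(feats)                # (batch, N, num_classes): scores per frame


def recognize_target_action(node_sequences, model, window_size=8):
    """node_sequences: (num_frames, num_nodes, 2) joint positions for each video frame."""
    frames = node_sequences.unsqueeze(0)                       # add a batch dimension
    votes = []
    for start in range(0, frames.shape[1] - window_size + 1, window_size):
        window = frames[:, start:start + window_size]          # N consecutive frames
        logits = model(window)                                 # per-frame category scores
        votes.append(logits.argmax(dim=-1).flatten())          # per-frame categories
    per_frame = torch.cat(votes)
    return per_frame.mode().values.item()                      # majority vote = target action


model = WindowActionClassifier()
video_nodes = torch.rand(32, 17, 2)             # 32 frames, 17 joints, (x, y) each
print(recognize_target_action(video_nodes, model))
```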