Patents by Inventor Yonghong Tian

Yonghong Tian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230412769
    Abstract: A visual computing system is disclosed. The system includes a front-end device, an edge service, and a cloud service that are communicatively connected. The front-end device is configured to output compressed video data and feature data; the edge service is configured to store the video data, aggregate the feature data, and transmit various types of data and control commands; and the cloud service is configured to store the algorithm models that support various applications and to return a model stream in response to a model query command. This realizes a data transmission architecture in which video, feature, and model streams run in parallel, and a system architecture of end-edge-cloud collaboration.
    Type: Application
    Filed: April 13, 2021
    Publication date: December 21, 2023
    Applicant: PENG CHENG LABORATORY
    Inventors: Wen Gao, Yaowei Wang, Xinbei Bai, Wen Ji, Yonghong Tian
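The abstract above names three parallel streams (video, feature, model) flowing between front-end, edge, and cloud. The following sketch models that flow with hypothetical types; all names and fields here are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of the three parallel streams: names and fields are
# assumptions for illustration, not the patented design.
from dataclasses import dataclass

@dataclass
class VideoStream:    # front-end -> edge: compressed video, stored at the edge
    frames: bytes

@dataclass
class FeatureStream:  # front-end -> edge: feature data, aggregated at the edge
    features: list

@dataclass
class ModelStream:    # cloud -> edge: algorithm model returned for a query
    model_id: str
    weights: bytes

def handle_model_query(registry: dict, model_id: str) -> ModelStream:
    """Cloud-side handler: look up the requested model and return it as a
    model stream, per the query-and-return pattern in the abstract."""
    return ModelStream(model_id=model_id, weights=registry[model_id])

stream = handle_model_query({"detector-v1": b"\x00\x01"}, "detector-v1")
assert stream.weights == b"\x00\x01"
```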
  • Publication number: 20230075664
    Abstract: Disclosed are a method and system for achieving optimal separable convolutions. The method is applied to image analysis and processing and comprises the steps of: inputting an image to be analyzed and processed; calculating three sets of parameters of a separable convolution (the internal number of groups, and the channel size and kernel size of each separated convolution) to achieve an optimal separable convolution process; and performing deep neural network image processing. The implementation of separable convolution in the present disclosure efficiently reduces the computational complexity of deep neural network processing. Compared to FFT and low-rank approximation approaches, the disclosed method and system are efficient for both small and large kernel sizes, do not require a pre-trained model to operate on, and can be deployed in applications where resources are highly constrained.
    Type: Application
    Filed: September 8, 2021
    Publication date: March 9, 2023
    Inventors: Tao WEI, Yonghong TIAN, Yaowei WANG, Yun LIANG, Chang Wen CHEN, Wen GAO
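The abstract above parameterizes a separable convolution by group count, channel size, and kernel size. As a rough illustration of why separating a convolution reduces complexity, the parameter counts of a standard convolution and of a generic depthwise-separable factorization (a textbook special case, not the patented parameter search) compare as follows:

```python
# Parameter-count comparison: standard convolution vs. a depthwise-separable
# factorization. Generic illustration only, not the patented optimization.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard k x k convolution layer."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(128, 128, 3)                   # 147456 parameters
separable = depthwise_separable_params(128, 128, 3)   # 17536 parameters
print(f"standard: {standard}, separable: {separable}, "
      f"reduction: {standard / separable:.1f}x")
```

For a 128-channel 3x3 layer the factorization shrinks the parameter count by roughly 8x, which is the kind of complexity reduction the abstract claims for resource-constrained deployments.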
  • Publication number: 20220164580
    Abstract: Disclosed herein is a method for performing few-shot action classification and localization in untrimmed videos, where novel-class untrimmed testing videos are recognized with only a few trimmed training videos (i.e., few-shot learning), with prior knowledge transferred from non-overlapping base classes where only untrimmed videos and class labels are available (i.e., weak supervision).
    Type: Application
    Filed: November 17, 2021
    Publication date: May 26, 2022
    Inventors: José M.F. Moura, Yixiong Zou, Shanghang Zhang, Guangyao Chen, Yonghong Tian
  • Patent number: 10937132
    Abstract: A spike signal-based display method and a spike signal-based display system are disclosed by the present application. The method includes: analyzing a spike sequence corresponding to a single pixel position to obtain spike-firing information; acquiring respective pixel values corresponding to multiple spike-firing times before a single spike-firing time, and accumulating the pixel values as a first accumulated pixel value; setting a first specific amount corresponding to the single spike-firing time of the pixel position, and summing the first specific amount and the first accumulated pixel value to obtain a first pixel value of the pixel position; comparing the first pixel value with a pixel threshold range, and obtaining a second specific amount based on the first specific amount; and obtaining a second pixel value of the pixel position by summing the first accumulated pixel value and the second specific amount, and generating an image by using the second pixel values.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: March 2, 2021
    Assignee: Peking University
    Inventors: Tiejun Huang, Lin Zhu, Yonghong Tian, Yihua Fu, Jianing Li, Siwei Dong, Yaowei Wang
  • Publication number: 20200226723
    Abstract: A spike signal-based display method and a spike signal-based display system are disclosed by the present application. The method includes: analyzing a spike sequence corresponding to a single pixel position to obtain spike-firing information; acquiring respective pixel values corresponding to multiple spike-firing times before a single spike-firing time, and accumulating the pixel values as a first accumulated pixel value; setting a first specific amount corresponding to the single spike-firing time of the pixel position, and summing the first specific amount and the first accumulated pixel value to obtain a first pixel value of the pixel position; comparing the first pixel value with a pixel threshold range, and obtaining a second specific amount based on the first specific amount; and obtaining a second pixel value of the pixel position by summing the first accumulated pixel value and the second specific amount, and generating an image by using the second pixel values.
    Type: Application
    Filed: October 29, 2019
    Publication date: July 16, 2020
    Applicant: Peking University
    Inventors: Tiejun HUANG, Lin ZHU, Yonghong TIAN, Yihua FU, Jianing LI, Siwei DONG, Yaowei WANG
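The two entries above describe reconstructing pixel values from per-pixel spike sequences. A minimal generic sketch of the underlying idea, assuming an integrate-and-fire pixel that fires whenever accumulated intensity crosses a fixed threshold (so mean intensity is approximately threshold over mean inter-spike interval); this is a simplification for illustration, not the patented accumulate-and-adjust procedure:

```python
# Simplified spike-to-intensity reconstruction: an integrate-and-fire pixel
# fires when its accumulator crosses THRESHOLD, so brighter pixels fire with
# shorter inter-spike intervals. THRESHOLD is an assumed value.
THRESHOLD = 255.0

def reconstruct_pixel(spike_times: list) -> float:
    """Estimate a pixel's mean intensity from its spike-firing times."""
    if len(spike_times) < 2:
        return 0.0  # too few spikes to measure an interval
    intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return THRESHOLD / mean_interval

# A bright pixel fires often (short intervals); a dark one fires rarely.
bright = reconstruct_pixel([0, 2, 4, 6, 8])    # interval 2  -> intensity 127.5
dark = reconstruct_pixel([0, 10, 20, 30])      # interval 10 -> intensity 25.5
```

Applying this estimator at every pixel position yields an image from the spike data, which is the role the patented method's first and second pixel values play.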
  • Patent number: 10390040
    Abstract: Embodiments of the present disclosure provide a method, an apparatus, and a system for deep feature coding and decoding. The method comprises: extracting features of respective video frames; determining types of the features, the types reflecting the time-domain correlation between each feature and a reference feature; encoding the features using predetermined coding patterns matching the types to obtain coded features; and transmitting the coded features to the server so that the server can decode them for a vision analysis task. With the embodiments of the present disclosure, the videos themselves need not be transmitted to the cloud server; instead, the encoded features of the video are transmitted for the vision analysis task, which lowers both the data transmission pressure and the storage pressure at the cloud server compared with the prior art.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: August 20, 2019
    Assignee: PEKING UNIVERSITY
    Inventors: Yonghong Tian, Lin Ding, Tiejun Huang, Wen Gao
  • Publication number: 20180332301
    Abstract: Embodiments of the present disclosure provide a method, an apparatus, and a system for deep feature coding and decoding. The method comprises: extracting features of respective video frames; determining types of the features, the types reflecting the time-domain correlation between each feature and a reference feature; encoding the features using predetermined coding patterns matching the types to obtain coded features; and transmitting the coded features to the server so that the server can decode them for a vision analysis task. With the embodiments of the present disclosure, the videos themselves need not be transmitted to the cloud server; instead, the encoded features of the video are transmitted for the vision analysis task, which lowers both the data transmission pressure and the storage pressure at the cloud server compared with the prior art.
    Type: Application
    Filed: August 30, 2017
    Publication date: November 15, 2018
    Applicant: Peking University
    Inventors: Yonghong TIAN, Lin DING, Tiejun HUANG, Wen GAO
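The two feature-coding entries above type each frame feature by its time-domain correlation with a reference feature and pick a matching coding pattern. A minimal sketch of that idea, using a hypothetical intra/predictive split and a made-up correlation threshold (the actual coding patterns and type criteria are defined by the patent, not reproduced here):

```python
# Hypothetical two-type feature coder: features close to the reference are
# sent as residuals ('P'), others are sent whole ('I'). Threshold and
# similarity measure are illustrative assumptions.

def correlation(feat: list, ref: list) -> float:
    """Similarity in [0, 1]: 1 / (1 + mean absolute difference)."""
    mad = sum(abs(a - b) for a, b in zip(feat, ref)) / len(feat)
    return 1.0 / (1.0 + mad)

def encode_feature(feat: list, ref: list, threshold: float = 0.5):
    """Choose a coding pattern based on correlation with the reference."""
    if ref is not None and correlation(feat, ref) >= threshold:
        return ('P', [a - b for a, b in zip(feat, ref)])  # predictive: residual
    return ('I', list(feat))                              # intra: whole feature

def decode_feature(coded, ref: list) -> list:
    kind, payload = coded
    if kind == 'P':
        return [r + d for r, d in zip(ref, payload)]
    return list(payload)

ref = [10, 20, 30]
coded = encode_feature([11, 20, 31], ref)  # close to ref -> predictive type
assert coded[0] == 'P' and decode_feature(coded, ref) == [11, 20, 31]
```

Only these compact coded features would cross the network, which is how the scheme lowers transmission and cloud-storage pressure relative to shipping the video itself.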
  • Patent number: 9549206
    Abstract: Embodiments of the present invention provide a cloud-computing-based media decoding method and a corresponding decoder, which are easy to use, applicable to media of any form, and have low computational resource requirements. The method includes: extracting representative features from the media code stream to be decoded; searching the cloud, by feature matching against the extracted features, for a media object whose representative features are similar to those of the media code stream to be decoded; and filling, replacing, and improving parts or segments of the media code stream to be decoded with all or parts of the media object.
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: January 17, 2017
    Assignee: PEKING UNIVERSITY
    Inventors: Tiejun Huang, Wen Gao, Yonghong Tian
  • Publication number: 20150131917
    Abstract: Embodiments of the present invention provide a cloud-computing-based media decoding method and a corresponding decoder, which are easy to use, applicable to media of any form, and have low computational resource requirements. The method includes: extracting representative features from the media code stream to be decoded; searching the cloud, by feature matching against the extracted features, for a media object whose representative features are similar to those of the media code stream to be decoded; and filling, replacing, and improving parts or segments of the media code stream to be decoded with all or parts of the media object.
    Type: Application
    Filed: November 6, 2014
    Publication date: May 14, 2015
    Inventors: Tiejun Huang, Wen Gao, Yonghong Tian
  • Patent number: 8750602
    Abstract: Embodiments of the present invention relate to a method and a system for personalized advertisement push based on user interest learning. The method may include: obtaining multiple user interest models through multitask ranking learning; extracting an object of interest from a video according to the user interest models; and extracting multiple visual features of the object of interest and, according to the visual features, retrieving related advertising information from an advertisement database. Through the method and system provided in embodiments of the present invention, a pushed advertisement can be closely relevant to the content of the video, thereby meeting a user's personalized requirements to a certain extent and achieving personalized advertisement push.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: June 10, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jia Li, Yunchao Gao, Haonan Yu, Jun Zhang, Yonghong Tian, Jun Yan