Patents by Inventor XIANTONG ZHEN

XIANTONG ZHEN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960576
    Abstract: Videos captured in low-light conditions can be processed in order to identify an activity being performed in the video. The processing may use both the video and audio streams to identify the activity in the low-light video. The video portion is processed to generate a darkness-aware feature, which may be used to modulate the features generated from the audio and video streams. The audio features may be used to generate a video attention feature, and the video features may be used to generate an audio attention feature. The audio and video attention features may also be used in modulating the audio and video features. The modulated audio and video features may be used to predict an activity occurring in the video.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: April 16, 2024
    Assignee: Inception Institute of Artificial Intelligence Ltd
    Inventors: Yunhua Zhang, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
  • Publication number: 20230039641
    Abstract: Videos captured in low-light conditions can be processed in order to identify an activity being performed in the video. The processing may use both the video and audio streams to identify the activity in the low-light video. The video portion is processed to generate a darkness-aware feature, which may be used to modulate the features generated from the audio and video streams. The audio features may be used to generate a video attention feature, and the video features may be used to generate an audio attention feature. The audio and video attention features may also be used in modulating the audio and video features. The modulated audio and video features may be used to predict an activity occurring in the video.
    Type: Application
    Filed: July 20, 2021
    Publication date: February 9, 2023
    Inventors: Yunhua Zhang, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
  • Publication number: 20220398262
    Abstract: Methods, systems, and techniques for kernel continual learning. A dataset is obtained that corresponds to a classification task. Feature extraction is performed on the dataset using an artificial neural network. A kernel is constructed using features extracted during that feature extraction for use in performing the classification task. More particularly, during training, a coreset dataset corresponding to the classification task is saved, and during subsequent inference, the coreset dataset is retrieved and used to construct a task-specific kernel for classification.
    Type: Application
    Filed: June 13, 2021
    Publication date: December 15, 2022
    Applicant: INCEPTION INSTITUTE OF ARTIFICIAL INTELLIGENCE LIMITED
    Inventors: Mohammad Derakhshani, Xiantong Zhen, Ling Shao, Cees Snoek
  • Patent number: 11126862
    Abstract: The present disclosure provides a dense crowd counting method and apparatus, including: acquiring an image to be detected, where the image to be detected includes images of people; feeding the image to be detected into a convolutional neural network model to obtain a crowd density map of the image; and determining the number of images of people in the image according to the crowd density map. Feature information of the image to be detected can be fully extracted through the above process, improving the performance of crowd counting and density estimation and providing great convenience for subsequent security monitoring, crowd control, and other applications.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: September 21, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xianbin Cao, Xiantong Zhen, Yan Li, Lei Yue, Zehao Xiao, Yutao Hu
  • Patent number: 10719940
    Abstract: The present disclosure provides a target tracking method and device oriented to airborne-based monitoring scenarios. The method includes: obtaining a to-be-tracked video of the target object in real time; extracting a first frame and a second frame; cropping the first frame to derive an image of a first region of interest; cropping the second frame to derive a target template image and an image of a second region of interest; inputting the target template image and the first region-of-interest image into an appearance tracker network to derive an appearance tracking position; inputting the first and second region-of-interest images into a motion tracker network to derive a motion tracking position; and finally inputting the appearance tracking position and the motion tracking position into a deep integration network to derive the final tracking position.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: July 21, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xianbin Cao, Xiantong Zhen, Yan Li, Xiaolong Jiang, Yutao Hu
  • Publication number: 20200074186
    Abstract: The present disclosure provides a dense crowd counting method and apparatus, including: acquiring an image to be detected, where the image to be detected includes images of people; feeding the image to be detected into a convolutional neural network model to obtain a crowd density map of the image; and determining the number of images of people in the image according to the crowd density map. Feature information of the image to be detected can be fully extracted through the above process, improving the performance of crowd counting and density estimation and providing great convenience for subsequent security monitoring, crowd control, and other applications.
    Type: Application
    Filed: January 29, 2019
    Publication date: March 5, 2020
    Inventors: Xianbin Cao, Xiantong Zhen, Yan Li, Lei Yue, Zehao Xiao, Yutao Hu
  • Publication number: 20200051250
    Abstract: The present disclosure provides a target tracking method and device oriented to airborne-based monitoring scenarios. The method includes: obtaining a to-be-tracked video of the target object in real time; extracting a first frame and a second frame; cropping the first frame to derive an image of a first region of interest; cropping the second frame to derive a target template image and an image of a second region of interest; inputting the target template image and the first region-of-interest image into an appearance tracker network to derive an appearance tracking position; inputting the first and second region-of-interest images into a motion tracker network to derive a motion tracking position; and finally inputting the appearance tracking position and the motion tracking position into a deep integration network to derive the final tracking position.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 13, 2020
    Inventors: Xianbin Cao, Xiantong Zhen, Yan Li, Xiaolong Jiang, Yutao Hu
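To give a flavor of the darkness-aware audio-visual idea in patent 11960576 (and publication 20230039641), here is a minimal sketch: a darkness cue is derived from pixel brightness and gates how strongly each modality's attention reshapes the other's features. All function names, the sigmoid gate form, and the softmax attention are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def darkness_feature(frames):
    # Scalar darkness cue from raw pixel intensities (0-255):
    # darker video -> value closer to 1. Illustrative form only.
    return 1.0 - frames.mean() / 255.0

def modulate(video_feat, audio_feat, darkness):
    # Darkness-aware gate controls how strongly cross-modal
    # attention modulates each modality's features.
    gate = 1.0 / (1.0 + np.exp(-4.0 * (darkness - 0.5)))
    video_attention = softmax(audio_feat)   # audio guides attention over video
    audio_attention = softmax(video_feat)   # video guides attention over audio
    video_mod = video_feat * (1.0 + gate * video_attention)
    audio_mod = audio_feat * (1.0 + gate * audio_attention)
    return video_mod, audio_mod
```

In this sketch, the modulated features would then feed a classifier that predicts the activity; the patent's networks learn these mappings rather than using fixed formulas.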
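The coreset-plus-kernel scheme of publication 20220398262 can be sketched as follows: during training a small coreset of extracted features is saved per task, and at inference the coreset is retrieved to build a task-specific kernel classifier. The RBF kernel, ridge solve, and first-k coreset selection here are stand-in assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelContinualLearner:
    """Per-task coresets + task-specific kernels (hypothetical sketch)."""

    def __init__(self, coreset_size=4, gamma=0.5, ridge=1e-3):
        self.coreset_size = coreset_size
        self.gamma = gamma
        self.ridge = ridge
        self.coresets = {}

    def train_task(self, task_id, X, y, extractor):
        # Save a small coreset of extracted features for this task.
        feats = extractor(X)
        k = min(self.coreset_size, len(feats))
        self.coresets[task_id] = (feats[:k], y[:k])

    def predict(self, task_id, X, extractor, n_classes):
        # Retrieve the task's coreset and build a task-specific kernel.
        Xc, yc = self.coresets[task_id]
        Y = np.eye(n_classes)[yc]                       # one-hot targets
        K = rbf_kernel(Xc, Xc, self.gamma)
        alpha = np.linalg.solve(K + self.ridge * np.eye(len(Xc)), Y)
        Kt = rbf_kernel(extractor(X), Xc, self.gamma)
        return (Kt @ alpha).argmax(axis=1)
```

Because only the small coreset is stored per task, memory grows slowly with the number of tasks, which is the practical appeal of the approach.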
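The density-map formulation behind patent 11126862 (and publication 20200074186) rests on one idea: each annotated head contributes one unit of mass to a density map, and the crowd count is the map's integral. The Gaussian construction below is a common way to build such maps; the kernel width and function names are illustrative assumptions, and in the patent a convolutional network predicts the map from the raw image.

```python
import numpy as np

def density_map(head_points, shape, sigma=2.0):
    # One unit of mass per annotated head position, spread by a
    # normalized Gaussian kernel.
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    dm = np.zeros(shape)
    for (py, px) in head_points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        dm += g / g.sum()  # normalize so each head contributes exactly 1
    return dm

def count_from_map(dm):
    # The crowd count is the integral (sum) of the density map.
    return float(dm.sum())
```

Summing a predicted map the same way turns the regression problem (image to density map) into a counting result.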
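Finally, the two-branch tracking pipeline of patent 10719940 (and publication 20200051250) can be caricatured with classical stand-ins: normalized cross-correlation in place of the appearance tracker network, changed-pixel centroids in place of the motion tracker network, and a fixed weighted average in place of the deep integration network. Every function here is an illustrative assumption, not the patented networks.

```python
import numpy as np

def appearance_track(search, template):
    # Naive normalized cross-correlation over all placements:
    # a stand-in for the appearance tracker network.
    th, tw = template.shape
    H, W = search.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            p = search[y:y + th, x:x + tw]
            p = p - p.mean()
            score = (p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-8)
            if score > best:
                best, pos = score, (y, x)
    return pos

def motion_track(prev, curr, thresh=10.0):
    # Centroid of changed pixels: a stand-in for the motion tracker network.
    diff = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    return int(round(ys.mean())), int(round(xs.mean()))

def fuse(appearance_pos, motion_pos, w=0.5):
    # Stand-in for the deep integration network: fixed weighted average.
    if motion_pos is None:
        return appearance_pos
    return tuple(int(round(w * a + (1 - w) * m))
                 for a, m in zip(appearance_pos, motion_pos))
```

The benefit of fusing the two cues is robustness: appearance handles slow drift and clutter, while motion recovers the target when its appearance changes between frames.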