Patents by Inventor Qinpin ZHAO

Qinpin ZHAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11568637
    Abstract: The present disclosure provides a UAV video aesthetic quality evaluation method based on multi-modal deep learning, which establishes a UAV video aesthetic evaluation data set, analyzes the UAV video with a multi-modal neural network, extracts high-dimensional features, and concatenates the extracted features to evaluate the aesthetic quality of the UAV video. The method has four steps: (1) establish a UAV video aesthetic evaluation data set, divided into positive and negative samples according to video shooting quality; (2) use SLAM to recover the UAV's flight trajectory and reconstruct a sparse 3D structure of the scene; (3) use a multi-modal neural network to extract features of the input UAV video on the image branch, motion branch, and structure branch, respectively; and (4) concatenate the features of the branches to obtain the final video aesthetic label and video scene type (an illustrative sketch of such a branch-concatenation network follows the listing).
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: January 31, 2023
    Assignee: Beihang University
    Inventors: Bin Zhou, Qi Kuang, Qinpin Zhao
  • Patent number: 11288884
    Abstract: A method for urban scene reconstruction uses the top view of a scene as prior information to generate an initial UAV flight path, optimizes that path in real time, and achieves 3D reconstruction of the urban scene. The method has four steps: (1) analyze the top view of the scene, obtain the scene layout, and generate an initial UAV path; (2) reconstruct the sparse point cloud of the buildings and estimate building heights along the initial path, combine them with the scene layout to generate a rough scene model, and adjust the initial path height; (3) use the rough scene model, the sparse point cloud, and the UAV flight trajectory to obtain a scene coverage confidence map and the details that need close-ups, and optimize the flight path in real time; and (4) capture high-resolution images and reconstruct them to obtain a 3D model of the scene (an illustrative sketch of the initial-path and coverage-confidence ideas follows the listing).
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: March 29, 2022
    Assignee: Beihang University
    Inventors: Bin Zhou, Qi Kuang, Jinbo Wu, Qinpin Zhao
  • Publication number: 20210158009
    Abstract: A method for urban scene reconstruction uses the top view of a scene as prior information to generate an initial UAV flight path, optimizes that path in real time, and achieves 3D reconstruction of the urban scene. The method has four steps: (1) analyze the top view of the scene, obtain the scene layout, and generate an initial UAV path; (2) reconstruct the sparse point cloud of the buildings and estimate building heights along the initial path, combine them with the scene layout to generate a rough scene model, and adjust the initial path height; (3) use the rough scene model, the sparse point cloud, and the UAV flight trajectory to obtain a scene coverage confidence map and the details that need close-ups, and optimize the flight path in real time; and (4) capture high-resolution images and reconstruct them to obtain a 3D model of the scene.
    Type: Application
    Filed: August 19, 2020
    Publication date: May 27, 2021
    Inventors: Bin ZHOU, Qi KUANG, Jinbo WU, Qinpin ZHAO
  • Publication number: 20210158008
    Abstract: The present disclosure provides a UAV video aesthetic quality evaluation method based on multi-modal deep learning, which establishes a UAV video aesthetic evaluation data set, analyzes the UAV video with a multi-modal neural network, extracts high-dimensional features, and concatenates the extracted features to evaluate the aesthetic quality of the UAV video. The method has four steps: (1) establish a UAV video aesthetic evaluation data set, divided into positive and negative samples according to video shooting quality; (2) use SLAM to recover the UAV's flight trajectory and reconstruct a sparse 3D structure of the scene; (3) use a multi-modal neural network to extract features of the input UAV video on the image branch, motion branch, and structure branch, respectively; and (4) concatenate the features of the branches to obtain the final video aesthetic label and video scene type.
    Type: Application
    Filed: August 19, 2020
    Publication date: May 27, 2021
    Inventors: Bin ZHOU, Qi KUANG, Qinpin ZHAO
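
Patent 11568637 and its publication 20210158008 describe a three-branch network whose per-branch features are concatenated to predict an aesthetic label and a scene type. The following is a minimal PyTorch sketch of that branch-concatenation idea only; the class name MultiModalAestheticNet, the plain linear layers standing in for the real backbones, the feature dimensions, and the number of scene types are illustrative assumptions, not details taken from the patent.

# Hypothetical sketch: three branches (image, motion, structure) whose features are
# concatenated and mapped to an aesthetic score and a scene-type prediction.
# Backbones, feature sizes, and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class MultiModalAestheticNet(nn.Module):
    def __init__(self, feat_dim=256, num_scene_types=6):
        super().__init__()
        # Image branch: per-frame appearance features (stand-in for a CNN backbone).
        self.image_branch = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        # Motion branch: features of the SLAM-recovered flight trajectory.
        self.motion_branch = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())
        # Structure branch: features of the sparse 3D scene reconstruction.
        self.structure_branch = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        # Both heads operate on the concatenated multi-branch feature.
        self.aesthetic_head = nn.Linear(3 * feat_dim, 1)            # positive vs. negative sample
        self.scene_head = nn.Linear(3 * feat_dim, num_scene_types)  # scene type

    def forward(self, image_feat, motion_feat, structure_feat):
        fused = torch.cat([self.image_branch(image_feat),
                           self.motion_branch(motion_feat),
                           self.structure_branch(structure_feat)], dim=-1)
        return torch.sigmoid(self.aesthetic_head(fused)), self.scene_head(fused)

The two heads sharing one concatenated feature mirror the abstract's step four, where a single fused representation yields both the aesthetic label and the scene type.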
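Patent 11288884 and its publication 20210158009 describe generating an initial UAV path from the scene's top view and refining it using a coverage confidence map derived from the sparse reconstruction. The following is a minimal NumPy sketch of those two ideas under stated assumptions; the function names, the lawnmower-style sweep, the grid-based confidence definition, and the saturation constant are hypothetical and not taken from the patent.

# Hypothetical sketch of two pipeline ideas: an initial sweep path over the top-view
# layout, and a coverage confidence map from a sparse point cloud. Grid sizes,
# thresholds, and the confidence definition are illustrative assumptions.
import numpy as np

def initial_path_from_top_view(layout, flight_height):
    """layout: 2D boolean grid from the top view, True where buildings lie.
    Returns an ordered list of (row, col, height) waypoints sweeping the layout."""
    waypoints = []
    for r in range(layout.shape[0]):
        # Alternate sweep direction row by row (lawnmower pattern).
        cols = range(layout.shape[1]) if r % 2 == 0 else reversed(range(layout.shape[1]))
        for c in cols:
            if layout[r, c]:
                waypoints.append((r, c, flight_height))
    return waypoints

def coverage_confidence(points, grid_shape, cell_size, saturation=50):
    """points: (N, 3) sparse point cloud; confidence grows with points per ground cell.
    Low-confidence cells mark details that still need close-up views."""
    conf = np.zeros(grid_shape)
    idx = (points[:, :2] / cell_size).astype(int)
    valid = (idx[:, 0] >= 0) & (idx[:, 0] < grid_shape[0]) & \
            (idx[:, 1] >= 0) & (idx[:, 1] < grid_shape[1])
    np.add.at(conf, (idx[valid, 0], idx[valid, 1]), 1.0)
    return np.clip(conf / saturation, 0.0, 1.0)

In this sketch, cells whose confidence stays well below 1.0 would be the candidates for real-time path refinement and close-up capture described in steps three and four of the abstract.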