Patents by Inventor Qinping Zhao

Qinping Zhao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11974050
    Abstract: The embodiments of the present disclosure disclose a data simulation method and device for an event camera. A specific embodiment of the method includes: decoding a video to be processed to obtain a video frame sequence; inputting a target video frame into a fully convolutional network (UNet) to obtain event camera contrast threshold distribution information; sampling each pixel in the target video frame to obtain an event camera contrast threshold set; processing the contrast threshold set and the video frame sequence to obtain simulated event camera data; performing generative adversarial learning on the simulated event camera data and event camera shooting data to obtain updated contrast threshold distribution information; and generating simulated event camera data. The present disclosure relates to computer vision and can be widely applied in fields such as national defense and military, film and television production, and public security. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: April 30, 2024
    Assignee: Beihang University
    Inventors: Jia Li, Daxin Gu, Yunshan Qi, Qinping Zhao
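The core of the abstract above is sampling a per-pixel contrast threshold and emitting an event wherever the log intensity crosses it. Below is a minimal NumPy sketch of that simulation step, assuming the UNet's predicted distribution is summarized by per-pixel mean and standard deviation maps; `thr_mean` and `thr_std` are stand-ins, and the adversarial update is omitted:

```python
import numpy as np

def simulate_events(frames, thr_mean, thr_std, rng=None):
    """Simulate event-camera output from a video frame sequence.

    frames: (T, H, W) grayscale intensities in [0, 1].
    thr_mean, thr_std: (H, W) per-pixel contrast-threshold statistics
    (stand-ins for the UNet-predicted distribution in the patent).
    Returns a list of (t, y, x, polarity) tuples.
    """
    rng = rng or np.random.default_rng(0)
    # Sample one contrast threshold per pixel from the distribution.
    thresholds = np.maximum(rng.normal(thr_mean, thr_std), 1e-3)
    log_ref = np.log(frames[0] + 1e-6)          # reference log intensity
    events = []
    for t in range(1, frames.shape[0]):
        log_cur = np.log(frames[t] + 1e-6)
        diff = log_cur - log_ref
        for polarity, mask in ((1, diff >= thresholds), (-1, diff <= -thresholds)):
            ys, xs = np.nonzero(mask)
            events.extend((t, y, x, polarity) for y, x in zip(ys, xs))
            log_ref[mask] = log_cur[mask]        # reset reference where an event fired
    return events

frames = np.random.rand(5, 8, 8)
evts = simulate_events(frames, thr_mean=np.full((8, 8), 0.2),
                       thr_std=np.full((8, 8), 0.05))
print(len(evts), "events")
```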
  • Patent number: 11875546
    Abstract: The present disclosure provides a visual perception method and apparatus, a perception network training method and apparatus, a device, and a storage medium. The visual perception method recognizes an acquired image to be perceived with a perception network to determine a perceived target and its pose, and finally determines a control command according to a preset control algorithm and the pose, so as to enable an object to be controlled to determine a processing strategy for the perceived target according to the control command. The perception network training method acquires image data and model data, generates an edited image with a preset editing algorithm according to a 2D image and a 3D model, and finally trains the perception network according to the edited image and its label. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: January 16, 2024
    Assignee: BEIHANG UNIVERSITY
    Inventors: Bin Zhou, Zongdai Liu, Qinping Zhao, Hongyu Wu
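The training method's key ingredient is a "preset editing algorithm" that composites a 3D model rendering into a 2D image and emits a matching label. A toy sketch under that reading; the alpha-compositing choice and all names are assumptions, not the patent's algorithm:

```python
import numpy as np

def paste_render(background, render_rgba, top, left):
    """Toy 'preset editing algorithm': alpha-composite a rendered 3D
    model crop onto a real 2D image and emit a matching label mask.
    """
    h, w = render_rgba.shape[:2]
    edited = background.copy()
    alpha = render_rgba[..., 3:4]                    # (h, w, 1) in [0, 1]
    region = edited[top:top + h, left:left + w]
    edited[top:top + h, left:left + w] = alpha * render_rgba[..., :3] + (1 - alpha) * region
    label = np.zeros(background.shape[:2], dtype=np.int64)
    label[top:top + h, left:left + w] = (alpha[..., 0] > 0.5)  # object pixels -> class 1
    return edited, label

bg = np.random.rand(64, 64, 3)
obj = np.random.rand(16, 16, 4)   # RGBA render of a 3D model crop
img, lbl = paste_render(bg, obj, top=10, left=20)
print(img.shape, int(lbl.sum()), "labeled pixels")
```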
  • Publication number: 20230419001
    Abstract: A three-dimensional fluid reverse modeling method based on physical perception. The method comprises: encoding a fluid surface height-field sequence with a surface velocity field convolutional neural network to obtain a surface velocity field at time t; inputting the surface velocity field into a pre-trained three-dimensional convolutional neural network to obtain a three-dimensional flow field, wherein the three-dimensional flow field includes a velocity field and a pressure field; inputting the surface velocity field into a pre-trained regression network to obtain fluid parameters; and inputting the three-dimensional flow field and the fluid parameters into a physics-based fluid simulator to obtain a time series of the three-dimensional flow field. The method meets the requirements of real-fluid reproduction and physics-based fluid re-editing. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 7, 2023
    Publication date: December 28, 2023
    Inventors: Yang GAO, Xueguang XIE, Fei HOU, Aimin HAO, Qinping ZHAO
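As a structural illustration of the inference pipeline the abstract describes, here is a PyTorch skeleton wiring a height-field-to-surface-velocity network into a network that lifts the surface velocity to a 3D velocity-plus-pressure field. Layer sizes, depth, and channel layouts are placeholders, not the publication's networks:

```python
import torch
import torch.nn as nn

class SurfaceVelocityNet(nn.Module):      # height-field sequence -> surface velocity
    def __init__(self, t=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(t, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))  # (u, v) per pixel
    def forward(self, h):                 # h: (B, T, H, W)
        return self.net(h)

class FlowFieldNet(nn.Module):            # surface velocity -> 3D velocity + pressure
    def __init__(self, depth=8):
        super().__init__()
        self.depth = depth
        self.net = nn.Conv2d(2, depth * 4, 3, padding=1)  # 3 velocity comps + pressure
    def forward(self, v):                 # v: (B, 2, H, W)
        out = self.net(v)
        b, _, hh, ww = out.shape
        return out.view(b, 4, self.depth, hh, ww)  # (B, {u,v,w,p}, D, H, W)

heights = torch.randn(1, 4, 32, 32)       # 4-frame height-field sequence
surface_v = SurfaceVelocityNet()(heights)
flow = FlowFieldNet()(surface_v)
print(surface_v.shape, flow.shape)
```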
  • Publication number: 20220382553
    Abstract: Embodiments of the present disclosure provide a fine-grained image recognition method and apparatus using graph-structured high-order relation discovery. The method includes: inputting an image to be classified into a multi-stage convolutional neural network feature extractor and extracting two layers of network feature maps from the last stage; constructing a hybrid high-order attention module from those feature maps and forming a high-order feature vector pool from its outputs; using each vector in the pool as a node and grouping nodes into representative vector nodes according to the semantic similarity among high-order features; and performing global pooling on the representative vector nodes to obtain classification vectors, from which a fine-grained classification result is obtained through a fully connected layer and a classifier. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 9, 2021
    Publication date: December 1, 2022
    Inventors: JIA LI, YIFAN ZHAO, DINGFENG SHI, QINPING ZHAO
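"High-order" feature interaction is commonly realized as bilinear (outer-product) pooling between two feature maps; the sketch below uses that generic formulation as a stand-in for the patent's hybrid high-order attention module. The 200-class head and all sizes are illustrative:

```python
import torch
import torch.nn as nn

def second_order_pool(f1, f2):
    """Bilinear (outer-product) pooling of two last-stage feature maps:
    one simple form of second-order feature interaction.
    f1: (B, C1, H, W), f2: (B, C2, H, W) -> (B, C1*C2) descriptor.
    """
    b, c1, h, w = f1.shape
    c2 = f2.shape[1]
    f1 = f1.reshape(b, c1, h * w)
    f2 = f2.reshape(b, c2, h * w)
    bilinear = torch.bmm(f1, f2.transpose(1, 2)) / (h * w)   # (B, C1, C2)
    feat = bilinear.reshape(b, c1 * c2)
    return nn.functional.normalize(feat, dim=1)              # L2-normalized descriptor

f_a, f_b = torch.randn(2, 64, 7, 7), torch.randn(2, 32, 7, 7)
descriptor = second_order_pool(f_a, f_b)
logits = nn.Linear(64 * 32, 200)(descriptor)   # e.g. 200 fine-grained classes
print(logits.shape)
```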
  • Publication number: 20220377235
    Abstract: The embodiments of the present disclosure disclose a data simulation method and device for an event camera. A specific embodiment of the method includes: decoding a video to be processed to obtain a video frame sequence; inputting a target video frame into a fully convolutional network (UNet) to obtain event camera contrast threshold distribution information; sampling each pixel in the target video frame to obtain an event camera contrast threshold set; processing the contrast threshold set and the video frame sequence to obtain simulated event camera data; performing generative adversarial learning on the simulated event camera data and event camera shooting data to obtain updated contrast threshold distribution information; and generating simulated event camera data. The present disclosure relates to computer vision and can be widely applied in fields such as national defense and military, film and television production, and public security.
    Type: Application
    Filed: July 15, 2022
    Publication date: November 24, 2022
    Inventors: Jia LI, Daxin GU, Yunshan QI, Qinping ZHAO
  • Patent number: 11461999
    Abstract: The embodiments of this disclosure disclose an image object detection method, device, electronic equipment, and computer-readable medium. A specific mode of carrying out the method includes: performing region segmentation on a target image to obtain at least one image region; performing feature extraction on each image region to obtain at least one feature map; generating a semantic relation graph and a spatial distribution relation graph based on the feature maps and the image regions; generating an image region relation graph based on the semantic relation graph and the spatial distribution relation graph; determining a target image region from the at least one image region based on the image region relation graph; and displaying the target image region. This implementation improves user experience and increases network traffic. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: October 4, 2022
    Assignee: Beihang University
    Inventors: Jia Li, Kui Fu, Kai Mu, Qinping Zhao
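A minimal NumPy sketch of the two relation graphs the abstract names: a semantic graph from region-feature cosine similarity and a spatial graph from box-center distances, fused elementwise into a region relation graph. The particular similarity and fusion formulas are illustrative assumptions:

```python
import numpy as np

def relation_graphs(features, boxes):
    """Build the adjacency matrices the abstract mentions.

    features: (N, D) region features; boxes: (N, 4) as (x1, y1, x2, y2).
    Returns (semantic, spatial, fused) N x N matrices.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    semantic = f @ f.T                                   # cosine similarity
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    spatial = np.exp(-dist / (dist.mean() + 1e-8))       # nearby regions -> high weight
    return semantic, spatial, semantic * spatial          # fused region-relation graph

feats = np.random.rand(5, 128)
bxs = np.random.rand(5, 4) * 100
sem, spa, fused = relation_graphs(feats, bxs)
np.fill_diagonal(fused, 0)                                # ignore self-relations
i, j = np.unravel_index(fused.argmax(), fused.shape)
print(f"strongest relation: region {i} <-> region {j}")
```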
  • Publication number: 20220254031
    Abstract: The embodiments of the present disclosure disclose an edge-guided human eye image analyzing method. A specific implementation of this method comprises: collecting a human eye image as an image to be detected; obtaining a human eye detection contour map; obtaining a semantic segmentation detection map and an initial human eye image detection fitting parameter; performing an iterative search on the initial fitting parameter to determine a target human eye image detection fitting parameter; and sending the semantic segmentation detection map and the target fitting parameter, as image analyzing results, to a display terminal for display. This implementation improves accuracy at the boundary dividing the pupil-iris area and increases the structural integrity of the ellipse resulting from that division. In addition, the iterative search achieves a more accurate ellipse parameter fitting result. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 26, 2022
    Publication date: August 11, 2022
    Inventors: Feng Lu, Yuxin Zhao, Zhimin Wang, Qinping Zhao
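The iterative search over fitting parameters can be illustrated with a simple coordinate-descent loop that perturbs an axis-aligned ellipse and keeps changes that raise IoU with the pupil segmentation mask. The parameterization and scoring below are assumptions; the abstract does not fix them:

```python
import numpy as np

def ellipse_mask(shape, cx, cy, a, b):
    """Boolean mask of an axis-aligned ellipse with center (cx, cy), radii (a, b)."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    return ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0

def refine_ellipse(seg_mask, params, step=1.0, iters=50):
    """Coordinate-descent refinement of (cx, cy, a, b) against a pupil mask."""
    def iou(p):
        e = ellipse_mask(seg_mask.shape, *p)
        return (e & seg_mask).sum() / max((e | seg_mask).sum(), 1)

    best, best_score = list(params), iou(params)
    for _ in range(iters):
        improved = False
        for i in range(4):                 # try nudging each parameter both ways
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                score = iou(cand)
                if score > best_score:
                    best, best_score, improved = cand, score, True
        if not improved:
            break
    return best, best_score

truth = ellipse_mask((64, 64), 30, 32, 10, 8)       # synthetic pupil mask
fit, score = refine_ellipse(truth, (25, 28, 8, 8))  # deliberately-off initial guess
print(fit, round(score, 3))
```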
  • Patent number: 11361590
    Abstract: This disclosure provides a method and an apparatus for monitoring a working state, which automatically collect an image of a staff member in real time, determine point-of-gaze information of the staff member based on the collected image, and further determine the working state of the staff member according to the point-of-gaze information. Since the whole process does not require the staff member's participation, their normal work is not disturbed. Moreover, the accuracy of working-state monitoring is improved by avoiding the influence of the subjective factors that staff participation would introduce into the assessment result. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 14, 2022
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Qinping Zhao
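A deliberately simple decision rule in the spirit of the abstract: classify the working state from the fraction of recent gaze points that fall inside a work region. The threshold, region, and labels are invented for illustration; the patent does not publish its exact rule:

```python
import numpy as np

def working_state(gaze_points, work_region, min_ratio=0.6):
    """Classify a worker as 'working' if enough recent gaze points fall
    inside the work area.

    gaze_points: (N, 2) screen coordinates; work_region: (x1, y1, x2, y2).
    """
    x1, y1, x2, y2 = work_region
    inside = ((gaze_points[:, 0] >= x1) & (gaze_points[:, 0] <= x2) &
              (gaze_points[:, 1] >= y1) & (gaze_points[:, 1] <= y2))
    ratio = inside.mean()
    return ("working" if ratio >= min_ratio else "distracted"), ratio

pts = np.random.rand(100, 2) * [1920, 1080]   # simulated gaze trace on a screen
print(working_state(pts, (0, 0, 960, 1080)))  # left half of the screen as work area
```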
  • Publication number: 20220179485
    Abstract: The present application provides a gaze point estimation method, device, and electronic device. The method includes: acquiring user image data; acquiring a facial feature vector according to a preset first convolutional neural network and the facial image; acquiring a position feature vector according to a preset first fully connected network and the position data; acquiring a binocular fusion feature vector according to a preset eye feature fusion network, the left-eye image, and the right-eye image; and acquiring position information about the user's gaze point according to a preset second fully connected network, the facial feature vector, the position feature vector, and the binocular fusion feature vector. In this technical solution, the relation between eye images and face images is exploited to achieve accurate gaze point estimation. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 8, 2021
    Publication date: June 9, 2022
    Inventors: FENG LU, YIWEI BAO, QINPING ZHAO
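A minimal PyTorch wiring of the branches the abstract names: a face CNN, a position MLP, a two-eye fusion CNN, and a final fully connected head regressing the 2D gaze point. Every layer size here is a placeholder, not the application's architecture:

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Face branch + position branch + binocular fusion branch -> gaze point."""
    def __init__(self):
        super().__init__()
        self.face = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 8 dims
        self.pos = nn.Sequential(nn.Linear(4, 8), nn.ReLU())               # -> 8 dims
        self.eye = nn.Sequential(nn.Conv2d(6, 8, 3, stride=2), nn.ReLU(),  # L+R stacked
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())    # -> 8 dims
        self.head = nn.Linear(8 + 8 + 8, 2)                                # (x, y)

    def forward(self, face, position, left_eye, right_eye):
        fused_eyes = self.eye(torch.cat([left_eye, right_eye], dim=1))
        z = torch.cat([self.face(face), self.pos(position), fused_eyes], dim=1)
        return self.head(z)

net = GazeNet()
xy = net(torch.randn(1, 3, 64, 64), torch.randn(1, 4),
         torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32))
print(xy.shape)   # torch.Size([1, 2])
```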
  • Publication number: 20220027657
    Abstract: The embodiments of this disclosure disclose an image object detection method, device, electronic equipment, and computer-readable medium. A specific mode of carrying out the method includes: performing region segmentation on a target image to obtain at least one image region; performing feature extraction on each image region to obtain at least one feature map; generating a semantic relation graph and a spatial distribution relation graph based on the feature maps and the image regions; generating an image region relation graph based on the semantic relation graph and the spatial distribution relation graph; determining a target image region from the at least one image region based on the image region relation graph; and displaying the target image region. This implementation improves user experience and increases network traffic.
    Type: Application
    Filed: October 1, 2020
    Publication date: January 27, 2022
    Inventors: Jia LI, Kui FU, Kai MU, Qinping ZHAO
  • Patent number: 11211070
    Abstract: The present disclosure provides a method, device, and system for detecting the working state of a tower controller. The method includes: collecting voice data of the tower controller and extracting a keyword from the voice data; acquiring a video image of the tower controller and acquiring a gaze area of the tower controller from the video image; and detecting, according to the gaze area and the keyword, whether the tower controller has correctly accomplished an observation action. The present disclosure implements more efficient and accurate detection of the tower controller's working state, while ensuring the safety of aircraft in the airport area and reducing the risk of collision with other obstacles. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: December 28, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Ze Yang, Qinping Zhao
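The core rule, keyword-conditioned gaze checking, can be sketched as a lookup from spoken keywords to required gaze areas; the mapping and area names below are invented for illustration:

```python
# Map spoken keywords to the gaze area the controller should observe.
# Keywords and zone names here are illustrative, not from the patent.
REQUIRED_AREA = {"runway 05": "RWY05_ZONE", "taxiway b": "TWY_B_ZONE"}

def observation_ok(keywords, gaze_areas):
    """Check that, for each keyword heard in the window, the controller's
    gaze visited the matching area.

    keywords: spoken keywords in the window; gaze_areas: set of areas gazed at.
    """
    missing = [k for k in keywords
               if REQUIRED_AREA.get(k) and REQUIRED_AREA[k] not in gaze_areas]
    return (not missing), missing

ok, missed = observation_ok(["runway 05"], {"RWY05_ZONE", "APRON"})
print(ok, missed)    # True []
```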
  • Publication number: 20210387646
    Abstract: The present disclosure provides a visual perception method and apparatus, a perception network training method and apparatus, a device, and a storage medium. The visual perception method recognizes an acquired image to be perceived with a perception network to determine a perceived target and its pose, and finally determines a control command according to a preset control algorithm and the pose, so as to enable an object to be controlled to determine a processing strategy for the perceived target according to the control command. The perception network training method acquires image data and model data, generates an edited image with a preset editing algorithm according to a 2D image and a 3D model, and finally trains the perception network according to the edited image and its label.
    Type: Application
    Filed: March 11, 2021
    Publication date: December 16, 2021
    Inventors: Bin ZHOU, Zongdai LIU, Qinping ZHAO, Hongyu WU
  • Patent number: 11151725
    Abstract: An image salient object segmentation method and apparatus based on reciprocal attention between a foreground and a background. The method includes: obtaining a feature map corresponding to a training image based on a convolutional neural backbone network, and obtaining foreground and background initial feature responses according to the feature map; obtaining a reciprocal attention weight matrix and updating the foreground and background initial feature responses according to it, to obtain foreground and background feature maps; training the convolutional neural backbone network according to the foreground and background feature maps based on a cross-entropy loss function and a cooperative loss function, to obtain a foreground and background segmentation convolutional neural network model; and inputting an image to be segmented into that model to obtain foreground and background prediction results. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: October 19, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Changqun Xia, Jinming Su, Qinping Zhao
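One standard way to realize reciprocal attention between foreground and background responses is a shared affinity matrix through which each stream attends to the other. The sketch below uses that formulation, which may differ from the patent's exact update rule:

```python
import torch

def reciprocal_attention(fg, bg):
    """Update foreground and background responses through a shared
    attention weight matrix computed between the two streams.

    fg, bg: (B, C, H, W) initial feature responses.
    """
    b, c, h, w = fg.shape
    f = fg.reshape(b, c, h * w)
    g = bg.reshape(b, c, h * w)
    # (B, HW, HW) affinity between every foreground and background location.
    attn = torch.softmax(torch.bmm(f.transpose(1, 2), g), dim=-1)
    fg_new = fg + torch.bmm(g, attn.transpose(1, 2)).reshape(b, c, h, w)
    bg_new = bg + torch.bmm(f, attn).reshape(b, c, h, w)
    return fg_new, bg_new

fg0, bg0 = torch.randn(1, 16, 14, 14), torch.randn(1, 16, 14, 14)
fg1, bg1 = reciprocal_attention(fg0, bg0)
print(fg1.shape, bg1.shape)
```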
  • Patent number: 10970909
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for eye movement synthesis, the method including: obtaining eye movement feature data and speech feature data, wherein the eye movement feature data reflects an eye movement behavior and the speech feature data reflects a voice feature; obtaining a driving model according to the eye movement feature data and the speech feature data, wherein the driving model is configured to indicate an association between the two; and synthesizing an eye movement of a virtual human according to speech input data and the driving model, and controlling the virtual human to exhibit the synthesized eye movement. The embodiment enables the virtual human to exhibit an eye movement corresponding to the voice data, thereby improving the authenticity of the interaction. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 6, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Qinping Zhao
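As a toy stand-in for the "driving model" associating speech features with eye-movement features, the sketch below stores paired features and synthesizes by nearest-neighbor lookup; the patent's actual model is not specified in the abstract, and all feature shapes are assumptions:

```python
import numpy as np

def build_driving_model(speech_feats, eye_feats):
    """Store paired speech/eye features; at synthesis time, return the
    eye-movement feature of the nearest stored speech feature.
    """
    def synthesize(query):
        d = np.linalg.norm(speech_feats - query, axis=1)
        return eye_feats[d.argmin()]
    return synthesize

speech = np.random.rand(200, 13)     # e.g. MFCC-like frames
eyes = np.random.rand(200, 3)        # e.g. (yaw, pitch, blink) per frame
drive = build_driving_model(speech, eyes)
print(drive(np.random.rand(13)))     # eye-movement parameters for new speech
```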
  • Patent number: 10963597
    Abstract: Disclosed are a method and apparatus for adaptively constructing a three-dimensional indoor scenario, the method including: establishing an object association map for different scenario categories according to an annotated indoor layout; selecting a corresponding target indoor object according to room information inputted by a user and the object association map; generating a target indoor layout according to preset room parameters inputted by the user and the annotated indoor layout; and constructing a three-dimensional indoor scenario according to the target indoor object and the target indoor layout. The disclosed method and apparatus help improve the efficiency of constructing three-dimensional scenarios. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: March 30, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
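The object association map can be illustrated as per-room-type co-occurrence counts mined from annotated layouts, from which target objects are selected; the categories and counts below are invented for illustration:

```python
from collections import Counter

# Toy object-association map: object co-occurrence counts per room type,
# as might be mined from annotated indoor layouts.
ASSOCIATION = {
    "bedroom": Counter({"bed": 40, "wardrobe": 31, "nightstand": 28, "desk": 12}),
    "office":  Counter({"desk": 45, "chair": 44, "bookshelf": 20, "sofa": 6}),
}

def select_objects(room_type, k=3):
    """Return the k objects most strongly associated with the room type."""
    return [name for name, _ in ASSOCIATION[room_type].most_common(k)]

print(select_objects("bedroom"))   # ['bed', 'wardrobe', 'nightstand']
```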
  • Publication number: 20210034845
    Abstract: This disclosure provides a method and an apparatus for monitoring a working state, which automatically collect an image of a staff member in real time, determine point-of-gaze information of the staff member based on the collected image, and further determine the working state of the staff member according to the point-of-gaze information. Since the whole process does not require the staff member's participation, their normal work is not disturbed. Moreover, the accuracy of working-state monitoring is improved by avoiding the influence of the subjective factors that staff participation would introduce into the assessment result.
    Type: Application
    Filed: December 27, 2019
    Publication date: February 4, 2021
    Inventors: Feng Lu, Qinping Zhao
  • Publication number: 20200372660
    Abstract: An image salient object segmentation method and apparatus based on reciprocal attention between a foreground and a background. The method includes: obtaining a feature map corresponding to a training image based on a convolutional neural backbone network, and obtaining foreground and background initial feature responses according to the feature map; obtaining a reciprocal attention weight matrix and updating the foreground and background initial feature responses according to it, to obtain foreground and background feature maps; training the convolutional neural backbone network according to the foreground and background feature maps based on a cross-entropy loss function and a cooperative loss function, to obtain a foreground and background segmentation convolutional neural network model; and inputting an image to be segmented into that model to obtain foreground and background prediction results.
    Type: Application
    Filed: September 24, 2019
    Publication date: November 26, 2020
    Inventors: Jia LI, Changqun XIA, Jinming SU, Qinping ZHAO
  • Publication number: 20200357162
    Abstract: The present disclosure provides a modeling method, apparatus, device, and storage medium for a dynamic cardiovascular system. The method includes: obtaining CMR data and CCTA data of a patient to be operated on; constructing a dynamic ventricular model of the patient using the CMR data; constructing a dynamic heart model of the patient according to the dynamic ventricular model and a preset heart model; constructing a coronary artery model of the patient using the CCTA data; and constructing a dynamic cardiovascular system model of the patient according to the dynamic heart model and the coronary artery model, thereby enabling construction of personalized dynamic cardiovascular system models for different patients. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: May 8, 2020
    Publication date: November 12, 2020
    Inventors: SHUAI LI, AIMIN HAO, QINPING ZHAO
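Since the abstract publishes only the data flow, the sketch below is purely structural: typed stubs showing how CMR-derived ventricle frames, a heart template, and a CCTA-derived coronary tree would compose into a patient-specific dynamic model. Every type and function here is a placeholder, not the disclosed algorithms:

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    name: str

def ventricle_from_cmr(cmr) -> list[Mesh]:          # dynamic ventricular model
    return [Mesh(f"ventricle_t{t}") for t in range(len(cmr))]

def fuse_with_template(ventricles, template: Mesh) -> list[Mesh]:
    return [Mesh(f"heart_{v.name}") for v in ventricles]   # dynamic heart model

def coronary_from_ccta(ccta) -> Mesh:
    return Mesh("coronary_tree")

def assemble(hearts, coronary) -> list[Mesh]:       # patient-specific system
    return [Mesh(f"{h.name}+{coronary.name}") for h in hearts]

cmr_frames, ccta_volume = [0, 1, 2], object()       # placeholder inputs
system = assemble(fuse_with_template(ventricle_from_cmr(cmr_frames), Mesh("template")),
                  coronary_from_ccta(ccta_volume))
print([m.name for m in system])
```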
  • Publication number: 20200349750
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for eye movement synthesis, the method including: obtaining eye movement feature data and speech feature data, wherein the eye movement feature data reflects an eye movement behavior and the speech feature data reflects a voice feature; obtaining a driving model according to the eye movement feature data and the speech feature data, wherein the driving model is configured to indicate an association between the two; and synthesizing an eye movement of a virtual human according to speech input data and the driving model, and controlling the virtual human to exhibit the synthesized eye movement. The embodiment enables the virtual human to exhibit an eye movement corresponding to the voice data, thereby improving the authenticity of the interaction.
    Type: Application
    Filed: December 23, 2019
    Publication date: November 5, 2020
    Inventors: FENG LU, QINPING ZHAO
  • Patent number: 10740897
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method including: acquiring three-dimensional feature information of a target two-dimensional image; and performing component-level semantic segmentation on the target two-dimensional image according to its three-dimensional feature information and its two-dimensional feature information. In the technical solution of the present application, both the two-dimensional and the three-dimensional feature information of the image are taken into consideration when performing component-level semantic segmentation, thereby improving its accuracy. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: August 11, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jia Li, Yafei Song, Yifan Zhao, Qinping Zhao
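A minimal PyTorch rendering of the abstract's idea: extract 2D appearance features and features from a 3D cue (a depth map stands in here), fuse them, and predict per-pixel component labels. Channel sizes and concatenation-based fusion are illustrative choices, not the patented architecture:

```python
import torch
import torch.nn as nn

class EmbedSeg(nn.Module):
    """Fuse 2D appearance features with features from a 3D cue before
    the component-level segmentation head.
    """
    def __init__(self, n_parts=10):
        super().__init__()
        self.feat2d = nn.Conv2d(3, 16, 3, padding=1)    # 2D appearance features
        self.feat3d = nn.Conv2d(1, 16, 3, padding=1)    # features from a depth map
        self.head = nn.Conv2d(32, n_parts, 1)           # per-pixel part logits

    def forward(self, image, depth):
        fused = torch.cat([self.feat2d(image), self.feat3d(depth)], dim=1)
        return self.head(fused)                         # (B, n_parts, H, W)

net = EmbedSeg()
logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.argmax(1).shape)   # per-pixel component labels
```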