Patents by Inventor Qinping Zhao

Qinping Zhao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10970909
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for eye movement synthesis, the method including: obtaining eye movement feature data and speech feature data, wherein the eye movement feature data reflects an eye movement behavior, and the speech feature data reflects a voice feature; obtaining a driving model according to the eye movement feature data and the speech feature data, wherein the driving model is configured to indicate an association between the eye movement feature data and the speech feature data; and synthesizing an eye movement of a virtual human according to speech input data and the driving model, and controlling the virtual human to exhibit the synthesized eye movement. The embodiment enables the virtual human to exhibit an eye movement corresponding to the voice data according to the eye movement feature data and the speech feature data, thereby improving the authenticity of the interaction.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 6, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Qinping Zhao
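As a rough illustration of the idea (not the patented method), the "driving model" above can be read as a learned mapping from speech features to eye movement features. The sketch below fits a least-squares mapping on hypothetical paired feature vectors; all dimensions, feature names, and the linear form itself are assumptions for illustration only.

```python
import numpy as np

# Toy training data: each row pairs a speech feature vector with the
# eye movement feature vector observed at the same time (both hypothetical).
rng = np.random.default_rng(0)
speech_feats = rng.normal(size=(200, 8))   # e.g. pitch/energy descriptors
true_map = rng.normal(size=(8, 3))
eye_feats = speech_feats @ true_map        # e.g. gaze direction, blink rate

# "Driving model": a least-squares mapping from speech to eye movement features.
W, *_ = np.linalg.lstsq(speech_feats, eye_feats, rcond=None)

def synthesize_eye_movement(speech_input):
    """Drive the virtual human's eye movement from new speech input."""
    return speech_input @ W

new_speech = rng.normal(size=(1, 8))
print(synthesize_eye_movement(new_speech).shape)  # (1, 3)
```

In the actual disclosure the driving model is learned from recorded eye movement and speech feature data; the linear map here merely stands in for that association.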
  • Patent number: 10963597
    Abstract: Disclosed are a method and apparatus for adaptively constructing a three-dimensional indoor scenario, the method including: establishing an object association map corresponding to different scenario categories according to an annotated indoor layout; selecting a corresponding target indoor object according to room information inputted by a user and the object association map; generating a target indoor layout according to preset room parameters inputted by the user and the annotated indoor layout; and constructing a three-dimensional indoor scenario according to the target indoor object and the target indoor layout. The disclosed method and apparatus help improve the efficiency of constructing the three-dimensional scenario.
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: March 30, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
  • Publication number: 20210034845
    Abstract: This disclosure provides a method and an apparatus for monitoring a working state, which automatically collect an image of a staff member in real time, determine point-of-gaze information of the staff member based on the collected image, and further determine the working state of the staff member according to the point-of-gaze information. Since the whole process does not require the staff member's participation, their normal work is not disturbed. Moreover, the accuracy of working-state monitoring is improved by avoiding the influence of subjective factors on the assessment result that arises when staff participation is involved.
    Type: Application
    Filed: December 27, 2019
    Publication date: February 4, 2021
    Inventors: Feng Lu, Qinping Zhao
  • Publication number: 20200372660
    Abstract: An image salient object segmentation method and an apparatus based on reciprocal attention between a foreground and a background are provided, and the method includes: obtaining a feature map corresponding to a training image based on a convolutional neural backbone network, and obtaining foreground and background initial feature responses according to the feature map; obtaining a reciprocal attention weight matrix, and updating the foreground and background initial feature responses according to the reciprocal attention weight matrix, to obtain foreground and background feature maps; training the convolutional neural backbone network according to the foreground and background feature maps based on a cross entropy loss function and a cooperative loss function, to obtain a foreground and background segmentation convolutional neural network model; and inputting an image to be segmented into the foreground and background segmentation convolutional neural network model to obtain foreground and background prediction result
    Type: Application
    Filed: September 24, 2019
    Publication date: November 26, 2020
    Inventors: Jia LI, Changqun XIA, Jinming SU, Qinping ZHAO
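A minimal sketch of one plausible form of the reciprocal attention step described above (hypothetical; the disclosure does not specify this exact formulation): each branch's response is refined by attending to the other branch through a shared cross-affinity matrix, so the foreground reads background context and vice versa.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reciprocal_attention(fg, bg):
    """fg, bg: (N, C) initial foreground/background responses over N locations.
    Each branch is updated by attending to the other via a shared affinity."""
    affinity = fg @ bg.T                              # (N, N) cross-branch similarity
    fg_new = fg + softmax(affinity, axis=1) @ bg      # foreground reads background
    bg_new = bg + softmax(affinity.T, axis=1) @ fg    # background reads foreground
    return fg_new, bg_new
```

In the patented pipeline these updated responses would then feed the cross-entropy and cooperative losses; here the function only shows the weight-matrix update itself.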
  • Publication number: 20200357162
    Abstract: The present disclosure provides a modeling method, apparatus, device and storage medium for a dynamic cardiovascular system. The method includes: obtaining CMR data and CCTA data of a patient to be operated on; constructing a dynamic ventricular model of the patient using the CMR data; constructing a dynamic heart model of the patient according to the dynamic ventricular model and a preset heart model; constructing a coronary artery model of the patient using the CCTA data; and constructing a dynamic cardiovascular system model of the patient according to the dynamic heart model and the coronary artery model, thereby constructing a personalized dynamic cardiovascular system model for different patients.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 12, 2020
    Inventors: SHUAI LI, AIMIN HAO, QINPING ZHAO
  • Publication number: 20200349750
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for eye movement synthesis, the method including: obtaining eye movement feature data and speech feature data, wherein the eye movement feature data reflects an eye movement behavior, and the speech feature data reflects a voice feature; obtaining a driving model according to the eye movement feature data and the speech feature data, wherein the driving model is configured to indicate an association between the eye movement feature data and the speech feature data; and synthesizing an eye movement of a virtual human according to speech input data and the driving model, and controlling the virtual human to exhibit the synthesized eye movement. The embodiment enables the virtual human to exhibit an eye movement corresponding to the voice data according to the eye movement feature data and the speech feature data, thereby improving the authenticity of the interaction.
    Type: Application
    Filed: December 23, 2019
    Publication date: November 5, 2020
    Inventors: FENG LU, QINPING ZHAO
  • Patent number: 10740897
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method includes: acquiring three-dimensional feature information of a target two-dimensional image; performing a component-level semantic segmentation on the target two-dimensional image according to the three-dimensional feature information of the target two-dimensional image and two-dimensional feature information of the target two-dimensional image. In the technical solution of the present application, not only the two-dimensional feature information of the image but also the three-dimensional feature information of the image are taken into consideration when performing the component-level semantic segmentation on the image, thereby improving the accuracy of the image component-level semantic segmentation.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: August 11, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jia Li, Yafei Song, Yifan Zhao, Qinping Zhao
  • Publication number: 20200202854
    Abstract: The present disclosure provides a method, device and system for detecting a working state of a tower controller, the method including: collecting voice data of a tower controller, and extracting a keyword from the voice data; acquiring a video image of the tower controller, and acquiring a gaze area of the tower controller from the video image; and analyzing and detecting whether the tower controller has correctly accomplished an observation action according to the gaze area of the tower controller and the keyword. The present disclosure implements more efficient and accurate detection of the working state of the tower controller, and at the same time ensures the safety of an aircraft in an airport area and reduces the risk of collision with other obstacles.
    Type: Application
    Filed: November 26, 2019
    Publication date: June 25, 2020
    Inventors: FENG LU, ZE YANG, QINPING ZHAO
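The core check described above — did the controller look where the spoken keyword requires — can be sketched as a lookup plus a membership test. The keyword-to-area mapping and the area names below are hypothetical stand-ins, not taken from the disclosure.

```python
# Hypothetical mapping from radio keywords to the area the controller
# is expected to observe after uttering them.
KEYWORD_TO_AREA = {
    "cleared to land": "runway_18",
    "pushback": "apron_b",
}

def observation_correct(keyword, gaze_areas):
    """True if the gaze area required by the keyword appears in the
    sequence of gaze areas detected from the video."""
    required = KEYWORD_TO_AREA.get(keyword)
    return required is not None and required in gaze_areas

print(observation_correct("cleared to land", ["tower", "runway_18"]))  # True
```

In practice both inputs would come from upstream models (keyword spotting on the voice data, gaze-area estimation on the video); this sketch only shows the final consistency check.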
  • Patent number: 10672130
    Abstract: Disclosed are a co-segmentation method and apparatus for a three-dimensional model set, the method including: obtaining a super patch set for the three-dimensional model set, which includes at least two three-dimensional models, each of the three-dimensional models including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition, which is position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; and clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 2, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Xiaogang Wang, Zongji Wang, Qinping Zhao
  • Patent number: 10529132
    Abstract: Disclosed is a method for real-time cutting of a digital organ based on a metaball model and a hybrid driving method, including a cutting procedure that drives the model using position-based dynamics and a meshless method, and a cutting mode which begins from a metaball driven by the position-based dynamics, proceeds to a point set driven by the meshless method, and then creates a new metaball. The method includes: a preprocessing procedure which performs an initialization operation while reading a model file; a deforming procedure which drives the model using a method based on the position-based dynamics; a cutting procedure which drives the model using the hybrid driving method and performs cutting using said cutting mode; and a rendering procedure which renders the model during the second and third procedures.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: January 7, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: JunJun Pan, Shizeng Yan, Qinping Zhao
  • Patent number: 10521955
    Abstract: The disclosure provides a posture-guided method and device for combination modeling of cross-category 3D models, the method including: receiving a first posture model inputted by a user; calculating similarities between q first regions of the first posture model and q second regions of each of the second posture models in a preset model database, respectively, where the model database includes a plurality of models partitioned into model components, and second posture models corresponding to the models, where the q first regions correspond to the q second regions one to one, and q is an integer greater than or equal to 2; selecting a corresponding plurality of model components of the q second regions of the second posture model according to the similarities; and combining the selected plurality of model components to generate a 3D model. The embodiment of the present application can combine cross-category models with large structural differences.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: December 31, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
  • Patent number: 10504209
    Abstract: The present invention provides a ranking convolutional neural network constructing method and an image processing method and apparatus thereof. The ranking convolutional neural network includes a ranking layer that is configured to sort the output of the previous layer, generate the output of the ranking layer according to the sorted output, and pass that output to the next layer. Using the ranking convolutional neural network enables obtaining an output feature corresponding to the input feature image through automatic learning. Compared with prior-art methods that obtain features through manual calculation, the method of the present invention is superior in reflecting the objective regularities contained in the patterns of the actual scene. When applied to the field of image processing, the method can significantly improve the effect of image processing.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: December 10, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yafei Song, Jia Li, Qinping Zhao, Xiaogang Wang
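A minimal sketch of what a "ranking layer" could do, assuming it simply replaces each feature vector with its sorted order statistics before passing it onward (the patent's exact formulation may differ):

```python
import numpy as np

def ranking_layer(features):
    """Sort each feature vector in descending order, so the next layer
    sees order statistics rather than position-dependent activations."""
    return -np.sort(-features, axis=-1)

x = np.array([[0.2, 0.9, 0.5]])
print(ranking_layer(x))  # [[0.9 0.5 0.2]]
```

Sorting makes the layer's output invariant to permutations of the input channels, which is one way such a layer could expose learnable regularities independent of channel ordering.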
  • Patent number: 10489914
    Abstract: The present application provides a method and an apparatus for parsing and processing a three-dimensional CAD model, where the method includes: determining three kinds of adjacency relation information for each component in the three-dimensional model; performing aggregation processing on all components of the three-dimensional model, and generating three part hypothesis sets for the three-dimensional model; performing voxelization expression processing on each part hypothesis in each part hypothesis set, and generating voxelization information for each part hypothesis; inputting voxelization information of all part hypotheses in each part hypothesis set into an identification model to obtain a confidence score and a semantic category probability distribution for each part hypothesis; and constructing, according to the confidence score and the semantic category probability distribution of each part hypothesis in the part hypothesis sets, a high-order conditional random field model, and obtaining a semantic ca
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: November 26, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Xiaogang Wang, Bin Zhou, Haiyue Fang, Qinping Zhao
  • Patent number: 10417805
    Abstract: Provided is a method for automatically constructing a behavior-constrained three-dimensional indoor scenario, including: determining a room, a parameter of the room and a first behavioral semantics according to entered information; obtaining a corresponding first object category according to the first behavioral semantics; determining a corresponding first reference layout and a three-dimensional model of objects in the room according to the first object category and the parameter of the room; and generating a three-dimensional indoor scenario in the room according to the first reference layout and the three-dimensional model.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 17, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Bo Gao
  • Patent number: 10402680
    Abstract: A method and an apparatus for extracting a saliency map are provided in the embodiments of the present application, the method including: conducting first convolution processing, first pooling processing and normalization processing on an original image via a prediction model to obtain eye fixation information from the original image, where the eye fixation information is used for indicating a region at which human eyes gaze; conducting second convolution processing and second pooling processing on the original image via the prediction model to obtain semantic description information from the original image; fusing the eye fixation information and the semantic description information via an element-wise summation; and conducting detection processing on the fused eye fixation information and semantic description information via the prediction model to obtain a saliency map for the original image. The method improves the efficiency of extracting the saliency map from an image.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: September 3, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Anlin Zheng, Jia Li, Qinping Zhao, Feng Lu
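The fusion step above is a plain element-wise summation of the two branch outputs; a minimal sketch (the array shapes are hypothetical, and real branch outputs would be CNN feature tensors rather than constant arrays):

```python
import numpy as np

def fuse(eye_fixation_map, semantic_map):
    """Element-wise summation fusion of the two branch outputs; the fused
    tensor is then passed to the detection stage of the prediction model."""
    assert eye_fixation_map.shape == semantic_map.shape
    return eye_fixation_map + semantic_map

a = np.ones((4, 4))
b = np.full((4, 4), 2.0)
print(fuse(a, b)[0, 0])  # 3.0
```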
  • Patent number: 10387748
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing visual attributes and features of salient and non-salient objects using background prior and constructing a quadratic optimization problem; calculating an initial saliency probability map; selecting the most trusted foreground and background seed points; performing manifold-preserving foreground propagation to generate a final foreground probability map; generating a candidate object set for the image via an objectness proposal, using a shape feature, a foregroundness and an attention feature to characterize each candidate object; training the linear exemplar regressors for each training image to characterize a particular saliency pattern of the image; and aggregating a plurality of linear exemplar regressors, calculating saliency values for the candidate object set of a test image, and forming an image salient object segmentation model capable
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 20, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Changqun Xia, Jia Li, Qinping Zhao
  • Patent number: 10354392
    Abstract: The disclosure involves an image-guided video semantic object segmentation method and apparatus, which: locate a target object in a sample image to obtain an object sample; extract candidate regions from each frame; match the multiple candidate regions extracted from each frame with the object sample to obtain a similarity rating for each candidate region; rank the candidate regions by similarity rating to select a predefined number of high-rating candidate regions; preliminarily segment a foreground and a background from the selected high-rating candidate regions; construct an optimization function for the preliminarily segmented foreground and background; solve the optimization function to obtain an optimal candidate region set; and propagate the preliminary foreground segmentation corresponding to the optimal candidate regions to the entire video to obtain a semantic object segmentation of the input video.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: July 16, 2019
    Assignee: Beihang University
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Wei Teng, Haokun Song, Qinping Zhao
  • Patent number: 10275653
    Abstract: Provided are a method and a system for detecting and segmenting primary video objects with neighborhood reversibility, including: dividing each video frame of a video into super pixel blocks; representing each super pixel block with visual features; constructing and training a deep neural network to predict the initial foreground value for each super pixel block in the spatial domain; constructing a neighborhood reversible matrix and transmitting the initial foreground value; constructing an iterative optimization problem and resolving the final foreground value in the temporal-spatial domain; performing pixel-level transformation on the final foreground value; optimizing the final foreground value for each pixel using morphological smoothing operations; and determining whether a pixel belongs to the primary video objects according to the final foreground value.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: April 30, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Xiaowu Chen, Bin Zhou, Qinping Zhao, Changqun Xia, Anlin Zheng, Yu Zhang
  • Publication number: 20190087964
    Abstract: The present application provides a method and an apparatus for parsing and processing a three-dimensional CAD model, where the method includes: determining three kinds of adjacency relation information for each component in the three-dimensional model; performing aggregation processing on all components of the three-dimensional model, and generating three part hypothesis sets for the three-dimensional model; performing voxelization expression processing on each part hypothesis in each part hypothesis set, and generating voxelization information for each part hypothesis; inputting voxelization information of all part hypotheses in each part hypothesis set into an identification model to obtain a confidence score and a semantic category probability distribution for each part hypothesis; and constructing, according to the confidence score and the semantic category probability distribution of each part hypothesis in the part hypothesis sets, a high-order conditional random field model, and obtaining a semantic ca
    Type: Application
    Filed: September 14, 2018
    Publication date: March 21, 2019
    Inventors: XIAOWU CHEN, XIAOGANG WANG, BIN ZHOU, HAIYUE FANG, QINPING ZHAO
  • Publication number: 20190080455
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method includes: acquiring three-dimensional feature information of a target two-dimensional image; performing a component-level semantic segmentation on the target two-dimensional image according to the three-dimensional feature information of the target two-dimensional image and two-dimensional feature information of the target two-dimensional image. In the technical solution of the present application, not only the two-dimensional feature information of the image but also the three-dimensional feature information of the image are taken into consideration when performing the component-level semantic segmentation on the image, thereby improving the accuracy of the image component-level semantic segmentation.
    Type: Application
    Filed: January 15, 2018
    Publication date: March 14, 2019
    Inventors: XIAOWU CHEN, JIA LI, YAFEI SONG, YIFAN ZHAO, QINPING ZHAO