Patents by Inventor Qinping Zhao

Qinping Zhao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10672130
    Abstract: Disclosed is a co-segmentation method and apparatus for a three-dimensional model set, including: obtaining a super patch set for the three-dimensional model set, which includes at least two three-dimensional models, each three-dimensional model including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition, namely position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; and clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set (a minimal code sketch follows this entry).
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 2, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Xiaogang Wang, Zongji Wang, Qinping Zhao
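
A minimal sketch of the clustering step: scikit-learn's standard affinity propagation stands in for the patent's consistency-constrained variant, `superpatch_features` is a hypothetical stand-in for real super-patch descriptors, and the first-predefined-condition constraint is not implemented.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Hypothetical stand-in: 3 models x 40 super patches each, 16-D descriptors.
superpatch_features = rng.normal(size=(120, 16))

# Vanilla affinity propagation; the consistent variant would additionally
# constrain super patches belonging to a common model.
ap = AffinityPropagation(damping=0.9, random_state=0)
labels = ap.fit_predict(superpatch_features)  # one cluster id per super patch

# Super patches sharing a label form one co-segment across the model set.
print(len(ap.cluster_centers_indices_), "co-segments found")
```
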
  • Patent number: 10529132
    Abstract: Disclosed is a method for real-time cutting of a digital organ based on a metaball model and a hybrid driving method, comprising a cutting procedure that drives the model using position-based dynamics and a meshless method, and a cutting mode which begins from a metaball driven by the position-based dynamics, proceeds to a point set driven by the meshless method, and then creates a new metaball. The method includes: a preprocessing procedure which performs an initialization operation while reading a model file; a deforming procedure which drives the model using a method based on the position-based dynamics; a cutting procedure which drives the model using the hybrid driving method and performs cutting using said cutting mode; and a rendering procedure which renders the model during the second and third procedures (a minimal code sketch follows this entry).
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: January 7, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: JunJun Pan, Shizeng Yan, Qinping Zhao
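
A minimal sketch of the position-based-dynamics step that drives the deformable model, assuming simple pairwise distance constraints; the metaball/meshless hybrid and the cutting mode itself are out of scope, and `pbd_step` with all its constants is illustrative.

```python
import numpy as np

def pbd_step(pos, vel, edges, rest_len, dt=0.016, iters=10, gravity=-9.8):
    """One PBD step: predict positions, project constraints, update velocities."""
    pred = pos + dt * vel
    pred[:, 1] += dt * dt * gravity  # external force along the y axis
    for _ in range(iters):  # Gauss-Seidel projection of distance constraints
        for (i, j), r in zip(edges, rest_len):
            d = pred[j] - pred[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - r) * d / dist  # split the correction equally
            pred[i] += corr
            pred[j] -= corr
    vel = (pred - pos) / dt  # velocities follow the corrected positions
    return pred, vel

# Two particles joined by one constraint of rest length 1.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
vel = np.zeros_like(pos)
pos, vel = pbd_step(pos, vel, edges=[(0, 1)], rest_len=[1.0])
print(np.linalg.norm(pos[1] - pos[0]))  # close to the rest length
```
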
  • Patent number: 10521955
    Abstract: Disclosed is a posture-guided method and device for combination modeling of cross-category 3D models, the method including: receiving a first posture model inputted by a user; calculating similarities between q first regions of the first posture model and q second regions of each of the second posture models in a preset model database; where the model database includes a plurality of models partitioned into model components, together with the second posture models corresponding to those models; where the q first regions correspond to the q second regions one to one, and q is an integer greater than or equal to 2; selecting model components corresponding to the q second regions of the second posture models according to the similarities; and combining the selected model components to generate a 3D model. The embodiment of the present application can combine cross-category models with large structural differences (a minimal code sketch follows this entry).
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: December 31, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
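
An illustrative sketch of the region-matching step, assuming each region is summarized by a fixed-length descriptor; `query_regions` and `db_regions` are hypothetical stand-ins, and component retrieval and assembly are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
q, dim, n_models = 4, 32, 10
query_regions = rng.normal(size=(q, dim))          # q first regions
db_regions = rng.normal(size=(n_models, q, dim))   # q second regions per model

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# similarity[m, r] = similarity of region r between the query and model m
similarity = np.array([[cosine(query_regions[r], db_regions[m, r])
                        for r in range(q)] for m in range(n_models)])

# For each region, the component is taken from the best-matching model.
best_model_per_region = similarity.argmax(axis=0)
print(best_model_per_region)  # e.g. [7 2 2 5]: a cross-category combination
```
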
  • Patent number: 10504209
    Abstract: The present invention provides a ranking convolutional neural network constructing method and an image processing method and apparatus thereof. The ranking convolutional neural network includes a ranking layer that is configured to sort the output of the previous layer, generate the ranking layer's output from the sorted values, and pass that output to the next layer. Using the ranking convolutional neural network, an output feature corresponding to the input feature image is obtained through automatic learning. Compared with prior-art methods that obtain features through manual calculation, the method of the present invention better reflects the objective regularities contained in the patterns of the actual scene. When applied to image processing, the method can significantly improve the processing results (a minimal code sketch follows this entry).
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: December 10, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yafei Song, Jia Li, Qinping Zhao, Xiaogang Wang
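
A minimal sketch of what the ranking layer's forward pass could look like: it sorts the previous layer's activations along the channel axis and passes the sorted values on. Gradients, learned parameters and the surrounding convolution/pooling layers are omitted.

```python
import numpy as np

def ranking_layer(x):
    """Sort activations in descending order along the last (channel) axis."""
    return -np.sort(-x, axis=-1)

x = np.array([[0.2, 1.7, -0.4], [3.0, -1.0, 0.5]])
print(ranking_layer(x))  # [[ 1.7  0.2 -0.4], [ 3.   0.5 -1. ]]
```
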
  • Patent number: 10489914
    Abstract: The present application provides a method and an apparatus for parsing and processing a three-dimensional CAD model, where the method includes: determining three kinds of adjacency relation information for each component in the three-dimensional model; performing aggregation processing on all components of the three-dimensional model and generating three part hypothesis sets for the three-dimensional model; performing voxelization expression processing on each part hypothesis in each part hypothesis set and generating voxelization information for each part hypothesis; inputting the voxelization information of all part hypotheses in each part hypothesis set into an identification model to obtain a confidence score and a semantic category probability distribution for each part hypothesis; and constructing, according to the confidence score and the semantic category probability distribution of each part hypothesis in the part hypothesis sets, a high-order conditional random field model, and obtaining a semantic category for each part of the three-dimensional model (a minimal code sketch follows this entry).
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: November 26, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Xiaogang Wang, Bin Zhou, Haiyue Fang, Qinping Zhao
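
A rough sketch of the voxelization step, assuming a part hypothesis is available as a sampled point set; the grid resolution, the sampling, and the `voxelize` helper are illustrative, and the identification network and the conditional random field are out of scope.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Map an (N, 3) point set into an (R, R, R) binary occupancy grid."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = (hi - lo).max() + 1e-9           # isotropic normalization
    idx = ((points - lo) / scale * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Stand-in part hypothesis: 500 surface samples.
points = np.random.default_rng(2).uniform(size=(500, 3))
print(voxelize(points).sum(), "occupied voxels")
```
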
  • Patent number: 10417805
    Abstract: Provided is a method for automatically constructing a behavior-constrained three-dimensional indoor scenario, including: determining a room, a parameter of the room and a first behavioral semantics according to entered information; obtaining a corresponding first object category according to the first behavioral semantics; determining a corresponding first reference layout and a three-dimensional model of objects in the room according to the first object category and the parameter of the room; and generating a three-dimensional indoor scenario in the room according to the first reference layout and the three-dimensional model (a minimal code sketch follows this entry).
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 17, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Bo Gao
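
An illustrative sketch of the first two steps, mapping entered behavioral semantics to the object categories that support them; the `behavior_to_categories` table is a hypothetical stand-in, and layout retrieval and scene generation are not shown.

```python
behavior_to_categories = {          # hypothetical behavior -> category table
    "sleep": ["bed", "nightstand", "wardrobe"],
    "work":  ["desk", "chair", "lamp", "bookshelf"],
    "dine":  ["dining table", "chair", "sideboard"],
}

def categories_for(behaviors):
    """Union of object categories implied by the entered behavioral semantics."""
    cats = []
    for b in behaviors:
        for c in behavior_to_categories.get(b, []):
            if c not in cats:
                cats.append(c)      # keep first-seen order, drop duplicates
    return cats

print(categories_for(["sleep", "work"]))
```
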
  • Patent number: 10402680
    Abstract: A method and an apparatus for extracting a saliency map are provided in the embodiments of the present application. The method includes: conducting first convolution processing, first pooling processing and normalization processing on an original image via a prediction model to obtain eye fixation information from the original image, where the eye fixation information indicates the region at which human eyes gaze; conducting second convolution processing and second pooling processing on the original image via the prediction model to obtain semantic description information from the original image; fusing the eye fixation information and the semantic description information via element-wise summation; and conducting detection processing on the fused eye fixation information and semantic description information via the prediction model to obtain a saliency map for the original image. This improves the efficiency of extracting a saliency map from an image (a minimal code sketch follows this entry).
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: September 3, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Anlin Zheng, Jia Li, Qinping Zhao, Feng Lu
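
A minimal sketch of the fusion step, assuming the two streams produce maps of one shared spatial size; both maps are random stand-ins, and the convolution/pooling streams and the final detection stage are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
fixation = rng.uniform(size=(64, 64))   # where human eyes gaze
semantic = rng.uniform(size=(64, 64))   # what the regions contain

fused = fixation + semantic             # element-wise summation fusion
saliency = fused / fused.max()          # normalize to [0, 1] for display
print(saliency.shape, float(saliency.max()))
```
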
  • Patent number: 10387748
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing the visual attributes and features of salient and non-salient objects using a background prior and constructing a quadratic optimization problem; calculating an initial saliency probability map; selecting the most trusted foreground and background seed points; performing manifold-preserving foreground propagation to generate a final foreground probability map; generating a candidate object set for the image via an objectness proposal method; using a shape feature, a foregroundness feature and an attention feature to characterize each candidate object; training a linear exemplar regressor for each training image to characterize that image's particular saliency pattern; and aggregating the linear exemplar regressors to calculate saliency values for the candidate object set of a test image, thereby forming an image salient object segmentation model (a minimal code sketch follows this entry).
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 20, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Changqun Xia, Jia Li, Qinping Zhao
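
An illustrative sketch of the aggregation idea: one linear regressor per training image, with a test candidate scored by the average of all exemplar predictions. Plain least squares stands in for however each regressor is actually trained, and all features and targets are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_exemplars = 8, 5

# One (candidate features, saliency targets) pair per training image.
exemplars = [(rng.normal(size=(30, dim)), rng.uniform(size=30))
             for _ in range(n_exemplars)]

# Fit a linear regressor (with bias) per exemplar via least squares.
weights = []
for X, y in exemplars:
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    weights.append(w)

def aggregate_score(x):
    """Average the exemplar regressors' predictions for one test candidate."""
    xb = np.append(x, 1.0)
    return float(np.mean([w @ xb for w in weights]))

print(aggregate_score(rng.normal(size=dim)))
```
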
  • Patent number: 10354392
    Abstract: The disclosure involves an image-guided video semantic object segmentation method and apparatus, which: locate a target object in a sample image to obtain an object sample; extract candidate regions from each frame; match the multiple candidate regions extracted from each frame against the object sample to obtain a similarity rating for each candidate region; rank the candidate regions by similarity rating and select a predefined number of the highest-rated candidate regions; preliminarily segment a foreground and a background from the selected high-rating candidate regions; construct an optimization function for the preliminarily segmented foreground and background; solve the optimization function to obtain an optimal candidate region set; and propagate the preliminary foreground segmentation corresponding to the optimal candidate regions to the entire video to obtain a semantic object segmentation of the input video (a minimal code sketch follows this entry).
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: July 16, 2019
    Assignee: Beihang University
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Wei Teng, Haokun Song, Qinping Zhao
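
A rough sketch of the matching and ranking steps, assuming each candidate region and the object sample are summarized by fixed-length descriptors; `top_k_matches` and all sizes are illustrative, and region extraction and the later segmentation stages are out of scope.

```python
import numpy as np

rng = np.random.default_rng(5)
sample = rng.normal(size=64)                 # object sample descriptor
candidates = rng.normal(size=(200, 64))      # candidate regions of one frame

def top_k_matches(sample, candidates, k=10):
    """Rank candidates by cosine similarity to the sample; return the top k."""
    sims = candidates @ sample / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(sample) + 1e-12)
    order = np.argsort(-sims)                # descending similarity
    return order[:k], sims[order[:k]]

idx, scores = top_k_matches(sample, candidates)
print(idx[:3], scores[:3])
```
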
  • Patent number: 10275653
    Abstract: Provided is a method and a system for detecting and segmenting primary video objects with neighborhood reversibility, including: dividing each frame of a video into superpixel blocks; representing each superpixel block with visual features; constructing and training a deep neural network to predict an initial foreground value for each superpixel block in the spatial domain; constructing a neighborhood reversible matrix and propagating the initial foreground values through it; constructing an iterative optimization problem and solving for the final foreground value in the temporal-spatial domain; transforming the final foreground values to the pixel level; optimizing the final per-pixel foreground values using morphological smoothing operations; and determining whether each pixel belongs to the primary video objects according to its final foreground value (a minimal code sketch follows this entry).
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: April 30, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Xiaowu Chen, Bin Zhou, Qinping Zhao, Changqun Xia, Anlin Zheng, Yu Zhang
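
An illustrative sketch of neighborhood reversibility between two frames, under a mutual-k-nearest-neighbor reading: superpixel i (frame t) and superpixel j (frame t+1) are treated as reversible neighbors when each lies among the other's k nearest neighbors. Features are random stand-ins; the deep network and the iterative optimization over the resulting matrix are not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 16))   # superpixel features, frame t
B = rng.normal(size=(60, 16))   # superpixel features, frame t+1

def knn_mask(dist, k):
    """Boolean mask marking each row's k smallest entries."""
    idx = np.argsort(dist, axis=1)[:, :k]
    mask = np.zeros(dist.shape, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)
    return mask

dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (50, 60)
reversible = knn_mask(dist, k=5) & knn_mask(dist.T, k=5).T     # mutual k-NN
print(reversible.sum(), "reversible pairs")
```
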
  • Publication number: 20190018909
    Abstract: Disclosed are a method and apparatus for adaptively constructing a three-dimensional indoor scenario, the method including: establishing an object association map corresponding to different scenario categories according to an annotated indoor layout; selecting a corresponding target indoor object according to room information inputted by a user and the object association map; generating a target indoor layout according to preset room parameters inputted by the user and the annotated indoor layout; and constructing a three-dimensional indoor scenario according to the target indoor object and the target indoor layout. The disclosed method and apparatus help improve the efficiency of constructing three-dimensional scenarios (a minimal code sketch follows this entry).
    Type: Application
    Filed: January 17, 2018
    Publication date: January 17, 2019
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
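
A minimal sketch of the object association map, read here as co-occurrence counts of object categories accumulated from annotated layouts; `annotated_layouts` is a hypothetical stand-in, and layout generation is omitted.

```python
from collections import Counter
from itertools import combinations

annotated_layouts = [                      # hypothetical annotated rooms
    ["bed", "nightstand", "wardrobe", "lamp"],
    ["bed", "nightstand", "desk", "chair"],
    ["desk", "chair", "bookshelf", "lamp"],
]

association = {}                           # category -> Counter of neighbors
for layout in annotated_layouts:
    for a, b in combinations(set(layout), 2):
        association.setdefault(a, Counter())[b] += 1
        association.setdefault(b, Counter())[a] += 1

print(association["bed"].most_common(2))   # e.g. [('nightstand', 2), ...]
```
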
  • Patent number: 10152797
    Abstract: Presently disclosed is a method for co-segmenting three-dimensional models represented by sparse and low-rank features, comprising: pre-segmenting each three-dimensional model of a three-dimensional model class to obtain three-dimensional model patches for each model; constructing a histogram for the patches of each model to obtain a patch feature vector for each model; performing a sparse and low-rank representation on the patch feature vector of each model to obtain a representation coefficient and a representation error for each model; determining a confident representation coefficient for each model according to its representation coefficient and representation error; and clustering the confident representation coefficients to co-segment each three-dimensional model respectively (a minimal code sketch follows this entry).
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: December 11, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Kan Guo, Qinping Zhao
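
A rough sketch of the represent-then-cluster idea only: truncated SVD stands in for the patent's sparse and low-rank representation (which is considerably more involved), and k-means clusters the resulting coefficients; the features are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
F = rng.normal(size=(300, 40))          # patch feature vectors, all models

U, s, Vt = np.linalg.svd(F, full_matrices=False)
rank = 5
coeffs = F @ Vt[:rank].T                # low-rank representation coefficients

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coeffs)
print(np.bincount(labels))              # patches per co-segment
```
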
  • Patent number: 10082868
    Abstract: The invention provides a method for calculating the line-of-sight direction based on analysis and matching of the iris contour in human eye images, including: a data-driven method for stable calculation of the 3D line-of-sight direction, which matches an input human eye image against synthetic data of virtual eyeball appearance; two novel optimization matching criteria for eyeball appearance, which effectively reduce the effects of uncontrollable factors such as image scaling and noise on the results; and a joint optimization method for the case where multiple human eye images are shot continuously, to further improve calculation accuracy. One application of the invention is virtual reality and human-computer interaction, in which eye images of a user are captured and the user's line-of-sight direction is calculated to enable interaction with an intelligent system interface or a virtual object. The invention can be widely used in training, games and entertainment, video surveillance, medical care and other fields.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: September 25, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Feng Lu, Xiaowu Chen, Qinping Zhao
  • Patent number: 10049299
    Abstract: The invention discloses a deep-learning-based method and apparatus for three-dimensional (3D) model triangular facet feature learning and classification. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing the features in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the unlabeled 3D model triangular facet according to the trained deep CNN feature learning model and the corresponding initial feature; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of the feature learning and classification results (a minimal code sketch follows this entry).
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Kan Guo, Dongqing Zou, Qinping Zhao
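
A minimal sketch of the bag-of-words reconstruction step: facet features are quantized against a k-means codebook and a group of facets is described by its normalized codeword histogram. All sizes are illustrative assumptions, and the deep CNN itself is out of scope.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
facet_features = rng.normal(size=(1000, 12))   # one row per triangular facet

codebook = KMeans(n_clusters=16, n_init=10, random_state=0)
words = codebook.fit_predict(facet_features)   # codeword id per facet

def bow_histogram(word_ids, n_words=16):
    """Normalized histogram of codeword assignments for a group of facets."""
    h = np.bincount(word_ids, minlength=n_words).astype(float)
    return h / (h.sum() + 1e-12)

# Bag-of-words descriptor for the first 50 facets (a stand-in neighborhood).
print(bow_histogram(words[:50]))
```
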
  • Patent number: 10049297
    Abstract: The invention provides a data-driven method for transferring indoor scene layout and color style, including: preprocessing images in an indoor image data set, which includes manually labeling semantic and layout information; learning indoor layout and color rules on the data set with learning algorithms; performing object-level semantic segmentation on an input indoor reference image, or performing object-level and component-level segmentations using color segmentation methods, to extract the layout constraints and color constraints of the reference images, and associating the reference images with the indoor 3D scene via the semantic information; constructing a graph model for the indoor reference image scene and the indoor 3D scene to express indoor scene layout and color; performing similarity measurement on the indoor scenes and searching for similar images in the data set to obtain an image sequence with gradient layouts from the reference images to the input 3D scene; and performing image-sequence-guided layout and color transfer.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jianwei Li, Qing Li, Dongqing Zou, Bo Gao, Qinping Zhao
  • Patent number: 10049503
    Abstract: The invention provides a line-guided 3D model reshaping method, including: 1. extracting contours of objects from images and selecting contours or main skeletons to create a 2D line database; 2. extracting a 3D editable line, then retrieving and suggesting an appropriate 2D contour or skeleton from the 2D line database; 3. establishing a point-to-point correspondence by matching the 2D contour or skeleton to the 3D editable line, and reshaping the model using a parametric deformation method. With this method, a 2D contour or skeleton appropriate for the 3D model's editable line is automatically suggested from a 2D line database covering multiple classes of objects to guide reshaping, and fewer user interactions are required to extract editable lines such as axes, cross-sections and outlines from the input 3D model and to produce various types of reshaped models with the parametric deformation method, thereby helping users design the desired 3D model with speed and ease.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Qinping Zhao, Xiaoyu Su
  • Patent number: 9940749
    Abstract: The present invention provides a method and a system for generating a three-dimensional garment model. Garment component composition information and the attribute information of each garment component are acquired by capturing and processing RGBD data of a dressed human body, and a three-dimensional garment component model corresponding to the attribute information of each garment component is then selected from a three-dimensional garment component model library. A three-dimensional garment model can thus be constructed rapidly and automatically from nothing more than RGBD data of a dressed human body, with no human interaction needed during construction. This improves the efficiency of three-dimensional garment modeling and is significant for the development of computer-aided design, three-dimensional garment modeling and virtual garment fitting technology.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: April 10, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Bin Zhou, Qinping Zhao, Feixiang Lu, Lin Wang, Lang Bi
  • Patent number: 9924144
    Abstract: The invention provides a light field illuminating method, device and system. The method includes: determining, based on the position and angle of rays emitted from a projector and the focal length of a lens, the position and angle of the projection rays obtained after the emitted rays are transmitted through a lens array; determining, based on the position and angle of the projection rays and a light probe array of the sampled scene, the brightness value of the projection rays; converting, based on the brightness transfer function of the projector, the brightness value of the projection rays into the pixel values of a projection input image, and generating the projection input image from those pixel values; and performing light field illumination on the target object with the projection input image. A projector and a lens array are adopted to achieve light field illumination, so that pixel-level accurate lighting control can be achieved and various complicated light field environments can be simulated vividly in practical scenarios (a minimal code sketch follows this entry).
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: March 20, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Qinping Zhao, Zhong Zhou, Tao Yu, Chang Xing
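
An illustrative sketch of the final conversion step, assuming, purely for illustration, that the projector's brightness transfer function is a simple gamma curve; the real function would be measured, and the ray-geometry and light-probe steps are not shown.

```python
import numpy as np

GAMMA = 2.2  # hypothetical projector response: brightness = pixel ** GAMMA

def brightness_to_pixel(brightness):
    """Invert the assumed gamma-shaped brightness transfer function."""
    return np.clip(brightness, 0.0, 1.0) ** (1.0 / GAMMA)

target_brightness = np.array([[0.0, 0.25], [0.5, 1.0]])  # from the light probes
pixels = brightness_to_pixel(target_brightness)
print(np.round(pixels, 3))   # pixel values that reproduce the target rays
```
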
  • Patent number: 9916695
    Abstract: The invention provides a structure self-adaptive 3D model editing method, which includes: given a 3D model library, clustering 3D models of the same category according to their structures; learning a design knowledge prior between the components of the 3D models within each group; learning a structure switching rule between 3D models in different groups; and, after the user edits a 3D model component, determining the final group of the model according to the inter-group design knowledge prior and editing the other components of the model according to the intra-group design knowledge prior, so that the model as a whole satisfies the design knowledge priors of its category of 3D models. By editing only a few components, the user obtains a model whose remaining components are optimized automatically, yielding an edited 3D model that satisfies the prior designs of the model library. The invention can be applied to the fields of 3D model editing and construction, computer-aided design, etc.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: March 13, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Qinping Zhao, Xiaoyu Su
  • Patent number: 9740956
    Abstract: The present invention provides a method for object segmentation in videos tagged with semantic labels, including: detecting each frame of a video sequence with an object bounding box detector for a given semantic category and an object contour detector, obtaining a candidate object bounding box set and a candidate object contour set for each frame of the input video; building a joint assignment model over the candidate object bounding box set and the candidate object contour set and solving it to obtain an initial object segment sequence; processing the initial object segment sequence to estimate a probability distribution of the object shapes; and optimizing the initial object segment sequence with a variant of the graph cut algorithm that integrates the shape probability distribution, to obtain an optimal segment sequence (a minimal code sketch follows this entry).
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: August 22, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Qinping Zhao, Chen Wang, Changqun Xia
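
A rough sketch of the shape-distribution step: initial object masks from several frames are resized to a common grid and averaged into a per-pixel shape probability map that a graph-cut-style refinement could then consume; the masks are synthetic stand-ins and the resizing is simple nearest-neighbor.

```python
import numpy as np

def resize_nearest(mask, size=(32, 32)):
    """Nearest-neighbor resize of a binary mask to a common grid."""
    ys = np.linspace(0, mask.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, size[1]).astype(int)
    return mask[np.ix_(ys, xs)]

rng = np.random.default_rng(9)
masks = [(rng.uniform(size=(48, 64)) > 0.5).astype(float) for _ in range(20)]

shape_prob = np.mean([resize_nearest(m) for m in masks], axis=0)
print(shape_prob.shape, float(shape_prob.max()))  # per-pixel shape prior
```
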