Patents by Inventor Qinping Zhao

Qinping Zhao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10049503
    Abstract: The invention provides a line guided 3D model reshaping method, including: 1. extracting a contour of an object from an image, and selecting a contour or main skeleton to create a 2D line database; 2. extracting a 3D editable line, and retrieving and suggesting an appropriate 2D contour or skeleton from the 2D line database; 3. establishing a point-to-point correspondence by matching the 2D contour or skeleton to the 3D editable line, and reshaping the model using a parametric deformation method. With this method, a 2D contour or skeleton appropriate for the 3D model's editable line is automatically suggested from a 2D line database of multiple classes of objects to guide 3D model reshaping, and fewer user interactions are required in extracting editable lines such as axes, cross-sections and outlines from the input 3D model and in producing various types of reshaped models using the parametric deformation method, thereby helping the user design a desirable 3D model quickly and easily.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Qinping Zhao, Xiaoyu Su
  • Patent number: 10049297
    Abstract: The invention provides a data driven method for transferring indoor scene layout and color style, including: preprocessing images in an indoor image data set, which includes manually labeling semantic information and layout information; obtaining indoor layout and color rules on the data set by learning algorithms; performing object-level semantic segmentation on an input indoor reference image, or performing object-level and component-level segmentations using color segmentation methods, to extract layout constraints and color constraints of the reference images, and associating the reference images with the indoor 3D scene via the semantic information; constructing a graph model for the indoor reference image scene and the indoor 3D scene to express indoor scene layout and color; performing similarity measurement on the indoor scene and searching for similar images in the data set to obtain an image sequence with gradient layouts from the reference images to the input 3D scene; and performing image-sequence-guided layout and color transfer.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jianwei Li, Qing Li, Dongqing Zou, Bo Gao, Qinping Zhao
  • Patent number: 10049299
    Abstract: The invention discloses a deep learning based method for three dimensional (3D) model triangular facet feature learning and classifying and an apparatus. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing a feature in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the 3D model triangular facet having no class label according to the trained deep CNN feature learning model and an initial feature corresponding to the 3D model triangular facet having no class label; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of 3D model triangular facet feature learning and classifying results.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Kan Guo, Dongqing Zou, Qinping Zhao
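The bag-of-words reconstruction step in the abstract above can be illustrated with a minimal sketch: learn a small codebook from facet feature vectors with a toy k-means, then re-express features as a normalized codeword histogram. The function names, the codebook size, and the choice of k-means are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def build_codebook(features, k, iters=20, seed=0):
    """Toy k-means: learn a codebook of k codewords from facet feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest codeword, then update centers
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return centers

def bow_reconstruct(features, codebook):
    """Re-express a set of facet features as a normalized histogram over
    nearest codewords (a bag-of-words descriptor)."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

In practice the histogram would be built over a facet's local neighborhood rather than the whole model, but the quantize-and-count structure is the same.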
  • Publication number: 20180211393
    Abstract: The disclosure involves an image guided video semantic object segmentation method and apparatus, which: locate a target object in a sample image to obtain an object sample; extract candidate regions from each frame; match the multiple candidate regions extracted from each frame with the object sample to obtain a similarity rating for each candidate region; rank the candidate regions by similarity rating and select a predefined number of high-rating candidate regions; preliminarily segment a foreground and a background from the selected high-rating candidate regions; construct an optimization function for the preliminarily segmented foreground and background; solve the optimization function to obtain an optimal candidate region set; and propagate the preliminary foreground segmentation corresponding to the optimal candidate regions to the entire video to obtain a semantic object segmentation of the input video.
    Type: Application
    Filed: September 20, 2017
    Publication date: July 26, 2018
    Inventors: XIAOWU CHEN, YU ZHANG, JIA LI, WEI TENG, HAOKUN SONG, QINPING ZHAO
  • Publication number: 20180204088
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing visual attributes and features of salient and non-salient objects using a background prior and constructing a quadratic optimization problem; calculating an initial saliency probability map; selecting the most trusted foreground and background seed points; performing manifold-preserving foreground propagation; generating a final foreground probability map; generating a candidate object set for the image via an objectness proposal; using a shape feature, a foregroundness and an attention feature to characterize each candidate object; training linear exemplar regressors for each training image to characterize a particular saliency pattern of the image; aggregating a plurality of linear exemplar regressors; calculating saliency values for the candidate object set of a test image; and forming an image salient object segmentation model.
    Type: Application
    Filed: October 24, 2017
    Publication date: July 19, 2018
    Inventors: XIAOWU CHEN, CHANGQUN XIA, JIA LI, QINPING ZHAO
  • Publication number: 20180204378
    Abstract: Disclosed is a method for real-time cutting of a digital organ based on a metaball model and a hybrid driving method, including a cutting procedure that drives the model using position-based dynamics and a meshless method, and a cutting mode which begins from a metaball driven by position-based dynamics, proceeds to a point set driven by the meshless method, and then creates a new metaball. The method includes: a preprocessing procedure which performs an initialization operation while reading a model file; a deforming procedure which drives the model using a method based on position-based dynamics; a cutting procedure which drives the model using the hybrid driving method and performs cutting using said cutting mode; and a rendering procedure which renders the model during the second and third procedures.
    Type: Application
    Filed: December 7, 2017
    Publication date: July 19, 2018
    Inventors: JUN PAN, SHIZENG YAN, QINPING ZHAO
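The position-based dynamics driving step named in the abstract above rests on constraint projection. Below is a generic sketch of the standard PBD distance-constraint projection between two particles (a textbook formulation, not the patent's specific solver; parameter names are assumptions):

```python
import numpy as np

def project_distance_constraint(p1, p2, rest_len, w1=1.0, w2=1.0, stiffness=1.0):
    """One PBD projection step: move two particles (inverse masses w1, w2)
    so the distance between them approaches rest_len."""
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist < 1e-12:
        return p1, p2                        # degenerate pair: leave unchanged
    n = delta / dist                         # unit constraint direction
    c = dist - rest_len                      # constraint violation C(p1, p2)
    corr = stiffness * c / (w1 + w2)         # weighted correction magnitude
    return p1 + w1 * corr * n, p2 - w2 * corr * n
```

A PBD solver iterates such projections over all constraints each time step; cutting then amounts to removing or rebuilding constraints around the cut surface.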
  • Publication number: 20180101752
    Abstract: The invention discloses a deep learning based method for three dimensional (3D) model triangular facet feature learning and classifying and an apparatus. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing a feature in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the 3D model triangular facet having no class label according to the trained deep CNN feature learning model and an initial feature corresponding to the 3D model triangular facet having no class label; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of 3D model triangular facet feature learning and classifying results.
    Type: Application
    Filed: February 22, 2017
    Publication date: April 12, 2018
    Inventors: XIAOWU CHEN, KAN GUO, DONGQING ZOU, QINPING ZHAO
  • Patent number: 9940749
    Abstract: The present invention provides a method and a system for generating a three-dimensional garment model, in which garment component composition information and attribute information corresponding to each garment component are acquired by acquiring and processing RGBD data of a dressed human body, and a three-dimensional garment component model corresponding to the attribute information of each garment component is then selected from a three-dimensional garment component model library. That is, a three-dimensional garment model can be constructed rapidly and automatically with only the RGBD data of a dressed human body, and no human interaction is necessary during construction. This improves the efficiency of three-dimensional garment modeling and is significant for the development of computer-aided design, three-dimensional garment modeling and virtual garment fitting technology.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: April 10, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Bin Zhou, Qinping Zhao, Feixiang Lu, Lin Wang, Lang Bi
  • Patent number: 9924144
    Abstract: The invention provides a light field illuminating method, device and system. The method includes: determining, based on the position and angle of rays emitted from a projector and the focal length of a lens, the position and angle of the projection rays obtained after the emitted rays are transmitted through a lens array; determining, based on the position and angle of the projection rays and a light probe array of the sampled scene, the brightness value of the projection rays; converting, based on a brightness transfer function of the projector, the brightness value of the projection rays into pixel values of a projection input image, and generating the projection input image based on those pixel values; and performing light field illumination on a target object with the projection input image. A projector and a lens array are adopted to achieve light field illumination, so that pixel-level accurate lighting control can be achieved, and various complicated light field environments can be simulated vividly in practical scenarios.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: March 20, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Qinping Zhao, Zhong Zhou, Tao Yu, Chang Xing
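The mapping from an emitted ray's position and angle to the post-lens projection ray, described in the abstract above, can be illustrated with standard paraxial ray-transfer matrices. This is a generic first-order optics sketch under the thin-lens assumption, not the patent's exact lens-array model:

```python
import numpy as np

def through_thin_lens(y, theta, f):
    """Paraxial ray-transfer through a thin lens of focal length f:
    height y is unchanged, angle is refracted by -y/f."""
    M = np.array([[1.0, 0.0],
                  [-1.0 / f, 1.0]])
    y2, theta2 = M @ np.array([y, theta])
    return y2, theta2

def propagate(y, theta, d):
    """Free-space propagation over distance d: height advances by d * angle."""
    return y + d * theta, theta
```

Chaining `propagate` and `through_thin_lens` per lenslet gives the position and angle of each projection ray, which is the geometric input to the brightness-assignment step.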
  • Patent number: 9916695
    Abstract: The invention provides a structure self-adaptive 3D model editing method, which includes: given a 3D model library, clustering 3D models of the same category according to their structures; learning a design knowledge prior between components of 3D models in the same group; learning a structure switching rule between 3D models in different groups; and, after the user edits a 3D model component, determining the final group of the model according to the inter-group design knowledge prior and editing the other components of the model according to the intra-group design knowledge prior, so that the model as a whole satisfies the design knowledge priors of its category of 3D models. By editing only a few components, the user can have the other components of the model optimized automatically, obtaining an edited 3D model that satisfies the prior designs of the model library. The invention can be applied to the fields of 3D model editing and construction, computer-aided design, etc.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: March 13, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Qinping Zhao, Xiaoyu Su
  • Publication number: 20180018539
    Abstract: The present invention provides a ranking convolutional neural network constructing method and an image processing method and apparatus thereof. The ranking convolutional neural network includes a ranking layer that is configured to sort the output of the previous layer, generate the ranking layer's output according to the sorted values, and pass that output to the next layer. Using the ranking convolutional neural network enables obtaining an output feature corresponding to the input feature image through automatic learning. Compared with prior-art methods that obtain features through manual calculation, the method of the present invention better reflects the objective regularities contained in the patterns of the actual scene. When applied to the field of image processing, the method can significantly improve the effect of image processing.
    Type: Application
    Filed: March 2, 2017
    Publication date: January 18, 2018
    Inventors: XIAOWU CHEN, YAFEI SONG, JIA LI, QINPING ZHAO, XIAOGANG WANG
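A sorting layer as described in the abstract above can be sketched as a forward pass that sorts activations and a backward pass that routes gradients through the remembered permutation. This is a minimal NumPy illustration of the idea, with assumed function names, not the patented network's implementation:

```python
import numpy as np

def ranking_layer_forward(x):
    """Sort the previous layer's activations in descending order and
    keep the permutation for the backward pass."""
    order = np.argsort(x)[::-1]
    return x[order], order

def ranking_layer_backward(grad_out, order):
    """Sorting is a permutation, so its gradient is the inverse permutation:
    send each output gradient back to the input position it came from."""
    grad_in = np.empty_like(grad_out)
    grad_in[order] = grad_out
    return grad_in
```

Because the layer only permutes values, it is differentiable almost everywhere and slots into ordinary backpropagation.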
  • Publication number: 20180012361
    Abstract: Presently disclosed is a method for co-segmenting three-dimensional models represented by sparse and low-rank features, comprising: pre-segmenting each three-dimensional model of a three-dimensional model class to obtain three-dimensional model patches for each model; constructing a histogram for the patches of each model to obtain a patch feature vector for each model; performing a sparse and low-rank representation on the patch feature vector of each model to obtain a representation coefficient and a representation error for each model; determining a confident representation coefficient for each model according to its representation coefficient and representation error; and clustering the confident representation coefficients of the models to co-segment each three-dimensional model respectively.
    Type: Application
    Filed: March 2, 2017
    Publication date: January 11, 2018
    Inventors: XIAOWU CHEN, KAN GUO, QINPING ZHAO
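Two steps from the abstract above can be sketched in miniature: building a normalized histogram descriptor per patch, and computing a self-representation coefficient matrix. The ridge-regularized self-representation below is a crude dense stand-in for the sparse and low-rank step (the actual method solves a sparse/low-rank program); all names and the per-face quantity are assumptions:

```python
import numpy as np

def patch_histogram(patch_values, bins=8, value_range=(0.0, 1.0)):
    """Histogram descriptor for one pre-segmented patch, e.g. over a
    per-face geometric quantity, normalized to sum to 1."""
    hist, _ = np.histogram(patch_values, bins=bins, range=value_range)
    hist = hist.astype(float)
    return hist / max(hist.sum(), 1.0)

def self_representation(F, lam=0.1):
    """Express each patch feature (column of F) as a combination of the
    others: F ~ F C with ridge regularization and zero diagonal.
    A dense surrogate for the sparse/low-rank coefficient step."""
    n = F.shape[1]
    G = F.T @ F
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)  # forbid trivial self-reconstruction
    return C
```

Clustering the columns of `C` (e.g. by spectral clustering on `|C| + |C|.T`) then groups patches into co-segments.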
  • Publication number: 20170293354
    Abstract: The invention provides a calculation method of line-of-sight direction based on analysis and matching of the iris contour in a human eye image, including: a data-driven method for stable calculation of the 3D line-of-sight direction by matching an input human eye image with synthetic data of virtual eyeball appearance; two novel optimization matching criteria for eyeball appearance, which effectively reduce the effects of uncontrollable factors, such as image scaling and noise, on the results; and a joint optimization method for the case of continuously shooting multiple human eye images, to further improve calculation accuracy. One application of the invention is virtual reality and human-computer interaction, under the principle of shooting eye images of a user and calculating the user's line-of-sight direction to enable interaction with an intelligent system interface or a virtual realistic object. The invention can be widely used in training, games and entertainment, video surveillance, medical care and other fields.
    Type: Application
    Filed: January 23, 2017
    Publication date: October 12, 2017
    Inventors: FENG LU, XIAOWU CHEN, QINPING ZHAO
  • Patent number: 9740956
    Abstract: The present invention provides a method for object segmentation in videos tagged with semantic labels, including: detecting each frame of a video sequence with an object bounding box detector for a given semantic category and an object contour detector, and obtaining a candidate object bounding box set and a candidate object contour set for each frame of the input video; building a joint assignment model for the candidate object bounding box set and the candidate object contour set and solving the model to obtain the initial object segment sequence; processing the initial object segments to estimate a probability distribution of the object shapes; and optimizing the initial object segment sequence with a variant of the graph cut algorithm that integrates the shape probability distribution, to obtain an optimal segment sequence.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: August 22, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Qinping Zhao, Chen Wang, Changqun Xia
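The shape-distribution step in the abstract above can be sketched generically: average aligned binary masks from the initial segment sequence into a per-pixel foreground probability, then turn it into negative-log-likelihood unary costs for a graph-cut data term. A simplified illustration with assumed names, not the patent's estimator:

```python
import numpy as np

def shape_prior(masks):
    """Per-pixel foreground probability from a list of aligned binary masks."""
    return np.mean(np.stack(masks, axis=0), axis=0)

def shape_unary_cost(prior, eps=1e-6):
    """Negative log-likelihood unary costs (foreground, background) that a
    graph-cut energy can combine with its pairwise smoothness term."""
    p = np.clip(prior, eps, 1.0 - eps)
    return -np.log(p), -np.log(1.0 - p)
```

Pixels that most initial segments agree on get a cheap foreground label, so the cut is steered toward the estimated shape distribution.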
  • Publication number: 20170221274
    Abstract: The invention provides a structure self-adaptive 3D model editing method, which includes: given a 3D model library, clustering 3D models of the same category according to their structures; learning a design knowledge prior between components of 3D models in the same group; learning a structure switching rule between 3D models in different groups; and, after the user edits a 3D model component, determining the final group of the model according to the inter-group design knowledge prior and editing the other components of the model according to the intra-group design knowledge prior, so that the model as a whole satisfies the design knowledge priors of its category of 3D models. By editing only a few components, the user can have the other components of the model optimized automatically, obtaining an edited 3D model that satisfies the prior designs of the model library. The invention can be applied to the fields of 3D model editing and construction, computer-aided design, etc.
    Type: Application
    Filed: January 12, 2017
    Publication date: August 3, 2017
    Inventors: XIAOWU CHEN, QIANG FU, QINPING ZHAO, XIAOYU SU
  • Publication number: 20170178402
    Abstract: The invention provides a line guided 3D model reshaping method, including: 1. extracting a contour of an object from an image, and selecting a contour or main skeleton to create a 2D line database; 2. extracting a 3D editable line, and retrieving and suggesting an appropriate 2D contour or skeleton from the 2D line database; 3. establishing a point-to-point correspondence by matching the 2D contour or skeleton to the 3D editable line, and reshaping the model using a parametric deformation method. With this method, a 2D contour or skeleton appropriate for the 3D model's editable line is automatically suggested from a 2D line database of multiple classes of objects to guide 3D model reshaping, and fewer user interactions are required in extracting editable lines such as axes, cross-sections and outlines from the input 3D model and in producing various types of reshaped models using the parametric deformation method, thereby helping the user design a desirable 3D model quickly and easily.
    Type: Application
    Filed: October 28, 2016
    Publication date: June 22, 2017
    Inventors: XIAOWU CHEN, QIANG FU, QINPING ZHAO, XIAOYU SU
  • Publication number: 20170134701
    Abstract: The invention provides a light field illuminating method, device and system. The method includes: determining, based on the position and angle of rays emitted from a projector and the focal length of a lens, the position and angle of the projection rays obtained after the emitted rays are transmitted through a lens array; determining, based on the position and angle of the projection rays and a light probe array of the sampled scene, the brightness value of the projection rays; converting, based on a brightness transfer function of the projector, the brightness value of the projection rays into pixel values of a projection input image, and generating the projection input image based on those pixel values; and performing light field illumination on a target object with the projection input image. A projector and a lens array are adopted to achieve light field illumination, so that pixel-level accurate lighting control can be achieved, and various complicated light field environments can be simulated vividly in practical scenarios.
    Type: Application
    Filed: January 26, 2016
    Publication date: May 11, 2017
    Inventors: Qinping ZHAO, Zhong ZHOU, Tao YU, Chang XING
  • Patent number: 9613424
    Abstract: A method of constructing a 3D clothing model based on a single image, estimating a 3D model of the human body from an inputted image and constructing a 3D clothing plane according to the clothing silhouette of the inputted image. The method includes utilizing the 3D clothing plane and the 3D model of the human body to generate a smooth 3D clothing model through a deformation algorithm. A decomposition algorithm of intrinsic images is utilized along with a shape-from-shading algorithm to acquire a set of clothing detail information from the inputted image. A weighted Laplace editing algorithm is utilized to transfer the acquired clothing detail information to the smooth 3D clothing model to yield a final 3D clothing model. The 3D clothing model is used to generate surface geometry details including folds and wrinkles.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: April 4, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Qinping Zhao, Bin Zhou, Kan Guo
  • Patent number: 9578312
    Abstract: A method of integrating binocular stereo video scenes while maintaining time consistency includes: propagating and extracting a contour of a moving object of stereo video A; integration and deformation of the parallax between the moving object and the dynamic scene with time consistency; and color blending of the moving object and the dynamic scene with time consistency, where a method of median coordinate fusion is utilized. The method is simple and effective, utilizing a small quantity of user interactions to successfully extract moving objects from stereo video that are the same in time and as consistent as possible between the left view and right view, to develop multiple constraint conditions that guide the integration and deformation of the parallax of the moving object and the dynamic scene, and to allow the moving object to conform to the rules of perspective of the dynamic scene. Moreover, the deformation result of the moving object is smooth and consistent and effectively avoids the occurrence of the "dithering" phenomenon.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: February 21, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Qinping Zhao, Feng Ding
  • Publication number: 20170018117
    Abstract: The present invention provides a method and a system for generating a three-dimensional garment model, in which garment component composition information and attribute information corresponding to each garment component are acquired by acquiring and processing RGBD data of a dressed human body, and a three-dimensional garment component model corresponding to the attribute information of each garment component is then selected from a three-dimensional garment component model library. That is, a three-dimensional garment model can be constructed rapidly and automatically with only the RGBD data of a dressed human body, and no human interaction is necessary during construction. This improves the efficiency of three-dimensional garment modeling and is significant for the development of computer-aided design, three-dimensional garment modeling and virtual garment fitting technology.
    Type: Application
    Filed: March 7, 2016
    Publication date: January 19, 2017
    Inventors: XIAOWU CHEN, BIN ZHOU, QINPING ZHAO, FEIXIANG LU, LIN WANG, LANG BI