Patents by Inventor Xiaowu Chen

Xiaowu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10963597
    Abstract: Disclosed are a method and apparatus for adaptively constructing a three-dimensional indoor scenario, the method including: establishing an object association map corresponding to different scenario categories according to an annotated indoor layout; selecting a corresponding target indoor object according to room information inputted by a user and the object association map; generating a target indoor layout according to preset room parameters inputted by the user and the annotated indoor layout; and constructing a three-dimensional indoor scenario according to the target indoor object and the target indoor layout. The disclosed method and apparatus help improve the efficiency of constructing the three-dimensional scenario.
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: March 30, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
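    The "object association map" in the abstract can be read as a co-occurrence graph over object categories learned from annotated layouts. A minimal sketch under that reading (all function names and data shapes are illustrative assumptions, not taken from the patent):

    ```python
    from collections import Counter
    from itertools import combinations

    def build_association_map(annotated_layouts):
        """Count how often two object categories co-occur in annotated rooms.

        annotated_layouts: list of rooms, each a list of object category names.
        Returns a Counter mapping (category_a, category_b) -> co-occurrence count.
        """
        counts = Counter()
        for room in annotated_layouts:
            for a, b in combinations(sorted(set(room)), 2):
                counts[(a, b)] += 1
        return counts

    def associated_objects(association_map, category, min_count=1):
        """Categories co-occurring with `category` at least `min_count` times."""
        return sorted(
            (other, n)
            for (a, b), n in association_map.items()
            for other in ((b,) if a == category else (a,) if b == category else ())
            if n >= min_count
        )
    ```

    Given such a map, selecting "target indoor objects" for a room could amount to looking up the categories most strongly associated with the objects the user's room information implies.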
  • Publication number: 20210053988
    Abstract: Compounds for use in the treatment of human immunodeficiency virus (HIV) infection are disclosed. The compounds have the following Formula (I): including stereoisomers and pharmaceutically acceptable salts thereof, wherein R1, X, W, Y1, Y2, Z1, and Z4 are as defined herein. Methods associated with preparation and use of such compounds, as well as pharmaceutical compositions comprising such compounds, are also disclosed.
    Type: Application
    Filed: May 6, 2020
    Publication date: February 25, 2021
    Inventors: Elizabeth M. Bacon, Zhenhong R. Cai, Xiaowu Chen, Jeromy J. Cottell, Manoj C. Desai, Mingzhe Ji, Haolun Jin, Scott E. Lazerwith, Michael R. Mish, Philip Anthony Morganelli, Hyung-Jung Pyun, James G. Taylor, Teresa Alejandra Trejo Martin
  • Patent number: 10740897
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method includes: acquiring three-dimensional feature information of a target two-dimensional image; performing a component-level semantic segmentation on the target two-dimensional image according to the three-dimensional feature information of the target two-dimensional image and two-dimensional feature information of the target two-dimensional image. In the technical solution of the present application, not only the two-dimensional feature information of the image but also the three-dimensional feature information of the image are taken into consideration when performing the component-level semantic segmentation on the image, thereby improving the accuracy of the image component-level semantic segmentation.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: August 11, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jia Li, Yafei Song, Yifan Zhao, Qinping Zhao
  • Patent number: 10689399
    Abstract: Compounds for use in the treatment of human immunodeficiency virus (HIV) infection are disclosed. The compounds have the following Formula (I): including stereoisomers and pharmaceutically acceptable salts thereof, wherein R1, X, W, Y1, Y2, Z1, and Z4 are as defined herein. Methods associated with preparation and use of such compounds, as well as pharmaceutical compositions comprising such compounds, are also disclosed.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: June 23, 2020
    Assignee: Gilead Sciences, Inc.
    Inventors: Elizabeth M. Bacon, Zhenhong R. Cai, Xiaowu Chen, Jeromy J. Cottell, Manoj C. Desai, Mingzhe Ji, Haolun Jin, Scott E. Lazerwith, Michael R. Mish, Philip Anthony Morganelli, Hyung-Jung Pyun, James G. Taylor, Teresa Alejandra Trejo Martin
  • Patent number: 10672130
    Abstract: Disclosed are a co-segmentation method and apparatus for a three-dimensional model set, which include: obtaining a super patch set for the three-dimensional model set, which includes at least two three-dimensional models, each of the three-dimensional models including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition, which is position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model set; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; and clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 2, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Xiaogang Wang, Zongji Wang, Qinping Zhao
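    The clustering step builds on conventional affinity propagation (Frey and Dueck's message-passing algorithm). A compact sketch of that unconstrained baseline, without the patent's consistency constraint (parameter choices are illustrative):

    ```python
    import numpy as np

    def affinity_propagation(S, damping=0.5, iters=200):
        """Conventional affinity propagation on a similarity matrix S.

        S[i, k]: similarity of point i to candidate exemplar k; the diagonal
        holds each point's "preference" to become an exemplar.
        Returns (exemplar_indices, labels), labels indexing into exemplars.
        """
        n = S.shape[0]
        R = np.zeros((n, n))  # responsibilities
        A = np.zeros((n, n))  # availabilities
        for _ in range(iters):
            # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
            AS = A + S
            top = np.argmax(AS, axis=1)
            first = AS[np.arange(n), top]
            AS[np.arange(n), top] = -np.inf
            second = AS.max(axis=1)
            R_new = S - first[:, None]
            R_new[np.arange(n), top] = S[np.arange(n), top] - second
            R = damping * R + (1 - damping) * R_new
            # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            A_new = Rp.sum(axis=0)[None, :] - Rp
            diag = np.diag(A_new).copy()
            A_new = np.minimum(A_new, 0)
            np.fill_diagonal(A_new, diag)
            A = damping * A + (1 - damping) * A_new
        exemplars = np.flatnonzero(np.diag(A + R) > 0)
        labels = np.argmax(S[:, exemplars], axis=1)
        labels[exemplars] = np.arange(len(exemplars))
        return exemplars, labels
    ```

    The patent's contribution, per the abstract, is to constrain this message passing so that super patches from a common model set are driven toward consistent cluster assignments.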
  • Patent number: 10521955
    Abstract: Disclosed are a posture-guided method and device for combination modeling of cross-category 3D models, the method including: receiving a first posture model inputted by a user; calculating similarities between q first regions of the first posture model and q second regions of each of the second posture models in a preset model database, respectively; where the model database includes a plurality of models partitioned into model components, and second posture models corresponding to the models; where the q first regions correspond to the q second regions one-to-one, and q is an integer greater than or equal to 2; selecting a corresponding plurality of model components of the q second regions of the second posture model according to the similarities; and combining the selected plurality of model components to generate a 3D model. The embodiment of the present application can combine cross-category models with large structural differences.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: December 31, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
  • Patent number: 10504209
    Abstract: The present invention provides a method for constructing a ranking convolutional neural network, along with an image processing method and apparatus based on it. The ranking convolutional neural network includes a ranking layer that is configured to sort the output of the previous layer, generate the output of the ranking layer according to the sorted output, and pass that output to the next layer. Using the ranking convolutional neural network, an output feature corresponding to the input feature image is obtained through automatic learning. Compared with prior-art methods that obtain features through manual calculation, the method of the present invention better reflects the objective regularities contained in the patterns of the actual scene. When applied to the field of image processing, the method can significantly improve the results of image processing.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: December 10, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yafei Song, Jia Li, Qinping Zhao, Xiaogang Wang
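    The core idea of the ranking layer, re-ordering the previous layer's activations before passing them on, can be illustrated in a few lines (a NumPy toy, not the patented network; backpropagation through the sort simply routes each gradient back to the position its activation came from):

    ```python
    import numpy as np

    def ranking_layer(x):
        """Sort each feature vector of the previous layer in descending order.

        The sorted response discards spatial position but preserves the
        distribution of activation strengths, which the next layer learns from.
        x: array of shape (batch, features).
        """
        return -np.sort(-x, axis=-1)

    features = np.array([[0.2, 0.9, 0.5]])
    print(ranking_layer(features))  # [[0.9 0.5 0.2]]
    ```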
  • Patent number: 10489914
    Abstract: The present application provides a method and an apparatus for parsing and processing a three-dimensional CAD model, where the method includes: determining three kinds of adjacency relation information for each component in the three-dimensional model; performing aggregation processing on all components of the three-dimensional model, and generating three part hypothesis sets for the three-dimensional model; performing voxelization expression processing on each part hypothesis in each part hypothesis set, and generating voxelization information for each part hypothesis; inputting voxelization information of all part hypotheses in each part hypothesis set into an identification model to obtain a confidence score and a semantic category probability distribution for each part hypothesis; and constructing, according to the confidence score and the semantic category probability distribution of each part hypothesis in the part hypothesis sets, a high-order conditional random field model, and obtaining a semantic ca
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: November 26, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Xiaogang Wang, Bin Zhou, Haiyue Fang, Qinping Zhao
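    The voxelization step, which turns each part hypothesis into an occupancy grid for the identification model, might look roughly like the following (a hedged sketch; the resolution and normalization scheme are assumptions):

    ```python
    import numpy as np

    def voxelize(points, resolution=32):
        """Map a point set (e.g. surface samples of a part hypothesis) into a
        binary occupancy grid of shape (resolution, resolution, resolution)."""
        pts = np.asarray(points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        span = np.maximum(hi - lo, 1e-9)  # avoid division by zero on flat parts
        idx = ((pts - lo) / span * (resolution - 1)).round().astype(int)
        grid = np.zeros((resolution,) * 3, dtype=bool)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
        return grid
    ```

    Each part hypothesis, voxelized this way, becomes a fixed-size tensor that an identification network can score for confidence and semantic category.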
  • Patent number: 10417805
    Abstract: Provided is a method for automatically constructing a behavior-constrained three-dimensional indoor scenario, including: determining a room, a parameter of the room, and first behavioral semantics according to entered information; obtaining a corresponding first object category according to the first behavioral semantics; determining a corresponding first reference layout and a three-dimensional model of objects in the room according to the first object category and the parameter of the room; and generating a three-dimensional indoor scenario in the room according to the first reference layout and the three-dimensional model.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 17, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Bo Gao
  • Patent number: 10402680
    Abstract: A method and an apparatus for extracting a saliency map are provided in the embodiment of the present application. The method includes: conducting first convolution processing, first pooling processing and normalization processing on an original image via a prediction model to obtain eye fixation information from the original image, where the eye fixation information indicates the region at which human eyes gaze; conducting second convolution processing and second pooling processing on the original image via the prediction model to obtain semantic description information from the original image; fusing the eye fixation information and the semantic description information via an element-wise summation; and conducting detection processing on the fused eye fixation information and semantic description information via the prediction model to obtain a saliency map from the original image. This improves the efficiency of extracting the saliency map from an image.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: September 3, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Anlin Zheng, Jia Li, Qinping Zhao, Feng Lu
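    The fusion step is a plain element-wise summation of the two intermediate maps; a minimal sketch, assuming both maps share one spatial shape and the result is rescaled to [0, 1] before the final detection stage:

    ```python
    import numpy as np

    def fuse_maps(fixation, semantic):
        """Element-wise summation of an eye-fixation map and a semantic map.

        Both inputs are assumed to have the same spatial shape; the sum is
        rescaled to [0, 1] (an illustrative normalization, not from the patent).
        """
        assert fixation.shape == semantic.shape
        fused = fixation + semantic
        lo, hi = fused.min(), fused.max()
        return (fused - lo) / (hi - lo) if hi > lo else np.zeros_like(fused)
    ```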
  • Patent number: 10387748
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing the visual attributes and features of salient and non-salient objects using a background prior and constructing a quadratic optimization problem; calculating an initial saliency probability map; selecting the most trusted foreground and background seed points; performing manifold-preserving foreground propagation; generating a final foreground probability map; generating a candidate object set for the image via an objectness proposal approach; using a shape feature, a foregroundness and an attention feature to characterize each candidate object; training linear exemplar regressors for each training image to characterize the particular saliency pattern of the image; aggregating the plurality of linear exemplar regressors; calculating saliency values for the candidate object set of a test image; and forming an image salient object segmentation model capable
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 20, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Changqun Xia, Jia Li, Qinping Zhao
  • Patent number: 10354392
    Abstract: The disclosure involves an image-guided video semantic object segmentation method and apparatus, which: locate a target object in a sample image to obtain an object sample; extract candidate regions from each frame; match the multiple candidate regions extracted from each frame against the object sample to obtain a similarity rating for each candidate region; rank the candidate regions by similarity rating to select a predefined number of high-rating candidate regions; preliminarily segment a foreground and a background from the selected high-rating candidate regions; construct an optimization function for the preliminarily segmented foreground and background; solve the optimization function to obtain an optimal candidate region set; and propagate the preliminary foreground segmentation corresponding to the optimal candidate regions to the entire video to obtain a semantic object segmentation of the input video.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: July 16, 2019
    Assignee: Beihang University
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Wei Teng, Haokun Song, Qinping Zhao
  • Patent number: 10275653
    Abstract: Provided are a method and a system for detecting and segmenting primary video objects with neighborhood reversibility, including: dividing each video frame of a video into super pixel blocks; representing each super pixel block with visual features; constructing and training a deep neural network to predict the initial foreground value for each super pixel block in the spatial domain; constructing a neighborhood reversible matrix and transmitting the initial foreground value; constructing an iterative optimization problem and solving for the final foreground value in the temporal-spatial domain; performing a pixel-level transformation on the final foreground value; optimizing the final foreground value for each pixel using morphological smoothing operations; and determining whether the pixel belongs to the primary video objects according to the final foreground value.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: April 30, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Xiaowu Chen, Bin Zhou, Qinping Zhao, Changqun Xia, Anlin Zheng, Yu Zhang
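    "Neighborhood reversibility" can plausibly be read as a mutual k-nearest-neighbor test between super pixel blocks of adjacent frames; the sketch below follows that assumption (the function name and the choice of squared-Euclidean distance are illustrative):

    ```python
    import numpy as np

    def reversible_pairs(feat_t, feat_t1, k=2):
        """Super pixel i in frame t and j in frame t+1 are neighborhood-reversible
        when each lies among the other's k nearest neighbors in feature space.

        feat_t: (n, d) visual features of frame t's super pixels; feat_t1: (m, d).
        Returns a boolean (n, m) matrix usable for propagating foreground values.
        """
        D = ((feat_t[:, None, :] - feat_t1[None, :, :]) ** 2).sum(axis=-1)
        knn_fwd = np.argsort(D, axis=1)[:, :k]    # neighbors of i within frame t+1
        knn_bwd = np.argsort(D, axis=0)[:k, :].T  # neighbors of j within frame t
        M = np.zeros(D.shape, dtype=bool)
        for i in range(D.shape[0]):
            for j in knn_fwd[i]:
                if i in knn_bwd[j]:
                    M[i, j] = True
        return M
    ```

    Restricting propagation to mutually nearest pairs suppresses one-sided matches, which tend to arise at occlusions and fast motion.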
  • Publication number: 20190092787
    Abstract: Compounds for use in the treatment of human immunodeficiency virus (HIV) infection are disclosed. The compounds have the following Formula (I): including stereoisomers and pharmaceutically acceptable salts thereof, wherein R1, X, W, Y1, Y2, Z1, and Z4 are as defined herein. Methods associated with preparation and use of such compounds, as well as pharmaceutical compositions comprising such compounds, are also disclosed.
    Type: Application
    Filed: June 25, 2018
    Publication date: March 28, 2019
    Inventors: Elizabeth M. Bacon, Zhenhong R. Cai, Xiaowu Chen, Jeromy J. Cottell, Manoj C. Desai, Mingzhe Ji, Haolun Jin, Scott E. Lazerwith, Michael R. Mish, Philip Anthony Morganelli, Hyung-Jung Pyun, James G. Taylor, Teresa Alejandra Trejo Martin
  • Publication number: 20190087964
    Abstract: The present application provides a method and an apparatus for parsing and processing a three-dimensional CAD model, where the method includes: determining three kinds of adjacency relation information for each component in the three-dimensional model; performing aggregation processing on all components of the three-dimensional model, and generating three part hypothesis sets for the three-dimensional model; performing voxelization expression processing on each part hypothesis in each part hypothesis set, and generating voxelization information for each part hypothesis; inputting voxelization information of all part hypotheses in each part hypothesis set into an identification model to obtain a confidence score and a semantic category probability distribution for each part hypothesis; and constructing, according to the confidence score and the semantic category probability distribution of each part hypothesis in the part hypothesis sets, a high-order conditional random field model, and obtaining a semantic ca
    Type: Application
    Filed: September 14, 2018
    Publication date: March 21, 2019
    Inventors: Xiaowu Chen, Xiaogang Wang, Bin Zhou, Haiyue Fang, Qinping Zhao
  • Patent number: 10235571
    Abstract: The present invention provides a method for video matting via sparse and low-rank representation, which first selects frames that represent video characteristics in the input video as keyframes, then trains a dictionary according to known pixels in the keyframes, next obtains a reconstruction coefficient satisfying the low-rank, sparse and non-negative constraints according to the dictionary, and sets the non-local relationship matrix between pixels in the input video according to the reconstruction coefficient, meanwhile setting the Laplace matrix between multiple frames; it obtains a video alpha matte of the input video according to the α values of the known pixels of the input video and the α values of the sample points in the dictionary, the non-local relationship matrix and the Laplace matrix; and finally extracts a foreground object in the input video according to the video alpha matte, thereby improving the quality of the extracted foreground object.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: March 19, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Guangying Cao, Xiaogang Wang
  • Publication number: 20190080455
    Abstract: Embodiments of the present invention provide a method and a device for three-dimensional feature-embedded image object component-level semantic segmentation, the method includes: acquiring three-dimensional feature information of a target two-dimensional image; performing a component-level semantic segmentation on the target two-dimensional image according to the three-dimensional feature information of the target two-dimensional image and two-dimensional feature information of the target two-dimensional image. In the technical solution of the present application, not only the two-dimensional feature information of the image but also the three-dimensional feature information of the image are taken into consideration when performing the component-level semantic segmentation on the image, thereby improving the accuracy of the image component-level semantic segmentation.
    Type: Application
    Filed: January 15, 2018
    Publication date: March 14, 2019
    Inventors: Xiaowu Chen, Jia Li, Yafei Song, Yifan Zhao, Qinping Zhao
  • Publication number: 20190057538
    Abstract: Provided is a method for automatically constructing a behavior-constrained three-dimensional indoor scenario, including: determining a room, a parameter of the room, and first behavioral semantics according to entered information; obtaining a corresponding first object category according to the first behavioral semantics; determining a corresponding first reference layout and a three-dimensional model of objects in the room according to the first object category and the parameter of the room; and generating a three-dimensional indoor scenario in the room according to the first reference layout and the three-dimensional model.
    Type: Application
    Filed: December 19, 2017
    Publication date: February 21, 2019
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Bo Gao
  • Publication number: 20190018909
    Abstract: Disclosed are a method and apparatus for adaptively constructing a three-dimensional indoor scenario, the method including: establishing an object association map corresponding to different scenario categories according to an annotated indoor layout; selecting a corresponding target indoor object according to room information inputted by a user and the object association map; generating a target indoor layout according to preset room parameters inputted by the user and the annotated indoor layout; and constructing a three-dimensional indoor scenario according to the target indoor object and the target indoor layout. The disclosed method and apparatus help improve the efficiency of constructing the three-dimensional scenario.
    Type: Application
    Filed: January 17, 2018
    Publication date: January 17, 2019
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen
  • Publication number: 20190012830
    Abstract: Disclosed are a posture-guided method and device for combination modeling of cross-category 3D models, the method including: receiving a first posture model inputted by a user; calculating similarities between q first regions of the first posture model and q second regions of each of the second posture models in a preset model database, respectively; where the model database includes a plurality of models partitioned into model components, and second posture models corresponding to the models; where the q first regions correspond to the q second regions one-to-one, and q is an integer greater than or equal to 2; selecting a corresponding plurality of model components of the q second regions of the second posture model according to the similarities; and combining the selected plurality of model components to generate a 3D model. The embodiment of the present application can combine cross-category models with large structural differences.
    Type: Application
    Filed: October 24, 2017
    Publication date: January 10, 2019
    Inventors: Xiaowu Chen, Qiang Fu, Bin Zhou, Qinping Zhao, Xiaotian Wang, Sijia Wen