Patents by Inventor CHANGQUN XIA

CHANGQUN XIA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11151725
    Abstract: An image salient object segmentation method and apparatus based on reciprocal attention between a foreground and a background are provided. The method includes: obtaining a feature map corresponding to a training image based on a convolutional neural backbone network, and obtaining foreground and background initial feature responses according to the feature map; obtaining a reciprocal attention weight matrix, and updating the foreground and background initial feature responses according to the reciprocal attention weight matrix (see the reciprocal-attention sketch after this listing), to obtain foreground and background feature maps; training the convolutional neural backbone network according to the foreground and background feature maps based on a cross entropy loss function and a cooperative loss function, to obtain a foreground and background segmentation convolutional neural network model; and inputting an image to be segmented into the foreground and background segmentation convolutional neural network model to obtain foreground and background prediction results
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: October 19, 2021
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Changqun Xia, Jinming Su, Qinping Zhao
  • Publication number: 20200372660
    Abstract: An image salient object segmentation method and apparatus based on reciprocal attention between a foreground and a background are provided. The method includes: obtaining a feature map corresponding to a training image based on a convolutional neural backbone network, and obtaining foreground and background initial feature responses according to the feature map; obtaining a reciprocal attention weight matrix, and updating the foreground and background initial feature responses according to the reciprocal attention weight matrix, to obtain foreground and background feature maps; training the convolutional neural backbone network according to the foreground and background feature maps based on a cross entropy loss function and a cooperative loss function, to obtain a foreground and background segmentation convolutional neural network model; and inputting an image to be segmented into the foreground and background segmentation convolutional neural network model to obtain foreground and background prediction results
    Type: Application
    Filed: September 24, 2019
    Publication date: November 26, 2020
    Inventors: Jia Li, Changqun Xia, Jinming Su, Qinping Zhao
  • Patent number: 10387748
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing visual attributes and features of salient and non-salient objects using a background prior and constructing a quadratic optimization problem, calculating an initial saliency probability map, selecting the most trusted foreground and background seed points, performing manifold-preserving foreground propagation, and generating a final foreground probability map; generating a candidate object set for the image by adopting an objectness proposal, using a shape feature, a foregroundness feature and an attention feature to characterize each candidate object, and training a linear exemplar regressor for each training image to characterize the particular saliency pattern of that image; and aggregating the plurality of linear exemplar regressors (see the aggregation sketch after this listing), calculating saliency values for the candidate object set of a test image, and forming an image salient object segmentation model capable
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 20, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Changqun Xia, Jia Li, Qinping Zhao
  • Patent number: 10275653
    Abstract: Provided are a method and a system for detecting and segmenting primary video objects with neighborhood reversibility, including: dividing each video frame of a video into superpixel blocks; representing each superpixel block with visual features; constructing and training a deep neural network to predict an initial foreground value for each superpixel block in the spatial domain; constructing a neighborhood reversible matrix and transmitting the initial foreground values (see the neighborhood-reversibility sketch after this listing); constructing an iterative optimization problem and resolving the final foreground values in the temporal-spatial domain; performing a pixel-level transformation on the final foreground values; optimizing the final foreground value for each pixel using morphological smoothing operations; and determining whether each pixel belongs to the primary video objects according to its final foreground value.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: April 30, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Jia Li, Xiaowu Chen, Bin Zhou, Qinping Zhao, Changqun Xia, Anlin Zheng, Yu Zhang
  • Publication number: 20180247126
    Abstract: Provided are a method and a system for detecting and segmenting primary video objects with neighborhood reversibility, including: dividing each video frame of a video into superpixel blocks; representing each superpixel block with visual features; constructing and training a deep neural network to predict an initial foreground value for each superpixel block in the spatial domain; constructing a neighborhood reversible matrix and transmitting the initial foreground values; constructing an iterative optimization problem and resolving the final foreground values in the temporal-spatial domain; performing a pixel-level transformation on the final foreground values; optimizing the final foreground value for each pixel using morphological smoothing operations; and determining whether each pixel belongs to the primary video objects according to its final foreground value.
    Type: Application
    Filed: September 28, 2017
    Publication date: August 30, 2018
    Inventors: Jia Li, Xiaowu Chen, Bin Zhou, Qinping Zhao, Changqun Xia, Anlin Zheng, Yu Zhang
  • Publication number: 20180204088
    Abstract: Provided is a method for salient object segmentation of an image by aggregating multiple linear exemplar regressors, including: analyzing and summarizing visual attributes and features of salient and non-salient objects using a background prior and constructing a quadratic optimization problem, calculating an initial saliency probability map, selecting the most trusted foreground and background seed points, performing manifold-preserving foreground propagation, and generating a final foreground probability map; generating a candidate object set for the image by adopting an objectness proposal, using a shape feature, a foregroundness feature and an attention feature to characterize each candidate object, and training a linear exemplar regressor for each training image to characterize the particular saliency pattern of that image; and aggregating the plurality of linear exemplar regressors, calculating saliency values for the candidate object set of a test image, and forming an image salient object segmentation model capable
    Type: Application
    Filed: October 24, 2017
    Publication date: July 19, 2018
    Inventors: Xiaowu Chen, Changqun Xia, Jia Li, Qinping Zhao
  • Patent number: 9740956
    Abstract: The present invention provides a method for object segmentation in videos tagged with semantic labels, including: detecting each frame of a video sequence with an object bounding box detector for a given semantic category and an object contour detector, and obtaining a candidate object bounding box set and a candidate object contour set for each frame of the input video; building a joint assignment model for the candidate object bounding box set and the candidate object contour set (see the joint-assignment sketch after this listing) and solving the model to obtain an initial object segment sequence; processing the initial object segment sequence to estimate a probability distribution of the object shapes; and optimizing the initial object segment sequence with a variant of the graph cut algorithm that integrates the shape probability distribution, to obtain an optimal segment sequence.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: August 22, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Qinping Zhao, Chen Wang, Changqun Xia
  • Publication number: 20160379371
    Abstract: The present invention provides a method for object segmentation in videos tagged with semantic labels, including: detecting each frame of a video sequence with an object bounding box detector for a given semantic category and an object contour detector, and obtaining a candidate object bounding box set and a candidate object contour set for each frame of the input video; building a joint assignment model for the candidate object bounding box set and the candidate object contour set and solving the model to obtain an initial object segment sequence; processing the initial object segment sequence to estimate a probability distribution of the object shapes; and optimizing the initial object segment sequence with a variant of the graph cut algorithm that integrates the shape probability distribution, to obtain an optimal segment sequence.
    Type: Application
    Filed: March 29, 2016
    Publication date: December 29, 2016
    Inventors: Xiaowu Chen, Yu Zhang, Jia Li, Qinping Zhao, Chen Wang, Changqun Xia
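
The abstracts above compress several algorithmic steps into single sentences; the sketches below unpack a few of them. First, the reciprocal-attention update described in patent 11151725. This is a minimal NumPy sketch, assuming flattened foreground/background responses, a dot-product affinity, softmax normalization, and a residual update; none of these choices are specified by the abstract, so treat the code as an illustration rather than the patented formulation.

```python
# Hedged sketch of a reciprocal-attention update between foreground and
# background feature responses (cf. patent 11151725). Shapes, the softmax
# normalization, and the residual update are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reciprocal_attention(fg, bg):
    """Update foreground/background feature maps with a shared attention matrix.

    fg, bg: (C, H, W) initial feature responses from a backbone network.
    Returns updated (C, H, W) foreground and background feature maps.
    """
    c, h, w = fg.shape
    f = fg.reshape(c, h * w)                 # (C, N) flattened foreground
    b = bg.reshape(c, h * w)                 # (C, N) flattened background

    # Reciprocal attention weight matrix: pairwise affinity between
    # foreground and background positions (N x N).
    affinity = f.T @ b

    attn_fb = softmax(affinity, axis=1)      # foreground -> background weights
    attn_bf = softmax(affinity.T, axis=1)    # background -> foreground weights

    fg_upd = f + b @ attn_fb.T               # inject background context into foreground
    bg_upd = b + f @ attn_bf.T               # inject foreground context into background
    return fg_upd.reshape(c, h, w), bg_upd.reshape(c, h, w)

# Toy usage with random responses standing in for backbone features.
rng = np.random.default_rng(0)
fg0 = rng.standard_normal((8, 16, 16))
bg0 = rng.standard_normal((8, 16, 16))
fg1, bg1 = reciprocal_attention(fg0, bg0)
print(fg1.shape, bg1.shape)                  # (8, 16, 16) (8, 16, 16)
```

The point of the reciprocal structure is that one affinity matrix is read in both directions, so each branch borrows context from the other before the cross entropy and cooperative losses are applied during training.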
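
Second, the exemplar-regressor aggregation from patent 10387748. A minimal sketch, assuming ridge regression as the per-image linear regressor and plain averaging as the aggregation rule; the feature dimensionality, candidate counts, and targets are placeholders rather than the shape, foregroundness, and attention features named in the abstract.

```python
# Hedged sketch of aggregating per-image linear exemplar regressors
# (cf. patent 10387748). Ridge regression and simple averaging are
# illustrative assumptions, not the patented training procedure.
import numpy as np

def fit_linear_exemplar(X, y, lam=1.0):
    """Closed-form ridge weights for one training image's candidate objects.

    X: (n_candidates, n_features) candidate descriptors.
    y: (n_candidates,) saliency targets for those candidates.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def aggregate_regressors(weights, X_test):
    """Average the saliency predicted by every exemplar regressor."""
    preds = np.stack([X_test @ w for w in weights], axis=0)  # (n_exemplars, n_test)
    return preds.mean(axis=0)

# Toy usage: 3 training images, 5 candidates each, 4-dimensional features.
rng = np.random.default_rng(1)
weights = [fit_linear_exemplar(rng.standard_normal((5, 4)), rng.random(5))
           for _ in range(3)]
saliency = aggregate_regressors(weights, rng.standard_normal((6, 4)))
print(saliency.shape)                                        # (6,)
```

Each exemplar regressor captures the saliency pattern of one training image; aggregation lets a test image's candidates be scored by many such patterns at once.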
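
Third, the neighborhood reversible matrix from patent 10275653 can be read as a mutual k-nearest-neighbor test between superpixel blocks of adjacent frames, over which initial foreground values are transmitted. The Euclidean distance, k = 3, and simple averaging below are illustrative assumptions.

```python
# Hedged sketch of a neighborhood reversible matrix between superpixel blocks
# of adjacent frames (cf. patent 10275653). The k-NN rule, the reciprocity
# test, and the averaging used for transmission are illustrative assumptions.
import numpy as np

def knn_indices(A, B, k):
    """For each row of A, indices of its k nearest rows of B (Euclidean)."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.argsort(d, axis=1)[:, :k]

def neighborhood_reversible_matrix(feat_t, feat_t1, k=3):
    """M[i, j] = 1 iff block i (frame t) and block j (frame t+1) are mutually
    among each other's k nearest neighbors in feature space."""
    nn_fwd = knn_indices(feat_t, feat_t1, k)     # t   -> t+1
    nn_bwd = knn_indices(feat_t1, feat_t, k)     # t+1 -> t
    M = np.zeros((len(feat_t), len(feat_t1)))
    for i, js in enumerate(nn_fwd):
        for j in js:
            if i in nn_bwd[j]:
                M[i, j] = 1.0
    return M

def transmit_foreground(M, fg_t1):
    """Average frame t+1 foreground values over the reversible links of each block."""
    row_sum = M.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0                  # isolated blocks receive 0
    return (M @ fg_t1[:, None] / row_sum).ravel()

# Toy usage: 10 superpixel blocks per frame, 6-dimensional visual features.
rng = np.random.default_rng(2)
f_t, f_t1 = rng.standard_normal((10, 6)), rng.standard_normal((10, 6))
M = neighborhood_reversible_matrix(f_t, f_t1, k=3)
print(transmit_foreground(M, rng.random(10)).shape)          # (10,)
```

Requiring the neighbor relation to hold in both directions keeps transmission between frames conservative, which is what makes it a useful starting point for the iterative temporal-spatial optimization.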
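
Finally, the joint assignment step of patent 9740956 can be pictured as scoring every (bounding box, contour) pair in a frame and keeping the best match as the initial object segment. The greedy per-frame IoU-times-confidence score below is a deliberate simplification; the patent formulates a joint assignment model over the whole sequence and later refines the segments with a shape-aware graph cut variant.

```python
# Hedged sketch of pairing candidate bounding boxes with candidate contours
# (cf. patent 9740956). Contours are represented by their bounding boxes and
# scored greedily per frame, which is an illustrative simplification.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def assign_frame(boxes, box_scores, contour_boxes, contour_scores):
    """Return the (box index, contour index) pair with the highest joint score."""
    best, best_pair = -1.0, (None, None)
    for i, (b, sb) in enumerate(zip(boxes, box_scores)):
        for j, (c, sc) in enumerate(zip(contour_boxes, contour_scores)):
            score = iou(b, c) * sb * sc          # overlap times detector confidences
            if score > best:
                best, best_pair = score, (i, j)
    return best_pair, best

# Toy usage: two candidate boxes and two candidate contours (via their boxes).
boxes = [(10, 10, 50, 60), (12, 8, 55, 58)]
contours = [(11, 12, 49, 59), (30, 30, 80, 90)]
print(assign_frame(boxes, [0.9, 0.7], contours, [0.8, 0.6]))
```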