Patents by Inventor Xiangyang Xue

Xiangyang Xue has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11593957
    Abstract: A network for category-level 6D pose and size estimation, including a 3D-OCR module for 3D Orientation-Consistent Representation, a GeoReS module for Geometry-constrained Reflection Symmetry, and an MPDE module for Mirror-Paired Dimensional Estimation; wherein the 3D-OCR module and the GeoReS module are incorporated in parallel; the 3D-OCR module receives a canonical template shape including canonical category-specific keypoints; the GeoReS module receives an original input depth observation including pre-processed predicted category labels and potential masks of the target instances; the MPDE module receives the output from the GeoReS module as well as the original input depth observation; and the network outputs the estimation results based on the output of the MPDE module, the output of the 3D-OCR module, and the canonical template shape. Also provided are corresponding systems and methods. (An illustrative code sketch of this module wiring appears after this listing.)
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: February 28, 2023
    Assignee: Fudan University
    Inventors: Yanwei Fu, Haitao Lin, Xiangyang Xue
  • Patent number: 11455776
    Abstract: A network for neural pose transfer includes a pose feature extractor and a style transfer decoder, wherein the pose feature extractor comprises a plurality of sequential extracting stacks, each of which consists of a first convolution layer and an Instance Norm layer sequential to the first convolution layer. The style transfer decoder comprises a plurality of sequential decoding stacks, a second convolution layer sequential to the plurality of decoding stacks, and a tanh layer sequential to the second convolution layer. Each decoding stack consists of a third convolution layer and a SPAdaIn residual block. A source pose mesh is input to the pose feature extractor, and an identity mesh is concatenated with the output of the pose feature extractor and simultaneously fed to each SPAdaIn residual block of the style transfer decoder. A system thereof is also provided. (An illustrative code sketch of this architecture appears after this listing.)
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: September 27, 2022
    Assignee: Fudan University
    Inventors: Yanwei Fu, Xiangyang Xue, Yinda Zhang, Chao Wen, Haitao Lin, Jiashun Wang, Tianyun Zou
  • Publication number: 20220292698
    Abstract: A network for category-level 6D pose and size estimation, including a 3D-OCR module for 3D Orientation-Consistent Representation, a GeoReS module for Geometry-constrained Reflection Symmetry, and an MPDE module for Mirror-Paired Dimensional Estimation; wherein the 3D-OCR module and the GeoReS module are incorporated in parallel; the 3D-OCR module receives a canonical template shape including canonical category-specific keypoints; the GeoReS module receives an original input depth observation including pre-processed predicted category labels and potential masks of the target instances; the MPDE module receives the output from the GeoReS module as well as the original input depth observation; and the network outputs the estimation results based on the output of the MPDE module, the output of the 3D-OCR module, and the canonical template shape. Also provided are corresponding systems and methods.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 15, 2022
    Inventors: Yanwei Fu, Haitao Lin, Xiangyang Xue
  • Publication number: 20210407200
    Abstract: A network for neural pose transfer includes a pose feature extractor and a style transfer decoder, wherein the pose feature extractor comprises a plurality of sequential extracting stacks, each of which consists of a first convolution layer and an Instance Norm layer sequential to the first convolution layer. The style transfer decoder comprises a plurality of sequential decoding stacks, a second convolution layer sequential to the plurality of decoding stacks, and a tanh layer sequential to the second convolution layer. Each decoding stack consists of a third convolution layer and a SPAdaIn residual block. A source pose mesh is input to the pose feature extractor, and an identity mesh is concatenated with the output of the pose feature extractor and simultaneously fed to each SPAdaIn residual block of the style transfer decoder. A system thereof is also provided.
    Type: Application
    Filed: September 10, 2020
    Publication date: December 30, 2021
    Inventors: Yanwei Fu, Xiangyang Xue, Yinda Zhang
  • Patent number: 11055549
    Abstract: A network for image processing, and more particularly for coarse-to-fine image recognition, is provided. (A generic illustrative sketch of the coarse-to-fine idea appears after this listing.)
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: July 6, 2021
    Assignee: Fudan University
    Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
  • Publication number: 20200026942
    Abstract: A network for image processing, and more particularly for coarse-to-fine image recognition, is provided.
    Type: Application
    Filed: May 20, 2019
    Publication date: January 23, 2020
    Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
  • Publication number: 20170228618
    Abstract: A video classification method and apparatus are provided in embodiments of the present invention. The method includes: establishing a neural network classification model according to a relationship between features of video samples and a semantic relationship of the video samples; obtaining a feature combination of a to-be-classified video file; and classifying the to-be-classified video file by using the neural network classification model and the feature combination of the to-be-classified video file. The neural network classification model is established according to the relationship between the features of the video samples and the semantic relationship of the video samples, and the relationship between the features and the semantic relationship is fully considered. Therefore, video classification accuracy is improved. (An illustrative code sketch of this approach appears after this listing.)
    Type: Application
    Filed: April 24, 2017
    Publication date: August 10, 2017
    Inventors: Yugang Jiang, Zuxuan Wu, Xiangyang Xue, Zichen Gu, Zhenhua Chai
  • Patent number: 9465992
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training on a training image set, where each local detector corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature of the to-be-recognized scene that is based on a local area of the target; and recognizing the to-be-recognized scene according to that feature. (An illustrative code sketch of this pipeline appears after this listing.)
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
  • Publication number: 20150186726
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training on a training image set, where each local detector corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature of the to-be-recognized scene that is based on a local area of the target; and recognizing the to-be-recognized scene according to that feature.
    Type: Application
    Filed: March 13, 2015
    Publication date: July 2, 2015
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
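
For patent 11593957 (category-level 6D pose and size estimation), the following is a minimal sketch of the module wiring described in the abstract: a 3D-OCR branch and a GeoReS branch run in parallel, the MPDE module consumes the GeoReS output together with the raw depth observation, and the final pose and size estimate combines the MPDE output, the 3D-OCR output, and the canonical template shape. Every encoder, layer size, and output head below is an assumption made for illustration; only the data flow follows the abstract.

import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Shared-MLP point encoder with max pooling (placeholder backbone)."""

    def __init__(self, in_dim, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.ReLU(),
        )

    def forward(self, points):
        # points: (B, N, in_dim) -> global feature (B, feat_dim)
        return self.mlp(points.transpose(1, 2)).max(dim=2).values


class PoseSizeNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.ocr_branch = PointEncoder(3, feat_dim)     # 3D-OCR: encodes the canonical keypoints
        self.geores_branch = PointEncoder(4, feat_dim)  # GeoReS: depth points plus a mask channel
        self.mpde_depth = PointEncoder(3, feat_dim)     # MPDE re-reads the raw depth observation
        self.mpde_head = nn.Sequential(                 # MPDE output: object dimensions
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, 3)
        )
        self.pose_head = nn.Sequential(                 # fuses MPDE output, 3D-OCR output, template
            nn.Linear(feat_dim + 3 + 3, 128), nn.ReLU(), nn.Linear(128, 7)
        )

    def forward(self, template_kpts, depth_points, mask):
        # template_kpts: (B, K, 3), depth_points: (B, N, 3), mask: (B, N, 1)
        ocr_feat = self.ocr_branch(template_kpts)                                # parallel branch 1
        geo_feat = self.geores_branch(torch.cat([depth_points, mask], dim=-1))   # parallel branch 2
        depth_feat = self.mpde_depth(depth_points)
        size = self.mpde_head(torch.cat([geo_feat, depth_feat], dim=-1))         # (B, 3)
        template_centroid = template_kpts.mean(dim=1)   # crude stand-in for the template shape
        pose = self.pose_head(torch.cat([ocr_feat, size, template_centroid], dim=-1))  # (B, 7): quaternion + translation (assumed)
        return pose, size


net = PoseSizeNet()
pose, size = net(torch.randn(2, 16, 3), torch.randn(2, 1024, 3), torch.ones(2, 1024, 1))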
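
For patent 11455776 (neural pose transfer), here is a minimal sketch of the architecture outlined in the abstract: a pose feature extractor built from (convolution + Instance Norm) stacks, and a style transfer decoder whose decoding stacks pair a convolution with a SPAdaIn residual block, followed by a final convolution and a tanh. The identity mesh is concatenated with the extractor output and also fed to every SPAdaIn block. The channel widths, the number of stacks, and the internals of the SPAdaIn block are assumptions for illustration.

import torch
import torch.nn as nn


class SPAdaInResBlock(nn.Module):
    """Residual block whose instance-normalised features are modulated per vertex
    by scale/shift predicted from the identity mesh (one common reading of SPAdaIn)."""

    def __init__(self, channels, cond_dim=3):
        super().__init__()
        self.norm = nn.InstanceNorm1d(channels, affine=False)
        self.gamma = nn.Conv1d(cond_dim, channels, 1)
        self.beta = nn.Conv1d(cond_dim, channels, 1)
        self.conv = nn.Conv1d(channels, channels, 1)

    def forward(self, x, identity):              # x: (B, C, V), identity: (B, 3, V)
        h = self.norm(x) * self.gamma(identity) + self.beta(identity)
        return x + self.conv(torch.relu(h))


class PoseTransferNet(nn.Module):
    def __init__(self, width=64, n_stacks=3):
        super().__init__()
        # Pose feature extractor: sequential (Conv1d + InstanceNorm) stacks.
        layers, in_ch = [], 3
        for _ in range(n_stacks):
            layers += [nn.Conv1d(in_ch, width, 1), nn.InstanceNorm1d(width)]
            in_ch = width
        self.extractor = nn.Sequential(*layers)
        # Style transfer decoder: (Conv1d + SPAdaIn residual block) stacks, then Conv1d + tanh.
        self.dec_convs = nn.ModuleList(
            [nn.Conv1d(width + 3 if i == 0 else width, width, 1) for i in range(n_stacks)]
        )
        self.dec_blocks = nn.ModuleList([SPAdaInResBlock(width) for _ in range(n_stacks)])
        self.out_conv = nn.Conv1d(width, 3, 1)

    def forward(self, pose_mesh, identity_mesh):  # both: (B, V, 3) vertex coordinates
        pose = pose_mesh.transpose(1, 2)          # (B, 3, V)
        identity = identity_mesh.transpose(1, 2)  # (B, 3, V)
        feat = self.extractor(pose)               # pose feature (B, width, V)
        h = torch.cat([feat, identity], dim=1)    # identity concatenated with extractor output
        for conv, block in zip(self.dec_convs, self.dec_blocks):
            h = block(conv(h), identity)          # identity also fed to every SPAdaIn block
        return torch.tanh(self.out_conv(h)).transpose(1, 2)  # output vertices (B, V, 3)


net = PoseTransferNet()
out = net(torch.randn(2, 1024, 3), torch.randn(2, 1024, 3))  # (2, 1024, 3)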
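
The abstract of patent 11055549 names only the coarse-to-fine recognition framing, so the sketch below is a generic illustration of that idea rather than the patented network: a shared backbone produces a feature, a coarse head predicts a superclass, and the fine head is conditioned on the coarse prediction. All shapes and class counts are invented.

import torch
import torch.nn as nn


class CoarseToFineClassifier(nn.Module):
    def __init__(self, feat_dim=256, n_coarse=10, n_fine=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.coarse_head = nn.Linear(feat_dim, n_coarse)
        self.fine_head = nn.Linear(feat_dim + n_coarse, n_fine)

    def forward(self, images):                    # images: (B, 3, 32, 32)
        feat = self.backbone(images)
        coarse_logits = self.coarse_head(feat)    # coarse (superclass) prediction
        # Fine prediction is conditioned on the soft coarse prediction.
        fine_in = torch.cat([feat, coarse_logits.softmax(dim=-1)], dim=-1)
        return coarse_logits, self.fine_head(fine_in)


model = CoarseToFineClassifier()
coarse, fine = model(torch.randn(4, 3, 32, 32))   # (4, 10), (4, 100)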
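
For publication 20170228618, the abstract describes training a neural network classifier on a feature combination of a video while accounting for the relationship between video features and the videos' semantic relationships; how that relationship is modeled is not specified, so the pairwise regularizer below is purely an illustrative assumption, as are all dimensions and the choice of appearance and motion features.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoClassifier(nn.Module):
    def __init__(self, appearance_dim=2048, motion_dim=1024, n_classes=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(appearance_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, appearance_feat, motion_feat):
        # Classify from the concatenated feature combination of the video.
        return self.net(torch.cat([appearance_feat, motion_feat], dim=-1))


def semantic_consistency_loss(logits, labels):
    """Illustrative regularizer: videos with the same label (a crude stand-in for
    'semantically related') are encouraged to have similar predictions."""
    probs = logits.softmax(dim=-1)
    same = (labels[:, None] == labels[None, :]).float()
    dists = torch.cdist(probs, probs)
    return (same * dists).sum() / same.sum().clamp(min=1)


model = VideoClassifier()
appearance, motion = torch.randn(8, 2048), torch.randn(8, 1024)
labels = torch.randint(0, 50, (8,))
logits = model(appearance, motion)
loss = F.cross_entropy(logits, labels) + 0.1 * semantic_consistency_loss(logits, labels)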
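
For patent 9465992 (scene recognition via local detectors), here is a minimal sketch of the described pipeline: several local detectors respond to local areas of a target, their pooled responses form a part-based feature of the scene, and a classifier recognizes the scene from that feature. In this sketch the detectors are learnable convolutional templates scored densely over the image; the detector form, pooling, and sizes are all assumptions, and the patent obtains its detectors by training on a labeled image set rather than end to end as shown here.

import torch
import torch.nn as nn


class LocalDetectorSceneNet(nn.Module):
    def __init__(self, n_detectors=20, n_scenes=8):
        super().__init__()
        # Each output channel acts as one local-area detector applied across the image.
        self.detectors = nn.Conv2d(3, n_detectors, kernel_size=16, stride=8)
        self.classifier = nn.Linear(n_detectors, n_scenes)

    def forward(self, images):                            # images: (B, 3, H, W)
        responses = torch.sigmoid(self.detectors(images))  # (B, D, h, w) detection maps
        part_feature = responses.amax(dim=(2, 3))           # strongest response per detector
        return self.classifier(part_feature)                # scene logits (B, n_scenes)


model = LocalDetectorSceneNet()
logits = model(torch.randn(4, 3, 128, 128))                 # (4, 8)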