Patents by Inventor Yugang Jiang

Yugang Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11275830
    Abstract: Systems and methods for video backdoor attack include a trigger generation module for generating a universal adversarial trigger pattern specific to a task; an adversarial perturbation module for producing videos with manipulated features; and a poisoning and inference module for injecting the generated trigger into perturbed videos as poisoned samples for training; wherein the trigger pattern is patched and optimized on videos from all non-target classes but relabeled to a target class, and the universal adversarial trigger pattern is generated by minimizing the cross-entropy loss.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: March 15, 2022
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Shihao Zhao, Xingjun Ma, Jingjing Chen
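
    A minimal sketch of the trigger-optimization step described in patent 11275830: the trigger pattern is patched onto videos from non-target classes that are relabeled to the target class, and is optimized by minimizing cross-entropy loss. The tensor shapes, patch location, [-1, 1] pixel range, and the `model`/`loader` interfaces are assumptions for illustration, not the patented method.

    ```python
    import torch
    import torch.nn.functional as F

    def optimize_universal_trigger(model, loader, target_class, patch_size=16,
                                   steps=50, lr=0.01, device="cpu"):
        """Optimize one trigger patch shared across videos from all non-target
        classes, which are relabeled to `target_class` (poisoned samples).

        `model` is assumed to map a video tensor of shape (B, C, T, H, W) to
        class logits; pixel values are assumed to be normalized to [-1, 1].
        """
        # shape (C, 1, patch, patch) so it broadcasts over batch and time
        trigger = torch.zeros(3, 1, patch_size, patch_size,
                              device=device, requires_grad=True)
        opt = torch.optim.Adam([trigger], lr=lr)
        for _ in range(steps):
            for videos, _ in loader:                    # original labels are ignored
                videos = videos.to(device).clone()
                # patch the trigger onto the bottom-right corner of every frame
                videos[..., -patch_size:, -patch_size:] = torch.tanh(trigger)
                labels = torch.full((videos.size(0),), target_class,
                                    dtype=torch.long, device=device)
                # minimize cross-entropy toward the target class
                loss = F.cross_entropy(model(videos), labels)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return torch.tanh(trigger).detach()
    ```
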
  • Patent number: 11276207
    Abstract: An image processing method for a computer device is provided. The method includes obtaining a to-be-processed image belonging to a first image category; inputting the to-be-processed image into a first stage image conversion model, to obtain a first intermediate image; and converting the first intermediate image into a second intermediate image through a second stage image conversion model. The method also includes determining a first weight matrix corresponding to the first intermediate image; determining a second weight matrix corresponding to the second intermediate image; and fusing the first intermediate image and the second intermediate image according to the corresponding first weight matrix and second weight matrix, to obtain a target image corresponding to the to-be-processed image and belonging to a second image category. A sum of the first weight matrix and the second weight matrix is a preset matrix.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Minjun Li, Haozhi Huang, Lin Ma, Wei Liu, Yugang Jiang
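
    A minimal sketch of the fusion step described in patent 11276207, assuming the preset matrix is the all-ones matrix so the two weight matrices are complementary; the weight matrices themselves would be produced elsewhere in the pipeline.

    ```python
    import torch

    def fuse_stages(first_intermediate, second_intermediate, first_weight):
        """Fuses the two intermediate images with complementary pixel-wise weights.

        Assumes `first_weight` lies in [0, 1] and the preset matrix is all-ones,
        so the second weight matrix is simply its complement.
        """
        second_weight = torch.ones_like(first_weight) - first_weight
        return first_weight * first_intermediate + second_weight * second_intermediate
    ```
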
  • Publication number: 20220027462
    Abstract: Systems and methods for video backdoor attack include a trigger generation module for generating a universal adversarial trigger pattern specific to a task; an adversarial perturbation module for producing videos with manipulated features; and a poisoning and inference module for injecting the generated trigger into perturbed videos as poisoned samples for training; wherein the trigger pattern is patched and optimized on videos from all non-target classes but relabeled to a target class, and the universal adversarial trigger pattern is generated by minimizing the cross-entropy loss.
    Type: Application
    Filed: January 26, 2021
    Publication date: January 27, 2022
    Inventors: Yugang Jiang, Shihao Zhao, Xingjun Ma, Jingjing Chen
  • Patent number: 11055549
    Abstract: A network for image processing is provided, and more particularly, a network for coarse-to-fine image recognition.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: July 6, 2021
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
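
    The abstract of patent 11055549 gives no architectural detail, so the following is only a generic illustration of coarse-to-fine recognition, assuming two classification heads over a shared image feature, with the coarse prediction fed into the fine head.

    ```python
    import torch
    import torch.nn as nn

    class CoarseToFineHead(nn.Module):
        """Two classification heads over a shared image feature: the fine head
        also sees the coarse prediction (purely illustrative structure)."""
        def __init__(self, feat_dim, num_coarse, num_fine):
            super().__init__()
            self.coarse_head = nn.Linear(feat_dim, num_coarse)
            self.fine_head = nn.Linear(feat_dim + num_coarse, num_fine)

        def forward(self, feat):                    # feat: (B, feat_dim) from any backbone
            coarse_logits = self.coarse_head(feat)
            fine_input = torch.cat([feat, coarse_logits.softmax(dim=-1)], dim=-1)
            return coarse_logits, self.fine_head(fine_input)
    ```
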
  • Patent number: 10839223
    Abstract: A system for activity localization in videos is described, comprising a visual concept detection module, which produces a plurality of first visual concept vectors, each representing the probabilities of containing visual concepts for one of a plurality of frames sampled from an input video; wherein each of the plurality of first visual concept vectors is dot-multiplied with a second visual concept vector extracted from a given query sentence, resulting in a visual-semantic correlation score; a semantic activity proposal generation module, which generates semantic activity proposals by temporally grouping frames with a high visual-semantic correlation score; and a proposal evaluation and refinement module, which takes the semantic activity proposals, the visual concept vectors and the query sentence as input, and outputs alignment scores and refined boundaries for the proposals. The disclosure also relates to methods thereof.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: November 17, 2020
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Shaoxiang Chen
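
    A minimal sketch of the proposal-generation idea in patent 10839223: each frame's concept vector is dot-multiplied with the query's concept vector, and contiguous runs of high-scoring frames are grouped into proposals. The threshold, minimum length, and grouping rule are assumptions; proposal evaluation and refinement are omitted.

    ```python
    import numpy as np

    def semantic_activity_proposals(frame_concepts, query_concept,
                                    threshold=0.5, min_len=3):
        """frame_concepts: (num_frames, num_concepts) per-frame concept probabilities;
        query_concept: (num_concepts,) vector extracted from the query sentence.
        Returns the per-frame scores and (start, end) index pairs of proposals."""
        scores = frame_concepts @ query_concept        # visual-semantic correlation per frame
        high = scores >= threshold
        proposals, start = [], None
        for i, flag in enumerate(high):
            if flag and start is None:
                start = i                              # a high-scoring run begins
            elif not flag and start is not None:
                if i - start >= min_len:
                    proposals.append((start, i - 1))   # temporally grouped frames
                start = None
        if start is not None and len(high) - start >= min_len:
            proposals.append((start, len(high) - 1))
        return scores, proposals
    ```
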
  • Patent number: 10783401
    Abstract: A method for generating black-box adversarial attacks on video recognition models is provided, comprising a) passing input video frames into a public image model to obtain pixel-wise tentative perturbations; b) partitioning the tentative perturbations into tentative perturbation patches; c) estimating the rectification weight required for each patch via querying the target video model; d) applying the patch-wise rectification weights on the patches to obtain the rectified pixel-wise perturbations; e) applying a one-step projected gradient descent (PGD) perturbation on the input video according to the rectified pixel-wise perturbations; and f) iteratively performing steps a)-e) until the attack succeeds or a query limit is reached. Systems and networks therefor are also provided.
    Type: Grant
    Filed: February 23, 2020
    Date of Patent: September 22, 2020
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Linxi Jiang, Xingjun Ma
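
    A simplified reading of steps a)-f) of patent 10783401. The helpers `image_model_grad` (gradient of a public image model) and `query_model` (black-box loss and success flag from the target video model) are stand-ins, and the sign-only patch weights are a crude substitute for the patented rectification-weight estimation.

    ```python
    import torch

    def patchwise_rectified_pgd(video, label, image_model_grad, query_model,
                                patch=16, epsilon=8 / 255, alpha=1 / 255,
                                max_queries=10000):
        """video: (B, C, T, H, W) in [0, 1]; returns an adversarial video."""
        adv = video.clone()
        queries = 0
        while queries < max_queries:
            base_loss, success = query_model(adv, label)          # query the target video model
            queries += 1
            if success:
                break                                             # f) stop once the attack succeeds
            tentative = image_model_grad(adv).sign()              # a) pixel-wise tentative perturbation
            rectified = torch.zeros_like(tentative)
            H, W = adv.shape[-2:]
            for y in range(0, H, patch):                          # b) partition into patches
                for x in range(0, W, patch):
                    probe = adv.clone()
                    probe[..., y:y + patch, x:x + patch] += (
                        alpha * tentative[..., y:y + patch, x:x + patch])
                    loss, _ = query_model(probe, label)           # c) query to estimate the weight
                    queries += 1
                    weight = 1.0 if loss > base_loss else -1.0    # crude sign-only rectification
                    rectified[..., y:y + patch, x:x + patch] = (  # d) apply patch-wise weight
                        weight * tentative[..., y:y + patch, x:x + patch])
            adv = adv + alpha * rectified.sign()                  # e) one PGD step
            adv = video + (adv - video).clamp(-epsilon, epsilon)  # project into the epsilon-ball
            adv = adv.clamp(0, 1)
        return adv
    ```
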
  • Patent number: 10783709
    Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched between the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched between the three successive deformation blocks. This invention also relates to a system and a method thereof.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: September 22, 2020
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
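
    A structural sketch of the networks described in patent 10783709 (and its sibling 10777003): a VGG-like image feature network with five convolutional groups and four pooling layers between them, and one graph-convolution deformation block of the kind that would be cascaded three times with graph unpooling in between. The channel widths, graph-convolution form, and pooling of image features onto vertices are assumptions.

    ```python
    import torch
    import torch.nn as nn

    def conv_group(in_ch, out_ch, n_convs=2):
        layers = []
        for i in range(n_convs):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                       nn.ReLU()]
        return nn.Sequential(*layers)

    class VGGLikeFeatureNet(nn.Module):
        """Five convolutional groups with four pooling layers between them."""
        def __init__(self):
            super().__init__()
            chans = [3, 64, 128, 256, 512, 512]
            self.groups = nn.ModuleList(
                [conv_group(chans[i], chans[i + 1]) for i in range(5)])
            self.pools = nn.ModuleList([nn.MaxPool2d(2) for _ in range(4)])

        def forward(self, img):
            feats, x = [], img
            for i, group in enumerate(self.groups):
                x = group(x)
                feats.append(x)             # multi-scale features, later pooled onto mesh vertices
                if i < 4:
                    x = self.pools[i](x)
            return feats

    class DeformationBlock(nn.Module):
        """One graph-convolution block that refines vertex coordinates; three such
        blocks with graph unpooling between them would form the cascade."""
        def __init__(self, vert_feat_dim, hidden=128):
            super().__init__()
            self.fc1 = nn.Linear(3 + vert_feat_dim, hidden)
            self.fc2 = nn.Linear(hidden, 3)

        def forward(self, verts, vert_feats, adj):      # verts: (V, 3), adj: normalized (V, V)
            h = torch.relu(adj @ self.fc1(torch.cat([verts, vert_feats], dim=-1)))
            return verts + adj @ self.fc2(h)            # residual coordinate update
    ```
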
  • Patent number: 10777003
    Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched between the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched between the three successive deformation blocks. This invention also relates to a system and a method thereof.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: September 15, 2020
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
  • Publication number: 20200286263
    Abstract: An image processing method for a computer device is provided. The method includes obtaining a to-be-processed image belonging to a first image category; inputting the to-be-processed image into a first stage image conversion model, to obtain a first intermediate image; and converting the first intermediate image into a second intermediate image through a second stage image conversion model. The method also includes determining a first weight matrix corresponding to the first intermediate image; determining a second weight matrix corresponding to the second intermediate image; and fusing the first intermediate image and the second intermediate image according to the corresponding first weight matrix and second weight matrix, to obtain a target image corresponding to the to-be-processed image and belonging to a second image category. A sum of the first weight matrix and the second weight matrix is a preset matrix.
    Type: Application
    Filed: May 21, 2020
    Publication date: September 10, 2020
    Inventors: Minjun LI, Haozhi HUANG, Lin MA, Wei LIU, Yugang JIANG
  • Patent number: 10699129
    Abstract: A system for video captioning is provided, with an encoding module and a decoding module. The encoding module comprises a plurality of encoding units, each receiving a set of video frames, wherein the sets of video frames received by two neighboring encoding units are in chronological order, and the encoding units each produce a spatially attended feature, so that the plurality of encoding units produce a spatially attended feature sequence. The decoding module comprises a decoding unit chronologically receiving a temporally attended feature obtained from the spatially attended feature sequence. Also disclosed is a method thereof.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: June 30, 2020
    Assignee: FUDAN UNIVERSITY
    Inventors: Yugang Jiang, Shaoxiang Chen
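
    A minimal sketch of the two attention steps described in patent 10699129: spatial attention inside each encoding unit and temporal attention over the resulting feature sequence in the decoder. The dot-product scoring and single-layer scorer are assumptions; the recurrent decoder itself is omitted.

    ```python
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Pools the spatial regions of one encoding unit's frame set into a
        single spatially attended feature vector."""
        def __init__(self, feat_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, region_feats):                   # (num_regions, feat_dim)
            weights = torch.softmax(self.score(region_feats), dim=0)
            return (weights * region_feats).sum(dim=0)     # spatially attended feature

    def temporal_attention(decoder_state, encoded_seq):
        """Weights the spatially attended feature sequence by its relevance to
        the current decoder state, giving the temporally attended feature."""
        scores = encoded_seq @ decoder_state               # (num_units,)
        weights = torch.softmax(scores, dim=0)
        return weights @ encoded_seq                       # (feat_dim,)
    ```
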
  • Publication number: 20200082620
    Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched between the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched between the three successive deformation blocks. This invention also relates to a system and a method thereof.
    Type: Application
    Filed: November 12, 2019
    Publication date: March 12, 2020
    Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
  • Publication number: 20200027269
    Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched between the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched between the three successive deformation blocks. This invention also relates to a system and a method thereof.
    Type: Application
    Filed: July 23, 2019
    Publication date: January 23, 2020
    Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
  • Publication number: 20200026942
    Abstract: A network for image processing is provided, and more particularly, a network for coarse-to-fine image recognition.
    Type: Application
    Filed: May 20, 2019
    Publication date: January 23, 2020
    Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
  • Publication number: 20190153570
    Abstract: Provided is a novel cardio-/cerebrovascular stent material of fully degradable magnesium alloy. The fully degradable magnesium alloy comprises magnesium and alloying elements, wherein the weight ratio of magnesium is not less than 85%, and the alloying elements include any one of, or a combination of several of, gadolinium, erbium, thulium, yttrium, neodymium, holmium and zinc. The fully degradable magnesium alloy of the present invention has mechanical properties meeting the requirements of a cardio-/cerebrovascular biological stent, excellent corrosion resistance in vitro as demonstrated by in-vitro immersion corrosion and electrochemical corrosion tests, excellent biocompatibility as indicated by an in-vitro cytotoxicity test, and a controllable degradation rate.
    Type: Application
    Filed: March 3, 2016
    Publication date: May 23, 2019
    Inventors: Qian ZHOU, Yugang JIANG
  • Publication number: 20170228618
    Abstract: A video classification method and apparatus are provided in embodiments of the present invention. The method includes: establishing a neural network classification model according to a relationship between features of video samples and a semantic relationship of the video samples; obtaining a feature combination of a to-be-classified video file; and classifying the to-be-classified video file by using the neural network classification model and the feature combination of the to-be-classified video file. Because the neural network classification model is established according to the relationship between the features of the video samples and the semantic relationship of the video samples, both the feature relationship and the semantic relationship are fully considered. Therefore, video classification accuracy is improved.
    Type: Application
    Filed: April 24, 2017
    Publication date: August 10, 2017
    Inventors: Yugang JIANG, Zuxuan WU, Xiangyang XUE, Zichen GU, Zhenhua CHAI
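
    The abstract of publication 20170228618 does not specify the network, so the following only illustrates classifying a video from a feature combination; how the semantic relationship of the video samples enters model construction is not shown.

    ```python
    import torch
    import torch.nn as nn

    class FeatureFusionClassifier(nn.Module):
        """Maps a video's feature combination (e.g. appearance, motion and audio
        vectors) to class scores; the architecture here is illustrative only."""
        def __init__(self, feat_dims, num_classes, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(sum(feat_dims), hidden),
                nn.ReLU(),
                nn.Linear(hidden, num_classes),
            )

        def forward(self, feature_list):            # one tensor per feature type, each (B, d_i)
            return self.net(torch.cat(feature_list, dim=-1))
    ```
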
  • Patent number: 9465992
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
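
    A minimal sketch of the recognition stage described in patent 9465992, assuming each trained local detector is scored over candidate windows of the scene image and its maximum response forms one dimension of the local-area feature; the detector training step is omitted.

    ```python
    import numpy as np

    def local_area_feature(detector_scores):
        """detector_scores: (num_detectors, num_windows) responses of the trained
        local detectors over candidate windows. Max-pooling each detector's
        responses gives a feature based on the target's local areas."""
        return detector_scores.max(axis=1)

    def recognize_scene(detector_scores, scene_weights, scene_bias):
        """Scores each scene class with a linear classifier over the local-area
        feature and returns the index of the best-scoring scene."""
        feature = local_area_feature(detector_scores)
        scores = scene_weights @ feature + scene_bias    # (num_scenes,)
        return int(np.argmax(scores))
    ```
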
  • Publication number: 20150186726
    Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
    Type: Application
    Filed: March 13, 2015
    Publication date: July 2, 2015
    Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue