Patents by Inventor Yugang Jiang
Yugang Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11275830
Abstract: Systems and methods for video backdoor attack include a trigger generation module for generating a universal adversarial trigger pattern specific to a task; an adversarial perturbation module for producing videos with manipulated features; and a poisoning and inference module for injecting the generated trigger into perturbed videos as poisoned samples for training; wherein the trigger pattern is patched and optimized on videos from all non-target classes but relabeled to a target class, and the trigger pattern is a universal adversarial trigger pattern generated by minimizing the cross-entropy loss.
Type: Grant
Filed: January 26, 2021
Date of Patent: March 15, 2022
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Shihao Zhao, Xingjun Ma, Jingjing Chen
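The abstract outlines an optimization that patches a shared trigger onto non-target-class videos, relabels them to the target class, and minimizes the cross-entropy loss with respect to the trigger. Below is a minimal PyTorch sketch of that loop, assuming a generic video classifier and illustrative shapes, placement, and hyper-parameters; none of these values are taken from the patent.

```python
import torch
import torch.nn.functional as F

def optimize_universal_trigger(model, videos, target_class, patch_size=16,
                               steps=100, lr=0.01):
    """Optimize one trigger patch shared across all non-target-class videos so
    that the classifier predicts `target_class` once the patch is applied.
    Shapes (N, C, T, H, W), the corner placement, and the hyper-parameters
    are assumptions, not the patent's."""
    n, c, t, h, w = videos.shape
    trigger = torch.zeros(c, 1, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([trigger], lr=lr)
    labels = torch.full((n,), target_class, dtype=torch.long)  # relabeled to the target class

    for _ in range(steps):
        patched = videos.clone()
        # patch the trigger onto the bottom-right corner of every frame
        patched[:, :, :, -patch_size:, -patch_size:] = trigger.clamp(0, 1)
        loss = F.cross_entropy(model(patched), labels)  # minimize cross-entropy loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return trigger.detach().clamp(0, 1)
```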
-
Patent number: 11276207
Abstract: An image processing method for a computer device. The method includes obtaining a to-be-processed image belonging to a first image category; inputting the to-be-processed image into a first stage image conversion model, to obtain a first intermediate image; and converting the first intermediate image into a second intermediate image through a second stage image conversion model. The method also includes determining a first weight matrix corresponding to the first intermediate image; determining a second weight matrix corresponding to the second intermediate image; and fusing the first intermediate image and the second intermediate image according to the corresponding first weight matrix and second weight matrix, to obtain a target image corresponding to the to-be-processed image and belonging to a second image category. A sum of the first weight matrix and the second weight matrix is a preset matrix.
Type: Grant
Filed: May 21, 2020
Date of Patent: March 15, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Minjun Li, Haozhi Huang, Lin Ma, Wei Liu, Yugang Jiang
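The core of the abstract is the fusion step: the two intermediate images are blended with per-pixel weight matrices whose sum is a preset matrix. A minimal, hypothetical PyTorch illustration follows, assuming the preset matrix is all-ones and the stage models have already produced the two intermediate images:

```python
import torch

def fuse_intermediate_images(first_image, second_image, first_weight):
    """Blend the two intermediate images with per-pixel weight matrices whose
    sum is a preset matrix; here the preset matrix is assumed to be all-ones
    and `first_weight` to lie in [0, 1]. Shapes: (C, H, W) images, (1, H, W) weights."""
    second_weight = torch.ones_like(first_weight) - first_weight
    return first_weight * first_image + second_weight * second_image

# illustrative use with random stand-ins for the two stage outputs
first = torch.rand(3, 256, 256)
second = torch.rand(3, 256, 256)
alpha = torch.rand(1, 256, 256)
target = fuse_intermediate_images(first, second, alpha)
```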
-
Publication number: 20220027462
Abstract: Systems and methods for video backdoor attack include a trigger generation module for generating a universal adversarial trigger pattern specific to a task; an adversarial perturbation module for producing videos with manipulated features; and a poisoning and inference module for injecting the generated trigger into perturbed videos as poisoned samples for training; wherein the trigger pattern is patched and optimized on videos from all non-target classes but relabeled to a target class, and the trigger pattern is a universal adversarial trigger pattern generated by minimizing the cross-entropy loss.
Type: Application
Filed: January 26, 2021
Publication date: January 27, 2022
Inventors: Yugang Jiang, Shihao Zhao, Xingjun Ma, Jingjing Chen
-
Patent number: 11055549
Abstract: A network for image processing is provided, and more particularly, for coarse-to-fine recognition in image processing.
Type: Grant
Filed: May 20, 2019
Date of Patent: July 6, 2021
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
-
Patent number: 10839223
Abstract: A system for activity localization in videos is described, comprising a visual concept detection module, which produces a plurality of first visual concept vectors, each representing a probability of containing visual concepts for one of a plurality of sampled frames sampled from an input video, wherein each of the plurality of first visual concept vectors is dot-multiplied with a second visual concept vector extracted from a given query sentence, resulting in a visual-semantic correlation score; a semantic activity proposal generation module, which generates semantic activity proposals by temporally grouping frames with a high visual-semantic correlation score; and a proposal evaluation and refinement module, which takes the semantic activity proposals, the visual concept vectors and the query sentence as input, and outputs alignment scores and refined boundaries for the proposals. The disclosure also relates to methods thereof.
Type: Grant
Filed: November 14, 2019
Date of Patent: November 17, 2020
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Shaoxiang Chen
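As a reading aid, here is a short numpy sketch of the proposal generation step the abstract describes: per-frame concept vectors are scored against the query's concept vector by a dot product, and consecutive high-scoring frames are grouped into temporal proposals. The threshold and shapes are assumptions, not values from the patent.

```python
import numpy as np

def generate_semantic_proposals(frame_concepts, query_concepts, threshold=0.5):
    """Score each sampled frame by the dot product of its visual-concept vector
    with the query-sentence concept vector, then group consecutive high-scoring
    frames into temporal proposals. frame_concepts: (num_frames, dim),
    query_concepts: (dim,). The threshold is an illustrative assumption."""
    scores = frame_concepts @ query_concepts          # (num_frames,) correlation scores
    proposals, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                                  # a proposal opens
        elif s < threshold and start is not None:
            proposals.append((start, i - 1))           # the proposal closes
            start = None
    if start is not None:
        proposals.append((start, len(scores) - 1))
    return scores, proposals
```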
-
Patent number: 10783709
Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched by the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched by the three successive deformation blocks. This invention also relates to a system and a method thereof.
Type: Grant
Filed: November 12, 2019
Date of Patent: September 22, 2020
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
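The deformation blocks in such a cascaded mesh network are built from graph convolutions over the mesh adjacency. The sketch below shows one generic graph-convolution layer predicting per-vertex offsets from vertex coordinates concatenated with pooled image features; the vertex count, feature sizes, and identity adjacency are placeholders, not the patented design.

```python
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """One graph-convolution layer of the kind used inside a mesh deformation
    block: vertex features are mixed along mesh edges through a normalized
    adjacency matrix. A generic sketch, not the patented architecture."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, vertex_features, norm_adjacency):
        # norm_adjacency: (V, V); vertex_features: (V, in_dim)
        return self.linear(norm_adjacency @ vertex_features)

# illustrative use: predict per-vertex offsets from coordinates plus pooled image features
vertices = torch.rand(156, 3)            # initial ellipsoid vertices (count is an assumption)
image_features = torch.rand(156, 1280)   # per-vertex features pooled from the image encoder
adjacency = torch.eye(156)               # stand-in for the normalized mesh adjacency
layer = GraphConvolution(3 + 1280, 3)
deformed = vertices + layer(torch.cat([vertices, image_features], dim=1), adjacency)
```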
-
Patent number: 10783401
Abstract: A method for generating black-box adversarial attacks on video recognition models is provided, comprising a) passing input video frames into a public image model, to obtain pixel-wise tentative perturbations; b) partitioning the tentative perturbations into tentative perturbation patches; c) estimating the rectification weight required for each patch, via querying the target video model; d) applying the patch-wise rectification weight on the patches, to obtain the rectified pixel-wise perturbations; e) applying one step of projected gradient descent (PGD) perturbation on the input video, according to the rectified pixel-wise perturbations; and f) iteratively performing steps a)-e) until an attack succeeds or a query limit is reached. Systems and networks therefor are also provided.
Type: Grant
Filed: February 23, 2020
Date of Patent: September 22, 2020
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Linxi Jiang, Xingjun Ma
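Steps a)-f) describe an iterative, query-limited loop. The following numpy sketch mirrors that loop, assuming the caller supplies the substitute-model perturbation function, the patch-wise rectification routine, and the target-model query as callables; the PGD step size, perturbation budget, and query budget are illustrative assumptions.

```python
import numpy as np

def patchwise_rectified_attack(video, true_label, tentative_perturbation_fn,
                               rectification_fn, query_target_model,
                               step_size=1/255, epsilon=8/255, max_queries=10000):
    """Iterative black-box attack loop following the abstract's steps a)-f).
    All callables and budgets are assumptions, not the patented implementation."""
    adversarial = video.copy()
    queries = 0
    while queries < max_queries:
        tentative = tentative_perturbation_fn(adversarial)         # a) pixel-wise tentative perturbations
        weights, used = rectification_fn(adversarial, tentative)   # b)-c) patch-wise rectification weights
        queries += used
        rectified = weights * tentative                            # d) rectified pixel-wise perturbations
        adversarial = adversarial + step_size * np.sign(rectified) # e) one PGD step
        adversarial = np.clip(adversarial, video - epsilon, video + epsilon)
        adversarial = np.clip(adversarial, 0.0, 1.0)
        prediction = query_target_model(adversarial)               # f) check whether the attack succeeds
        queries += 1
        if prediction != true_label:
            return adversarial, queries
    return adversarial, queries
```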
-
Patent number: 10777003
Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched by the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched by the three successive deformation blocks. This invention also relates to a system and a method thereof.
Type: Grant
Filed: July 23, 2019
Date of Patent: September 15, 2020
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
-
Publication number: 20200286263
Abstract: An image processing method for a computer device. The method includes obtaining a to-be-processed image belonging to a first image category; inputting the to-be-processed image into a first stage image conversion model, to obtain a first intermediate image; and converting the first intermediate image into a second intermediate image through a second stage image conversion model. The method also includes determining a first weight matrix corresponding to the first intermediate image; determining a second weight matrix corresponding to the second intermediate image; and fusing the first intermediate image and the second intermediate image according to the corresponding first weight matrix and second weight matrix, to obtain a target image corresponding to the to-be-processed image and belonging to a second image category. A sum of the first weight matrix and the second weight matrix is a preset matrix.
Type: Application
Filed: May 21, 2020
Publication date: September 10, 2020
Inventors: Minjun LI, Haozhi HUANG, Lin MA, Wei LIU, Yugang JIANG
-
Patent number: 10699129
Abstract: A system for video captioning comprises an encoding module and a decoding module. The encoding module comprises a plurality of encoding units, each receiving a set of video frames, wherein the sets of video frames received by two neighboring encoding units are in chronological order; the encoding units each produce a spatially attended feature, so that the plurality of encoding units produce a spatially attended feature sequence. The decoding module comprises a decoding unit chronologically receiving a temporally attended feature obtained from the spatially attended feature sequence. Also disclosed is a method thereof.
Type: Grant
Filed: November 15, 2019
Date of Patent: June 30, 2020
Assignee: FUDAN UNIVERSITY
Inventors: Yugang Jiang, Shaoxiang Chen
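At each decoding step the decoder attends over the sequence of spatially attended features produced by the encoding units. Below is a minimal PyTorch sketch of such a temporal attention step with a learned scoring projection; the scoring form and all shapes are assumptions rather than the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def temporal_attention(decoder_state, encoded_sequence, score_proj):
    """Attend over the sequence of spatially attended features: scores come
    from a learned projection of [decoder state; encoded feature] pairs,
    followed by a softmax over time. Shapes and names are illustrative."""
    # decoder_state: (hidden,), encoded_sequence: (time, feat)
    expanded = decoder_state.unsqueeze(0).expand(encoded_sequence.size(0), -1)
    scores = score_proj(torch.cat([expanded, encoded_sequence], dim=1)).squeeze(-1)  # (time,)
    weights = F.softmax(scores, dim=0)
    return weights @ encoded_sequence   # temporally attended feature, shape (feat,)

# illustrative shapes: hidden=512, feat=1024, 10 encoded chunks
proj = torch.nn.Linear(512 + 1024, 1)
context = temporal_attention(torch.rand(512), torch.rand(10, 1024), proj)
```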
-
Publication number: 20200082620
Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched by the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched by the three successive deformation blocks. This invention also relates to a system and a method thereof.
Type: Application
Filed: November 12, 2019
Publication date: March 12, 2020
Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
-
Publication number: 20200027269
Abstract: This invention relates to a network for generating a 3D shape, including an image feature network, an initial ellipsoid mesh, and a cascaded mesh deformation network. The image feature network is a Visual Geometry Group Net (VGGN) containing five successive convolutional layer groups, and four pooling layers sandwiched by the five convolutional layer groups; and the cascaded mesh deformation network is a graph-based convolution network (GCN) containing three successive deformation blocks, and two graph unpooling layers sandwiched by the three successive deformation blocks. This invention also relates to a system and a method thereof.
Type: Application
Filed: July 23, 2019
Publication date: January 23, 2020
Inventors: Yugang Jiang, Yanwei Fu, Nanyang Wang, Yinda Zhang, Zhuwen Li
-
Publication number: 20200026942
Abstract: A network for image processing is provided, and more particularly, for coarse-to-fine recognition in image processing.
Type: Application
Filed: May 20, 2019
Publication date: January 23, 2020
Inventors: Yugang Jiang, Yanwei Fu, Changmao Cheng, Xiangyang Xue
-
Publication number: 20190153570
Abstract: Provided is a novel cardio-/cerebrovascular stent material of fully degradable magnesium alloy. The fully degradable magnesium alloy comprises magnesium and alloying elements, wherein the weight ratio of magnesium is not less than 85%, and the alloying elements include any one of, or a combination of several of, gadolinium, erbium, thulium, yttrium, neodymium, holmium and zinc. The fully degradable magnesium alloy of the present invention has mechanical properties meeting the requirements of a cardio-/cerebrovascular biological stent, excellent corrosion resistance in vitro as demonstrated by in-vitro immersion corrosion and electrochemical corrosion tests, excellent biocompatibility as indicated by an in-vitro cytotoxicity test, and a controllable degradation rate with good biocompatibility.
Type: Application
Filed: March 3, 2016
Publication date: May 23, 2019
Inventors: Qian ZHOU, Yugang JIANG
-
Publication number: 20170228618
Abstract: A video classification method and apparatus are provided in embodiments of the present invention. The method includes: establishing a neural network classification model according to a relationship between features of video samples and a semantic relationship of the video samples; obtaining a feature combination of a to-be-classified video file; and classifying the to-be-classified video file by using the neural network classification model and the feature combination of the to-be-classified video file. The neural network classification model is established according to the relationship between the features of the video samples and the semantic relationship of the video samples, so the relationship between the features and the semantic relationship are fully considered. Therefore, video classification accuracy is improved.
Type: Application
Filed: April 24, 2017
Publication date: August 10, 2017
Inventors: Yugang JIANG, Zuxuan WU, Xiangyang XUE, Zichen GU, Zhenhua CHAI
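The abstract stays at the level of a feature combination fed to a trained neural network classifier. As a purely illustrative sketch (the fusion by concatenation and the layer sizes are assumptions, not the claimed model), a multi-feature video classifier could look like this:

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Minimal sketch of a classifier over a combination of video features
    (e.g. appearance + motion); the concatenation-based fusion and the
    hidden size are assumptions, not the patented model."""
    def __init__(self, feature_dims, num_classes, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sum(feature_dims), hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, features):
        # features: list of per-modality tensors, each of shape (batch, dim_i)
        return self.net(torch.cat(features, dim=1))

# illustrative use: fuse a 2048-d appearance feature with a 1024-d motion feature
model = FeatureFusionClassifier([2048, 1024], num_classes=101)
logits = model([torch.rand(4, 2048), torch.rand(4, 1024)])
```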
-
Patent number: 9465992
Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
Type: Grant
Filed: March 13, 2015
Date of Patent: October 11, 2016
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue
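The recognition pipeline in the abstract turns local detector responses into a scene feature. A small numpy sketch follows, assuming linear detectors scored over candidate image regions with a max-pooled response per detector; the linear form and all shapes are assumptions, not the patented method.

```python
import numpy as np

def local_detector_feature(region_features, detectors):
    """Build a scene feature from local detectors: each detector scores every
    candidate region, and the maximum response per detector forms one
    dimension of the scene feature. region_features: (num_regions, dim),
    detectors: (num_detectors, dim); linear detectors are an assumption."""
    responses = region_features @ detectors.T   # (num_regions, num_detectors)
    return responses.max(axis=0)                # (num_detectors,) scene feature

# illustrative use: 50 candidate regions, 20 trained local detectors
feature = local_detector_feature(np.random.rand(50, 128), np.random.rand(20, 128))
```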
-
Publication number: 20150186726
Abstract: A scene recognition method and apparatus are provided. The method includes obtaining multiple local detectors by training a training image set, where one local detector in the multiple local detectors corresponds to one local area of a type of target, and the type of target includes at least two local areas; detecting a to-be-recognized scene by using the multiple local detectors, and acquiring a feature, which is based on a local area of the target, of the to-be-recognized scene; and recognizing the to-be-recognized scene according to the feature, which is based on the local area of the target, of the to-be-recognized scene.
Type: Application
Filed: March 13, 2015
Publication date: July 2, 2015
Inventors: Yugang Jiang, Jie Liu, Dong Wang, Yingbin Zheng, Xiangyang Xue