Patents by Inventor Gaoyuan Zhang
Gaoyuan Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12231702
Abstract: The present disclosure provides methods and apparatuses for inserting digital contents into a multi-view video. The multi-view video may comprise a plurality of views. At least one target region in the plurality of views may be identified. At least one digital content to be inserted may be determined. The multi-view video may be updated through adding the at least one digital content into the at least one target region.
Type: Grant
Filed: August 31, 2021
Date of Patent: February 18, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jie Yang, Qi Zhang, Guosheng Sun, Gaoyuan Zhang
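A minimal sketch of the step this abstract describes: compositing a digital content tile into an identified target region of every view of a multi-view frame. The region coordinates, array shapes, and function name are illustrative assumptions, not taken from the patent; a real system would also warp the content per view.

```python
import numpy as np

def insert_content(views, region, content):
    """Copy `content` into `region` = (y, x, h, w) of each view."""
    y, x, h, w = region
    out = []
    for view in views:
        updated = view.copy()
        updated[y:y + h, x:x + w] = content  # naive overwrite per view
        out.append(updated)
    return out

# Three identical 8x8 RGB views with a white 2x2 tile inserted at (1, 1).
views = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(3)]
content = np.full((2, 2, 3), 255, dtype=np.uint8)
updated = insert_content(views, (1, 1, 2, 2), content)
```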
-
Publication number: 20230388563
Abstract: The present disclosure provides methods and apparatuses for inserting digital contents into a multi-view video. The multi-view video may comprise a plurality of views. At least one target region in the plurality of views may be identified. At least one digital content to be inserted may be determined. The multi-view video may be updated through adding the at least one digital content into the at least one target region.
Type: Application
Filed: August 31, 2021
Publication date: November 30, 2023
Inventors: Jie Yang, Qi Zhang, Guosheng Sun, Gaoyuan Zhang
-
Publication number: 20230004754
Abstract: Adversarial patches can be inserted into sample pictures by an adversarial image generator to realistically depict adversarial images. The adversarial image generator can be utilized to train an adversarial patch generator by inserting generated patches into sample pictures, and submitting the resulting adversarial images to object detection models. This way, the adversarial patch generator can be trained to generate patches capable of defeating object detection models.
Type: Application
Filed: June 30, 2021
Publication date: January 5, 2023
Inventors: Quanfu Fan, Sijia Liu, Gaoyuan Zhang, Kaidi Xu
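A hedged sketch of the patch-insertion step in this abstract: pasting a generated adversarial patch into a sample picture so the resulting adversarial image can be scored against an object detector. All names, shapes, and the fixed paste location are illustrative assumptions.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Return a copy of `image` with `patch` pasted at (top, left)."""
    adv = image.copy()
    h, w = patch.shape[:2]
    adv[top:top + h, left:left + w] = patch
    return adv

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
patch = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a learned patch
adv = apply_patch(image, patch, 6, 6)
```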
-
Patent number: 11443069
Abstract: An illustrative embodiment includes a method for protecting a machine learning model. The method includes: determining concept-level interpretability of respective units within the model; determining sensitivity of the respective units within the model to an adversarial attack; identifying units within the model which are both interpretable and sensitive to the adversarial attack; and enhancing defense against the adversarial attack by masking at least a portion of the units identified as both interpretable and sensitive to the adversarial attack.
Type: Grant
Filed: September 3, 2019
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventors: Sijia Liu, Quanfu Fan, Gaoyuan Zhang, Chuang Gan
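A sketch of the defense described above: score each unit for interpretability and adversarial sensitivity, then mask the units that rank high on both. The scores and the 0.5 threshold are made-up illustrations, not values from the patent.

```python
import numpy as np

# Hypothetical per-unit scores.
interpretability = np.array([0.9, 0.2, 0.8, 0.1])
sensitivity      = np.array([0.7, 0.9, 0.8, 0.2])

# Units that are both interpretable and sensitive to the attack.
both = (interpretability > 0.5) & (sensitivity > 0.5)
mask = np.where(both, 0.0, 1.0)  # zero out the flagged units

activations = np.array([1.0, 1.0, 1.0, 1.0])
defended = activations * mask
```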
-
Publication number: 20220261626
Abstract: Scalable distributed adversarial training techniques for robust deep neural networks are provided. In one aspect, a method for adversarial training of a deep neural network-based model by distributed computing machines M includes, by distributed computing machines M: obtaining adversarial perturbation-modified training examples for samples in a local dataset D(i); computing gradients of a local cost function f_i with respect to parameters θ of the deep neural network-based model using the adversarial perturbation-modified training examples; transmitting the gradients of the local cost function f_i to a server which aggregates the gradients of the local cost function f_i and transmits an aggregated gradient to the distributed computing machines M; and updating the parameters θ of the deep neural network-based model stored at each of the distributed computing machines M based on the aggregated gradient received from the server.
Type: Application
Filed: February 8, 2021
Publication date: August 18, 2022
Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Songtao Lu
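A minimal sketch of the aggregation loop this abstract outlines: each machine computes a gradient of its local cost f_i on adversarially perturbed examples, the server averages the gradients, and every machine applies the same update to θ. The gradients, learning rate, and function names are illustrative assumptions.

```python
import numpy as np

def server_aggregate(local_grads):
    """Server step: average the gradients sent by the M machines."""
    return np.mean(local_grads, axis=0)

def worker_update(theta, agg_grad, lr=0.1):
    """Worker step: apply the aggregated gradient to the local copy of theta."""
    return theta - lr * agg_grad

# Toy gradients from M = 2 machines.
local_grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
theta = np.zeros(2)
agg = server_aggregate(local_grads)
theta = worker_update(theta, agg)
```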
-
Patent number: 11397891
Abstract: Embodiments relate to a system, program product, and method to support a convolutional neural network (CNN). A class-specific discriminative image region is localized to interpret a prediction of a CNN and to apply a class activation map (CAM) function to received input data. First and second attacks are generated on the CNN with respect to the received input data. The first attack generates first perturbed data and a corresponding first CAM, and the second attack generates second perturbed data and a corresponding second CAM. An interpretability discrepancy is measured to quantify one or more differences between the first CAM and the second CAM. The measured interpretability discrepancy is applied to the CNN. The application is a response to an inconsistency between the first CAM and the second CAM and functions to strengthen the CNN against an adversarial attack.
Type: Grant
Filed: January 14, 2020
Date of Patent: July 26, 2022
Assignee: International Business Machines Corporation
Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Akhilan Boopathy
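A hedged sketch of the measurement step above: two attacks yield two class activation maps, and their disagreement is quantified. The mean absolute difference used here is a stand-in metric; the patent's exact discrepancy measure is not given in the abstract.

```python
import numpy as np

def interpretability_discrepancy(cam_a, cam_b):
    """Mean absolute difference between two class activation maps."""
    return float(np.abs(cam_a - cam_b).mean())

# Toy 2x2 CAMs produced by two different attacks.
cam_first_attack  = np.array([[0.9, 0.1], [0.0, 0.0]])
cam_second_attack = np.array([[0.1, 0.9], [0.0, 0.0]])
score = interpretability_discrepancy(cam_first_attack, cam_second_attack)
```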
-
Patent number: 11394742
Abstract: One or more computer processors generate a plurality of adversarial perturbations associated with a model, wherein the plurality of adversarial perturbations comprises a universal perturbation and one or more per-sample perturbations. The one or more computer processors identify a plurality of neuron activations associated with the model and the plurality of generated adversarial perturbations. The one or more computer processors maximize the identified plurality of neuron activations. The one or more computer processors determine the model is a Trojan model by leveraging one or more similarities associated with the maximized neuron activations and the generated adversarial perturbations.
Type: Grant
Filed: August 17, 2020
Date of Patent: July 19, 2022
Assignee: International Business Machines Corporation
Inventors: Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Gaoyuan Zhang, Meng Wang, Ren Wang
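A sketch of the similarity check this abstract leverages: if the neuron activations maximized under a universal perturbation closely match those maximized under per-sample perturbations, the model is flagged as a Trojan model. Cosine similarity, the activation values, and the 0.9 threshold are all assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy maximized activations under the two perturbation types.
universal_act  = np.array([0.0, 5.0, 0.1])
per_sample_act = np.array([0.1, 4.8, 0.0])

is_trojan = cosine_similarity(universal_act, per_sample_act) > 0.9
```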
-
Publication number: 20220053005
Abstract: One or more computer processors generate a plurality of adversarial perturbations associated with a model, wherein the plurality of adversarial perturbations comprises a universal perturbation and one or more per-sample perturbations. The one or more computer processors identify a plurality of neuron activations associated with the model and the plurality of generated adversarial perturbations. The one or more computer processors maximize the identified plurality of neuron activations. The one or more computer processors determine the model is a Trojan model by leveraging one or more similarities associated with the maximized neuron activations and the generated adversarial perturbations.
Type: Application
Filed: August 17, 2020
Publication date: February 17, 2022
Inventors: Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Gaoyuan Zhang, Meng Wang, Ren Wang
-
Publication number: 20210334646
Abstract: A method of utilizing a computing device to optimize weights within a neural network to avoid adversarial attacks includes receiving, by a computing device, a neural network for optimization. The method further includes determining, by the computing device, on a region by region basis one or more robustness bounds for weights within the neural network. The robustness bounds indicate values beyond which the neural network generates an erroneous output upon performing an adversarial attack on the neural network. The computing device further averages all robustness bounds on the region by region basis. The computing device additionally optimizes weights for adversarial proofing the neural network based at least in part on the averaged robustness bounds.
Type: Application
Filed: April 28, 2020
Publication date: October 28, 2021
Inventors: Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
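A minimal sketch of the region-by-region procedure above: collect robustness bounds for the weights in each region, average them per region, and use the averages to constrain the weights. The region names, bound values, and clipping rule are illustrative assumptions, not the patent's optimization.

```python
import numpy as np

# Hypothetical robustness bounds gathered per region of the network.
region_bounds = {
    "conv1": [0.3, 0.5, 0.4],
    "fc":    [0.1, 0.2],
}

# Average all robustness bounds on a region-by-region basis.
avg_bounds = {r: float(np.mean(b)) for r, b in region_bounds.items()}

# Toy "adversarial proofing": clip each region's weights to its averaged bound.
weights = {"conv1": np.array([0.9, -0.2]), "fc": np.array([0.5])}
hardened = {r: np.clip(w, -avg_bounds[r], avg_bounds[r])
            for r, w in weights.items()}
```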
-
Publication number: 20210216859
Abstract: Embodiments relate to a system, program product, and method to support a convolutional neural network (CNN). A class-specific discriminative image region is localized to interpret a prediction of a CNN and to apply a class activation map (CAM) function to received input data. First and second attacks are generated on the CNN with respect to the received input data. The first attack generates first perturbed data and a corresponding first CAM, and the second attack generates second perturbed data and a corresponding second CAM. An interpretability discrepancy is measured to quantify one or more differences between the first CAM and the second CAM. The measured interpretability discrepancy is applied to the CNN. The application is a response to an inconsistency between the first CAM and the second CAM and functions to strengthen the CNN against an adversarial attack.
Type: Application
Filed: January 14, 2020
Publication date: July 15, 2021
Applicant: International Business Machines Corporation
Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Akhilan Boopathy
-
Publication number: 20210064785
Abstract: An illustrative embodiment includes a method for protecting a machine learning model. The method includes: determining concept-level interpretability of respective units within the model; determining sensitivity of the respective units within the model to an adversarial attack; identifying units within the model which are both interpretable and sensitive to the adversarial attack; and enhancing defense against the adversarial attack by masking at least a portion of the units identified as both interpretable and sensitive to the adversarial attack.
Type: Application
Filed: September 3, 2019
Publication date: March 4, 2021
Inventors: Sijia Liu, Quanfu Fan, Gaoyuan Zhang, Chuang Gan
-
Publication number: 20030108042
Abstract: Known techniques for characterizing network traffic are based on comparing new traffic with lists of older, known traffic. Performance degrades when such lists are long, as they are in Internet applications. Furthermore, the comparison process often requires a database lookup and hence must take place at the application level. In contrast, a technique is presented that uses geometric regions in a low-dimensional space to characterize network traffic. A packet of new traffic is classified by mapping the header of the packet to a point in the low-dimensional space and comparing the point to the geometric regions. Comparison is cheap, and can be carried out in the protocol layer. The approach can be applied to intrusion and novelty detection and to automatic quality of service or content determination.
Type: Application
Filed: November 4, 2002
Publication date: June 12, 2003
Inventors: David Skillicorn, Gaoyuan Zhang
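A sketch of the classification idea in this abstract: map a packet header to a point in a low-dimensional space and test it against geometric regions. The 2-D projection of port numbers, the axis-aligned boxes, and the region labels are made-up illustrations of the general scheme.

```python
# label: (x_min, x_max, y_min, y_max) boxes in a hypothetical 2-D space.
REGIONS = {
    "normal":     (0.0, 0.5, 0.0, 0.5),
    "suspicious": (0.5, 1.0, 0.5, 1.0),
}

def header_to_point(src_port, dst_port):
    """Hypothetical projection of header fields into [0, 1] x [0, 1]."""
    return (src_port / 65535.0, dst_port / 65535.0)

def classify(point):
    """Return the label of the first region containing the point."""
    for label, (x0, x1, y0, y1) in REGIONS.items():
        if x0 <= point[0] <= x1 and y0 <= point[1] <= y1:
            return label
    return "unknown"

p = header_to_point(4000, 8000)
label = classify(p)
```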