Patents by Inventor Gaoyuan Zhang

Gaoyuan Zhang has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230388563
    Abstract: The present disclosure provides methods and apparatuses for inserting digital contents into a multi-view video. The multi-view video may comprise a plurality of views. At least one target region in the plurality of views may be identified. At least one digital content to be inserted may be determined. The multi-view video may be updated through adding the at least one digital content into the at least one target region.
    Type: Application
    Filed: August 31, 2021
    Publication date: November 30, 2023
    Inventors: Jie Yang, Qi Zhang, Guosheng Sun, Gaoyuan Zhang
  • Publication number: 20230004754
    Abstract: Adversarial patches can be inserted into sample pictures by an adversarial image generator to realistically depict adversarial images. The adversarial image generator can be utilized to train an adversarial patch generator by inserting generated patches into sample pictures, and submitting the resulting adversarial images to object detection models. This way, the adversarial patch generator can be trained to generate patches capable of defeating object detection models.
    Type: Application
    Filed: June 30, 2021
    Publication date: January 5, 2023
    Inventors: Quanfu Fan, Sijia Liu, Gaoyuan Zhang, Kaidi Xu
  • Patent number: 11443069
    Abstract: An illustrative embodiment includes a method for protecting a machine learning model. The method includes: determining concept-level interpretability of respective units within the model; determining sensitivity of the respective units within the model to an adversarial attack; identifying units within the model which are both interpretable and sensitive to the adversarial attack; and enhancing defense against the adversarial attack by masking at least a portion of the units identified as both interpretable and sensitive to the adversarial attack.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: September 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Sijia Liu, Quanfu Fan, Gaoyuan Zhang, Chuang Gan
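The masking defense described in patent 11443069 can be sketched as follows. This is a minimal illustration that assumes per-unit interpretability and sensitivity scores have already been computed; the score values and the 0.5 thresholds are hypothetical, not from the patent:

```python
import numpy as np

def mask_vulnerable_units(activations, interpretability, sensitivity,
                          interp_thresh=0.5, sens_thresh=0.5):
    """Zero out units that are both concept-level interpretable and
    sensitive to an adversarial attack; all other units pass through
    unchanged. Threshold values are illustrative."""
    vulnerable = (interpretability > interp_thresh) & (sensitivity > sens_thresh)
    return np.where(vulnerable, 0.0, activations)

# Hypothetical scores for four units: only unit 0 is both
# interpretable and sensitive, so only it is masked.
acts = np.array([1.0, 2.0, 3.0, 4.0])
interp = np.array([0.9, 0.2, 0.8, 0.1])
sens = np.array([0.7, 0.9, 0.1, 0.6])
print(mask_vulnerable_units(acts, interp, sens))  # [0. 2. 3. 4.]
```

The element-wise AND of the two score tests implements the patent's "both interpretable and sensitive" criterion; masking only that intersection leaves the rest of the model's capacity intact.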
  • Publication number: 20220261626
    Abstract: Scalable distributed adversarial training techniques for robust deep neural networks are provided. In one aspect, a method for adversarial training of a deep neural network-based model includes, by distributed computing machines M: obtaining adversarial perturbation-modified training examples for samples in a local dataset D(i); computing gradients of a local cost function fi with respect to parameters θ of the deep neural network-based model using the adversarial perturbation-modified training examples; transmitting the gradients of the local cost function fi to a server which aggregates the gradients of the local cost function fi and transmits an aggregated gradient to the distributed computing machines M; and updating the parameters θ of the deep neural network-based model stored at each of the distributed computing machines M based on the aggregated gradient received from the server.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 18, 2022
    Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Songtao Lu
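The distributed training loop in publication 20220261626 can be sketched with a toy linear model. The FGSM-style perturbation, squared-error cost, learning rate, and dataset sizes below are illustrative assumptions, not details from the filing; the structure (local perturbed-gradient computation, server-side averaging, broadcast update) follows the abstract:

```python
import numpy as np

def local_adversarial_gradient(theta, X, y, eps=0.05):
    """One distributed machine: perturb its local dataset, then return
    the gradient of its local squared-error cost at the perturbed data."""
    residual = X @ theta - y
    grad_x = 2 * residual[:, None] * theta[None, :]   # d(loss)/d(x_i)
    X_adv = X + eps * np.sign(grad_x)                 # FGSM-style examples
    residual_adv = X_adv @ theta - y
    return 2 * X_adv.T @ residual_adv / len(y)

def server_round(theta, machines, lr=0.05):
    """Server: average the machines' gradients, apply the update, and
    (conceptually) broadcast the new parameters back to every machine."""
    grads = [local_adversarial_gradient(theta, X, y) for X, y in machines]
    return theta - lr * np.mean(grads, axis=0)

# Three machines, each holding a private local dataset D(i).
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -1.0])
machines = [(X := rng.normal(size=(20, 2)), X @ true_theta) for _ in range(3)]

theta = np.zeros(2)
for _ in range(50):
    theta = server_round(theta, machines)
```

After 50 rounds the shared parameters recover the generating weights to within the perturbation scale, even though every gradient was computed on adversarially modified examples.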
  • Patent number: 11397891
    Abstract: Embodiments relate to a system, program product, and method to support a convolutional neural network (CNN). A class-specific discriminative image region is localized to interpret a prediction of a CNN and to apply a class activation map (CAM) function to received input data. First and second attacks are generated on the CNN with respect to the received input data. The first attack generates first perturbed data and a corresponding first CAM, and the second attack generates second perturbed data and a corresponding second CAM. An interpretability discrepancy is measured to quantify one or more differences between the first CAM and the second CAM. The measured interpretability discrepancy is applied to the CNN. The application is a response to an inconsistency between the first CAM and the second CAM and functions to strengthen the CNN against an adversarial attack.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: July 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Akhilan Boopathy
  • Patent number: 11394742
    Abstract: One or more computer processors generate a plurality of adversarial perturbations associated with a model, wherein the plurality of adversarial perturbations comprises a universal perturbation and one or more per-sample perturbations. The one or more computer processors identify a plurality of neuron activations associated with the model and the plurality of generated adversarial perturbations. The one or more computer processors maximize the identified plurality of neuron activations. The one or more computer processors determine the model is a Trojan model by leveraging one or more similarities associated with the maximized neuron activations and the generated adversarial perturbations.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: July 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Gaoyuan Zhang, Meng Wang, Ren Wang
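The similarity test at the heart of patent 11394742 can be illustrated with a toy check: if each per-sample perturbation aligns closely with the universal perturbation, the model is flagged as a Trojan. The cosine measure and the 0.8 threshold here are hypothetical stand-ins for the patent's activation-based comparison, chosen only to make the idea concrete:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened perturbation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def looks_trojaned(universal, per_sample, thresh=0.8):
    """Flag the model when its per-sample perturbations are, on average,
    strongly aligned with the universal perturbation. The threshold is
    an illustrative choice, not a value from the patent."""
    sims = [cosine(universal, p) for p in per_sample]
    return float(np.mean(sims)) > thresh

rng = np.random.default_rng(1)
u = rng.normal(size=64)                                       # universal perturbation
aligned = [u + 0.1 * rng.normal(size=64) for _ in range(5)]   # Trojan-like case
unrelated = [rng.normal(size=64) for _ in range(5)]           # benign-like case
print(looks_trojaned(u, aligned))    # True
print(looks_trojaned(u, unrelated))  # False
```

The intuition matches the abstract: a Trojan trigger acts like a shortcut shared across samples, so perturbations (and the activations they maximize) collapse toward a common direction, whereas a clean model's per-sample perturbations stay uncorrelated.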
  • Publication number: 20220053005
    Abstract: One or more computer processors generate a plurality of adversarial perturbations associated with a model, wherein the plurality of adversarial perturbations comprises a universal perturbation and one or more per-sample perturbations. The one or more computer processors identify a plurality of neuron activations associated with the model and the plurality of generated adversarial perturbations. The one or more computer processors maximize the identified plurality of neuron activations. The one or more computer processors determine the model is a Trojan model by leveraging one or more similarities associated with the maximized neuron activations and the generated adversarial perturbations.
    Type: Application
    Filed: August 17, 2020
    Publication date: February 17, 2022
    Inventors: Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Gaoyuan Zhang, Meng Wang, Ren Wang
  • Publication number: 20210334646
    Abstract: A method of utilizing a computing device to optimize weights within a neural network to avoid adversarial attacks includes receiving, by a computing device, a neural network for optimization. The method further includes determining, by the computing device, on a region-by-region basis one or more robustness bounds for weights within the neural network. The robustness bounds indicate values beyond which the neural network generates an erroneous output when an adversarial attack is performed on the neural network. The computing device further averages all robustness bounds on a region-by-region basis. The computing device additionally optimizes weights for adversarially proofing the neural network based at least in part on the averaged robustness bounds.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
  • Publication number: 20210216859
    Abstract: Embodiments relate to a system, program product, and method to support a convolutional neural network (CNN). A class-specific discriminative image region is localized to interpret a prediction of a CNN and to apply a class activation map (CAM) function to received input data. First and second attacks are generated on the CNN with respect to the received input data. The first attack generates first perturbed data and a corresponding first CAM, and the second attack generates second perturbed data and a corresponding second CAM. An interpretability discrepancy is measured to quantify one or more differences between the first CAM and the second CAM. The measured interpretability discrepancy is applied to the CNN. The application is a response to an inconsistency between the first CAM and the second CAM and functions to strengthen the CNN against an adversarial attack.
    Type: Application
    Filed: January 14, 2020
    Publication date: July 15, 2021
    Applicant: International Business Machines Corporation
    Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Akhilan Boopathy
  • Publication number: 20210064785
    Abstract: An illustrative embodiment includes a method for protecting a machine learning model. The method includes: determining concept-level interpretability of respective units within the model; determining sensitivity of the respective units within the model to an adversarial attack; identifying units within the model which are both interpretable and sensitive to the adversarial attack; and enhancing defense against the adversarial attack by masking at least a portion of the units identified as both interpretable and sensitive to the adversarial attack.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 4, 2021
    Inventors: Sijia Liu, Quanfu Fan, Gaoyuan Zhang, Chuang Gan
  • Publication number: 20030108042
    Abstract: Known techniques for characterizing network traffic are based on comparing new traffic with lists of older, known traffic. Performance degrades when such lists are long, as they are in Internet applications. Furthermore, the comparison process often requires a database lookup and hence must take place at the application level. In contrast, a technique is presented that uses geometric regions in a low-dimensional space to characterize network traffic. A packet of new traffic is classified by mapping the header of the packet to a point in the low-dimensional space and comparing the point to the geometric regions. Comparison is cheap, and can be carried out in the protocol layer. The approach can be applied to intrusion and novelty detection and to automatic quality-of-service or content determination.
    Type: Application
    Filed: November 4, 2002
    Publication date: June 12, 2003
    Inventors: David Skillicorn, Gaoyuan Zhang