Patents by Inventor Maria-Irina Nicolae

Maria-Irina Nicolae has filed for patents protecting the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11681796
    Abstract: Various embodiments are provided for securing machine learning models by one or more processors in a computing system. One or more hardened machine learning models that are secured against adversarial attacks are provided by applying one or more of a plurality of combinations of selected preprocessing operations from one or more machine learning models, a data set used for hardening the one or more machine learning models, a list of preprocessors, and a selected number of learners.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: June 20, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ngoc Minh Tran, Mathieu Sinn, Maria-Irina Nicolae, Martin Wistuba, Ambrish Rawat, Beat Buesser
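The abstract above describes hardening a model by searching over combinations of preprocessing operations. A minimal sketch of that idea, not the patented procedure; the preprocessors, the scoring function, and all names here are invented for illustration:

```python
import itertools
import numpy as np

def quantize(x, levels=8):
    """Reduce value resolution, destroying fine-grained perturbations."""
    return np.round(x * (levels - 1)) / (levels - 1)

def smooth(x):
    """Average each value with its neighbours (simple 1-D smoothing)."""
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def harden(score_fn, x, preprocessors, max_combo=2):
    """Try every chain of up to `max_combo` preprocessors and return the
    chain that maximises `score_fn` on the preprocessed input."""
    best_chain, best_score = (), score_fn(x)
    for r in range(1, max_combo + 1):
        for chain in itertools.combinations(preprocessors, r):
            x_proc = x
            for p in chain:
                x_proc = p(x_proc)
            s = score_fn(x_proc)
            if s > best_score:
                best_chain, best_score = chain, s
    return best_chain, best_score
```

Here `score_fn` stands in for validating a model on hardened inputs; a real system would score a trained classifier on benign and adversarial samples rather than a single array.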
  • Patent number: 11334671
    Abstract: One or more hardened machine learning models are secured against adversarial attacks by adding adversarial protection to one or more previously trained machine learning models. To generate the hardened machine learning models, the previously trained machine learning models are retrained and extended using preprocessing layers or using additional network layers which test model performance on benign or adversarial samples. A rollback strategy is additionally implemented to retain intermediate model states during the retraining to provide recovery if a training collapse is detected.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: May 17, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
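The rollback strategy mentioned above, retaining intermediate model states and restoring them when training collapses, can be sketched generically. This is an illustrative outline only, with a dict standing in for model parameters and all names invented:

```python
import copy

def retrain_with_rollback(model, steps, train_step, evaluate, collapse_drop=0.5):
    """Retrain `model` in place, snapshotting the best intermediate state
    and rolling back whenever the evaluation score collapses."""
    best_state = copy.deepcopy(model)
    best_score = evaluate(model)
    for _ in range(steps):
        train_step(model)                 # one retraining step (mutates model)
        score = evaluate(model)
        if score < best_score - collapse_drop:
            # Training collapse detected: restore the last good snapshot.
            model.update(copy.deepcopy(best_state))
        elif score > best_score:
            best_state = copy.deepcopy(model)
            best_score = score
    return model, best_score
```

A real implementation would snapshot network weights (e.g. a checkpoint file) and evaluate on held-out benign and adversarial samples, as the abstract describes.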
  • Patent number: 11288408
Abstract: Embodiments are described for providing adversarial protection to computing display devices by a processor. Security defenses may be provided on one or more image display devices against automated media analysis by using adversarial noise, an adversarial patch, or a combination thereof.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: March 29, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
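The two defenses named in the abstract, adversarial noise and an adversarial patch, can be illustrated on an image array. A hypothetical sketch, not the patented method; in practice the patch and noise would be optimized against a target analysis model rather than fixed or random:

```python
import numpy as np

def apply_adversarial_patch(image, patch, top, left):
    """Overlay `patch` onto `image` at (top, left); values clipped to [0, 1]."""
    out = image.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = np.clip(patch, 0.0, 1.0)
    return out

def apply_adversarial_noise(image, eps=0.03, rng=None):
    """Add a perturbation bounded by `eps`: small enough to be unobtrusive
    to viewers, yet able to disrupt automated media analysis."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.uniform(-eps, eps, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)
```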
  • Patent number: 11036857
Abstract: A method for protecting a machine learning model includes: generating a first adversarial example by modifying an original input using an attack tactic, wherein the model accurately classifies the original input but does not accurately classify at least the first adversarial example; training a defender to protect the model from the first adversarial example by updating a strategy of the defender based on predictive results from classifying the first adversarial example; updating the attack tactic based on the predictive results from classifying the first adversarial example; generating a second adversarial example by modifying the original input using the updated attack tactic, wherein the trained defender does not protect the model from the second adversarial example; and training the defender to protect the model from the second adversarial example by updating the strategy of the defender based on results obtained from classifying the second adversarial example.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: June 15, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ngoc Minh Tran, Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae, Martin Wistuba
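The alternating attack-and-defend loop in the abstract resembles standard adversarial training: the attacker crafts examples the current model misclassifies, and the defender retrains on them. A minimal sketch under simplifying assumptions (a linear classifier with a perceptron-style defender update and a fast-gradient-sign attack; not the patented procedure):

```python
import numpy as np

def fgsm(x, y, w, eps):
    """Fast-gradient-sign attack: perturb x to shrink the margin y * (w @ x)."""
    return x - eps * y * np.sign(w)

def adversarial_training(X, y, eps=0.3, epochs=10, lr=0.1):
    """Alternate attack and defense: the attacker perturbs each input with
    the current weights, and the defender updates on any perturbed input
    it misclassifies."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm(xi, yi, w, eps)       # attacker's move
            if yi * (w @ x_adv) <= 0:          # adversarial example fools model
                w += lr * yi * x_adv           # defender retrains on it
    return w
```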
  • Publication number: 20210110045
Abstract: Various embodiments are provided for securing trained machine learning models by one or more processors in a computing system. One or more hardened machine learning models are secured against adversarial attacks by adding adversarial protection to one or more trained machine learning models.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat BUESSER, Maria-Irina NICOLAE, Ambrish RAWAT, Mathieu SINN, Ngoc Minh TRAN, Martin WISTUBA
  • Publication number: 20210110071
Abstract: Embodiments are described for providing adversarial protection to computing display devices by a processor. Security defenses may be provided on one or more image display devices against automated media analysis by using adversarial noise, an adversarial patch, or a combination thereof.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat BUESSER, Maria-Irina NICOLAE, Ambrish RAWAT, Mathieu SINN, Ngoc Minh TRAN, Martin WISTUBA
  • Publication number: 20210073376
    Abstract: Various embodiments are provided for securing machine learning models by one or more processors in a computing system. One or more hardened machine learning models that are secured against adversarial attacks are provided by applying one or more of a plurality of combinations of selected preprocessing operations from one or more machine learning models, a data set used for hardening the one or more machine learning models, a list of preprocessors, and a selected number of learners.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 11, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ngoc Minh TRAN, Mathieu SINN, Maria-Irina NICOLAE, Martin WISTUBA, Ambrish RAWAT, Beat BUESSER
  • Patent number: 10896664
Abstract: Embodiments are described for providing adversarial protection of speech in audio signals by a processor. Security defenses may be provided on one or more audio devices against automated audio analysis of audio signals by using adversarial noise.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: January 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
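The audio defense above parallels the image case: a bounded perturbation added to the waveform. A hypothetical sketch only; a deployed defense would shape the perturbation against a specific transcription or speaker-recognition model rather than draw it at random:

```python
import numpy as np

def protect_speech(waveform, eps=0.005, rng=None):
    """Add a perturbation bounded by `eps` to a waveform in [-1, 1].
    The perturbation is tiny relative to speech amplitude, yet such
    noise can degrade automated audio analysis."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.uniform(-eps, eps, size=waveform.shape)
    return np.clip(waveform + noise, -1.0, 1.0)
```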
  • Publication number: 20200159924
Abstract: A method for protecting a machine learning model includes: generating a first adversarial example by modifying an original input using an attack tactic, wherein the model accurately classifies the original input but does not accurately classify at least the first adversarial example; training a defender to protect the model from the first adversarial example by updating a strategy of the defender based on predictive results from classifying the first adversarial example; updating the attack tactic based on the predictive results from classifying the first adversarial example; generating a second adversarial example by modifying the original input using the updated attack tactic, wherein the trained defender does not protect the model from the second adversarial example; and training the defender to protect the model from the second adversarial example by updating the strategy of the defender based on results obtained from classifying the second adversarial example.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Ngoc Minh Tran, Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae, Martin Wistuba