Patents by Inventor Ambrish Rawat

Ambrish Rawat has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220188629
    Abstract: Techniques for facilitating deep learning model rescaling by computing devices are provided. In one example, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components can comprise: a rescaling component; and a forecasting component. The rescaling component can determine a scaling ratio that maps low mesh resolution predictive data output by a partial differential equation (PDE)-based model for a sub-domain to high-resolution observational or ground-truth data for a domain comprising the sub-domain. The forecasting component can generate high mesh resolution predictive data for the domain with a machine-learning model using input data of the PDE-based model and the scaling ratio. (An illustrative sketch of this rescaling idea appears after this listing.)
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Fearghal O'Donncha, Ambrish Rawat, Sean A. McKenna, Mathieu Sinn
  • Publication number: 20220180174
    Abstract: A computer-implemented method, a computer program product, and a computer system for optimally balancing deployment of a deep learning based surrogate model and a physics based mathematical model in simulating a complex problem. One or more computing devices or servers compare results of running the deep learning based surrogate model with results of partially running the physics based mathematical model or with observations. One or more computing devices or servers output the results of running the deep learning based surrogate model as system outputs of simulating the complex problem, in response to determining that the deep learning based surrogate model is reliable. One or more computing devices or servers output results of running the physics based mathematical model as the system outputs of simulating the complex problem, in response to determining that the deep learning based surrogate model is not reliable. (An illustrative sketch of this surrogate-or-physics decision appears after this listing.)
    Type: Application
    Filed: December 7, 2020
    Publication date: June 9, 2022
    Inventors: Ambrish Rawat, Fearghal O'Donncha, Mathieu Sinn, Sean A. McKenna
  • Patent number: 11334671
    Abstract: One or more hardened machine learning models are secured against adversarial attacks by adding adversarial protection to one or more previously trained machine learning models. To generate the hardened machine learning models, the previously trained machine learning models are retrained and extended using preprocessing layers or additional network layers that test model performance on benign or adversarial samples. A rollback strategy is additionally implemented to retain intermediate model states during the retraining and to provide recovery if a training collapse is detected. (An illustrative sketch of checkpoint-and-rollback retraining appears after this listing.)
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: May 17, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
  • Patent number: 11288408
    Abstract: Embodiments for providing adversarial protection to computing display devices by a processor. Security defenses may be provided on one or more image display devices against automated media analysis by using adversarial noise, an adversarial patch, or a combination thereof.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: March 29, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
  • Publication number: 20220043859
    Abstract: Methods, computer program products and/or systems are provided that perform the following operations: obtaining a collection of objects; constructing a weighted graph based on the collection of objects, wherein the weighted graph preserves neighborhood semantics of the objects in the collection; generating partitions of nodes in the weighted graph of a fixed maximum size utilizing combinatorial partitioning; generating a vector for each node based on the partitions of nodes in the weighted graph; and determining vector representations for objects in the collection and applying those vector representations to gain efficiency (e.g., in computation time and memory requirements) in downstream tasks such as recommendation. (An illustrative sketch of partition-based node vectors appears after this listing.)
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: Debasis Ganguly, Martin Gleize, Ambrish Rawat, Yufang Hou
  • Publication number: 20210312276
    Abstract: Various embodiments are provided for automating decision making for a neural architecture search by one or more processors in a computing system. One or more specifications may be automatically selected for a dataset, tasks, and one or more constraints for a neural architecture search. The neural architecture search may be performed based on the one or more specifications. A deep learning model may be suggested, predicted, and/or configured for the dataset, the tasks, and the one or more constraints based on the neural architecture search.
    Type: Application
    Filed: April 7, 2020
    Publication date: October 7, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ambrish RAWAT, Martin WISTUBA, Beat BUESSER, Mathieu SINN, Sharon QIAN, Suwen LIN
  • Patent number: 11036857
    Abstract: A method for protecting a machine learning model includes: generating a first adversarial example by modifying an original input using an attack tactic, wherein the model accurately classifies the original input but does not accurately classify at least the first adversarial example; training a defender to protect the model from the first adversarial example by updating a strategy of the defender based on predictive results from classifying the first adversarial example; updating the attack tactic based on the predictive results from classifying the first adversarial example; generating a second adversarial example by modifying the original input using the updated attack tactic, wherein the trained defender does not protect the model from the second adversarial example; and training the defender to protect the model from the second adversarial example by updating the strategy of the defender based on results obtained from classifying the second adversarial example. (An illustrative sketch of this attacker-and-defender loop appears after this listing.)
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: June 15, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ngoc Minh Tran, Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae, Martin Wistuba
  • Publication number: 20210110071
    Abstract: Embodiments for providing adversarial protection to computing display devices by a processor. Security defenses may be provided on one or more image display devices against automated media analysis by using adversarial noise, an adversarial patch, or a combination thereof.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat BUESSER, Maria-Irina NICOLAE, Ambrish RAWAT, Mathieu SINN, Ngoc Minh TRAN, Martin WISTUBA
  • Publication number: 20210110045
    Abstract: Various embodiments are provided for securing trained machine learning models by one or more processors in a computing system. One or more hardened machine learning models are secured against adversarial attacks by adding adversarial protection to one or more trained machine learning models.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat BUESSER, Maria-Irina NICOLAE, Ambrish RAWAT, Mathieu SINN, Ngoc Minh TRAN, Martin WISTUBA
  • Publication number: 20210073376
    Abstract: Various embodiments are provided for securing machine learning models by one or more processors in a computing system. One or more hardened machine learning models that are secured against adversarial attacks are provided by applying one or more of a plurality of combinations of selected preprocessing operations from one or more machine learning models, a data set used for hardening the one or more machine learning models, a list of preprocessors, and a selected number of learners.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 11, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ngoc Minh TRAN, Mathieu SINN, Maria-Irina NICOLAE, Martin WISTUBA, Ambrish RAWAT, Beat BUESSER
  • Patent number: 10896664
    Abstract: Embodiments for providing adversarial protection of speech in audio signals by a processor. Security defenses may be provided on one or more audio devices against automated audio analysis of audio signals by using adversarial noise.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: January 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Beat Buesser, Maria-Irina Nicolae, Ambrish Rawat, Mathieu Sinn, Ngoc Minh Tran, Martin Wistuba
  • Publication number: 20200302331
    Abstract: Embodiments for intelligent problem solving using visual input by a processor. Visual data and contextual information associated with the visual data may be collected from one or more Internet of Things ("IoT") computing devices. An interactive dialog may be initiated using the one or more IoT computing devices for receiving one or more instructions, objectives, and contextual information to define a selected task. One or more solutions may be provided for the selected task using the visual data and contextual information.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vincent LONIJ, Debasis GANGULY, Beat BUESSER, Ambrish RAWAT
  • Publication number: 20200159924
    Abstract: A method for protecting a machine learning model includes: generating a first adversarial example by modifying an original input using an attack tactic, wherein the model accurately classifies the original input but does not accurately classify at least the first adversarial example; training a defender to protect the model from the first adversarial example by updating a strategy of the defender based on predictive results from classifying the first adversarial example; updating the attack tactic based on the predictive results from classifying the first adversarial example; generating a second adversarial example by modifying the original input using the updated attack tactic, wherein the trained defender does not protect the model from the second adversarial example; and training the defender to protect the model from the second adversarial example by updating the strategy of the defender based on results obtained from classifying the second adversarial example.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Ngoc Minh Tran, Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae, Martin Wistuba
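
The sketches below are illustrative only. The abstracts above do not disclose concrete algorithms, so every function name, parameter, and heuristic in these snippets is an assumption added for explanation, not the patented method.

The first sketch relates to publication 20220188629: estimating a scaling ratio that maps coarse PDE output onto high-resolution observations, shown here in a toy 1-D setting with the learned forecasting model replaced by a simple multiplication.

```python
# Toy, assumption-laden sketch of the scaling-ratio idea (not the patented method).
import numpy as np

coarse_x = np.linspace(0.0, 1.0, 8)      # low mesh resolution (sub-domain)
fine_x = np.linspace(0.0, 1.0, 64)       # high resolution (full domain)

pde_coarse = np.sin(2 * np.pi * coarse_x)          # PDE-based model output
ground_truth = 1.3 * np.sin(2 * np.pi * fine_x)    # observational / ground-truth data

# Upsample the coarse prediction onto the fine mesh, then estimate a single
# scaling ratio between observations and the upsampled prediction.
pde_upsampled = np.interp(fine_x, coarse_x, pde_coarse)
mask = np.abs(pde_upsampled) > 1e-3                 # avoid dividing by ~0
scaling_ratio = np.median(ground_truth[mask] / pde_upsampled[mask])

# A real system would feed the PDE inputs plus the ratio to a learned
# forecasting model; here the "forecast" is simply the rescaled prediction.
high_res_forecast = scaling_ratio * pde_upsampled
print(f"estimated scaling ratio: {scaling_ratio:.2f}")
```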
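
Publication 20220180174 describes choosing between a fast learned surrogate and a slower physics-based solver based on a reliability check. In the sketch below, the stub models, error metric, and tolerance are hypothetical placeholders.

```python
# Illustrative sketch only: reliability-gated choice between surrogate and physics models.
import numpy as np

def surrogate_model(inputs):
    # Stand-in for a trained deep-learning surrogate.
    return 0.9 * inputs + 0.1

def physics_model(inputs, partial=False):
    # Stand-in for a PDE/physics solver; partial=True mimics a cheap,
    # coarse run used only to sanity-check the surrogate.
    result = np.sqrt(np.abs(inputs)) + inputs
    return result[: len(result) // 2] if partial else result

def simulate(inputs, tolerance=0.5):
    fast = surrogate_model(inputs)
    check = physics_model(inputs, partial=True)          # partial physics run
    error = np.mean(np.abs(fast[: len(check)] - check))  # reliability estimate
    if error <= tolerance:
        return fast                                       # surrogate deemed reliable
    return physics_model(inputs)                          # fall back to full physics

outputs = simulate(np.linspace(0.0, 2.0, 10))
print(outputs)
```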
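
Patent 11334671 mentions retaining intermediate model states during retraining and rolling back if training collapses. The loss-spike collapse test and the toy regression model below are assumptions.

```python
# Illustrative sketch only: checkpointing with rollback on a crude collapse test.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
checkpoints = [w.copy()]                      # retained intermediate states
prev_loss = np.inf

for step in range(50):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w = w - 0.05 * grad
    loss = np.mean((X @ w - y) ** 2)
    if loss > 2.0 * prev_loss:                # crude collapse detection
        w = checkpoints[-1].copy()            # roll back to last good state
        continue
    checkpoints.append(w.copy())
    prev_loss = loss

print(f"final loss: {prev_loss:.4f}")
```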
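
Publication 20220043859 outlines building a weighted neighborhood graph over objects, partitioning its nodes into groups of a fixed maximum size, and deriving node vectors from those partitions. The k-NN graph, greedy partitioner, and membership-count encoding here are assumed stand-ins for the unspecified construction.

```python
# Illustrative sketch only: partition-based node vectors for a toy object collection.
import numpy as np

rng = np.random.default_rng(2)
objects = rng.normal(size=(12, 4))            # toy collection of objects
k, max_partition_size = 3, 4

# Neighborhood graph: each object's k nearest neighbors by Euclidean distance.
dists = np.linalg.norm(objects[:, None] - objects[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)
neighbours = np.argsort(dists, axis=1)[:, :k]

# Greedy partitioning with a fixed maximum partition size.
partition_of = {}
partitions = []
for node in range(len(objects)):
    if node in partition_of:
        continue
    group = [node] + [n for n in neighbours[node]
                      if n not in partition_of][: max_partition_size - 1]
    for member in group:
        partition_of[int(member)] = len(partitions)
    partitions.append(group)

# Node vector: counts of how often each partition appears among a node's
# neighbours (plus its own partition), i.e. a partition-membership profile.
vectors = np.zeros((len(objects), len(partitions)))
for node in range(len(objects)):
    vectors[node, partition_of[node]] += 1
    for n in neighbours[node]:
        vectors[node, partition_of[int(n)]] += 1

print(vectors.shape)
```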
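
Patent 11036857 (and publication 20200159924) alternates between an attacker that updates its tactic and a defender that updates its strategy. Below, an FGSM-style perturbation stands in for the attack and adversarial retraining stands in for the defender; both choices, and all hyperparameters, are assumptions.

```python
# Illustrative sketch only: alternating attack-tactic and defender-strategy updates
# on a toy logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

def predict(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

def train(w, X, y, lr=0.5, steps=200):
    for _ in range(steps):
        grad = X.T @ (predict(w, X) - y) / len(y)
        w = w - lr * grad
    return w

def attack(w, X, y, eps):
    # FGSM-style attack tactic: perturb inputs along the loss-gradient sign.
    grad_x = (predict(w, X) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

w = train(w, X, y)
eps = 0.3
for round_ in range(3):
    X_adv = attack(w, X, y, eps)               # generate adversarial examples
    acc_adv = np.mean((predict(w, X_adv) > 0.5) == y)
    # Defender strategy (assumed): retrain on clean plus adversarial samples.
    w = train(w, np.vstack([X, X_adv]), np.concatenate([y, y]))
    eps *= 1.5                                  # attacker updates its tactic
    print(f"round {round_}: accuracy on adversarial inputs before defense {acc_adv:.2f}")
```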