Patents by Inventor Kanthi Sarpatwar

Kanthi Sarpatwar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013050
    Abstract: An example system includes a processor to prune a machine learning model based on an importance of neurons or weights. The processor is to further permute and pack remaining neurons or weights of the pruned machine learning model to reduce an amount of ciphertext computation under a selected constraint.
    Type: Application
    Filed: July 5, 2022
    Publication date: January 11, 2024
    Inventors: Subhankar Pal, Alper Buyuktosunoglu, Ehud Aharoni, Nir Drucker, Omri Soceanu, Hayim Shaul, Kanthi Sarpatwar, Roman Vaculin, Moran Baruch, Pradip Bose
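The prune-then-pack idea in the abstract above can be sketched in plain NumPy. This is an illustrative toy under assumed details — global magnitude pruning as the "importance" measure, and row reordering as a stand-in for packing surviving weights densely into ciphertext slots; `prune_and_pack` is a hypothetical name, not from the patent:

```python
import numpy as np

def prune_and_pack(W, keep_ratio=0.5):
    """Magnitude-prune a weight matrix, then permute rows so surviving
    weights group together -- a stand-in for packing them into fewer
    ciphertext slots (less homomorphic computation per inference)."""
    # Prune: keep only the largest-magnitude weights globally.
    k = int(W.size * keep_ratio)
    threshold = np.sort(np.abs(W), axis=None)[-k]
    pruned = np.where(np.abs(W) >= threshold, W, 0.0)
    # Permute: order rows by surviving-weight count, so dense rows pack
    # together and all-zero rows can be dropped entirely.
    order = np.argsort(-(pruned != 0).sum(axis=1))
    reordered = pruned[order]
    packed = reordered[(reordered != 0).any(axis=1)]
    return packed, order

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
packed, order = prune_and_pack(W, keep_ratio=0.25)
```

A real system would prune under a constraint chosen for the HE scheme (e.g., slot count), but the two-step structure — prune by importance, then permute and pack the remainder — is the same.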
  • Patent number: 11764941
    Abstract: A method, apparatus and computer program product for homomorphic inference on a decision tree (DT) model. In lieu of HE-based inferencing on the decision tree, the inferencing instead is performed on a neural network (NN), which acts as a surrogate. To this end, the neural network is trained to learn DT decision boundaries, preferably without using the original DT model training data points. During training, a random data set is applied to the DT, and expected outputs are recorded. This random data set and the expected outputs are then used to train the neural network such that the outputs of the neural network match the outputs expected from applying the original data set to the DT. Preferably, the neural network has low depth, just a few layers. HE-based inferencing on the decision tree is done using HE inferencing on the shallow neural network. The latter is computationally efficient and is carried out without the need for bootstrapping.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: September 19, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Nalini K. Ratha, Karthikeyan Shanmugam, Karthik Nandakumar, Sharathchandra Pankanti, Roman Vaculin
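The surrogate-training loop described here — probe the tree with random data, record its outputs, fit a shallow model to them — can be sketched as follows. This is a toy with assumed stand-ins: a depth-1 tree and a single sigmoid unit as the "shallow network"; it is not the patented method:

```python
import numpy as np

def tree_predict(x):
    """Toy decision tree (the original model): split on feature 0 at 0.5."""
    return (x[:, 0] > 0.5).astype(float)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(2000, 2))  # random probe data, not original training data
y = tree_predict(X)                    # record the tree's expected outputs

# Train a shallow surrogate (a single sigmoid unit) to mimic the tree.
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 5.0 * (X.T @ (p - y) / len(X))
    b -= 5.0 * (p - y).mean()

surrogate = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
agreement = (surrogate == y).mean()
```

HE inference then runs against the small surrogate rather than the tree, which is where the efficiency claim comes from.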
  • Patent number: 11694110
    Abstract: An example operation may include one or more of generating, by a plurality of training participant clients, gradient calculations for machine learning model training, each of the training participant clients comprising a training dataset, converting, by a training aggregator coupled to the plurality of training participant clients, the gradient calculations to a plurality of transaction proposals, receiving, by one or more endorser nodes or peers of a blockchain network, the plurality of transaction proposals, executing, by each of the endorser nodes or peers, a verify gradient smart contract, and providing endorsements corresponding to the plurality of transaction proposals to the training aggregator.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: July 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Venkata Sitaramagiridharganesh Ganapavarapu, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
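The endorsement flow can be simulated without a blockchain stack: an endorser peer re-executes the gradient computation from a transaction proposal's own contents and endorses only if it matches the claimed gradient. The proposal schema and function names below are assumptions for illustration:

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of the mean squared error loss 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(X)

def make_proposal(w, X, y):
    """Aggregator side: wrap a participant's gradient calculation as a
    transaction proposal (hypothetical schema)."""
    return {"params": w, "batch": (X, y), "gradient": gradient(w, X, y)}

def endorse(proposal):
    """Endorser peer: re-execute the 'verify gradient' smart contract by
    recomputing the gradient from the proposal's batch and parameters."""
    X, y = proposal["batch"]
    recomputed = gradient(proposal["params"], X, y)
    return bool(np.allclose(recomputed, proposal["gradient"]))

rng = np.random.default_rng(2)
X, y, w = rng.normal(size=(16, 3)), rng.normal(size=16), np.zeros(3)
honest = make_proposal(w, X, y)
forged = dict(honest, gradient=honest["gradient"] + 1.0)
```

Only endorsed proposals would be returned to the aggregator; a forged gradient fails re-execution.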
  • Patent number: 11599806
    Abstract: This disclosure provides a method, apparatus and computer program product to create a fully homomorphic encryption (FHE)-friendly machine learning model. The approach herein leverages a knowledge distillation framework wherein the FHE-friendly (student) ML model closely mimics the predictions of a (teacher) model that, relative to the student model, is more complex and is pre-trained on large datasets. In the approach herein, the distillation framework uses the more complex teacher model to facilitate training of the FHE-friendly model, but with synthetically generated training data in lieu of the original datasets used to train the teacher.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: March 7, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Nalini K. Ratha, Karthikeyan Shanmugam, Karthik Nandakumar, Sharathchandra Pankanti, Roman Vaculin, James Thomas Rayfield
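A minimal sketch of the distillation idea, with assumed stand-ins: a sigmoid function plays the pre-trained teacher, uniform synthetic samples replace the original training data, and a low-degree polynomial plays the FHE-friendly student (polynomials need only additions and multiplications, which FHE schemes evaluate natively):

```python
import numpy as np

def teacher(x):
    """Stand-in for the pre-trained 'teacher': soft probabilities from a
    nonlinear model that is expensive to evaluate under FHE."""
    return 1 / (1 + np.exp(-4 * (x - 0.5)))

rng = np.random.default_rng(3)
x_syn = rng.uniform(0, 1, 500)    # synthetic data, not the original training set
soft_labels = teacher(x_syn)      # distillation targets: the teacher's soft outputs

# FHE-friendly student: a degree-3 polynomial fit to the teacher's outputs.
coeffs = np.polyfit(x_syn, soft_labels, deg=3)
student = np.poly1d(coeffs)

grid = np.linspace(0, 1, 101)
max_err = np.max(np.abs(student(grid) - teacher(grid)))
```

The student never sees the teacher's training data, only its predictions on synthetic inputs — the core of the privacy argument.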
  • Patent number: 11562228
    Abstract: An example operation may include one or more of generating, by a training participant client comprising a training dataset, a plurality of transaction proposals that each correspond to a training iteration for machine learning model training related to stochastic gradient descent, the machine learning model training comprising a plurality of training iterations, the transaction proposals comprising a gradient calculation performed by the training participant client, a batch from the training dataset, a loss function, and an original model parameter, receiving, by one or more endorser nodes or peers of a blockchain network, the plurality of transaction proposals, and evaluating each transaction proposal.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: January 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Venkata Sitaramagiridharganesh Ganapavarapu, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
  • Publication number: 20220374904
    Abstract: A method, apparatus and computer program product that provides multi-phase privacy-preserving inferencing in a high throughput data environment, e.g., to facilitate fraud prediction, detection and prevention. In one embodiment, two (2) machine learning models are used: a first model trained in the clear on first transaction data, and a second model trained in the clear on both the first transaction data and user data. The first model is used to perform inferencing in the clear on the high throughput received data. In this manner, the first model provides a first-level evaluation of whether a particular transaction might be fraudulent. If a transaction is flagged in this first phase, a second, more secure inference is then carried out using the second model. The inferencing performed by the second model is done on homomorphically encrypted data. Thus, only those transactions flagged by the first model are passed to the second model for secure evaluation.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Roman Vaculin, Kanthi Sarpatwar, Hong Min
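The two-phase funnel can be sketched with plain functions: a cheap first-phase score runs on every transaction, and only flagged transactions reach the second-phase check, which in the real system would run under homomorphic encryption. The scoring rules and field names here are invented for illustration:

```python
def phase1_score(txn):
    """Fast first model, run in the clear on every transaction (toy rule)."""
    return txn["amount"] / 10_000

def phase2_secure_check(txn):
    """Stand-in for HE inference with the richer second model; in the real
    system this runs on homomorphically encrypted transaction + user data."""
    return txn["amount"] > 5_000 and txn["new_merchant"]

def pipeline(transactions, threshold=0.3):
    """Only transactions flagged in phase 1 pay the cost of phase 2."""
    flagged = [t for t in transactions if phase1_score(t) >= threshold]
    return [t["id"] for t in flagged if phase2_secure_check(t)]

txns = [
    {"id": 1, "amount": 120,   "new_merchant": False},
    {"id": 2, "amount": 8_000, "new_merchant": True},
    {"id": 3, "amount": 6_000, "new_merchant": False},
]
fraud = pipeline(txns)
```

The design point: HE inference is orders of magnitude slower than cleartext inference, so filtering first keeps the secure path off the high-throughput critical path.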
  • Publication number: 20220376888
    Abstract: Privacy-preserving homomorphic inferencing utilizes batch processing on encrypted data records. Each data record has a private data portion of interest against which the inferencing is carried out. Batch processing is enabled with respect to a set of encrypted data records by techniques that ensure that each encrypted data record has its associated private data portion in a unique location relative to the other data records. The set of encrypted data records are then summed to generate a single encrypted data record against which the inferencing is done. In a first embodiment, the private data portions of interest are selectively and uniquely positioned at runtime (when the inferencing is being applied). In a second embodiment, the private data portions of interest are initially positioned with the data-at-rest, preferably in an off-line process; thereafter, at runtime individual encrypted data records are processed as necessary to adjust the private data portions to unique positions prior to batching.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Roman Vaculin, Ehud Aharoni, James Thomas Rayfield, Omri Soceanu
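The unique-positioning step can be illustrated with plaintext vectors standing in for ciphertext slots: each record's private portion is rotated to a distinct slot index, so summing the batch yields a single record that carries every private value. The slot count and layout are assumptions; real ciphertext rotations are scheme-dependent, and the sketch keeps non-private slots zero for clarity:

```python
import numpy as np

SLOTS = 8

def position(record, idx):
    """Rotate the record so its private portion (held in slot 0 at rest)
    lands in slot idx -- unique per record, mimicking ciphertext rotation."""
    return np.roll(record, idx)

records = [np.zeros(SLOTS) for _ in range(4)]
privates = [3.0, 1.0, 4.0, 1.5]
for r, p in zip(records, privates):
    r[0] = p  # the private data portion of interest

# Sum the uniquely-positioned records into one batched record.
batched = sum(position(r, i) for i, r in enumerate(records))
```

One inference over `batched` then covers the whole batch, which is where the throughput gain comes from.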
  • Patent number: 11502820
    Abstract: A technique for computationally efficient privacy-preserving homomorphic inferencing against a decision tree. Inferencing is carried out by a server against encrypted data points provided by a client. Fully homomorphic computation is enabled with respect to the decision tree by intelligently configuring the tree and the real number-valued features that are applied to the tree. To that end, and to the extent the decision tree is unbalanced, the server first balances the tree. A cryptographic packing scheme is then applied to the balanced decision tree and, in particular, to one or more entries in at least one of: an encrypted feature set, and a threshold data set, that are to be used during the decision tree evaluation process. Upon receipt of an encrypted data point, homomorphic inferencing on the configured decision tree is performed using a highly accurate approximation comparator, which implements a “soft” membership recursive computation on real numbers, all in an oblivious manner.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Nalini K. Ratha, Kanthi Sarpatwar, Karthikeyan Shanmugam, Sharathchandra Pankanti, Karthik Nandakumar, Roman Vaculin
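The "soft membership" recursion can be sketched in plaintext: every comparison returns a smooth value in [0, 1] instead of a branch decision, and every leaf is weighted by the product of the comparisons along its path, so evaluation is oblivious (no data-dependent control flow). The exact sigmoid here is a stand-in; under FHE a low-degree polynomial comparator would be used:

```python
import math

def soft_gt(x, t, k=50.0):
    """Smooth approximation of the comparison x > t."""
    return 1 / (1 + math.exp(-k * (x - t)))

def soft_tree_eval(node, x):
    """Oblivious evaluation: every path is computed and weighted by its
    soft membership, so no data-dependent branching is needed."""
    if "leaf" in node:
        return node["leaf"]
    p = soft_gt(x[node["feature"]], node["threshold"])
    return ((1 - p) * soft_tree_eval(node["left"], x)
            + p * soft_tree_eval(node["right"], x))

# A balanced depth-2 tree (the patent balances the tree first).
tree = {"feature": 0, "threshold": 0.5,
        "left":  {"feature": 1, "threshold": 0.7,
                  "left": {"leaf": 0.0}, "right": {"leaf": 0.0}},
        "right": {"feature": 1, "threshold": 0.3,
                  "left": {"leaf": 0.0}, "right": {"leaf": 1.0}}}

pred = soft_tree_eval(tree, [0.9, 0.8])
```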
  • Patent number: 11475365
    Abstract: An example operation includes one or more of computing, by a data owner node, updated gradients on a loss function based on a batch of private data and previous parameters of a machine learning model associated with a blockchain, encrypting, by the data owner node, update information, recording, by the data owner node, the encrypted update information as a new transaction on the blockchain, and providing the update information for an audit.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: October 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Karthikeyan Shanmugam, Venkata Sitaramagiridharganesh Ganapavarapu, Roman Vaculin
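The record-then-audit pattern can be sketched with a toy hash chain: each encrypted update is appended as a block whose hash commits to the previous block, so a later auditor can detect any tampering. The block schema is an assumption, and a placeholder string stands in for the actual ciphertext:

```python
import hashlib
import json

def record_update(chain, encrypted_update):
    """Append the (already encrypted) update as a new transaction whose
    hash chains to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "update": encrypted_update})
    chain.append({"prev": prev, "update": encrypted_update,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def audit(chain):
    """Recompute every hash link; any tampered update breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"prev": prev, "update": block["update"]})
        if (block["prev"] != prev
                or block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
    return True

chain = []
for round_no in range(3):
    record_update(chain, f"ctxt-{round_no}")  # placeholder ciphertexts
ok_before = audit(chain)
chain[1]["update"] = "tampered"
ok_after = audit(chain)
```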
  • Patent number: 11271958
    Abstract: Aspects of the present disclosure describe techniques for detecting anomalous data in an encrypted data set. An example method generally includes receiving a data set of encrypted data points. A tree data structure having a number of levels is generated for the data set. Each level of the tree data structure generally corresponds to a feature of the plurality of encrypted features, and each node in the tree data structure at a given level represents a probability distribution of a likelihood that each data point is less than or greater than a split value determined for a given feature. An encrypted data point is received for analysis, and an anomaly score is calculated based on a probability identified for each of the plurality of encrypted features. Based on determining that the calculated anomaly score exceeds a threshold value, the encrypted data point is identified as potentially anomalous.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Venkata Sitaramagiridharganesh Ganapavarapu, Saket Sathe, Roman Vaculin
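A plaintext caricature of the scoring idea: one split per feature, with the empirical probability of each side serving as the node's distribution, and the anomaly score accumulating the negative log-probability of the side the point falls on. The 0.9-quantile split and the scoring rule are assumptions for illustration; the patent performs this over encrypted points:

```python
import numpy as np

def fit_splits(X):
    """One level per feature: record a split value and the fraction of
    training points above it -- the node's probability distribution."""
    splits = []
    for j in range(X.shape[1]):
        s = np.quantile(X[:, j], 0.9)
        p_gt = (X[:, j] > s).mean()
        splits.append((s, p_gt))
    return splits

def anomaly_score(x, splits):
    """Sum of -log probabilities of the side the point falls on;
    landing on rare sides drives the score up."""
    score = 0.0
    for j, (s, p_gt) in enumerate(splits):
        p = p_gt if x[j] > s else 1 - p_gt
        score += -np.log(max(p, 1e-9))
    return score

rng = np.random.default_rng(4)
X = rng.normal(0, 1, size=(500, 3))
splits = fit_splits(X)
normal_score = anomaly_score(np.zeros(3), splits)
outlier_score = anomaly_score(np.full(3, 8.0), splits)
```

A point above the threshold in every feature lands on the rare side everywhere and scores far higher than a typical point.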
  • Publication number: 20210397988
    Abstract: This disclosure provides a method, apparatus and computer program product to create a fully homomorphic encryption (FHE)-friendly machine learning model. The approach herein leverages a knowledge distillation framework wherein the FHE-friendly (student) ML model closely mimics the predictions of a (teacher) model that, relative to the student model, is more complex and is pre-trained on large datasets. In the approach herein, the distillation framework uses the more complex teacher model to facilitate training of the FHE-friendly model, but with synthetically generated training data in lieu of the original datasets used to train the teacher.
    Type: Application
    Filed: June 22, 2020
    Publication date: December 23, 2021
    Applicant: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Nalini K. Ratha, Karthikeyan Shanmugam, Karthik Nandakumar, Sharathchandra Pankanti, Roman Vaculin, James Thomas Rayfield
  • Publication number: 20210376995
    Abstract: A technique for computationally efficient privacy-preserving homomorphic inferencing against a decision tree. Inferencing is carried out by a server against encrypted data points provided by a client. Fully homomorphic computation is enabled with respect to the decision tree by intelligently configuring the tree and the real number-valued features that are applied to the tree. To that end, and to the extent the decision tree is unbalanced, the server first balances the tree. A cryptographic packing scheme is then applied to the balanced decision tree and, in particular, to one or more entries in at least one of: an encrypted feature set, and a threshold data set, that are to be used during the decision tree evaluation process. Upon receipt of an encrypted data point, homomorphic inferencing on the configured decision tree is performed using a highly accurate approximation comparator, which implements a “soft” membership recursive computation on real numbers, all in an oblivious manner.
    Type: Application
    Filed: May 27, 2020
    Publication date: December 2, 2021
    Applicant: International Business Machines Corporation
    Inventors: Nalini K. Ratha, Kanthi Sarpatwar, Karthikeyan Shanmugam, Sharathchandra Pankanti, Karthik Nandakumar, Roman Vaculin
  • Publication number: 20210344478
    Abstract: A method, apparatus and computer program product for homomorphic inference on a decision tree (DT) model. In lieu of HE-based inferencing on the decision tree, the inferencing instead is performed on a neural network (NN), which acts as a surrogate. To this end, the neural network is trained to learn DT decision boundaries, preferably without using the original DT model training data points. During training, a random data set is applied to the DT, and expected outputs are recorded. This random data set and the expected outputs are then used to train the neural network such that the outputs of the neural network match the outputs expected from applying the original data set to the DT. Preferably, the neural network has low depth, just a few layers. HE-based inferencing on the decision tree is done using HE inferencing on the shallow neural network. The latter is computationally efficient and is carried out without the need for bootstrapping.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Applicant: International Business Machines Corporation
    Inventors: Kanthi Sarpatwar, Nalini K. Ratha, Karthikeyan Shanmugam, Karthik Nandakumar, Sharathchandra Pankanti, Roman Vaculin
  • Publication number: 20210319353
    Abstract: An example operation includes one or more of computing, by a data owner node, updated gradients on a loss function based on a batch of private data and previous parameters of a machine learning model associated with a blockchain, encrypting, by the data owner node, update information, recording, by the data owner node, the encrypted update information as a new transaction on the blockchain, and providing the update information for an audit.
    Type: Application
    Filed: April 9, 2020
    Publication date: October 14, 2021
    Inventors: Kanthi Sarpatwar, Karthikeyan Shanmugam, Venkata Sitaramagiridharganesh Ganapavarapu, Roman Vaculin
  • Publication number: 20210150266
    Abstract: In an approach for training machine-learning models using encrypted data, a processor receives a set of encrypted data from a client computing device. A processor trains a machine-learning model using a boosting algorithm. A processor performs a first classification on the set of encrypted data using the machine-learning model. A processor sends a first set of encrypted results of the first classification to the client computing device. A processor receives a first set of boosting updates from the client computing device. A processor applies the first set of boosting updates to the machine-learning model.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Inventors: Kanthi Sarpatwar, Roman Vaculin
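The classify/update exchange in this abstract can be mimicked with a toy boosting loop: in each round the "server" fits a weak learner (a stump) and the "client" — who alone could decrypt the server's predictions — returns the boosting update, here taken to be the residuals to fit next. Everything runs in plaintext, and the stump-fitting details are assumptions:

```python
import numpy as np

def fit_stump(x, target):
    """Server side: fit a depth-1 regressor (threshold + two leaf means)."""
    best = None
    for t in np.unique(x):
        left, right = target[x <= t], target[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((target - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x)  # the client's (conceptually encrypted) labels

pred = np.zeros_like(y)
for _ in range(20):  # each round = one classify/boosting-update exchange
    residual = y - pred          # client computes the boosting update
    stump = fit_stump(x, residual)  # server applies it to the model
    pred += stump(x)

mse = ((y - pred) ** 2).mean()
```

The division of labor is the point: the server never sees plaintext labels, while the client never sees the model.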
  • Publication number: 20210092137
    Abstract: Aspects of the present disclosure describe techniques for detecting anomalous data in an encrypted data set. An example method generally includes receiving a data set of encrypted data points. A tree data structure having a number of levels is generated for the data set. Each level of the tree data structure generally corresponds to a feature of the plurality of encrypted features, and each node in the tree data structure at a given level represents a probability distribution of a likelihood that each data point is less than or greater than a split value determined for a given feature. An encrypted data point is received for analysis, and an anomaly score is calculated based on a probability identified for each of the plurality of encrypted features. Based on determining that the calculated anomaly score exceeds a threshold value, the encrypted data point is identified as potentially anomalous.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Kanthi Sarpatwar, Venkata Sitaramagiridharganesh Ganapavarapu, Saket Sathe, Roman Vaculin
  • Publication number: 20200394470
    Abstract: An example operation may include one or more of generating, by a training participant client, a plurality of transaction proposals, each of the plurality of transaction proposals corresponding to a training iteration for machine learning model training related to stochastic gradient descent, the machine learning model training comprising a plurality of training iterations, the transaction proposals comprising a gradient calculation performed by the training participant client, transferring the plurality of transaction proposals to one or more endorser nodes or peers each comprising a verify gradient smart contract, executing, by each of the endorser nodes or peers, the verify gradient smart contract, and providing endorsements corresponding to the plurality of transaction proposals to the training participant client.
    Type: Application
    Filed: June 12, 2019
    Publication date: December 17, 2020
    Inventors: Venkata Sitaramagiridharganesh Ganapavarapu, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
  • Publication number: 20200394552
    Abstract: An example operation may include one or more of generating, by a plurality of training participant clients, gradient calculations for machine learning model training, each of the training participant clients comprising a training dataset, converting, by a training aggregator coupled to the plurality of training participant clients, the gradient calculations to a plurality of transaction proposals, receiving, by one or more endorser nodes or peers of a blockchain network, the plurality of transaction proposals, executing, by each of the endorser nodes or peers, a verify gradient smart contract, and providing endorsements corresponding to the plurality of transaction proposals to the training aggregator.
    Type: Application
    Filed: June 12, 2019
    Publication date: December 17, 2020
    Inventors: Venkata Sitaramagiridharganesh Ganapavarapu, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
  • Publication number: 20200394471
    Abstract: An example operation may include one or more of generating, by a training participant client comprising a training dataset, a plurality of transaction proposals that each correspond to a training iteration for machine learning model training related to stochastic gradient descent, the machine learning model training comprising a plurality of training iterations, the transaction proposals comprising a gradient calculation performed by the training participant client, a batch from the training dataset, a loss function, and an original model parameter, receiving, by one or more endorser nodes or peers of a blockchain network, the plurality of transaction proposals, and evaluating each transaction proposal.
    Type: Application
    Filed: June 12, 2019
    Publication date: December 17, 2020
    Inventors: Venkata Sitaramagiridharganesh Ganapavarapu, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
  • Publication number: 20190287027
    Abstract: An example operation may include one or more of generating a hashed summary including hashes of one or more of a validation data set and hashes of data points chosen in previous iterations from producer nodes, and exposing the hashed summary to a plurality of producer nodes, receiving, iteratively, a plurality of requests from the plurality of producer nodes, respectively, where each request identifies a marginal value provided by a hash of a data sample available to a producer node, selecting a request received from a producer node based on a marginal value associated with the request, retrieving hashed data of the producer node associated with the selected request, and aggregating the hashed data of the producer node with the summary of hashes generated at one or more previous iterations to produce an updated summary, and storing the updated summary via a data block of a distributed ledger.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Inventors: Michele M. Franceschini, Ashish Jagmohan, Kanthi Sarpatwar, Karthikeyan Shanmugam, Roman Vaculin
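The iterative selection loop in this abstract can be sketched as a greedy routine in which a set-difference count stands in for the marginal value of a producer's hashed sample: each round, the producer whose data adds the most unseen hashes is selected and its hashes are aggregated into the summary. The producer names and data are invented for illustration:

```python
import hashlib

def h(item):
    """Hash one data point (producers expose hashes, not raw data)."""
    return hashlib.sha256(item.encode()).hexdigest()

def marginal_value(sample, summary):
    """How many hashed data points the sample adds beyond the summary."""
    return len({h(x) for x in sample} - summary)

producers = {
    "p1": ["a", "b", "c"],
    "p2": ["c", "d"],
    "p3": ["a"],
}
summary = set()  # hashed summary exposed to producers each iteration
chosen = []
for _ in range(2):  # two selection iterations
    best = max(producers, key=lambda p: marginal_value(producers[p], summary))
    chosen.append(best)
    summary |= {h(x) for x in producers[best]}  # aggregate hashed data
```

In the patented scheme the updated summary is stored in a data block of a distributed ledger after each iteration; here it is just an in-memory set.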