Patents by Inventor Heiko H. Ludwig

Heiko H. Ludwig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12160504
    Abstract: A plurality of public encryption keys are distributed to a plurality of participants in a federated learning system, and a first plurality of responses is received from the plurality of participants, where each respective response of the first plurality of responses was generated based on training data local to a respective participant of the plurality of participants and is encrypted using a respective public encryption key of the plurality of public encryption keys. A first aggregation vector is generated based on the first plurality of responses, and a first private encryption key is retrieved using the first aggregation vector. An aggregated model is then generated based on the first private encryption key and the first plurality of responses.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: December 3, 2024
    Assignee: International Business Machines Corporation
    Inventors: Runhua Xu, Nathalie Baracaldo Angel, Yi Zhou, Ali Anwar, Heiko H. Ludwig
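The flow above amounts to a secure-aggregation protocol: the aggregator should learn only the combined model, never an individual participant's response. A minimal sketch of that property follows, using pairwise additive masking in NumPy as a stand-in; the filing's per-participant public keys, aggregation vector, and private-key retrieval are not modeled, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, dim = 3, 4

# Each participant's plaintext model update, normally kept local.
updates = [rng.normal(size=dim) for _ in range(n_participants)]

# Pairwise masks: participant i adds mask (i, j) and participant j subtracts it,
# so every mask cancels in the sum while each individual report looks random.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_participants) for j in range(i + 1, n_participants)}

def masked_report(i):
    report = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            report += mask
        elif b == i:
            report -= mask
    return report

reports = [masked_report(i) for i in range(n_participants)]
aggregated_model = sum(reports) / n_participants      # masks cancel here
assert np.allclose(aggregated_model, sum(updates) / n_participants)
print("aggregated update:", aggregated_model)
```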
  • Patent number: 12093837
    Abstract: Embodiments relate to an intelligent computer platform to build a federated learning framework including creating a hierarchy of machine learning models (MLMs). The hierarchy of MLMs has a primary MLM in a primary layer. Training the primary MLM includes capturing contributing model updates across at least one communication channel. A secondary MLM is created and logically positioned in a secondary layer of the hierarchy. The secondary MLM is operatively coupled to the primary MLM across the at least one communication channel. The created secondary MLM is initialized, including cloning the weights and framework of the primary MLM into the secondary MLM, and is populated with secondary data. The populated data comprises model updates local to the created secondary MLM. The secondary MLM is logically stored local to the secondary layer, and access to the secondary MLM is limited to the secondary layer.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: September 17, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yi Zhou, Rui Zhang, Heiko H. Ludwig, Jonathan F. Brunn
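A toy rendering of the primary/secondary hierarchy described above, assuming a plain linear model and NumPy; the class name and update rule are hypothetical, and the communication channels and access controls are left out.

```python
import numpy as np

class LayeredModel:
    """Toy linear model standing in for an MLM in the hierarchy."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def clone(self):
        # Initialize a secondary MLM by cloning the primary's weights/framework.
        return LayeredModel(self.weights.copy())

    def local_update(self, X, y, lr=0.1, epochs=50):
        # Plain least-squares gradient steps on data local to the secondary layer.
        for _ in range(epochs):
            grad = X.T @ (X @ self.weights - y) / len(y)
            self.weights -= lr * grad

rng = np.random.default_rng(1)
primary = LayeredModel(rng.normal(size=3))        # primary layer
X_local, y_local = rng.normal(size=(20, 3)), rng.normal(size=20)

secondary = primary.clone()                       # secondary layer: cloned, then
secondary.local_update(X_local, y_local)          # trained only on secondary data
print("primary  :", primary.weights)              # unchanged by the secondary update
print("secondary:", secondary.weights)
```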
  • Publication number: 20240249018
    Abstract: One or more systems, devices, computer program products and/or computer-implemented methods of use provided herein relate to a process for privacy-enhanced machine learning and inference. A system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise a processing component that generates an access rule that modifies access to first data of a graph database, wherein the first data comprises first party information identified as private, a sampling component that executes a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data, and an inference component that, based on the sampling, generates a prediction in response to a query, wherein the inference component avoids directly exposing the first party information in the prediction.
    Type: Application
    Filed: January 23, 2023
    Publication date: July 25, 2024
    Inventors: Ambrish Rawat, Naoise Holohan, Heiko H. Ludwig, Ehsan Degan, Nathalie Baracaldo Angel, Alan Jonathan King, Swanand Ravindra Kadhe, Yi Zhou, Keith Coleman Houck, Mark Purcell, Giulio Zizzo, Nir Drucker, Hayim Shaul, Eyal Kushnir, Lam Minh Nguyen
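The core mechanism above is a random walk that respects an access rule, so private first-party nodes never enter the samples used for inference. A minimal sketch with a hard-coded toy graph; the rule, graph, and node names are invented for illustration.

```python
import random

# Toy graph as adjacency lists; node "c" carries first-party data identified as private.
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c", "e"], "e": ["d"]}
private_nodes = {"c"}

def access_rule(node):
    # Illustrative access rule: private nodes may not be visited during sampling.
    return node not in private_nodes

def random_walk(start, length, rule):
    path = [start]
    for _ in range(length):
        allowed = [n for n in graph[path[-1]] if rule(n)]
        if not allowed:
            break
        path.append(random.choice(allowed))
    return path

random.seed(0)
samples = [random_walk("a", 5, access_rule) for _ in range(3)]
print(samples)   # the sampled walks never expose node "c" to downstream inference
```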
  • Publication number: 20240249153
    Abstract: Systems, devices, computer program products and/or computer-implemented methods of use provided herein relate to federated training and inferencing. A system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise a modeling component that trains an inferential model using data from a plurality of parties and comprising horizontally partitioned data and vertically partitioned data, wherein the modeling component employs a random decision tree comprising the data to train the inferential model, and an inference component that responds to a query, employing the inferential model, by generating an inference, wherein first party private data, of the data, originating from a first passive party of the plurality of parties, is not directly shared with other passive parties of the plurality of parties to generate the inference.
    Type: Application
    Filed: February 8, 2023
    Publication date: July 25, 2024
    Inventors: Swanand Ravindra Kadhe, Heiko H. Ludwig, Nathalie Baracaldo Angel, Yi Zhou, Alan Jonathan King, Keith Coleman Houck, Ambrish Rawat, Mark Purcell, Naoise Holohan, Mikio Takeuchi, Ryo Kawahara, Nir Drucker, Hayim Shaul
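One way to read the random-decision-tree idea above: the tree structure can be drawn at random without looking at anyone's data, and parties then contribute only aggregate leaf statistics. The sketch below shows that for horizontally partitioned synthetic data; vertical partitioning and the privacy machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def build_random_tree(n_features, depth):
    # The split feature and threshold are chosen at random, so the structure can be
    # built without access to any party's raw data.
    if depth == 0:
        return {"counts": np.zeros(2)}                      # leaf: per-class label counts
    return {"feature": rng.integers(n_features),
            "threshold": rng.uniform(-1, 1),
            "left": build_random_tree(n_features, depth - 1),
            "right": build_random_tree(n_features, depth - 1)}

def route(tree, x):
    while "counts" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree

tree = build_random_tree(n_features=3, depth=3)

# Horizontally partitioned parties add only leaf-level label counts, not raw rows.
for _ in range(3):
    X_party = rng.normal(size=(30, 3))
    y_party = (X_party[:, 0] > 0).astype(int)
    for x, y in zip(X_party, y_party):
        route(tree, x)["counts"][y] += 1

query = np.array([0.4, -0.2, 0.1])
counts = route(tree, query)["counts"]
print("predicted class:", int(counts.argmax()), "| leaf counts:", counts)
```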
  • Patent number: 12033094
    Abstract: Provided are a computer program product, system, and method for generation of tasks and retraining machine learning modules to generate tasks based on feedback for the generated tasks. A machine learning module processes an input text message sent in the communication channel to output task information including an intended action and a set of associated users. A task message is generated including the output task information of a task to perform. The task message is sent to a user interface panel in a user computer. Feedback is received from the user computer on the output task information in the task message. The machine learning module is retrained to output task information from the input text message based on the feedback, to reinforce a higher likelihood that correct task information is output and a lower likelihood that incorrect task information is output.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: July 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Jonathan F. Brunn, Rachael Marie Huston Dickens, Rui Zhang, Ami Herrman Dewar, Heiko H. Ludwig
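A loose sketch of the retrain-on-feedback loop described above, using scikit-learn text classification as a stand-in for the task-extraction model; the messages, labels, and weighting rule are all invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical chat messages labeled with the intended action they imply.
messages = ["please review the draft", "can you schedule a meeting",
            "review this PR today", "set up a meeting for friday"]
actions = ["review", "schedule", "review", "schedule"]
weights = [1.0] * len(messages)

vec, clf = CountVectorizer(), LogisticRegression(max_iter=1000)

def retrain():
    clf.fit(vec.fit_transform(messages), actions, sample_weight=weights)

retrain()
new_msg = "schedule a quick review"
print("proposed task:", clf.predict(vec.transform([new_msg]))[0])

# Feedback loop: if the user corrects the proposed task, the corrected example is
# added with a higher weight so retraining favors the corrected output.
messages.append(new_msg); actions.append("schedule"); weights.append(2.0)
retrain()
print("after feedback:", clf.predict(vec.transform([new_msg]))[0])
```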
  • Patent number: 12019747
    Abstract: One or more computer processors determine a tolerance value, and a norm value associated with an untrusted model and an adversarial training method. The one or more computer processors generate a plurality of interpolated adversarial images ranging between a pair of images utilizing the adversarial training method, wherein each image in the pair of images is from a different class. The one or more computer processors detect a backdoor associated with the untrusted model utilizing the generated plurality of interpolated adversarial images. The one or more computer processors harden the untrusted model by training the untrusted model with the generated plurality of interpolated adversarial images.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: June 25, 2024
    Assignee: International Business Machines Corporation
    Inventors: Heiko H. Ludwig, Ebube Chuba, Bryant Chen, Benjamin James Edwards, Taesung Lee, Ian Michael Molloy
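A rough illustration of the interpolation idea above: walk between a pair of images from different classes and watch where the model's decision flips. The adversarial perturbation step is omitted and replaced with plain linear interpolation, and the "model" is a random linear scorer, so this only shows the shape of the check, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "images" from two classes and a toy linear scoring model.
img_cat, img_dog = rng.uniform(size=(8, 8)), rng.uniform(size=(8, 8)) + 0.5
w = rng.normal(size=64)                        # placeholder for the untrusted model

def predict(img):
    return int(img.reshape(-1) @ w > 0)        # 0 = cat, 1 = dog (toy decision rule)

# Interpolate between the pair of images from different classes.
alphas = np.linspace(0.0, 1.0, 11)
interpolated = [(1 - a) * img_cat + a * img_dog for a in alphas]
labels = [predict(img) for img in interpolated]

# A clean boundary usually flips once along the path; repeated or abrupt flips are
# the kind of anomaly that can point at a backdoor.
flips = sum(l1 != l2 for l1, l2 in zip(labels, labels[1:]))
print("predictions along the path:", labels, "| flips:", flips)

# Hardening sketch: the interpolated images, labeled by their nearest clean endpoint,
# would be folded back into training.
hardening_set = [(img, int(a > 0.5)) for img, a in zip(interpolated, alphas)]
print("hardening examples:", len(hardening_set))
```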
  • Publication number: 20240144026
    Abstract: A computer-implemented method, according to one approach, includes issuing a hyperparameter optimization (HPO) query to a plurality of computing devices. HPO results are received from the plurality of computing devices, and the HPO results include a set of hyperparameter (HP)/rank value pairs. The method further includes computing, based on the set of HP/rank value pairs, a global set of HPs from the HPO results for federated learning (FL) training. An indication of the global set of HPs is output to the plurality of computing devices. A computer program product, according to another approach, includes a computer readable storage medium having program instructions embodied therewith. The program instructions are readable and/or executable by a computer to cause the computer to perform the foregoing method.
    Type: Application
    Filed: February 28, 2023
    Publication date: May 2, 2024
    Inventors: Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo Angel, Horst Cornelius Samulowitz, Heiko H. Ludwig
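The HP/rank pairs above suggest a rank-aggregation step on the server. Below is a minimal sketch using average rank (Borda-style) over invented results; the filing's actual aggregation rule may differ.

```python
from collections import defaultdict

# Hypothetical HPO results: each device reports (hyperparameter config, rank),
# where rank 1 is that device's best configuration.
device_results = [
    [(("lr", 0.1), 1), (("lr", 0.01), 2), (("lr", 0.001), 3)],
    [(("lr", 0.01), 1), (("lr", 0.1), 2), (("lr", 0.001), 3)],
    [(("lr", 0.01), 1), (("lr", 0.001), 2), (("lr", 0.1), 3)],
]

rank_sums, counts = defaultdict(float), defaultdict(int)
for results in device_results:
    for hp, rank in results:
        rank_sums[hp] += rank
        counts[hp] += 1

# Global set of HPs: the configuration with the best (lowest) average rank.
global_hp = min(rank_sums, key=lambda hp: rank_sums[hp] / counts[hp])
print("global hyperparameters to broadcast:", dict([global_hp]))
```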
  • Publication number: 20240089081
    Abstract: An example system includes a processor to compute a tensor of indicators indicating a presence of partial sums in an encrypted vector of indicators. The processor can also securely reorder an encrypted array based on the computed tensor of indicators to generate a reordered encrypted array.
    Type: Application
    Filed: August 25, 2022
    Publication date: March 14, 2024
    Inventors: Eyal Kushnir, Hayim Shaul, Omri Soceanu, Ehud Aharoni, Nathalie Baracaldo Angel, Runhua Xu, Heiko H. Ludwig
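In the clear, the indicator-tensor trick above is a branch-free way to compact selected elements: partial sums of the indicator vector say where each selected element should land, and a 0/1 tensor built from them performs the reorder with only additions and multiplications (the operations a homomorphic scheme supports). A plaintext sketch:

```python
import numpy as np

array      = np.array([10., 20., 30., 40., 50.])
indicators = np.array([0, 1, 0, 1, 1])          # which entries are selected

partial = np.cumsum(indicators)                 # partial sums of the indicator vector
n = len(array)

# Tensor of indicators: T[i, j] = 1 exactly when element j is selected and its
# partial sum says it should land at output position i.
T = np.array([[int(indicators[j] and partial[j] == i + 1) for j in range(n)]
              for i in range(n)])

reordered = T @ array                           # additions/multiplications only
print(reordered)                                # -> [20. 40. 50.  0.  0.]
```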
  • Patent number: 11856021
    Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
    Type: Grant
    Filed: March 22, 2023
    Date of Patent: December 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
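The second approach above (complete vs. filtered models per provenance group) is easy to mock up. The sketch below uses scikit-learn with synthetic data and invented provenance signatures; a group is flagged when the model trained without it outperforms the model trained on everything.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Untrusted observations grouped by provenance signature; "sensor-C" is poisoned
# here by flipping its labels.
X_eval = rng.normal(size=(200, 2)); y_eval = (X_eval[:, 0] > 0).astype(int)
groups = {}
for sig in ["sensor-A", "sensor-B", "sensor-C"]:
    X = rng.normal(size=(100, 2)); y = (X[:, 0] > 0).astype(int)
    if sig == "sensor-C":
        y = 1 - y
    groups[sig] = (X, y)

def fit_and_score(parts):
    X = np.vstack([p[0] for p in parts]); y = np.concatenate([p[1] for p in parts])
    return LogisticRegression(max_iter=1000).fit(X, y).score(X_eval, y_eval)

complete = fit_and_score(list(groups.values()))
for sig in groups:
    filtered = fit_and_score([g for s, g in groups.items() if s != sig])
    # Flag the group when removing it improves performance on the evaluation set.
    verdict = "poisoned" if filtered > complete else "ok"
    print(f"{sig}: {verdict} (filtered={filtered:.2f}, complete={complete:.2f})")
```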
  • Publication number: 20230409959
    Abstract: According to one embodiment, a method, computer system, and computer program product for grouped federated learning is provided. The embodiment may include initializing a plurality of aggregation groups including a plurality of parties and a plurality of local aggregators. The embodiment may also include submitting a query to a first party from the plurality of parties. The embodiment may further include submitting an initial response to the query from the first party or a second party from the plurality of parties to a first local aggregator from the plurality of local aggregators. The embodiment may also include submitting a final response from the first local aggregator or a second local aggregator from the plurality of local aggregators to a global aggregator. The embodiment may further include building a machine learning model based on the final response.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Inventors: Ali Anwar, Yi Zhou, Nathalie Baracaldo Angel, Runhua Xu, Yuya Jeremy Ong, Annie K. Abay, Heiko H. Ludwig, Gegi Thomas, Jayaram Kallapalayam Radhakrishnan, Laura Wynter
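A compact sketch of the two-tier aggregation described above, with invented group names and simple averaging standing in for the local and global aggregators; queries, responses, and security details are not modeled.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two aggregation groups, each holding its parties' model updates.
aggregation_groups = {
    "group-1": [rng.normal(size=4) for _ in range(3)],
    "group-2": [rng.normal(size=4) for _ in range(2)],
}

def local_aggregate(party_updates):
    # A local aggregator combines responses only within its own group.
    return np.mean(party_updates, axis=0)

final_responses = {g: local_aggregate(u) for g, u in aggregation_groups.items()}

# The global aggregator sees only the groups' final responses and builds the
# machine learning model from them, weighting each group by its party count.
sizes = {g: len(u) for g, u in aggregation_groups.items()}
global_model = sum(sizes[g] * r for g, r in final_responses.items()) / sum(sizes.values())
print("global model parameters:", global_model)
```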
  • Patent number: 11824968
    Abstract: Techniques regarding privacy preservation in a federated learning environment are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise a plurality of machine learning components that can execute a machine learning algorithm to generate a plurality of model parameters. The computer executable components can also comprise an aggregator component that can synthesize a machine learning model based on an aggregate of the plurality of model parameters. The aggregator component can communicate with the plurality of machine learning components via a data privacy scheme that comprises a privacy process and a homomorphic encryption process in a federated learning environment.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: November 21, 2023
    Inventors: Nathalie Baracaldo Angel, Stacey Truex, Heiko H. Ludwig, Ali Anwar, Thomas Steinke, Rui Zhang
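Of the two privacy mechanisms named above, only the noise-adding side is easy to show compactly; the sketch below clips and perturbs each local update before aggregation (a common differential-privacy recipe), while the homomorphic encryption of the reports is left out entirely.

```python
import numpy as np

rng = np.random.default_rng(6)

def privatize(update, clip=1.0, noise_scale=0.1):
    # Clip the update's norm, then add Gaussian noise before it leaves the
    # machine learning component; the scales here are illustrative only.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_scale, size=update.shape)

local_updates = [rng.normal(size=5) for _ in range(4)]
noisy_reports = [privatize(u) for u in local_updates]

# The aggregator component synthesizes the model from the privatized reports only.
aggregated_model = np.mean(noisy_reports, axis=0)
print("aggregate of privatized updates:", aggregated_model)
```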
  • Publication number: 20230231875
    Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
    Type: Application
    Filed: March 22, 2023
    Publication date: July 20, 2023
    Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
  • Patent number: 11689566
    Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: June 27, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
  • Publication number: 20230186168
    Abstract: A computer-implemented method according to one embodiment includes issuing a hyperparameter optimization (HPO) query to a plurality of computing devices; receiving HPO results from each of the plurality of computing devices; generating a unified performance metric surface utilizing the HPO results from each of the plurality of computing devices; and determining optimal global hyperparameters, utilizing the unified performance metric surface.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Yi Zhou, Parikshit Ram, Nathalie Baracaldo Angel, Theodoros Salonidis, Horst Cornelius Samulowitz, Martin Wistuba, Heiko H. Ludwig
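A very small stand-in for the "unified performance metric surface" above: pool every device's (hyperparameter, metric) observations and fit a single quadratic surrogate in log-space, then take its maximizer as the global hyperparameter. The data and surrogate choice are invented; the filing's construction may differ.

```python
import numpy as np

# Hypothetical per-device HPO results: (learning rate, validation accuracy) samples.
device_results = [
    [(0.001, 0.70), (0.01, 0.82), (0.1, 0.75)],
    [(0.001, 0.68), (0.01, 0.80), (0.1, 0.78)],
    [(0.005, 0.76), (0.05, 0.81), (0.2, 0.70)],
]

lrs = np.log10([lr for res in device_results for lr, _ in res])
accs = np.array([acc for res in device_results for _, acc in res])

# One unified surface over all devices' results (quadratic fit in log-space).
coeffs = np.polyfit(lrs, accs, deg=2)
grid = np.linspace(lrs.min(), lrs.max(), 200)
best_lr = 10 ** grid[np.argmax(np.polyval(coeffs, grid))]
print("globally optimal learning rate (from the pooled surrogate):", round(best_lr, 4))
```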
  • Patent number: 11645515
    Abstract: Embodiments relate to a system, program product, and method for automatically determining which activation data points in a neural model have been poisoned to erroneously indicate association with a particular label or labels. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of the last hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a cluster assessment is conducted for each cluster associated with each label to distinguish clusters with potentially poisoned activations from clusters populated with legitimate activations. The assessment includes executing a set of analyses and integrating the results of the analyses into a determination as to whether a training data set is poisonous based on determining if resultant activation clusters are poisoned.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo Angel, Bryant Chen, Biplav Srivastava, Heiko H. Ludwig
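The clustering step above can be shown with synthetic activations: segment by label, cluster into two groups, and treat a conspicuously small cluster as the candidate poisoned set. The sketch uses scikit-learn's KMeans and a size-based assessment only; the filing combines several analyses.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic last-hidden-layer activations for one label: mostly legitimate points,
# plus a small group whose activations differ (as backdoored examples tend to).
legit    = rng.normal(loc=0.0, size=(95, 8))
poisoned = rng.normal(loc=4.0, size=(5, 8))
activations_for_label = np.vstack([legit, poisoned])

assignments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(activations_for_label)
sizes = np.bincount(assignments)

# One simple cluster assessment: a cluster far smaller than its sibling is suspect.
suspect = int(np.argmin(sizes))
print("cluster sizes:", sizes, "| suspect cluster:", suspect,
      "| flagged points:", int(sizes[suspect]))
```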
  • Patent number: 11645582
    Abstract: One embodiment provides a method for federated learning across a plurality of data parties, comprising assigning each data party a corresponding namespace in an object store, assigning a shared namespace in the object store, and triggering a round of federated learning by issuing a customized learning request to at least one data party. Each customized learning request issued to a data party triggers the data party to locally train a model based on training data owned by the data party and model parameters stored in the shared namespace, and upload a local model resulting from the local training to the corresponding namespace in the object store assigned to the data party. The method further comprises retrieving, from the object store, local models uploaded to the object store during the round of federated learning, and aggregating the local models to obtain a shared model.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shashank Rajamoni, Ali Anwar, Yi Zhou, Heiko H. Ludwig, Nathalie Baracaldo Angel
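A sketch of the namespace layout above, with a plain dict standing in for the object store and a noisy update standing in for local training; the names and the aggregation rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# A dict stands in for the object store: one namespace per party plus a shared one.
object_store = {"shared": {"model": np.zeros(3)}, "party-1": {}, "party-2": {}, "party-3": {}}

def handle_learning_request(party):
    # On a customized learning request: read the shared parameters, "train" locally
    # (a noisy step here), and upload the result to the party's own namespace.
    shared = object_store["shared"]["model"]
    object_store[party]["model"] = shared + rng.normal(scale=0.1, size=shared.shape)

def run_round(parties):
    for p in parties:
        handle_learning_request(p)
    # Retrieve the local models uploaded during the round and aggregate them.
    local_models = [object_store[p]["model"] for p in parties]
    object_store["shared"]["model"] = np.mean(local_models, axis=0)

run_round(["party-1", "party-2", "party-3"])
print("shared model after one round:", object_store["shared"]["model"])
```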
  • Patent number: 11601468
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate detection of an adversarial backdoor attack on a trained model at inference time are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a log component that records predictions and corresponding activation values generated by a trained model based on inference requests. The computer executable components can further comprise an analysis component that employs a model at an inference time to detect a backdoor trigger request based on the predictions and the corresponding activation values. In some embodiments, the log component records the predictions and the corresponding activation values from one or more layers of the trained model.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 7, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo Angel, Yi Zhou, Bryant Chen, Ali Anwar, Heiko H. Ludwig
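A toy version of the log-and-analyze loop above: build per-class activation statistics from the recorded log, then flag inference requests whose activations sit far from the profile of their predicted class. The distance test and threshold are invented stand-ins for the analysis component.

```python
import numpy as np

rng = np.random.default_rng(9)

# Logged (prediction, activation) pairs recorded by the log component.
log = [("cat", rng.normal(loc=0.0, size=8)) for _ in range(200)]

cat_acts = np.stack([act for label, act in log if label == "cat"])
centroid, spread = cat_acts.mean(axis=0), cat_acts.std()

def flag_backdoor_trigger(prediction, activation, threshold=5.0):
    # Flag a request whose activations are unusually far from what the log says is
    # typical for the predicted class; the threshold is an arbitrary choice here.
    distance = np.linalg.norm(activation - centroid) / (spread + 1e-9)
    return prediction == "cat" and distance > threshold

normal_request  = rng.normal(loc=0.0, size=8)
trigger_request = rng.normal(loc=6.0, size=8)   # activations shifted by a trigger
print(flag_backdoor_trigger("cat", normal_request))    # typically False
print(flag_backdoor_trigger("cat", trigger_request))   # typically True
```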
  • Patent number: 11588621
    Abstract: Systems and techniques that facilitate universal and efficient privacy-preserving vertical federated learning are provided. In various embodiments, a key distribution component can distribute respective feature-dimension public keys and respective sample-dimension public keys to respective participants in a vertical federated learning framework governed by a coordinator, wherein the respective participants can send to the coordinator respective local model updates encrypted by the respective feature-dimension public keys and respective local datasets encrypted by the respective sample-dimension public keys. In various embodiments, an inference prevention component can verify a participant-related weight vector generated by the coordinator, based on which the key distribution component can distribute to the coordinator a functional feature-dimension secret key that can aggregate the encrypted respective local model updates into a sample-related weight vector.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo Angel, Runhua Xu, Yi Zhou, Ali Anwar, Heiko H. Ludwig
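Stripped of the cryptography, the functional key in the scheme above lets the coordinator learn exactly one thing from the encrypted updates: their combination under a verified participant-related weight vector. A plaintext sketch of that contract, with an invented sanity check standing in for the inference-prevention step:

```python
import numpy as np

rng = np.random.default_rng(10)

# Vertical setting: each participant holds different features and sends a local
# model update; the coordinator supplies a participant-related weight vector.
local_updates = {"bank": rng.normal(size=6),
                 "telco": rng.normal(size=6),
                 "retailer": rng.normal(size=6)}
participant_weights = {"bank": 0.5, "telco": 0.3, "retailer": 0.2}

def inference_safe(weights, min_nonzero=2):
    # Toy stand-in for inference prevention: reject weight vectors that would
    # isolate a single participant's update.
    return sum(1 for w in weights.values() if w != 0) >= min_nonzero

assert inference_safe(participant_weights)

# In the real scheme only this weighted aggregate would be decryptable; the
# individual updates would stay encrypted.
sample_weight_vector = sum(w * local_updates[p] for p, w in participant_weights.items())
print("sample-related weight vector:", sample_weight_vector)
```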
  • Publication number: 20230017500
    Abstract: One embodiment of the invention provides a method for federated learning (FL) comprising training a machine learning (ML) model collaboratively by initiating a round of FL across data parties. Each data party is allocated tokens to utilize during the training. The method further comprises maintaining, for each data party, a corresponding data usage profile indicative of an amount of data the data party consumed during the training and a corresponding participation profile indicative of an amount of data the data party provided during the training. The method further comprises selectively allocating new tokens to the data parties based on each participation profile maintained, selectively allocating additional new tokens to the data parties based on each data usage profile maintained, and reimbursing one or more tokens utilized during the training to the data parties based on one or more measurements of accuracy of the ML model.
    Type: Application
    Filed: July 12, 2021
    Publication date: January 19, 2023
    Inventors: Ali Anwar, Syed Amer Zawad, Yi Zhou, Nathalie Baracaldo Angel, Kamala Micaela Noelle Varma, Annie Abay, Ebube Chuba, Yuya Jeremy Ong, Heiko H. Ludwig
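A back-of-the-envelope token ledger matching the profiles described above; the rates and the reimbursement rule are made up purely to show the bookkeeping.

```python
# Token ledger for the data parties across one round of federated learning.
parties = {
    "party-A": {"tokens": 10, "data_used": 500, "data_provided": 800},
    "party-B": {"tokens": 10, "data_used": 900, "data_provided": 200},
}

def settle_round(parties, tokens_spent, accuracy_gain):
    for name, p in parties.items():
        p["tokens"] += p["data_provided"] // 100                   # based on participation profile
        p["tokens"] += p["data_used"] // 300                       # based on data usage profile
        p["tokens"] += round(tokens_spent[name] * accuracy_gain)   # reimbursement of tokens spent
        print(name, "->", p["tokens"], "tokens")

settle_round(parties, tokens_spent={"party-A": 4, "party-B": 4}, accuracy_gain=0.5)
```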
  • Patent number: 11538236
    Abstract: Embodiments relate to a system, program product, and method for processing an untrusted data set to automatically determine which data points therein are poisonous. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of at least one hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a clustering assessment is conducted to remove an identified cluster from the data set, form a new training set, and train a second neural model with the new training set. The removed cluster and corresponding data are applied to the trained second neural model to analyze and classify data in the removed cluster as either legitimate or poisonous.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: December 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo Angel, Bryant Chen, Heiko H. Ludwig
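The exclusion-and-recheck step above (remove the suspect cluster, retrain, then let the new model judge the removed points) is sketched below with synthetic activations used directly as features and scikit-learn models; it shows the shape of the procedure, not the patented pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)

# Activations for training points labeled "0"; a small shifted group is suspicious.
acts = np.vstack([rng.normal(0.0, 1.0, size=(90, 4)), rng.normal(4.0, 1.0, size=(10, 4))])
labels = np.zeros(100, dtype=int)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(acts)
suspect = int(np.argmin(np.bincount(clusters)))       # remove the smaller cluster
keep, removed = clusters != suspect, clusters == suspect

# New training set without the removed cluster (plus data for a second class "1"),
# used to train a second model.
second_model = LogisticRegression(max_iter=1000).fit(
    np.vstack([acts[keep], rng.normal(4.0, 1.0, size=(50, 4))]),
    np.concatenate([labels[keep], np.ones(50, dtype=int)]))

# Apply the second model to the removed cluster: points that come back as a
# different class than their original label look poisonous; matches look legitimate.
verdict = second_model.predict(acts[removed])
print("removed points re-classified as:", np.bincount(verdict, minlength=2))
```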