Patents by Inventor Runhua XU

Runhua XU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
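Short illustrative code sketches for several of the listed techniques appear after the listing.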

  • Publication number: 20240089081
    Abstract: An example system includes a processor to compute a tensor of indicators indicating a presence of partial sums in an encrypted vector of indicators. The processor can also securely reorder an encrypted array based on the computed tensor of indicators to generate a reordered encrypted array.
    Type: Application
    Filed: August 25, 2022
    Publication date: March 14, 2024
    Inventors: Eyal Kushnir, Hayim Shaul, Omri Soceanu, Ehud Aharoni, Nathalie Baracaldo Angel, Runhua Xu, Heiko H. Ludwig
  • Publication number: 20240039692
    Abstract: A second set of data identifiers, comprising identifiers of data usable in federated model training by a second data owner, is received at a first data owner from the second data owner. An intersection set of data identifiers is determined at the first data owner. At the first data owner according to the intersection set of data identifiers, the data usable in federated model training is rearranged by the first data owner to result in a first training dataset. At the first data owner using the intersection set of data identifiers, the first training dataset, and a previous iteration of an aggregated set of model weights, a first partial set of model weights is computed. An updated aggregated set of model weights, comprising the first partial set of model weights and a second partial set of model weights from the second data owner, is received from an aggregator.
    Type: Application
    Filed: July 28, 2022
    Publication date: February 1, 2024
    Applicant: International Business Machines Corporation
    Inventors: Runhua Xu, Nathalie Baracaldo Angel, Hayim Shaul, Omri Soceanu
  • Publication number: 20230409959
    Abstract: According to one embodiment, a method, computer system, and computer program product for grouped federated learning is provided. The embodiment may include initializing a plurality of aggregation groups including a plurality of parties and a plurality of local aggregators. The embodiment may also include submitting a query to a first party from the plurality of parties. The embodiment may further include submitting an initial response to the query from the first party or a second party from the plurality of parties to a first local aggregator from the plurality of local aggregators. The embodiment may also include submitting a final response from the first local aggregator or a second local aggregator from the plurality of local aggregators to a global aggregator. The embodiment may further include building a machine learning model based on the final response.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Inventors: Ali Anwar, Yi Zhou, Nathalie Baracaldo Angel, Runhua Xu, Yuya Jeremy Ong, Annie K. Abay, Heiko H. Ludwig, Gegi Thomas, Jayaram Kallapalayam Radhakrishnan, Laura Wynter
  • Publication number: 20230401439
    Abstract: The method provides for analyzing input and output connections of layers of a received neural network model configured for vertical federated learning. An undirected graph of nodes is generated in which a node having two or more child nodes includes an aggregation operation, based on the analysis of the model in which a model output corresponds to a node of the graph. A layer of the model is identified in which a sum of lower layer outputs is computed. The identified model layer is partitioned into a first part applied respectively to the multiple entities and a second part applied as an aggregator of the output of the first part. The aggregation operation is performed between pairs of lower layer outputs, and multiple forward and backward passes of the neural network model are performed that include secure aggregation and maintain model partitioning in forward and backward passes.
    Type: Application
    Filed: June 13, 2022
    Publication date: December 14, 2023
    Inventors: Shiqiang Wang, Timothy John Castiglia, Nathalie Baracaldo Angel, Stacy Elizabeth Patterson, Runhua Xu, Yi Zhou
  • Patent number: 11588621
    Abstract: Systems and techniques that facilitate universal and efficient privacy-preserving vertical federated learning are provided. In various embodiments, a key distribution component can distribute respective feature-dimension public keys and respective sample-dimension public keys to respective participants in a vertical federated learning framework governed by a coordinator, wherein the respective participants can send to the coordinator respective local model updates encrypted by the respective feature-dimension public keys and respective local datasets encrypted by the respective sample-dimension public keys. In various embodiments, an inference prevention component can verify a participant-related weight vector generated by the coordinator, based on which the key distribution component can distribute to the coordinator a functional feature-dimension secret key that can aggregate the encrypted respective local model updates into a sample-related weight vector.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo Angel, Runhua Xu, Yi Zhou, Ali Anwar, Heiko H. Ludwig
  • Publication number: 20210174243
    Abstract: Systems and techniques that facilitate universal and efficient privacy-preserving vertical federated learning are provided. In various embodiments, a key distribution component can distribute respective feature-dimension public keys and respective sample-dimension public keys to respective participants in a vertical federated learning framework governed by a coordinator, wherein the respective participants can send to the coordinator respective local model updates encrypted by the respective feature-dimension public keys and respective local datasets encrypted by the respective sample-dimension public keys. In various embodiments, an inference prevention component can verify a participant-related weight vector generated by the coordinator, based on which the key distribution component can distribute to the coordinator a functional feature-dimension secret key that can aggregate the encrypted respective local model updates into a sample-related weight vector.
    Type: Application
    Filed: December 6, 2019
    Publication date: June 10, 2021
    Inventors: Nathalie Baracaldo Angel, Runhua Xu, Yi Zhou, Ali Anwar, Heiko H. Ludwig
  • Publication number: 20210143987
    Abstract: Techniques for federated learning are provided. A plurality of public encryption keys are distributed to a plurality of participants in a federated learning system, and a first plurality of responses is received from the plurality of participants, where each respective response of the first plurality of responses was generated based on training data local to a respective participant of the plurality of participants and is encrypted using a respective public encryption key of the plurality of public encryption keys. A first aggregation vector is generated based on the first plurality of responses, and a first private encryption key is retrieved using the first aggregation vector. An aggregated model is then generated based on the first private encryption key and the first plurality of responses.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 13, 2021
    Inventors: Runhua Xu, Nathalie Baracaldo Angel, Yi Zhou, Ali Anwar, Heiko H. Ludwig
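
For publication 20240089081, the sketch below shows, in plaintext Python, one way an indicator vector and its partial (prefix) sums can drive a reordering of an array: a tensor of indicators is built from the partial sums and applied as sums of products, arithmetic that a homomorphic encryption scheme could evaluate without branching on secret data. The compaction rule and function names are illustrative assumptions; the patented system operates on encrypted data.

```python
# Plaintext illustration (hypothetical) of indicator-driven reordering.
# In the described system the same arithmetic would be carried out over
# homomorphically encrypted data; here everything is in the clear.

def prefix_sums(indicators):
    """Running (partial) sums of a 0/1 indicator vector."""
    total, sums = 0, []
    for b in indicators:
        total += b
        sums.append(total)
    return sums

def reorder_by_indicators(values, indicators):
    """Move the selected values (indicator == 1) to the front, preserving order.

    A tensor (matrix) of indicators is built from the partial sums: entry
    [i][j] is 1 exactly when the j-th element is selected and should land in
    slot i. The reordering is then a sum of products, with no data-dependent
    branching.
    """
    sums = prefix_sums(indicators)
    n = len(values)
    tensor = [[1 if indicators[j] == 1 and sums[j] == i + 1 else 0
               for j in range(n)] for i in range(n)]
    return [sum(tensor[i][j] * values[j] for j in range(n)) for i in range(n)]

if __name__ == "__main__":
    vals = [10, 20, 30, 40, 50]
    mask = [0, 1, 0, 1, 1]                     # which entries are "present"
    print(reorder_by_indicators(vals, mask))   # [20, 40, 50, 0, 0]
```

Running the example moves the selected entries to the front in order and leaves zeros in the remaining slots.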
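
For publication 20240039692, here is a minimal sketch of the identifier-exchange and alignment step, with a toy linear-model update standing in for the real partial-weight computation. The helper names, shapes, and learning rule are assumptions for illustration, not the claimed protocol, and no encryption or aggregator messaging is shown.

```python
# Hypothetical sketch of aligning two data owners' records by identifier
# before computing a partial set of model weights on a vertical feature slice.

import numpy as np

def align(local_data, local_ids, other_ids):
    """Intersect the two owners' data identifiers and rearrange local rows so
    both parties iterate over the same samples in the same order."""
    common = sorted(set(local_ids) & set(other_ids))
    index = {rid: i for i, rid in enumerate(local_ids)}
    return common, local_data[[index[rid] for rid in common]]

def partial_weights(features, labels, prev_weights, lr=0.1):
    """One toy gradient step on this owner's feature slice."""
    preds = features @ prev_weights
    grad = features.T @ (preds - labels) / len(labels)
    return prev_weights - lr * grad

if __name__ == "__main__":
    ids_a = ["r3", "r1", "r2"]                  # first data owner's identifiers
    ids_b = ["r2", "r3", "r4"]                  # received from the second owner
    data_a = np.array([[3.0], [1.0], [2.0]])    # owner A's feature column
    labels = np.array([6.0, 2.0, 4.0])          # aligned with ids_a

    common, x_a = align(data_a, ids_a, ids_b)
    _, y = align(labels.reshape(-1, 1), ids_a, ids_b)
    w_a = partial_weights(x_a, y.ravel(), prev_weights=np.zeros(1))
    print(common, w_a)
```

As the abstract describes, an aggregator would then combine this first partial set of weights with the second owner's partial set into the updated aggregated set.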
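
For publication 20230409959, the sketch below traces the grouped aggregation flow: parties respond to a query, each group's local aggregator combines its parties' responses, and a global aggregator combines the group results. Plain averaging and the group layout are assumptions for the demo; the publication does not reduce to this particular combination rule.

```python
# Assumed-structure sketch of grouped (hierarchical) federated aggregation.

from statistics import fmean

def party_response(query, local_value):
    """A party's reply to the aggregator's query (toy: a single statistic)."""
    return local_value if query == "mean_update" else 0.0

def local_aggregate(responses):
    """A local aggregator combines the responses of its own group."""
    return fmean(responses)

def global_aggregate(group_results):
    """The global aggregator combines the local aggregators' results."""
    return fmean(group_results)

if __name__ == "__main__":
    groups = {
        "group-1": [0.2, 0.4, 0.6],
        "group-2": [1.0, 0.8],
    }
    query = "mean_update"
    group_results = [
        local_aggregate([party_response(query, v) for v in values])
        for values in groups.values()
    ]
    model_update = global_aggregate(group_results)
    print(group_results, model_update)   # [0.4, 0.9] 0.65
```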
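
For publication 20230401439, this sketch shows one way to partition a dense layer for vertical federated learning: each entity applies its block of the weight matrix to its own feature columns (the first part), and the partial outputs are summed before the activation (the second part). In the publication the summation happens under secure aggregation; here it is a plain pairwise sum, and all sizes are made up for the demo.

```python
# Illustrative split of one dense layer across two entities holding disjoint
# feature columns of the same samples.

import numpy as np

rng = np.random.default_rng(0)

x_entity1 = rng.normal(size=(4, 3))   # 4 samples, 3 features at entity 1
x_entity2 = rng.normal(size=(4, 2))   # same 4 samples, 2 features at entity 2

# The identified layer's weight matrix, partitioned by feature block.
w_entity1 = rng.normal(size=(3, 5))
w_entity2 = rng.normal(size=(2, 5))

def lower_layer_output(x, w):
    """First part of the split layer, computed locally by each entity."""
    return x @ w

def aggregate_pairwise(outputs):
    """Second part: combine lower-layer outputs pairwise (securely aggregated
    in the publication; a plain sum in this sketch)."""
    total = outputs[0]
    for out in outputs[1:]:
        total = total + out
    return total

hidden = np.maximum(0.0, aggregate_pairwise(
    [lower_layer_output(x_entity1, w_entity1),
     lower_layer_output(x_entity2, w_entity2)]))
print(hidden.shape)   # (4, 5): equivalent to one dense layer over all features
```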
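
For patent 11588621 (and its companion publication 20210174243), here is a sketch of the inference-prevention idea: before a functional key is released for a coordinator-supplied, participant-related weight vector, that vector is checked so it cannot isolate a single participant's update. The minimum-nonzero rule below is an assumed stand-in for the verification criterion, and the plaintext weighted sum stands in for evaluating the functional feature-dimension key over ciphertexts.

```python
# Hedged plaintext stand-in for weight-vector verification plus aggregation.

import numpy as np

def verify_weight_vector(weights, min_nonzero=2):
    """Reject vectors that would reveal one participant's update in isolation
    (assumed criterion for illustration)."""
    return np.count_nonzero(weights) >= min_nonzero

def aggregate_updates(local_updates, weights):
    """Weighted combination of participants' model updates (plaintext stand-in
    for decrypting the aggregate with the functional key)."""
    return np.tensordot(weights, np.stack(local_updates), axes=1)

if __name__ == "__main__":
    updates = [np.array([0.1, 0.2]), np.array([0.3, 0.0]), np.array([0.2, 0.4])]
    honest = np.array([1 / 3, 1 / 3, 1 / 3])
    probing = np.array([1.0, 0.0, 0.0])   # would expose participant 1 alone

    for w in (honest, probing):
        if verify_weight_vector(w):
            print("aggregate:", aggregate_updates(updates, w))
        else:
            print("rejected: weight vector could leak a single update")
```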
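
For publication 20210143987, the final sketch is a toy walk-through of the described flow: key material is distributed, participants return encrypted responses, the aggregator forms an aggregation vector, retrieves a key tied to exactly that vector, and recovers only the aggregate. Additive masking is used here purely as a stand-in for the public-key encryption; the publication's actual scheme is not reduced to this.

```python
# Toy, hypothetical walk-through: masked responses plus a key released only
# for one aggregation vector, so only the aggregate can be recovered.

import numpy as np

rng = np.random.default_rng(7)

n_participants, dim = 3, 4
masks = [rng.normal(size=dim) for _ in range(n_participants)]   # "key material"

def encrypt(response, mask):
    """Stand-in for encrypting a participant's response with its public key."""
    return response + mask

def key_for_vector(agg_vector):
    """'Private key' released only for this aggregation vector: the matching
    combination of masks, so nothing else can be unmasked."""
    return sum(a * m for a, m in zip(agg_vector, masks))

# Participants train locally and send masked responses.
local_responses = [rng.normal(size=dim) for _ in range(n_participants)]
ciphertexts = [encrypt(r, m) for r, m in zip(local_responses, masks)]

# The aggregator combines with an aggregation vector and unmasks the result.
agg_vector = np.array([1 / 3, 1 / 3, 1 / 3])
masked_sum = sum(a * c for a, c in zip(agg_vector, ciphertexts))
aggregated_model = masked_sum - key_for_vector(agg_vector)

expected = sum(a * r for a, r in zip(agg_vector, local_responses))
print(np.allclose(aggregated_model, expected))   # True: only the aggregate is revealed
```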