Patents by Inventor Timothy John Castiglia

Timothy John Castiglia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230401439
    Abstract: The method provides for analyzing the input and output connections of the layers of a received neural network model configured for vertical federated learning. Based on this analysis, in which a model output corresponds to a node of the graph, an undirected graph of nodes is generated in which a node having two or more child nodes includes an aggregation operation. A layer of the model in which a sum of lower-layer outputs is computed is identified. The identified layer is partitioned into a first part applied respectively to the multiple entities and a second part applied as an aggregator of the outputs of the first part. The aggregation operation is performed between pairs of lower-layer outputs, and multiple forward and backward passes of the neural network model are performed that include secure aggregation and maintain the model partitioning in both the forward and backward passes.
    Type: Application
    Filed: June 13, 2022
    Publication date: December 14, 2023
    Inventors: Shiqiang Wang, Timothy John Castiglia, Nathalie Baracaldo Angel, Stacy Elizabeth Patterson, Runhua Xu, Yi Zhou
  • Publication number: 20230342655
    Abstract: A vertical federated learning system with multiple parties and a server, in which the feature space is partitioned across the parties and training combines unsupervised representation learning at each party with coordinated training involving all parties and the server. In one variant, parties train representation networks on unlabeled data without communicating with other parties, then send representations of their labeled feature sets to the server, and the server trains a prediction model on the labeled data without further communication with the parties. In another variant, parties first train representation networks on unlabeled data without communication, then the parties and server collaboratively train the representation networks and a prediction model on labeled data. In a third variant, parties alternate between training their representation networks on unlabeled data without communication and sending their representations of labeled data to the server for training a prediction model.
    Type: Application
    Filed: April 26, 2022
    Publication date: October 26, 2023
    Inventors: Timothy John Castiglia, Shiqiang Wang, Stacy Elizabeth Patterson
  • Publication number: 20220383091
    Abstract: For a plurality of client computing devices of a federated learning system, obtain initial embeddings, compressed by clustering, that include the output of initial local models for a current minibatch, together with initial cluster labels corresponding to those embeddings. Recreate an initial overall embedding from the initial embeddings and labels. At a server of the federated learning system, send the current version of a server model to each of the client computing devices, and obtain from the client computing devices updated embeddings, compressed by clustering, and updated cluster labels corresponding to them. Based on local training by the plurality of clients with the overall embedding and the current server model, recreate at the server an updated overall embedding from the updated embeddings and their corresponding labels, and locally train the server model on the updated overall embedding.
    Type: Application
    Filed: May 25, 2021
    Publication date: December 1, 2022
    Inventors: Anirban Das, Timothy John Castiglia, Stacy Elizabeth Patterson, Shiqiang Wang
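
The abstract for publication number 20230401439 relies on secure aggregation, in which a server learns only the sum of party outputs, never any individual party's contribution. A minimal sketch of one standard realization of that idea is pairwise masking: each pair of parties shares a random mask that one adds and the other subtracts, so the masks cancel in the sum. This is an illustrative assumption, not the patented method; the function name `secure_aggregate` and the use of a shared seed as a stand-in for pairwise key agreement are hypothetical.

```python
import random

def secure_aggregate(party_outputs, seed=0):
    """Sum per-party vectors so that only the total is recoverable.

    Each pair of parties (i, j) derives a shared random mask; party i adds
    it and party j subtracts it, so all masks cancel in the final sum.
    The shared seed stands in for a real pairwise key-agreement step.
    """
    rng = random.Random(seed)
    n = len(party_outputs)
    dim = len(party_outputs[0])
    masked = [list(v) for v in party_outputs]
    for i in range(n):
        for j in range(i + 1, n):
            for d in range(dim):
                r = rng.random()
                masked[i][d] += r  # party i adds the pairwise mask
                masked[j][d] -= r  # party j subtracts the same mask
    # The server sums the masked vectors; the masks cancel exactly in pairs.
    return [sum(m[d] for m in masked) for d in range(dim)]
```

Each individual masked vector looks random to the server, yet the aggregate equals the plain sum of lower-layer outputs, which is what the partitioned upper layer of the model needs.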
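
The abstract for publication number 20230342655 describes a two-phase pattern: each party first learns a representation from its unlabeled features without any communication, then the server trains a prediction model on the parties' representations of labeled data. A minimal sketch under simplifying assumptions: standardization stands in for unsupervised representation learning, and the server head is a linear model trained by SGD. The names `local_representation` and `server_train` are illustrative, not from the patent.

```python
import statistics

def local_representation(features):
    """Phase 1 (per party, unlabeled, no communication): standardize features."""
    mu = statistics.mean(features)
    sd = statistics.pstdev(features) or 1.0
    return [(x - mu) / sd for x in features]

def server_train(reps, labels, lr=0.1, epochs=200):
    """Phase 2 (server): fit a linear head on the parties' representations.

    `reps` holds one representation list per party, aligned by sample index;
    the server never sees the parties' raw features, only these representations.
    """
    n_parties = len(reps)
    w = [0.0] * n_parties
    b = 0.0
    for _ in range(epochs):
        for i, y in enumerate(labels):
            pred = b + sum(w[k] * reps[k][i] for k in range(n_parties))
            err = pred - y
            for k in range(n_parties):
                w[k] -= lr * err * reps[k][i]  # SGD step per party weight
            b -= lr * err
    return w, b
```

The third variant in the abstract would simply interleave these two phases, alternating local unsupervised updates with rounds of sending fresh representations to the server.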
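
The abstract for publication number 20220383091 centers on compressing embeddings by clustering: a client sends cluster centroids plus one label per sample instead of the full embedding matrix, and the server recreates an approximate overall embedding from them. A minimal sketch assuming plain k-means as the clustering method; the functions `compress` and `decompress` and the seeding of centroids from the first k samples are illustrative choices, not details from the patent.

```python
def compress(embeddings, k, iters=10):
    """Cluster embedding vectors; transmit k centroids + one label per sample."""
    centroids = [list(embeddings[i]) for i in range(k)]  # naive init: first k points
    labels = [0] * len(embeddings)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, e in enumerate(embeddings):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(e, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [embeddings[i] for i, l in enumerate(labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

def decompress(centroids, labels):
    """Server side: recreate the (approximate) overall embedding matrix."""
    return [list(centroids[l]) for l in labels]
```

The payload shrinks from one d-dimensional vector per sample to k vectors plus an integer label per sample, which is the communication saving the abstract's "compressed by clustering" refers to.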