Patents by Inventor Ivor SPENCE

Ivor SPENCE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250086474
    Abstract: Collaborative training with buffered activations is performed by partitioning a plurality of layers of a neural network model into a device partition and a server partition; transmitting, to a computation device, the device partition; and training, collaboratively with the computation device through a network, the neural network model by: applying the server partition to a set of activations to obtain a set of output instances, the set of activations obtained either by receiving, from the computation device, the set of activations as output from the device partition, or by reading, from an activation buffer, the set of activations as previously recorded; applying a loss function relating activations to output instances to each output instance among the set of output instances to obtain a set of loss values; and computing a set of gradient vectors for each layer of the server partition based on the set of loss values. (See the sketch after this entry.)
    Type: Application
    Filed: December 21, 2022
    Publication date: March 13, 2025
    Inventors: Di WU, Blesson VARGHESE, Philip RODGERS, Rehmat ULLAH, Peter KILPATRICK, Ivor SPENCE
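    Sketch: a minimal, hypothetical PyTorch rendering of the server-side step with an activation buffer; server_partition, loss_fn, and the buffer policy are illustrative assumptions, not the patented implementation.
      import collections
      import torch

      activation_buffer = collections.deque(maxlen=32)  # previously recorded activation/label pairs

      def server_step(server_partition, loss_fn, optimizer,
                      activations=None, labels=None, replay=False):
          # Obtain the set of activations either fresh from the device
          # partition or by reading a previously recorded pair from the buffer.
          if replay:
              activations, labels = activation_buffer[-1]
          else:
              activation_buffer.append((activations.detach().clone(), labels))
          activations = activations.detach().requires_grad_(True)
          outputs = server_partition(activations)  # apply the server partition
          loss = loss_fn(outputs, labels)          # loss relating activations to output instances
          optimizer.zero_grad()
          loss.backward()                          # gradient vectors for each server layer
          optimizer.step()
          return activations.grad                  # gradient sent back to the device partition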
  • Publication number: 20250086483
    Abstract: Edge-masking guided node pruning is performed by masking at least one edge among a plurality of edges of a trained model to produce a masked model; initializing the masked model; training the masked model; detecting, from among a plurality of channels of the masked model, each channel among the plurality of channels including a set of edges among the plurality of edges, at least one zero channel in which each edge among the set of edges is masked; determining, from among a plurality of nodes of the masked model, each node corresponding to two channels among the plurality of channels, at least one removable node in which the corresponding two channels are zero channels; and pruning the masked model to remove the removable nodes, resulting in a pruned model. (See the sketch after this entry.)
    Type: Application
    Filed: December 21, 2022
    Publication date: March 13, 2025
    Inventors: Bailey ECCLES, Blesson VARGHESE, Philip RODGERS, Peter KILPATRICK, Ivor SPENCE
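    Sketch: an illustrative NumPy toy of zero-channel detection and node pruning on two dense layers; the masks, shapes, and the reading of a node's two channels as one mask column plus one mask row are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      W1 = rng.normal(size=(8, 4))          # layer 1: 8 inputs -> 4 hidden nodes
      W2 = rng.normal(size=(4, 3))          # layer 2: 4 hidden nodes -> 3 outputs
      mask1 = rng.integers(0, 2, W1.shape)  # 0 = masked edge, 1 = kept edge
      mask2 = rng.integers(0, 2, W2.shape)

      # A hidden node's two channels: column j of mask1 (incoming edges) and
      # row j of mask2 (outgoing edges); a zero channel has every edge masked.
      incoming_zero = (mask1 == 0).all(axis=0)
      outgoing_zero = (mask2 == 0).all(axis=1)
      removable = incoming_zero & outgoing_zero  # both channels zero -> prunable node

      keep = ~removable
      W1_pruned = (W1 * mask1)[:, keep]  # remove the removable hidden nodes
      W2_pruned = (W2 * mask2)[keep, :]
      print("removable nodes:", np.flatnonzero(removable))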
  • Publication number: 20250077887
    Abstract: Collaborative training with compressed transmissions is performed by partitioning a plurality of layers of a neural network model into a device partition and a server partition; combining a plurality of encoding layers of an auto-encoder neural network with the device partition, wherein the largest encoding layer among the plurality of encoding layers is adjacent to the layer of the device partition bordering the server partition; combining a plurality of decoding layers of the auto-encoder neural network with the server partition, wherein the largest decoding layer among the plurality of decoding layers is adjacent to the layer of the server partition bordering the device partition; transmitting, to a computation device, the device partition combined with the plurality of encoding layers; and training, collaboratively with the computation device through a network, the neural network model. (See the sketch after this entry.)
    Type: Application
    Filed: December 12, 2022
    Publication date: March 6, 2025
    Inventors: Di WU, Blesson VARGHESE, Philip RODGERS, Rehmat ULLAH, Peter KILPATRICK, Ivor SPENCE
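    Sketch: a hypothetical PyTorch layout of the auto-encoder bottleneck at the split point; the layer sizes and module names are illustrative assumptions.
      import torch.nn as nn

      device_partition = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
      server_partition = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

      # The encoder shrinks the 256-dim border activation before transmission;
      # its largest layer (256 -> 64) sits next to the device partition's border layer.
      encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))

      # The decoder restores 256 dims on the server; its largest layer (64 -> 256)
      # sits next to the server partition's border layer.
      decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256))

      device_side = nn.Sequential(device_partition, encoder)  # transmitted to the device
      server_side = nn.Sequential(decoder, server_partition)  # retained on the server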
  • Publication number: 20240394555
    Abstract: Neural networks are collaboratively trained with parallel operations by performing operations in a plurality of consecutive time periods, including a first plurality of consecutive time periods during which the server receives a set of activations, applies the server partition to the set of activations, applies a loss function to the resulting set of output instances, and computes a set of gradient vectors, and a second plurality of consecutive time periods during which the server transmits a set of gradient vectors. (See the sketch after this entry.)
    Type: Application
    Filed: November 11, 2022
    Publication date: November 28, 2024
    Inventors: Zihan ZHANG, Blesson VARGHESE, Philip RODGERS, Ivor SPENCE, Peter KILPATRICK
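    Sketch: a toy Python pipeline in which gradient transmission overlaps the next compute period; the queues and the stand-in arithmetic for forward/backward are assumptions.
      import queue
      import threading

      recv_q = queue.Queue()  # activations arriving from devices
      send_q = queue.Queue()  # gradient vectors awaiting transmission

      def compute_periods():
          # First plurality of time periods: receive activations, apply the
          # server partition, apply the loss, compute gradients, queue them.
          for _ in range(4):
              activations = recv_q.get()
              gradients = activations * 2  # stand-in for forward + backward
              send_q.put(gradients)
          send_q.put(None)  # end-of-stream marker

      def transmit_periods():
          # Second plurality of time periods: transmit gradients while the
          # compute thread already works on the next set of activations.
          while (grad := send_q.get()) is not None:
              print("transmitting gradient", grad)

      for a in range(4):
          recv_q.put(a)
      threading.Thread(target=compute_periods).start()
      transmit_periods()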
  • Publication number: 20240303478
    Abstract: Cooperative training migration is performed by training, cooperatively with a computational device through a network, a neural network model; creating, during the iterations of training, a data checkpoint that includes the gradient values and the weight values of the server partition, the loss value, and an optimizer state; receiving, during the iterations of training, a migration notice that includes an identifier of a second edge server; and transferring, during the iterations of training, the data checkpoint to the second edge server. (See the sketch after this entry.)
    Type: Application
    Filed: March 23, 2022
    Publication date: September 12, 2024
    Inventors: Rehmat ULLAH, Di WU, Paul HARVEY, Peter KILPATRICK, Ivor SPENCE, Blesson VARGHESE
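    Sketch: a hypothetical shape for the data checkpoint and its transfer on a migration notice; the field names, InMemoryTransport, and the notice layout are assumptions.
      import io
      import torch

      class InMemoryTransport:
          # Stand-in for the real network transport between edge servers.
          def __init__(self):
              self.inbox = {}
          def send(self, server_id, payload):
              self.inbox[server_id] = payload  # the second edge server resumes from this

      def make_checkpoint(server_partition, optimizer, loss):
          # Snapshot taken during the training iterations.
          return {
              "weights": server_partition.state_dict(),
              "gradients": {n: p.grad for n, p in server_partition.named_parameters()},
              "loss": loss.item(),
              "optimizer_state": optimizer.state_dict(),
          }

      def on_migration_notice(notice, checkpoint, transport):
          # The migration notice carries the identifier of the second edge server.
          buffer = io.BytesIO()
          torch.save(checkpoint, buffer)
          transport.send(notice["target_edge_server_id"], buffer.getvalue())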
  • Publication number: 20230016827
    Abstract: Adaptive offloading of federated learning is performed by partitioning, for each of a plurality of computational devices, a plurality of layers of a neural network model into a device partition and a server partition based on a computational capability attribute and a network bandwidth attribute of the computational device; training, cooperatively with each computational device through the network, the neural network model; and aggregating the updated weight values of the neural network model instances received from the plurality of computational devices to generate an updated neural network model. (See the sketch after this entry.)
    Type: Application
    Filed: November 11, 2021
    Publication date: January 19, 2023
    Inventors: Di WU, Rehmat ULLAH, Paul HARVEY, Peter KILPATRICK, Ivor SPENCE, Blesson VARGHESE
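    Sketch: an illustrative split-point heuristic plus federated averaging; the scoring formula, layer count, and toy weights are assumptions, not the patented partitioning method.
      import numpy as np

      NUM_LAYERS = 10

      def split_point(compute_capability, bandwidth_mbps):
          # More capable, better-connected devices keep more layers locally.
          score = 0.5 * compute_capability + 0.5 * min(bandwidth_mbps / 100.0, 1.0)
          return max(1, int(score * NUM_LAYERS))

      def aggregate(updates):
          # Federated averaging of the weight values returned by the devices.
          return np.mean(np.stack(updates), axis=0)

      devices = [{"compute": 0.9, "bw": 80}, {"compute": 0.3, "bw": 20}]
      for d in devices:
          k = split_point(d["compute"], d["bw"])
          print(f"device keeps layers 0..{k - 1}; server runs layers {k}..{NUM_LAYERS - 1}")
      print("aggregated weights:", aggregate([np.ones(4), np.zeros(4)]))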