Patents by Inventor Deepthi Karkada

Deepthi Karkada has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966843
    Abstract: Methods, apparatus, systems and articles of manufacture for distributed training of a neural network are disclosed. An example apparatus includes a neural network trainer to select a plurality of training data items from a training data set based on a toggle rate of each item in the training data set. A neural network parameter memory is to store neural network training parameters. A neural network processor is to generate training data results from distributed training over multiple nodes of the neural network using the selected training data items and the neural network training parameters. The neural network trainer is to synchronize the training data results and to update the neural network training parameters. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: April 23, 2024
    Assignee: Intel Corporation
    Inventors: Meenakshi Arunachalam, Arun Tejusve Raghunath Rajan, Deepthi Karkada, Adam Procter, Vikram Saletore
  • Publication number: 20220309349
    Abstract: Methods, apparatus, systems and articles of manufacture for distributed training of a neural network are disclosed. An example apparatus includes a neural network trainer to select a plurality of training data items from a training data set based on a toggle rate of each item in the training data set. A neural network parameter memory is to store neural network training parameters. A neural network processor is to generate training data results from distributed training over multiple nodes of the neural network using the selected training data items and the neural network training parameters. The neural network trainer is to synchronize the training data results and to update the neural network training parameters.
    Type: Application
    Filed: June 13, 2022
    Publication date: September 29, 2022
    Inventors: Meenakshi Arunachalam, Arun Tejusve Raghunath Rajan, Deepthi Karkada, Adam Procter, Vikram Saletore
  • Patent number: 11029971
    Abstract: Systems, apparatuses and methods may provide for technology that identifies a first set of compute nodes and a second set of compute nodes, wherein the first set of compute nodes execute more slowly than the second set of compute nodes. The technology may also automatically determine a compute node configuration that results in a relatively low difference in completion time between the first set of compute nodes and the second set of compute nodes with respect to a neural network workload. In an example, the technology applies the compute node configuration to an execution of the neural network workload on one or more nodes in the first set of compute nodes and one or more nodes in the second set of compute nodes. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Meenakshi Arunachalam, Kushal Datta, Vikram Saletore, Vishal Verma, Deepthi Karkada, Vamsi Sripathi, Rahul Khanna, Mohan Kumar
  • Patent number: 10922610
    Abstract: Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Adam Procter, Vikram Saletore, Deepthi Karkada, Meenakshi Arunachalam
  • Publication number: 20190155620
    Abstract: Systems, apparatuses and methods may provide for technology that identifies a first set of compute nodes and a second set of compute nodes, wherein the first set of compute nodes execute more slowly than the second set of compute nodes. The technology may also automatically determine a compute node configuration that results in a relatively low difference in completion time between the first set of compute nodes and the second set of compute nodes with respect to a neural network workload. In an example, the technology applies the compute node configuration to an execution of the neural network workload on one or more nodes in the first set of compute nodes and one or more nodes in the second set of compute nodes.
    Type: Application
    Filed: January 28, 2019
    Publication date: May 23, 2019
    Inventors: Meenakshi Arunachalam, Kushal Datta, Vikram Saletore, Vishal Verma, Deepthi Karkada, Vamsi Sripathi, Rahul Khanna, Mohan Kumar
  • Publication number: 20190080233
    Abstract: Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier.
    Type: Application
    Filed: September 14, 2017
    Publication date: March 14, 2019
    Inventors: Adam Procter, Vikram Saletore, Deepthi Karkada, Meenakshi Arunachalam
  • Publication number: 20190042934
    Abstract: Methods, apparatus, systems and articles of manufacture for distributed training of a neural network are disclosed. An example apparatus includes a neural network trainer to select a plurality of training data items from a training data set based on a toggle rate of each item in the training data set. A neural network parameter memory is to store neural network training parameters. A neural network processor is to generate training data results from distributed training over multiple nodes of the neural network using the selected training data items and the neural network training parameters. The neural network trainer is to synchronize the training data results and to update the neural network training parameters.
    Type: Application
    Filed: December 1, 2017
    Publication date: February 7, 2019
    Inventors: Meenakshi Arunachalam, Arun Tejusve Raghunath Rajan, Deepthi Karkada, Adam Procter, Vikram Saletore
  • Publication number: 20170286252
    Abstract: Examples may include techniques to indicate behavior of a data center. A data center is monitored to collect operating information, and one or more models representing the behavior of the data center are built based on the collected operating information. Predicted behavior of the data center in supporting a workload under different operating scenarios is then indicated using the one or more built models, to facilitate resource allocation and scheduling for the workload supported by the data center. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: April 1, 2016
    Publication date: October 5, 2017
    Inventors: Rameshkumar G. Illikkal, Sajan K. Govindan, Deepthi Karkada, Sandeep Pal, Patrick J. Holmes
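
The data-selection flow described in patent 11966843 (and its related publications above) can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the patented implementation: "toggle rate" is read here as the fraction of bit flips across an item's raw bytes, keeping the low-rate half is an arbitrary policy choice, and the trainer, parameter memory, and processor of the abstract are reduced to plain functions around a data-parallel least-squares loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def toggle_rate(features: np.ndarray) -> float:
    """Fraction of bits that flip between consecutive bytes of the raw data.

    One plausible reading of "toggle rate": switching activity when the item
    is streamed through hardware. This definition is an assumption.
    """
    raw = np.frombuffer(features.tobytes(), dtype=np.uint8)
    flips = np.bitwise_xor(raw[:-1], raw[1:])
    return float(np.unpackbits(flips).mean())

def select_items(items, keep_fraction=0.5):
    """Trainer step: keep the lowest-toggle-rate fraction of the data set."""
    rates = np.array([toggle_rate(x) for x, _ in items])
    keep = np.argsort(rates)[: max(1, int(len(items) * keep_fraction))]
    return [items[i] for i in keep]

def local_gradient(w, shard):
    """One node's least-squares gradient on its shard: 2 X^T (Xw - y) / n."""
    X = np.stack([x for x, _ in shard])
    y = np.array([t for _, t in shard])
    return 2.0 * X.T @ (X @ w - y) / len(shard)

# Toy (features, target) items; select by toggle rate, shard over 4 "nodes".
data = select_items([(rng.normal(size=4), rng.normal()) for _ in range(64)])
shards = [data[i::4] for i in range(4)]
w = np.zeros(4)                                     # shared parameter memory
for _ in range(100):
    grads = [local_gradient(w, s) for s in shards]  # per-node training results
    w -= 0.05 * np.mean(grads, axis=0)              # synchronize, then update
print("trained weights:", w)
```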
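
The balancing idea in patent 11029971 amounts to giving slower nodes proportionally less of the workload so that both node sets finish at nearly the same time. The sketch below is a minimal illustration under assumed node names and throughput figures; the patent determines the configuration automatically, whereas this version simply sizes each node's share from a measured items-per-second rate.

```python
def balance_workload(throughputs: dict[str, float], total_items: int) -> dict[str, int]:
    """Assign items proportionally to throughput (items/sec), so each node's
    completion time (share / throughput) is roughly equal across nodes."""
    total_rate = sum(throughputs.values())
    shares = {n: round(total_items * r / total_rate) for n, r in throughputs.items()}
    # Absorb any rounding drift into the fastest node so shares sum exactly.
    drift = total_items - sum(shares.values())
    shares[max(throughputs, key=throughputs.get)] += drift
    return shares

# Hypothetical slow and fast node sets with assumed throughputs (items/sec).
nodes = {"slow-0": 120.0, "slow-1": 130.0, "fast-0": 400.0, "fast-1": 410.0}
for name, n in balance_workload(nodes, total_items=4096).items():
    print(f"{name}: {n} items -> {n / nodes[name]:.2f}s")
```

With these sample figures every node finishes in about 3.9 seconds, i.e. a low difference in completion time between the slow and fast sets, which is the property the abstract targets.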
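
Patent 10922610's policy can be sketched as a two-window loop: measure how long each layer blocks on parameter synchronization in a first window, then impede (defer) synchronization for the worst offenders in the second. The BarrierPolicy class, the 10 ms threshold, and the sample timings below are assumptions for illustration only; a real trainer would gather the blockage measurements from instrumented synchronization calls.

```python
class BarrierPolicy:
    """Per-layer synchronization barrier policy driven by measured blockage."""

    def __init__(self, threshold_s: float = 0.010):
        self.threshold_s = threshold_s
        self.barred: set[str] = set()  # layers whose sync is impeded next window

    def record_window(self, blockage_s: dict[str, float]) -> None:
        """First timing measurement: seconds each layer spent blocked on
        synchronization during the window just finished. Layers over the
        threshold receive a synchronization barrier for the next window."""
        self.barred = {layer for layer, t in blockage_s.items()
                       if t > self.threshold_s}

    def should_sync(self, layer: str) -> bool:
        """Second window: synchronize only layers without a barrier."""
        return layer not in self.barred

policy = BarrierPolicy(threshold_s=0.010)
# Pretend per-layer blockage measured during window 1 (seconds).
policy.record_window({"conv1": 0.002, "conv2": 0.031, "fc": 0.004})
for layer in ("conv1", "conv2", "fc"):
    action = "synchronize" if policy.should_sync(layer) else "defer (barrier)"
    print(f"{layer}: {action}")
```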
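
Publication 20170286252's monitor-model-predict loop can be illustrated with ordinary least squares. The telemetry fields (CPU and network utilization against p99 latency), the sample values, and the linear model form are all assumptions; the sketch only shows the flow the abstract describes: collect operating information, build a model, and indicate predicted behavior for candidate operating scenarios.

```python
import numpy as np

# Collected operating information: (cpu_util, net_util) -> observed p99 latency (ms).
samples = np.array([
    [0.20, 0.10, 12.0],
    [0.40, 0.30, 18.0],
    [0.60, 0.50, 27.0],
    [0.80, 0.70, 44.0],
])
# Build the behavior model: latency ~ a*cpu + b*net + c, fit by least squares.
X = np.column_stack([samples[:, 0], samples[:, 1], np.ones(len(samples))])
coef, *_ = np.linalg.lstsq(X, samples[:, 2], rcond=None)

def predict_latency(cpu: float, net: float) -> float:
    """Indicate predicted behavior for a candidate operating scenario."""
    return float(np.array([cpu, net, 1.0]) @ coef)

# Compare candidate scenarios before allocating resources to a workload.
for cpu, net in [(0.5, 0.4), (0.9, 0.8)]:
    print(f"cpu={cpu:.0%} net={net:.0%} -> ~{predict_latency(cpu, net):.1f} ms p99")
```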