Patents by Inventor Dharma Kiritkumar SHUKLA

Dharma Kiritkumar SHUKLA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230396682
    Abstract: The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resuming processing on a destination GPU at the same checkpointed state. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 7, 2023
    Publication date: December 7, 2023
    Inventors: Dharma Kiritkumar SHUKLA, Muthian SIVATHANU, Lu XUN, Rimma Vladimirovna NEHME
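    Illustrative sketch (not from the patent): a minimal checkpoint/restore flow for a DLT job, assuming a PyTorch-style training loop. The function names, checkpoint layout, and shared path below are hypothetical.

      import torch

      def save_checkpoint(model, optimizer, step, path="/shared/dlt_job.ckpt"):
          """Capture GPU-resident data (model parameters, optimizer state) plus
          host-side progress so the job can be migrated to another node."""
          torch.save(
              {
                  "model": model.state_dict(),          # GPU data: model parameters
                  "optimizer": optimizer.state_dict(),  # GPU data: optimizer state
                  "step": step,                         # host state: training progress
              },
              path,
          )

      def restore_checkpoint(model, optimizer, path="/shared/dlt_job.ckpt", device="cuda:0"):
          """On the destination node, load the captured state onto the destination
          GPU and resume processing from the same checkpointed step."""
          ckpt = torch.load(path, map_location=device)
          model.load_state_dict(ckpt["model"])
          optimizer.load_state_dict(ckpt["optimizer"])
          return ckpt["step"]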
  • Patent number: 11722573
    Abstract: The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resuming processing on a destination GPU at the same checkpointed state.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: August 8, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dharma Kiritkumar Shukla, Muthian Sivathanu, Lu Xun, Rimma Vladimirovna Nehme
  • Publication number: 20230236837
    Abstract: The disclosure herein describes elastically managing the execution of workers of multi-worker workloads on accelerator devices. A first worker of a workload is executed on an accelerator device during a first time interval. A first context switch point is identified when the first worker is in a first worker state. At the identified context switch point, a first memory state of the first worker is stored in a host memory and the accelerator device is configured with a second memory state of a second worker of the workload. The second worker is executed during a second time interval, and a second context switch point is identified at the end of the second time interval when the second worker is in a state that is equivalent to the first worker state. During the intervals, collective communication operations between the workers are accumulated and, at the second context switch point, the accumulated operations are performed. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 30, 2022
    Publication date: July 27, 2023
    Inventors: Muthian SIVATHANU, Srinidhi VISWANATHA, Bhargav GULAVANI, Dharma Kiritkumar SHUKLA, Rimma Vladimirovna NEHME, Amey AGRAWAL, Ramachandran RAMJEE, Kaustubh WELANKAR, Ravi Shreyas ANUPINDI
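    Illustrative sketch (not from the patent): time-slicing two workers on one accelerator with deferred collectives, assuming PyTorch-style tensors. The class and method names are hypothetical and only mirror the flow described in the abstract above.

      import torch

      class ElasticWorkerSlot:
          def __init__(self, device="cuda:0"):
              self.device = device
              self.pending_collectives = []  # collective ops accumulated between switch points

          def context_switch(self, out_worker_state, in_worker_state):
              """At a context switch point, move the outgoing worker's tensors to
              host memory and load the incoming worker's tensors onto the device."""
              host_copy = {k: t.to("cpu") for k, t in out_worker_state.items()}
              device_copy = {k: t.to(self.device) for k, t in in_worker_state.items()}
              return host_copy, device_copy

          def defer_collective(self, op, tensor):
              """Record a collective operation (e.g., an all-reduce) instead of running it now."""
              self.pending_collectives.append((op, tensor))

          def flush_collectives(self):
              """At the next switch point, once both workers have reached an equivalent
              state, perform the accumulated collective operations."""
              for op, tensor in self.pending_collectives:
                  op(tensor)
              self.pending_collectives.clear()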
  • Publication number: 20220318674
    Abstract: The disclosure herein describes managing artificial intelligence (AI) workloads in a cloud infrastructure platform. A set of distributed infrastructure resources are integrated into the cloud infrastructure platform via native support interfaces. AI workloads are received from a plurality of tenants, wherein the AI workloads include training workloads and inferencing workloads, and resource subsets of the set of distributed infrastructure resources are assigned to the received AI workloads. The received AI workloads are scheduled for execution on the assigned resource subsets and, based on that scheduling, are executed on those subsets. The described cloud infrastructure platform provides efficient, secure execution of AI workloads for many different tenants and enables the flexible use of a wide variety of both third-party and first-party infrastructure resources. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 28, 2021
    Publication date: October 6, 2022
    Inventors: Dharma Kiritkumar SHUKLA, Rimma Vladimirovna NEHME, Pankaj SHARMA, Shreshth SINGHAL, Vipul Arunkant MODI, Muthian SIVATHANU, Atul KATIYAR
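    Illustrative sketch (not from the patent): assigning resource subsets to tenant AI workloads and dispatching them, reduced to plain data structures. All class and field names are hypothetical.

      from dataclasses import dataclass, field

      @dataclass
      class Workload:
          tenant: str
          kind: str          # "training" or "inferencing"
          gpus_needed: int

      @dataclass
      class ResourcePool:
          free_gpus: list = field(default_factory=lambda: [f"gpu{i}" for i in range(8)])

          def assign(self, workload):
              """Carve a resource subset out of the pool for one workload."""
              if len(self.free_gpus) < workload.gpus_needed:
                  return None  # defer until resources free up
              return [self.free_gpus.pop() for _ in range(workload.gpus_needed)]

      pool = ResourcePool()
      queue = [Workload("tenant-a", "training", 4), Workload("tenant-b", "inferencing", 1)]
      for w in queue:
          subset = pool.assign(w)
          if subset:
              print(f"executing {w.kind} workload for {w.tenant} on {subset}")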
  • Publication number: 20220318052
    Abstract: The disclosure herein describes scheduling execution of artificial intelligence (AI) workloads in a cloud infrastructure platform. A global scheduler receives AI workloads associated with resource ticket values. The scheduler distributes the AI workloads to nodes based on balancing resource ticket values. Local schedulers of the nodes schedule AI workloads on resources based on the resource ticket values of the AI workloads. Based on scheduling the AI workloads, coordinator services of the local schedulers execute the distributed AI workloads on the infrastructure resources of the nodes. The disclosure further describes scheduling AI workloads based on priority tiers. A scheduler receives AI workloads, and each AI workload is associated with a priority tier indicative of a preemption priority while being executed. The AI workloads are scheduled for execution on a distributed set of nodes based on the priority tiers and are then executed based on that scheduling. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 28, 2021
    Publication date: October 6, 2022
    Inventors: Muthian SIVATHANU, Atul KATIYAR, Dharma Kiritkumar SHUKLA, Rimma Vladimirovna NEHME, Shreshth SINGHAL, Pankaj SHARMA, Nipun KWATRA, Ramachandran RAMJEE
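    Illustrative sketch (not from the patent): a greedy, ticket-balanced distribution of AI workloads across nodes, assuming each workload's "resource ticket value" is a single numeric weight. The node and workload structures are hypothetical.

      import heapq

      def distribute_by_tickets(workloads, node_names):
          """Send each workload (largest ticket value first) to the node whose
          accumulated ticket value is currently smallest, balancing the totals."""
          heap = [(0, name) for name in node_names]  # (total tickets, node)
          heapq.heapify(heap)
          placement = {}
          for name, tickets in sorted(workloads, key=lambda w: -w[1]):
              total, node = heapq.heappop(heap)
              placement[name] = node
              heapq.heappush(heap, (total + tickets, node))
          return placement

      # Example: three workloads with ticket values spread over two nodes.
      print(distribute_by_tickets([("job-a", 30), ("job-b", 20), ("job-c", 10)],
                                  ["node-1", "node-2"]))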
  • Publication number: 20220308917
    Abstract: The disclosure herein describes platform-level checkpointing for deep learning (DL) jobs. The checkpointing is performed through capturing two kinds of state data: (i) GPU state (device state), and (ii) CPU state (host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by the libraries such as DNN, Blas, etc.). Only a fraction of the GPU memory is copied because the checkpointing is done in a domain-aware manner. The “active” memory contains useful data like model parameters. To be able to capture the useful data, memory management is controlled to identify which parts of the memory are active. Also, to restore the destination GPU to the same context/state, a mechanism is used to capture such state-changing events on an original GPU and replay them on a destination GPU. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 26, 2021
    Publication date: September 29, 2022
    Inventors: Muthian SIVATHANU, Srinidhi VISWANATHA, Dharma Kiritkumar SHUKLA, Nipun KWATRA, Ramachandran RAMJEE, Rimma Vladimirovna NEHME, Pankaj SHARMA, Bhalakumaaran Erode RANGANATHAN, Vaibhav SHARMA
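    Illustrative sketch (not from the patent): the two ideas above under simplified assumptions, (1) tracking which GPU allocations are "active" so only that fraction of memory is checkpointed, and (2) logging state-changing events so they can be replayed on a destination GPU. The tracker and event log are hypothetical; no real GPU allocator or library is hooked.

      class DomainAwareTracker:
          def __init__(self):
              self.active = {}     # name -> buffer considered useful (e.g., model parameters)
              self.event_log = []  # ordered state-changing events (handle creation, etc.)

          def register_active(self, name, buffer):
              """Mark an allocation as active so checkpointing copies only it."""
              self.active[name] = buffer

          def record_event(self, event_name, *args):
              """Capture a state-changing call (e.g., creating a library handle)."""
              self.event_log.append((event_name, args))

          def checkpoint(self):
              """Copy only the active fraction of memory, not the whole device."""
              return {"memory": dict(self.active), "events": list(self.event_log)}

          def restore(self, snapshot, replay_fn):
              """On the destination GPU, restore the active buffers and replay the
              captured events to rebuild the same context/state."""
              self.active = dict(snapshot["memory"])
              self.event_log = list(snapshot["events"])
              for event_name, args in self.event_log:
                  replay_fn(event_name, *args)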
  • Publication number: 20220311832
    Abstract: The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resuming processing on a destination GPU at the same checkpointed state.
    Type: Application
    Filed: June 25, 2021
    Publication date: September 29, 2022
    Inventors: Dharma Kiritkumar SHUKLA, Muthian SIVATHANU, Lu XUN, Rimma Vladimirovna NEHME