Patents by Inventor Rimma Vladimirovna Nehme

Rimma Vladimirovna Nehme has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12353908
    Abstract: The disclosure herein describes scheduling execution of artificial intelligence (AI) workloads in a cloud infrastructure platform. A global scheduler receives AI workloads associated with resource ticket values. The scheduler distributes the AI workloads to nodes based on balancing resource ticket values. Local schedulers of the nodes schedule AI workloads on resources based on the resource ticket values of the AI workloads. Once the AI workloads are scheduled, coordinator services of the local schedulers execute the distributed AI workloads on the infrastructure resources of the nodes. The disclosure further describes scheduling AI workloads based on priority tiers. A scheduler receives AI workloads, and each AI workload is associated with a priority tier indicative of a preemption priority while being executed. The AI workloads are scheduled for execution on a distributed set of nodes based on the priority tiers and are then executed according to that schedule.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: July 8, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muthian Sivathanu, Atul Katiyar, Dharma Kiritkumar Shukla, Rimma Vladimirovna Nehme, Shreshth Singhal, Pankaj Sharma, Nipun Kwatra, Ramachandran Ramjee
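    As a rough illustration only (not the patented method), the two ideas in this abstract — distributing workloads so resource ticket values stay balanced across nodes, and ordering each node's local queue by preemption priority tier — can be sketched in Python. All class names, fields, and the placement policy below are invented for the example:

    ```python
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Workload:
        # Lower tier value = higher priority; tier is the only comparison key,
        # so a node's heap pops the most preemption-sensitive workload first.
        tier: int
        tickets: int = field(compare=False)   # resource ticket value
        name: str = field(compare=False)

    class Node:
        def __init__(self, name):
            self.name = name
            self.load = 0     # sum of ticket values placed on this node
            self.queue = []   # local priority queue, ordered by tier

        def schedule(self, workload):
            self.load += workload.tickets
            heapq.heappush(self.queue, workload)

    def global_schedule(workloads, nodes):
        """Place each workload on the least-loaded node so that total
        ticket values stay roughly balanced across the cluster."""
        for w in sorted(workloads, key=lambda w: -w.tickets):
            target = min(nodes, key=lambda n: n.load)
            target.schedule(w)
        return nodes
    ```

    Scheduling largest ticket values first is a standard greedy balancing heuristic; the patent's actual distribution and preemption logic is not specified here.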
  • Patent number: 12190147
    Abstract: The disclosure herein describes platform-level checkpointing for deep learning (DL) jobs. The checkpointing is performed through capturing two kinds of state data: (i) GPU state (device state), and (ii) CPU state (host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries such as DNN, Blas, etc.). Only a fraction of the GPU memory is copied because the checkpointing is done in a domain-aware manner. The “active” memory contains useful data like model parameters. To be able to capture the useful data, memory management is controlled to identify which parts of the memory are active. Also, to restore the destination GPU to the same context/state, a mechanism is used to capture such state-changing events on an original GPU and replay them on a destination GPU.
    Type: Grant
    Filed: June 26, 2021
    Date of Patent: January 7, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muthian Sivathanu, Srinidhi Viswanatha, Dharma Kiritkumar Shukla, Nipun Kwatra, Ramachandran Ramjee, Rimma Vladimirovna Nehme, Pankaj Sharma, Bhalakumaaran Erode Ranganathan, Vaibhav Sharma
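    The two mechanisms this abstract describes — copying only memory regions known to be "active" (domain-aware checkpointing) and logging context-creating events so they can be replayed on restore — can be sketched with a toy in-memory model. Everything below (the `TrackedDevice` class, its fields, the snapshot format) is a hypothetical stand-in, not the patented implementation:

    ```python
    class TrackedDevice:
        """Toy stand-in for a GPU: named memory regions plus a log of
        context-creating events (e.g., library handle creation)."""
        def __init__(self):
            self.memory = {}      # name -> (data, is_active)
            self.event_log = []   # state-changing calls, replayed on restore
            self.handles = []

        def alloc(self, name, data, active):
            # Memory management is controlled, so each allocation is
            # tagged as active (useful data) or scratch.
            self.memory[name] = (data, active)

        def create_handle(self, kind):
            # Record the event so the same context can be rebuilt elsewhere.
            self.event_log.append(("create_handle", kind))
            self.handles.append(kind)

    def checkpoint(dev):
        # Domain-aware: copy only regions flagged active (model parameters,
        # optimizer state), skipping scratch memory.
        active = {k: d for k, (d, a) in dev.memory.items() if a}
        return {"memory": active, "events": list(dev.event_log)}

    def restore(snapshot):
        dev = TrackedDevice()
        for op, kind in snapshot["events"]:   # replay context events
            if op == "create_handle":
                dev.create_handle(kind)
        for name, data in snapshot["memory"].items():
            dev.alloc(name, data, active=True)
        return dev
    ```

    The point of the sketch is the split: data is copied directly, while context (handles, streams) is reconstructed by replaying the recorded events rather than by copying opaque driver state.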
  • Patent number: 12166829
    Abstract: The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resumption of processing of a destination GPU at the same checkpointed state.
    Type: Grant
    Filed: June 7, 2023
    Date of Patent: December 10, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dharma Kiritkumar Shukla, Muthian Sivathanu, Lu Xun, Rimma Vladimirovna Nehme
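    The key property in this abstract is that the job resumes on the destination node at exactly the checkpointed state, with no progress lost or repeated. A minimal sketch, with an invented toy training loop and node representation standing in for the real system:

    ```python
    def train(steps, state=None):
        """Toy training loop that can resume from a checkpointed state."""
        if state is None:
            state = {"step": 0, "loss": 100.0}
        while state["step"] < steps:
            state["step"] += 1
            state["loss"] *= 0.9   # pretend one optimization step
        return state

    def migrate(state, source, destination):
        # Checkpoint on the source node, ship the snapshot, and resume on
        # the destination at the same step -- no iterations are redone.
        snapshot = dict(state)           # capture device + host state
        destination["jobs"].append(snapshot)
        source["jobs"].remove(state)
        return snapshot
    ```

    A run that trains 5 steps on the source, migrates, then continues to step 10 on the destination produces the same result as an uninterrupted 10-step run.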
  • Patent number: 11722573
    Abstract: The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resumption of processing of a destination GPU at the same checkpointed state.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: August 8, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dharma Kiritkumar Shukla, Muthian Sivathanu, Lu Xun, Rimma Vladimirovna Nehme
  • Patent number: 8326825
    Abstract: Embodiments are directed to determining optimal partition configurations for distributed database data and to implementing parallel query optimization memo data structure to improve partition configuration cost estimation efficiency. In an embodiment, a computer system accesses a portion of database data and various database queries for a given database. The computer system determines, based on the accessed database data and database queries, a partition configuration search space which includes multiple feasible partition configurations for the database data and a workload of queries expected to be executed on that data. The computer system performs a branch and bound search in the partition configuration search space to determine which data partitioning path has the lowest partitioning cost. The branch and bound search is performed according to branch and bound search policies. The computer system also outputs the partition configuration with the determined lowest partitioning cost.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: December 4, 2012
    Assignee: Microsoft Corporation
    Inventors: Rimma Vladimirovna Nehme, Nicolas Bruno
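    The branch and bound search this abstract describes can be illustrated with a generic sketch: enumerate a partitioning choice per table, and prune any branch whose partial cost already meets or exceeds the best complete configuration found so far (valid when the cost function never decreases as choices are added). The table names, column choices, and additive cost model below are invented for the example; the patent's actual cost estimation uses the query optimizer's memo structure:

    ```python
    def branch_and_bound(tables, columns, cost):
        """Search partition configurations (one partitioning column per
        table), pruning branches via a lower bound on total cost."""
        best_cost, best_cfg = float("inf"), None

        def search(prefix):
            nonlocal best_cost, best_cfg
            partial = cost(prefix)
            if partial >= best_cost:          # bound: prune this branch
                return
            if len(prefix) == len(tables):    # complete configuration
                best_cost, best_cfg = partial, dict(zip(tables, prefix))
                return
            for col in columns[tables[len(prefix)]]:   # branch on next table
                search(prefix + (col,))

        search(())
        return best_cfg, best_cost
    ```

    With a monotone cost function the search returns the same answer as exhaustive enumeration while typically evaluating far fewer configurations.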
  • Publication number: 20120117065
    Abstract: Embodiments are directed to determining optimal partition configurations for distributed database data and to implementing parallel query optimization memo data structure to improve partition configuration cost estimation efficiency. In an embodiment, a computer system accesses a portion of database data and various database queries for a given database. The computer system determines, based on the accessed database data and database queries, a partition configuration search space which includes multiple feasible partition configurations for the database data and a workload of queries expected to be executed on that data. The computer system performs a branch and bound search in the partition configuration search space to determine which data partitioning path has the lowest partitioning cost. The branch and bound search is performed according to branch and bound search policies. The computer system also outputs the partition configuration with the determined lowest partitioning cost.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Rimma Vladimirovna Nehme, Nicolas Bruno
  • Publication number: 20100241766
    Abstract: The min-repro finding technique described herein is designed to ease and speed up the task of finding a min-repro, a minimum configuration that reproduces a problem in database-related products. Specifically, in one embodiment the technique simplifies transformations in order to find one or more min-repros. One embodiment provides a high-level script language to automate some sub-tasks and to guide the search for a simpler configuration that reproduces the problem. Yet another embodiment provides record-and-replay functionality and an intuitive representation of results and the search space. These tools can save hours of time for both customers and testers to isolate the problem and can result in faster fixes and large cost savings to organizations.
    Type: Application
    Filed: March 20, 2009
    Publication date: September 23, 2010
    Applicant: Microsoft Corporation
    Inventors: Nicolas Bruno, Rimma Vladimirovna Nehme
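    The core idea of min-repro search — shrink a failing configuration while the problem still reproduces — can be sketched with a greedy simplification loop in the style of delta debugging. This is a generic illustration, not the patented technique; the predicate and the sample configuration below are invented:

    ```python
    def min_repro(config, reproduces):
        """Greedily drop parts of a failing configuration while the problem
        still reproduces. The result is locally minimal: removing any single
        remaining part makes the problem disappear."""
        current = list(config)
        changed = True
        while changed:
            changed = False
            for item in list(current):
                candidate = [x for x in current if x != item]
                if reproduces(candidate):   # still fails without this part?
                    current = candidate     # then the part is not needed
                    changed = True
        return current
    ```

    Each call to `reproduces` corresponds to re-running the failing scenario against a candidate configuration, which is why automating and replaying these sub-tasks (as the abstract describes) saves so much tester time.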