Patents by Inventor Anuj Kalia

Anuj Kalia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230007077
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In one aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, where multiple workloads each execute as a separate instance of the process with a separate configuration, a standby workload can be executed as an additional instance of the process without at least a portion of that separate configuration; upon detecting termination of one of the multiple workloads, the standby workload can be configured to execute using the separate configuration of the instance corresponding to the terminated workload.
    Type: Application
    Filed: September 8, 2022
    Publication date: January 5, 2023
    Inventors: Sanjeev MEHROTRA, Paramvir BAHL, Anuj KALIA
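    A minimal Python sketch of the first aspect, assuming independent, identically distributed workload failures; the function name, parameters, and numbers are illustrative rather than taken from the filing:

    from math import comb

    def replicas_needed(desired: int, p_fail: float, service_level: float) -> int:
        """Smallest n >= desired so that at least `desired` workloads survive
        with probability >= service_level, given per-workload failure probability p_fail."""
        n = desired
        while True:
            p_ok = sum(
                comb(n, k) * (1 - p_fail) ** k * p_fail ** (n - k)
                for k in range(desired, n + 1)
            )
            if p_ok >= service_level:
                return n
            n += 1

    # e.g. 4 workloads desired, 5% failure probability each, 99.9% target service level
    print(replicas_needed(desired=4, p_fail=0.05, service_level=0.999))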
  • Patent number: 11533376
    Abstract: Described are examples for providing cell level migration of physical layer processing in a virtualized base station. A system for operating virtualized base stations includes a plurality of physical layer (PHY) servers within a datacenter and a media access control (MAC) server. Each respective PHY server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to perform physical layer radio access network processing for a cell at the respective PHY server. The MAC server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to migrate the physical layer radio access network processing for the cell from a first server of the plurality of PHY servers to a second server of the plurality of PHY servers within the datacenter at an inter-slot boundary.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: December 20, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anuj Kalia, Ilias Marinos, Daehyeok Kim
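    A minimal Python sketch of the claimed behavior, with a MAC-side controller that applies a requested migration only at the next inter-slot boundary; the class and method names are illustrative assumptions, not from the patent:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PhyServer:
        name: str

        def process_slot(self, cell_id: int, slot: int) -> None:
            print(f"{self.name}: PHY processing for cell {cell_id}, slot {slot}")

    class MacServer:
        def __init__(self, cell_id: int, initial: PhyServer):
            self.cell_id = cell_id
            self.active = initial
            self.pending: Optional[PhyServer] = None  # target of a requested migration

        def request_migration(self, target: PhyServer) -> None:
            self.pending = target  # takes effect only at the next inter-slot boundary

        def run_slot(self, slot: int) -> None:
            if self.pending is not None:  # inter-slot boundary: switch PHY servers here
                self.active, self.pending = self.pending, None
            self.active.process_slot(self.cell_id, slot)

    phy_a, phy_b = PhyServer("phy-a"), PhyServer("phy-b")
    mac = MacServer(cell_id=7, initial=phy_a)
    for slot in range(4):
        mac.run_slot(slot)
        if slot == 1:
            mac.request_migration(phy_b)  # cell 7 moves to phy-b starting with slot 2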
  • Publication number: 20220385577
    Abstract: Aspects of the present disclosure relate to allocating workloads to vRANs via programmable switches at far-edge cloud datacenters. Traditionally, traffic allocation is handled by dedicated servers running load-balancing software. However, rerouting RAN traffic to such servers increases both energy and capital costs, degrades end-to-end performance, and requires additional physical space, all of which are undesirable or even infeasible for a RAN far-edge datacenter. Since switches are located in the path of data traffic, workflow policies can be designed to inspect packet headers of incoming traffic, evaluate real-time network information, determine available vRAN instances, and update the packet headers to steer the incoming traffic for processing. As network conditions change, the workflow policies enable the switch to dynamically redirect workloads to alternative vRANs for processing.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daehyeok KIM, Ilias MARINOS, Anuj KALIA, Manikanta KOTARU
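    A minimal Python sketch of the steering decision such a workflow policy might encode (the real logic would run in the switch's match-action pipeline); the header fields, load table, and threshold are illustrative assumptions:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        cell_id: int
        dst_ip: str

    # real-time load reported per vRAN instance (fraction of capacity in use)
    vran_load = {"10.0.0.1": 0.92, "10.0.0.2": 0.35, "10.0.0.3": 0.48}

    def steer(pkt: Packet, load_threshold: float = 0.8) -> Packet:
        """Rewrite the destination so overloaded vRAN instances are avoided."""
        candidates = {ip: load for ip, load in vran_load.items() if load < load_threshold}
        if candidates:
            pkt.dst_ip = min(candidates, key=candidates.get)  # header rewrite
        return pkt  # if every instance is overloaded, keep the original destination

    print(steer(Packet(cell_id=3, dst_ip="10.0.0.1")))  # -> redirected to 10.0.0.2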
  • Publication number: 20220386302
    Abstract: Aspects of the present disclosure relate to allocating RAN resources among RAN slices according to reinforcement learning techniques. For example, a network slice controller (NSC) may generate a RAN resource allocation and associated expected slice characteristics may be determined for each slice based on the RAN resource allocation. Resources of the RAN may be allocated accordingly, such that resulting actual slice characteristics may be observed and compared to the expected slice characteristics. A reward may be generated for the resource allocation, for example based on a difference between the expected and observed slice characteristics. RAN resource allocation and slice characteristic forecasting may be adapted according to such rewards. As a result, RAN resource allocation generation may improve, even in instances with changing or unknown network conditions.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bozidar RADUNOVIC, Xenofon FOUKAS, Manikanta KOTARU, Anuj KALIA
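    A minimal Python sketch of the reward step described above, where the gap between expected and observed slice characteristics becomes the (negative) reward that drives the controller's updates; slice names, metrics, and values are illustrative assumptions:

    def reward(expected: dict[str, float], observed: dict[str, float]) -> float:
        """Higher (less negative) reward when observed characteristics match the forecast."""
        return -sum(abs(expected[s] - observed[s]) for s in expected)

    allocation = {"slice-embb": 0.6, "slice-urllc": 0.4}  # fractions of RAN resources
    expected = {"slice-embb": 90.0, "slice-urllc": 12.0}  # forecast throughput, Mbps
    observed = {"slice-embb": 75.0, "slice-urllc": 13.0}  # measured after applying the allocation

    r = reward(expected, observed)
    print(f"reward = {r:.1f}")  # fed back to adapt both allocation and forecasting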
  • Publication number: 20220377563
    Abstract: Described are examples for providing a distributed fault-tolerant state store for a virtualized base station. In an aspect, a first server at a datacenter may perform physical layer processing for at least one virtualized base station. While performing the physical layer processing, the first server may generate inter-slot physical layer state data during a first slot. The inter-slot physical layer state data is to be used in a subsequent slot. The first server may periodically transmit the inter-slot physical layer state data to one or more other servers of the plurality of servers within the datacenter. One of the other servers may take over the physical layer processing for the at least one virtualized base station based on the inter-slot physical layer state data, for example, in response to a fault at the first server or a migration of the at least one virtualized base station.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Anuj KALIA, Ilias MARINOS, Daehyeok KIM, Paramvir BAHL
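    A minimal Python sketch of the replication and takeover flow, with the inter-slot state kept in plain dictionaries instead of a real state store or network transport; class, field, and server names are illustrative assumptions:

    class PhyServer:
        def __init__(self, name: str):
            self.name = name
            self.state: dict[int, dict] = {}  # inter-slot PHY state, keyed by cell id

        def process_slot(self, cell_id: int, slot: int) -> None:
            # state produced in this slot that the next slot depends on
            self.state[cell_id] = {"last_slot": slot, "harq": f"harq-after-slot-{slot}"}

        def replicate_to(self, peers: list["PhyServer"]) -> None:
            for peer in peers:  # periodic transfer of inter-slot state
                peer.state.update(self.state)

        def take_over(self, cell_id: int) -> None:
            resume_at = self.state[cell_id]["last_slot"] + 1
            print(f"{self.name} resumes cell {cell_id} at slot {resume_at}")

    primary, backup = PhyServer("phy-1"), PhyServer("phy-2")
    for slot in range(3):
        primary.process_slot(cell_id=5, slot=slot)
        primary.replicate_to([backup])
    backup.take_over(cell_id=5)  # e.g. after a fault at, or migration away from, phy-1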
  • Publication number: 20220377145
    Abstract: Described are examples for providing cell level migration of physical layer processing in a virtualized base station. A system for operating virtualized base stations includes a plurality of physical layer (PHY) servers within a datacenter and a media access control (MAC) server. Each respective PHY server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to perform physical layer radio access network processing for a cell at the respective PHY server. The MAC server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to migrate the physical layer radio access network processing for the cell from a first server of the plurality of PHY servers to a second server of the plurality of PHY servers within the datacenter at an inter-slot boundary.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Anuj KALIA, Ilias MARINOS, Daehyeok KIM
  • Publication number: 20220374262
    Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not be pre-programmed to execute the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Paramvir BAHL, Daehyeok KIM, Anuj KALIA, Alastair WOLMAN
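    A minimal Python sketch of the offloading decision and the per-device lowering of one common task description; the accelerator names, utilization figures, and "codegen" step are illustrative assumptions:

    from typing import Optional

    task = {"name": "ldpc_decode", "ops": ["load", "decode", "store"]}

    # workload status reported by the central processor and each heterogeneous accelerator
    accelerator_util = {"gpu0": 0.85, "fpga0": 0.20, "dsp0": 0.55}

    def pick_accelerator(util: dict, cpu_util: float, offload_above: float = 0.7) -> Optional[str]:
        """Offload only when the CPU is busy; then choose the least-loaded accelerator."""
        if cpu_util < offload_above:
            return None  # keep the task on the central processor
        return min(util, key=util.get)

    def generate_program(task: dict, device: str) -> str:
        # stand-in for generating a device-specific program from one set of instructions
        return f"[{device}] " + " -> ".join(task["ops"])

    target = pick_accelerator(accelerator_util, cpu_util=0.9)
    print(generate_program(task, target) if target else "run on CPU")  # -> [fpga0] load -> decode -> store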
  • Publication number: 20220360624
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In one aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, where multiple workloads each execute as a separate instance of the process with a separate configuration, a standby workload can be executed as an additional instance of the process without at least a portion of that separate configuration; upon detecting termination of one of the multiple workloads, the standby workload can be configured to execute using the separate configuration of the instance corresponding to the terminated workload.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 10, 2022
    Inventors: Sanjeev MEHROTRA, Paramvir BAHL, Anuj KALIA
  • Patent number: 11477275
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In one aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, where multiple workloads each execute as a separate instance of the process with a separate configuration, a standby workload can be executed as an additional instance of the process without at least a portion of that separate configuration; upon detecting termination of one of the multiple workloads, the standby workload can be configured to execute using the separate configuration of the instance corresponding to the terminated workload.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: October 18, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sanjeev Mehrotra, Paramvir Bahl, Anuj Kalia
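    A minimal Python sketch of the standby-workload aspect of this patent: several separately configured instances run alongside one unconfigured standby, which inherits the configuration of whichever workload terminates; all names and configuration fields are illustrative assumptions:

    workloads = {
        "wl-1": {"cell": "A", "port": 9001},  # per-instance ("separate") configuration
        "wl-2": {"cell": "B", "port": 9002},
        "wl-3": {"cell": "C", "port": 9003},
    }
    standby = {"name": "wl-standby", "config": None}  # running, but not yet configured

    def on_termination(terminated: str) -> None:
        """Give the standby the terminated workload's configuration and put it in service."""
        standby["config"] = workloads.pop(terminated)
        workloads[standby["name"]] = standby["config"]
        print(f"{standby['name']} now serving with config {standby['config']}")

    on_termination("wl-2")  # -> wl-standby now serving with config {'cell': 'B', 'port': 9002}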