Patents by Inventor Anuj Kalia

Anuj Kalia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11909813
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In an aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and based on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, a standby workload can be executed as a second instance of the process without at least a portion of the separate configuration used by the multiple workloads, and based on detecting termination of one of multiple workloads, the standby workload can be configured to execute based on the separate configuration of the separate instance of the process corresponding to the one of the multiple workloads.
    Type: Grant
    Filed: September 8, 2022
    Date of Patent: February 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sanjeev Mehrotra, Paramvir Bahl, Anuj Kalia
  • Publication number: 20230412502
    Abstract: Methods and systems for dynamically re-routing layer traffic between different servers with little user-visible disruption and without modifications to the vRAN software stack are provided. For instance, transformations on messages between the L2 and PHY, such as duplication and filtering, enable the system to maintain one or more low-overhead “hot, inactive” PHY clones. A hot, inactive PHY clone may be a duplicate of an operational PHY, where the PHY clone is primed to process a PHY workload of the operational PHY (e.g., “hot”) but is not currently responsible for processing the PHY workload (e.g., low-overhead, inactive). In this way, a PHY workload may be automatically and seamlessly migrated to the hot PHY clone in response to planned downtime (e.g., scheduled maintenance, software upgrades) or unexpected events (e.g., server failures) within the strict transmission time intervals (TTIs) required for processing the PHY workload.
    Type: Application
    Filed: May 26, 2022
    Publication date: December 21, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Anuj KALIA, Daehyeok KIM, Ilias MARINOS, Tao JI, Paramvir BAHL
  • Publication number: 20230388178
    Abstract: Data traffic is communicated between a radio unit (RU) of a cellular network and a virtualized radio access network (vRAN) instance of a vRAN. In response to determining that the vRAN instance has failed to communicate a downlink fronthaul packet to the RU within a threshold timeout interval, a failure notification is sent to a PHY layer failure response function. The failure to communicate the downlink fronthaul packet to the RU within the threshold timeout interval is indicative of a failure of the vRAN instance.
    Type: Application
    Filed: May 28, 2022
    Publication date: November 30, 2023
    Inventors: Daehyeok KIM, Anuj KALIA
  • Publication number: 20230388827
    Abstract: During a first transmission time interval (TTI) of a vRAN, data traffic between a radio unit (RU) of a cellular network and a first vRAN instance of the vRAN is monitored. The first vRAN instance executes on a first server of the vRAN and the first vRAN instance is configured to perform PHY layer processing and L2 processing of the data traffic. Based on the data traffic between the RU of the cellular network and the first vRAN instance during the first TTI, a workload at the first vRAN instance during a second TTI is estimated.
    Type: Application
    Filed: May 28, 2022
    Publication date: November 30, 2023
    Inventors: Daehyeok KIM, Anuj KALIA, Xenofon FOUKAS
  • Publication number: 20230388234
    Abstract: Methods and systems for dynamically re-routing layer traffic between different servers with little user-visible disruption and without modifications to the vRAN software stack are provided. This approach enables operators either to initiate a PHY migration on demand (e.g., during planned maintenance) or to set up automatic migration on unexpected events (e.g., server failures). It is recognized that PHY processing in cellular networks has no hard state that must be migrated. As a result, layer traffic such as the PHY-L2 traffic or L2-PHY traffic can simply be re-routed to a different server. This re-routing mechanism is realized by interposing one or more message controllers (e.g., a middlebox) in a communication channel between the PHY and L2.
    Type: Application
    Filed: May 26, 2022
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Anuj KALIA, Daehyeok KIM, Ilias MARINOS, Tao JI, Nikita LAZAREV, Paramvir BAHL
  • Publication number: 20230388856
    Abstract: A method for utilizing computing resources in a vRAN is described. A predicted resource load is determined for data traffic processing of wireless communication channels served by the vRAN using a trained neural network model. The data traffic processing comprises at least one of PHY data processing or MAC processing for a 5G RAN. Computing resources are allocated for the data traffic processing based on the predicted resource load. Wireless parameter limits are determined for the wireless communication channels that constrain utilization of the allocated computing resources using the trained neural network model, including setting one or more of a maximum number of radio resource units per timeslot or a maximum MCS index for the wireless parameter limits. The data traffic processing is performed using the wireless parameter limits to reduce load spikes that cause a violation of real-time deadlines for the data traffic processing.
    Type: Application
    Filed: May 26, 2022
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yu YAN, Anuj KALIA, Sanjeev MEHROTRA, Paramvir BAHL
  • Publication number: 20230007077
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In an aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and based on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, a standby workload can be executed as a second instance of the process without at least a portion of the separate configuration used by the multiple workloads, and based on detecting termination of one of multiple workloads, the standby workload can be configured to execute based on the separate configuration of the separate instance of the process corresponding to the one of the multiple workloads.
    Type: Application
    Filed: September 8, 2022
    Publication date: January 5, 2023
    Inventors: Sanjeev MEHROTRA, Paramvir BAHL, Anuj KALIA
  • Patent number: 11533376
    Abstract: Described are examples for providing cell level migration of physical layer processing in a virtualized base station. A system for operating virtualized base stations includes a plurality of physical layer (PHY) servers within a datacenter and a media access control (MAC) server. Each respective PHY server includes: a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to perform physical layer radio access network processing for a cell at the respective PHY server. The MAC server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to migrate the physical layer radio access network processing for the cell from a first server of the plurality of PHY servers to a second server of the plurality of PHY servers within the datacenter at an inter-slot boundary.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: December 20, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anuj Kalia, Ilias Marinos, Daehyeok Kim
  • Publication number: 20220385577
    Abstract: Aspects of the present disclosure relate to allocating workloads to vRANs via programmable switches at far-edge cloud datacenters. Traditionally, traffic allocation is handled by dedicated servers running load-balancing software. However, rerouting RAN traffic to such servers increases both energy and capital costs, degrades end-to-end performance, and requires additional physical space, all of which are undesirable or even infeasible for a RAN far-edge datacenter. Since switches are located in the path of data traffic, workflow policies can be designed to inspect packet headers of incoming traffic, evaluate real-time network information, determine available vRAN instances, and update the packet headers to steer the incoming traffic for processing. As network conditions change, the workflow policies enable the switch to dynamically redirect workloads to alternative vRANs for processing.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daehyeok KIM, Ilias MARINOS, Anuj KALIA, Manikanta KOTARU
  • Publication number: 20220386302
    Abstract: Aspects of the present disclosure relate to allocating RAN resources among RAN slices according to reinforcement learning techniques. For example, a network slice controller (NSC) may generate a RAN resource allocation and associated expected slice characteristics may be determined for each slice based on the RAN resource allocation. Resources of the RAN may be allocated accordingly, such that resulting actual slice characteristics may be observed and compared to the expected slice characteristics. A reward may be generated for the resource allocation, for example based on a difference between the expected and observed slice characteristics. RAN resource allocation and slice characteristic forecasting may be adapted according to such rewards. As a result, RAN resource allocation generation may improve, even in instances with changing or unknown network conditions.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bozidar RADUNOVIC, Xenofon FOUKAS, Manikanta KOTARU, Anuj KALIA
  • Publication number: 20220377145
    Abstract: Described are examples for providing cell level migration of physical layer processing in a virtualized base station. A system for operating virtualized base stations includes a plurality of physical layer (PHY) servers within a datacenter and a media access control (MAC) server. Each respective PHY server includes: a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to perform physical layer radio access network processing for a cell at the respective PHY server. The MAC server includes a memory storing instructions and at least one processor coupled to the memory. The at least one processor is configured to migrate the physical layer radio access network processing for the cell from a first server of the plurality of PHY servers to a second server of the plurality of PHY servers within the datacenter at an inter-slot boundary.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Anuj KALIA, Ilias MARINOS, Daehyeok KIM
  • Publication number: 20220374262
    Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Paramvir BAHL, Daehyeok KIM, Anuj KALIA, Alastair WOLMAN
  • Publication number: 20220377563
    Abstract: Described are examples for providing a distributed fault-tolerant state store for a virtualized base station. In an aspect, a first server at a datacenter may perform physical layer processing for at least one virtualized base station. While performing the physical layer processing, the first server may generate inter-slot physical layer state data during a first slot. The inter-slot physical layer state data is to be used in a subsequent slot. The first server may periodically transmit the inter-slot physical layer state data to one or more other servers of the plurality of servers within the datacenter. One of the other servers may take over the physical layer processing for the at least one virtualized base station based on the inter-slot physical layer state data, for example, in response to a fault at the first server or a migration of the at least one virtualized base station.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Anuj KALIA, Ilias MARINOS, Daehyeok KIM, Paramvir BAHL
  • Publication number: 20220360624
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In an aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and based on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, a standby workload can be executed as a second instance of the process without at least a portion of the separate configuration used by the multiple workloads, and based on detecting termination of one of multiple workloads, the standby workload can be configured to execute based on the separate configuration of the separate instance of the process corresponding to the one of the multiple workloads.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 10, 2022
    Inventors: Sanjeev MEHROTRA, Paramvir BAHL, Anuj KALIA
  • Patent number: 11477275
    Abstract: Described are examples for deploying workloads in a cloud-computing environment. In an aspect, based on a desired number of workloads of a process to be executed in a cloud-computing environment and based on one or more failure probabilities, an actual number of workloads of the process to execute in the cloud-computing environment to provide a level of service can be determined and deployed. In another aspect, a standby workload can be executed as a second instance of the process without at least a portion of the separate configuration used by the multiple workloads, and based on detecting termination of one of multiple workloads, the standby workload can be configured to execute based on the separate configuration of the separate instance of the process corresponding to the one of the multiple workloads.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: October 18, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sanjeev Mehrotra, Paramvir Bahl, Anuj Kalia
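
A few of the mechanisms described in the abstracts above lend themselves to short, self-contained illustrations. The Python sketches below are illustrative only: every function name, constant, and interface in them is an assumption made for exposition, not text or code from the filings.

Patent 11909813 (and the related filings 20230007077, 20220360624, and patent 11477275) describes deploying more workloads than the desired count, based on failure probabilities, so that a target level of service is still met. A minimal sketch of one way such a count could be computed, assuming independent failures and a binomial survival model:

```python
from math import comb


def workloads_to_deploy(desired: int, failure_prob: float, service_level: float) -> int:
    """Smallest n >= desired such that P(at least `desired` of n workloads survive) >= service_level."""
    survive_prob = 1.0 - failure_prob
    n = desired
    while True:
        # P(X >= desired) for X ~ Binomial(n, survive_prob)
        p_enough_alive = sum(
            comb(n, k) * survive_prob**k * failure_prob**(n - k)
            for k in range(desired, n + 1)
        )
        if p_enough_alive >= service_level:
            return n
        n += 1


# e.g., 4 live workloads needed, 5% failure probability each, 99.9% service level
print(workloads_to_deploy(desired=4, failure_prob=0.05, service_level=0.999))  # -> 7
```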
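
Publications 20230412502 and 20230388234 describe interposing a message controller between L2 and the PHY that duplicates downlink messages to a "hot, inactive" PHY clone, filters the clone's output, and re-routes traffic on migration. A minimal sketch of that duplicate/filter/switch logic, with invented class and method names:

```python
class PhyMessageController:
    """Sits between L2 and PHY; keeps a standby PHY clone primed with the same traffic."""

    def __init__(self, active_phy, standby_phy):
        self.active = active_phy
        self.standby = standby_phy

    def l2_to_phy(self, msg: bytes) -> None:
        # Duplication keeps the inactive clone "hot".
        self.active.handle(msg)
        self.standby.handle(msg)

    def phy_to_l2(self, source, msg: bytes):
        # Filtering: only the active PHY's responses reach L2.
        return msg if source is self.active else None

    def migrate(self) -> None:
        # Swap roles, e.g. for planned maintenance or after a server failure.
        self.active, self.standby = self.standby, self.active


class RecordingPhy:
    """Stand-in PHY that just records the messages it receives."""

    def __init__(self, name: str):
        self.name = name
        self.seen: list[bytes] = []

    def handle(self, msg: bytes) -> None:
        self.seen.append(msg)


phy_a, phy_b = RecordingPhy("phy-a"), RecordingPhy("phy-b")
ctrl = PhyMessageController(active_phy=phy_a, standby_phy=phy_b)
ctrl.l2_to_phy(b"slot 0 downlink")                        # both PHYs receive the message
print(ctrl.phy_to_l2(phy_b, b"uplink from clone"))        # None: clone output is filtered
ctrl.migrate()
print(ctrl.phy_to_l2(phy_b, b"uplink after migration"))   # forwarded to L2
```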
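
Publication 20230388178 describes detecting a failed vRAN instance when it does not communicate a downlink fronthaul packet to the RU within a threshold timeout interval. A minimal watchdog sketch of that check; the class name, callback shape, and 1 ms threshold are assumptions:

```python
import time


class DownlinkWatchdog:
    """Raises a failure notification when no downlink fronthaul packet is sent in time."""

    def __init__(self, threshold_s: float, on_failure):
        self.threshold_s = threshold_s
        self.on_failure = on_failure
        self.last_packet_ts = time.monotonic()

    def packet_sent(self) -> None:
        # Called whenever a downlink fronthaul packet is communicated to the RU.
        self.last_packet_ts = time.monotonic()

    def poll(self) -> None:
        # Called periodically; notifies the failure response function on a missed deadline.
        if time.monotonic() - self.last_packet_ts > self.threshold_s:
            self.on_failure("vRAN instance missed the downlink fronthaul deadline")


watchdog = DownlinkWatchdog(threshold_s=0.001, on_failure=print)  # 1 ms threshold, illustrative
watchdog.packet_sent()
time.sleep(0.002)
watchdog.poll()  # prints the failure notification
```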
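
Publication 20230388827 describes estimating a vRAN instance's workload in the next TTI from the traffic observed in the current TTI. A minimal sketch using an exponentially weighted moving average; the smoothing factor and cycles-per-byte cost are illustrative stand-ins for whatever estimator the filing actually claims:

```python
def estimate_next_tti_load(prev_estimate: float, observed_bytes: int,
                           cycles_per_byte: float = 50.0, alpha: float = 0.3) -> float:
    """Blend the latest TTI's observed traffic into a smoothed workload estimate (CPU cycles)."""
    observed_load = observed_bytes * cycles_per_byte
    return alpha * observed_load + (1.0 - alpha) * prev_estimate


estimate = 0.0
for traffic_bytes in [12_000, 15_000, 9_000]:   # RU <-> vRAN traffic seen in successive TTIs
    estimate = estimate_next_tti_load(estimate, traffic_bytes)
    print(f"estimated workload for the next TTI: {estimate:,.0f} cycles")
```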
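
Publication 20230388856 describes predicting the resource load of PHY/MAC processing and then constraining wireless parameters (maximum resource blocks per slot, maximum MCS index) so that load spikes do not violate real-time deadlines. The sketch below caps both parameters from a predicted load; the linear scaling and all constants are assumptions standing in for the trained neural network model in the abstract:

```python
def wireless_parameter_limits(predicted_load: float, cpu_budget: float,
                              max_prbs: int = 273, max_mcs: int = 28) -> tuple[int, int]:
    """Return (max resource blocks per slot, max MCS index) keeping estimated load within budget."""
    if predicted_load <= cpu_budget:
        return max_prbs, max_mcs             # no throttling needed
    headroom = cpu_budget / predicted_load   # fraction of the predicted load we can afford
    return max(1, int(max_prbs * headroom)), max(0, int(max_mcs * headroom))


print(wireless_parameter_limits(predicted_load=1.4, cpu_budget=1.0))  # throttled limits
print(wireless_parameter_limits(predicted_load=0.6, cpu_budget=1.0))  # (273, 28), unconstrained
```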
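
Patent 11533376 and publication 20220377145 describe a MAC server migrating a cell's physical layer processing between PHY servers at an inter-slot boundary. A minimal sketch of deferring a migration until that boundary so no slot is split across servers; the class and field names are invented:

```python
class MacServer:
    """Tracks which PHY server handles each cell and applies migrations only between slots."""

    def __init__(self, cell_to_phy: dict[str, str]):
        self.cell_to_phy = cell_to_phy                 # current cell -> PHY server assignment
        self.pending_migrations: dict[str, str] = {}

    def request_migration(self, cell: str, target_phy: str) -> None:
        # Record the migration; it takes effect only at the next inter-slot boundary.
        self.pending_migrations[cell] = target_phy

    def on_slot_boundary(self) -> None:
        # Apply pending migrations atomically between slots.
        self.cell_to_phy.update(self.pending_migrations)
        self.pending_migrations.clear()


mac = MacServer({"cell-A": "phy-server-1"})
mac.request_migration("cell-A", "phy-server-2")
print(mac.cell_to_phy["cell-A"])   # still phy-server-1 mid-slot
mac.on_slot_boundary()
print(mac.cell_to_phy["cell-A"])   # phy-server-2 from the next slot on
```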
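
Publication 20220385577 describes workflow policies on a programmable switch that inspect packet headers, evaluate real-time network information, and rewrite headers to steer traffic to an available vRAN instance. In practice this logic would be expressed as match-action rules on the switch; the Python below only models the steering decision, and the header fields and load metric are assumptions:

```python
def steer_packet(header: dict, instances: dict[str, dict]) -> dict:
    """Rewrite the packet's destination to a healthy, lightly loaded vRAN instance."""
    healthy = {name: info for name, info in instances.items() if info["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy vRAN instance available")
    target = min(healthy, key=lambda name: healthy[name]["load"])
    return {**header, "dst": target}


instances = {"vran-1": {"healthy": False, "load": 0.2},
             "vran-2": {"healthy": True, "load": 0.7},
             "vran-3": {"healthy": True, "load": 0.4}}
print(steer_packet({"src": "ru-9", "dst": "vran-1"}, instances))  # steered to vran-3
```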
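
Publication 20220386302 describes generating a reward for a RAN slice resource allocation from the difference between expected and observed slice characteristics. A minimal sketch of one such reward; the negative absolute-error form and the metric names are assumptions, not taken from the filing:

```python
def slice_reward(expected: dict[str, float], observed: dict[str, float]) -> float:
    """Higher reward when observed slice characteristics match the forecast; 0 is best."""
    return -sum(abs(observed[metric] - expected[metric]) for metric in expected)


expected = {"throughput_mbps": 120.0, "latency_ms": 8.0}
observed = {"throughput_mbps": 110.0, "latency_ms": 9.5}
print(slice_reward(expected, observed))  # -11.5
```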
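
Publication 20220374262 describes dynamically allocating RAN processing tasks to heterogeneous accelerators based on workload status. The sketch below only models the dispatch decision; the device names, load metric, and task labels are assumptions, and generating device-specific programs from a single source (also described in the abstract) is not shown:

```python
def pick_executor(task: str, load: dict[str, float], supported: dict[str, set[str]]) -> str:
    """Choose the least-loaded accelerator that supports the task; fall back to the CPU."""
    candidates = [device for device, tasks in supported.items() if task in tasks]
    return min(candidates, key=lambda device: load[device]) if candidates else "cpu"


load = {"cpu": 0.9, "gpu-0": 0.4, "fpga-0": 0.2}
supported = {"gpu-0": {"ldpc_decode", "fft"}, "fpga-0": {"fft"}}
print(pick_executor("fft", load, supported))                 # fpga-0 (least loaded, supports fft)
print(pick_executor("channel_estimation", load, supported))  # cpu fallback
```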
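
Publication 20220377563 describes periodically replicating inter-slot physical layer state to other servers in the datacenter so that one of them can take over processing after a fault or migration. A minimal sketch of that replication; the transport (direct object references here) and the state layout are illustrative assumptions:

```python
from copy import deepcopy


class PhyStateStore:
    """Replicates inter-slot PHY state to peer servers so any of them can take over."""

    def __init__(self, peers: list):
        self.peers = peers
        self.inter_slot_state: dict[int, dict] = {}   # slot number -> replicated state

    def end_of_slot(self, slot: int, state: dict) -> None:
        # The active PHY server pushes the state the next slot depends on to every peer.
        for peer in self.peers:
            peer.inter_slot_state[slot] = deepcopy(state)

    def take_over(self, slot: int) -> dict:
        # A standby server resumes physical layer processing from the last replicated state.
        return self.inter_slot_state[slot]


standby = PhyStateStore(peers=[])
active = PhyStateStore(peers=[standby])
active.end_of_slot(41, {"harq_buffers": {"ue-7": b"\x01\x02"}})
print(standby.take_over(41))   # the standby can continue from slot 42
```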