Patents by Inventor Srikanth Kandula

Srikanth Kandula has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250112843
Abstract: Securing and optimizing communications for a cloud service provider includes collecting connection summary information at network interface devices associated with host computing devices for a group of resources allocated to a customer of the cloud computing environment. The connection summary information includes local address information, remote address information, and data information for each connection established via the network interface devices. At least one communication graph is generated for the group of resources using the connection summary information. The graph includes nodes that represent communication resources of the group of resources and edges extending between nodes that characterize communication between the nodes. At least one analytics process is performed on data from the graph to identify at least one of a micro-segmentation strategy, a communication pattern, and a flow prediction for the group of resources.
    Type: Application
    Filed: September 28, 2023
    Publication date: April 3, 2025
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sathiya Kumaran MANI, Tsuwang HSIEH, Ranveer CHANDRA, Srikanth KANDULA, Santiago Martin SEGARRA
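The graph-construction step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the record fields (`local`, `remote`, `bytes`) and the use of connected components as candidate micro-segmentation groups are assumptions.

```python
from collections import defaultdict

def build_graph(summaries):
    """Nodes are resources; edge weights aggregate bytes exchanged per connection."""
    graph = defaultdict(lambda: defaultdict(int))
    for rec in summaries:
        a, b, nbytes = rec["local"], rec["remote"], rec["bytes"]
        graph[a][b] += nbytes
        graph[b][a] += nbytes
    return graph

def segments(graph):
    """Connected components of the communication graph: resources that never
    talk to each other fall into separate groups, a natural starting point
    for a micro-segmentation strategy."""
    seen, groups = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n])
        seen |= comp
        groups.append(comp)
    return groups
```

Richer analytics (communication patterns, flow prediction) would run over the same weighted graph.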
  • Publication number: 20250088428
Abstract: This document relates to automating the detection of anomalies in the network behavior of an application. Generally, the disclosed techniques can obtain network flow data for an application. A machine learning model can be used to process the network flow data to detect anomalies. The machine learning model can be retrained over time to adapt to changing network behavior of the application. In some cases, a graph neural network is employed to detect the anomalies.
    Type: Application
    Filed: September 13, 2023
    Publication date: March 13, 2025
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tsuwang HSIEH, Santiago Martin SEGARRA, Sathiya Kumaran MANI, Srikanth KANDULA, Michael Dean WONG
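The detect-then-retrain loop in the abstract can be illustrated with a toy stand-in. The patent uses a machine learning model (optionally a graph neural network); the rolling-statistics detector below only shows the control flow, and its window and threshold values are assumptions.

```python
import statistics

class FlowAnomalyDetector:
    """Flags flow volumes far from a rolling baseline, then folds each
    observation back into the baseline ('retraining' on recent behavior)."""

    def __init__(self, window=50, threshold=3.0):
        self.window = window
        self.threshold = threshold
        self.history = []

    def observe(self, volume):
        """Return True if `volume` is anomalous under the current baseline."""
        anomalous = False
        if len(self.history) >= 2:
            mean = statistics.mean(self.history)
            std = statistics.pstdev(self.history) or 1.0
            anomalous = abs(volume - mean) > self.threshold * std
        # Update ("retrain") the rolling baseline with the new observation.
        self.history.append(volume)
        self.history = self.history[-self.window:]
        return anomalous
```

A learned model would replace the mean/deviation test, but the periodic-refit structure is the same.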
  • Publication number: 20250036375
    Abstract: This patent relates to automating network management. One example includes a graph analysis and manipulation tool configured to receive a natural language prompt relating to a network management activity. The graph analysis and manipulation tool is also configured to access a graph resource and to generate code that addresses the network management activity as a graph manipulation task.
    Type: Application
    Filed: December 22, 2023
    Publication date: January 30, 2025
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tsuwang HSIEH, Sathiya Kumaran MANI, Ranveer CHANDRA, Srikanth KANDULA, Santiago Martin SEGARRA, Yajie ZHOU
  • Publication number: 20250004805
Abstract: Techniques are disclosed for managing connections or bidirectional flows of a communication session in a software defined network (SDN). A virtual machine determines that the communication session meets a criterion for offloading policy enforcement of the communication session to an acceleration device. The virtual machine sends, to a connection processing engine, a request to offload policy enforcement of the communication session from the virtual machine to the acceleration device.
    Type: Application
    Filed: June 29, 2023
    Publication date: January 2, 2025
    Inventors: Gerald Roy DE GRACE, Srikanth KANDULA, Avijit GUPTA, Rishabh TEWARI, Arun JEEDIGUNTA VENKATA SATYA, Zexuan ZHAO
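The offload decision the abstract describes can be sketched as a simple per-flow criterion check. Everything here is hypothetical: the thresholds, field names, and request shape are illustrative assumptions, not values from the patent.

```python
def should_offload(flow, min_bytes=1_000_000, min_age_s=5.0):
    """A plausible offload criterion: long-lived, high-volume flows are
    worth the cost of moving policy enforcement to the acceleration device."""
    return flow["bytes"] >= min_bytes and flow["age_s"] >= min_age_s

def offload_requests(flows):
    """Requests the VM would send to the connection processing engine."""
    return [{"op": "offload", "flow_id": f["id"]}
            for f in flows if should_offload(f)]
```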
  • Patent number: 12155554
    Abstract: A computing device is provided, including a processor that receives a network graph. The processor further receives a specification of a network traffic control heuristic for a network traffic routing problem over the network graph. The processor further constructs a gap maximization problem that has, as a maximization target, a difference between an exact solution to the network traffic routing problem and a heuristic solution generated using the network traffic control heuristic. The processor further generates a Lagrange multiplier formulation of the gap maximization problem. At a convex solver, the processor further computes an estimated maximum gap as an estimated solution to the Lagrange multiplier formulation of the gap maximization problem. The processor further performs a network traffic control action based at least in part on the estimated maximum gap.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: November 26, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Behnaz Arzani, Pooria Namyar, Ryan Andrew Beckett, Srikanth Kandula, Santiago Martin Segarra, Himanshu Raj
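The core quantity in this patent, the gap between an exact routing solution and a heuristic one, can be illustrated on a toy two-link network. The patent maximizes this gap analytically via a Lagrange multiplier formulation solved with a convex solver; the brute-force search over demands below is only a sketch of the concept.

```python
def exact_throughput(demand, capacities):
    """Exact solution on two parallel links: split traffic across both."""
    return min(demand, sum(capacities))

def heuristic_throughput(demand, capacities):
    """Single-path heuristic: all traffic on the first link only."""
    return min(demand, capacities[0])

def max_gap(capacities, demands):
    """Worst-case shortfall of the heuristic over the candidate demands."""
    return max(exact_throughput(d, capacities) - heuristic_throughput(d, capacities)
               for d in demands)
```

Here the heuristic can lose up to the capacity of the unused link, which is exactly the kind of bound the maximization problem certifies.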
  • Publication number: 20240311153
    Abstract: A method for scheduling a coordinated transfer of data among a plurality of processor nodes on a network comprises operating a multi-commodity flow model subject to a plurality of predetermined constraints. The model is configured to (a) receive as input a set of demands defining, for each of the plurality of processor nodes, an amount of data to be transferred to that processor node, (b) assign a plurality of paths linking the plurality of processor nodes, and (c) emit a schedule for transfer of the data along the plurality of paths so as to minimize a predetermined cost function, wherein the schedule comprises at least one store-and-forward operation and at least one copy operation.
    Type: Application
    Filed: June 8, 2023
    Publication date: September 19, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Behnaz ARZANI, Siva Kesava Reddy KAKARLA, Miguel OOM TEMUDO DE CASTRO, Srikanth KANDULA, Saeed MALEKI, Luke Jonathon MARSHALL
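Why the emitted schedule mixes copy and store-and-forward operations can be seen from completion times alone: forwarding a block through a fast relay can beat a slow direct link. The two-hop model and rates below are illustrative assumptions, not the multi-commodity flow model of the patent.

```python
def direct_time(size, rate_direct):
    """Time to copy the block over the direct link."""
    return size / rate_direct

def relay_time(size, rate_to_relay, rate_from_relay):
    """Store-and-forward: the relay node holds the block, then forwards it."""
    return size / rate_to_relay + size / rate_from_relay

def pick_path(size, rate_direct, rate_to_relay, rate_from_relay):
    """Choose the cheaper operation for this transfer."""
    d = direct_time(size, rate_direct)
    r = relay_time(size, rate_to_relay, rate_from_relay)
    return ("store-and-forward", r) if r < d else ("copy", d)
```

The patented model makes this choice jointly for all demands and paths while minimizing a global cost function.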
  • Publication number: 20240314747
Abstract: A method for allocating a plurality of network resources to a plurality of network-access demands of a plurality of network guests comprises (a) receiving the plurality of network-access demands; (b) for each of the plurality of network-access demands (i) dynamically computing, from among the plurality of network resources, a re-sorted order of resources associated with the network-access demand, and (ii) for each network resource associated with the network-access demand, increasing, in the re-sorted order, an allocation of the network resource to the network-access demand until the network-access demand is saturated, and freezing the allocation of each of the plurality of network resources to the saturated demand; and (c) outputting the frozen allocation of each of the plurality of network resources for each of the plurality of network-access demands.
    Type: Application
    Filed: May 24, 2023
    Publication date: September 19, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Behnaz ARZANI, Pooria NAMYAR, Srikanth KANDULA, Umesh KRISHNASWAMY, Himanshu RAJ, Santiago Martin SEGARRA, Daniel Stopol CRANKSHAW
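The allocate-until-saturated loop in the abstract can be sketched directly. The sort key (most remaining capacity first) is an assumption; the patent leaves the re-sorting criterion to the method.

```python
def allocate(demands, capacity):
    """demands: {name: (amount, [resources])}; capacity: {resource: cap}.
    Serves demands one at a time: each takes from its resources in a
    re-sorted order until saturated, then its allocation is frozen."""
    frozen = {}
    for name, (amount, resources) in demands.items():
        # Dynamically re-sort this demand's resources by remaining capacity.
        order = sorted(resources, key=lambda r: -capacity[r])
        alloc, need = {}, amount
        for r in order:
            take = min(need, capacity[r])
            if take > 0:
                alloc[r] = take
                capacity[r] -= take
                need -= take
            if need == 0:
                break  # demand saturated
        frozen[name] = alloc  # freeze the allocation for this demand
    return frozen
```

Because capacities shrink as earlier demands are served, each later demand sees a freshly re-sorted order, which is the dynamic step (b)(i) in the claim.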
  • Patent number: 12063142
    Abstract: A computing system identifies mitigation actions in response to failures within a computer network. A service level objective is obtained by the computing system for client-resource data flows traversing the computer network between client-side and resource-side nodes. Indication of a failure event at a network location of the computer network is obtained. For each mitigation action of a set of candidate mitigation actions, an estimated impact to a distribution of the service level objective is determined for the mitigation action by applying simulated client-resource data flows to a network topology model of the computer network in combination with the mitigation action and the failure event. One or more target mitigation actions are identified by the computing system from the set of candidate mitigation actions based on a comparison of the estimated impacts of the set of candidate mitigation actions.
    Type: Grant
    Filed: March 12, 2023
    Date of Patent: August 13, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Behnaz Arzani, Pooria Namyar, Daniel Stopol Crankshaw, Daniel Sebastian Berger, Tsu-wang Hsieh, Srikanth Kandula
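The mitigation-ranking loop in the abstract can be sketched with a deliberately simple stand-in for the network topology model: a set of surviving links and a per-flow reachability test. The encoding of flows as link tuples is an assumption.

```python
def slo_success_rate(flows, alive_links):
    """Fraction of simulated client-resource flows whose links all survive."""
    ok = sum(1 for path in flows if all(l in alive_links for l in path))
    return ok / len(flows)

def best_mitigation(flows, links, failed, mitigations):
    """Estimate the SLO impact of each candidate mitigation under the
    failure event, then pick the best. mitigations: {name: links restored}."""
    scored = {}
    for name, restored in mitigations.items():
        alive = (links - failed) | restored
        scored[name] = slo_success_rate(flows, alive)
    return max(scored, key=scored.get), scored
```

The patent compares full distributions of the service level objective rather than a single success rate, but the simulate-each-candidate structure is the same.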
  • Publication number: 20240080255
    Abstract: A computing device is provided, including a processor that receives a network graph. The processor further receives a specification of a network traffic control heuristic for a network traffic routing problem over the network graph. The processor further constructs a gap maximization problem that has, as a maximization target, a difference between an exact solution to the network traffic routing problem and a heuristic solution generated using the network traffic control heuristic. The processor further generates a Lagrange multiplier formulation of the gap maximization problem. At a convex solver, the processor further computes an estimated maximum gap as an estimated solution to the Lagrange multiplier formulation of the gap maximization problem. The processor further performs a network traffic control action based at least in part on the estimated maximum gap.
    Type: Application
    Filed: September 2, 2022
    Publication date: March 7, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Behnaz ARZANI, Pooria NAMYAR, Ryan Andrew BECKETT, Srikanth KANDULA, Santiago Martin SEGARRA, Himanshu RAJ
  • Publication number: 20230370322
    Abstract: A computing system identifies mitigation actions in response to failures within a computer network. A service level objective is obtained by the computing system for client-resource data flows traversing the computer network between client-side and resource-side nodes. Indication of a failure event at a network location of the computer network is obtained. For each mitigation action of a set of candidate mitigation actions, an estimated impact to a distribution of the service level objective is determined for the mitigation action by applying simulated client-resource data flows to a network topology model of the computer network in combination with the mitigation action and the failure event. One or more target mitigation actions are identified by the computing system from the set of candidate mitigation actions based on a comparison of the estimated impacts of the set of candidate mitigation actions.
    Type: Application
    Filed: March 12, 2023
    Publication date: November 16, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Behnaz ARZANI, Pooria NAMYAR, Daniel Stopol CRANKSHAW, Daniel Sebastian BERGER, Tsu-wang HSIEH, Srikanth KANDULA
  • Publication number: 20230215150
Abstract: A method of updating a trained cardinality estimation model includes receiving a cardinality estimation model with cardinality labels and detecting a drift in underlying data or predicates of the cardinality estimation model. The type of the detected drift is determined, and new test queries that mimic the detected drift are synthesized. A portion of the synthesized test queries is selected to reduce annotation cost and used to update the cardinality estimation model.
    Type: Application
    Filed: December 31, 2021
    Publication date: July 6, 2023
    Inventors: Yao LU, Srikanth KANDULA, Beibin LI
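The drift check that triggers a model update can be sketched as comparing estimated against observed cardinalities. The q-error metric is standard in cardinality estimation work, but the threshold and the mean-based trigger are illustrative assumptions.

```python
def q_error(estimate, actual):
    """Symmetric relative error used for cardinality estimates (>= 1.0)."""
    estimate, actual = max(estimate, 1), max(actual, 1)
    return max(estimate / actual, actual / estimate)

def detect_drift(estimates, actuals, threshold=2.0):
    """Flag drift when the mean q-error over recent queries exceeds the
    threshold, signaling that the model should be updated."""
    errors = [q_error(e, a) for e, a in zip(estimates, actuals)]
    return sum(errors) / len(errors) > threshold
```

Once drift is flagged, the patented method synthesizes and subsamples matching test queries to retrain cheaply.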
  • Patent number: 11611466
    Abstract: A computing system identifies mitigation actions in response to failures within a computer network. A service level objective is obtained by the computing system for client-resource data flows traversing the computer network between client-side and resource-side nodes. Indication of a failure event at a network location of the computer network is obtained. For each mitigation action of a set of candidate mitigation actions, an estimated impact to a distribution of the service level objective is determined for the mitigation action by applying simulated client-resource data flows to a network topology model of the computer network in combination with the mitigation action and the failure event. One or more target mitigation actions are identified by the computing system from the set of candidate mitigation actions based on a comparison of the estimated impacts of the set of candidate mitigation actions.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: March 21, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Behnaz Arzani, Pooria Namyar, Daniel Stopol Crankshaw, Daniel Sebastian Berger, Tsu-wang Hsieh, Srikanth Kandula
  • Patent number: 11474945
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: October 18, 2022
Assignee: Microsoft Technology Licensing, LLC
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
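The cache-assignment and bandwidth-assignment steps can be sketched together with a greedy toy planner. The soonest-deadline priority, the job record fields, and the even spreading of bytes over the lead time are all assumptions for illustration.

```python
def plan_prefetch(jobs, cache_bytes):
    """jobs: list of {dataset, size, starts_in_s} for predicted future jobs.
    Greedily admit prefetch datasets that fit the local cache, then assign
    each the bandwidth needed to finish before its job starts."""
    plan, free = [], cache_bytes
    for job in sorted(jobs, key=lambda j: j["starts_in_s"]):
        if job["size"] <= free:
            free -= job["size"]
            plan.append({
                "dataset": job["dataset"],
                # Spread the bytes over the time until the job starts.
                "bandwidth_bps": job["size"] * 8 / job["starts_in_s"],
            })
    return plan
```

The plan instructor of the patent would then direct cluster compute resources to load data per this cache and bandwidth assignment.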
  • Publication number: 20210286728
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Application
    Filed: June 2, 2021
    Publication date: September 16, 2021
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Patent number: 11055225
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: July 6, 2021
Assignee: Microsoft Technology Licensing, LLC
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Patent number: 11010193
Abstract: Embodiments are provided for efficient queue management for cluster scheduling and for managing task queues for tasks to be executed in a distributed computing environment. Both centralized and distributed scheduling are provided. Task queues may be bounded by length-based bounding or delay-based bounding. Tasks may be prioritized, and task queues may be dynamically reordered based on task priorities. Job completion times and cluster resource utilization may both be improved.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 18, 2021
Assignee: Microsoft Technology Licensing, LLC
    Inventors: Konstantinos Karanasos, Sriram Rao, Srikanth Kandula, Milan Vojnovic, Jeffrey Thomas Rasley, Rodrigo Lopes Cancado Fonseca
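A length-bounded, priority-ordered task queue of the kind the abstract describes can be sketched with a heap. Delay-based bounding and dynamic reordering would build on the same structure; the API below is an assumption.

```python
import heapq

class BoundedTaskQueue:
    """Length-bounded task queue that always serves the highest-priority task."""

    def __init__(self, max_len):
        self.max_len = max_len
        self._heap = []

    def offer(self, priority, task):
        """Accept the task unless the queue is at its length bound."""
        if len(self._heap) >= self.max_len:
            return False
        heapq.heappush(self._heap, (-priority, task))  # max-heap via negation
        return True

    def take(self):
        """Pop the highest-priority task."""
        return heapq.heappop(self._heap)[1]
```

Length bounding keeps worker-local queues short so high-priority work is not stuck behind a long backlog, which is how queue management can improve both job completion times and utilization.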
  • Publication number: 20210096996
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 1, 2021
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Patent number: 10771332
    Abstract: The techniques and/or systems described herein are configured to determine a set of update operations to transition a network from an observed network state to a target network state and to generate an update dependency graph used to dynamically schedule the set of update operations based on constraint(s) defined to ensure reliability of the network during the transition. The techniques and/or systems dynamically schedule the set of update operations based on feedback. For example, the feedback may include an indication that a previously scheduled update operation has been delayed, has failed, or has been successfully completed.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: September 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ratul Mahajan, Ming Zhang, Srikanth Kandula, Hongqiang Liu, Xin Jin
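The dependency-driven, feedback-aware scheduling in the abstract can be sketched as repeatedly dispatching whichever update operations have all their prerequisites complete. The dict-of-sets graph encoding is an assumption; in the patented system, completion would arrive as asynchronous feedback rather than synchronously as here.

```python
def schedule(deps):
    """deps: {op: set of ops that must complete first}.
    Returns ops in a valid execution order; raises if a cycle blocks progress."""
    done, order = set(), []
    pending = dict(deps)
    while pending:
        # An operation is ready once every dependency has completed.
        ready = [op for op, d in pending.items() if d <= done]
        if not ready:
            raise RuntimeError("cyclic dependencies; cannot proceed")
        for op in sorted(ready):   # deterministic dispatch order
            order.append(op)       # feedback: op reported complete
            done.add(op)
            del pending[op]
    return order
```

Because readiness is re-evaluated every round, a delayed or failed operation simply holds back its dependents, which is the dynamic behavior the patent targets.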
  • Patent number: 10693812
    Abstract: Various technologies pertaining to scheduling network traffic in a network are described. A request to transfer data from a first computing device to a second computing device includes data that identifies a volume of the data to be transferred and a deadline, where the data is to be transferred prior to the deadline. A long-term schedule is computed based upon the request, wherein the long-term schedule defines flow of traffic through the network over a relatively long time horizon. A short-term schedule is computed based upon the long-term schedule, where devices in the network are configured based upon the short-term schedule.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Srikanth Kandula, Ishai Menache, Roy Schwartz
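The two-level structure of the abstract, a long-term plan over the full horizon refined into a short-term schedule for the current interval, can be sketched minimally. Even spreading of each transfer's volume before its deadline is an assumption; the patent computes the long-term plan over the whole network.

```python
def long_term_plan(requests, horizon):
    """requests: {name: (volume, deadline_slot)} -> {name: [rate per slot]}.
    Spread each transfer's volume evenly over the slots before its deadline."""
    plan = {}
    for name, (volume, deadline) in requests.items():
        rate = volume / deadline
        plan[name] = [rate if t < deadline else 0.0 for t in range(horizon)]
    return plan

def short_term_schedule(plan, slot):
    """The short-term schedule is the current slot's per-transfer rates,
    which is what network devices would actually be configured with."""
    return {name: rates[slot] for name, rates in plan.items()}
```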
  • Patent number: 10637797
Abstract: Latency in responding to queries directed to geographically distributed data can be reduced by allocating individual steps, of a multi-step compute operation requested by the query, among the geographically distributed computing devices so as to reduce the duration of shuffling of intermediate data among such devices, and, additionally, by pre-moving, prior to the receipt of the query, portions of the distributed data that are input to a first step of the multi-step compute operation, to, again, reduce the duration of the exchange of intermediate data. The pre-moving of input data and the adaptive allocation of intermediate steps are prioritized for high-value data sets. Additionally, a threshold increase in a quantity of data exchanged across network communications can be established to avoid incurring network communication usage without an attendant gain in latency reduction.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: April 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Paramvir Bahl, Ganesh Ananthanarayanan, Srikanth Kandula, Peter Bodik, Qifan Pu, Srinivasa Aditya Akella
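The step-placement idea can be sketched with one greedy rule: run each step at the site that already holds the most of its input, so the least intermediate data crosses the wide-area network. The per-site byte counts are illustrative; the patented method also weighs pre-moving inputs and a threshold on added network usage.

```python
def place_step(bytes_at_site):
    """bytes_at_site: {site: input bytes already present for this step}.
    Returns (chosen site, bytes that must move across the WAN)."""
    site = max(bytes_at_site, key=bytes_at_site.get)
    moved = sum(b for s, b in bytes_at_site.items() if s != site)
    return site, moved
```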