Patents by Inventor Karthikeyan Subramanian
Karthikeyan Subramanian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12333348
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting surplus capacity on a set of server nodes and determining a quantity of deferrable virtual machines (VMs) that may be scheduled over an upcoming period of time. The quantity of VMs may be determined while minimizing risks associated with allocation failures on the set of server nodes. This disclosure describes systems that facilitate features and functionality related to improving utilization of surplus resource capacity on a plurality of server nodes by implementing VMs having some flexibility in timing of deployment while also avoiding significant risk caused by over-allocated storage and computing resources. In one or more embodiments, the quantity of deferrable VMs is determined and scheduled in accordance with rules of a scheduling policy.
Type: Grant
Filed: April 10, 2024
Date of Patent: June 17, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuwen Yang, Gurpreet Virdi, Bo Qiao, Hang Dong, Karthikeyan Subramanian, Marko Lalic, Shandan Zhou, Si Qin, Thomas Moscibroda, Yunus Mohammed
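The abstract describes sizing a batch of deferrable VMs against a forecast of surplus node capacity while limiting allocation-failure risk under a scheduling policy. The sketch below is a minimal illustration of that idea, not the patented method; the names (`SchedulingPolicy`, `deferrable_vm_quota`) and the percentile-plus-buffer risk treatment are assumptions.

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class SchedulingPolicy:
    # Hypothetical policy knobs: fraction of surplus held back as a safety
    # buffer, and the forecast percentile treated as "reliable" surplus.
    safety_buffer: float = 0.2
    risk_percentile: int = 10   # use a low percentile to avoid allocation failures

def deferrable_vm_quota(surplus_core_forecast: list[float],
                        cores_per_vm: int,
                        policy: SchedulingPolicy) -> int:
    """Return how many deferrable VMs may be scheduled over the window.

    surplus_core_forecast: predicted spare cores for each hour of the window.
    The quota uses a pessimistic (low-percentile) view of the forecast so that
    over-allocation risk stays small.
    """
    if not surplus_core_forecast:
        return 0
    # quantiles(..., n=100) yields the 1st..99th percentiles of the forecast.
    pessimistic = quantiles(surplus_core_forecast, n=100)[policy.risk_percentile - 1]
    usable = pessimistic * (1.0 - policy.safety_buffer)
    return max(0, int(usable // cores_per_vm))

print(deferrable_vm_quota([120, 96, 80, 150, 110], cores_per_vm=8,
                          policy=SchedulingPolicy()))
```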
-
Publication number: 20250080394
Abstract: Interactive analytics are provided for resource allocation failure incidents, which may be tracked, diagnosed, summarized, and presented in near real-time for users and/or platform/service providers to understand the root cause(s) of failure incidents and actual and hypothetical, failed and successful, allocation scenarios. A capacity analyzer simulates an allocation process implemented by a resource allocation platform. The capacity analyzer may determine which resources were and/or were not eligible for allocation for a request, based on information about the resource allocation failure, resources in the region of interest, constraints associated with the incident, and the resource allocation rules associated with the resource allocation platform. Users may quickly learn whether a request constraint, a requesting entity constraint, a capacity constraint, and/or a resource platform constraint caused a resource allocation incident.
Type: Application
Filed: August 29, 2023
Publication date: March 6, 2025
Inventors: Di WENG, Shandan ZHOU, Jue ZHANG, Bo QIAO, Si QIN, Karthikeyan SUBRAMANIAN, Thomas MOSCIBRODA
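The capacity analyzer replays the platform's allocation rules to explain which class of constraint eliminated all candidate resources. The following sketch is a simplified stand-in under assumed data structures (`Node`, `Request`, a fixed rule order); it is not the platform's actual allocation logic.

```python
from dataclasses import dataclass

@dataclass
class Node:
    free_cores: int
    zone: str
    supports_gpu: bool

@dataclass
class Request:
    cores: int
    zone: str        # the requesting entity is pinned to this zone
    needs_gpu: bool

def diagnose_allocation_failure(req: Request, inventory: list[Node]) -> str:
    """Apply the allocation rules one constraint at a time and report the
    first rule that left no eligible nodes."""
    candidates = inventory
    # Request constraint: hardware features asked for by the request itself.
    candidates = [n for n in candidates if n.supports_gpu or not req.needs_gpu]
    if not candidates:
        return "request constraint (no nodes offer the requested hardware)"
    # Requesting-entity constraint: e.g. a tenant restricted to one zone.
    candidates = [n for n in candidates if n.zone == req.zone]
    if not candidates:
        return "requesting entity constraint (zone restriction)"
    # Capacity constraint: enough free cores on at least one surviving node.
    candidates = [n for n in candidates if n.free_cores >= req.cores]
    if not candidates:
        return "capacity constraint (insufficient free cores)"
    return "allocation would have succeeded"

inventory = [Node(16, "zone-1", False), Node(4, "zone-2", True)]
print(diagnose_allocation_failure(Request(8, "zone-2", True), inventory))
```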
-
Publication number: 20250045088
Abstract: Described are examples for recommending an increase in worker instance count for an availability zone in a cloud-based computing platform. A machine learning (ML) model can be used to predict a time series forecast of a workload for the availability zone in a future time period. A predicted number of worker instances to handle the predicted workload can be computed, and if the number of worker instances in the availability zone is less than the predicted number of worker instances, a recommendation to increase the number of worker instances in the availability zone can be generated.
Type: Application
Filed: August 4, 2023
Publication date: February 6, 2025
Inventors: Neha KESHARI, Abhisek PAN, David Allen DION, Brendon MACHADO, Karthik Subramaniam HARIHARAN, Karthikeyan SUBRAMANIAN, Thomas MOSCIBRODA, Karel Trueba NOBREGAS
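A rough sketch of the recommendation flow: a forecast of future workload is converted into a required worker count and compared with the current count. The linear-trend extrapolation below is only a stand-in for the ML time-series model, and `requests_per_worker` is an assumed capacity figure.

```python
def forecast_peak_workload(history: list[float], horizon: int) -> float:
    """Stand-in for the ML model: extrapolate a simple linear trend from
    recent history and return the peak forecast value over the horizon."""
    n = len(history)
    x_mean, y_mean = (n - 1) / 2, sum(history) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(range(n), history))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    return max(y_mean + slope * ((n - 1 + h) - x_mean) for h in range(1, horizon + 1))

def recommend_worker_count(history: list[float], current_workers: int,
                           requests_per_worker: float, horizon: int = 24) -> str:
    peak = forecast_peak_workload(history, horizon)
    needed = int(-(-peak // requests_per_worker))   # ceiling division
    if needed > current_workers:
        return f"increase workers from {current_workers} to {needed}"
    return "current worker count is sufficient"

print(recommend_worker_count([900, 950, 1000, 1080, 1150], current_workers=10,
                             requests_per_worker=100))
```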
-
Publication number: 20250036448
Abstract: The present application is directed to stranded resource recovery in a cloud computing environment. A resource utilization signal at each of a plurality of nodes that each hosts corresponding virtual machines (VMs) is measured. Based on each resource utilization signal, a set of candidate nodes is identified. Each candidate node comprises a stranded resource that is unutilized due to utilization of a bottleneck resource. The identification includes calculating an amount of the stranded resource at each candidate node. From a plurality of VMs hosted at the set of candidate nodes, a set of candidate VMs is identified for migration for stranded resource recovery. The identification includes calculating a score for each candidate VM based on a degree of imbalance between the stranded resource and the bottleneck resource at a candidate node hosting the candidate VM. Migration of at least one candidate VM in the set of candidate VMs is initiated.
Type: Application
Filed: November 28, 2022
Publication date: January 30, 2025
Inventors: Saurabh AGARWAL, Bo QIAO, Chao DU, Jayden CHEN, Karthikeyan SUBRAMANIAN, Nisarg SHETH, Qingwei LIN, Si QIN, Thomas MOSCIBRODA, Luke Rafael RODRIGUEZ
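To make the stranded/bottleneck relationship concrete, the sketch below treats CPU and memory as the two resources, marks a node's non-bottleneck resource as stranded, and scores hosted VMs by how much they contribute to the imbalance. The threshold, scoring formula, and data shapes are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class NodeUsage:
    name: str
    cpu_used: float   # fraction of CPU allocated, 0..1
    mem_used: float   # fraction of memory allocated, 0..1
    vms: dict         # vm name -> (cpu_frac, mem_frac) footprint on this node

def stranded_amount(node: NodeUsage, threshold: float = 0.9) -> float:
    """Memory is stranded when CPU is the bottleneck and vice versa; return
    the unusable fraction of the non-bottleneck resource."""
    if node.cpu_used >= threshold and node.mem_used < threshold:
        return 1.0 - node.mem_used    # memory stranded behind CPU
    if node.mem_used >= threshold and node.cpu_used < threshold:
        return 1.0 - node.cpu_used    # CPU stranded behind memory
    return 0.0

def migration_candidates(nodes: list[NodeUsage]) -> list[tuple]:
    """Score each VM on a stranded node by how much it widens the gap
    between the bottleneck and stranded resources."""
    scored = []
    for node in nodes:
        if stranded_amount(node) == 0.0:
            continue
        cpu_is_bottleneck = node.cpu_used >= node.mem_used
        for vm, (cpu, mem) in node.vms.items():
            # A VM heavy on the bottleneck resource and light on the stranded
            # one is the best candidate to migrate away.
            score = (cpu - mem) if cpu_is_bottleneck else (mem - cpu)
            scored.append((score, node.name, vm))
    return sorted(scored, reverse=True)

nodes = [NodeUsage("n1", 0.95, 0.40, {"vm-a": (0.50, 0.10), "vm-b": (0.45, 0.30)})]
print(migration_candidates(nodes))
```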
-
Publication number: 20240419472
Abstract: A search space for allocating a virtual machine is pruned. An allocation request for allocating a virtual machine to a plurality of clusters is received. A valid set of clusters is generated. The valid set of clusters includes clusters of the plurality of clusters that satisfy the allocation request. An attribute associated with the allocation request is identified. A truncation parameter is determined, by a trained search space classification model, based on the identified attribute. The valid set of clusters is filtered based on the truncation parameter. A server is selected from the filtered valid set of clusters. The virtual machine is allocated to the selected server. In an aspect of the disclosure, a search space pruner generates an analysis summary based on an analysis of received telemetry data. The search space pruner trains the search space classification model to determine truncation parameters based on the analysis summary.
Type: Application
Filed: June 19, 2023
Publication date: December 19, 2024
Inventors: Saurabh AGARWAL, Abhisek PAN, Brendon MACHADO, David Allen DION, Ishai MENACHE, Karthikeyan SUBRAMANIAN, Luke Jonathon MARSHALL, Neha KESHARI, Thomas MOSCIBRODA, Yiran WEI
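The sketch below walks the four steps the abstract names: build the valid set, look up a truncation parameter from the request attribute, prune, then select. A static lookup table stands in for the trained search space classification model, and the "keep the k clusters with the most free cores" ordering is an assumed ranking, not the patented one.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_cores: int
    vm_family: str

# Stand-in for the trained search space classification model: map a request
# attribute (here the VM family) to how many clusters are worth searching.
TRUNCATION_BY_FAMILY = {"general": 3, "gpu": 1}

def allocate(request_cores: int, vm_family: str, clusters: list[Cluster]) -> str | None:
    # 1. Valid set: clusters that can satisfy the request at all.
    valid = [c for c in clusters
             if c.vm_family == vm_family and c.free_cores >= request_cores]
    # 2. Truncation parameter from the (hypothetical) classification model.
    k = TRUNCATION_BY_FAMILY.get(vm_family, len(valid))
    # 3. Prune: keep only the k most promising clusters.
    pruned = sorted(valid, key=lambda c: c.free_cores, reverse=True)[:k]
    # 4. Select a target from the pruned set.
    return pruned[0].name if pruned else None

clusters = [Cluster("c1", 64, "general"), Cluster("c2", 8, "general"),
            Cluster("c3", 128, "general"), Cluster("c4", 32, "gpu")]
print(allocate(16, "general", clusters))
```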
-
Patent number: 12112214
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
Type: Grant
Filed: July 19, 2023
Date of Patent: October 8, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, Saurabh Agarwal, Karthikeyan Subramanian, Thomas Moscibroda, Paul Naveen Selvaraj, Sandeep Ramji, Sorin Iftimie, Nisarg Sheth, Wanghai Gu, Ajay Mani, Si Qin, Yong Xu, Qingwei Lin
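The core mapping here is from a per-cluster expansion-failure prediction to a defragmentation severity level. The sketch below illustrates such a mapping with made-up thresholds and severity semantics; it is not the failure prediction model or the defragmentation engine described in the patent.

```python
from enum import Enum

class DefragSeverity(Enum):
    NONE = 0     # no action needed
    LOW = 1      # opportunistic consolidation only
    MEDIUM = 2   # live-migrate small VMs to consolidate free cores
    HIGH = 3     # aggressive consolidation, may touch larger VMs

def defrag_instruction(expansion_failure_probability: float,
                       fragmentation_ratio: float) -> DefragSeverity:
    """Map a predicted expansion-failure likelihood and a measure of how
    scattered the cluster's free capacity is to a per-cluster severity level.
    Thresholds are illustrative, not from the patent."""
    if expansion_failure_probability < 0.05:
        return DefragSeverity.NONE
    if expansion_failure_probability < 0.25:
        return DefragSeverity.LOW
    # Higher failure risk: escalate further when free capacity is badly fragmented.
    return DefragSeverity.HIGH if fragmentation_ratio > 0.5 else DefragSeverity.MEDIUM

print(defrag_instruction(0.30, fragmentation_ratio=0.7))
```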
-
Publication number: 20240256362
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting surplus capacity on a set of server nodes and determining a quantity of deferrable virtual machines (VMs) that may be scheduled over an upcoming period of time. The quantity of VMs may be determined while minimizing risks associated with allocation failures on the set of server nodes. This disclosure describes systems that facilitate features and functionality related to improving utilization of surplus resource capacity on a plurality of server nodes by implementing VMs having some flexibility in timing of deployment while also avoiding significant risk caused by over-allocated storage and computing resources. In one or more embodiments, the quantity of deferrable VMs is determined and scheduled in accordance with rules of a scheduling policy.
Type: Application
Filed: April 10, 2024
Publication date: August 1, 2024
Inventors: Yuwen YANG, Gurpreet VIRDI, Bo QIAO, Hang DONG, Karthikeyan SUBRAMANIAN, Marko LALIC, Shandan ZHOU, Si QIN, Thomas MOSCIBRODA, Yunus MOHAMMED
-
Patent number: 12028223
Abstract: A computer implemented method includes receiving telemetry data corresponding to capacity health of nodes in a cloud based computing system. The received telemetry data is processed via a prediction engine to provide predictions of capacity health at multiple dimensions of the cloud based computing system. Node recoverability information is received and node recovery execution is initiated as a function of the representations of capacity health and node recoverability information.
Type: Grant
Filed: June 6, 2022
Date of Patent: July 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, Sam Prakash Bheri, Karthikeyan Subramanian, Yancheng Chen, Gaurav Jagtiani, Abhay Sudhir Ketkar, Hemant Malik, Thomas Moscibroda, Shweta Balkrishna Patil, Luke Rafael Rodriguez, Dalianna Victoria Vaysman
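One way to read the method is: combine per-dimension capacity-health predictions with per-node recoverability information to decide which node recoveries to execute first. The sketch below shows that combination under assumed data shapes (a cluster-level health score and a per-node recoverability flag); the prediction engine itself is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class UnhealthyNode:
    name: str
    cluster: str
    recoverable: bool            # from node recoverability information
    recovery_cost_hours: float

# Hypothetical output of the prediction engine: predicted capacity health at
# the cluster dimension (0 = capacity-starved, 1 = plenty of headroom).
predicted_health = {"cluster-a": 0.15, "cluster-b": 0.80}

def recovery_plan(nodes: list[UnhealthyNode]) -> list[str]:
    """Initiate recovery first where predicted capacity health is worst,
    skipping nodes marked unrecoverable."""
    candidates = [n for n in nodes if n.recoverable]
    candidates.sort(key=lambda n: (predicted_health.get(n.cluster, 1.0),
                                   n.recovery_cost_hours))
    return [n.name for n in candidates]

nodes = [UnhealthyNode("n1", "cluster-b", True, 2.0),
         UnhealthyNode("n2", "cluster-a", True, 6.0),
         UnhealthyNode("n3", "cluster-a", False, 1.0)]
print(recovery_plan(nodes))  # n2 first: its cluster has the lowest predicted health
```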
-
Patent number: 11972301
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting surplus capacity on a set of server nodes and determining a quantity of deferrable virtual machines (VMs) that may be scheduled over an upcoming period of time. The quantity of VMs may be determined while minimizing risks associated with allocation failures on the set of server nodes. This disclosure describes systems that facilitate features and functionality related to improving utilization of surplus resource capacity on a plurality of server nodes by implementing VMs having some flexibility in timing of deployment while also avoiding significant risk caused by over-allocated storage and computing resources. In one or more embodiments, the quantity of deferrable VMs is determined and scheduled in accordance with rules of a scheduling policy.
Type: Grant
Filed: April 13, 2021
Date of Patent: April 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuwen Yang, Gurpreet Virdi, Bo Qiao, Hang Dong, Karthikeyan Subramanian, Marko Lalic, Shandan Zhou, Si Qin, Thomas Moscibroda, Yunus Mohammed
-
Patent number: 11900171
Abstract: A cloud computing capacity management system can include a fine-grained admission control layer, a policy engine, and an enforcement layer. The fine-grained admission control layer can be configured to ingest capacity signals and create a capacity mitigation policy, based at least in part on the capacity signals, to protect available capacity of a cloud computing system for prioritized users. The capacity mitigation policy can be directed to users of the cloud computing system. The policy engine can be configured to control how the capacity mitigation policy is applied to the cloud computing system. The enforcement layer can be configured to handle incoming resource requests and to enforce resource limits based on the capacity mitigation policy as applied by the policy engine.
Type: Grant
Filed: February 2, 2021
Date of Patent: February 13, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gurpreet Virdi, Fernando Gonzalez Todisco, Karthikeyan Subramanian, Sanjay Ramanujan, Sorin Iftimie, Xing wen Wang, Thomas Moscibroda, Yunus Mohammed, Vi Lam Nguyen, Rostislav Sudakov
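The abstract's structure lends itself to a small two-function sketch: admission control turns a capacity signal into a mitigation policy, and enforcement applies that policy to incoming requests while exempting prioritized users. The policy fields, thresholds, and function names below are assumptions; a real system would have a separate policy engine governing rollout of the policy.

```python
from dataclasses import dataclass

@dataclass
class MitigationPolicy:
    max_cores_per_request: int   # cap applied to non-prioritized users
    active: bool                 # whether the mitigation is in force

def admission_control(free_core_fraction: float) -> MitigationPolicy:
    """Admission-control layer: turn a capacity signal into a mitigation
    policy that protects remaining capacity for prioritized users."""
    if free_core_fraction < 0.10:
        return MitigationPolicy(max_cores_per_request=8, active=True)
    return MitigationPolicy(max_cores_per_request=10_000, active=False)

def enforce(policy: MitigationPolicy, user: str, requested_cores: int,
            prioritized: set) -> bool:
    """Enforcement layer: apply the limits chosen by the policy engine to an
    incoming resource request; prioritized users are exempt."""
    if user in prioritized or not policy.active:
        return True
    return requested_cores <= policy.max_cores_per_request

prioritized = {"first-party-service"}
policy = admission_control(free_core_fraction=0.05)            # capacity is tight
print(enforce(policy, "batch-tenant", 64, prioritized))        # False: limited
print(enforce(policy, "first-party-service", 64, prioritized)) # True: protected
```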
-
Publication number: 20230396511
Abstract: A computer implemented method includes receiving telemetry data corresponding to capacity health of nodes in a cloud based computing system. The received telemetry data is processed via a prediction engine to provide predictions of capacity health at multiple dimensions of the cloud based computing system. Node recoverability information is received and node recovery execution is initiated as a function of the representations of capacity health and node recoverability information.
Type: Application
Filed: June 6, 2022
Publication date: December 7, 2023
Inventors: Shandan ZHOU, Sam Prakash BHERI, Karthikeyan SUBRAMANIAN, Yancheng CHEN, Gaurav JAGTIANI, Abhay Sudhir KETKAR, Hemant MALIK, Thomas MOSCIBRODA, Shweta Balkrishna PATIL, Luke Rafael RODRIGUEZ, Dalianna Victoria VAYSMAN
-
Publication number: 20230359512
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
Type: Application
Filed: July 19, 2023
Publication date: November 9, 2023
Inventors: Shandan ZHOU, Saurabh AGARWAL, Karthikeyan SUBRAMANIAN, Thomas MOSCIBRODA, Paul Naveen SELVARAJ, Sandeep RAMJI, Sorin IFTIMIE, Nisarg SHETH, Wanghai GU, Ajay MANI, Si QIN, Yong XU, Qingwei LIN
-
Patent number: 11726836
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
Type: Grant
Filed: June 12, 2020
Date of Patent: August 15, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, Saurabh Agarwal, Karthikeyan Subramanian, Thomas Moscibroda, Paul Naveen Selvaraj, Sandeep Ramji, Sorin Iftimie, Nisarg Sheth, Wanghai Gu, Ajay Mani, Si Qin, Yong Xu, Qingwei Lin
-
Patent number: 11652720
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting deployment growth on one or more node clusters and selectively permitting deployment requests on a per cluster basis. For example, systems disclosed herein may apply a tenant growth prediction system trained to output a deployment growth classification indicative of a predicted growth of deployments on a node cluster. The systems disclosed herein may further utilize the deployment growth classification to determine whether a deployment request may be permitted while maintaining a sufficiently sized capacity buffer to avoid deployment failures for existing deployments previously implemented on the node cluster. By selectively permitting or denying deployments based on a variety of factors, the systems described herein can more efficiently utilize cluster resources on a per-cluster basis without causing a significant increase in deployment failures for existing customers.
Type: Grant
Filed: September 20, 2019
Date of Patent: May 16, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, John Lawrence Miller, Christopher Cowdery, Thomas Moscibroda, Shanti Kemburu, Yong Xu, Si Qin, Qingwei Lin, Eli Cortez, Karthikeyan Subramanian
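The admission decision the abstract describes amounts to: reserve headroom for the predicted growth of tenants already on the cluster, keep a capacity buffer, and only then admit the new request. The sketch below illustrates that arithmetic with invented growth classes and buffer fractions; the real classifier and its factors are not reproduced here.

```python
from enum import Enum

class GrowthClass(Enum):
    # Hypothetical classes: expected fractional growth of existing deployments.
    LOW = 0.05
    MEDIUM = 0.15
    HIGH = 0.35

def permit_deployment(cluster_total_cores: int, cores_in_use: int,
                      requested_cores: int, growth: GrowthClass,
                      min_buffer_fraction: float = 0.10) -> bool:
    """Admit a new deployment only if, after reserving headroom for the
    predicted growth of existing deployments, a capacity buffer remains."""
    growth_reserve = cores_in_use * growth.value
    free_after = (cluster_total_cores - cores_in_use
                  - growth_reserve - requested_cores)
    return free_after >= min_buffer_fraction * cluster_total_cores

print(permit_deployment(1000, 700, 100, GrowthClass.LOW))   # True
print(permit_deployment(1000, 700, 100, GrowthClass.HIGH))  # False
```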
-
Patent number: 11550634
Abstract: A method for minimizing allocation failures in a cloud computing system without overprovisioning may include determining a predicted supply for a virtual machine series in a system unit of the cloud computing system during an upcoming time period. The predicted supply may be based on a shared available current capacity and a shared available future added capacity for the virtual machine series in the system unit. The method may also include predicting an available capacity for the virtual machine series in the system unit during the upcoming time period. The predicted available capacity may be based at least in part on a predicted demand for the virtual machine series in the system unit during the upcoming time period and the predicted supply. The method may also include taking at least one mitigation action in response to determining that the predicted demand exceeds the predicted supply during the upcoming time period.
Type: Grant
Filed: March 8, 2019
Date of Patent: January 10, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Saurabh Agarwal, Maitreyee Ramprasad Joshi, Vinayak Ramnath Karnataki, Neha Keshari, Gowtham Natarajan, Yash Purohit, Sanjay Ramanujan, Karthikeyan Subramanian, Ambrose Thomas Treacy, Shandan Zhou
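The quantities in this abstract reduce to simple arithmetic: predicted supply is current plus future-added capacity, predicted available capacity is supply minus predicted demand, and a shortfall triggers mitigation. The sketch below shows exactly that; the listed mitigation actions are illustrative examples, since the claim only requires "at least one mitigation action".

```python
def capacity_outlook(current_capacity: float, future_added_capacity: float,
                     predicted_demand: float):
    """Compute predicted supply and available capacity for a VM series in one
    system unit, and decide whether a mitigation action is needed."""
    predicted_supply = current_capacity + future_added_capacity
    predicted_available = predicted_supply - predicted_demand
    mitigations = []
    if predicted_demand > predicted_supply:
        # Illustrative mitigation actions, not an exhaustive or claimed list.
        mitigations = ["expedite capacity build-out",
                       "defragment clusters",
                       "restrict low-priority allocations"]
    return predicted_supply, predicted_available, mitigations

print(capacity_outlook(current_capacity=5_000, future_added_capacity=1_000,
                       predicted_demand=6_800))
```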
-
Publication number: 20220327002
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting surplus capacity on a set of server nodes and determining a quantity of deferrable virtual machines (VMs) that may be scheduled over an upcoming period of time. The quantity of VMs may be determined while minimizing risks associated with allocation failures on the set of server nodes. This disclosure describes systems that facilitate features and functionality related to improving utilization of surplus resource capacity on a plurality of server nodes by implementing VMs having some flexibility in timing of deployment while also avoiding significant risk caused by over-allocated storage and computing resources. In one or more embodiments, the quantity of deferrable VMs is determined and scheduled in accordance with rules of a scheduling policy.
Type: Application
Filed: April 13, 2021
Publication date: October 13, 2022
Inventors: Yuwen YANG, Gurpreet VIRDI, Bo QIAO, Hang DONG, Karthikeyan SUBRAMANIAN, Marko LALIC, Shandan ZHOU, Si QIN, Thomas MOSCIBRODA, Yunus MOHAMMED
-
Publication number: 20220245001
Abstract: A cloud computing capacity management system can include a fine-grained admission control layer, a policy engine, and an enforcement layer. The fine-grained admission control layer can be configured to ingest capacity signals and create a capacity mitigation policy, based at least in part on the capacity signals, to protect available capacity of a cloud computing system for prioritized users. The capacity mitigation policy can be directed to users of the cloud computing system. The policy engine can be configured to control how the capacity mitigation policy is applied to the cloud computing system. The enforcement layer can be configured to handle incoming resource requests and to enforce resource limits based on the capacity mitigation policy as applied by the policy engine.
Type: Application
Filed: February 2, 2021
Publication date: August 4, 2022
Inventors: Gurpreet VIRDI, Fernando GONZALEZ TODISCO, Karthikeyan SUBRAMANIAN, Sanjay RAMANUJAN, Sorin IFTIMIE, Xing wen WANG, Thomas MOSCIBRODA, Yunus MOHAMMED, Vi Lam NGUYEN, Rostislav SUDAKOV
-
Publication number: 20220206771
Abstract: The present invention is directed towards a method 200 and system 100 to automate the entire lifecycle of Apigee API Management deployment via a templated approach. The method 200 comprises the steps of receiving data related to application programming interfaces from the customer device 102a at the developer device 102b, validating the data to determine the state of existing application programming interfaces (APIs), creating a template based on customer requirements, wherein the template is created by generating an APIGEE® application programming interface proxy 412 and a continuous integration/continuous delivery (CI/CD) configuration 414 based on the data, storing the APIGEE® application programming interface proxy and CI/CD configuration in a data repository, generating a repeatable build job for deployment of the APIGEE® application programming interface proxy and CI/CD configuration, and deploying and executing the job on the on-premises…
Type: Application
Filed: December 24, 2020
Publication date: June 30, 2022
Inventor: Karthikeyan Subramanian
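A very rough sketch of the templated approach: generate an API-proxy descriptor and a CI/CD build-job definition from customer-supplied API data, ready to be stored in a repository and run as a repeatable job. The field names, pipeline stages, and example values below are assumptions for illustration; they are not the Apigee bundle format or the patented templates.

```python
import json

def generate_proxy_template(api_name: str, base_path: str, target_url: str) -> dict:
    """Build a minimal API-proxy descriptor from customer-supplied API data.
    Field names are illustrative, not the exact Apigee proxy bundle schema."""
    return {
        "name": api_name,
        "basePath": base_path,
        "target": {"url": target_url},
        "policies": ["verify-api-key", "quota"],
    }

def generate_cicd_config(api_name: str, environments: list[str]) -> dict:
    """Build a repeatable build-job definition that deploys the proxy to each
    environment in order; stages and keys are assumptions for illustration."""
    return {
        "job": f"deploy-{api_name}",
        "stages": [{"environment": env,
                    "steps": ["lint", "bundle", "deploy", "smoke-test"]}
                   for env in environments],
    }

proxy = generate_proxy_template("orders-api", "/orders/v1",
                                "https://internal.example.com/orders")
pipeline = generate_cicd_config("orders-api", ["dev", "test", "prod"])
print(json.dumps({"proxy": proxy, "pipeline": pipeline}, indent=2))
```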
-
Publication number: 20210389894
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
Type: Application
Filed: June 12, 2020
Publication date: December 16, 2021
Inventors: Shandan ZHOU, Saurabh AGARWAL, Karthikeyan SUBRAMANIAN, Thomas MOSCIBRODA, Paul Naveen SELVARAJ, Sandeep RAMJI, Sorin IFTIMIE, Nisarg SHETH, Wanghai GU, Ajay MANI, Si QIN, Yong XU, Qingwei LIN
-
Patent number: 11093266
Abstract: A method for evaluating at least one potential policy for an IaaS system may include determining a predicted workload for the IaaS system based on at least one generative model corresponding to the IaaS system. The at least one potential policy for the IaaS system may be simulated based on the predicted workload, thereby producing one or more simulation metrics that indicate effects of the at least one potential policy. The performance of the IaaS system may be optimized based on the one or more simulation metrics.
Type: Grant
Filed: October 15, 2018
Date of Patent: August 17, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gowtham Natarajan, Karel Trueba Nobregas, Abhisek Pan, Karthikeyan Subramanian
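The workflow is: sample a workload from a generative model, simulate a candidate policy against it, and compare the resulting metrics across policies. The sketch below uses a toy random workload sampler and a toy capacity-buffer policy purely to show the loop; the model, policy space, and metrics in the patent are not reproduced here.

```python
import random

def sample_workload(hours: int, mean_arrivals: float, seed: int = 7) -> list[int]:
    """Stand-in for the generative workload model: sample VM arrivals per hour."""
    rng = random.Random(seed)
    return [max(0, int(rng.gauss(mean_arrivals, mean_arrivals ** 0.5)))
            for _ in range(hours)]

def simulate_policy(buffer_cores: int, cores_per_vm: int = 4,
                    total_cores: int = 2_000) -> dict:
    """Simulate one candidate capacity-buffer policy against the sampled
    workload and report metrics describing its effects."""
    arrivals = sample_workload(hours=72, mean_arrivals=50)
    used, failure_hours, utilization = 0, 0, []
    for count in arrivals:
        demand = count * cores_per_vm
        sellable = total_cores - buffer_cores
        if used + demand > sellable:          # demand hits the buffer: failure
            failure_hours += 1
            demand = max(0, sellable - used)
        used = int((used + demand) * 0.9)     # some VMs depart each hour
        utilization.append(used / total_cores)
    return {"allocation_failure_hours": failure_hours,
            "avg_utilization": round(sum(utilization) / len(utilization), 3)}

for buffer in (0, 200, 600):                  # compare candidate buffer policies
    print(buffer, simulate_policy(buffer_cores=buffer))
```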