Patents by Inventor Sairam Veeraswamy
Sairam Veeraswamy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240039804
Abstract: Computer-implemented methods, media, and systems for automating secured deployment of containerized workloads on edge devices are disclosed. One example computer-implemented method includes receiving, by a software defined wide area network (SD-WAN) edge device and from a remote manager, resource quotas for a compute service to be enabled at the SD-WAN edge device. Pre-deployment sanity checks are performed by confirming availability of resources satisfying the resource quotas, where the resources are at the SD-WAN edge device. In response to the confirmation of the availability of resources satisfying the resource quotas, one or more security constructs are set up to isolate SD-WAN network functions at the SD-WAN edge device from the compute service at the SD-WAN edge device. The compute service is attached to an SD-WAN network by the SD-WAN edge device. An acknowledgement that the compute service is enabled at the SD-WAN edge device is sent to the remote manager.
Type: Application
Filed: September 14, 2022
Publication date: February 1, 2024
Inventors: Erol Aygar, Margaret Natasha Drew, Mark Peek, Daniel Beveridge, Raunak Ravindra Singwi, Nilanjan Daw, Pranay Pareek, Sairam Veeraswamy, Amarnath Raghunathan
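A minimal sketch of the enablement flow this abstract describes: the edge device receives resource quotas, runs a pre-deployment sanity check, sets up isolation between SD-WAN functions and the compute service, reserves the resources, and acknowledges. All names (EdgeDevice, ResourceQuota, set_up_isolation) are illustrative assumptions, not taken from the patent:

from dataclasses import dataclass

@dataclass
class ResourceQuota:
    cpu_cores: int
    memory_mb: int

@dataclass
class EdgeDevice:
    free_cpu_cores: int
    free_memory_mb: int

def set_up_isolation() -> bool:
    # Placeholder for the security constructs that keep SD-WAN network
    # functions separated from the newly enabled compute service.
    return True

def enable_compute_service(edge: EdgeDevice, quota: ResourceQuota) -> str:
    # Pre-deployment sanity check: confirm the device has resources for the quotas.
    if edge.free_cpu_cores < quota.cpu_cores or edge.free_memory_mb < quota.memory_mb:
        return "rejected: insufficient resources"
    if not set_up_isolation():
        return "rejected: isolation setup failed"
    # Reserve the resources, attach the compute service to the SD-WAN network,
    # and acknowledge back to the remote manager.
    edge.free_cpu_cores -= quota.cpu_cores
    edge.free_memory_mb -= quota.memory_mb
    return "ack: compute service enabled"

print(enable_compute_service(EdgeDevice(free_cpu_cores=8, free_memory_mb=16384),
                             ResourceQuota(cpu_cores=2, memory_mb=4096)))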
-
Publication number: 20240039808
Abstract: Computer-implemented methods, media, and systems for context-based meta scheduling of containerized workloads across edge devices are disclosed. One example computer-implemented method includes receiving a manifest file that includes multiple context requirements of a workload, where the multiple context requirements include multiple runtime service level agreement (SLA) requirements of the workload. Telemetry data is received from multiple software defined wide area network (SD-WAN) edge devices, where the telemetry data includes respective context data of each of the multiple SD-WAN edge devices. An SD-WAN edge device is selected, based on the telemetry data and the multiple context requirements of the workload, from the multiple SD-WAN edge devices for placing the workload on the selected SD-WAN edge device, where the context data of the selected SD-WAN edge device meets the multiple context requirements of the workload. The workload is run on the selected SD-WAN edge device.
Type: Application
Filed: September 15, 2022
Publication date: February 1, 2024
Inventors: Raunak Ravindra Singwi, Erol Aygar, Daniel Beveridge, Mark Peek, Nilanjan Daw, Sairam Veeraswamy, Pranay Pareek
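A minimal sketch of the meta-scheduling decision described above: each edge device reports context telemetry, and the scheduler places the workload on a device whose context satisfies every requirement in the manifest. The field names and requirement keys are illustrative assumptions:

def meets_requirements(context: dict, requirements: dict) -> bool:
    # The reported context must satisfy every context/SLA requirement in the manifest.
    return (context["free_cpu"] >= requirements["min_free_cpu"]
            and context["free_memory_mb"] >= requirements["min_free_memory_mb"]
            and context["latency_ms"] <= requirements["max_latency_ms"])

def select_edge(telemetry: dict, requirements: dict):
    # Pick an SD-WAN edge device whose reported context meets the requirements.
    for device, context in telemetry.items():
        if meets_requirements(context, requirements):
            return device
    return None

manifest_requirements = {"min_free_cpu": 2.0, "min_free_memory_mb": 1024, "max_latency_ms": 20}
telemetry = {
    "edge-1": {"free_cpu": 1.0, "free_memory_mb": 2048, "latency_ms": 10},
    "edge-2": {"free_cpu": 4.0, "free_memory_mb": 4096, "latency_ms": 15},
}
print(select_edge(telemetry, manifest_requirements))   # edge-2 meets all requirements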
-
Publication number: 20240039806
Abstract: Computer-implemented methods, media, and systems for inter-cluster automated failover and migration of containerized workloads across edge devices are disclosed. One example method includes monitoring telemetry data received from a first software defined wide area network (SD-WAN) edge device that has a workload scheduled, where the telemetry data includes at least one of a health status of the workload or multiple runtime context elements at the first SD-WAN edge device. It is determined that a failure associated with either the first SD-WAN edge device or the workload occurs. A mode of the failure is determined. A remediation process based on the determined mode of the failure and a current state of the workload is performed.
Type: Application
Filed: September 14, 2022
Publication date: February 1, 2024
Inventors: Raunak Ravindra Singwi, Daniel Beveridge, Erol Aygar, Nilanjan Daw, Sairam Veeraswamy
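A minimal sketch of the failure-mode branching the abstract outlines: telemetry distinguishes a device-level failure from a workload-level failure, and the remediation path depends on that mode plus the workload's current state. The specific remediation strings are illustrative assumptions:

def remediate(edge_reachable: bool, workload_health: str, workload_state: str) -> str:
    # Determine the failure mode, then choose a remediation for the current workload state.
    if not edge_reachable:
        return f"device failure: migrate {workload_state} workload to another edge device"
    if workload_health != "healthy":
        return f"workload failure: restart {workload_state} workload on the same edge device"
    return "no failure detected"

print(remediate(edge_reachable=False, workload_health="unknown", workload_state="running"))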
-
Publication number: 20240015107
Abstract: Disclosed are various embodiments for rate proportional scheduling to reduce packet loss in virtualized network function chains. A congestion monitor executed by a first virtual machine on a host computing device can detect congestion in a receive queue associated with a first virtualized network function implemented by the first virtual machine. The congestion monitor can send a pause signal to a rate controller executed by a second virtual machine on the host computing device. The rate controller can receive the pause signal. In response, the rate controller can pause the processing of packets by a second virtualized network function implemented by the second virtual machine to reduce congestion in the receive queue of the first virtualized network function.
Type: Application
Filed: October 27, 2022
Publication date: January 11, 2024
Inventors: Avinash Kumar Chaurasia, Lan Vu, Uday Pundalik Kurkure, Hari Sivaraman, Sairam Veeraswamy
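A minimal sketch of the back-pressure loop this abstract describes: a congestion monitor watches the downstream VNF's receive queue and, past a threshold, signals the upstream VNF's rate controller to pause packet processing. The class names and the watermark value are illustrative assumptions:

import queue

class RateController:
    """Runs with the upstream VNF; pauses its packet processing on request."""
    def __init__(self):
        self.paused = False

    def on_pause_signal(self, paused: bool):
        self.paused = paused   # upstream VNF stops or resumes draining packets

class CongestionMonitor:
    """Runs with the downstream VNF; watches its receive queue depth."""
    def __init__(self, rx_queue: queue.Queue, controller: RateController, high_watermark: int = 80):
        self.rx_queue = rx_queue
        self.controller = controller
        self.high_watermark = high_watermark

    def poll(self):
        # Pause the upstream VNF while the downstream receive queue is congested.
        self.controller.on_pause_signal(self.rx_queue.qsize() >= self.high_watermark)

rx = queue.Queue()
for _ in range(100):
    rx.put(b"packet")
monitor = CongestionMonitor(rx, RateController(), high_watermark=80)
monitor.poll()
print(monitor.controller.paused)   # True: upstream VNF is paused until the queue drains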
-
Publication number: 20230403319
Abstract: Some embodiments provide a method of implementing capacity-aware load balancing across a set of data compute nodes (DCNs) by reducing latency for the set of DCNs. From the set of DCNs, the method identifies (1) a first subset of DCNs including DCNs that have a latency that is higher than an average latency computed for the set of DCNs and (2) a second subset of DCNs including DCNs that have a latency that is lower than the average latency computed for the set of DCNs. For each DCN in the first subset of DCNs, the method assigns to the DCN a weight value that corresponds to a target latency computed for the set of DCNs. Based on the assigned weight values for the first subset of DCNs, the method computes an excess weight value to be redistributed across the second subset of DCNs. The method redistributes the computed excess weight value across the second subset of DCNs.
Type: Application
Filed: July 28, 2023
Publication date: December 14, 2023
Inventors: Sachin Pandey, Rohan Gandhi, Sreeram Iyer, Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy
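A minimal sketch of the weight-redistribution idea: DCNs slower than the average latency get weights derived from a target latency, and the weight they give up is spread across the faster DCNs. The exact weight formula and the choice of target latency are assumptions made for illustration; this abstract does not spell them out:

def rebalance_weights(latencies: dict, base_weight: float = 1.0) -> dict:
    avg = sum(latencies.values()) / len(latencies)
    target = avg  # assume the target latency equals the average, for illustration
    slow = {dcn: lat for dcn, lat in latencies.items() if lat > avg}
    fast = {dcn: lat for dcn, lat in latencies.items() if lat <= avg}
    weights = {}
    excess = 0.0
    for dcn, lat in slow.items():
        w = base_weight * (target / lat)   # slower DCN -> proportionally lower weight
        weights[dcn] = w
        excess += base_weight - w          # weight taken away from the slow DCN
    for dcn in fast:                       # redistribute the excess evenly across fast DCNs
        weights[dcn] = base_weight + excess / max(len(fast), 1)
    return weights

print(rebalance_weights({"dcn-a": 10.0, "dcn-b": 30.0, "dcn-c": 20.0}))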
-
Publication number: 20230396670
Abstract: Some embodiments provide a method of implementing context-aware routing for a software-defined wide-area network (SD-WAN), at an SD-WAN edge forwarding element (FE) located at a branch network connected to the SD-WAN. The method receives, from an SD-WAN controller, geolocation route weights for each of multiple cloud datacenters across which a set of application resources is distributed. The application resources are all reachable at a same virtual network address. For each of the cloud datacenters, the method installs a route for the virtual network address between the branch network and the cloud datacenter. The routes have different total costs based at least in part on the geolocation metrics received from the SD-WAN controller. The SD-WAN edge FE selects between the routes to establish connections to the set of application resources.
Type: Application
Filed: June 6, 2022
Publication date: December 7, 2023
Inventors: Santosh Pallagatti Kotrabasappa, Abhishek Goliya, Sajan Liyon, Sairam Veeraswamy, Sumit Mundhra
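A minimal sketch of how a branch-side edge FE might fold the controller-provided geolocation weights into per-datacenter route costs for the shared virtual network address and then prefer the cheapest route. The cost composition (base cost plus weight) is an illustrative assumption:

def install_routes(base_costs: dict, geo_weights: dict) -> dict:
    # One route per cloud datacenter, all for the same virtual network address.
    return {dc: base_costs[dc] + geo_weights.get(dc, 0) for dc in base_costs}

routes = install_routes({"dc-east": 10, "dc-west": 10}, {"dc-east": 5, "dc-west": 20})
best = min(routes, key=routes.get)   # the edge FE selects the route with the lowest total cost
print(routes, "->", best)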
-
Publication number: 20230396538
Abstract: Some embodiments provide a method for implementing context-aware routing for a software-defined wide-area network (SD-WAN). The method is performed at a particular SD-WAN edge forwarding element (FE) connected to a particular cloud datacenter. The method receives a message specifying a weight for a virtual network address associated with a set of application resources distributed across multiple cloud datacenters including the particular cloud datacenter. The method converts the specified weight into a route weight for the SD-WAN. The method provides the converted route weight to a set of SD-WAN edge FEs connected to a set of branch networks, and each SD-WAN edge FE in the set of SD-WAN edge FEs uses the provided route weight to calculate a total cost for routing data messages directed to the virtual network address to the particular cloud datacenter.
Type: Application
Filed: June 6, 2022
Publication date: December 7, 2023
Inventors: Santosh Pallagatti Kotrabasappa, Abhishek Goliya, Sajan Liyon, Sairam Veeraswamy, Sumit Mundhra
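A minimal sketch of the datacenter-side step described above: the edge FE converts an application-resource weight into an SD-WAN route weight and advertises it to the branch-side edge FEs. The scaling rule and names are illustrative assumptions:

def convert_and_distribute(app_weight: int, branch_fes: list, scale: int = 10) -> dict:
    route_weight = app_weight * scale                 # convert the app weight to a route weight
    return {fe: route_weight for fe in branch_fes}    # advertise to every branch-side edge FE

print(convert_and_distribute(3, ["branch-1", "branch-2"]))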
-
Publication number: 20230370386
Abstract: In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
Type: Application
Filed: July 25, 2023
Publication date: November 16, 2023
Inventors: Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy, Abhishek Goliya, Abbas Mohamed
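A minimal sketch of the classify-then-assign order the abstract emphasizes: the flow is labeled elephant or mice from early-packet features, and the network resource with the fewest elephants already assigned is chosen. The feature threshold and balancing rule are illustrative assumptions:

def classify_flow(features: dict) -> str:
    # e.g. large average packet size in the first packets suggests an elephant flow
    return "elephant" if features["avg_packet_bytes"] > 1000 else "mice"

def assign_flow(classification: str, uplink_assignments: dict) -> str:
    # Spread elephants across uplinks by counting classifications already assigned to each.
    return min(uplink_assignments,
               key=lambda uplink: uplink_assignments[uplink].count("elephant"))

uplinks = {"uplink-1": ["elephant"], "uplink-2": []}
label = classify_flow({"avg_packet_bytes": 1400})
print(label, "->", assign_flow(label, uplinks))   # the new elephant goes to uplink-2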
-
Patent number: 11792086
Abstract: Computer-implemented methods, media, and systems for remediation of containerized workloads based on context breach at edge devices are disclosed. One example computer-implemented method includes monitoring telemetry data from a first software defined wide area network (SD-WAN) edge device, where the telemetry data includes multiple context elements at the first SD-WAN edge device. It is determined that a context change occurs for at least one of the context elements at the first SD-WAN edge device. It is determined that, due to the context change, the first SD-WAN edge device does not satisfy one or more requirements for running one or more workloads scheduled to run. In response to the determination that the first SD-WAN edge device does not satisfy the one or more requirements, at least one of the one or more workloads is offloaded from the first SD-WAN edge device to a second SD-WAN edge device.
Type: Grant
Filed: September 15, 2022
Date of Patent: October 17, 2023
Assignee: VMware, Inc.
Inventors: Raunak Ravindra Singwi, Daniel Beveridge, Erol Aygar, Sairam Veeraswamy
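A minimal sketch of context-breach remediation: when a context element on the first edge device changes and a workload's requirements are no longer met there, the workload is offloaded to a second device that still meets them. The requirement keys and helper names are illustrative assumptions:

def check_and_offload(workload_reqs: dict, edge1_ctx: dict, edge2_ctx: dict) -> str:
    def satisfies(ctx: dict) -> bool:
        return all(ctx.get(key, 0) >= value for key, value in workload_reqs.items())
    if satisfies(edge1_ctx):
        return "keep workload on edge-1"
    if satisfies(edge2_ctx):
        return "offload workload from edge-1 to edge-2"
    return "no eligible edge device"

print(check_and_offload({"free_memory_mb": 2048},
                        {"free_memory_mb": 512},     # context change breached the requirement
                        {"free_memory_mb": 4096}))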
-
Publication number: 20230315593
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
Type: Application
Filed: June 7, 2023
Publication date: October 5, 2023
Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
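A minimal sketch of the recovery sequence, using plain byte buffers in place of real RDMA reads: the failover host populates the new region from the shared-storage baseline, then overwrites only the pages named in the source host's dirty page lists. The page size and helper names are illustrative assumptions:

PAGE = 4096

def recover_region(baseline: bytes, source_region: bytes, dirty_pages: list) -> bytearray:
    new_region = bytearray(baseline)                 # populate from the baseline copy
    for page_no in dirty_pages:                      # pages with updates not in the baseline
        start = page_no * PAGE
        # In the real design this copy is an RDMA read from the source host's memory region.
        new_region[start:start + PAGE] = source_region[start:start + PAGE]
    return new_region

baseline = bytes(PAGE * 4)
source = bytearray(baseline)
source[PAGE:PAGE + 4] = b"new!"                      # page 1 was dirtied before the failure
print(recover_region(baseline, bytes(source), [1])[PAGE:PAGE + 4])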
-
Publication number: 20230297257
Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
Type: Application
Filed: May 24, 2023
Publication date: September 21, 2023
Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
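A minimal sketch of the placement policy described above: a process's active pages stay in local memory while its inactive pages are pushed to a chunk of byte-addressable memory borrowed from a cluster memory host. The page representation and the recency cutoff are illustrative assumptions:

def place_pages(pages: dict, recency_cutoff: float) -> tuple:
    local_memory, cluster_chunk = {}, {}
    for page_id, last_access in pages.items():
        if last_access >= recency_cutoff:
            local_memory[page_id] = last_access      # active set stays on the computing device
        else:
            cluster_chunk[page_id] = last_access     # inactive set goes to the cluster memory host
    return local_memory, cluster_chunk

local, remote = place_pages({"p1": 100.0, "p2": 5.0, "p3": 99.0}, recency_cutoff=50.0)
print(sorted(local), sorted(remote))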
-
Patent number: 11757983
Abstract: Some embodiments provide a method of implementing capacity-aware load balancing across a set of data compute nodes (DCNs) by reducing latency for the set of DCNs. From the set of DCNs, the method identifies (1) a first subset of DCNs including DCNs that have a latency that is higher than an average latency computed for the set of DCNs and (2) a second subset of DCNs including DCNs that have a latency that is lower than the average latency computed for the set of DCNs. For each DCN in the first subset of DCNs, the method assigns to the DCN a weight value that corresponds to a target latency computed for the set of DCNs. Based on the assigned weight values for the first subset of DCNs, the method computes an excess weight value to be redistributed across the second subset of DCNs. The method redistributes the computed excess weight value across the second subset of DCNs.
Type: Grant
Filed: May 17, 2022
Date of Patent: September 12, 2023
Assignee: VMware, Inc.
Inventors: Sachin Pandey, Rohan Gandhi, Sreeram Iyer, Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy
-
Patent number: 11757917
Abstract: The disclosure provides an approach for detecting and preventing attacks in a network. Embodiments include receiving network traffic statistics of a system and determining a set of features of the system based on the network traffic statistics. The set of features is input to a classification model that has been trained using historical features associated with labels indicating whether the historical features correspond to attacks, and an indication of whether the system is a target of an attack is received as output from the classification model. Embodiments further include receiving additional statistics related to the system and, in response to the indication that the system is the target of the attack, analyzing the additional statistics to identify a source of the attack. An action is then performed to prevent the attack based on the source of the attack.
Type: Grant
Filed: October 23, 2020
Date of Patent: September 12, 2023
Assignee: VMware, Inc.
Inventors: Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy, Jayneeta Sinha, Suriyan S.
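A minimal sketch of the detection pipeline: derive features from traffic statistics, ask a classifier whether the system is under attack, and if so inspect per-source statistics to find and block the offender. The threshold-based stand-in for the trained model, the feature names, and the thresholds are all illustrative assumptions:

def extract_features(stats: dict) -> dict:
    return {"syn_rate": stats["syn_packets"] / max(stats["interval_s"], 1),
            "unique_sources": stats["unique_sources"]}

def classify(features: dict) -> bool:
    # Stand-in for a classification model trained on labeled historical features.
    return features["syn_rate"] > 1000 and features["unique_sources"] > 100

def find_and_block(per_source_stats: dict) -> str:
    source = max(per_source_stats, key=per_source_stats.get)   # heaviest sender
    return f"block {source}"

stats = {"syn_packets": 600000, "interval_s": 60, "unique_sources": 500}
if classify(extract_features(stats)):
    print(find_and_block({"10.0.0.5": 580000, "10.0.0.9": 200}))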
-
Publication number: 20230273807
Abstract: A power optimization system may include a cloud management server coupled to a plurality of clusters via a network, a resource management module residing in the cloud management server, and a cloud power optimizer module residing in the resource management module. Each cluster may include a plurality of physical hosts with at least one virtual machine (VM) running on each physical host. During operation, the cloud power optimizer module may determine background and active power usages of each physical host in the plurality of clusters. Further, the cloud power optimizer module may determine power usage of each VM based on the determined background and active power usages of each physical host. Furthermore, the cloud power optimizer module may continuously balance a distribution of workload on the plurality of physical hosts based on the determined power usage of each VM.
Type: Application
Filed: April 28, 2022
Publication date: August 31, 2023
Inventors: Venu Mahesh Uppalapati, Sairam Veeraswamy, Adarsh Jagadeeshwaran, Shalini Singh
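A minimal sketch of per-VM power attribution from a host's background and active power: the background power is split evenly among the host's VMs and the active power is split in proportion to each VM's utilization. The attribution formula is an assumption made for illustration:

def vm_power_usage(background_w: float, active_w: float, vm_utilization: dict) -> dict:
    total_util = sum(vm_utilization.values()) or 1.0
    n_vms = len(vm_utilization)
    return {vm: background_w / n_vms + active_w * (util / total_util)
            for vm, util in vm_utilization.items()}

print(vm_power_usage(background_w=100.0, active_w=200.0,
                     vm_utilization={"vm-a": 0.6, "vm-b": 0.2}))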
-
Publication number: 20230273751
Abstract: Disclosed are various embodiments for improving the resiliency and performance of clustered memory. A computing device can mark a page of the memory as being reclaimed. The computing device can then set the page of the memory as read-only. Next, the computing device can submit a write request for the contents of the page to individual ones of a plurality of memory hosts. Subsequently, the computing device can receive individual confirmations of a successful write of the page from the individual ones of the plurality of memory hosts. Then, the computing device can mark the page as free in response to receipt of the individual confirmations of the successful write from the individual ones of the plurality of memory hosts.
Type: Application
Filed: May 5, 2023
Publication date: August 31, 2023
Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
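A minimal sketch of the reclaim protocol: the page is marked as being reclaimed and made read-only, its contents are written to every memory host, and only after all hosts confirm the write is the page marked free. The page and host interfaces shown are illustrative assumptions:

def reclaim_page(page: dict, hosts: list) -> str:
    page["state"] = "reclaiming"                   # mark the page as being reclaimed
    page["read_only"] = True                       # no further writes while replicating
    confirmations = [host_write(page["contents"]) for host_write in hosts]
    if all(confirmations):                         # every memory host confirmed the write
        page["state"] = "free"
        return "page freed"
    return "retry: not all memory hosts confirmed the write"

def memory_host_write(contents: bytes) -> bool:
    return True                                    # stand-in for a write to one memory host

print(reclaim_page({"contents": b"\x00" * 4096}, [memory_host_write, memory_host_write]))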
-
Context-sensitive defragmentation and aggregation of containerized workloads running on edge devices
Patent number: 11729062
Abstract: Computer-implemented methods, media, and systems for context-sensitive defragmentation and aggregation of containerized workloads running on edge devices are disclosed. One example method includes monitoring telemetry data from multiple software defined wide area network (SD-WAN) edge devices that run multiple workloads, where the telemetry data includes at least one of resource utilization at the multiple SD-WAN edge devices, inter-workload trigger dependency, or inter-workload data dependency among the multiple workloads. It is determined, based on the telemetry data, that at least two of the multiple workloads running on at least two SD-WAN edge devices have the inter-workload trigger dependency or the inter-workload data dependency.
Type: Grant
Filed: September 14, 2022
Date of Patent: August 15, 2023
Assignee: VMware, Inc.
Inventors: Nilanjan Daw, Sairam Veeraswamy, Raunak Ravindra Singwi, Erol Aygar
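A minimal sketch of the aggregation decision: if two workloads placed on different edge devices show an inter-workload trigger or data dependency, they are co-located on one device. The placement and dependency representations are illustrative assumptions:

def aggregate(placements: dict, dependencies: list) -> dict:
    for wl_a, wl_b in dependencies:
        if placements[wl_a] != placements[wl_b]:
            placements[wl_b] = placements[wl_a]    # move the dependent workload alongside its peer
    return placements

print(aggregate({"wl-1": "edge-a", "wl-2": "edge-b"}, [("wl-1", "wl-2")]))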
-
Patent number: 11720457
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
Type: Grant
Filed: July 28, 2022
Date of Patent: August 8, 2023
Assignee: VMware, Inc.
Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
-
Patent number: 11711307
Abstract: In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
Type: Grant
Filed: September 11, 2020
Date of Patent: July 25, 2023
Assignee: VMware, Inc.
Inventors: Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy, Abhishek Goliya, Abbas Mohamed
-
Publication number: 20230229219
Abstract: Described herein are systems, methods, and software to manage power consumption in a software build environment. In one implementation, a monitoring service monitors power consumption information associated with a build environment for one or more software components. The monitoring service further identifies one or more trends associated with the power consumption information based at least on the power consumption information satisfying one or more criteria and generates a summary for display that indicates at least the one or more trends. The monitoring service may also identify and display as part of the summary one or more suggestions to improve power consumption based on the one or more trends.
Type: Application
Filed: April 25, 2022
Publication date: July 20, 2023
Inventors: Shalini Singh, Sairam Veeraswamy, Adarsh Jagadeeshwaran, Joshua Philip Schnee, Vijayaraghavan Soundararajan, Shiva Ds, Harsh Hirani, Priya Kalaiselvan, Shashank Rai
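A minimal sketch of the trend summary: per-build power readings are checked against a simple criterion (consumption rising across consecutive builds) and a short summary with a suggestion is produced. The criterion and suggestion text are illustrative assumptions, not the patent's actual criteria:

def summarize_power(readings_wh: list) -> str:
    rising = all(later > earlier for earlier, later in zip(readings_wh, readings_wh[1:]))
    if rising:
        return ("trend: power per build is increasing; "
                "suggestion: review recently added build steps or parallelism settings")
    return "trend: power per build is stable"

print(summarize_power([120.0, 135.0, 150.0]))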
-
Patent number: 11704030
Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
Type: Grant
Filed: September 22, 2021
Date of Patent: July 18, 2023
Assignee: VMware, Inc.
Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian