Patents by Inventor Sairam Veeraswamy

Sairam Veeraswamy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11757917
    Abstract: The disclosure provides an approach for detecting and preventing attacks in a network. Embodiments include receiving network traffic statistics of a system. Embodiments include determining a set of features of the system based on the network traffic statistics. Embodiments include inputting the set of features to a classification model that has been trained using historical features associated with labels indicating whether the historical features correspond to attacks. Embodiments include receiving, as output from the classification model, an indication of whether the system is a target of an attack. Embodiments include receiving additional statistics related to the system. Embodiments include analyzing, in response to the indication that the system is the target of the attack, the additional statistics to identify a source of the attack. Embodiments include performing an action to prevent the attack based on the source of the attack.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: September 12, 2023
    Assignee: VMware, Inc.
    Inventors: Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy, Jayneeta Sinha, Suriyan S.
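
The abstract above describes a two-stage pipeline: derive a feature set from network traffic statistics, run it through a trained classification model, and only on a positive detection analyze further statistics to locate and block the source. Below is a minimal Python sketch of that flow; the feature names, the scikit-learn classifier, and the block_source helper are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch of the detect-then-trace flow described in the abstract.
# The feature set, classifier choice, and mitigation hook are assumptions.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class TrafficStats:
    packets_per_sec: float
    syn_ratio: float        # fraction of SYN packets (assumed feature)
    unique_sources: int

def extract_features(stats: TrafficStats) -> list[float]:
    """Map raw traffic statistics to the model's feature vector."""
    return [stats.packets_per_sec, stats.syn_ratio, float(stats.unique_sources)]

# Train on historical features labeled attack (1) / benign (0).
model = RandomForestClassifier().fit(
    [[100.0, 0.1, 20], [90000.0, 0.9, 5000]],  # toy historical features
    [0, 1],                                     # labels
)

def block_source(source_ip: str) -> None:
    print(f"blocking {source_ip}")  # placeholder for a firewall rule push

def handle_sample(stats: TrafficStats, per_source_counts: dict[str, int]) -> None:
    features = extract_features(stats)
    if model.predict([features])[0] == 1:
        # Second stage: inspect additional per-source statistics to identify
        # the likely origin, then act to prevent the attack.
        source = max(per_source_counts, key=per_source_counts.get)
        block_source(source)  # hypothetical mitigation hook
```
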
  • Publication number: 20230273807
    Abstract: A power optimization system may include a cloud management server coupled to a plurality of clusters via a network, a resource management module residing in the cloud management server, and a cloud power optimizer module residing in the resource management module. Each cluster may include a plurality of physical hosts with at least one virtual machine (VM) running on each physical host. During operation, the cloud power optimizer module may determine background and active power usages of each physical host in the plurality of clusters. Further, the cloud power optimizer module may determine power usage of each VM based on the determined background and active power usages of each physical host. Furthermore, the cloud power optimizer module may continuously balance a distribution of workload on the plurality of physical hosts based on the determined power usage of each VM.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 31, 2023
    Inventors: Venu Mahesh Uppalapati, Sairam Veeraswamy, Adarsh Jagadeeshwaran, Shalini Singh
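
The entry above splits each host's power into a background (idle) component and an active component, attributes per-VM power from those measurements, and continuously rebalances workloads across hosts. The Python sketch below models that attribution step; the proportional split by CPU share and the simple migration suggestion are assumptions made for illustration, not the patented method itself.

```python
# Sketch of per-VM power attribution from host background/active power.
# The CPU-share-weighted split and the rebalance heuristic are assumptions.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    background_watts: float          # power drawn with no VM load
    active_watts: float              # additional power under current load
    vm_cpu_shares: dict[str, float]  # VM name -> fraction of host CPU use

def vm_power(host: Host) -> dict[str, float]:
    """Attribute host power to VMs: an equal slice of the background power
    plus a CPU-share-weighted slice of the active power."""
    n = len(host.vm_cpu_shares) or 1
    total_share = sum(host.vm_cpu_shares.values()) or 1.0
    return {
        vm: host.background_watts / n + host.active_watts * share / total_share
        for vm, share in host.vm_cpu_shares.items()
    }

def pick_migration(hosts: list[Host]):
    """Suggest moving the costliest VM from the highest-power host to the
    lowest-power host (a stand-in for the continuous balancing step)."""
    ranked = sorted(hosts, key=lambda h: h.background_watts + h.active_watts)
    coolest, hottest = ranked[0], ranked[-1]
    if hottest is coolest or not hottest.vm_cpu_shares:
        return None
    powers = vm_power(hottest)
    return max(powers, key=powers.get), hottest.name, coolest.name
```
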
  • Publication number: 20230273751
    Abstract: Disclosed are various embodiments for improving the resiliency and performance for clustered memory. A computing device can mark a page of the memory as being reclaimed. The computing device can then set the page of the memory as read-only. Next, the computing device can submit a write request for the contents of the page to individual ones of a plurality of memory hosts. Subsequently, the computing device can receive individual confirmations of a successful write of the page from the individual ones of the plurality of memory hosts. Then, the computing device can mark the page as free in response to receipt of the individual confirmations of the successful write from the individual ones of the plurality of memory hosts.
    Type: Application
    Filed: May 5, 2023
    Publication date: August 31, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
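
The reclamation sequence in the abstract above (mark the page as being reclaimed, make it read-only, replicate it to several memory hosts, and free it only after every host confirms the write) is summarized in the short Python sketch below. The Page and MemoryHost classes and the synchronous confirmation loop are simplifying assumptions; the actual embodiments operate on hardware page tables and remote hosts.

```python
# Simplified model of the page-reclamation protocol in the abstract.
# Page, MemoryHost, and the synchronous write loop are assumptions for clarity.
from dataclasses import dataclass, field

@dataclass
class Page:
    number: int
    contents: bytes
    read_only: bool = False
    reclaiming: bool = False
    free: bool = False

@dataclass
class MemoryHost:
    name: str
    store: dict[int, bytes] = field(default_factory=dict)

    def write(self, page: Page) -> bool:
        """Persist the page remotely and confirm success."""
        self.store[page.number] = page.contents
        return True

def reclaim(page: Page, hosts: list[MemoryHost]) -> None:
    page.reclaiming = True                           # 1. mark page as being reclaimed
    page.read_only = True                            # 2. block further local writes
    confirmations = [h.write(page) for h in hosts]   # 3. replicate to each memory host
    if all(confirmations):                           # 4. every host confirmed the write
        page.free = True                             # 5. safe to reuse the local page
```
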
  • Patent number: 11729062
    Abstract: Computer-implemented methods, media, and systems for context-sensitive defragmentation and aggregation of containerized workloads running on edge devices are disclosed. One example method includes monitoring telemetry data from multiple software defined wide area network (SD-WAN) edge devices that run multiple workloads, where the telemetry data includes at least one of resource utilization at the multiple SD-WAN edge devices, inter-workload trigger dependency, or inter-workload data dependency among the multiple workloads. It is determined, based on the telemetry data, that at least two of the multiple workloads running on at least two SD-WAN edge devices have the inter-workload trigger dependency or the inter-workload data dependency.
    Type: Grant
    Filed: September 14, 2022
    Date of Patent: August 15, 2023
    Assignee: VMware, Inc.
    Inventors: Nilanjan Daw, Sairam Veeraswamy, Raunak Ravindra Singwi, Erol Aygar
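
The method above watches SD-WAN edge telemetry for workloads on different edge devices that trigger each other or exchange data, so they can be aggregated onto the same device. The sketch below models that detection step; the telemetry record shape and the co-location suggestion are assumptions, and the real system would also weigh resource utilization before migrating anything.

```python
# Sketch of spotting cross-device workload dependencies from telemetry,
# as in the abstract. Record shapes and the grouping rule are assumptions.

# telemetry: (workload, edge_device, depends_on_workload) tuples observed
telemetry = [
    ("ingest", "edge-1", "transform"),
    ("transform", "edge-2", None),
    ("report", "edge-3", "transform"),
]

placement = {workload: device for workload, device, _ in telemetry}

def aggregation_candidates(records):
    """Yield dependent workload pairs currently split across edge devices;
    these are candidates for aggregation onto one device."""
    for workload, device, dep in records:
        if dep and placement.get(dep) and placement[dep] != device:
            yield (workload, device), (dep, placement[dep])

for (w, d), (dep, dep_dev) in aggregation_candidates(telemetry):
    print(f"{w}@{d} depends on {dep}@{dep_dev}: consider co-locating")
```
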
  • Patent number: 11720457
    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: August 8, 2023
    Assignee: VMware, Inc.
    Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
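
The recovery flow above restores a baseline of the failed host's memory region from shared storage, then overlays only the pages that the source host's dirty-page lists mark as newer than that baseline. The Python sketch below outlines that sequence; the storage and RDMA-read interfaces are illustrative stand-ins, while in the patent those reads go over RDMA directly against the source host's memory.

```python
# Outline of the dirty-page recovery flow in the abstract. The storage and
# RDMA-read interfaces here are illustrative stand-ins, not real APIs.
PAGE_SIZE = 4096

def recover_region(storage_backend, rdma_read, region_id, num_pages):
    """Rebuild a failed host's memory region on the failover host."""
    # 1. Allocate a new region and populate it with the shared baseline copy.
    baseline = storage_backend.read_region(region_id)           # assumed call
    region = bytearray(baseline.ljust(num_pages * PAGE_SIZE, b"\0"))

    # 2. Fetch the dirty-page lists from the source host via RDMA.
    dirty_pages = rdma_read.dirty_page_list(region_id)          # assumed call

    # 3. Copy only pages whose updates are not present in the baseline.
    for page_no in dirty_pages:
        offset = page_no * PAGE_SIZE
        region[offset:offset + PAGE_SIZE] = rdma_read.page(region_id, page_no)
    return bytes(region)
```
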
  • Patent number: 11711307
    Abstract: In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: July 25, 2023
    Assignee: VMware, Inc.
    Inventors: Santosh Pallagatti Kotrabasappa, Sairam Veeraswamy, Abhishek Goliya, Abbas Mohamed
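
The abstract above classifies a new flow as elephant or mice from its first packets, then chooses a network resource based on that classification and on the classifications of flows already assigned to each resource. A minimal Python sketch follows, assuming a simple byte-count feature and a "fewest elephants first" assignment rule; both are illustrative, not taken from the patent.

```python
# Sketch of classify-then-assign for elephant/mice flows. The byte-count
# threshold and the assignment heuristic are illustrative assumptions.
from collections import defaultdict

ELEPHANT_BYTES = 1_000_000   # assumed threshold over the first packets

# classification counts per network resource (e.g. uplink or queue)
assignments = {"uplink-a": defaultdict(int), "uplink-b": defaultdict(int)}

def classify(first_packets: list[bytes]) -> str:
    """Classify a flow before it is assigned to any resource."""
    total = sum(len(p) for p in first_packets)
    return "elephant" if total >= ELEPHANT_BYTES else "mice"

def assign(first_packets: list[bytes]) -> str:
    label = classify(first_packets)
    # Place the flow on the resource carrying the fewest elephants so heavy
    # flows do not pile onto one resource.
    resource = min(assignments, key=lambda r: assignments[r]["elephant"])
    assignments[resource][label] += 1
    return resource  # packets for the flow are then sent via this resource
```
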
  • Publication number: 20230229219
    Abstract: Described herein are systems, methods, and software to manage power consumption in a software build environment. In one implementation, a monitoring service monitors power consumption information associated with a build environment for one or more software components. The monitoring service further identifies one or more trends associated with the power consumption information based at least on the power consumption information satisfying one or more criteria and generates a summary for display that indicates at least the one or more trends. The monitoring service may also identify and display as part of the summary one or more suggestions to improve power consumption based on the one or more trends.
    Type: Application
    Filed: April 25, 2022
    Publication date: July 20, 2023
    Inventors: Shalini Singh, Sairam Veeraswamy, Adarsh Jagadeeshwaran, Joshua Philip Schnee, Vijayaraghavan Soundararajan, Shiva Ds, Harsh Hirani, Priya Kalaiselvan, Shashank Rai
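
The monitoring service above tracks power consumed by a software build environment, flags trends once the data satisfies some criteria, and surfaces suggestions alongside the summary. A small sketch of such a trend check follows; the per-build sample format, the 10% threshold, and the suggestion text are assumptions for illustration.

```python
# Sketch of a build-power trend check in the spirit of the abstract.
# The sample shape, threshold, and suggestion text are assumptions.
from statistics import mean

def summarize(samples: list[float], threshold: float = 0.10) -> dict:
    """Compare recent build power to the earlier baseline and report a trend
    plus a suggestion when consumption has risen past the threshold."""
    if len(samples) < 4:
        return {"trend": "insufficient data"}
    half = len(samples) // 2
    baseline, recent = mean(samples[:half]), mean(samples[half:])
    change = (recent - baseline) / baseline
    summary = {"baseline_watts": baseline, "recent_watts": recent, "change": change}
    if change > threshold:
        summary["trend"] = "rising power consumption"
        summary["suggestion"] = "review recently added build steps or parallelism"
    else:
        summary["trend"] = "stable"
    return summary

print(summarize([210.0, 205.0, 215.0, 260.0, 270.0, 265.0]))
```
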
  • Patent number: 11704030
    Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: July 18, 2023
    Assignee: VMware, Inc.
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
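
The tiering described above keeps a process's actively used pages in the computing device's local memory and parks its inactive pages in a byte-addressable chunk acquired from a cluster memory host. A compact Python sketch of that split follows; the recency-based active/inactive test is an assumption.

```python
# Sketch of splitting a process's pages between local memory and a remote
# byte-addressable chunk, per the abstract. The recency test is an assumption.
def place_pages(pages: dict[int, float], now: float, idle_after: float = 30.0):
    """pages maps page number -> last access time (seconds).
    Returns (local_pages, remote_pages) for the process."""
    active = {p for p, t in pages.items() if now - t < idle_after}
    inactive = set(pages) - active
    return active, inactive

local, remote_chunk = place_pages({1: 99.0, 2: 40.0, 3: 98.5}, now=100.0)
# pages 1 and 3 stay in local memory; page 2 moves to the cluster memory chunk
```
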
  • Patent number: 11698760
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of cluster memory. First, a computing device can submit a write request to a byte-addressable chunk of memory stored by a memory host, wherein the byte-addressable chunk of memory is read-only. Then, the computing device can determine that a page-fault occurred in response to the write request. Next, the computing device can copy a page associated with the write request from the byte-addressable chunk of memory to the memory of the computing device. Subsequently, the computing device can free the page from the memory host. Then, the computing device can update a page table entry for the page to refer to a location of the page in the memory of the computing device.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: July 11, 2023
    Assignee: VMware, Inc.
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
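
The copy-on-write path above turns a faulting write against a read-only remote chunk into a local copy of the page: copy the page in, free it on the memory host, and repoint the page table entry. The sketch below models that handler; the chunk and page-table objects are simplified assumptions, not the actual data structures.

```python
# Simplified model of the write-fault handler in the abstract: copy the page
# locally, free it on the memory host, and update the page table entry.
class RemoteChunk:
    """Read-only byte-addressable chunk served by a memory host (simplified)."""
    def __init__(self, pages: dict[int, bytes]):
        self.pages = pages

    def free(self, page_no: int) -> None:
        del self.pages[page_no]

local_memory: dict[int, bytearray] = {}
page_table: dict[int, str] = {}          # page number -> "remote" or "local"

def handle_write_fault(chunk: RemoteChunk, page_no: int, offset: int, value: int):
    page = bytearray(chunk.pages[page_no])   # copy the page from the remote chunk
    local_memory[page_no] = page             # it now lives in local memory
    chunk.free(page_no)                      # release it on the memory host
    page_table[page_no] = "local"            # PTE now refers to the local copy
    page[offset] = value                     # retry the write that faulted
```
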
  • Patent number: 11687286
    Abstract: Disclosed are various embodiments for improving the resiliency and performance for clustered memory. A computing device can mark a page of the memory as being reclaimed. The computing device can then set the page of the memory as read-only. Next, the computing device can submit a write request for the contents of the page to individual ones of a plurality of memory hosts. Subsequently, the computing device can receive individual confirmations of a successful write of the page from the individual ones of the plurality of memory hosts. Then, the computing device can mark the page as free in response to receipt of the individual confirmations of the successful write from the individual ones of the plurality of memory hosts.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: June 27, 2023
    Assignee: VMware, Inc.
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20230168965
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of clustered memory. A computing device can generate at least one parity page from at least a first local page and a second local page. The computing device can then submit a first write request for the first local page to a first one of a plurality of memory hosts. The computing device can also submit a second write request for the second local page to a second one of the plurality of memory hosts. Additionally, the computing device can submit a third write request for the parity page to a third one of the plurality of memory hosts.
    Type: Application
    Filed: January 25, 2023
    Publication date: June 1, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
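
The scheme above derives a parity page from two local pages and spreads the three pages across three memory hosts, so either data page can be rebuilt if one host is lost. A short XOR-parity sketch follows; the fixed 4 KiB page size and the host list are assumptions for illustration.

```python
# XOR-parity sketch of the three-host write pattern in the abstract.
# Page size and the host names are assumptions for illustration.
PAGE_SIZE = 4096

def parity(a: bytes, b: bytes) -> bytes:
    """Parity page: byte-wise XOR of the two local pages."""
    return bytes(x ^ y for x, y in zip(a, b))

page_a = bytes([1] * PAGE_SIZE)
page_b = bytes([2] * PAGE_SIZE)
page_p = parity(page_a, page_b)

hosts = {"host-1": page_a, "host-2": page_b, "host-3": page_p}  # one write each

# If host-1 is lost, page_a is recoverable from the surviving pair:
assert parity(hosts["host-2"], hosts["host-3"]) == page_a
```
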
  • Publication number: 20230023696
    Abstract: Disclosed are various embodiments for optimizing the migration of processes or virtual machines in cluster memory systems. To begin, a first computing device can identify a set of pages allocated to a process or virtual machine hosted by the first computing device. Then, the first computing device can identify a subset of the allocated pages that have been accessed with at least a predefined frequency. Next, the first computing device can copy the subset of the allocated pages to a second computing device. Subsequently, the first computing device can copy a page mapping table to the second computing device, the page mapping table specifying which pages in the set of pages allocated to the process or virtual machine are stored by a memory host. Finally, the first computing device can copy remaining ones of the allocated pages to the second computing device.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 26, 2023
    Inventors: Marcos K. Aguilera, Pratap Subrahmanyam, Sairam Veeraswamy, Praveen Vegulla, Rajesh Venkatasubramanian
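
The migration order above moves frequently accessed pages first, then the page mapping table (which records which pages live on a memory host and therefore need not move), and finally the rest. The sketch below captures that ordering; the access-count threshold, the data shapes, and the send transport are assumptions.

```python
# Sketch of the migration ordering in the abstract: hot pages, then the page
# mapping table, then remaining pages. Threshold and shapes are assumptions.
def migrate(pages: dict[int, bytes], access_counts: dict[int, int],
            mapping_table: dict[int, str], send, hot_threshold: int = 100):
    """send(kind, payload) is an assumed transport to the destination device."""
    hot = [p for p, n in access_counts.items()
           if n >= hot_threshold and p in pages]

    for p in hot:                               # 1. frequently accessed pages first
        send("page", (p, pages[p]))
    send("mapping_table", mapping_table)        # 2. which pages a memory host holds
    for p in pages:                             # 3. everything else
        if p not in hot:
            send("page", (p, pages[p]))
```
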
  • Publication number: 20230021883
    Abstract: Disclosed are various embodiments for optimizing the migration of pages of memory servers in cluster memory systems. To begin, a computing device can mark in a page table of the computing device that a page stored on a first memory host is not present. Then, the computing device can flush a translation lookaside buffer of the computing device. Next, the computing device can copy the page from the first memory host to a second memory host. Moving on, the computing device can update a page mapping table to reflect that the page is stored in the second memory host. Then, the computing device can mark in the page table of the computing device that the page stored in the second memory host is present. Subsequently, the computing device can discard the page stored on the first memory host.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 26, 2023
    Inventors: Marcos K. Aguilera, Pratap Subrahmanyam, Sairam Veeraswamy, Praveen Vegulla, Rajesh Venkatasubramanian
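
The host-to-host page move above follows a strict order: mark the page not present, flush the TLB, copy the page, update the mapping table, mark the page present again, then discard the old copy. The sketch below mirrors those six steps with simplified stand-ins for the page table, TLB, and memory hosts.

```python
# Step-by-step model of the page move in the abstract. The page-table, TLB,
# and host objects are simplified stand-ins for illustration.
def move_page(page_no, page_table, tlb, mapping, src_host, dst_host):
    page_table[page_no]["present"] = False        # 1. mark not present
    tlb.clear()                                   # 2. flush the TLB
    dst_host[page_no] = src_host[page_no]         # 3. copy to the second host
    mapping[page_no] = "dst"                      # 4. update the page mapping table
    page_table[page_no]["present"] = True         # 5. mark present again
    del src_host[page_no]                         # 6. discard the old copy

page_table = {7: {"present": True}}
tlb, mapping = {7: "cached"}, {7: "src"}
src, dst = {7: b"data"}, {}
move_page(7, page_table, tlb, mapping, src, dst)
```
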
  • Publication number: 20230017804
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of cluster memory. First, a computing device can submit a write request to a byte-addressable chunk of memory stored by a memory host, wherein the byte-addressable chunk of memory is read-only. Then, the computing device can determine that a page-fault occurred in response to the write request. Next, the computing device can copy a page associated with the write request from the byte-addressable chunk of memory to the memory of the computing device. Subsequently, the computing device can free the page from the memory host. Then, the computing device can update a page table entry for the page to refer to a location of the page in the memory of the computing device.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20230012999
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of clustered memory. A computing device can generate at least one parity page from at least a first local page and a second local page. The computing device can then submit a first write request for the first local page to a first one of a plurality of memory hosts. The computing device can also submit a second write request for the second local page to a second one of the plurality of memory hosts. Additionally, the computing device can submit a third write request for the parity page to a third one of the plurality of memory hosts.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20230012693
    Abstract: Disclosed are various embodiments for optimizing hypervisor paging. A hypervisor can save a machine page to a swap device, the machine page comprising data for a physical page of a virtual machine allocated to a virtual page for a process executing within the virtual machine. The hypervisor can then catch a page fault for a subsequent access of the machine page by the virtual machine. Next, the hypervisor can determine that the physical page is currently unallocated by the virtual machine in response to the page fault. Subsequently, the hypervisor can send a command to the swap device to discard the machine page saved to the swap device in response to a determination that the physical page is currently unallocated by the virtual machine.
    Type: Application
    Filed: October 4, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Dhantu Buragohain, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
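
The optimization above avoids reading a swapped-out machine page back in when the guest has since freed the physical page behind it: on the fault, the hypervisor checks the allocation and simply tells the swap device to discard the saved page. The sketch below models that fault path; the swap-device and allocation-tracking shapes are assumptions for illustration.

```python
# Model of the hypervisor-paging optimization in the abstract: skip swap-in
# and discard the saved page when the guest no longer has it allocated.
# The swap device and allocation-tracking structures are assumptions.
swap_device: dict[int, bytes] = {}       # physical page number -> saved machine page
guest_allocated: set[int] = set()        # physical pages the VM currently uses

def swap_out(ppn: int, machine_page: bytes) -> None:
    swap_device[ppn] = machine_page      # save the machine page to the swap device

def handle_page_fault(ppn: int):
    """Fault on a subsequent access to a swapped page."""
    if ppn not in guest_allocated:
        # The VM freed this physical page after it was swapped out, so its
        # old contents are dead data: discard instead of reading back in.
        swap_device.pop(ppn, None)
        return None
    return swap_device.pop(ppn)          # otherwise swap the page back in
```
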
  • Publication number: 20230021067
    Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20230017224
    Abstract: Disclosed are various embodiments for improving the resiliency and performance for clustered memory. A computing device can mark a page of the memory as being reclaimed. The computing device can then set the page of the memory as read-only. Next, the computing device can submit a write request for the contents of the page to individual ones of a plurality of memory hosts. Subsequently, the computing device can receive individual confirmations of a successful write of the page from the individual ones of the plurality of memory hosts. Then, the computing device can mark the page as free in response to receipt of the individual confirmations of the successful write from the individual ones of the plurality of memory hosts.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20220365855
    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
    Type: Application
    Filed: July 28, 2022
    Publication date: November 17, 2022
    Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
  • Patent number: 11436112
    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: September 6, 2022
    Assignee: VMware, Inc.
    Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati