Patents by Inventor Pratap Subrahmanyam

Pratap Subrahmanyam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11586545
    Abstract: Memory pages of a local application program are prefetched from a memory of a remote host. A method of prefetching the memory pages from the remote memory includes detecting that a cache-line access made by a processor executing the local application program is an access to a cache line containing page table data of the local application program, identifying data pages that are referenced by the page table data, and fetching the identified data pages from the remote memory and storing the fetched data pages in a local memory.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: February 21, 2023
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Isam Wadih Akkawi, Venkata Subhash Reddy Peddamallu, Pratap Subrahmanyam
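The prefetch trigger this abstract describes can be illustrated with a small Python sketch. This is not the patented implementation; the dict-based remote/local memories and the table of page-table cache-line addresses are assumptions made for the example:

```python
# Remote host memory: page number -> page contents (stand-in for real remote RAM).
REMOTE_MEMORY = {0x1000: b"data-A", 0x2000: b"data-B"}

# Cache-line addresses known to hold page-table data, mapped to the data
# pages their entries reference.
PAGE_TABLE_LINES = {0xFFF00000: [0x1000, 0x2000]}

local_memory = {}   # page number -> locally stored contents

def on_cache_line_access(line_addr):
    """If the accessed cache line contains page-table data, fetch the data
    pages it references from remote memory into local memory."""
    referenced = PAGE_TABLE_LINES.get(line_addr)
    if referenced is None:
        return 0                     # ordinary data line; nothing to prefetch
    fetched = 0
    for page in referenced:
        if page not in local_memory and page in REMOTE_MEMORY:
            local_memory[page] = REMOTE_MEMORY[page]
            fetched += 1
    return fetched
```

A page-table access prefetches every data page its entries reference, while accesses to ordinary lines are ignored.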
  • Publication number: 20230033029
    Abstract: Disclosed are various embodiments for optimized memory tiering. A first page can be allocated in a first memory for a process, the first memory being associated with a first memory tier. Accesses of the first page by the process during execution of the process can be monitored. Then, accesses of the first page by the process during execution of the process can be compared to an allocation policy to make a first determination to move the contents of the first page from the first memory to a second memory associated with a second memory tier. Next, the contents of the first page can be copied from the first memory to a second page in the second memory in response to the first determination.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 2, 2023
    Inventors: Marcos Kawazoe Aguilera, Renu Raman, Pratap Subrahmanyam, Praveen Vegulla, Rajesh Venkatasubramanian
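The monitor-compare-move loop in this abstract can be sketched as follows. The threshold-based allocation policy is an assumption; the patent does not specify a particular policy:

```python
class AllocationPolicy:
    """Assumed policy: demote a page whose access count falls below a floor."""
    def __init__(self, cold_threshold=2):
        self.cold_threshold = cold_threshold

    def should_demote(self, access_count):
        return access_count < self.cold_threshold

def tier_pages(access_counts, policy, tier1, tier2):
    """Compare monitored access counts against the policy and copy cold
    pages' contents from the fast tier (tier1) to the slow tier (tier2)."""
    for page, count in access_counts.items():
        if page in tier1 and policy.should_demote(count):
            tier2[page] = tier1.pop(page)   # copy contents, free the fast tier

tier1 = {"p0": b"hot", "p1": b"cold"}
tier2 = {}
tier_pages({"p0": 10, "p1": 1}, AllocationPolicy(), tier1, tier2)
```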
  • Publication number: 20230031304
    Abstract: Disclosed are various embodiments for optimized memory tiering. An ideal tier size for a first memory and an ideal tier size for a second memory can be determined for a process. Then, a host computing device can be identified that can accommodate the ideal tier size for the first memory and the second memory. Subsequently, the process can be assigned to the host computing device.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 2, 2023
    Inventors: Marcos Kawazoe Aguilera, Renu Raman, Pratap Subrahmanyam, Praveen Vegulla, Rajesh Venkatasubramanian
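The placement step, matching a process's ideal tier sizes against host capacity, reduces to a simple fit check. A first-fit sketch, with host capacities as assumed example data:

```python
def pick_host(hosts, tier1_need, tier2_need):
    """Return the first host whose free capacity accommodates the process's
    ideal tier sizes for both memories, or None if no host fits."""
    for name, (tier1_free, tier2_free) in hosts.items():
        if tier1_free >= tier1_need and tier2_free >= tier2_need:
            return name
    return None

# (fast-tier GiB free, slow-tier GiB free) per host -- illustrative numbers
hosts = {"hostA": (4, 64), "hostB": (16, 256)}
```

For example, a process whose ideal tier sizes are 8 GiB and 128 GiB would be assigned to `hostB`.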
  • Publication number: 20230028825
    Abstract: A device tracks accesses to pages of code executed by processors and modifies a portion of the code without terminating the execution of the code. The device is connected to the processors via a coherence interconnect and a local memory of the device stores the code pages. As a result, any requests to access cache lines of the code pages made by the processors will be placed on the coherence interconnect, and the device is able to track any cache-line accesses of the code pages by monitoring the coherence interconnect. In response to a request to read a cache line having a particular address, a modified code portion is returned in place of the code portion stored in the code pages.
    Type: Application
    Filed: November 19, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
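The substitution step, returning a modified code portion in place of the stored one, can be sketched in a few lines. The addresses and byte contents are made up, and the dicts stand in for the device's local memory and its patch table:

```python
code_pages = {0x400000: b"\x90" * 64}          # cache line of original code
patches = {0x400000: b"\xcc" + b"\x90" * 63}   # modified portion for that line

def read_cache_line(addr):
    """Device-side handler: a read observed on the coherence interconnect
    returns the modified code portion, if one exists, instead of the
    portion stored in the code pages."""
    if addr in patches:
        return patches[addr]
    return code_pages[addr]
```

Because the processors' reads all traverse the coherence interconnect, the running code is never stopped to apply the patch.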
  • Publication number: 20230021883
    Abstract: Disclosed are various embodiments for optimizing the migration of pages of memory servers in cluster memory systems. To begin, a computing device can mark in a page table of the computing device that a page stored on a first memory host is not present. Then, the computing device can flush a translation lookaside buffer of the computing device. Next, the computing device can copy the page from the first memory host to a second memory host. Moving on, the computing device can update a page mapping table to reflect that the page is stored in the second memory host. Then, the computing device can mark in the page table of the computing device that the page stored in the second memory host is present. Subsequently, the computing device can discard the page stored on the first memory host.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 26, 2023
    Inventors: MARCOS K. AGUILERA, PRATAP SUBRAHMANYAM, SAIRAM VEERASWAMY, PRAVEEN VEGULLA, RAJESH VENKATASUBRAMANIAN
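The six-step migration sequence in this abstract can be sketched directly. The flat dicts standing in for the page table, TLB, page mapping table, and memory hosts are assumptions for illustration:

```python
page_table = {"pg": {"present": True}}
tlb = {"pg": "cached-translation"}
page_map = {"pg": "host1"}                 # page mapping table
hosts = {"host1": {"pg": b"contents"}, "host2": {}}

def migrate(page, src, dst):
    page_table[page]["present"] = False    # 1. mark the page not present
    tlb.clear()                            # 2. flush the TLB
    hosts[dst][page] = hosts[src][page]    # 3. copy the page to the new host
    page_map[page] = dst                   # 4. update the page mapping table
    page_table[page]["present"] = True     # 5. mark the page present again
    del hosts[src][page]                   # 6. discard the old copy

migrate("pg", "host1", "host2")
```

The ordering matters: marking the page not present and flushing the TLB before the copy prevents stale accesses to the old host during the move.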
  • Publication number: 20230023256
    Abstract: A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the second cache-line address with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
    Type: Application
    Filed: September 28, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
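The cache-line-granularity copy-on-write described here can be sketched with a page table entry that carries two base addresses, one for reads of the shared page and one for writes into the new page. The dict-based memories are assumptions:

```python
pte = {"read_base": 0x1000, "write_base": 0x2000}  # shared page / new page
LINE = 0x40                                        # 64-byte cache lines

shared_memory = {0x1000: b"old0", 0x1040: b"old1"}  # shared page's lines
second_memory = {}                                  # holds written lines

def write_line(line_index, data):
    """Writes use the new page's cache-line address and land in the
    second memory, leaving the shared page untouched."""
    second_memory[pte["write_base"] + line_index * LINE] = data

def read_line(line_index):
    """Reads return new data from the second memory when a line has been
    written, and fall back to the shared page otherwise."""
    new_addr = pte["write_base"] + line_index * LINE
    if new_addr in second_memory:
        return second_memory[new_addr]
    return shared_memory[pte["read_base"] + line_index * LINE]

write_line(1, b"new1")
```

Only written cache lines are copied, rather than the whole page at first write.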
  • Publication number: 20230022096
    Abstract: While an application or a virtual machine (VM) is running, a device tracks accesses to cache lines to detect access patterns that indicate security attacks, such as cache-based side channel attacks or row hammer attacks. To enable the device to detect accesses to cache lines, the device is connected to processors via a coherence interconnect, and the application/VM data is stored in a local memory of the device. The device collects the cache lines of the application/VM data that are accessed while the application/VM is running into a buffer and the buffer is analyzed for access patterns that indicate security attacks.
    Type: Application
    Filed: July 22, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
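The buffer-then-analyze step can be sketched with a simple repetition heuristic. The threshold and the frequency test are assumptions; real side-channel and row hammer detection would use richer pattern analysis:

```python
from collections import Counter

buffer = []            # cache-line addresses observed on the interconnect

def record_access(line_addr):
    """Collect each cache-line access of the application/VM data."""
    buffer.append(line_addr)

def suspicious(threshold=1000):
    """Flag lines accessed far more often than normal -- a crude stand-in
    for row-hammer / side-channel access-pattern analysis."""
    counts = Counter(buffer)
    return [addr for addr, n in counts.items() if n >= threshold]

for _ in range(1500):
    record_access(0xBAD0)   # one line hammered repeatedly
record_access(0x1000)       # ordinary access
```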
  • Publication number: 20230023696
    Abstract: Disclosed are various embodiments for optimizing the migration of processes or virtual machines in cluster memory systems. To begin, a first computing device can identify a set of pages allocated to a process or virtual machine hosted by the first computing device. Then, the first computing device can identify a subset of the allocated pages that have been accessed with at least a predefined frequency. Next, the first computing device can copy the subset of the allocated pages to a second computing device. Subsequently, the first computing device can copy a page mapping table to the second computing device, the page mapping table specifying which pages in the set of pages allocated to the process or virtual machine are stored by a memory host. Finally, the first computing device can copy remaining ones of the allocated pages to the second computing device.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 26, 2023
    Inventors: Marcos K. AGUILERA, Pratap SUBRAHMANYAM, Sairam VEERASWAMY, Praveen VEGULLA, Rajesh VENKATASUBRAMANIAN
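The transfer ordering in this abstract (frequently accessed pages first, then the page mapping table, then the remainder) can be sketched as follows. The access-frequency cutoff is an assumed parameter:

```python
def migrate_process(pages, access_freq, page_map, hot_cutoff=100):
    """Return the transfer order for a process's pages and metadata."""
    hot = [p for p in pages if access_freq.get(p, 0) >= hot_cutoff]
    cold = [p for p in pages if p not in hot]
    transfers = []
    transfers += [("page", p) for p in hot]     # 1. frequently accessed pages
    transfers.append(("map", dict(page_map)))   # 2. the page mapping table
    transfers += [("page", p) for p in cold]    # 3. remaining pages
    return transfers

order = migrate_process(["a", "b", "c"], {"a": 500, "b": 3}, {"c": "memhost1"})
```

Sending the hot pages and mapping table early lets the destination begin serving the process while cold pages trail behind.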
  • Publication number: 20230021067
    Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
  • Publication number: 20230017224
    Abstract: Disclosed are various embodiments for improving the resiliency and performance for clustered memory. A computing device can mark a page of the memory as being reclaimed. The computing device can then set the page of the memory as read-only. Next, the computing device can submit a write request for the contents of the page to individual ones of a plurality of memory hosts. Subsequently, the computing device can receive individual confirmations of a successful write of the page from the individual ones of the plurality of memory hosts. Then, the computing device can mark the page as free in response to receipt of the individual confirmations of the successful write from the individual ones of the plurality of memory hosts.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: MARCOS K. AGUILERA, KEERTHI KUMAR, PRAMOD KUMAR, PRATAP SUBRAHMANYAM, SAIRAM VEERASWAMY, RAJESH VENKATASUBRAMANIAN
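The confirmation-gated reclamation sequence can be sketched as below; the in-process dicts are stand-ins for remote memory hosts, and the state strings are illustrative only:

```python
class Page:
    def __init__(self, contents):
        self.contents = contents
        self.state = "in-use"

def reclaim(page, hosts):
    page.state = "reclaiming"             # mark the page as being reclaimed
    page.state = "read-only"              # then set it read-only
    confirmations = 0
    for host in hosts:                    # write request to each memory host
        host[id(page)] = page.contents
        confirmations += 1                # each host confirms its write
    if confirmations == len(hosts):
        page.state = "free"               # free only once all hosts confirm
    return page.state

hosts = [{}, {}, {}]
pg = Page(b"payload")
```

Waiting for every host's confirmation before freeing the page is what preserves resiliency: the contents survive on multiple hosts before the local copy disappears.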
  • Publication number: 20230017804
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of cluster memory. First, a computing device can submit a write request to a byte-addressable chunk of memory stored by a memory host, wherein the byte-addressable chunk of memory is read-only. Then, the computing device can determine that a page-fault occurred in response to the write request. Next, the computing device can copy a page associated with the write request from the byte-addressable chunk of memory to the memory of the computing device. Subsequently, the computing device can free the page from the memory host. Then, the computing device can update a page table entry for the page to refer to a location of the page in the memory of the computing device.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
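The fault-driven promotion described here can be sketched with dicts modeling the read-only remote chunk, local memory, and page table:

```python
remote_chunk = {"pg": b"bytes"}          # byte-addressable, read-only chunk
local_memory = {}
page_table = {"pg": ("remote", "pg")}    # where each page currently lives

def write_page(page, data):
    """A write against the read-only chunk faults; the handler copies the
    page local, frees it on the memory host, updates the page table entry,
    and then retries the write locally."""
    if page_table[page][0] == "remote":
        local_memory[page] = remote_chunk.pop(page)   # copy, then free remote
        page_table[page] = ("local", page)            # update the page table
    local_memory[page] = data                         # the write now succeeds

write_page("pg", b"new-bytes")
```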
  • Publication number: 20230012999
    Abstract: Disclosed are various embodiments for improving the resiliency and performance of clustered memory. A computing device can generate at least one parity page from at least a first local page and a second local page. The computing device can then submit a first write request for the first local page to a first one of a plurality of memory hosts. The computing device can also submit a second write request for the second local page to a second one of the plurality of memory hosts. Additionally, the computing device can submit a third write request for the parity page to a third one of the plurality of memory hosts.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 19, 2023
    Inventors: MARCOS K. AGUILERA, KEERTHI KUMAR, PRAMOD KUMAR, PRATAP SUBRAHMANYAM, SAIRAM VEERASWAMY, RAJESH VENKATASUBRAMANIAN
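The three-way write described above can be sketched with XOR parity (an assumption; the abstract does not specify the parity code). Spreading the two data pages and the parity page across distinct hosts means either data page can be rebuilt from the other page plus the parity:

```python
def make_parity(page_a, page_b):
    """XOR two equal-length pages byte by byte."""
    return bytes(a ^ b for a, b in zip(page_a, page_b))

page1, page2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = make_parity(page1, page2)

hosts = [{}, {}, {}]
hosts[0]["p1"] = page1           # first write request  -> first memory host
hosts[1]["p2"] = page2           # second write request -> second memory host
hosts[2]["parity"] = parity      # third write request  -> third memory host

# If host 0 is lost, page1 is still recoverable:
rebuilt = make_parity(hosts[1]["p2"], hosts[2]["parity"])
```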
  • Publication number: 20230012693
    Abstract: Disclosed are various embodiments for optimizing hypervisor paging. A hypervisor can save a machine page to a swap device, the machine page comprising data for a physical page of a virtual machine allocated to a virtual page for a process executing within the virtual machine. The hypervisor can then catch a page fault for a subsequent access of the machine page by the virtual machine. Next, the hypervisor can determine that the physical page is currently unallocated by the virtual machine in response to the page fault. Subsequently, the hypervisor can send a command to the swap device to discard the machine page saved to the swap device in response to a determination that the physical page is currently unallocated by the virtual machine.
    Type: Application
    Filed: October 4, 2021
    Publication date: January 19, 2023
    Inventors: MARCOS K. AGUILERA, DHANTU BURAGOHAIN, KEERTHI KUMAR, PRAMOD KUMAR, PRATAP SUBRAHMANYAM, SAIRAM VEERASWAMY, RAJESH VENKATASUBRAMANIAN
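The discard path in this abstract can be sketched as follows; the dict-based swap device and allocation set are assumptions standing in for the hypervisor's real bookkeeping:

```python
swap_device = {}
vm_allocated = set()       # physical pages the VM currently has allocated

def swap_out(machine_page, contents):
    """Hypervisor saves a machine page to the swap device."""
    swap_device[machine_page] = contents

def on_page_fault(machine_page):
    """On a later access, discard the saved copy if the guest no longer
    has the physical page allocated; otherwise swap it back in."""
    if machine_page not in vm_allocated:
        swap_device.pop(machine_page, None)   # command the device to discard
        return "discarded"
    return swap_device[machine_page]

vm_allocated.add("p1")
swap_out("p1", b"live")
swap_out("p2", b"stale")   # the guest subsequently freed p2
```

Discarding instead of reading back avoids paying swap-in I/O for data the guest will never use again.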
  • Publication number: 20230004496
    Abstract: Memory pages of a local application program are prefetched from a memory of a remote host. A method of prefetching the memory pages from the remote memory includes detecting that a cache-line access made by a processor executing the local application program is an access to a cache line containing page table data of the local application program, identifying data pages that are referenced by the page table data, and fetching the identified data pages from the remote memory and storing the fetched data pages in a local memory.
    Type: Application
    Filed: July 2, 2021
    Publication date: January 5, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Isam Wadih AKKAWI, Venkata Subhash Reddy PEDDAMALLU, Pratap SUBRAHMANYAM
  • Publication number: 20230004497
    Abstract: A method of prefetching memory pages from remote memory includes detecting that a cache-line access made by a processor executing an application program is an access to a cache line containing page table data of the application program, identifying data pages that are referenced by the page table data, initiating a fetch of a data page, which is one of the identified data pages, and starting a timer. If the fetch completes prior to expiration of the timer, the data page is stored in a local memory. On the other hand, if the fetch does not complete prior to expiration of the timer, a presence bit of the data page in the page table data is set to indicate that the data page is not present.
    Type: Application
    Filed: July 25, 2022
    Publication date: January 5, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Isam Wadih AKKAWI, Venkata Subhash Reddy PEDDAMALLU, Pratap SUBRAHMANYAM
  • Patent number: 11544194
    Abstract: A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the second cache-line address with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: January 3, 2023
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Pratap Subrahmanyam
  • Publication number: 20220414017
    Abstract: The state of cache lines transferred into and out of caches of processing hardware is tracked by monitoring hardware. The method of tracking includes monitoring the processing hardware for cache coherence events on a coherence interconnect between the processing hardware and monitoring hardware, determining that the state of a cache line has changed, and updating a hierarchical data structure to indicate the change in the state of said cache line. The hierarchical data structure includes a first level data structure including first bits, and a second level data structure including second bits, each of the first bits associated with a group of second bits. The step of updating includes setting one of the first bits and one of the second bits in the group corresponding to the first bit that is being set, according to an address of said cache line.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 29, 2022
    Inventors: Nishchay DUA, Andreas NOWATZYK, Isam Wadih AKKAWI, Pratap SUBRAHMANYAM, Venkata Subhash Reddy PEDDAMALLU, Adarsh Seethanadi NAYAK
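The two-level structure in this abstract can be sketched with sets standing in for the bit arrays; each first-level bit summarizes a group of second-level bits, so a scan can skip whole groups whose first-level bit is clear. The line size and group width are assumed values:

```python
LINE_SHIFT = 6     # 64-byte cache lines
GROUP_BITS = 64    # second-level bits summarized by one first-level bit

level1 = set()     # indices of set first-level bits (one per group)
level2 = set()     # indices of set second-level bits (one per cache line)

def mark_changed(address):
    """Record a cache-line state change at both levels of the hierarchy,
    derived from the cache line's address."""
    line = address >> LINE_SHIFT        # cache-line index from the address
    level2.add(line)                    # set the second-level bit
    level1.add(line // GROUP_BITS)      # set the summarizing first-level bit

mark_changed(0x0000)   # line 0  -> group 0
mark_changed(0x1000)   # line 64 -> group 1
```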
  • Publication number: 20220405121
    Abstract: Disclosed are various embodiments for decreasing the amount of time spent processing interrupts by switching contexts in parallel with processing an interrupt. An interrupt request can be received during execution of a process in a less privileged user mode. Then, the current state of the process can be saved. Next, a switch from the less privileged mode to a more privileged mode can be made. The interrupt request is then processed while in the more privileged mode. Subsequently or in parallel, and possibly prior to completion of the processing of the interrupt request, another switch from the more privileged mode to the less privileged mode can be made.
    Type: Application
    Filed: June 18, 2021
    Publication date: December 22, 2022
    Inventors: Yizhou Shan, Marcos Kawazoe Aguilera, Pratap Subrahmanyam, Rajesh Venkatasubramanian
  • Publication number: 20220398014
    Abstract: Disclosed are various embodiments for high throughput reclamation of pages in memory. A first plurality of pages in a memory of the computing device are identified to reclaim. In addition, a second plurality of pages in the memory of the computing device are identified to reclaim. The first plurality of pages are prepared for storage on a swap device of the computing device. Then, a write request is submitted to a swap device to store the first plurality of pages. After submission of the write request, the second plurality of pages are prepared for storage on the swap device while the swap device completes the write request.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 15, 2022
    Inventors: Emmanuel Amaro Ramirez, Marcos Kawazoe Aguilera, Pratap Subrahmanyam, Rajesh Venkatasubramanian
  • Publication number: 20220334774
    Abstract: Disclosed are various approaches for decreasing the latency involved in reading pages from swap devices. These approaches can include setting a first queue in the plurality of queues as a highest priority queue and a second queue in the plurality of queues as a low priority queue. Then, an input/output (I/O) request for an address in memory can be received.
    Type: Application
    Filed: July 9, 2021
    Publication date: October 20, 2022
    Inventors: Emmanuel Amaro Ramirez, Marcos Kawazoe Aguilera, Pratap Subrahmanyam, Rajesh Venkatasubramanian