Patents by Inventor Raviprasad Mummidi
Raviprasad Mummidi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11429694
Abstract: Techniques for managing customer license agreements are described. In one embodiment, a user-specified resource metric of a license model and a user-specified limit of the user-specified resource metric are obtained. A request for permission to launch a new compute resource at a computing device of a provider network is obtained from a service within the provider network. The new compute resource has a property that is an amount of the user-specified metric. A determination is made that a launch of the new compute resource would cause the user-specified limit to be exceeded, and the request to launch the new compute resource is denied.
Type: Grant
Filed: August 17, 2018
Date of Patent: August 30, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Murtaza Chowdhury, Malcom Featonby, Adnan Ijaz, Anup P. Pandya, Anupama Anand, Niti Khadapkar, Ramapulla Reddy Chennuru, Raviprasad Mummidi, Srivasan Ramkumar, Jagruti Patil, Yupeng Zhang
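The admission decision in the abstract above can be sketched as a simple check: grant a launch only if the requested amount of the metric, added to current usage, stays within the user-specified limit. This is an illustrative sketch, not the patented implementation; `LicenseConfig` and `can_launch` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class LicenseConfig:
    metric: str   # user-specified resource metric, e.g. "vCPU"
    limit: int    # user-specified limit on that metric

def can_launch(config: LicenseConfig, current_usage: int, requested: int) -> bool:
    """Grant the launch only if it would not push usage past the limit."""
    return current_usage + requested <= config.limit

cfg = LicenseConfig(metric="vCPU", limit=16)
print(can_launch(cfg, current_usage=12, requested=4))   # within the limit
print(can_launch(cfg, current_usage=12, requested=8))   # would exceed it: denied
```

The key property is that the check happens before the resource exists, so a launch that would breach the license limit is refused rather than rolled back.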
-
Publication number: 20200057841
Abstract: Techniques for managing customer license agreements are described. In one embodiment, a user-specified resource metric of a license model and a user-specified limit of the user-specified resource metric are obtained. A request for permission to launch a new compute resource at a computing device of a provider network is obtained from a service within the provider network. The new compute resource has a property that is an amount of the user-specified metric. A determination is made that a launch of the new compute resource would cause the user-specified limit to be exceeded, and the request to launch the new compute resource is denied.
Type: Application
Filed: August 17, 2018
Publication date: February 20, 2020
Inventors: Murtaza CHOWDHURY, Malcom FEATONBY, Adnan IJAZ, Anup P. PANDYA, Anupama ANAND, Niti KHADAPKAR, Ramapulla Reddy CHENNURU, Raviprasad MUMMIDI, Srivasan RAMKUMAR, Jagruti PATIL, Yupeng ZHANG
-
Patent number: 9116829
Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
Type: Grant
Filed: July 1, 2014
Date of Patent: August 25, 2015
Assignee: VMware, Inc.
Inventors: Qasim Ali, Vivek Pandey, Raviprasad Mummidi, Kiran Tati
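The ranking step in the abstract above can be illustrated with a small sketch: each candidate large page is backed by one L1 page table, and the number of set access bits in that table serves as the count value used to decide which candidates to map large. All names here are illustrative assumptions, not the patented implementation.

```python
ACCESS_BIT = 0x20  # the accessed bit in an x86 page table entry

def access_bit_count(l1_entries):
    """Count set access bits in one L1 page table (one large-page candidate)."""
    return sum(1 for pte in l1_entries if pte & ACCESS_BIT)

def prioritize_large_pages(page_tables, budget):
    """Map large only the `budget` candidates with the highest counts."""
    ranked = sorted(page_tables,
                    key=lambda region: access_bit_count(page_tables[region]),
                    reverse=True)
    return ranked[:budget]

# A 2 MB large page on x86 spans 512 L1 entries.
tables = {
    "regionA": [ACCESS_BIT] * 400 + [0] * 112,  # hot: most entries accessed
    "regionB": [ACCESS_BIT] * 30 + [0] * 482,   # lukewarm
    "regionC": [0] * 512,                       # cold: never accessed
}
print(prioritize_large_pages(tables, 2))  # hottest regions win the budget
```

The same ranking applies whether the counts come from nested page tables (hardware assist) or shadow page tables (software MMU); only the source of the access bits differs.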
-
Patent number: 8898518
Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the second connection. In such manner, the intermediary computer system holds back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
Type: Grant
Filed: April 18, 2012
Date of Patent: November 25, 2014
Assignee: VMware, Inc.
Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
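The intermediary's transmit policy described above can be sketched as follows: forward checkpoint data for the least recently modified pages first, and hold back recently modified (hot) pages, since those are likely to be dirtied again by a later checkpoint. Class and method names are illustrative assumptions, not taken from the patent.

```python
class CheckpointIntermediary:
    def __init__(self):
        self.pending = {}        # page_id -> latest data not yet sent to backup
        self.last_modified = {}  # page_id -> epoch of the checkpoint that dirtied it

    def receive_checkpoint(self, epoch, dirty_pages):
        """Absorb one checkpoint's worth of dirty pages from the primary VM."""
        for page_id, data in dirty_pages.items():
            self.pending[page_id] = data
            self.last_modified[page_id] = epoch

    def pages_to_send(self, n):
        """Choose the n least recently modified pending pages for the backup."""
        cold_first = sorted(self.pending, key=lambda p: self.last_modified[p])
        return cold_first[:n]

mid = CheckpointIntermediary()
mid.receive_checkpoint(1, {"p1": b"a", "p2": b"b", "p3": b"c"})
mid.receive_checkpoint(2, {"p2": b"b2"})   # p2 was just modified again: hot
print(mid.pages_to_send(2))                # the two coldest pages go first
```

Because a later checkpoint simply overwrites a held-back page's pending data, deferring hot pages saves transmissions without losing correctness.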
-
Patent number: 8898509
Abstract: Embodiments include a checkpointing fault tolerance network architecture that enables a first computer system to be remotely located from a second computer system. An intermediary computer system is situated between the first computer system and the second computer system to manage the transmission of checkpoint information from the first computer system to the second computer system in an efficient manner. The intermediary computer system responds to requests from the second computer system for updated data corresponding to memory pages selected by the second computer system, or memory pages identified through application of policy information defined by the second computer system.
Type: Grant
Filed: December 12, 2012
Date of Patent: November 25, 2014
Assignee: VMware, Inc.
Inventor: Raviprasad Mummidi
-
Patent number: 8898508
Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been less frequently modified by the primary VM and transmits such updated data to the backup VM through the second connection. In such manner, the intermediary computer system holds back updated data corresponding to more frequently modified memory pages, since such memory pages may be more likely to be updated again in the future.
Type: Grant
Filed: November 6, 2012
Date of Patent: November 25, 2014
Assignee: VMware, Inc.
Inventor: Raviprasad Mummidi
-
Publication number: 20140317375
Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
Type: Application
Filed: July 1, 2014
Publication date: October 23, 2014
Inventors: Qasim ALI, Vivek PANDEY, Raviprasad MUMMIDI, Kiran TATI
-
Patent number: 8793428
Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present to not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each has at least four consecutive present to not-present PTE transitions, a next present to not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
Type: Grant
Filed: January 22, 2013
Date of Patent: July 29, 2014
Assignee: VMware, Inc.
Inventors: Qasim Ali, Raviprasad Mummidi, Kiran Tati
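The second embodiment above can be sketched as a small state machine: per address space, count consecutive present-to-not-present PTE transitions for each guest page table; once at least two tables each reach four, treat the next such transition as a sign of an exiting process and drop the corresponding trace and shadow page table page. This is a hypothetical sketch of the heuristic, not the patented code; `ExitDetector` and its methods are invented names.

```python
class ExitDetector:
    TABLES_NEEDED = 2   # guest page tables that must show the pattern
    RUN_LENGTH = 4      # consecutive present -> not-present transitions required

    def __init__(self):
        # guest page table id -> current run of consecutive P->NP transitions
        self.runs = {}

    def on_pte_transition(self, table_id, present_to_not_present):
        """Return True when the trace/shadow page should be dropped."""
        if not present_to_not_present:
            self.runs[table_id] = 0   # any other transition breaks the run
            return False
        hot = sum(1 for r in self.runs.values() if r >= self.RUN_LENGTH)
        if hot >= self.TABLES_NEEDED:
            return True               # next P->NP event triggers teardown
        self.runs[table_id] = self.runs.get(table_id, 0) + 1
        return False
```

The intuition is that a teardown of an address space unmaps pages in bulk, so sustained runs of present-to-not-present transitions across multiple page tables distinguish an exit from ordinary paging activity.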
-
Patent number: 8612659
Abstract: Hardware interrupts are routed to one of multiple processors of a virtualized computer system based on priority values assigned to the codes being executed by the processors. Each processor dynamically updates a priority value associated with code being executed thereby, and when a hardware interrupt is generated, the hardware interrupt is routed to the processor that is executing a code with the lowest priority value to handle the hardware interrupt. As a result, routing of the interrupts can be biased away from processors that are executing high priority tasks or where context switch might be computationally expensive.
Type: Grant
Filed: December 14, 2010
Date of Patent: December 17, 2013
Assignee: VMware, Inc.
Inventors: Benjamin C. Serebrin, Raviprasad Mummidi
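The routing policy above reduces to a minimum search: each CPU publishes a priority value for the code it is currently running, and an incoming interrupt is delivered to the CPU with the lowest value. The sketch below is illustrative only; a real implementation would program the interrupt controller (e.g. the APIC) rather than choosing in software, and all names here are assumptions.

```python
class InterruptRouter:
    def __init__(self, num_cpus):
        self.priority = [0] * num_cpus   # updated dynamically by each CPU

    def set_priority(self, cpu, value):
        """Called as a CPU switches to code with a different priority."""
        self.priority[cpu] = value

    def route_interrupt(self):
        """Pick the CPU whose current code has the lowest priority value."""
        return min(range(len(self.priority)), key=lambda c: self.priority[c])

router = InterruptRouter(num_cpus=4)
router.set_priority(0, 9)   # CPU 0 runs a high-priority task: avoid it
router.set_priority(1, 2)
router.set_priority(2, 7)
router.set_priority(3, 5)
print(router.route_interrupt())  # CPU 1, running the lowest-priority code
```

Biasing delivery toward low-priority CPUs keeps high-priority tasks from paying interrupt-handling and context-switch costs.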
-
Patent number: 8364932
Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
Type: Grant
Filed: October 29, 2010
Date of Patent: January 29, 2013
Assignee: VMware, Inc.
Inventors: Qasim Ali, Raviprasad Mummidi, Vivek Pandey, Kiran Tati
-
Patent number: 8359422
Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present to not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each has at least four consecutive present to not-present PTE transitions, a next present to not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
Type: Grant
Filed: June 26, 2009
Date of Patent: January 22, 2013
Assignee: VMware, Inc.
Inventors: Qasim Ali, Raviprasad Mummidi, Kiran Tati
-
Publication number: 20120204061
Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the second connection. In such manner, the intermediary computer system holds back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
Type: Application
Filed: April 18, 2012
Publication date: August 9, 2012
Applicant: VMware, Inc.
Inventors: Ole AGESEN, Raviprasad MUMMIDI, Pratap SUBRAHMANYAM
-
Publication number: 20120110236
Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
Type: Application
Filed: October 29, 2010
Publication date: May 3, 2012
Applicant: VMWARE, INC.
Inventors: Qasim ALI, Raviprasad MUMMIDI, Vivek PANDEY, Kiran TATI
-
Patent number: 8171338
Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a high bandwidth connection but is networked to the backup VM through a lower bandwidth connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the low bandwidth connection. In such manner, the intermediary computer system economizes the bandwidth capacity of the low bandwidth connection, holding back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
Type: Grant
Filed: May 18, 2010
Date of Patent: May 1, 2012
Assignee: VMware, Inc.
Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
-
Publication number: 20110289345
Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a high bandwidth connection but is networked to the backup VM through a lower bandwidth connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the low bandwidth connection. In such manner, the intermediary computer system economizes the bandwidth capacity of the low bandwidth connection, holding back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
Type: Application
Filed: May 18, 2010
Publication date: November 24, 2011
Applicant: VMWARE, INC.
Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
-
Publication number: 20100332910
Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present to not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each has at least four consecutive present to not-present PTE transitions, a next present to not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
Type: Application
Filed: June 26, 2009
Publication date: December 30, 2010
Applicant: VMware, Inc.
Inventors: Qasim ALI, Raviprasad MUMMIDI, Kiran TATI