Patents by Inventor Raviprasad Mummidi

Raviprasad Mummidi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11429694
    Abstract: Techniques for managing customer license agreements are described. In one embodiment, a user-specified resource metric of a license model and a user-specified limit of the user-specified resource metric are obtained. A request for permission to launch a new compute resource at a computing device of a provider network is obtained from a service within the provider network. The new compute resource has a property that is an amount of the user-specified metric. A determination is made that a launch of the new compute resource would cause the user-specified limit to be exceeded, and the request to launch the new compute resource is denied.
    Type: Grant
    Filed: August 17, 2018
    Date of Patent: August 30, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Murtaza Chowdhury, Malcom Featonby, Adnan Ijaz, Anup P. Pandya, Anupama Anand, Niti Khadapkar, Ramapulla Reddy Chennuru, Raviprasad Mummidi, Srivasan Ramkumar, Jagruti Patil, Yupeng Zhang
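The admission decision described in the abstract above amounts to a limit check at launch time. The following is a minimal sketch of that check, assuming a simple in-memory model of one license; the class, fields, and function names are hypothetical and not taken from the patent.

```python
# Minimal sketch of the launch-time license check described in the abstract.
# All names (LicenseModel, request_launch, ...) are hypothetical; the patent
# discloses the admission decision, not an API.

from dataclasses import dataclass


@dataclass
class LicenseModel:
    resource_metric: str   # e.g. "vCPUs" or "sockets", chosen by the user
    limit: int             # user-specified cap on that metric
    consumed: int = 0      # amount already in use by running resources


def request_launch(model: LicenseModel, requested_amount: int) -> bool:
    """Grant the launch only if it would not push usage past the limit."""
    if model.consumed + requested_amount > model.limit:
        return False                      # deny: limit would be exceeded
    model.consumed += requested_amount    # account for the new resource
    return True


if __name__ == "__main__":
    model = LicenseModel(resource_metric="vCPUs", limit=8, consumed=6)
    print(request_launch(model, 2))   # True: exactly reaches the limit
    print(request_launch(model, 1))   # False: would exceed the limit, so denied
```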
  • Publication number: 20200057841
    Abstract: Techniques for managing customer license agreements are described. In one embodiment, a user-specified resource metric of a license model and a user-specified limit of the user-specified resource metric are obtained. A request for permission to launch a new compute resource at a computing device of a provider network is obtained from a service within the provider network. The new compute resource has a property that is an amount of the user-specified metric. A determination is made that a launch of the new compute resource would cause the user-specified limit to be exceeded, and the request to launch the new compute resource is denied.
    Type: Application
    Filed: August 17, 2018
    Publication date: February 20, 2020
    Inventors: Murtaza CHOWDHURY, Malcom FEATONBY, Adnan IJAZ, Anup P. PANDYA, Anupama ANAND, Niti KHADAPKAR, Ramapulla Reddy CHENNURU, Raviprasad MUMMIDI, Srivasan RAMKUMAR, Jagruti PATIL, Yupeng ZHANG
  • Patent number: 9116829
    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: August 25, 2015
    Assignee: VMware, Inc.
    Inventors: Qasim Ali, Vivek Pandey, Raviprasad Mummidi, Kiran Tati
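The large-page abstract above (repeated in patent 8364932 and the related publications below) describes periodically counting set access bits in the L1 page tables and using the resulting count to rank candidate large pages after an initial counting-only phase. The sketch below illustrates just that ranking step, assuming the access bits have already been read out into plain Python lists; all names are illustrative, and the "current count value" is simplified to the most recent sample.

```python
# Toy model of the access-bit counting described in the abstract. Each candidate
# large page is represented by the access bits of the small pages it covers
# (512 booleans, standing in for one L1 page table). The real mechanism reads
# the bits from nested or shadow page tables; this sketch only illustrates the
# prioritization idea, and all names are hypothetical.

from typing import Dict, List


def sample_access_bits(access_bits: List[bool]) -> int:
    """One periodic sample: count how many entries have their access bit set."""
    return sum(access_bits)


def prioritize_large_pages(history: Dict[str, List[int]]) -> List[str]:
    """After the first phase, rank candidates by their current count value
    (simplified here to the latest sample), hottest first."""
    return sorted(history, key=lambda page: history[page][-1], reverse=True)


if __name__ == "__main__":
    # One periodic sample for a candidate whose first 200 small pages were touched.
    bits = [True] * 200 + [False] * 312
    print(sample_access_bits(bits))          # 200
    # Sampled counts gathered during the first phase (no large mappings yet).
    history = {
        "candidate_A": [10, 40, 200, 350],   # getting hot -> map large first
        "candidate_B": [5, 6, 7, 8],         # mostly cold
        "candidate_C": [100, 120, 110, 130],
    }
    print(prioritize_large_pages(history))   # ['candidate_A', 'candidate_C', 'candidate_B']
```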
  • Patent number: 8898518
    Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the first connection. In such manner, the intermediary computer system holds back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: November 25, 2014
    Assignee: VMware, Inc.
    Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
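This checkpointing entry and the related ones below (patents 8898508 and 8171338 and their publications) center on an intermediary that forwards updated pages to the backup in an order that holds back pages likely to be dirtied again, whether least recently modified first (this patent) or less frequently modified first (8898508). The sketch below illustrates only the least-recently-modified ordering, assuming a simple dictionary of dirty pages tagged with a last-modified tick; the function name and data layout are assumptions, not details from the patent.

```python
# Sketch of the intermediary's forwarding policy from the abstract: dirty pages
# modified least recently are sent to the backup first, while recently modified
# pages are held back because they are likely to be dirtied again. The dict of
# (data, last-modified tick) pairs is an illustrative stand-in for the real
# checkpoint state.

from typing import Dict, List, Tuple


def pages_to_forward(
    dirty_pages: Dict[int, Tuple[bytes, int]],  # page number -> (data, last-modified tick)
    budget: int,                                # how many pages fit in this transfer
) -> List[int]:
    """Pick up to `budget` dirty pages, least recently modified first."""
    ordered = sorted(dirty_pages, key=lambda page: dirty_pages[page][1])
    return ordered[:budget]


if __name__ == "__main__":
    dirty = {
        0x10: (b"...", 3),   # modified at tick 3: oldest, send first
        0x11: (b"...", 42),  # modified very recently: hold back
        0x12: (b"...", 7),
    }
    print(pages_to_forward(dirty, budget=2))   # [16, 18], i.e. pages 0x10 and 0x12
```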
  • Patent number: 8898509
    Abstract: Embodiments include a checkpointing fault tolerance network architecture that enables a first computer system to be remotely located from a second computer system. An intermediary computer system is situated between the first computer system and the second computer system to manage the transmission of checkpoint information from the first computer system to the second computer system in an efficient manner. The intermediary computer system responds to requests from the second computer system for updated data corresponding to memory pages selected by the second computer system, or memory pages identified through application of policy information defined by the second computer system.
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: November 25, 2014
    Assignee: VMware, Inc.
    Inventor: Raviprasad Mummidi
  • Patent number: 8898508
    Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been less frequently modified by the primary VM and transmits such updated data to the backup VM through the first connection. In such manner, the intermediary computer system holds back updated data corresponding to more frequently modified memory pages, since such memory pages may be more likely to be updated again in the future.
    Type: Grant
    Filed: November 6, 2012
    Date of Patent: November 25, 2014
    Assignee: VMware, Inc.
    Inventor: Raviprasad Mummidi
  • Publication number: 20140317375
    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
    Type: Application
    Filed: July 1, 2014
    Publication date: October 23, 2014
    Inventors: Qasim ALI, Vivek PANDEY, Raviprasad MUMMIDI, Kiran TATI
  • Patent number: 8793428
    Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present-to-not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each have at least four consecutive present-to-not-present PTE transitions, a next present-to-not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
    Type: Grant
    Filed: January 22, 2013
    Date of Patent: July 29, 2014
    Assignee: VMware, Inc.
    Inventors: Qasim Ali, Raviprasad Mummidi, Kiran Tati
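The second embodiment in the abstract above (shared with patent 8359422 and publication 20100332910 below) is a counting heuristic: per address space, track consecutive present-to-not-present PTE transitions for each guest page table, and once at least two tables each reach four, treat the next such transition as the cue to drop that table's trace and remove its shadow page. The sketch below models only that counting logic; the thresholds come from the abstract, while the class name, method signature, and the rule that any other write resets a table's streak are assumptions.

```python
# Sketch of the exit-detection heuristic: per address space, count consecutive
# present -> not-present PTE transitions for each guest page table. Once at
# least two tables each reach four, the next such transition anywhere in the
# address space signals that the table's trace and shadow page can be removed.
# Names are hypothetical; only the thresholds are taken from the abstract.

from collections import defaultdict

TRANSITIONS_PER_TABLE = 4   # consecutive present -> not-present PTE changes
TABLES_REQUIRED = 2         # page tables that must reach that streak


class AddressSpaceMonitor:
    def __init__(self) -> None:
        self.consecutive = defaultdict(int)   # guest page table -> streak length

    def on_pte_write(self, guest_page_table: int, was_present: bool, now_present: bool) -> bool:
        """Record one PTE write; return True if this table's trace and shadow
        page should be removed (the process is presumed to be exiting)."""
        if was_present and not now_present:
            armed = sum(
                1 for streak in self.consecutive.values()
                if streak >= TRANSITIONS_PER_TABLE
            ) >= TABLES_REQUIRED          # enough tables already show sustained teardown
            self.consecutive[guest_page_table] += 1
            return armed
        self.consecutive[guest_page_table] = 0    # other writes break the streak (assumption)
        return False


if __name__ == "__main__":
    mon = AddressSpaceMonitor()
    # Two page tables each see four consecutive present -> not-present writes.
    for table in (0xA, 0xB):
        for _ in range(4):
            mon.on_pte_write(table, was_present=True, now_present=False)
    # The next teardown-style write in the address space fires the heuristic.
    print(mon.on_pte_write(0xC, was_present=True, now_present=False))   # True
```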
  • Patent number: 8612659
    Abstract: Hardware interrupts are routed to one of multiple processors of a virtualized computer system based on priority values assigned to the codes being executed by the processors. Each processor dynamically updates a priority value associated with the code being executed thereby, and when a hardware interrupt is generated, the hardware interrupt is routed to the processor that is executing the code with the lowest priority value to handle the hardware interrupt. As a result, routing of the interrupts can be biased away from processors that are executing high-priority tasks or where a context switch might be computationally expensive.
    Type: Grant
    Filed: December 14, 2010
    Date of Patent: December 17, 2013
    Assignee: VMware, Inc.
    Inventors: Benjamin C. Serebrin, Raviprasad Mummidi
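The routing rule in the abstract above reduces to picking the processor whose currently running code reports the lowest priority value. The sketch below shows just that selection, assuming the per-processor priorities have already been gathered into a list indexed by processor number; the priority scale and tie-breaking behavior are illustrative choices, not details from the patent.

```python
# Sketch of the interrupt-routing rule from the abstract: steer a new hardware
# interrupt to the processor whose current code has the lowest priority value.
# The list-of-ints representation is an illustrative assumption.

from typing import List


def route_interrupt(current_priorities: List[int]) -> int:
    """Return the index of the processor that should handle the interrupt:
    the one executing the lowest-priority code right now."""
    return min(range(len(current_priorities)), key=current_priorities.__getitem__)


if __name__ == "__main__":
    # CPU0 runs a high-priority task, CPU2 is nearly idle.
    priorities = [90, 40, 5, 60]
    print(route_interrupt(priorities))   # 2
```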
  • Patent number: 8364932
    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: January 29, 2013
    Assignee: VMware, Inc.
    Inventors: Qasim Ali, Raviprasad Mummidi, Vivek Pandey, Kiran Tati
  • Patent number: 8359422
    Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present-to-not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each have at least four consecutive present-to-not-present PTE transitions, a next present-to-not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: January 22, 2013
    Assignee: VMware, Inc.
    Inventors: Qasim Ali, Raviprasad Mummidi, Kiran Tati
  • Publication number: 20120204061
    Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a first connection and is networked to the backup VM through a second connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the first connection. In such manner, the intermediary computer system holds back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
    Type: Application
    Filed: April 18, 2012
    Publication date: August 9, 2012
    Applicant: VMware, Inc.
    Inventors: Ole AGESEN, Raviprasad MUMMIDI, Pratap SUBRAHMANYAM
  • Publication number: 20120110236
    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
    Type: Application
    Filed: October 29, 2010
    Publication date: May 3, 2012
    Applicant: VMWARE, INC.
    Inventors: Qasim ALI, Raviprasad MUMMIDI, Vivek PANDEY, Kiran TATI
  • Patent number: 8171338
    Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a high bandwidth connection but is networked to the backup VM through a lower bandwidth connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the low bandwidth connection. In such manner, the intermediary computer system economizes the bandwidth capacity of the low bandwidth connection, holding back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
    Type: Grant
    Filed: May 18, 2010
    Date of Patent: May 1, 2012
    Assignee: VMware, Inc.
    Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
  • Publication number: 20110289345
    Abstract: A checkpointing fault tolerance network architecture enables a backup computer system to be remotely located from a primary computer system. An intermediary computer system is situated between the primary computer system and the backup computer system to manage the transmission of checkpoint information to the backup VM in an efficient manner. The intermediary computer system is networked to the primary VM through a high bandwidth connection but is networked to the backup VM through a lower bandwidth connection. The intermediary computer system identifies updated data corresponding to memory pages that have been least recently modified by the primary VM and transmits such updated data to the backup VM through the low bandwidth connection. In such manner, the intermediary computer system economizes the bandwidth capacity of the low bandwidth connection, holding back updated data corresponding to more recently modified memory pages, since such memory pages may be more likely to be updated again in the future.
    Type: Application
    Filed: May 18, 2010
    Publication date: November 24, 2011
    Applicant: VMWARE, INC.
    Inventors: Ole Agesen, Raviprasad Mummidi, Pratap Subrahmanyam
  • Publication number: 20100332910
    Abstract: A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present-to-not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each have at least four consecutive present-to-not-present PTE transitions, a next present-to-not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Applicant: VMware, Inc.
    Inventors: Qasim ALI, Raviprasad MUMMIDI, Kiran TATI