Patents by Inventor VASANTHA KUMAR BANDUR PUTTAPPA

VASANTHA KUMAR BANDUR PUTTAPPA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250104758
    Abstract: Various embodiments include methods for implementing a multi-bank memory refresh command on memory devices. A memory controller may select a number of memory banks to refresh in a refresh cycle, select a first memory bank to refresh, and send the memory device a multi-bank memory refresh command that encodes the number of memory banks and the first memory bank to refresh. The memory device may recognize the multi-bank memory refresh command based on signals received over a number of clock cycles, decode the command to identify the number of memory banks to refresh and the first memory bank to refresh, and then refresh the identified number of memory banks starting from the identified first memory bank.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
    Inventors: Saurabh SETHI, Vasantha Kumar Bandur PUTTAPPA, Amulya Srinivasan MARGASAHAYAM, Madhukar Reddy N, Abhay RAJ
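    The abstract above describes packing two fields, the first bank to refresh and the number of banks, into a single refresh command that the device then decodes. A minimal sketch of one possible encoding, assuming a 16-bank device with purely illustrative field widths (the function names and bit layout are invented for illustration, not taken from the patent):

    ```python
    # Hypothetical multi-bank refresh command word for a 16-bank device:
    # bits [3:0] carry the first bank index, bits [7:4] the bank count
    # (with 0 standing in for "all 16", since 16 does not fit in 4 bits).
    NUM_BANKS = 16

    def encode_refresh_cmd(first_bank: int, bank_count: int) -> int:
        """Pack the first bank and the bank count into one command word."""
        assert 0 <= first_bank < NUM_BANKS and 1 <= bank_count <= NUM_BANKS
        return ((bank_count & 0xF) << 4) | (first_bank & 0xF)

    def decode_refresh_cmd(cmd: int) -> list[int]:
        """Return the bank indices to refresh, wrapping past the last bank."""
        first_bank = cmd & 0xF
        bank_count = (cmd >> 4) & 0xF or NUM_BANKS  # 0 decodes as 16
        return [(first_bank + i) % NUM_BANKS for i in range(bank_count)]
    ```

    Decoding simply walks `bank_count` banks starting at `first_bank`, wrapping modulo the bank count, which matches the abstract's "refresh the identified number of memory banks starting from the identified first memory bank."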
  • Patent number: 12106793
    Abstract: Aspects of the present disclosure are directed to techniques and procedures for reducing memory (e.g., DRAM) access latency (e.g., read latency, write latency) due to memory refreshes. In some aspects, a memory refresh scheduling algorithm can take memory access batching (e.g., read batch, write batch) into account. In some aspects, a refresh scheduling algorithm can schedule more refreshes, or prioritize refreshes, to occur during a write batch, reducing memory read access latency because fewer refreshes are scheduled during memory read access. The techniques can be adapted to reduce write latency.
    Type: Grant
    Filed: December 14, 2022
    Date of Patent: October 1, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Saurabh Sethi, Madhukar Reddy N, Vasantha Kumar Bandur Puttappa, Amulya Srinivasan Margasahayam
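    The scheduling idea in the abstract above, issuing owed refreshes preferentially while a write batch is in flight so read batches see fewer refresh stalls, can be sketched with a toy two-pass planner (the batch model and function name are assumptions for illustration, not the patent's algorithm):

    ```python
    # Batch-aware refresh planning (simplified model): the controller owes
    # refresh_debt refreshes across an interval of read/write batches and
    # prefers to hide them inside write batches.
    def schedule_refreshes(batches: list[str], refresh_debt: int) -> dict[int, int]:
        """Assign owed refreshes to batch slots, preferring write batches."""
        plan = {i: 0 for i in range(len(batches))}
        # First pass: place refreshes in write batches.
        for i, kind in enumerate(batches):
            if refresh_debt == 0:
                break
            if kind == "write":
                plan[i] += 1
                refresh_debt -= 1
        # Second pass: any remainder must fall into read batches.
        for i, kind in enumerate(batches):
            if refresh_debt == 0:
                break
            if kind == "read":
                plan[i] += 1
                refresh_debt -= 1
        return plan
    ```

    With enough write batches in the interval, the second pass never runs and read batches carry zero refreshes, which is the latency win the abstract claims; swapping the roles of "read" and "write" in the two passes gives the write-latency variant mentioned at the end of the abstract.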
  • Publication number: 20240203476
    Abstract: Aspects of the present disclosure are directed to techniques and procedures for reducing memory (e.g., DRAM) access latency (e.g., read latency, write latency) due to memory refreshes. In some aspects, a memory refresh scheduling algorithm can take memory access batching (e.g., read batch, write batch) into account. In some aspects, a refresh scheduling algorithm can schedule more refreshes, or prioritize refreshes, to occur during a write batch, reducing memory read access latency because fewer refreshes are scheduled during memory read access. The techniques can be adapted to reduce write latency.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 20, 2024
    Inventors: Saurabh SETHI, Madhukar Reddy N, Vasantha Kumar Bandur PUTTAPPA, Amulya Srinivasan MARGASAHAYAM
  • Patent number: 10713189
    Abstract: Methods and systems for dynamically controlling buffer size in a portable computing device (“PCD”) are disclosed. A monitor module determines a first use case for defining a first activity level for a plurality of components of the PCD. Based on the first use case, a plurality of buffers are set to a first buffer size. Each of the buffers is associated with one of the plurality of components, and the first buffer size for each buffer is based on the first activity level of the associated component. A second use case for the PCD, different from the first use case, is determined. The second use case defines a second activity level for the plurality of components. At least one of the buffers is set to a second buffer size different from the first buffer size based on the second use case.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: July 14, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Vasantha Kumar Bandur Puttappa, Umesh Rao, Kunal Desai
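    The abstract above maps use cases to per-component activity levels and sizes each component's buffer accordingly. A compact sketch of that mapping, where the use-case names, component names, and byte sizes are all hypothetical placeholders rather than values from the patent:

    ```python
    # Use-case-driven buffer sizing: each use case assigns an activity
    # level per component, and the level determines the buffer size.
    ACTIVITY_TO_SIZE = {"low": 4096, "medium": 16384, "high": 65536}  # bytes

    USE_CASES = {
        "video_playback": {"display": "high", "modem": "low"},
        "voice_call": {"display": "low", "modem": "high"},
    }

    def buffer_sizes(use_case: str) -> dict[str, int]:
        """Map each component to its buffer size for the given use case."""
        levels = USE_CASES[use_case]
        return {comp: ACTIVITY_TO_SIZE[lvl] for comp, lvl in levels.items()}
    ```

    Switching from one use case to another simply re-evaluates the table, so a component whose activity drops (the display during a voice call, say) can shrink its buffer, which is the resizing behaviour the abstract describes.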
  • Publication number: 20190205264
    Abstract: An MMU may read page descriptors (using virtual addresses as an index) in a burst mode from page tables in a system memory. The page descriptors may include intermediate physical addresses (“IPAs”, in stage 1) and corresponding physical addresses (“PAs”, in stage 2). The virtual address, in conjunction with a page table base address register, is used to index page descriptors in main memory. The MMU may identify a first group of contiguous IPAs beginning at a base IPA and a second group of contiguous IPAs beginning at an offset from the base IPA. The first and second groups may be separated by at least one IPA not contiguous with either the first or second group. The MMU may read a first PA from the page tables that corresponds to the base IPA. The MMU may store an entry in a buffer that includes the PA and a first linearity tag.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Inventors: FELIX VARGHESE, ZHENBIAO MA, MARTIN JACOB, KUMAR SAKET, VASANTHA KUMAR BANDUR PUTTAPPA, SUJEET KUMAR
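    A key step in the abstract above is spotting runs of contiguous IPAs within a burst of page descriptors, so that one buffered entry (base PA plus a linearity tag) can stand in for the whole run. A simplified sketch of that grouping step, assuming 4 KiB pages and a sorted burst (the function name and return shape are invented for illustration):

    ```python
    # Group a burst of IPAs into contiguous runs. Each run could then be
    # cached as one entry: a base translation plus a "linearity" length,
    # letting neighbouring pages' PAs be derived by offset rather than
    # walked individually.
    PAGE = 4096

    def find_linear_runs(ipas: list[int]) -> list[tuple[int, int]]:
        """Return (base IPA, page count) for each contiguous group."""
        runs = []
        start = prev = ipas[0]
        for ipa in ipas[1:]:
            if ipa == prev + PAGE:
                prev = ipa  # extend the current run
            else:
                runs.append((start, (prev - start) // PAGE + 1))
                start = prev = ipa  # a gap starts a new run
        runs.append((start, (prev - start) // PAGE + 1))
        return runs
    ```

    The gap between runs corresponds to the abstract's "at least one IPA not contiguous with either the first or second group": it forces a second buffered entry at an offset from the base rather than extending the first run.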
  • Publication number: 20180373652
    Abstract: Methods and systems for dynamically controlling buffer size in a portable computing device (“PCD”) are disclosed. A monitor module determines a first use case for defining a first activity level for a plurality of components of the PCD. Based on the first use case, a plurality of buffers are set to a first buffer size. Each of the buffers is associated with one of the plurality of components, and the first buffer size for each buffer is based on the first activity level of the associated component. A second use case for the PCD, different from the first use case, is determined. The second use case defines a second activity level for the plurality of components. At least one of the buffers is set to a second buffer size different from the first buffer size based on the second use case.
    Type: Application
    Filed: June 27, 2017
    Publication date: December 27, 2018
    Inventors: Vasantha Kumar Bandur Puttappa, Umesh Rao, Kunal Desai
  • Publication number: 20180336141
    Abstract: Systems, methods, and computer programs are disclosed for reducing worst-case memory latency in a system comprising a system memory and a cache memory. One embodiment is a method comprising receiving a translation request from a memory client for a translation of a virtual address to a physical address. If the translation is not available at a translation buffer unit and a translation control unit in a system memory management unit, the translation control unit initiates a page table walk. During the page table walk, the method determines a page table entry for an intermediate physical address in the system memory. In response to determining the page table entry for the intermediate physical address, the method preloads data at the intermediate physical address to the system cache before the page table walk for a final physical address corresponding to the intermediate physical address is completed.
    Type: Application
    Filed: May 16, 2017
    Publication date: November 22, 2018
    Inventors: KUNAL DESAI, FELIX VARGHESE, VASANTHA KUMAR BANDUR PUTTAPPA
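    The latency-hiding trick in the abstract above is to issue a preload for the intermediate physical address as soon as the stage-1 walk yields it, overlapping the data fetch with the remaining stage-2 walk. A minimal sketch of that ordering, where the dictionary-based walkers and the `cache` interface are stand-ins invented for illustration:

    ```python
    # Two-stage translation with early preload: the IPA produced by the
    # stage-1 walk is prefetched into the system cache before the stage-2
    # walk resolves the final PA, so the data arrives in parallel with
    # the rest of the walk.
    def translate_with_preload(va: int, stage1: dict, stage2: dict, cache: set) -> int:
        """Resolve VA -> PA via an IPA, preloading the IPA's data early."""
        ipa = stage1[va]   # stage-1 page table walk: VA -> IPA
        cache.add(ipa)     # preload issued before stage 2 completes
        pa = stage2[ipa]   # stage-2 page table walk: IPA -> PA
        return pa
    ```

    In real hardware the two steps after the stage-1 result would proceed concurrently; the point of the sketch is only the ordering, the prefetch is launched before, not after, the final physical address is known.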