Patents by Inventor Vinod Chamarty

Vinod Chamarty has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
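Brief, simplified software sketches illustrating several of the mechanisms summarized in these abstracts follow the listing below.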

  • Patent number: 12287688
    Abstract: The processor-based system includes a throttle request accumulate circuit to receive throttle requests, determine the highest, or most aggressive, throttle value among the throttle requests, and generate a throttle control signal configured to throttle activity in a plurality of processing circuits. Throttle requests have throttle values corresponding to a reduction in activity in at least a portion of the plurality of processing circuits and may correspond to a particular number of cycles of reduced activity in a window of cycles. In addition to reducing response time to local events or conditions compared to waiting for a hierarchical response, the throttle request accumulate circuit accumulates throttle requests from all circuits that adjust or throttle activity in the plurality of processing circuits, and ensures that the net effective throttle controlling activity in the processing circuits at any given time is based on the highest throttle value of those accumulated throttle requests.
    Type: Grant
    Filed: April 1, 2024
    Date of Patent: April 29, 2025
    Assignee: QUALCOMM Incorporated
    Inventors: Sagar Koorapati, Vinod Chamarty, Gaurav Sanjeev Kirtane, Pushkin Raj Pari, Nitin Makhija, Alon Naveh
  • Publication number: 20240427392
    Abstract: Hierarchical power estimation and throttling in a processor-based system in an integrated circuit (IC) chip, and related power management and power throttling methods, are disclosed. The IC chip includes a processor as well as integrated supporting processing devices for the processor. The hierarchical power management system controls power consumption of devices in the IC chip to achieve the desired performance in the processor-based system based on activity power events generated from local activity monitoring of devices in the IC chip. The circuit levels in the hierarchical power management system are configured to be time synchronized with each other for the synchronized monitoring and reporting of activity samples and activity power events, and the generation of power limiting management responses to throttle power consumption in the IC chip.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Inventors: Vinod Chamarty, Sagar Koorapati, Sreeram Jayadev, Alon Naveh
  • Publication number: 20240428024
    Abstract: Broadcasting power limiting management responses in a processor-based system in an integrated circuit (IC) chip is disclosed herein. In one aspect, an IC chip comprises a processor-based system that includes a power estimation and limiting (PEL) circuit, a Limit Management Throughput Throttle (LMTT) source circuit, a plurality of activity management (AM) circuits, and an LMTT bus communicatively coupling the LMTT source circuit with each AM circuit of the plurality of AM circuits. The LMTT source circuit receives a power limiting management response from a PEL circuit via a communications network of the processor-based system, and generates an LMTT command based on the power limiting management response. The LMTT source circuit broadcasts the LMTT command to each AM circuit of the plurality of AM circuits via the LMTT bus.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Inventors: Sagar Koorapati, Vinod Chamarty, Alon Naveh
  • Publication number: 20240427397
    Abstract: The processor-based system includes a throttle request accumulate circuit to receive throttle requests, determine the highest, or most aggressive, throttle value among the throttle requests, and generate a throttle control signal configured to throttle activity in a plurality of processing circuits. Throttle requests have throttle values corresponding to a reduction in activity in at least a portion of the plurality of processing circuits and may correspond to a particular number of cycles of reduced activity in a window of cycles. In addition to reducing response time to local events or conditions compared to waiting for a hierarchical response, the throttle request accumulate circuit accumulates throttle requests from all circuits that adjust or throttle activity in the plurality of processing circuits, and ensures that the net effective throttle controlling activity in the processing circuits at any given time is based on the highest throttle value of those accumulated throttle requests.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Inventors: Sagar Koorapati, Vinod Chamarty, Gaurav Sanjeev Kirtane, Pushkin Raj Pari, Nitin Makhija, Alon Naveh
  • Publication number: 20240427400
    Abstract: The processor-based system includes a throttle request accumulate circuit to receive throttle requests, determine the highest, or most aggressive, throttle value among the throttle requests, and generate a throttle control signal configured to throttle activity in a plurality of processing circuits. Throttle requests have throttle values corresponding to a reduction in activity in at least a portion of the plurality of processing circuits and may correspond to a particular number of cycles of reduced activity in a window of cycles. In addition to reducing response time to local events or conditions compared to waiting for a hierarchical response, the throttle request accumulate circuit accumulates throttle requests from all circuits that adjust or throttle activity in the plurality of processing circuits, and ensures that the net effective throttle controlling activity in the processing circuits at any given time is based on the highest throttle value of those accumulated throttle requests.
    Type: Application
    Filed: April 1, 2024
    Publication date: December 26, 2024
    Inventors: Sagar Koorapati, Vinod Chamarty, Gaurav Sanjeev Kirtane, Pushkin Raj Pari, Nitin Makhija, Alon Naveh
  • Publication number: 20240427393
    Abstract: Hierarchical power estimation and throttling in a processor-based system in an integrated circuit (IC) chip, and related power management and power throttling methods, are disclosed. The IC chip includes a processor as well as integrated supporting processing devices for the processor. The hierarchical power management system controls power consumption of devices in the IC chip to achieve the desired performance in the processor-based system based on activity power events generated from local activity monitoring of devices in the IC chip. The circuit levels in the hierarchical power management system are configured to be time synchronized with each other for the synchronized monitoring and reporting of activity samples and activity power events, and the generation of power limiting management responses to throttle power consumption in the IC chip.
    Type: Application
    Filed: April 4, 2024
    Publication date: December 26, 2024
    Inventors: Vinod Chamarty, Sagar Koorapati, Sreeram Jayadev, Alon Naveh
  • Publication number: 20240427411
    Abstract: Hierarchical power estimation and throttling in a processor-based system in an integrated circuit (IC) chip, and related power management and power throttling methods, are disclosed. The IC chip includes a processor as well as integrated supporting processing devices for the processor. The hierarchical power management system controls power consumption of devices in the IC chip to achieve the desired performance in the processor-based system based on activity power events generated from local activity monitoring of devices in the IC chip. To mitigate the delay in the reporting of activity power events of a monitored processing device, which may affect throttling of power consumption for the IC chip, the monitored processing device can also throttle its power consumption locally. This gives the power management system time to receive and process activity power events and throttle power consumption in the IC chip.
    Type: Application
    Filed: April 4, 2024
    Publication date: December 26, 2024
    Inventors: Vinod Chamarty, Sagar Koorapati, Alon Naveh
  • Publication number: 20240427410
    Abstract: Hierarchical power estimation and throttling in a processor-based system in an integrated circuit (IC) chip, and related power management and power throttling methods, are disclosed. The IC chip includes a processor as well as integrated supporting processing devices for the processor. The hierarchical power management system controls power consumption of devices in the IC chip to achieve the desired performance in the processor-based system based on activity power events generated from local activity monitoring of devices in the IC chip. To mitigate the delay in the reporting of activity power events of a monitored processing device, which may affect throttling of power consumption for the IC chip, the monitored processing device can also throttle its power consumption locally. This gives the power management system time to receive and process activity power events and throttle power consumption in the IC chip.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Inventors: Vinod Chamarty, Sagar Koorapati, Alon Naveh
  • Publication number: 20240362103
    Abstract: Detecting and recovering from timeouts in scalable mesh circuits in processor-based devices is disclosed herein. In some exemplary aspects, a processor-based device provides an integrated circuit (IC) that includes an interconnect comprising a scalable mesh network communicatively coupled to a plurality of agents via a respective plurality of bridge devices. The plurality of agents includes a source agent and a target agent that communicate with a source bridge device and a target bridge device, respectively. The target bridge device receives a transaction directed to the target agent from the source agent via the interconnect. Upon receiving the transaction, the target bridge device initiates a timeout counter. If no response to the transaction is received by the target bridge device from the target agent by the time the timeout counter expires, the target bridge device transmits to the source bridge device an indication that no response to the transaction was received.
    Type: Application
    Filed: March 4, 2024
    Publication date: October 31, 2024
    Inventors: Gaurav Sanjeev Kirtane, Vinod Chamarty, Vignesh Deivaraj
  • Publication number: 20240320125
    Abstract: Tracing circuits are disposed within each node circuit in a mesh network to debug problems found during development. The disclosed tracing circuit includes a trace read interface for accessing trace packets stored in a trace buffer at entries that are mapped to system memory addresses. Processing circuits coupled to the trace read interface may access the stored trace packets using memory instructions. The trace packets include trace information generated from packets that are detected on selected ports of a node circuit in a node of the mesh network. A filter circuit compares the transaction units of the detected packets to trace criteria and stores the trace information of the matching packets, in the form of trace packets, in the memory-mapped entries of the trace buffer. The trace packets can include the transaction units of a packet or just the packet header information for more efficient use of the trace buffer.
    Type: Application
    Filed: December 4, 2023
    Publication date: September 26, 2024
    Inventors: Gaurav Sanjeev Kirtane, Vinod Chamarty
  • Publication number: 20240202087
    Abstract: Routing raw debug data using trace infrastructure in processor-based devices is disclosed. In some aspects, a processor-based device comprises a trace interconnect bus, a subsystem circuit comprising a debug transmit circuit, and an input/output (I/O) endpoint circuit. The debug transmit circuit is configured to receive raw debug data from the subsystem circuit, and generate a debug trace packet comprising the raw debug data in lieu of formatted trace data. The debug transmit circuit is also configured to transmit the debug trace packet comprising the raw debug data to the I/O endpoint circuit via the trace interconnect bus during a period of trace interconnect bus inactivity. In this manner, an existing trace infrastructure can be employed to transmit raw debug data without the overhead and monetary cost of the industry-standard, infrastructure-compliant tools otherwise needed to decode conventionally packetized trace data for analysis.
    Type: Application
    Filed: October 31, 2023
    Publication date: June 20, 2024
    Inventors: Sreeram Jayadev, Vinod Chamarty
  • Patent number: 11853223
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating cache resources according to page-level attribute values. In one implementation, the system includes one or more integrated client devices and a cache. Each client device is configured to generate at least one memory request. Each memory request has a respective physical address and a respective page descriptor of a page to which the physical address belongs. The cache is configured to cache memory requests for each of the one or more integrated client devices. The cache comprises a cache memory having multiple ways. The cache is configured to distinguish different memory requests using page-level attributes of respective page descriptors of the memory requests, and to allocate different portions of the cache memory to different respective memory requests.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: December 26, 2023
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Joao Dias
  • Patent number: 11803479
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating cache resources according to page-level attribute values. In one implementation, the system includes one or more integrated client devices and a cache. Each client device is configured to generate at least one memory request. Each memory request has a respective physical address and a respective page descriptor of a page to which the physical address belongs. The cache is configured to cache memory requests for each of the one or more integrated client devices. The cache comprises a cache memory having multiple ways. The cache is configured to distinguish different memory requests using page-level attributes of respective page descriptors of the memory requests, and to allocate different portions of the cache memory to different respective memory requests.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: October 31, 2023
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Joao Dias
  • Patent number: 11620243
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a system-level cache to allocate cache resources by a way-partitioning process. One of the methods includes maintaining a mapping between partitions and priority levels and allocating primary ways to respective enabled partitions in an order corresponding to the respective priority levels assigned to the enabled partitions.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: April 4, 2023
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Xiaoyu Ma, Hongil Yoon, Keith Robert Pflederer, Weiping Liao, Benjamin Dodge, Albert Meixner, Allan Douglas Knies, Manu Gulati, Rahul Jagdish Thakur, Jason Rupert Redgrave
  • Patent number: 11599471
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing prefetch processing to prepare an ambient computing device to operate in a low-power state without waking a memory device. One of the methods includes performing, by an ambient computing device, a prefetch process that populates a cache with prefetched instructions and data required for the ambient computing device to process inputs to the system while in the low-power state; entering the low-power state; and processing, by the ambient computing device in the low-power state, inputs to the system using the prefetched instructions and data stored in the cache.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: March 7, 2023
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Lawrence J. Madar, III
  • Publication number: 20220156198
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating cache resources according to page-level attribute values. In one implementation, the system includes one or more integrated client devices and a cache. Each client device is configured to generate at least one memory request. Each memory request has a respective physical address and a respective page descriptor of a page to which the physical address belongs. The cache is configured to cache memory requests for each of the one or more integrated client devices. The cache comprises a cache memory having multiple ways. The cache is configured to distinguish different memory requests using page-level attributes of respective page descriptors of the memory requests, and to allocate different portions of the cache memory to different respective memory requests.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Vinod Chamarty, Joao Dias
  • Patent number: 11188472
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating cache resources according to page-level attribute values. In one implementation, the system includes one or more integrated client devices and a cache. Each client device is configured to generate at least one memory request. Each memory request has a respective physical address and a respective page descriptor of a page to which the physical address belongs. The cache is configured to cache memory requests for each of the one or more integrated client devices. The cache comprises a cache memory having multiple ways. The cache is configured to distinguish different memory requests using page-level attributes of respective page descriptors of the memory requests, and to allocate different portions of the cache memory to different respective memory requests.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: November 30, 2021
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Joao Dias
  • Publication number: 20210342269
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing prefetch processing to prepare an ambient computing device to operate in a low-power state without waking a memory device. One of the methods includes performing, by an ambient computing device, a prefetch process that populates a cache with prefetched instructions and data required for the ambient computing device to process inputs to the system while in the low-power state; entering the low-power state; and processing, by the ambient computing device in the low-power state, inputs to the system using the prefetched instructions and data stored in the cache.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 4, 2021
    Inventors: Vinod Chamarty, Lawrence J. Madar, III
  • Publication number: 20210255972
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a system-level cache to allocate cache resources by a way-partitioning process. One of the methods includes maintaining a mapping between partitions and priority levels and allocating primary ways to respective enabled partitions in an order corresponding to the respective priority levels assigned to the enabled partitions.
    Type: Application
    Filed: December 31, 2020
    Publication date: August 19, 2021
    Inventors: Vinod Chamarty, Xiaoyu Ma, Hongil Yoon, Keith Robert Pflederer, Weiping Liao, Benjamin Dodge, Albert Meixner, Allan Douglas Knies, Manu Gulati, Rahul Jagdish Thakur, Jason Rupert Redgrave
  • Patent number: 11023379
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing prefetch processing to prepare an ambient computing device to operate in a low-power state without waking a memory device. One of the methods includes performing, by an ambient computing device, a prefetch process that populates a cache with prefetched instructions and data required for the ambient computing device to process inputs to the system while in the low-power state; entering the low-power state; and processing, by the ambient computing device in the low-power state, inputs to the system using the prefetched instructions and data stored in the cache.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: June 1, 2021
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Lawrence J. Madar, III
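
Patent 12287688 and publications 20240427397 and 20240427400 above describe a throttle request accumulate circuit that applies the most aggressive of all outstanding throttle requests. The following is a minimal behavioral sketch of that accumulation rule, not the patented hardware; the class and field names (ThrottleAccumulator, window_cycles, and so on) are invented for illustration.

    class ThrottleAccumulator:
        """Toy model: the net throttle is the highest accumulated request."""

        def __init__(self, window_cycles: int):
            self.window_cycles = window_cycles   # cycles per throttle window
            self.requests = {}                   # requester id -> throttle value

        def submit(self, requester: str, throttled_cycles: int) -> None:
            # A request asks for this many idle cycles out of each window.
            self.requests[requester] = min(throttled_cycles, self.window_cycles)

        def clear(self, requester: str) -> None:
            self.requests.pop(requester, None)

        def net_throttle(self) -> int:
            # The most aggressive (highest) outstanding request wins.
            return max(self.requests.values(), default=0)

    acc = ThrottleAccumulator(window_cycles=32)
    acc.submit("local_thermal_monitor", 8)        # mild local request
    acc.submit("hierarchical_power_manager", 20)  # more aggressive request
    assert acc.net_throttle() == 20               # highest value controls activity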
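
Publications 20240427392 and 20240427393 above describe hierarchical power estimation in which local activity monitors report activity power events over time-synchronized windows and a higher level issues power limiting responses. Below is a deliberately simplified sketch of that reporting-and-limiting loop; the names, the per-window callback, and the watts-per-activity figures are assumptions made for illustration.

    class LeafMonitor:
        """Local activity monitor for one device in the IC chip."""

        def __init__(self, name, watts_per_activity):
            self.name = name
            self.watts_per_activity = watts_per_activity
            self.activity = 0.0                  # utilization in [0, 1]

        def sample(self):
            # One "activity power event" for the current synchronized window.
            return {"source": self.name,
                    "estimated_watts": self.activity * self.watts_per_activity}

    class PowerManager:
        """Upper level of the hierarchy: aggregates events, enforces a budget."""

        def __init__(self, budget_watts, monitors):
            self.budget_watts = budget_watts
            self.monitors = monitors

        def end_of_window(self):
            # Called once per time-synchronized window across all levels.
            events = [m.sample() for m in self.monitors]
            total = sum(e["estimated_watts"] for e in events)
            if total > self.budget_watts:
                # Power limiting management response: scale activity back.
                return {"throttle": True, "scale": self.budget_watts / total}
            return {"throttle": False, "scale": 1.0}

    cpu = LeafMonitor("cpu_cluster", watts_per_activity=5.0)
    gpu = LeafMonitor("gpu", watts_per_activity=8.0)
    pm = PowerManager(budget_watts=9.0, monitors=[cpu, gpu])
    cpu.activity, gpu.activity = 0.9, 0.8
    print(pm.end_of_window())        # over budget -> throttling response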
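
Publication 20240428024 above describes an LMTT source circuit that converts a power limiting management response into a single command broadcast to every activity management (AM) circuit over a shared LMTT bus. The sketch below shows only that fan-out pattern; the command fields and method names are hypothetical.

    class ActivityManager:
        """One AM circuit: applies whatever limit the LMTT command carries."""

        def __init__(self, name):
            self.name = name
            self.current_limit = None

        def on_lmtt_command(self, command):
            self.current_limit = command["throughput_limit"]
            print(f"{self.name}: throughput limited to {self.current_limit}")

    class LmttSource:
        """Receives one response, broadcasts one command to all AM circuits."""

        def __init__(self):
            self.am_circuits = []

        def attach(self, am):
            self.am_circuits.append(am)

        def on_power_limiting_response(self, response):
            command = {"throughput_limit": response["requested_limit"]}
            for am in self.am_circuits:          # broadcast on the shared bus
                am.on_lmtt_command(command)

    source = LmttSource()
    for name in ("dsp", "isp", "npu"):
        source.attach(ActivityManager(name))
    source.on_power_limiting_response({"requested_limit": 0.5})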
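
Publications 20240427411 and 20240427410 above add local throttling at the monitored device to cover the delay while its activity power events travel through the hierarchy. A rough sketch of that idea, with invented names and limits:

    class MonitoredDevice:
        """Throttles itself provisionally until the hierarchy responds."""

        def __init__(self, local_limit):
            self.local_limit = local_limit
            self.throttle = 0.0                  # fraction of activity suppressed

        def on_activity_sample(self, activity):
            if activity > self.local_limit:
                # Immediate local reaction buys time for the slower hierarchy.
                self.throttle = (activity - self.local_limit) / activity
            return {"activity": activity}        # still reported upward

        def on_hierarchical_response(self, mandated_throttle):
            # The system-level decision supersedes the local guess.
            self.throttle = mandated_throttle

    dev = MonitoredDevice(local_limit=0.7)
    dev.on_activity_sample(0.9)          # local throttle applied right away
    print(round(dev.throttle, 2))        # 0.22
    dev.on_hierarchical_response(0.1)    # later, the hierarchy settles the value
    print(dev.throttle)                  # 0.1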
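
Publication 20240362103 above describes a target bridge device that starts a timeout counter for each transaction and, if the target agent never responds, notifies the source bridge. A behavioral sketch of that counter follows, with an arbitrary cycle budget and made-up method names.

    class SourceBridge:
        def notify_no_response(self, txn_id):
            print(f"transaction {txn_id}: target agent did not respond")

    class TargetBridge:
        def __init__(self, timeout_cycles, source_bridge):
            self.timeout_cycles = timeout_cycles
            self.source_bridge = source_bridge
            self.pending = {}                    # transaction id -> cycles left

        def receive_transaction(self, txn_id):
            self.pending[txn_id] = self.timeout_cycles   # start the counter

        def receive_agent_response(self, txn_id):
            self.pending.pop(txn_id, None)               # completed in time

        def tick(self):
            # Advance one cycle; report any transaction whose counter expired.
            for txn_id in list(self.pending):
                self.pending[txn_id] -= 1
                if self.pending[txn_id] == 0:
                    del self.pending[txn_id]
                    self.source_bridge.notify_no_response(txn_id)

    bridge = TargetBridge(timeout_cycles=3, source_bridge=SourceBridge())
    bridge.receive_transaction("rd-42")
    for _ in range(3):
        bridge.tick()                    # third tick expires -> notification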
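
Publication 20240320125 above describes a filter circuit that matches packets seen on selected ports against trace criteria and stores the matches, optionally header-only, in a memory-mapped trace buffer. The sketch below models only the filter-and-store step; the packet fields and the address-range criterion are assumptions.

    class TraceBuffer:
        """Stores trace packets for matching traffic; header-only is optional."""

        def __init__(self, entries, header_only=False):
            self.entries = entries
            self.header_only = header_only
            self.slots = []              # stands in for memory-mapped entries

        @staticmethod
        def matches(packet, criteria):
            # Example criterion: a given opcode within an address range.
            return (packet["opcode"] == criteria["opcode"]
                    and criteria["lo"] <= packet["addr"] < criteria["hi"])

        def observe(self, packet, criteria):
            if not self.matches(packet, criteria):
                return
            record = {"addr": packet["addr"], "opcode": packet["opcode"]}
            if not self.header_only:
                record["payload"] = packet["payload"]    # full transaction units
            if len(self.slots) < self.entries:           # drop when full
                self.slots.append(record)

    buf = TraceBuffer(entries=4, header_only=True)
    crit = {"opcode": "WR", "lo": 0x1000, "hi": 0x2000}
    buf.observe({"opcode": "WR", "addr": 0x1004, "payload": b"\xde\xad"}, crit)
    buf.observe({"opcode": "RD", "addr": 0x1008, "payload": b"\x00"}, crit)
    print(buf.slots)                     # only the matching write, header only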
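
Publication 20240202087 above describes sending raw (unformatted) debug data over the existing trace interconnect during idle periods, wrapped in a trace packet. The following sketch illustrates only that queue-until-idle idea; the packet layout and names are hypothetical.

    class IoEndpoint:
        def receive(self, packet):
            print(f"endpoint got {packet['type']} from {packet['source']}")

    class DebugTransmit:
        """Holds raw debug words until the trace bus has an idle period."""

        def __init__(self, endpoint):
            self.endpoint = endpoint
            self.pending = []

        def submit_raw(self, subsystem, data):
            self.pending.append((subsystem, data))

        def on_bus_idle(self):
            # Called when no formatted trace traffic is using the bus.
            while self.pending:
                subsystem, data = self.pending.pop(0)
                packet = {"type": "RAW_DEBUG",   # in lieu of formatted trace
                          "source": subsystem,
                          "payload": data}
                self.endpoint.receive(packet)

    transmit = DebugTransmit(IoEndpoint())
    transmit.submit_raw("modem_subsystem", b"\x01\x02\x03")
    transmit.on_bus_idle()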
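
Patents 11853223, 11803479, and 11188472 and publication 20220156198 above describe a cache that uses page-level attributes from each request's page descriptor to decide which portion of the cache memory (which ways) the request may allocate into. The sketch below is one possible rendering of that idea for a single cache set; the attribute names, way assignments, and the trivial replacement policy are all invented.

    WAYS = 8
    # Hypothetical policy: page-level attribute -> ways that class may use.
    WAYS_BY_ATTRIBUTE = {
        "streaming": [0, 1],                       # confine streaming data
        "latency_sensitive": [2, 3, 4, 5, 6, 7],
    }

    class CacheSet:
        """One set of a set-associative cache with attribute-based allocation."""

        def __init__(self):
            self.lines = [None] * WAYS             # tag stored per way

        def allocate(self, tag, page_attribute):
            allowed = WAYS_BY_ATTRIBUTE[page_attribute]
            for way in allowed:                    # use a free allowed way
                if self.lines[way] is None:
                    self.lines[way] = tag
                    return way
            victim = allowed[0]                    # else evict (toy policy)
            self.lines[victim] = tag
            return victim

    s = CacheSet()
    print(s.allocate(0xA0, "streaming"))           # -> way 0
    print(s.allocate(0xB0, "latency_sensitive"))   # -> way 2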
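
Patent 11620243 and publication 20210255972 above describe maintaining a mapping between partitions and priority levels and handing out primary ways to the enabled partitions in priority order. A small sketch of that allocation pass, with hypothetical field names and the convention that a lower priority number means higher priority:

    def assign_primary_ways(partitions, total_ways):
        """Give each enabled partition its requested ways, highest priority first.

        partitions: list of dicts with 'name', 'enabled', 'priority' (lower is
        more important), and 'requested_ways'.  Returns {name: [way indices]}.
        """
        assignment = {}
        next_way = 0
        enabled = [p for p in partitions if p["enabled"]]
        for p in sorted(enabled, key=lambda p: p["priority"]):
            take = min(p["requested_ways"], total_ways - next_way)
            assignment[p["name"]] = list(range(next_way, next_way + take))
            next_way += take
        return assignment

    parts = [
        {"name": "gpu",    "enabled": True,  "priority": 1, "requested_ways": 4},
        {"name": "camera", "enabled": True,  "priority": 0, "requested_ways": 3},
        {"name": "audio",  "enabled": False, "priority": 2, "requested_ways": 2},
    ]
    print(assign_primary_ways(parts, total_ways=8))
    # {'camera': [0, 1, 2], 'gpu': [3, 4, 5, 6]} -- audio is disabled, so skipped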
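
Patents 11599471 and 11023379 and publication 20210342269 above describe prefetching, before entering the low-power state, the instructions and data an ambient computing device will need so that inputs can be processed later without waking the memory device. The sketch below captures only that prefetch-then-sleep ordering; the cache and memory objects and their contents are invented.

    class Memory:
        """Stands in for a memory device that must not be woken while asleep."""

        def __init__(self, contents):
            self.contents = contents
            self.awake = True

        def read(self, addr):
            assert self.awake, "reading now would wake the memory device"
            return self.contents[addr]

    class Cache:
        def __init__(self):
            self.store = {}

        def prefetch(self, memory, addresses):
            for addr in addresses:                 # memory is still awake here
                self.store[addr] = memory.read(addr)

        def read(self, addr):
            return self.store[addr]                # hit: memory stays asleep

    mem = Memory({0x10: "wake-word model", 0x20: "sensor handler code"})
    cache = Cache()
    cache.prefetch(mem, [0x10, 0x20])    # prefetch process before sleeping
    mem.awake = False                    # enter the low-power state
    print(cache.read(0x10))              # input handled from the cache alone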