Patents by Inventor Narendra Kamat

Narendra Kamat has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
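Several of the entries below cover the same invention twice, once as a pending application and once as the granted patent. Brief, hypothetical code sketches of the recurring techniques (network-on-chip traffic throttling, fabric-assisted direct memory access, display prefetching across memory performance-state changes, token-based buffer management, and switch load balancing) follow the listing.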

  • Patent number: 12113712
    Abstract: Dynamic network-on-chip traffic throttling, including: determining, by a detector module of a network-on-chip, that a predefined condition is met; sending, by the detector module, a signal to a mediator module of the network-on-chip; and sending, in response to the signal, by the mediator module, an indication to a plurality of agents to implement a traffic throttling policy.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: October 8, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Narendra Kamat, Vydhyanathan Kalyanasundharam, Gregg Donley, Ashwin Chincholi
  • Patent number: 12066960
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffers, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of the second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20240111442
    Abstract: Systems, apparatuses, and methods for prefetching data by a display controller. From time to time, a performance-state change of a memory is performed. During such changes, a memory clock frequency is changed for a memory subsystem storing frame buffer(s) used to drive pixels to a display device. During the performance-state change, memory accesses may be temporarily blocked. To sustain a desired quality of service for the display, a display controller is configured to prefetch data in advance of the performance-state change. In order to ensure the display controller has sufficient memory bandwidth to accomplish the prefetch, bandwidth reduction circuitry in clients of the system is configured to temporarily reduce the memory bandwidth of the corresponding clients.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Inventors: Ashish Jain, Shang Yang, Jun Lei, Gia Tung Phan, Oswin Hall, Benjamin Tsien, Narendra Kamat
  • Patent number: 11876718
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Narendra Kamat
  • Publication number: 20230036142
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Application
    Filed: October 6, 2022
    Publication date: February 2, 2023
    Inventor: Narendra Kamat
  • Patent number: 11470004
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: October 11, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Narendra Kamat
  • Publication number: 20220197840
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffers, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of the second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Application
    Filed: December 27, 2021
    Publication date: June 23, 2022
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20220103481
    Abstract: Dynamic network-on-chip traffic throttling, including: determining, by a detector module of a network-on-chip, that a predefined condition is met; sending, by the detector module, a signal to a mediator module of the network-on-chip; and sending, in response to the signal, by the mediator module, an indication to a plurality of agents to implement a traffic throttling policy.
    Type: Application
    Filed: September 25, 2020
    Publication date: March 31, 2022
    Inventors: Narendra Kamat, Vydhyanathan Kalyanasundharam, Gregg Donley, Ashwin Chincholi
  • Publication number: 20220094641
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Application
    Filed: September 22, 2020
    Publication date: March 24, 2022
    Inventor: Narendra Kamat
  • Patent number: 11210248
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffers, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of the second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: December 28, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20210191890
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffers, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of the second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20200259747
    Abstract: Systems, apparatuses, and methods for dynamic buffer management in multi-client token flow control routers are disclosed. A system includes one or more processing units, a memory, and a communication fabric with a plurality of routers coupled to the processing unit(s) and the memory. A router servicing multiple active clients allocates a first number of tokens to each active client. The first number of tokens is less than a second number of tokens needed to saturate the bandwidth of each client to the router. The router also allocates a third number of tokens to a free pool, with tokens from the free pool being dynamically allocated to different clients. The third number of tokens is equal to the difference between the second number of tokens and the first number of tokens. An advantage of this approach is that it reduces the amount of buffer space needed at the router.
    Type: Application
    Filed: February 19, 2020
    Publication date: August 13, 2020
    Inventors: Alan Dodson Smith, Chintan S. Patel, Eric Christopher Morton, Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Patent number: 10608943
    Abstract: Systems, apparatuses, and methods for dynamic buffer management in multi-client token flow control routers are disclosed. A system includes one or more processing units, a memory, and a communication fabric with a plurality of routers coupled to the processing unit(s) and the memory. A router servicing multiple active clients allocates a first number of tokens to each active client. The first number of tokens is less than a second number of tokens needed to saturate the bandwidth of each client to the router. The router also allocates a third number of tokens to a free pool, with tokens from the free pool being dynamically allocated to different clients. The third number of tokens is equal to the difference between the second number of tokens and the first number of tokens. An advantage of this approach is that it reduces the amount of buffer space needed at the router.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: March 31, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alan Dodson Smith, Chintan S. Patel, Eric Christopher Morton, Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Patent number: 10491524
    Abstract: A system for implementing load balancing schemes includes one or more processing units, a memory, and a communication fabric with a plurality of switches coupled to the processing unit(s) and the memory. A switch of the fabric determines a first number of streams on a first input port that are targeting a first output port. The switch also determines a second number of requestors, from all input ports, that are targeting the first output port. Then, the switch calculates a throttle factor for the first input port by dividing the first number of streams by the second number of requestors. The switch applies the throttle factor to regulate bandwidth on the first input port for requestors targeting the first output port. The switch also calculates throttle factors for the other input ports and applies those factors when regulating bandwidth on those ports.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: November 26, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alan Dodson Smith, Chintan S. Patel, Eric Christopher Morton, Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20190140954
    Abstract: A system for implementing load balancing schemes includes one or more processing units, a memory, and a communication fabric with a plurality of switches coupled to the processing unit(s) and the memory. A switch of the fabric determines a first number of streams on a first input port that are targeting a first output port. The switch also determines a second number of requestors, from all input ports, that are targeting the first output port. Then, the switch calculates a throttle factor for the first input port by dividing the first number of streams by the second number of requestors. The switch applies the throttle factor to regulate bandwidth on the first input port for requestors targeting the first output port. The switch also calculates throttle factors for the other input ports and applies those factors when regulating bandwidth on those ports.
    Type: Application
    Filed: November 7, 2017
    Publication date: May 9, 2019
    Inventors: Alan Dodson Smith, Chintan S. Patel, Eric Christopher Morton, Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Publication number: 20190132249
    Abstract: Systems, apparatuses, and methods for dynamic buffer management in multi-client token flow control routers are disclosed. A system includes one or more processing units, a memory, and a communication fabric with a plurality of routers coupled to the processing unit(s) and the memory. A router servicing multiple active clients allocates a first number of tokens to each active client. The first number of tokens is less than a second number of tokens needed to saturate the bandwidth of each client to the router. The router also allocates a third number of tokens to a free pool, with tokens from the free pool being dynamically allocated to different clients. The third number of tokens is equal to the difference between the second number of tokens and the first number of tokens. An advantage of this approach is that it reduces the amount of buffer space needed at the router.
    Type: Application
    Filed: October 27, 2017
    Publication date: May 2, 2019
    Inventors: Alan Dodson Smith, Chintan S. Patel, Eric Christopher Morton, Vydhyanathan Kalyanasundharam, Narendra Kamat
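Illustrative sketches

The dynamic throttling flow of patent 12113712 (and publication 20220103481) is a detector-to-mediator-to-agents chain: the detector notices that a predefined condition is met, signals the mediator, and the mediator tells every agent to apply a traffic throttling policy. The sketch below is a minimal, hypothetical software analogue; the congestion metric, the threshold value, and the policy (simply flagging each agent as throttled) are assumptions for illustration, not details taken from the patent.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical model of the detector -> mediator -> agents signalling chain.
struct Agent {
    bool throttled = false;
    void apply_throttle_policy() { throttled = true; }   // e.g., cap request injection rate
};

struct Mediator {
    std::vector<Agent*> agents;
    void notify_all() {                                   // "indication to a plurality of agents"
        for (Agent* a : agents) a->apply_throttle_policy();
    }
};

struct Detector {
    Mediator* mediator;
    int threshold;                                        // assumed predefined condition: congestion level
    void sample(int congestion) {
        if (congestion >= threshold) mediator->notify_all();
    }
};

int main() {
    Agent a0, a1;
    Mediator m{{&a0, &a1}};
    Detector d{&m, /*threshold=*/8};
    d.sample(10);                                         // condition met -> all agents throttle
    std::printf("agents throttled: %d %d\n", a0.throttled, a1.throttled);
    return 0;
}
```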
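Patents 12066960 and 11210248 (with publications 20220197840 and 20210191890) describe a system DMA engine handing a copy descriptor to the data fabric, which fans the transfer out to pairs of agent devices. The sketch below collapses each first-agent read and second-agent write into a per-slice memcpy; the slicing scheme and the CopyRequest layout are assumptions made for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Copy descriptor the SDMA device would hand to the data fabric: physical
// source/destination addresses (modelled here as pointers) and a size.
struct CopyRequest {
    const uint8_t* src;
    uint8_t*       dst;
    std::size_t    size;
};

// The fabric splits the transfer into slices, one per agent pair: the "first"
// agent reads its slice of the source and the "second" agent writes it to the
// destination. In this sketch both steps collapse into one memcpy per slice.
void fabric_dispatch(const CopyRequest& req, std::size_t agent_pairs) {
    const std::size_t slice = (req.size + agent_pairs - 1) / agent_pairs;
    for (std::size_t i = 0; i < agent_pairs; ++i) {
        const std::size_t off = i * slice;
        if (off >= req.size) break;
        const std::size_t len = std::min(slice, req.size - off);
        std::memcpy(req.dst + off, req.src + off, len);
    }
}

int main() {
    std::vector<uint8_t> src(1024, 0xAB), dst(1024, 0);
    fabric_dispatch({src.data(), dst.data(), src.size()}, /*agent_pairs=*/4);
    std::printf("copied ok: %d\n", src == dst);
    return 0;
}
```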
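Publication 20240111442 sequences a memory performance-state change around the display: other clients temporarily reduce their memory bandwidth, the display controller prefetches enough frame-buffer data to ride out the window in which accesses are blocked, and only then does the memory clock switch proceed. The sketch below shows that ordering only; the interfaces, the buffer sizing, and the clock-switch placeholder are assumptions.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical sequencing of a memory p-state change around display prefetch.
struct Client {
    bool limited = false;
    void limit_bandwidth(bool on) { limited = on; }       // "bandwidth reduction circuitry"
};

struct DisplayController {
    std::size_t buffered_bytes = 0;
    void prefetch(std::size_t bytes) { buffered_bytes += bytes; }  // fill local buffer before blackout
};

void change_memory_pstate(std::vector<Client>& clients, DisplayController& dc,
                          std::size_t blackout_bytes) {
    for (auto& c : clients) c.limit_bandwidth(true);       // free up bandwidth for the display
    dc.prefetch(blackout_bytes);                           // enough pixels to cover the blocked window
    /* ... memory clock frequency switch would happen here, while accesses are blocked ... */
    for (auto& c : clients) c.limit_bandwidth(false);      // restore normal operation
}

int main() {
    std::vector<Client> clients(3);
    DisplayController dc;
    change_memory_pstate(clients, dc, /*blackout_bytes=*/64 * 1024);
    std::printf("prefetched %zu bytes\n", dc.buffered_bytes);
    return 0;
}
```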
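Patents 11876718 and 11470004 (with publications 20230036142 and 20220094641) have each agent throttle itself based on its own count of outstanding transactions. The sketch below assumes that "graded" throttling means progressively longer injection delays at higher occupancies; the specific thresholds and delay values are invented for illustration.

```cpp
#include <cstdio>

// Hypothetical graded throttler: the agent tracks its own outstanding
// transactions and picks an injection delay from the grade that count falls in.
struct GradedThrottler {
    int outstanding = 0;

    void on_issue()  { ++outstanding; }
    void on_retire() { --outstanding; }

    // Illustrative grades: heavier occupancy -> longer stall between requests.
    int injection_delay_cycles() const {
        if (outstanding >= 48) return 8;
        if (outstanding >= 32) return 4;
        if (outstanding >= 16) return 1;   // first threshold met -> mild throttling
        return 0;                          // below threshold -> no throttling
    }
};

int main() {
    GradedThrottler t;
    for (int i = 0; i < 40; ++i) t.on_issue();
    std::printf("delay at %d outstanding: %d cycles\n",
                t.outstanding, t.injection_delay_cycles());
    return 0;
}
```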
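Patent 10608943 and publications 20200259747 and 20190132249 reserve each active client fewer tokens than it needs to saturate its link and place the difference in a shared free pool that is handed out on demand. The sketch below assumes credit-style accounting in which a client draws from its own reserve first and from the pool second; the token counts are illustrative.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical credit accounting for the dedicated-plus-free-pool scheme.
struct TokenRouter {
    std::vector<int> dedicated_left;   // per-client reserve ("first number") still unused
    int free_pool;                     // shared tokens ("third number" = saturating - dedicated)
    int dedicated_per_client;

    TokenRouter(int clients, int dedicated, int saturating)
        : dedicated_left(clients, dedicated),
          free_pool(saturating - dedicated),
          dedicated_per_client(dedicated) {}

    // A client needs one token per in-flight request; draw reserve first, pool second.
    bool acquire(int client) {
        if (dedicated_left[client] > 0) { --dedicated_left[client]; return true; }
        if (free_pool > 0)              { --free_pool;              return true; }
        return false;                   // back-pressure: buffer space exhausted
    }

    // Credit return: top up the client's reserve before returning tokens to the pool.
    void release(int client) {
        if (dedicated_left[client] < dedicated_per_client) ++dedicated_left[client];
        else                                               ++free_pool;
    }
};

int main() {
    TokenRouter r(/*clients=*/4, /*dedicated=*/2, /*saturating=*/6);
    int granted = 0;
    for (int i = 0; i < 8; ++i) granted += r.acquire(/*client=*/0);
    std::printf("client 0 granted %d of 8 requests\n", granted);  // 2 reserved + 4 pooled = 6
    return 0;
}
```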
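Patent 10491524 and publication 20190140954 compute, per input port, a throttle factor equal to that port's share of all requestors contending for a given output port, and use it to regulate the port's bandwidth toward that output. The sketch below assumes the per-port stream counts are already tabulated; the table shape and the fallback value are assumptions.

```cpp
#include <cstdio>
#include <vector>

// streams[in][out]: number of active streams on input port `in` that target
// output port `out`. The throttle factor for an input port is its share of
// all traffic contending for that output.
double throttle_factor(const std::vector<std::vector<int>>& streams,
                       int input_port, int output_port) {
    int local = streams[input_port][output_port];
    int total = 0;
    for (const auto& row : streams) total += row[output_port];
    return total > 0 ? static_cast<double>(local) / total : 1.0;  // no contention -> full share
}

int main() {
    // Two input ports contending for output port 0 with 1 and 3 streams.
    std::vector<std::vector<int>> streams = {{1, 0}, {3, 2}};
    std::printf("input 0 gets %.2f of output 0's bandwidth\n",
                throttle_factor(streams, 0, 0));   // 1 / (1 + 3) = 0.25
    return 0;
}
```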