Patents by Inventor Greggory D. Donley

Greggory D. Donley has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11831565
    Abstract: Systems, apparatuses, and methods for performing efficient data transfer in a computing system are disclosed. A computing system includes multiple fabric interfaces in clients and a fabric. A packet transmitter in the fabric interface includes multiple queues, each for storing packets of a respective type, and a corresponding address history cache for each queue. Queue arbiters in the packet transmitter select candidate packets for issue and determine when address history caches on both sides of the link store the upper portion of the address. The packet transmitter sends a source identifier and a pointer for the request in the packet on the link, rather than the entire request address, which reduces the size of the packet. The queue arbiters support out-of-order issue from the queues. The queue arbiters detect conflicts with out-of-order issue and adjust the outbound packets and fields stored in the queue entries to avoid data corruption.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: November 28, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greggory D. Donley, Bryan P. Broussard
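The abstract of patent 11831565 above centers on compressing request addresses over the link with mirrored address history caches. The Python sketch below models only that compression idea, under assumed values (an 8-entry cache, a 16-bit lower address field, and the packet field names); the source identifier, out-of-order issue, and conflict handling described in the abstract are not modeled.

```python
# A minimal model of link-address compression with mirrored address history
# caches, loosely based on the abstract of patent 11831565. The cache size,
# the 16-bit address split, and the packet field names are illustrative
# choices, not values taken from the patent.

UPPER_SHIFT = 16          # lower 16 bits travel in every packet (assumed split)

class AddressHistoryCache:
    """Small fully-associative cache of upper address bits, kept in lockstep
    on both sides of the link."""
    def __init__(self, num_entries=8):
        self.entries = [None] * num_entries

    def lookup(self, upper):
        # Return the index holding 'upper', or None on a miss.
        for i, val in enumerate(self.entries):
            if val == upper:
                return i
        return None

    def allocate(self, upper):
        # Simple hashed allocation; the chosen index travels in the full
        # packet so the receiver updates the same entry.
        idx = hash(upper) % len(self.entries)
        self.entries[idx] = upper
        return idx

def transmit(addr, tx_cache):
    """Build a compressed or full packet for one request address."""
    upper, lower = addr >> UPPER_SHIFT, addr & ((1 << UPPER_SHIFT) - 1)
    idx = tx_cache.lookup(upper)
    if idx is not None:
        # Hit: send only a pointer into the history cache plus the low bits.
        return {"kind": "compressed", "ptr": idx, "lower": lower}
    idx = tx_cache.allocate(upper)
    return {"kind": "full", "ptr": idx, "upper": upper, "lower": lower}

def receive(pkt, rx_cache):
    """Reconstruct the full address at the far side of the link."""
    if pkt["kind"] == "full":
        rx_cache.entries[pkt["ptr"]] = pkt["upper"]
        upper = pkt["upper"]
    else:
        upper = rx_cache.entries[pkt["ptr"]]
    return (upper << UPPER_SHIFT) | pkt["lower"]

tx, rx = AddressHistoryCache(), AddressHistoryCache()
for addr in (0x1234_0010, 0x1234_0040, 0x1234_0080):   # same upper bits
    pkt = transmit(addr, tx)
    assert receive(pkt, rx) == addr
    print(pkt["kind"], hex(addr))
```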
  • Publication number: 20220103489
    Abstract: Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination. The source generates partition enable signals for the partitions of payload data. The source negates an enable signal for a particular partition when the source determines the packet type indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. Routing components of the fabric disable clock signals to storage elements assigned to store the particular partition. The destination inserts the particular data pattern for the particular partition in the payload data.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Mark A. Silla, Ashwin Chincholi
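Publication 20220103489 above describes negating per-partition enable signals when a partition holds a known data pattern, letting the fabric gate clocks to the storage for those partitions and letting the destination reinsert the pattern. The sketch below is a small software model of that flow; the 4-byte partition size, the all-zero pattern, and the function names are assumptions, and clock gating is represented simply by not storing or forwarding disabled partitions.

```python
# A small software model of the partition-enable scheme described in the
# abstract of publication 20220103489 / patent 11223575. The 4-byte partition
# size and the all-zero pattern are assumptions for illustration only.

PARTITION_BYTES = 4
KNOWN_PATTERN = bytes(PARTITION_BYTES)       # all zeros

def source_pack(payload: bytes):
    """Split the payload into partitions and build (enables, data), where
    partitions equal to the known pattern are dropped from the data."""
    parts = [payload[i:i + PARTITION_BYTES]
             for i in range(0, len(payload), PARTITION_BYTES)]
    enables = [p != KNOWN_PATTERN for p in parts]
    data = [p for p in parts if p != KNOWN_PATTERN]
    return enables, data

def fabric_forward(enables, data):
    """Routing model: storage (and, in hardware, the clock to it) is only
    used for enabled partitions, so disabled ones cost nothing here."""
    stored = list(data)                       # only enabled partitions stored
    return enables, stored

def destination_unpack(enables, data):
    """Reinsert the known pattern wherever the enable bit was negated."""
    out, it = [], iter(data)
    for en in enables:
        out.append(next(it) if en else KNOWN_PATTERN)
    return b"".join(out)

payload = b"\x11\x22\x33\x44" + bytes(8) + b"\xaa\xbb\xcc\xdd"
enables, data = source_pack(payload)
assert destination_unpack(*fabric_forward(enables, data)) == payload
print("enables:", enables, "partitions on the wire:", len(data))
```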
  • Patent number: 11223575
    Abstract: Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination. The source generates partition enable signals for the partitions of payload data. The source negates an enable signal for a particular partition when the source determines the packet type indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. Routing components of the fabric disable clock signals to storage elements assigned to store the particular partition. The destination inserts the particular data pattern for the particular partition in the payload data.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: January 11, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Mark A. Silla, Ashwin Chincholi
  • Publication number: 20210333860
    Abstract: Systems, apparatuses, and methods for performing efficient power management for a multi-node computing system are disclosed. A computing system includes multiple nodes. When power down negotiation is distributed, negotiation for system-wide power down occurs within a lower level of a node hierarchy prior to negotiation for power down occurring at a higher level of the node hierarchy. When power down negotiation is centralized, a given node combines a state of its clients with indications received on its downstream link and sends an indication on an upstream link based on the combining. Only a root node sends power down requests.
    Type: Application
    Filed: July 2, 2021
    Publication date: October 28, 2021
    Inventors: Benjamin Tsien, Greggory D. Donley, Bryan P. Broussard
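The abstract of publication 20210333860 above outlines two power-down negotiation styles; the sketch below models only the centralized one, where each node combines its clients' state with downstream indications and only the root issues the power-down request. The node names, two-level hierarchy, and boolean "idle" state are illustrative assumptions.

```python
# A toy model of the centralized power-down negotiation summarized in the
# abstract of publication 20210333860 / patent 11054887: each node ANDs its
# own clients' idle state with the indications received on its downstream
# links and forwards the result upstream; only the root issues the request.

class Node:
    def __init__(self, name, clients_idle, children=()):
        self.name = name
        self.clients_idle = clients_idle      # all local clients ready to power down?
        self.children = list(children)

    def upstream_indication(self):
        """Combine local client state with downstream indications."""
        return self.clients_idle and all(c.upstream_indication()
                                         for c in self.children)

def root_negotiate(root):
    """Only the root node decides; it broadcasts the power-down request."""
    if root.upstream_indication():
        return f"{root.name}: send power-down request to all nodes"
    return f"{root.name}: keep system powered"

leaves = [Node("leaf0", True), Node("leaf1", True)]
mid = Node("mid", clients_idle=True, children=leaves)
root = Node("root", clients_idle=True, children=[mid])
print(root_negotiate(root))        # all idle -> power-down request
leaves[1].clients_idle = False
print(root_negotiate(root))        # one busy client -> keep powered
```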
  • Patent number: 11054887
    Abstract: Systems, apparatuses, and methods for performing efficient power management for a multi-node computing system are disclosed. A computing system includes multiple nodes. When power down negotiation is distributed, negotiation for system-wide power down occurs within a lower level of a node hierarchy prior to negotiation for power down occurring at a higher level of the node hierarchy. When power down negotiation is centralized, a given node combines a state of its clients with indications received on its downstream link and sends an indication on an upstream link based on the combining. Only a root node sends power down requests.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: July 6, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin Tsien, Greggory D. Donley, Bryan P. Broussard
  • Publication number: 20210194827
    Abstract: Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination. The source generates partition enable signals for the partitions of payload data. The source negates an enable signal for a particular partition when the source determines the packet type indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. Routing components of the fabric disable clock signals to storage elements assigned to store the particular partition. The destination inserts the particular data pattern for the particular partition in the payload data.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Mark A. Silla, Ashwin Chincholi
  • Publication number: 20200112525
    Abstract: Systems, apparatuses, and methods for performing efficient data transfer in a computing system are disclosed. A computing system includes multiple fabric interfaces in clients and a fabric. A packet transmitter in the fabric interface includes multiple queues, each for storing packets of a respective type, and a corresponding address history cache for each queue. Queue arbiters in the packet transmitter select candidate packets for issue and determine when address history caches on both sides of the link store the upper portion of the address. The packet transmitter sends a source identifier and a pointer for the request in the packet on the link, rather than the entire request address, which reduces the size of the packet. The queue arbiters support out-of-order issue from the queues. The queue arbiters detect conflicts with out-of-order issue and adjust the outbound packets and fields stored in the queue entries to avoid data corruption.
    Type: Application
    Filed: October 3, 2018
    Publication date: April 9, 2020
    Inventors: Greggory D. Donley, Bryan P. Broussard
  • Patent number: 10601723
    Abstract: A computing system uses a memory for storing data, one or more clients for generating network traffic and a communication fabric with network switches. The network switches include centralized storage structures, rather than separate input and output storage structures. The network switches store particular metadata corresponding to received packets in a single, centralized collapsing queue where the age of the packets corresponds to a queue entry position. The payload data of the packets are stored in a separate memory, so the relatively large amount of data is not shifted during the lifetime of the packet in the network switch. The network switches select sparse queue entries in the collapsible queue, deallocate the selected queue entries, and shift remaining allocated queue entries toward a first end of the queue with a delay proportional to the radix of the network switches.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: March 24, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alan Dodson Smith, Vydhyanathan Kalyanasundharam, Bryan P. Broussard, Greggory D. Donley, Chintan S. Patel
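Patent 10601723 above describes a centralized collapsing queue that keeps packet metadata in age order while payload data stays in a separate memory. The sketch below models that separation and the collapse of sparse deallocations toward the head of the queue; the radix-proportional shift delay and the switch's port arbitration are not modeled, and the field names are assumptions.

```python
# A simplified model of the collapsing metadata queue from the abstract of
# patent 10601723: per-packet metadata sits in an age-ordered queue (older
# entries nearer the head), payload data lives in a separate buffer so it is
# never shifted, and deallocating sparse entries collapses the rest toward
# the head. Field names and the payload store are illustrative.

class CollapsingSwitchQueue:
    def __init__(self):
        self.meta = []          # age-ordered metadata; index 0 is the oldest
        self.payloads = {}      # packet id -> payload, stored once, never moved

    def enqueue(self, pkt_id, dest, payload):
        self.payloads[pkt_id] = payload
        self.meta.append({"id": pkt_id, "dest": dest})   # youngest at the tail

    def dequeue(self, ready_dests):
        """Remove the entries whose destination port is ready (possibly from
        sparse positions) and collapse the remaining entries."""
        picked = [e for e in self.meta if e["dest"] in ready_dests]
        # Collapse: the survivors keep their original relative order, which
        # preserves age ordering without moving any payload data.
        self.meta = [e for e in self.meta if e["dest"] not in ready_dests]
        return [(e["id"], self.payloads.pop(e["id"])) for e in picked]

q = CollapsingSwitchQueue()
for i, dest in enumerate([0, 1, 0, 2]):
    q.enqueue(pkt_id=i, dest=dest, payload=f"payload-{i}")
print(q.dequeue(ready_dests={0}))   # packets 0 and 2 leave; 1 and 3 collapse
print([e["id"] for e in q.meta])    # [1, 3]
```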
  • Publication number: 20200059437
    Abstract: Systems, apparatuses, and methods for performing efficient data transfer in a computing system are disclosed. A computing system includes multiple fabric interfaces in clients and a fabric. A packet transmitter in the fabric interface includes multiple queues, each for storing packets of a respective type. The packet transmitter includes multiple queue arbiters, each for selecting a candidate packet from a respective one of the multiple queues. The packet transmitter includes a buffer for storing a link packet, which includes data storage space for storing multiple candidate packets. The packet transmitter selects qualified candidate packets from the multiple queues and inserts these candidate packets into the link packet. The packing arbiter avoids data collisions at the receiver by taking into consideration mismatches between the rate of inserting candidate packets into the link packet and the rate of creating available data storage space in a receiving queue in the receiver.
    Type: Application
    Filed: August 20, 2018
    Publication date: February 20, 2020
    Inventors: Greggory D. Donley, Bryan P. Broussard, Vydhyanathan Kalyanasundharam
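Publication 20200059437 above describes a packing arbiter that fills a link packet with candidate packets from several queues while accounting for how quickly the receiver frees storage. The sketch below approximates that rate matching with a simple per-type credit counter; the queue names, link-packet capacity, and credit scheme are assumptions rather than details from the filing.

```python
# A sketch of the packing step described in the abstract of publication
# 20200059437: per-queue arbiters nominate candidate packets, and a packing
# arbiter fills a link packet only up to the space the receiver is known to
# have freed (tracked here as simple credits per packet type).

from collections import deque

LINK_PACKET_SLOTS = 4   # how many candidate packets fit in one link packet

class PacketTransmitter:
    def __init__(self, credits):
        # One queue per packet type, e.g. requests and responses.
        self.queues = {t: deque() for t in credits}
        self.credits = dict(credits)     # free receiver-queue entries per type

    def enqueue(self, pkt_type, payload):
        self.queues[pkt_type].append(payload)

    def build_link_packet(self):
        """Pack candidates while there is both link-packet space and a free
        receiver entry for that packet type; this is what prevents the
        receiver-side collision the abstract mentions."""
        link_packet = []
        for pkt_type, queue in self.queues.items():
            while (queue and self.credits[pkt_type] > 0
                   and len(link_packet) < LINK_PACKET_SLOTS):
                link_packet.append((pkt_type, queue.popleft()))
                self.credits[pkt_type] -= 1
        return link_packet

    def return_credit(self, pkt_type):
        # Called when the receiver drains an entry of this type.
        self.credits[pkt_type] += 1

tx = PacketTransmitter(credits={"req": 2, "rsp": 1})
for i in range(3):
    tx.enqueue("req", f"req-{i}")
tx.enqueue("rsp", "rsp-0")
print(tx.build_link_packet())   # 2 requests + 1 response; req-2 waits for credit
```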
  • Publication number: 20190319891
    Abstract: A computing system uses a memory for storing data, one or more clients for generating network traffic and a communication fabric with network switches. The network switches include centralized storage structures, rather than separate input and output storage structures. The network switches store particular metadata corresponding to received packets in a single, centralized collapsing queue where the age of the packets corresponds to a queue entry position. The payload data of the packets are stored in a separate memory, so the relatively large amount of data is not shifted during the lifetime of the packet in the network switch. The network switches select sparse queue entries in the collapsible queue, deallocate the selected queue entries, and shift remaining allocated queue entries toward a first end of the queue with a delay proportional to the radix of the network switches.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Alan Dodson Smith, Vydhyanathan Kalyanasundharam, Bryan P. Broussard, Greggory D. Donley, Chintan S. Patel
  • Publication number: 20190204899
    Abstract: Systems, apparatuses, and methods for performing efficient power management for a multi-node computing system are disclosed. A computing system includes multiple nodes. When power down negotiation is distributed, negotiation for system-wide power down occurs within a lower level of a node hierarchy prior to negotiation for power down occurring at a higher level of the node hierarchy. When power down negotiation is centralized, a given node combines a state of its clients with indications received on its downstream link and sends an indication on an upstream link based on the combining. Only a root node sends power down requests.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Inventors: Benjamin Tsien, Greggory D. Donley, Bryan P. Broussard
  • Patent number: 10305509
    Abstract: Systems, apparatuses, and methods for compression of frequent data values across narrow links are disclosed. In one embodiment, a system includes a processor, a link interface unit, and a communication link. The link interface unit is configured to receive a data stream for transmission over the communication link, wherein the data stream is generated by the processor. The link interface unit determines if blocks of data of a first size from the data stream match one or more first data patterns and the link interface unit determines if blocks of data of a second size from the data stream match one or more second data patterns. The link interface unit sends, over the communication link, only blocks of data which do not match the first or second data patterns.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: May 28, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Bryan P. Broussard
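Patent 10305509 above compresses traffic on a narrow link by recognizing frequent data patterns at two block sizes and sending only the blocks that do not match. The sketch below shows one way such an encoder and decoder could fit together; the 8-byte/4-byte block sizes and the pattern tables are illustrative, not the patterns defined in the patent.

```python
# A rough model of the frequent-value link compression in the abstract of
# patent 10305509: blocks of a first size and a second size are compared
# against small sets of frequent data patterns, and only non-matching blocks
# travel over the link; matching blocks are replaced by a short pattern code.

FIRST_SIZE, SECOND_SIZE = 8, 4
FIRST_PATTERNS = [bytes(8), b"\xff" * 8]            # e.g. all-0 / all-1 words
SECOND_PATTERNS = [bytes(4), b"\xff" * 4]

def encode(stream: bytes):
    """Return a list of link symbols: ('raw', data) or ('pat', size, index)."""
    out = []
    for i in range(0, len(stream), FIRST_SIZE):
        block = stream[i:i + FIRST_SIZE]
        if block in FIRST_PATTERNS:
            out.append(("pat", FIRST_SIZE, FIRST_PATTERNS.index(block)))
            continue
        for j in range(0, len(block), SECOND_SIZE):
            sub = block[j:j + SECOND_SIZE]
            if sub in SECOND_PATTERNS:
                out.append(("pat", SECOND_SIZE, SECOND_PATTERNS.index(sub)))
            else:
                out.append(("raw", sub))            # only this crosses the link
    return out

def decode(symbols):
    tables = {FIRST_SIZE: FIRST_PATTERNS, SECOND_SIZE: SECOND_PATTERNS}
    return b"".join(tables[s[1]][s[2]] if s[0] == "pat" else s[1]
                    for s in symbols)

data = bytes(8) + b"\x12\x34\x56\x78" + bytes(4)
symbols = encode(data)
assert decode(symbols) == data
print(symbols)
```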
  • Patent number: 10223280
    Abstract: A system including a gasket communicatively coupled between a unified northbridge (UNB) having a cache coherent interconnect (CCI) interface and a processor having an Advanced eXtensible Interface (AXI) coherency extension (ACE). The gasket is configured to translate requests from the processor that include ACE commands into equivalent CCI commands, wherein each request from the processor maps onto a specific CCI request type. The gasket is further configured to translate ACE tags into CCI tags. The gasket is further configured to translate CCI encoded probes from a system resource interface (SRI) into equivalent ACE snoop transactions. The gasket is further configured to translate the memory map to inter-operate with a UNB/coherent HyperTransport (cHT) environment. The gasket is further configured to receive a barrier transaction that is used to provide ordering for transactions.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: March 5, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Vydhyanathan Kalyanasundharam, Yaniv Adiri, Philip Ng, Maggie Chan, Vincent Cueva, Anthony Asaro, Jimshed Mirza, Greggory D. Donley, Bryan Broussard, Benjamin Tsien
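Patent 10223280 above describes a gasket that translates ACE requests into CCI requests (and CCI probes back into ACE snoops) so an ACE processor can sit on a UNB/cHT fabric. The sketch below only illustrates the shape of such a translation layer with lookup tables; the specific command-to-command mappings and the tag re-encoding shown are assumptions for illustration, not the mapping defined by the patent.

```python
# A minimal sketch of the request/probe translation role of the gasket from
# the abstract of patent 10223280. The command names and pairings in these
# tables are examples of the kinds of mappings a gasket maintains, not the
# actual mapping defined by the patent.

ACE_TO_CCI_REQUEST = {
    "ReadShared":   "RdBlkS",      # shared read (assumed pairing)
    "ReadUnique":   "RdBlkM",      # read with intent to modify (assumed)
    "WriteBack":    "VicBlk",      # victim writeback (assumed)
    "CleanUnique":  "ChgToDirty",  # upgrade to writable (assumed)
}

CCI_PROBE_TO_ACE_SNOOP = {
    "PrbInvalidate": "SnpCleanInvalid",   # assumed pairing
    "PrbDowngrade":  "SnpClean",          # assumed pairing
}

class Gasket:
    """Sits between the ACE master (processor) and the CCI-facing UNB."""

    def ace_request_to_cci(self, ace_cmd, ace_tag):
        cci_cmd = ACE_TO_CCI_REQUEST[ace_cmd]     # each ACE request maps onto
        cci_tag = ("gasket", ace_tag)             # one CCI request type; tags
        return cci_cmd, cci_tag                   # are re-encoded as well

    def cci_probe_to_ace(self, cci_probe):
        return CCI_PROBE_TO_ACE_SNOOP[cci_probe]

g = Gasket()
print(g.ace_request_to_cci("ReadUnique", ace_tag=7))
print(g.cci_probe_to_ace("PrbInvalidate"))
```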
  • Publication number: 20180307619
    Abstract: A system including a gasket communicatively coupled between a unified northbridge (UNB) having a cache coherent interconnect (CCI) interface and a processor having an Advanced eXtensible Interface (AXI) coherency extension (ACE). The gasket is configured to translate requests from the processor that include ACE commands into equivalent CCI commands, wherein each request from the processor maps onto a specific CCI request type. The gasket is further configured to translate ACE tags into CCI tags. The gasket is further configured to translate CCI encoded probes from a system resource interface (SRI) into equivalent ACE snoop transactions. The gasket is further configured to translate the memory map to inter-operate with a UNB/coherent HyperTransport (cHT) environment. The gasket is further configured to receive a barrier transaction that is used to provide ordering for transactions.
    Type: Application
    Filed: July 2, 2018
    Publication date: October 25, 2018
    Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Vydhyanathan Kalyanasundharam, Philip Ng, Maggie Chan, Vincent Cueva, Anthony Asaro, Jimshed Mirza, Greggory D. Donley, Bryan Broussard, Benjamin Tsien, Yaniv Adiri
  • Patent number: 10042576
    Abstract: A method and apparatus of compressing addresses for transmission includes receiving a transaction at a first device from a source that includes a memory address request for a memory location on a second device. It is determined if a first part of the memory address is stored in a cache located on the first device. If the first part of the memory address is not stored in the cache, the first part of the memory address is stored in the cache and the entire memory address and information relating to the storage of the first part is transmitted to the second device. If the first part of the memory address is stored in the cache, only a second part of the memory address and an identifier that indicates a way in which the first part of the address is stored in the cache is transmitted to the second device.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: August 7, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Greggory D. Donley
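Patent 10042576 above compresses transmitted addresses by caching the first part of the address on the first device and, on a hit, sending only the second part plus an identifier of the way in which the first part is stored. The sketch below models that handshake with a tiny set-associative cache mirrored on the second device; the 2-way/4-set geometry, the 20-bit split, and the replacement policy are assumptions.

```python
# A sketch of the address-compression handshake in the abstract of patent
# 10042576: the first device caches the upper part of the address and, on a
# hit, transmits only the lower part together with the way in which the
# upper part is stored. The cache geometry and field split are illustrative.

SETS, WAYS = 4, 2
LOWER_BITS = 20

def split(addr):
    return addr >> LOWER_BITS, addr & ((1 << LOWER_BITS) - 1)

class FirstDevice:
    def __init__(self):
        self.cache = [[None] * WAYS for _ in range(SETS)]   # [set][way] -> upper
        self.victim = 0                                      # trivial replacement

    def send(self, addr):
        upper, lower = split(addr)
        index = upper % SETS
        ways = self.cache[index]
        if upper in ways:
            # Hit: second part of the address plus the way identifier only.
            return {"full": False, "lower": lower, "set": index,
                    "way": ways.index(upper)}
        way = self.victim = (self.victim + 1) % WAYS
        ways[way] = upper
        # Miss: entire address plus where the first part was stored.
        return {"full": True, "upper": upper, "lower": lower,
                "set": index, "way": way}

class SecondDevice:
    def __init__(self):
        self.cache = [[None] * WAYS for _ in range(SETS)]

    def receive(self, msg):
        if msg["full"]:
            self.cache[msg["set"]][msg["way"]] = msg["upper"]
        upper = self.cache[msg["set"]][msg["way"]]
        return (upper << LOWER_BITS) | msg["lower"]

dev1, dev2 = FirstDevice(), SecondDevice()
for addr in (0xABCD_01000, 0xABCD_02000, 0x1234_00010):
    msg = dev1.send(addr)
    assert dev2.receive(msg) == addr
    print("full" if msg["full"] else "compressed", hex(addr))
```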
  • Patent number: 10025721
    Abstract: The present invention provides for page table access and dirty bit management in hardware via a new atomic test[0] and OR and Mask. The present invention also provides for a gasket that enables ACE to CCI translations. This gasket further provides request translation between ACE and CCI, deadlock avoidance for victim and probe collisions, ARM barrier handling, and power management interactions. The present invention also provides a solution for ARM victim/probe collision handling, which can otherwise deadlock the unified northbridge. These solutions include a dedicated writeback virtual channel, probes for IO requests using a 4-hop protocol, and a WrBack Reorder Ability in MCT where victims update older requests with data as they pass the requests.
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: July 17, 2018
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Vydhyanathan Kalyanasundharam, Philip Ng, Maggie Chan, Vincent Cueva, Anthony Asaro, Jimshed Mirza, Greggory D. Donley, Bryan Broussard, Benjamin Tsien, Yaniv Adiri
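Among the items in the abstract of patent 10025721 above is hardware access/dirty-bit management via an atomic "test[0] and OR and Mask" operation. The sketch below is a loose software model of such an operation on a page-table entry: the bit assignments, the reading of bit 0 as the valid bit, and the lock standing in for hardware atomicity are all assumptions, and the gasket and deadlock-avoidance aspects of the abstract are not modeled.

```python
# A loose software model of the hardware access/dirty-bit update mentioned in
# the abstract of patent 10025721: an atomic operation that tests bit 0 of a
# page-table entry (assumed here to be the valid bit) and, only if it is set,
# ORs in the accessed and dirty flags under a mask. The bit positions and the
# lock standing in for a hardware atomic are illustrative.

import threading

PTE_VALID    = 1 << 0
PTE_ACCESSED = 1 << 5
PTE_DIRTY    = 1 << 6

class PageTable:
    def __init__(self, entries):
        self.entries = list(entries)
        self._lock = threading.Lock()     # models the atomicity of the RMW

    def atomic_test0_or(self, index, or_bits, mask):
        """Atomically: if bit 0 of the entry is set, OR in (or_bits & mask).
        Returns the entry value observed before the update."""
        with self._lock:
            old = self.entries[index]
            if old & PTE_VALID:
                self.entries[index] = old | (or_bits & mask)
            return old

pt = PageTable([PTE_VALID, 0x0])          # entry 0 valid, entry 1 not present
pt.atomic_test0_or(0, PTE_ACCESSED | PTE_DIRTY, mask=PTE_ACCESSED | PTE_DIRTY)
pt.atomic_test0_or(1, PTE_ACCESSED, mask=PTE_ACCESSED)
print([hex(e) for e in pt.entries])       # entry 0 gains A/D bits; entry 1 unchanged
```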
  • Publication number: 20180167082
    Abstract: Systems, apparatuses, and methods for compression of frequent data values across narrow links are disclosed. In one embodiment, a system includes a processor, a link interface unit, and a communication link. The link interface unit is configured to receive a data stream for transmission over the communication link, wherein the data stream is generated by the processor. The link interface unit determines if blocks of data of a first size from the data stream match one or more first data patterns and the link interface unit determines if blocks of data of a second size from the data stream match one or more second data patterns. The link interface unit sends, over the communication link, only blocks of data which do not match the first or second data patterns.
    Type: Application
    Filed: October 16, 2017
    Publication date: June 14, 2018
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Bryan P. Broussard
  • Publication number: 20180052631
    Abstract: A method and apparatus of compressing addresses for transmission includes receiving a transaction at a first device from a source that includes a memory address request for a memory location on a second device. It is determined if a first part of the memory address is stored in a cache located on the first device. If the first part of the memory address is not stored in the cache, the first part of the memory address is stored in the cache and the entire memory address and information relating to the storage of the first part is transmitted to the second device. If the first part of the memory address is stored in the cache, only a second part of the memory address and an identifier that indicates a way in which the first part of the address is stored in the cache is transmitted to the second device.
    Type: Application
    Filed: November 8, 2016
    Publication date: February 22, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Greggory D. Donley
  • Patent number: 9793919
    Abstract: Systems, apparatuses, and methods for compression of frequent data values across narrow links are disclosed. In one embodiment, a system includes a processor, a link interface unit, and a communication link. The link interface unit is configured to receive a data stream for transmission over the communication link, wherein the data stream is generated by the processor. The link interface unit determines if blocks of data of a first size from the data stream match one or more first data patterns and the link interface unit determines if blocks of data of a second size from the data stream match one or more second data patterns. The link interface unit sends, over the communication link, only blocks of data which do not match the first or second data patterns.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: October 17, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greggory D. Donley, Vydhyanathan Kalyanasundharam, Bryan P. Broussard
  • Patent number: 9658960
    Abstract: A method and apparatus for controlling affinity of subcaches is disclosed. When a core compute unit evicts a line of victim data, a prioritized search for space allocation on available subcaches is executed, in order of proximity between the subcache and the compute unit. The victim data may be injected into an adjacent subcache if space is available. Otherwise, a line may be evicted from the adjacent subcache to make room for the victim data or the victim data may be sent to the next closest subcache. To retrieve data, a core compute unit sends a Tag Lookup Request message directly to the nearest subcache as well as to a cache controller, which controls routing of messages to all of the subcaches. A Tag Lookup Response message is sent back to the cache controller to indicate if the requested data is located in the nearest sub-cache.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: May 23, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Greggory D. Donley
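Patent 9658960 above controls subcache affinity by searching subcaches in order of proximity to the evicting compute unit and by sending tag lookups to the nearest subcache first. The sketch below models that prioritized victim placement and a simplified lookup path; the proximity table, capacities, and fallback eviction rule are illustrative assumptions.

```python
# A simplified model of the victim-placement policy in the abstract of patent
# 9658960: when a compute unit evicts a line, the subcaches are searched in
# order of proximity to that unit and the victim is injected into the first
# one with free space. The proximity table, capacities, and the fallback rule
# used when every subcache is full are illustrative.

SUBCACHE_CAPACITY = 2

# For each compute unit, subcache indices ordered nearest-first (assumed layout).
PROXIMITY = {
    0: [0, 1, 2, 3],
    1: [1, 0, 3, 2],
}

class SubcacheArray:
    def __init__(self, count):
        self.lines = [set() for _ in range(count)]   # addresses held per subcache

    def place_victim(self, compute_unit, addr):
        """Prioritized search over subcaches, nearest to the unit first."""
        order = PROXIMITY[compute_unit]
        for sc in order:
            if len(self.lines[sc]) < SUBCACHE_CAPACITY:
                self.lines[sc].add(addr)
                return sc
        # No free space anywhere: evict a line from the nearest subcache to
        # make room (one of the fallbacks the abstract describes).
        nearest = order[0]
        self.lines[nearest].pop()
        self.lines[nearest].add(addr)
        return nearest

    def lookup(self, compute_unit, addr):
        """Model of the Tag Lookup Request sent to the nearest subcache; the
        cache controller would forward to the others on a miss."""
        nearest = PROXIMITY[compute_unit][0]
        if addr in self.lines[nearest]:
            return nearest
        return next((sc for sc, lines in enumerate(self.lines) if addr in lines), None)

caches = SubcacheArray(count=4)
for addr in range(5):
    print("victim", addr, "-> subcache", caches.place_victim(0, addr))
print("lookup 3 from unit 1 -> subcache", caches.lookup(1, 3))
```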