Patents by Inventor Ann Ling

Ann Ling is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134795
    Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 25, 2024
    Inventors: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
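
The entry above (and its granted counterpart below) describes demoting a whole memory region from an exclusive to a shared coherency state with a single "superprobe". The following is a minimal C sketch of that idea; the structures and names (region_entry_t, demote_superprobe, LINES_PER_REGION) are illustrative assumptions, not taken from the patent.

```c
/* Hypothetical sketch of a region-granular demote superprobe.
   All identifiers are invented for illustration. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE } coherency_state_t;

#define LINES_PER_REGION 4  /* assumed region size for illustration */

/* One cache-directory entry covering a multi-line memory region. */
typedef struct {
    unsigned region_base;                          /* region address */
    coherency_state_t region_state;                /* region-level state */
    coherency_state_t line_state[LINES_PER_REGION];
} region_entry_t;

/* A "superprobe" acts on every line of the region at once, rather
   than issuing one probe per cache line. */
static void demote_superprobe(region_entry_t *e) {
    for (int i = 0; i < LINES_PER_REGION; i++)
        if (e->line_state[i] == EXCLUSIVE)
            e->line_state[i] = SHARED;             /* E -> S demotion */
    e->region_state = SHARED;
}

/* On an access to a region held exclusively elsewhere, demote it so
   the requester can share the data without a full invalidation. */
static void on_memory_access(region_entry_t *e, unsigned addr) {
    if (e->region_state == EXCLUSIVE)
        demote_superprobe(e);
    printf("access 0x%x: region now %s\n", addr,
           e->region_state == SHARED ? "SHARED" : "EXCLUSIVE");
}

int main(void) {
    region_entry_t e = { 0x1000, EXCLUSIVE,
                         { EXCLUSIVE, EXCLUSIVE, INVALID, EXCLUSIVE } };
    on_memory_access(&e, 0x1040);
    return 0;
}
```

The point of the region-level probe is amortization: one directory entry and one probe cover every line in the region, rather than one probe per cache line.
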
  • Patent number: 11954033
    Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: April 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
  • Patent number: 11803470
    Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
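
The eviction scheme in the entry above pairs a prioritized channel with early release of the cache subsystem. The C sketch below models that flow under stated assumptions: the two-queue channel model, the message layout, and all names are invented for illustration.

```c
/* Hedged sketch of eviction over a prioritized channel with early
   release of the cache subsystem; structures are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned addr; unsigned data; } evict_msg_t;

#define QCAP 8
typedef struct { evict_msg_t q[QCAP]; int n; } channel_t;

static channel_t priority_ch, normal_ch;   /* two virtual channels */
static bool cache_busy;

static void send_msg(channel_t *ch, evict_msg_t m) {
    if (ch->n < QCAP) ch->q[ch->n++] = m;
}

/* Evicted dirty data rides the prioritized channel toward the home
   node, and the cache subsystem is released immediately ("early
   release") instead of stalling until the home node acknowledges. */
static void evict_line(unsigned addr, unsigned data) {
    cache_busy = true;
    send_msg(&priority_ch, (evict_msg_t){ addr, data });
    cache_busy = false;                    /* free to serve new requests */
}

/* Home node drains the prioritized channel before ordinary traffic. */
static void home_node_service(void) {
    channel_t *ch = priority_ch.n ? &priority_ch : &normal_ch;
    while (ch->n) {
        evict_msg_t m = ch->q[--ch->n];
        printf("home node wrote 0x%x at 0x%x\n", m.data, m.addr);
    }
}

int main(void) {
    evict_line(0x2000, 0xdeadbeef);
    printf("cache busy after evict? %s\n", cache_busy ? "yes" : "no");
    home_node_service();
    return 0;
}
```
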
  • Publication number: 20220100661
    Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
    Type: Application
    Filed: December 22, 2020
    Publication date: March 31, 2022
    Inventors: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
  • Patent number: 11281280
    Abstract: Systems, apparatuses, and methods for reducing chiplet interrupt latency are disclosed. A system includes one or more processing nodes, one or more memory devices, a communication fabric coupled to the processing unit(s) and memory device(s) via link interfaces, and a power management unit. The power management unit manages the power states of the various components and the link interfaces of the system. If the power management unit detects a request to wake up a given component, and the link interface to the given component is powered down, then the power management unit sends an out-of-band signal to wake up the given component in parallel with powering up the link interface. Also, when multiple link interfaces need to be powered up, the power management unit powers up the multiple link interfaces in an order which complies with voltage regulator load-step requirements while minimizing the latency of pending operations.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: March 22, 2022
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Benjamin Tsien, Michael J. Tresidder, Ivan Yanfeng Wang, Kevin M. Lepak, Ann Ling, Richard M. Born, John P. Petry, Bryan P. Broussard, Eric Christopher Morton
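
The abstract above combines two ideas: waking a component out-of-band in parallel with its link power-up, and ordering multiple link power-ups to respect voltage regulator load-step limits while favoring links with pending work. A rough C model follows; the priority rule (most pending operations first) and all identifiers are assumptions for illustration, not the patented sequencing algorithm.

```c
/* Illustrative-only model of the power-up ordering described above:
   links blocking the most pending work come up first, one regulator
   load step at a time. Names and the load-step rule are assumed. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    int powered;        /* 1 if the link interface is up */
    int pending_ops;    /* operations waiting on this link */
} link_if_t;

/* Sort so that links blocking the most pending work power up first. */
static int by_pending(const void *a, const void *b) {
    return ((const link_if_t *)b)->pending_ops -
           ((const link_if_t *)a)->pending_ops;
}

/* Wake a component: fire the out-of-band wake signal in parallel with
   the link power-up instead of serializing the two steps. */
static void wake_component(link_if_t *l) {
    printf("OOB wake -> %s (in parallel with link power-up)\n", l->name);
    l->powered = 1;
}

/* Power links up one at a time so the voltage regulator never sees
   more than one load step per interval (assumed load-step limit). */
static void power_up_links(link_if_t *links, int n) {
    qsort(links, n, sizeof *links, by_pending);
    for (int i = 0; i < n; i++) {
        if (!links[i].powered) wake_component(&links[i]);
        printf("load step %d: %s up\n", i + 1, links[i].name);
    }
}

int main(void) {
    link_if_t links[] = {
        { "gpu_link", 0, 2 }, { "io_link", 0, 7 }, { "mem_link", 0, 4 },
    };
    power_up_links(links, 3);
    return 0;
}
```
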
  • Publication number: 20200387208
    Abstract: Systems, apparatuses, and methods for reducing chiplet interrupt latency are disclosed. A system includes one or more processing nodes, one or more memory devices, a communication fabric coupled to the processing unit(s) and memory device(s) via link interfaces, and a power management unit. The power management unit manages the power states of the various components and the link interfaces of the system. If the power management unit detects a request to wake up a given component, and the link interface to the given component is powered down, then the power management unit sends an out-of-band signal to wake up the given component in parallel with powering up the link interface. Also, when multiple link interfaces need to be powered up, the power management unit powers up the multiple link interfaces in an order which complies with voltage regulator load-step requirements while minimizing the latency of pending operations.
    Type: Application
    Filed: May 18, 2020
    Publication date: December 10, 2020
    Inventors: Benjamin Tsien, Michael J. Tresidder, Ivan Yanfeng Wang, Kevin M. Lepak, Ann Ling, Richard M. Born, John P. Petry, Bryan P. Broussard, Eric Christopher Morton
  • Patent number: 10656696
    Abstract: Systems, apparatuses, and methods for reducing chiplet interrupt latency are disclosed. A system includes one or more processing nodes, one or more memory devices, a communication fabric coupled to the processing unit(s) and memory device(s) via link interfaces, and a power management unit. The power management unit manages the power states of the various components and the link interfaces of the system. If the power management unit detects a request to wake up a given component, and the link interface to the given component is powered down, then the power management unit sends an out-of-band signal to wake up the given component in parallel with powering up the link interface. Also, when multiple link interfaces need to be powered up, the power management unit powers up the multiple link interfaces in an order which complies with voltage regulator load-step requirements while minimizing the latency of pending operations.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: May 19, 2020
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Benjamin Tsien, Michael J. Tresidder, Ivan Yanfeng Wang, Kevin M. Lepak, Ann Ling, Richard M. Born, John P. Petry, Bryan P. Broussard, Eric Christopher Morton
  • Patent number: 10503648
    Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: December 10, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte, Ganesh Balakrishnan, Ann Ling, Ravindra N. Bhargava
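
The prediction mechanism above sends a speculative probe toward the node believed to cache the requested data, in parallel with the normal path to memory. Below is a toy C predictor illustrating the idea; the table layout, hash, and training policy are invented for this sketch and are not specified by the patent.

```c
/* Sketch of the speculative-probe idea: a predictor guesses which
   node caches the requested line and probes it in parallel with the
   normal path to memory. The table and hash are illustrative. */
#include <stdio.h>

#define TABLE 16

/* Last node observed to own each address bucket (a toy predictor). */
static int owner_table[TABLE];

static int predict_owner(unsigned addr) {
    return owner_table[(addr >> 6) % TABLE];   /* -1 = no prediction */
}

static void train(unsigned addr, int owner_node) {
    owner_table[(addr >> 6) % TABLE] = owner_node;
}

/* On a request from `src`, probe the predicted owner speculatively
   while the request continues toward memory. */
static void handle_request(int src, unsigned addr) {
    int guess = predict_owner(addr);
    if (guess >= 0 && guess != src)
        printf("node %d: speculative probe -> node %d for 0x%x\n",
               src, guess, addr);
    printf("node %d: request 0x%x proceeds to memory\n", src, addr);
}

int main(void) {
    for (int i = 0; i < TABLE; i++) owner_table[i] = -1;
    train(0x3000, 2);              /* node 2 last cached this line */
    handle_request(0, 0x3000);     /* prediction hit: probe node 2 */
    handle_request(0, 0x8000);     /* no prediction: memory path only */
    return 0;
}
```

A correct prediction returns the data directly from the owning node's cache, skipping the memory round trip; a misprediction costs only the wasted probe, so the scheme trades a little fabric bandwidth for latency.
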
  • Publication number: 20190179758
    Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
    Type: Application
    Filed: December 12, 2017
    Publication date: June 13, 2019
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte, Ganesh Balakrishnan, Ann Ling, Ravindra N. Bhargava