Patents Assigned to Advanced Micro Devices, Inc.
  • Publication number: 20240111674
    Abstract: Data reuse cache techniques are described. In one example, a load instruction is generated by an execution unit of a processor unit. In response to the load instruction, data is loaded by a load-store unit for processing by the execution unit and is also stored to a data reuse cache communicatively coupled between the load-store unit and the execution unit. Upon receipt of a subsequent load instruction for the data from the execution unit, the data is loaded from the data reuse cache for processing by the execution unit.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Alok Garg, Neil N Marketkar, Matthew T. Sobel
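A minimal Python sketch of the data reuse cache idea described in this abstract. The `LoadStoreUnit` and `DataReuseCache` classes, the capacity, and the eviction policy are illustrative assumptions, not details from the patent: the first load goes through the load-store unit and also fills a small cache near the execution unit, and a repeated load for the same address is served from that cache.

```python
# Toy model of a data reuse cache sitting between a load-store unit and an
# execution unit. Class and method names are illustrative, not from the patent.

class LoadStoreUnit:
    def __init__(self, memory):
        self.memory = memory

    def load(self, addr):
        # Normal path: fetch the value from backing memory.
        return self.memory[addr]

class DataReuseCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = {}                      # addr -> value

    def lookup(self, addr):
        return self.lines.get(addr)

    def fill(self, addr, value):
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))   # evict oldest entry
        self.lines[addr] = value

def execute_load(addr, lsu, reuse_cache):
    value = reuse_cache.lookup(addr)
    if value is not None:
        return value, "reuse-cache hit"
    value = lsu.load(addr)                   # go through the load-store unit
    reuse_cache.fill(addr, value)            # also store for later reuse
    return value, "loaded via load-store unit"

memory = {0x10: 42, 0x20: 7}
lsu, drc = LoadStoreUnit(memory), DataReuseCache()
print(execute_load(0x10, lsu, drc))   # first load: via load-store unit
print(execute_load(0x10, lsu, drc))   # repeated load: reuse-cache hit
```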
  • Publication number: 20240111676
    Abstract: A disclosed computing device includes at least one prefetcher and a processing device communicatively coupled to the prefetcher. The processing device is configured to detect a throttling instruction that indicates a start of a throttling region. The computing device is further configured to prevent the prefetcher from being trained on one or more memory instructions included in the throttling region in response to the throttling instruction. Various other apparatuses, systems, and methods are also disclosed.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Marko Scrbak, Gabriel H. Loh, Akhil Arunkumar
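A small sketch of the throttling-region idea above, assuming a hypothetical instruction stream with THROTTLE_START/THROTTLE_END markers and a simple stride prefetcher: memory instructions inside the region are skipped for prefetcher training so irregular accesses do not pollute its state.

```python
# Illustrative model: a prefetcher is trained on load addresses, except for
# loads inside a throttling region delimited by start/end markers.

class StridePrefetcher:
    def __init__(self):
        self.last_addr = None
        self.stride = 0

    def train(self, addr):
        if self.last_addr is not None:
            self.stride = addr - self.last_addr
        self.last_addr = addr

    def predict(self):
        return None if self.last_addr is None else self.last_addr + self.stride

instructions = [
    ("LOAD", 0x100), ("LOAD", 0x140),
    ("THROTTLE_START", None),
    ("LOAD", 0x9000), ("LOAD", 0x9ABC),      # irregular accesses: do not train
    ("THROTTLE_END", None),
    ("LOAD", 0x180),
]

prefetcher = StridePrefetcher()
in_throttle_region = False
for opcode, addr in instructions:
    if opcode == "THROTTLE_START":
        in_throttle_region = True
    elif opcode == "THROTTLE_END":
        in_throttle_region = False
    elif opcode == "LOAD" and not in_throttle_region:
        prefetcher.train(addr)               # training suppressed inside region

print(hex(prefetcher.predict()))             # stride stays 0x40, predicts 0x1c0
```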
  • Publication number: 20240112722
    Abstract: A memory controller for generating accesses for a memory includes a row hammer logic circuit for providing a sample request. In response to the sample request, the memory controller generates a sample command for dispatch to the memory to cause the memory to capture a current row. In response to a completion of the sample command, the memory controller generates a mitigation command for dispatch to the memory.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Kevin M. Brandl, James R. Magro, Kedarnath Balakrishnan, Jing Wang
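A toy sketch of the sample-then-mitigate flow in this abstract. The class names, the neighbor-row refresh, and the aggressor traffic are assumptions for illustration: the controller's row hammer logic raises a sample request, a sample command makes the memory capture the current row, and a mitigation command follows once the sample completes.

```python
# Toy memory-controller flow: sample the currently active row, then issue a
# mitigation command. Names and the mitigation choice are illustrative only.

import random

class Memory:
    def __init__(self):
        self.captured_row = None

    def sample(self, current_row):
        self.captured_row = current_row      # memory latches the sampled row

    def mitigate(self):
        victims = (self.captured_row - 1, self.captured_row + 1)
        return f"refresh neighbor rows {victims} of row {self.captured_row}"

class MemoryController:
    def __init__(self, memory):
        self.memory = memory
        self.active_row = 0

    def access(self, row):
        self.active_row = row

    def handle_sample_request(self):
        self.memory.sample(self.active_row)  # sample command to the memory
        return self.memory.mitigate()        # mitigation after sample completes

mc = MemoryController(Memory())
for _ in range(1000):
    mc.access(random.randint(0, 7))          # aggressor traffic
print(mc.handle_sample_request())
```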
  • Publication number: 20240111682
    Abstract: Runtime flushing to persistency in heterogenous systems is described. In accordance with the described techniques, a system may include a persistent memory in electronic communication with at least one cache and a controller configured to command the at least one cache to flush dirty data to the persistent memory in response to a dirtiness of the at least one cache reaching a cache dirtiness threshold.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Alexander Joseph Branover
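A minimal sketch of dirtiness-triggered flushing as described above. The cache model, the 50% threshold, and the dict-backed persistent memory are illustrative assumptions: once the fraction of dirty lines reaches the threshold, the controller commands a flush of dirty data to persistency.

```python
# Sketch: flush dirty cache data to persistent memory when cache dirtiness
# reaches a threshold. Names and the threshold value are illustrative.

class Cache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.dirty = {}                              # addr -> value pending writeback

    def write(self, addr, value):
        self.dirty[addr] = value

    def dirtiness(self):
        return len(self.dirty) / self.num_lines

    def flush(self, persistent_memory):
        persistent_memory.update(self.dirty)
        self.dirty.clear()

class FlushController:
    def __init__(self, cache, persistent_memory, threshold=0.5):
        self.cache, self.pmem, self.threshold = cache, persistent_memory, threshold

    def on_write(self, addr, value):
        self.cache.write(addr, value)
        if self.cache.dirtiness() >= self.threshold:
            self.cache.flush(self.pmem)              # command a flush to persistency

pmem = {}
ctrl = FlushController(Cache(num_lines=8), pmem)
for i in range(5):
    ctrl.on_write(i, i * 10)
print(pmem)   # flushed once dirtiness reached 4/8 = 50%
```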
  • Publication number: 20240111684
    Abstract: The disclosed method includes detecting a stalled memory request, for a first level cache within a cache hierarchy of a processor, that includes an outstanding memory request associated with a second level cache within the cache hierarchy that is higher than the first level cache. The method further includes indicating, to the second level cache, that the first level cache is experiencing a starvation issue due to stalled memory requests to enable the second level cache to perform starvation-remediation actions. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Sankaranarayanan Gurumurthy, Anil Harwani
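A sketch of the starvation indication described above, under assumed names and a made-up stall limit: an L1 cache whose outstanding request to the L2 stalls past a limit raises a starvation hint, and the L2 responds with a remediation action (here, prioritizing that requester).

```python
# Illustrative starvation signaling between cache levels. All names,
# thresholds, and the remediation action are hypothetical.

class L2Cache:
    def __init__(self):
        self.prioritized = set()

    def starvation_hint(self, requester_id):
        # Remediation: service this requester's outstanding misses first.
        self.prioritized.add(requester_id)
        return f"L2 prioritizing requests from {requester_id}"

class L1Cache:
    STALL_LIMIT = 3

    def __init__(self, name, l2):
        self.name, self.l2 = name, l2
        self.stalled_cycles = 0

    def tick(self, request_still_outstanding):
        if request_still_outstanding:
            self.stalled_cycles += 1
            if self.stalled_cycles >= self.STALL_LIMIT:
                return self.l2.starvation_hint(self.name)
        else:
            self.stalled_cycles = 0
        return None

l2 = L2Cache()
l1 = L1Cache("core0-L1D", l2)
for cycle in range(4):
    action = l1.tick(request_still_outstanding=True)
    if action:
        print(f"cycle {cycle}: {action}")
```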
  • Publication number: 20240111425
Abstract: A method for operating a memory having a plurality of banks accessible in parallel, each bank including a plurality of grains accessible in parallel, is provided. The method includes: based on a memory access request that specifies a memory address, identifying a set that stores data for the memory access request, wherein the set is spread across multiple grains of the plurality of grains; and performing operations to satisfy the memory access request, using entries of the set stored across the multiple grains of the plurality of grains.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Jagadish B. Kotra, Marko Scrbak
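A small sketch of spreading a set across grains as described above. The address split, the bank/grain counts, and the entry layout are illustrative assumptions: an address decodes to a bank and a set index, and the entries of that set live in several grains that can be probed in parallel.

```python
# Sketch of a set spread across multiple grains within a bank. The decode
# function and sizes are illustrative only.

NUM_BANKS, GRAINS_PER_BANK, SETS_PER_GRAIN = 4, 4, 64

def decode(addr):
    bank = addr % NUM_BANKS
    set_index = (addr // NUM_BANKS) % SETS_PER_GRAIN
    return bank, set_index

def lookup(memory, addr):
    bank, set_index = decode(addr)
    # The set's entries are spread across all grains of the bank, so every
    # grain is probed in parallel (modeled here as a loop).
    hits = []
    for grain in range(GRAINS_PER_BANK):
        entry = memory[bank][grain].get(set_index)
        if entry is not None and entry["tag"] == addr:
            hits.append((grain, entry["data"]))
    return hits

# memory[bank][grain] maps set_index -> {"tag": ..., "data": ...}
memory = [[{} for _ in range(GRAINS_PER_BANK)] for _ in range(NUM_BANKS)]
bank, set_index = decode(0x1234)
memory[bank][2][set_index] = {"tag": 0x1234, "data": "payload"}
print(lookup(memory, 0x1234))   # found in grain 2 of the decoded bank
```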
  • Publication number: 20240112749
    Abstract: The disclosed computing device includes a cache memory and at least one processor coupled to the cache memory. The at least one processor is configured to copy data written to one or more nonredundant wordlines of the cache memory to one or more redundant wordlines of the cache memory. The at least one processor is additionally configured to detect a mismatch between data read from the one or more nonredundant wordlines and data stored in the one or more redundant wordlines. The at least one processor is also configured to perform a remediation action in response to detecting the mismatch. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Patrick James Shyvers
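A toy sketch of the redundant-wordline mirroring above. The class, the repair-from-redundant-copy remediation, and the injected bit flip are assumptions: writes are mirrored to redundant wordlines, and a read that mismatches the mirror triggers a remediation action.

```python
# Sketch: mirror writes to redundant wordlines and remediate on mismatch.
# Names and the remediation choice are illustrative.

class MirroredCache:
    def __init__(self, num_wordlines):
        self.primary = [0] * num_wordlines       # nonredundant wordlines
        self.redundant = [0] * num_wordlines     # redundant (mirror) wordlines

    def write(self, wordline, value):
        self.primary[wordline] = value
        self.redundant[wordline] = value          # copy to the redundant wordline

    def read(self, wordline):
        value = self.primary[wordline]
        if value != self.redundant[wordline]:     # mismatch detected
            return self.remediate(wordline)
        return value

    def remediate(self, wordline):
        # One possible remediation: trust the redundant copy and repair.
        repaired = self.redundant[wordline]
        self.primary[wordline] = repaired
        print(f"mismatch on wordline {wordline}: repaired from redundant copy")
        return repaired

cache = MirroredCache(num_wordlines=8)
cache.write(3, 0xDEAD)
cache.primary[3] ^= 0x1           # simulate a bit flip in the primary copy
print(hex(cache.read(3)))         # remediation restores 0xdead
```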
  • Publication number: 20240111591
    Abstract: Portions of programs, oftentimes referred to as kernels, are written by programmers to target a particular type of compute unit, such as a central processing unit (CPU) core or a graphics processing unit (GPU) core. When executing a kernel, the kernel is separated into multiple parts referred to as workgroups, and each workgroup is provided to a compute unit for execution. Usage of one type of compute unit is monitored and, in response to the one type of compute unit being idle, one or more workgroups targeting another type of compute unit are executed on the one type of compute unit. For example, usage of CPU cores is monitored, and in response to the CPU cores being idle, one or more workgroups targeting GPU cores are executed on the CPU cores.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bradford Michael Beckmann, Sooraj Puthoor
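A minimal sketch of the idle-CPU workgroup redirection above, assuming simple per-device queues and an explicit idle flag: when the CPU cores have no work of their own, a workgroup from the GPU-targeted kernel is executed on the CPU instead.

```python
# Sketch of workgroup redirection to idle CPU cores. The queue handling and
# idle check are illustrative.

from collections import deque

def schedule(gpu_workgroups, cpu_workgroups, cpu_is_idle):
    """Return (cpu_assignments, gpu_assignments) for one scheduling step."""
    cpu_run, gpu_run = [], []
    if cpu_workgroups:
        cpu_run.append(cpu_workgroups.popleft())
    elif cpu_is_idle and gpu_workgroups:
        # CPU has nothing of its own: execute a GPU-targeted workgroup there.
        cpu_run.append(gpu_workgroups.popleft())
    if gpu_workgroups:
        gpu_run.append(gpu_workgroups.popleft())
    return cpu_run, gpu_run

gpu_wgs = deque(f"gpu_kernel_wg{i}" for i in range(4))
cpu_wgs = deque()                            # CPU queue is empty, i.e. idle
print(schedule(gpu_wgs, cpu_wgs, cpu_is_idle=True))
# (['gpu_kernel_wg0'], ['gpu_kernel_wg1']) -- one GPU workgroup ran on the CPU
```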
  • Publication number: 20240111618
Abstract: A method for receiving a multi-level error signal having more than two logic levels includes oversampling the multi-level error signal to provide sampled symbols, wherein a first level of the multi-level error signal indicates no error, and second and third levels of the multi-level error signal indicate first and second error conditions, respectively. The sampled symbols are de-serialized to provide sets of symbols. A start of a symbol period is determined in response to detecting that a given sample is different from a prior sample, and the prior sample indicates no error. The sets of symbols are filtered to provide corresponding output symbols based on the start.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Aaron D. Willey, Karthik Gopalakrishnan, Pradeep Jayaraman, Ramon Mangaser
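A rough sketch of the symbol-recovery steps above, under assumed framing: level 0 means "no error" and levels 1 and 2 are the two error conditions; the symbol start is taken where a sample differs from the prior sample and the prior sample was "no error", and samples are then grouped per symbol period and filtered by majority vote. The oversampling factor and the example stream are assumptions.

```python
# Sketch of recovering symbols from an oversampled three-level error signal.
# Oversampling factor, framing, and the filter choice are illustrative.

from collections import Counter

OVERSAMPLE = 4   # samples per symbol period

def find_symbol_start(samples):
    for i in range(1, len(samples)):
        if samples[i] != samples[i - 1] and samples[i - 1] == 0:
            return i                      # start of a symbol period
    return 0

def recover_symbols(samples):
    start = find_symbol_start(samples)
    symbols = []
    for i in range(start, len(samples) - OVERSAMPLE + 1, OVERSAMPLE):
        window = samples[i:i + OVERSAMPLE]
        symbols.append(Counter(window).most_common(1)[0][0])   # filter
    return symbols

# A run of "no error" samples (starting mid-period), then error condition 1,
# then error condition 2.
samples = [0] + [0] * 4 + [0] * 4 + [1] * 4 + [2] * 4
print(recover_symbols(samples))           # [1, 2] -- aligned at the 0 -> 1 transition
```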
  • Publication number: 20240111620
Abstract: A computer-implemented method is disclosed for generating remedy recommendations for power and performance issues within semiconductor software and hardware. For example, the disclosed systems and methods can apply a rule-based model to telemetry data to generate rule-based root-cause outputs as well as telemetry-based unknown outputs. The disclosed systems and methods can further apply a root-cause machine learning model to the telemetry-based unknown outputs to analyze deep and complex failure patterns within the telemetry-based unknown outputs to ultimately generate one or more root-cause remedy recommendations that are specific to the identified failure and the client computing device that is experiencing that failure.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Mohammad Hamed Mousazadeh, Arpit Patel, Gabor Sines, Omer Irshad, Phillippe John Louis Yu, Zongjie Yan, Ian Charles Colbert
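A highly simplified sketch of the two-stage flow above. The rules, telemetry fields, remedies, and the stand-in "model" are all placeholders: a rule-based pass labels the telemetry it recognizes and marks the rest unknown, and the unknowns are handed to a classifier-like stage that maps failure patterns to remedy recommendations.

```python
# Sketch: rule-based pass first, then a stand-in root-cause model for the
# records the rules could not explain. Everything here is illustrative.

def rule_based_pass(telemetry):
    known, unknown = [], []
    for record in telemetry:
        if record["gpu_temp_c"] > 95:
            known.append((record, "reduce power limit / improve cooling"))
        elif record["driver_crashes"] > 0:
            known.append((record, "reinstall or update the GPU driver"))
        else:
            unknown.append(record)
    return known, unknown

def ml_root_cause_pass(unknown_records):
    # Stand-in for a trained model: a trivial pattern lookup.
    patterns = {
        (True, False): "cap background telemetry sampling rate",
        (False, True): "rebalance fan curve for sustained clocks",
    }
    recommendations = []
    for record in unknown_records:
        key = (record["cpu_util"] > 0.9, record["fan_rpm"] < 1000)
        recommendations.append((record, patterns.get(key, "collect more telemetry")))
    return recommendations

telemetry = [
    {"gpu_temp_c": 99, "driver_crashes": 0, "cpu_util": 0.20, "fan_rpm": 2000},
    {"gpu_temp_c": 70, "driver_crashes": 0, "cpu_util": 0.95, "fan_rpm": 2200},
]
known, unknown = rule_based_pass(telemetry)
print(known)
print(ml_root_cause_pass(unknown))
```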
  • Publication number: 20240111622
    Abstract: A disclosed method can include (i) reporting, by a microcontroller, detection of a violation of a physical infrastructure constraint to a machine check architecture, (ii) triggering, by the machine check architecture in response to the reporting, a machine-check exception such that the violation of the physical infrastructure constraint is recorded, and (iii) performing a corrective action based on the triggering of the machine-check exception. Various other apparatuses, systems, and methods are also disclosed.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Siddharth K. Shah, Vilas Sridharan, Amitabh Mehra, Anil Harwani, William Fischofer
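A minimal sketch of the reporting chain above. The microcontroller name, the voltage-regulator limit, and the corrective action are illustrative assumptions: a violation of a physical infrastructure constraint is reported to a machine check architecture, recorded as a machine-check exception, and followed by a corrective action.

```python
# Sketch: microcontroller -> machine check architecture -> machine-check
# exception -> corrective action. All names and limits are hypothetical.

class MachineCheckArchitecture:
    def __init__(self):
        self.error_log = []

    def report_violation(self, source, description):
        # Record the violation and raise a machine-check exception.
        self.error_log.append({"source": source, "description": description})
        self.machine_check_exception(self.error_log[-1])

    def machine_check_exception(self, record):
        corrective_action(record)

def corrective_action(record):
    print(f"MCE from {record['source']}: {record['description']}"
          " -> throttling package power")

class Microcontroller:
    def __init__(self, mca):
        self.mca = mca

    def monitor(self, vrm_current_a, limit_a=120):
        if vrm_current_a > limit_a:
            self.mca.report_violation("SMU", f"VRM current {vrm_current_a} A over limit")

Microcontroller(MachineCheckArchitecture()).monitor(vrm_current_a=135)
```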
  • Publication number: 20240111421
    Abstract: Connection modification based on traffic pattern is described. In accordance with the described techniques, a traffic pattern of memory operations across a set of connections between at least one device and at least one memory is monitored. The traffic pattern is then compared to a threshold traffic pattern condition, such as an amount of data traffic in different directions across the connections. A traffic direction of at least one connection of the set of connections is modified based on the traffic pattern corresponding to the threshold traffic pattern condition.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Nathaniel Morris, Kevin Yu-Cheng Cheng, Atul Kumar Sujayendra Sandur, Sergey Blagodurov
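A small sketch of traffic-pattern-driven link reconfiguration as described above. The link model, the 3:1 imbalance threshold, and flipping exactly one link are assumptions: per-direction traffic across the connections is monitored, and when the imbalance crosses the threshold, one connection's direction is changed to match the busier direction.

```python
# Sketch: flip the direction of one connection when read/write traffic across
# the set of connections becomes imbalanced. Threshold and model are illustrative.

def rebalance(links, bytes_to_memory, bytes_from_memory, threshold=3.0):
    """links: list of 'to_mem' / 'from_mem' direction labels (one per link)."""
    if bytes_from_memory >= threshold * bytes_to_memory and "to_mem" in links:
        links[links.index("to_mem")] = "from_mem"     # flip one link
    elif bytes_to_memory >= threshold * bytes_from_memory and "from_mem" in links:
        links[links.index("from_mem")] = "to_mem"
    return links

links = ["to_mem", "to_mem", "from_mem", "from_mem"]
# Read-heavy phase: traffic from memory far exceeds traffic to memory.
print(rebalance(links, bytes_to_memory=10_000, bytes_from_memory=80_000))
# ['from_mem', 'to_mem', 'from_mem', 'from_mem']
```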
  • Publication number: 20240111355
    Abstract: Methods and systems are disclosed for reducing power consumption by a system including a digital unit and an optical unit. Techniques disclosed comprise generating a workload signature of an incoming workload to be executed by the system. Based on the generated workload signature, techniques disclosed comprise matching the incoming workload with a profile of stored workload profiles. The workload profiles are generated by a trace capture unit. Based on the associated profile, a task submission transaction is sent to the optical unit of the system, representative of a request to execute the incoming workload by the optical unit.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Sergey Blagodurov, Kevin Y. Cheng, SeyedMohammad SeyedzadehDelcheh, Masab Ahmad
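A rough sketch of the signature-matching dispatch above. The signature contents, the stored profiles, and the nearest-match metric are placeholders: an incoming workload is summarized into a signature, matched against stored profiles, and, when the matched profile targets the optical unit, a task-submission transaction is sent there.

```python
# Sketch: match a workload signature against stored profiles and dispatch to
# the optical unit when the profile calls for it. Everything here is illustrative.

import math

stored_profiles = {
    "fft_like":      {"signature": (0.9, 0.1, 0.8), "target": "optical"},
    "pointer_chase": {"signature": (0.1, 0.9, 0.2), "target": "digital"},
}

def make_signature(workload):
    # e.g. (compute intensity, branch irregularity, data parallelism)
    return workload["signature"]

def match_profile(signature):
    def distance(profile):
        return math.dist(signature, profile["signature"])
    name = min(stored_profiles, key=lambda n: distance(stored_profiles[n]))
    return name, stored_profiles[name]

def submit(workload):
    name, profile = match_profile(make_signature(workload))
    if profile["target"] == "optical":
        return f"task submission to optical unit (matched profile '{name}')"
    return f"executed on digital unit (matched profile '{name}')"

print(submit({"signature": (0.85, 0.15, 0.75)}))
```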
  • Patent number: 11948223
    Abstract: Methods and systems are described. A system includes a redundant shader pipe array that performs rendering calculations on data provided thereto and a shader pipe array that includes a plurality of shader pipes, each of which performs rendering calculations on data provided thereto. The system also includes a circuit that identifies a defective shader pipe of the plurality of shader pipes in the shader pipe array. In response to identifying the defective shader pipe, the circuit generates a signal. The system also includes a redundant shader switch. The redundant shader switch receives the generated signal, and, in response to receiving the generated signal, transfers the data for the defective shader pipe to the redundant shader pipe array.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael J. Mantor, Jeffrey T. Brady, Angel E. Socarras
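A minimal sketch of the redundant-shader-pipe failover described above, with hypothetical names and a hard-coded defective pipe: a checker flags a defective pipe, and the switch routes that pipe's data to the redundant pipe array instead.

```python
# Sketch: route work away from a defective shader pipe to a redundant pipe.
# Names are illustrative.

class ShaderPipeArray:
    def __init__(self, num_pipes, defective=None):
        self.num_pipes = num_pipes
        self.defective = set(defective or [])

    def identify_defective(self, pipe):
        return pipe in self.defective        # signal generated for a bad pipe

def redundant_shader_switch(pipe, data, array):
    if array.identify_defective(pipe):
        return f"redundant pipe renders {data} (pipe {pipe} is defective)"
    return f"pipe {pipe} renders {data}"

array = ShaderPipeArray(num_pipes=4, defective=[2])
for pipe in range(4):
    print(redundant_shader_switch(pipe, f"wavefront{pipe}", array))
```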
  • Patent number: 11947476
    Abstract: Methods and systems are disclosed for cross-chiplet performance data streaming. Techniques disclosed include accumulating, by a subservient chiplet, event data associated with an event indicative of a performance aspect of the subservient chiplet; sending, by the subservient chiplet, the event data over a chiplet bus to a master chiplet; and adding, by the master chiplet, the received event data to an event record, the event record containing previously received, from the subservient chiplet over the chiplet bus, event data associated with the event.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bryan Broussard, Pravesh Gupta, Benjamin Tsien, Vydhyanathan Kalyanasundharam
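A toy sketch of the cross-chiplet streaming above. The flush interval and the direct method call standing in for the chiplet bus are assumptions: a subservient chiplet accumulates counts for an event, periodically sends them over the bus, and the master chiplet folds each message into a running event record.

```python
# Sketch: subservient chiplet accumulates event data and streams it to the
# master chiplet's event record. Names and the flush interval are illustrative.

class MasterChiplet:
    def __init__(self):
        self.event_record = {}                # event name -> accumulated count

    def receive(self, event, count):
        self.event_record[event] = self.event_record.get(event, 0) + count

class SubservientChiplet:
    FLUSH_INTERVAL = 4                        # send after this many increments

    def __init__(self, master):
        self.master = master
        self.pending = {}

    def count_event(self, event):
        self.pending[event] = self.pending.get(event, 0) + 1
        if self.pending[event] >= self.FLUSH_INTERVAL:
            # "Chiplet bus" transfer modeled as a direct call.
            self.master.receive(event, self.pending.pop(event))

master = MasterChiplet()
chiplet = SubservientChiplet(master)
for _ in range(9):
    chiplet.count_event("l3_miss")
print(master.event_record, chiplet.pending)   # {'l3_miss': 8} {'l3_miss': 1}
```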
  • Patent number: 11947487
    Abstract: Methods and systems are disclosed for performing dataflow execution by an accelerated processing unit (APU). Techniques disclosed include decoding information from one or more dataflow instructions. The decoded information is associated with dataflow execution of a computational task. Techniques disclosed further include configuring, based on the decoded information, dataflow circuitry, and, then, executing the dataflow execution of the computational task using the dataflow circuitry.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Johnathan Robert Alsop, Karthik Ramu Sangaiah, Anthony T. Gutierrez
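A very simplified sketch of the decode / configure / execute flow above. The string "instruction" encoding, the two-stage operator pipeline, and the streaming loop are placeholders: decoded information describes a dataflow computation, the "circuitry" is configured from it, and the computation then runs by streaming operands through the configured stages.

```python
# Sketch: decode a dataflow instruction into an operator graph, configure the
# "circuitry", and stream data through it. Encoding and operators are illustrative.

import operator

OPS = {"mul": operator.mul, "add": operator.add}

def decode(dataflow_instruction):
    # e.g. "mul;add" describes a two-stage pipeline: out = (a * b) + c
    return dataflow_instruction.split(";")

def configure(decoded_stages):
    return [OPS[stage] for stage in decoded_stages]   # configured "circuitry"

def execute(stages, a_stream, b_stream, c_stream):
    for a, b, c in zip(a_stream, b_stream, c_stream):
        partial = stages[0](a, b)
        yield stages[1](partial, c)

stages = configure(decode("mul;add"))
print(list(execute(stages, [1, 2, 3], [4, 5, 6], [7, 8, 9])))  # [11, 18, 27]
```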
  • Patent number: 11948073
    Abstract: Systems, apparatuses, and methods for adaptively mapping a machine learning model to a multi-core inference accelerator engine are disclosed. A computing system includes a multi-core inference accelerator engine with multiple inference cores coupled to a memory subsystem. The system also includes a control unit which determines how to adaptively map a machine learning model to the multi-core inference accelerator engine. In one implementation, the control unit selects a mapping scheme which minimizes the memory bandwidth utilization of the multi-core inference accelerator engine. In one implementation, this mapping scheme involves having one inference core of the multi-core inference accelerator engine fetch given data and broadcast the given data to other inference cores of the inference accelerator engine. Each inference core fetches second data unique to the respective inference core.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: April 2, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Lei Zhang, Sateesh Lagudu, Allen Rush
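A small sketch of the broadcast mapping described above, with assumed shapes and fetch accounting: one inference core fetches the data shared by all cores (for example, layer weights) and broadcasts it, while each core fetches only the second data unique to it, so fewer memory fetches are needed overall.

```python
# Sketch: one core fetches shared data once and broadcasts it; every core
# fetches only its own unique data. Names and shapes are illustrative.

NUM_CORES = 4

def run_layer(shared_weights_fetcher, unique_input_fetcher):
    memory_fetches = 0

    # Core 0 fetches the shared data once and broadcasts it.
    shared_weights = shared_weights_fetcher()
    memory_fetches += 1
    broadcast = [shared_weights] * NUM_CORES

    outputs = []
    for core in range(NUM_CORES):
        unique_input = unique_input_fetcher(core)   # per-core fetch
        memory_fetches += 1
        outputs.append(sum(w * x for w, x in zip(broadcast[core], unique_input)))
    return outputs, memory_fetches

weights = [1.0, 2.0, 3.0]
inputs = [[c, c + 1, c + 2] for c in range(NUM_CORES)]
outs, fetches = run_layer(lambda: weights, lambda core: inputs[core])
print(outs, f"{fetches} memory fetches instead of {2 * NUM_CORES}")
```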
  • Patent number: 11947833
    Abstract: A method and apparatus for training data in a computer system includes reading data stored in a first memory address in a memory and writing it to a buffer. Training data is generated for transmission to the first memory address. The data is transmitted to the first memory address. Information relating to the training data is read from the first memory address and the stored data is read from the buffer and written to the memory area where the training data was transmitted.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anwar Kashem, Craig Daniel Eaton, Pouya Najafi Ashtiani, Tsun Ho Liu
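A minimal sketch of the save / train / restore sequence in this abstract. The training pattern and the pass/fail check are assumptions: the data at the training address is read into a buffer, training data is written to and read back from that address, and the buffered data is then written back to the same location.

```python
# Sketch: preserve application data around a memory-training write. The
# pattern and result check are illustrative.

def train_address(memory, addr, training_pattern=0xA5A5A5A5):
    buffer = memory[addr]                  # save the application data
    memory[addr] = training_pattern        # transmit training data
    readback = memory[addr]                # read information relating to training
    training_ok = (readback == training_pattern)
    memory[addr] = buffer                  # restore the saved data
    return training_ok

memory = {0x4000: 0x12345678}
print(train_address(memory, 0x4000), hex(memory[0x4000]))   # True 0x12345678
```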
  • Patent number: 11947380
    Abstract: Systems and methods related to controlling clock signals for clocking shader engines modules (SEs) and non-shader-engine modules (nSEs) of a graphics processing unit (GPU) are provided. One or more dividers receive a clock signal CLK and output a clock signal CLKA to the SEs and output a clock signal CLKB to the nSEs. The frequencies of CLKA and CLKB are independently selected based on sets of performance counter data monitored at the SEs and nSEs, respectively. The clock signal frequency for either the SEs or the nSEs is reduced when the corresponding sets of performance counter data indicates a comparatively lower processing workload for the SEs or for the nSEs.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ranjith Kumar Sajja, Sreekanth Godey, Anirudh R. Acharya
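A small sketch of the per-domain clock control above. The busy-ratio inputs, the 2:1 comparison, and the divider step are illustrative assumptions: performance-counter data for the shader engines (SEs) and the non-shader-engine blocks (nSEs) is compared, and the divider of the comparatively less-loaded domain is increased so its clock (CLKA or CLKB) is reduced.

```python
# Sketch: pick dividers for CLKA (shader engines) and CLKB (non-SE blocks)
# from per-domain load indicators. Thresholds and steps are illustrative.

def select_dividers(se_busy_ratio, nse_busy_ratio, base_divider=1):
    div_a = div_b = base_divider
    if se_busy_ratio < 0.5 * nse_busy_ratio:
        div_a = base_divider * 2          # SEs comparatively idle: slow CLKA
    elif nse_busy_ratio < 0.5 * se_busy_ratio:
        div_b = base_divider * 2          # nSEs comparatively idle: slow CLKB
    return div_a, div_b

CLK_MHZ = 2000
div_a, div_b = select_dividers(se_busy_ratio=0.9, nse_busy_ratio=0.3)
print(f"CLKA = {CLK_MHZ // div_a} MHz, CLKB = {CLK_MHZ // div_b} MHz")
# CLKA = 2000 MHz, CLKB = 1000 MHz
```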
  • Patent number: 11947455
Abstract: Disclosed is a system and method for use in a cache for suppressing modification of a cache line. The system and method include a processor and a memory operating cooperatively with a cache controller. The memory includes a coherence directory stored within a cache created to track at least one cache line in the cache via the cache controller. The processor instructs the cache controller to store first data in a cache line in the cache. The cache controller tags the cache line based on the first data. The processor instructs the cache controller to store second data in the cache line in the cache, causing eviction of the first data from the cache line. Based on the tagging, the processor compares the first data and the second data and suppresses modification of the cache line based on that comparison.
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul J. Moyer
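A toy sketch of the suppression idea in this last abstract. The hash-based tag and the byte-string data are assumptions: the controller tags a line based on the data it holds, and when new data arrives for that line, the tag is used to compare old and new data so the modification can be suppressed if they match.

```python
# Sketch: tag a cache line from its data and suppress a write when the new
# data matches the tagged contents. The tagging function is illustrative.

class CacheController:
    def __init__(self):
        self.lines = {}     # line index -> data
        self.tags = {}      # line index -> tag derived from the data

    def tag_of(self, data):
        return hash(data)

    def store(self, line, data):
        if line in self.tags and self.tags[line] == self.tag_of(data):
            return "modification suppressed (same data)"
        self.lines[line] = data
        self.tags[line] = self.tag_of(data)
        return "line written"

ctrl = CacheController()
print(ctrl.store(5, b"first data"))     # line written
print(ctrl.store(5, b"first data"))     # modification suppressed (same data)
print(ctrl.store(5, b"second data"))    # line written
```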