Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
Type:
Grant
Filed:
October 19, 2022
Date of Patent:
April 9, 2024
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
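The patent text above supplies no code; the following Python is an illustrative sketch with invented names (RegionDirectory, record_exclusive, access) of the general idea of a region-granular directory that, on an access by another requester, issues a demote probe so cached lines move from exclusive to shared rather than being invalidated.

    # Illustrative sketch of a region-based coherence directory (hypothetical names).
    from enum import Enum

    class State(Enum):
        INVALID = 0
        SHARED = 1
        EXCLUSIVE = 2

    class RegionDirectory:
        def __init__(self, region_size=4096):
            self.region_size = region_size
            self.entries = {}  # region base address -> (state, owning cache id)

        def record_exclusive(self, addr, owner):
            base = addr - (addr % self.region_size)
            self.entries[base] = (State.EXCLUSIVE, owner)

        def access(self, addr, requester):
            """On an access to a region held exclusive by another cache,
            send a demote probe so its lines become shared instead of invalid."""
            base = addr - (addr % self.region_size)
            state, owner = self.entries.get(base, (State.INVALID, None))
            if state is State.EXCLUSIVE and owner != requester:
                self.entries[base] = (State.SHARED, owner)
                return f"demote probe to cache {owner} for region {hex(base)}"
            return "no probe needed"

    directory = RegionDirectory()
    directory.record_exclusive(0x1000, owner=0)
    print(directory.access(0x1040, requester=1))  # triggers the demote probe
    print(directory.access(0x1080, requester=1))  # region already shared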
Abstract: In an implementation, a semiconductor chip includes a device layer, an interconnect layer fabricated on the device layer, the interconnect layer including a conductive pad, and a conductive pillar coupled to the conductive pad. The conductive pillar includes at least a first portion having a first width and a second portion having a second width, the first portion being disposed between the second portion and the conductive pad, wherein the first width of the first portion is greater than the second width of the second portion.
Type:
Grant
Filed:
November 17, 2021
Date of Patent:
April 9, 2024
Assignees:
ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
Abstract: A processing device and method for executing a color twist operation are provided. The processing device comprises memory and a processor configured to convert values of pixels of a frame from a first color domain to a hue, saturation and value (HSV) color domain, adjust hue values and saturation values of the pixels, store the adjusted hue and saturation values in a portion of the memory local to the processor and convert the frame from the HSV color domain to the first color domain using the adjusted hue and saturation values stored in local memory. The adjusted hue and saturation values are generated from pre-adjusted values, which are generated from masked vector values.
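As a minimal sketch of the color-twist flow described above (not the patented implementation), the Python below converts RGB pixels to HSV, adjusts hue and saturation, keeps the adjusted values in a stand-in for processor-local memory, and converts back; the parameter names are invented.

    # Illustrative color-twist pass: RGB -> HSV, adjust, cache, HSV -> RGB.
    import colorsys

    def color_twist(pixels, hue_shift=0.1, sat_scale=1.2):
        local_store = []  # stands in for memory local to the processor
        for r, g, b in pixels:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            local_store.append(((h + hue_shift) % 1.0, min(s * sat_scale, 1.0), v))
        return [colorsys.hsv_to_rgb(h, s, v) for h, s, v in local_store]

    print(color_twist([(0.8, 0.2, 0.2), (0.1, 0.5, 0.9)]))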
Abstract: Devices and methods for node traversal for ray tracing are provided, which comprise casting a first ray in a space comprising objects represented by geometric shapes, traversing, for the first ray, at least one first node of an accelerated hierarchy structure representing an approximate volume of a group of the geometric shapes and a second node representing a volume of one of the geometric shapes, casting a second ray in the space, selecting, for the second ray, a starting node of traversal based on locations of intersection of the first ray and the second ray and an identifier which identifies one or more nodes intersected by the first ray and traversing, for the second ray, the accelerated hierarchy structure beginning at the starting node of traversal.
Type:
Application
Filed:
September 29, 2022
Publication date:
April 4, 2024
Applicants:
Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors:
David William John Pankratz, Konstantin I. Shkurko
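A simplified sketch of the traversal-reuse idea above, with invented structures (Node, traverse) and a one-dimensional "scene": the first ray records which node it intersected, and a nearby second ray starts its traversal at that recorded node instead of the root.

    # Minimal sketch of reusing a prior ray's traversal starting point.
    class Node:
        def __init__(self, ident, lo, hi, children=()):
            self.ident, self.lo, self.hi, self.children = ident, lo, hi, children

        def contains(self, x):
            return self.lo <= x <= self.hi

    def traverse(node, x, hits):
        if not node.contains(x):
            return
        hits.append(node.ident)
        for child in node.children:
            traverse(child, x, hits)

    leaf_a = Node("leaf_a", 0, 4)
    leaf_b = Node("leaf_b", 5, 9)
    root = Node("root", 0, 9, (leaf_a, leaf_b))

    first_hits = []
    traverse(root, 2.0, first_hits)    # first ray: traverse from the root
    start = leaf_a if "leaf_a" in first_hits else root
    second_hits = []
    traverse(start, 2.5, second_hits)  # second ray: start at the recorded node
    print(first_hits, second_hits)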
Abstract: Runtime flushing to persistency in heterogenous systems is described. In accordance with the described techniques, a system may include a persistent memory in electronic communication with at least one cache and a controller configured to command the at least one cache to flush dirty data to the persistent memory in response to a dirtiness of the at least one cache reaching a cache dirtiness threshold.
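A minimal sketch of the dirtiness-threshold idea above, using invented names (FlushController, write, flush): once the fraction of dirty lines reaches the threshold, the controller commands a flush to persistent memory.

    # Illustrative controller that flushes dirty data once a dirtiness threshold is hit.
    class FlushController:
        def __init__(self, cache_lines, dirtiness_threshold=0.5):
            self.cache = [False] * cache_lines   # True = dirty line
            self.threshold = dirtiness_threshold
            self.persistent_memory = []

        def write(self, line):
            self.cache[line] = True
            if sum(self.cache) / len(self.cache) >= self.threshold:
                self.flush()

        def flush(self):
            dirty = [i for i, d in enumerate(self.cache) if d]
            self.persistent_memory.extend(dirty)   # stand-in for writing back the data
            self.cache = [False] * len(self.cache)

    ctrl = FlushController(cache_lines=4)
    for line in (0, 1):
        ctrl.write(line)                           # second write trips the threshold
    print(ctrl.persistent_memory)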
Abstract: Methods and devices are provided for processing image data on a sub-frame portion basis using layers of a convolutional neural network. The processing device comprises memory and a processor. The processor is configured to determine, for an input tile of an image, a receptive field via backward propagation and determine a size of the input tile based on the receptive field and an amount of local memory allocated to store data for the input tile. The processor also determines whether the amount of local memory allocated is sufficient to store the data of the input tile and padded data for the receptive field.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicants:
Advanced Micro Devices, Inc., ATI Technologies ULC
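A sketch of the tile-sizing idea in the abstract above, under assumed conventions (stacked convolutions described by kernel size and stride, a square tile, a fixed byte budget); the function names are invented.

    # Illustrative: receptive field by backward propagation, then the largest tile
    # (plus receptive-field padding) that fits a local-memory budget.
    def receptive_field(layers):
        r, jump = 1, 1
        for kernel, stride in layers:
            r += (kernel - 1) * jump
            jump *= stride
        return r

    def choose_tile(layers, local_mem_bytes, bytes_per_pixel=4):
        rf = receptive_field(layers)
        tile = 1
        while (tile + rf - 1) ** 2 * bytes_per_pixel <= local_mem_bytes:
            tile += 1
        return tile - 1, rf

    print(choose_tile([(3, 1), (3, 2), (3, 1)], local_mem_bytes=64 * 1024))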
Abstract: The disclosed method includes detecting a stalled memory request, for a first level cache within a cache hierarchy of a processor, that includes an outstanding memory request associated with a second level cache within the cache hierarchy that is higher than the first level cache. The method further includes indicating, to the second level cache, that the first level cache is experiencing a starvation issue due to stalled memory requests to enable the second level cache to perform starvation-remediation actions. Various other methods, systems, and computer-readable media are also disclosed.
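A minimal sketch of the starvation-signaling idea above, with invented interfaces (L1, L2, report_starvation): the first-level cache counts requests stalled on the second level and, past a limit, tells the second level so it can take a remediation action.

    # Illustrative starvation signal from an L1 to its L2.
    class L2:
        def __init__(self):
            self.prioritized = set()

        def report_starvation(self, requester):
            self.prioritized.add(requester)        # stand-in remediation action

    class L1:
        def __init__(self, name, l2, stall_limit=3):
            self.name, self.l2, self.stall_limit = name, l2, stall_limit
            self.stalled = 0

        def request_stalled(self):
            self.stalled += 1
            if self.stalled >= self.stall_limit:
                self.l2.report_starvation(self.name)

    l2 = L2()
    l1 = L1("core0-L1", l2)
    for _ in range(3):
        l1.request_stalled()
    print(l2.prioritized)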
Abstract: Methods, devices, and systems for retrieving information based on cache miss prediction. It is predicted, based on a history of cache misses at a private cache, that a cache lookup for the information will miss a shared victim cache. A speculative memory request is enabled based on the prediction that the cache lookup for the information will miss the shared victim cache. The information is fetched based on the enabled speculative memory request.
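A sketch of the miss-prediction idea above with an invented history-based predictor (MissPredictor): when the predictor expects the victim-cache lookup to miss, a speculative memory request is enabled alongside the lookup.

    # Illustrative miss predictor gating a speculative memory request.
    from collections import deque

    class MissPredictor:
        def __init__(self, history_len=8, miss_ratio=0.75):
            self.history = deque(maxlen=history_len)
            self.miss_ratio = miss_ratio

        def record(self, was_miss):
            self.history.append(was_miss)

        def predict_miss(self):
            return bool(self.history) and sum(self.history) / len(self.history) >= self.miss_ratio

    def fetch(addr, predictor, victim_cache):
        speculative = predictor.predict_miss()   # enable the speculative memory request
        hit = addr in victim_cache
        predictor.record(not hit)
        return "victim cache" if hit else ("memory (speculative)" if speculative else "memory")

    pred = MissPredictor()
    for a in range(4):
        print(fetch(a, pred, victim_cache={100}))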
Abstract: Portions of programs, oftentimes referred to as kernels, are written by programmers to target a particular type of compute unit, such as a central processing unit (CPU) core or a graphics processing unit (GPU) core. When executing a kernel, the kernel is separated into multiple parts referred to as workgroups, and each workgroup is provided to a compute unit for execution. Usage of one type of compute unit is monitored and, in response to the one type of compute unit being idle, one or more workgroups targeting another type of compute unit are executed on the one type of compute unit. For example, usage of CPU cores is monitored, and in response to the CPU cores being idle, one or more workgroups targeting GPU cores are executed on the CPU cores.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Bradford Michael Beckmann, Sooraj Puthoor
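A minimal sketch of the workgroup-borrowing idea above, with an invented scheduler: workgroups targeting the GPU are placed on CPU cores whenever the CPU is observed to be idle.

    # Illustrative scheduler that runs GPU-targeted workgroups on idle CPU cores.
    def schedule(workgroups, cpu_idle):
        placement = {}
        for wg, target in workgroups:
            if target == "gpu" and cpu_idle():
                placement[wg] = "cpu"      # borrow the idle CPU core
            else:
                placement[wg] = target
            # a fuller model would re-sample utilization between workgroups
        return placement

    print(schedule([("wg0", "gpu"), ("wg1", "gpu"), ("wg2", "cpu")],
                   cpu_idle=lambda: True))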
Abstract: The disclosed computing device includes a cache memory and at least one processor coupled to the cache memory. The at least one processor is configured to copy data written to one or more nonredundant wordlines of the cache memory to one or more redundant wordlines of the cache memory. The at least one processor is additionally configured to detect a mismatch between data read from the one or more nonredundant wordlines and data stored in the one or more redundant wordlines. The at least one processor is also configured to perform a remediation action in response to detecting the mismatch. Various other methods, systems, and computer-readable media are also disclosed.
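A sketch of the mirrored-wordline idea above with invented names (MirroredCache, remediate): writes are copied to redundant storage, a read that disagrees with the copy is flagged, and a stand-in remediation repairs the mismatch.

    # Illustrative mirrored cache with mismatch detection and remediation.
    class MirroredCache:
        def __init__(self, size):
            self.primary = [0] * size
            self.redundant = [0] * size

        def write(self, index, value):
            self.primary[index] = value
            self.redundant[index] = value          # copy to the redundant wordline

        def read(self, index):
            if self.primary[index] != self.redundant[index]:
                self.remediate(index)
            return self.redundant[index]

        def remediate(self, index):
            self.primary[index] = self.redundant[index]   # stand-in remediation action

    cache = MirroredCache(4)
    cache.write(2, 0xAB)
    cache.primary[2] = 0xFF                        # simulate corruption
    print(hex(cache.read(2)))                      # mismatch detected and repaired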
Abstract: Selecting between basic and global persistent flush modes is described. In accordance with the described techniques, a system includes a data fabric in electronic communication with at least one cache and a controller configured to select between operating in a global persistent flush mode and a basic persistent flush mode based on an available flushing latency of the system, control the at least one cache to flush dirty data to the data fabric in response to a flush event trigger while operating in the global persistent flush mode, and transmit a signal to switch control to an application to flush the dirty data from the at least one cache to the data fabric while operating in the basic persistent flush mode.
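A minimal sketch of the mode-selection idea above, under assumed units and an invented flush-rate model: global mode is chosen only when the available latency budget covers flushing all dirty data; otherwise control passes to the application (basic mode).

    # Illustrative selection between global and basic persistent flush modes.
    def select_flush_mode(available_latency_us, dirty_bytes, flush_rate_bytes_per_us):
        needed_us = dirty_bytes / flush_rate_bytes_per_us
        return "global" if available_latency_us >= needed_us else "basic"

    mode = select_flush_mode(available_latency_us=500, dirty_bytes=2_000_000,
                             flush_rate_bytes_per_us=8_000)
    print(mode)   # "global": a 500 us budget covers the ~250 us flush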
Abstract: A method for receiving a multi-level error signal having more than two logic levels includes oversampling the multi-level error signal to provide sampled symbols, wherein a first level of the multi-level error signal indicates no error, and second and third levels of the multi-level error signal indicate first and second error conditions, respectively. The sampled signals are de-serialized to provide sets of symbols. A start of a symbol period is determined in response to detecting that a given sample is different from a prior sample, and the prior sample indicates no error. The sets of symbols are filtered to provide corresponding output symbols based on the start.
Type:
Application
Filed:
September 29, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Aaron D. Willey, Karthik Gopalakrishnan, Pradeep Jayaraman, Ramon Mangaser
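A software sketch of the receive-side idea above (the patent describes a circuit, not code): oversampled three-level symbols, where 0 means no error, are scanned for a symbol boundary where a sample differs from a prior no-error sample, then each symbol period is filtered to one output symbol. Names and the majority-vote filter are assumptions.

    # Illustrative symbol-start detection and per-period filtering.
    def find_symbol_start(samples):
        for i in range(1, len(samples)):
            if samples[i] != samples[i - 1] and samples[i - 1] == 0:
                return i
        return 0

    def filter_symbols(samples, oversample=4):
        start = find_symbol_start(samples)
        out = []
        for i in range(start, len(samples) - oversample + 1, oversample):
            window = samples[i:i + oversample]
            out.append(max(window, key=window.count))   # majority vote per period
        return out

    samples = [0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 0, 0, 0, 0, 1, 1, 1, 1]
    print(filter_symbols(samples))   # -> [2, 0, 1]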
Abstract: A method for performing prefetching operations is disclosed. The method includes storing a recorded access pattern indicating a set of accesses for a region; in response to an access within the region, fetching the recorded access pattern; and performing prefetching based on the access pattern.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Gabriel H. Loh, Marko Scrbak, Akhil Arunkumar, John Kalamatianos
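A minimal sketch of the region-pattern prefetching idea above, using invented names (RegionPrefetcher, on_access): offsets touched within a region are recorded, and a later access to the region replays the recorded pattern as prefetches.

    # Illustrative region-based access-pattern prefetcher.
    class RegionPrefetcher:
        def __init__(self, region_size=4096):
            self.region_size = region_size
            self.patterns = {}      # region base -> set of recorded offsets

        def on_access(self, addr):
            base, offset = divmod(addr, self.region_size)
            pattern = self.patterns.setdefault(base, set())
            prefetches = [base * self.region_size + o for o in sorted(pattern) if o != offset]
            pattern.add(offset)     # keep training the recorded pattern
            return prefetches

    pf = RegionPrefetcher()
    for a in (0x100, 0x180, 0x240):               # first pass trains the region pattern
        pf.on_access(a)
    print([hex(a) for a in pf.on_access(0x100)])  # replayed prefetches: 0x180, 0x240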
Abstract: Data reuse cache techniques are described. In one example, a load instruction is generated by an execution unit of a processor unit. In response to the load instruction, data is loaded by a load-store unit for processing by the execution unit and is also stored to a data reuse cache communicatively coupled between the load-store unit and the execution unit. Upon receipt of a subsequent load instruction for the data from the execution unit, the data is loaded from the data reuse cache for processing by the execution unit.
Type:
Application
Filed:
September 29, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Alok Garg, Neil N Marketkar, Matthew T. Sobel
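A sketch of the data reuse cache idea above with invented structures (LoadStoreUnit, DataReuseCache): loaded data is also kept in a small cache between the load-store unit and the execution unit, so a repeated load is served without returning to the load-store unit.

    # Illustrative data reuse cache between a load-store unit and an execution unit.
    class LoadStoreUnit:
        def __init__(self, memory):
            self.memory = memory
            self.loads = 0

        def load(self, addr):
            self.loads += 1
            return self.memory[addr]

    class DataReuseCache:
        def __init__(self, lsu, capacity=4):
            self.lsu, self.capacity, self.lines = lsu, capacity, {}

        def load(self, addr):
            if addr in self.lines:
                return self.lines[addr]                    # served from the reuse cache
            value = self.lsu.load(addr)
            if len(self.lines) >= self.capacity:
                self.lines.pop(next(iter(self.lines)))     # simple eviction policy
            self.lines[addr] = value
            return value

    lsu = LoadStoreUnit({0x10: 42})
    reuse = DataReuseCache(lsu)
    print(reuse.load(0x10), reuse.load(0x10), "LSU loads:", lsu.loads)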
Abstract: Methods and systems are disclosed for reducing power consumption by a system including a digital unit and an optical unit. Techniques disclosed comprise generating a workload signature of an incoming workload to be executed by the system. Based on the generated workload signature, techniques disclosed comprise matching the incoming workload with a profile of stored workload profiles. The workload profiles are generated by a trace capture unit. Based on the associated profile, a task submission transaction is sent to the optical unit of the system, representative of a request to execute the incoming workload by the optical unit.
Type:
Application
Filed:
September 29, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Sergey Blagodurov, Kevin Y. Cheng, SeyedMohammad SeyedzadehDelcheh, Masab Ahmad
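A very rough sketch of the signature-matching idea above; the signature features, profile table, and dispatch decision are all invented stand-ins for whatever the trace capture unit actually records.

    # Illustrative workload-signature matching against stored profiles.
    def signature(workload):
        return (workload["kind"], round(workload["size_mb"] / 64))   # coarse features

    profiles = {
        ("matmul", 2): {"submit_to_secondary_unit": True},
        ("decode", 0): {"submit_to_secondary_unit": False},
    }

    def dispatch(workload):
        profile = profiles.get(signature(workload))
        if profile and profile["submit_to_secondary_unit"]:
            return "task submission transaction sent to secondary unit"
        return "executed locally"

    print(dispatch({"kind": "matmul", "size_mb": 130}))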
Abstract: A disclosed computing device includes at least one prefetcher and a processing device communicatively coupled to the prefetcher. The processing device is configured to detect a throttling instruction that indicates a start of a throttling region. The computing device is further configured to prevent the prefetcher from being trained on one or more memory instructions included in the throttling region in response to the throttling instruction. Various other apparatuses, systems, and methods are also disclosed.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
John Kalamatianos, Marko Scrbak, Gabriel H. Loh, Akhil Arunkumar
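A minimal sketch of the throttling-region idea above, with an invented instruction encoding: a throttle-start marker suppresses prefetcher training on subsequent memory instructions until the throttle-end marker is reached.

    # Illustrative prefetcher-training suppression inside a throttling region.
    class Prefetcher:
        def __init__(self):
            self.trained_addresses = []

        def train(self, addr):
            self.trained_addresses.append(addr)

    def execute(instructions, prefetcher):
        throttled = False
        for op, arg in instructions:
            if op == "throttle_start":
                throttled = True
            elif op == "throttle_end":
                throttled = False
            elif op == "load" and not throttled:
                prefetcher.train(arg)      # training suppressed inside the region

    pf = Prefetcher()
    execute([("load", 0x100), ("throttle_start", None), ("load", 0x200),
             ("throttle_end", None), ("load", 0x300)], pf)
    print([hex(a) for a in pf.trained_addresses])   # 0x100 and 0x300 only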
Abstract: A method and apparatus for storing keys in a key storage block includes processing a key request. A first key is allocated based upon the key request. The first key is stored in the key storage block, wherein the first key is of a first size and includes a first rule.
Type:
Application
Filed:
September 29, 2022
Publication date:
April 4, 2024
Applicants:
Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors:
Norman Vernon Douglas Stewart, Mihir Shaileshbhai Doctor, Omar Fakhri Ahmed, Hemaprabhu Jayanna, John Traver
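A thin sketch of the key storage idea above; the entry layout (size plus rule) and allocation flow are invented placeholders for the structures the patent describes.

    # Illustrative key storage block holding keys of a requested size with a rule.
    import os
    from dataclasses import dataclass

    @dataclass
    class KeyEntry:
        key: bytes
        size_bits: int
        rule: str

    class KeyStorageBlock:
        def __init__(self):
            self.entries = {}
            self.next_id = 0

        def process_request(self, size_bits, rule):
            key_id = self.next_id
            self.entries[key_id] = KeyEntry(os.urandom(size_bits // 8), size_bits, rule)
            self.next_id += 1
            return key_id

    block = KeyStorageBlock()
    kid = block.process_request(size_bits=128, rule="read-only")
    print(kid, block.entries[kid].size_bits, block.entries[kid].rule)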
Abstract: A method for operating a memory having a plurality of banks accessible in parallel, each bank including a plurality of grains accessible in parallel, is provided. The method includes: based on a memory access request that specifies a memory address, identifying a set that stores data for the memory access request, wherein the set is spread across multiple grains of the plurality of grains; and performing operations to satisfy the memory access request, using entries of the set stored across the multiple grains of the plurality of grains.
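A minimal sketch of the addressing idea above; the bank/grain/set counts and the mapping function are assumptions used only to show a set spread across a bank's grains.

    # Illustrative mapping of an address to set entries spread across grains.
    NUM_BANKS, GRAINS_PER_BANK, SETS_PER_GRAIN = 4, 2, 8

    def locate(addr):
        bank = addr % NUM_BANKS
        set_index = (addr // NUM_BANKS) % SETS_PER_GRAIN
        # one entry of the set lives in each grain of the bank, accessible in parallel
        return [(bank, grain, set_index) for grain in range(GRAINS_PER_BANK)]

    print(locate(0x2A))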
Abstract: A memory controller includes a first arbiter for selecting memory commands for dispatch to a memory over a first channel, a second arbiter for selecting memory commands for dispatch to the memory over a second channel, and a test circuit. The test circuit generates a respective testing sequence of read commands and write commands for each of the first channel and second channel, and causes the testing sequences to be transmitted over the first and second channels at least partially overlapping in time without selection by the first or second arbiters.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Tahsin Askar, Naveen Davanam, Kedarnath Balakrishnan, Kevin M. Brandl, James R. Magro
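A rough sketch of the test-circuit idea above; the command format and interleaving are invented, and only illustrate two per-channel read/write sequences being issued so that they overlap in time rather than passing through the arbiters.

    # Illustrative overlapping test sequences on two channels, bypassing arbitration.
    import itertools

    def testing_sequence(channel, length=4):
        return [("write" if i % 2 == 0 else "read", channel, i) for i in range(length)]

    def transmit_overlapping(seq_a, seq_b):
        issued = []
        for a, b in itertools.zip_longest(seq_a, seq_b):
            if a: issued.append(a)
            if b: issued.append(b)   # the two channels advance together in time
        return issued

    print(transmit_overlapping(testing_sequence("ch0"), testing_sequence("ch1")))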
Abstract: A memory controller for generating accesses for a memory includes a row hammer logic circuit for providing a sample request. In response to the sample request, the memory controller generates a sample command for dispatch to the memory to cause the memory to capture a current row. In response to a completion of the sample command, the memory controller generates a mitigation command for dispatch to the memory.
Type:
Application
Filed:
September 30, 2022
Publication date:
April 4, 2024
Applicant:
Advanced Micro Devices, Inc.
Inventors:
Kevin M. Brandl, James R. Magro, Kedarnath Balakrishnan, Jing Wang
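A minimal sketch of the sample-then-mitigate flow above, with an invented sampling policy (every Nth activation): on a sample request the controller dispatches a sample command so the memory captures the current row, and once that completes it dispatches a mitigation command.

    # Illustrative row-hammer sample and mitigation command flow.
    class MemoryController:
        def __init__(self, sample_interval=3):
            self.sample_interval = sample_interval
            self.activations = 0
            self.command_log = []

        def activate(self, row):
            self.activations += 1
            if self.activations % self.sample_interval == 0:   # sample request fires
                self.command_log.append(("sample", row))        # memory captures current row
                self.command_log.append(("mitigate", row))      # dispatched after sample completes

    mc = MemoryController(sample_interval=3)
    for row in (3, 7, 3, 9, 3, 3, 5):
        mc.activate(row)
    print(mc.command_log)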