Patents Assigned to Advanced Micro Devices, Inc.
  • Patent number: 11880260
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power, albeit with lower performance, than the first processor.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Elliot H. Mednick, Edward McLellan
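A minimal Python sketch of the scheduling idea in this abstract, assuming invented names (Core, pick_core) and illustrative feature labels rather than anything from the patent; it shows how a thread could be routed to the lower-power core whenever that core's ISA-feature subset covers the features the thread actually uses:

```python
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    features: frozenset     # ISA features this core supports (illustrative labels)
    relative_power: float   # lower means more power-efficient

def pick_core(thread_features: set, cores: list) -> Core:
    """Prefer the most power-efficient core whose ISA-feature subset covers the thread."""
    capable = [c for c in cores if thread_features <= c.features]
    if not capable:
        raise RuntimeError("no core supports the thread's ISA features")
    return min(capable, key=lambda c: c.relative_power)

big = Core("big", frozenset({"base", "wide-simd"}), relative_power=1.0)
little = Core("little", frozenset({"base"}), relative_power=0.4)

print(pick_core({"base"}, [big, little]).name)               # little: saves power
print(pick_core({"base", "wide-simd"}, [big, little]).name)  # big: needs the full feature set
```

The subset test stands in for the abstract's feature comparison; any real system would also weigh load and performance targets.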
  • Patent number: 11880715
    Abstract: Methods and systems for load balancing in a neural network system using metadata are disclosed. Any one or a combination of one or more kernels, one or more neurons, and one or more layers of the neural network system are tagged with metadata. A scheduler detects whether there are neurons that are available to execute. The scheduler uses the metadata to schedule and load-balance computations across the available compute resources.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nicholas Malaya, Yasuko Eckert
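The following is a loose sketch, with invented resource and metadata names, of how a scheduler might use per-kernel metadata (here, a cost tag) to load-balance work across available compute resources in the spirit of the abstract above:

```python
from collections import defaultdict

resources = {"cu0": 0.0, "cu1": 0.0}                 # compute resource -> accumulated load
work_items = [
    {"name": "conv1", "metadata": {"cost": 4.0}},    # metadata tags attached to kernels/layers
    {"name": "fc1",   "metadata": {"cost": 1.0}},
    {"name": "conv2", "metadata": {"cost": 3.0}},
]

assignment = defaultdict(list)
for item in sorted(work_items, key=lambda w: -w["metadata"]["cost"]):
    target = min(resources, key=resources.get)       # least-loaded available resource
    resources[target] += item["metadata"]["cost"]
    assignment[target].append(item["name"])

print(dict(assignment))   # e.g. {'cu0': ['conv1'], 'cu1': ['conv2', 'fc1']}
```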
  • Patent number: 11880277
    Abstract: Selecting an error correction code type for a memory device includes: selecting, by the memory device in dependence upon predefined selection criteria, one of a plurality of error correction code types; and carrying out memory access requests utilizing the selected error correction code type.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 23, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Sudhanva Gurumurthi, Vilas Sridharan
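A small illustrative sketch of the selection step: the memory device applies predefined criteria (here, a hypothetical error-rate threshold) to pick one of several ECC types and then services accesses with it. The criteria and ECC type names are assumptions, not taken from the patent:

```python
def select_ecc_type(observed_error_rate: float, criteria: dict) -> str:
    """Pick an ECC type according to predefined selection criteria (hypothetical)."""
    if observed_error_rate > criteria["strong_ecc_threshold"]:
        return "symbol-based"   # stronger correction, higher overhead
    return "sec-ded"            # single-error-correct, double-error-detect

criteria = {"strong_ecc_threshold": 1e-6}
ecc = select_ecc_type(observed_error_rate=5e-6, criteria=criteria)
print(f"servicing memory access requests with {ecc} ECC")
```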
  • Patent number: 11880769
    Abstract: A system is described that performs training operations for a neural network, the system including an analog circuit element functional block with an array of analog circuit elements, and a controller. The controller monitors error values computed using an output from each of one or more initial iterations of a neural network training operation, the one or more initial iterations being performed using neural network data acquired from the memory. When one or more error values are less than a threshold, the controller uses the neural network data from the memory to configure the analog circuit element functional block to perform remaining iterations of the neural network training operation. The controller then causes the analog circuit element functional block to perform the remaining iterations.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Sudhanva Gurumurthi
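A rough sketch of the control flow described above, not the patented circuit: run a few initial training iterations outside the analog block, monitor the error, and hand the remaining iterations to the analog circuit elements once the error falls below a threshold. The helper functions are stand-ins:

```python
def run_digital_iteration(weights):
    """Placeholder for one training step performed outside the analog block."""
    return [w * 0.9 for w in weights], sum(abs(w) for w in weights)

def configure_analog_block(weights):
    """Placeholder for loading network data into the analog circuit elements."""
    print("analog block configured with", len(weights), "weights")

def train(weights, threshold, max_initial_iters=10):
    for _ in range(max_initial_iters):
        weights, error = run_digital_iteration(weights)
        if error < threshold:                       # error low enough: hand off to analog
            configure_analog_block(weights)
            print("remaining iterations run on the analog circuit element block")
            return
    print("error never fell below the threshold; remaining iterations stay digital")

train([1.0, -2.0, 0.5], threshold=2.0)
```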
  • Patent number: 11881393
    Abstract: A system and method for efficiently creating layout for memory bit cells are described. In various implementations, cells of a library use Cross field effect transistors (FETs) that include vertically stacked gate all around (GAA) transistors with conducting channels oriented in an orthogonal direction between them. The channels of the vertically stacked transistors use opposite doping polarities. A first category of cells includes devices where each of the two devices in a particular vertical stack receive a same input signal. The second category of cells includes devices where the two devices in a particular vertical stack receive different input signals. The cells of the second category have a larger height dimension than the cells of the first category.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Richard T. Schultz
  • Patent number: 11880683
    Abstract: Systems, apparatuses, and methods for efficiently processing arithmetic operations are disclosed. A computing system includes a processor capable of executing single precision mathematical instructions on data sizes of M bits and half precision mathematical instructions on data sizes of N bits, which is less than M bits. At least two source operands with M bits indicated by a received instruction are read from a register file. If the instruction is a packed math instruction, at least a first source operand with a size of N bits less than M bits is selected from either a high portion or a low portion of one of the at least two source operands read from the register file. The instruction includes fields storing bits, each bit indicating the high portion or the low portion of a given source operand associated with a register identifier specified elsewhere in the instruction.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jiasheng Chen, Bin He, Yunxiao Zou, Michael J. Mantor, Radhakrishna Giduthuri, Eric J. Finger, Brian D. Emberling
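A minimal sketch, under assumed bit widths (M = 32, N = 16), of the operand-selection idea: per-operand bits in a packed math instruction choose the high or low N-bit half of each M-bit register value read from the register file:

```python
M, N = 32, 16   # assumed single-precision and half-precision operand widths

def select_half(reg_value: int, use_high: bool) -> int:
    """Return the high or low N-bit portion of an M-bit register value."""
    return (reg_value >> N) & ((1 << N) - 1) if use_high else reg_value & ((1 << N) - 1)

reg = 0xABCD1234                              # one M-bit source operand from the register file
print(hex(select_half(reg, use_high=False)))  # 0x1234: low half selected by the instruction bit
print(hex(select_half(reg, use_high=True)))   # 0xabcd: high half selected by the instruction bit
```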
  • Patent number: 11880926
    Abstract: A method, computer system, and a non-transitory computer-readable storage medium for performing primitive batch binning are disclosed. The method, computer system, and non-transitory computer-readable storage medium include techniques for generating a primitive batch from a plurality of primitives, computing respective bin intercepts for each of the plurality of primitives in the primitive batch, and shading the primitive batch by iteratively processing each of the respective bin intercepts computed until all of the respective bin intercepts are processed.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 23, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Michael Mantor, Laurent Lefebvre, Mark Fowler, Timothy Kelley, Mikko Alho, Mika Tuomi, Kiia Kallio, Patrick Klas Rudolf Buss, Jari Antero Komppa, Kaj Tuomi
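A simplified sketch, not the actual GPU pipeline, of primitive batch binning as summarized above: collect primitives into a batch, compute which screen-space bins each primitive intercepts, then process the batch one bin at a time. Bin size and bounding boxes are illustrative:

```python
BIN = 64   # bin size in pixels (illustrative)

def bin_intercepts(bbox):
    """Yield the (bx, by) bins that a primitive's bounding box touches."""
    x0, y0, x1, y1 = bbox
    for by in range(y0 // BIN, y1 // BIN + 1):
        for bx in range(x0 // BIN, x1 // BIN + 1):
            yield (bx, by)

batch = {"tri0": (10, 10, 70, 40), "tri1": (100, 20, 130, 90)}   # primitive -> bounding box
intercepts = {prim: set(bin_intercepts(bb)) for prim, bb in batch.items()}

for b in sorted(set().union(*intercepts.values())):              # iterate over computed intercepts
    prims = [p for p, bins in intercepts.items() if b in bins]
    print(f"shading bin {b}: {prims}")
```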
  • Publication number: 20240020173
    Abstract: Methods and systems are disclosed for distribution of a workload among nodes of a NUMA architecture. Techniques disclosed include receiving the workload and data batches, the data batches to be processed by the workload. Techniques disclosed further include assigning workload processes to the nodes according to a determined distribution, and, then, executing the workload according to the determined distribution. The determined distribution is selected out of a set of distributions, so that the execution time of the workload, when executed according to the determined distribution, is minimal.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 18, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Aditya Chatterjee
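A hand-wavy sketch of the distribution choice: among candidate placements of workload processes on NUMA nodes, pick the one with the smallest estimated execution time. The cost model below is invented purely for illustration:

```python
def estimated_time(distribution, batch_size):
    """Toy cost model: compute time shrinks with more nodes, cross-node traffic grows."""
    nodes = len(distribution)
    return batch_size / nodes + 0.1 * batch_size * (nodes - 1)

candidates = [("node0",), ("node0", "node1"), ("node0", "node1", "node2", "node3")]
best = min(candidates, key=lambda d: estimated_time(d, batch_size=100))
print("execute workload processes on:", best)
```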
  • Patent number: 11875197
    Abstract: Systems, apparatuses, and methods for managing a number of wavefronts permitted to concurrently execute in a processing system. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. The control unit is configured to allow no more than a second number of wavefronts to execute concurrently on the plurality of compute units, wherein the second number is less than the first number, in response to detection that thrashing of the register file is above a threshold. The control unit is configured to detect said thrashing based at least in part on a number of registers in use by executing wavefronts that spill to memory.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bradford Michael Beckmann, Steven Tony Tye, Brian L. Sumner, Nicolai Hähnle
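A sketch with invented limits and thresholds of the throttling rule described above: if register spilling (used as a proxy for register-file thrashing) exceeds a threshold, cap concurrent wavefronts at the lower second number:

```python
FIRST_LIMIT, SECOND_LIMIT = 40, 24   # the second (throttled) limit is less than the first
SPILL_THRESHOLD = 128                # spilled registers per sampling interval (invented)

def allowed_wavefronts(spilled_registers: int) -> int:
    """Cap concurrency at the lower limit when spilling indicates register-file thrashing."""
    return SECOND_LIMIT if spilled_registers > SPILL_THRESHOLD else FIRST_LIMIT

print(allowed_wavefronts(spilled_registers=16))    # 40: no thrashing detected
print(allowed_wavefronts(spilled_registers=300))   # 24: throttle concurrent wavefronts
```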
  • Patent number: 11875875
    Abstract: Methods and systems are disclosed for calibrating, by a memory interface system, an interface with dynamic random-access memory (DRAM) using a dynamically changing training clock. Techniques disclosed comprise receiving a system clock having a clock signal at a first pulse rate. Then, during the training of the interface, techniques disclosed comprise generating a training clock from the clock signal at the first pulse rate, the training clock having a clock signal at a second pulse rate, and sending, based on the generated training clock, command signals, including address data, to the DRAM.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anwar Kashem, Craig Daniel Eaton, Pouya Najafi Ashtiani
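A toy sketch, with an invented divider and command list, of the training-clock idea: derive a slower training clock from the system clock and issue command/address signals to the DRAM on its edges during interface training:

```python
SYSTEM_CLOCK_MHZ = 1600
DIVIDE_BY = 4                                       # second pulse rate = first pulse rate / 4
TRAINING_CLOCK_MHZ = SYSTEM_CLOCK_MHZ // DIVIDE_BY

commands = [("MRS", 0x10), ("ACT", 0x3F), ("RD", 0x3F)]   # (command, address) pairs to train with

for cycle in range(12):                             # count system-clock cycles
    if cycle % DIVIDE_BY == 0 and commands:         # a training-clock edge every DIVIDE_BY cycles
        cmd, addr = commands.pop(0)
        print(f"{TRAINING_CLOCK_MHZ} MHz training clock edge: send {cmd} addr={addr:#x}")
```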
  • Patent number: 11876718
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: January 16, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Narendra Kamat
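One guess at what graded throttling could look like (the tiers below are illustrative, not from the patent): the agent tracks its outstanding transactions and applies progressively stricter throttling policies as the count crosses higher thresholds:

```python
GRADES = [(64, "stop issuing"), (32, "issue every 4 cycles"), (16, "issue every 2 cycles")]

class Agent:
    """Network-on-chip agent that tracks its own outstanding transactions."""
    def __init__(self):
        self.outstanding = 0

    def issue(self):
        self.outstanding += 1

    def complete(self):
        self.outstanding -= 1

    def throttling_policy(self) -> str:
        for threshold, policy in GRADES:      # check the strictest grade first
            if self.outstanding >= threshold:
                return policy
        return "no throttling"

a = Agent()
for _ in range(40):
    a.issue()
print(a.throttling_policy())   # "issue every 4 cycles"
```

Keeping the count local to the agent is what lets each agent decide its own grade without a global arbiter.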
  • Patent number: 11874783
    Abstract: A coherent memory fabric includes a plurality of coherent master controllers and a coherent slave controller. The plurality of coherent master controllers each include a response data buffer. The coherent slave controller is coupled to the plurality of coherent master controllers. The coherent slave controller, responsive to determining a selected coherent block read command is guaranteed to have only one data response, sends a target request globally ordered message to the selected coherent master controller and transmits responsive data. The selected coherent master controller, responsive to receiving the target request globally ordered message, blocks any coherent probes to an address associated with the selected coherent block read command until receipt of the responsive data is acknowledged by a requesting client.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte, Eric Christopher Morton, Ganesh Balakrishnan, Ann M. Ling
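A very rough sketch of the master-side behavior summarized above: once the target request globally ordered message arrives for an address, coherent probes to that address are held back until the requesting client acknowledges the response data. The message and probe plumbing is invented:

```python
class CoherentMaster:
    def __init__(self):
        self.blocked_addresses = set()
        self.deferred_probes = []

    def on_globally_ordered(self, address):
        """Target request globally ordered message received for this address."""
        self.blocked_addresses.add(address)

    def on_probe(self, address):
        if address in self.blocked_addresses:
            self.deferred_probes.append(address)    # block the probe until data is acknowledged
        else:
            print("probe serviced for", hex(address))

    def on_data_acknowledged(self, address):
        """Requesting client acknowledged receipt of the responsive data."""
        self.blocked_addresses.discard(address)
        while address in self.deferred_probes:
            self.deferred_probes.remove(address)
            print("deferred probe now serviced for", hex(address))

m = CoherentMaster()
m.on_globally_ordered(0x1000)
m.on_probe(0x1000)               # deferred
m.on_data_acknowledged(0x1000)   # deferred probe drains
```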
  • Patent number: 11874739
    Abstract: A memory module includes one or more programmable ECC engines that may be programmed by a host processing element with a particular ECC implementation. As used herein, the term “ECC implementation” refers to ECC functionality for performing error detection and subsequent processing, for example using the results of the error detection to perform error correction and to encode corrupted data that cannot be corrected, etc. The approach allows an SoC designer or company to program and reprogram ECC engines in memory modules in a secure manner without having to disclose the particular ECC implementations used by the ECC engines to memory vendors or third parties.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sudhanva Gurumurthi, Vilas Sridharan, Shaizeen Aga, Nuwan Jayasena, Michael Ignatowski, Shrikanth Ganapathy, John Kalamatianos
  • Patent number: 11874774
    Abstract: A method includes, in response to each write request of a plurality of write requests received at a memory-side cache device coupled with a memory device, writing payload data specified by the write request to the memory-side cache device, and when a first bandwidth availability condition is satisfied, performing a cache write-through by writing the payload data to the memory device, and recording an indication that the payload data written to the memory-side cache device matches the payload data written to the memory device.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ravindra N. Bhargava, Ganesh Balakrishnan, Joe Sargunaraj, Chintan S. Patel, Girish Balaiah Aswathaiya, Vydhyanathan Kalyanasundharam
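A sketch under assumed names of the opportunistic write-through: every write lands in the memory-side cache, and when spare memory bandwidth is available the data is also written through to memory and the line is recorded as matching memory:

```python
cache, memory, clean = {}, {}, set()

def bandwidth_available() -> bool:
    """Stand-in for the first bandwidth availability condition."""
    return True

def handle_write(address, payload):
    cache[address] = payload                 # every write goes to the memory-side cache
    if bandwidth_available():
        memory[address] = payload            # opportunistic write-through
        clean.add(address)                   # record that cache and memory now match
    else:
        clean.discard(address)               # line is dirty and must be written back later

handle_write(0x40, b"payload")
print(0x40 in clean)                         # True: no writeback needed when this line is evicted
```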
  • Patent number: 11875425
    Abstract: Implementing heterogeneous wavefronts on a graphics processing unit (GPU) is disclosed. A scheduler assigns heterogeneous wavefronts for execution on a compute unit of a processing device. The heterogeneous wavefronts include different types of wavefronts such as vector compute wavefronts and service-level wavefronts that vary in resource requirements and instruction sets. As one example, heterogeneous wavefronts may include scalar wavefronts and vector compute wavefronts that execute on scalar units and vector units, respectively. Distinct sets of instructions are executed for the heterogeneous wavefronts on the compute unit. Heterogeneous wavefronts are processed in the same pipeline of the processing device.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: January 16, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Sooraj Puthoor, Bradford Beckmann, Nuwan Jayasena, Anthony Gutierrez
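A loose sketch with invented wavefront types and unit names: one scheduler feeds both scalar (service-level) and vector compute wavefronts through the same compute-unit pipeline, each running its own instruction stream on the matching execution unit:

```python
from collections import deque

pipeline = deque([
    {"kind": "vector", "program": "vector_compute_kernel"},
    {"kind": "scalar", "program": "service_level_routine"},
    {"kind": "vector", "program": "vector_compute_kernel_2"},
])

def execute(wavefront):
    """Run the wavefront's own instruction stream on the matching execution unit."""
    unit = "vector unit" if wavefront["kind"] == "vector" else "scalar unit"
    print(f"compute unit: {wavefront['program']} ({wavefront['kind']}) on the {unit}")

while pipeline:                              # heterogeneous wavefronts share one pipeline
    execute(pipeline.popleft())
```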
  • Patent number: 11869140
    Abstract: Improvements to graphics processing pipelines are disclosed. More specifically, the vertex shader stage, which performs vertex transformations, and the hull or geometry shader stages, are combined. If tessellation is disabled and geometry shading is enabled, then the graphics processing pipeline includes a combined vertex and geometry shader stage. If tessellation is enabled, then the graphics processing pipeline includes a combined vertex and hull shader stage. If tessellation and geometry shading are both disabled, then the graphics processing pipeline does not use a combined shader stage. The combined shader stages improve efficiency by reducing the number of executing shader program instances and the resources reserved for them.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mangesh P. Nijasure, Randy W. Ramsey, Todd Martin
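A small sketch of the stage-combining decision, using hypothetical stage names, that mirrors the three cases in the abstract: combined vertex+hull when tessellation is on, combined vertex+geometry when only geometry shading is on, and no combined stage when both are off:

```python
def pipeline_stages(tessellation_enabled: bool, geometry_enabled: bool) -> list:
    if tessellation_enabled:
        stages = ["vertex+hull", "domain"] + (["geometry"] if geometry_enabled else [])
    elif geometry_enabled:
        stages = ["vertex+geometry"]
    else:
        stages = ["vertex"]                  # no combined shader stage is used
    return stages + ["pixel"]

print(pipeline_stages(tessellation_enabled=False, geometry_enabled=True))   # combined vertex+geometry
print(pipeline_stages(tessellation_enabled=True,  geometry_enabled=False))  # combined vertex+hull
print(pipeline_stages(tessellation_enabled=False, geometry_enabled=False))  # separate vertex stage only
```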
  • Patent number: 11868306
    Abstract: A processing system includes a processing unit and a memory device. The memory device includes a processing-in-memory (PIM) module that performs processing operations on behalf of the processing unit. An instruction set architecture (ISA) of the PIM module has fewer instructions than an ISA of the processing unit. Instructions received from the processing unit are translated such that processing resources of the PIM module are virtualized. As a result, the PIM module concurrently performs processing operations for multiple threads or applications of the processing unit.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael L. Chu, Ashwin Aji, Muhammad Amber Hassaan
  • Patent number: 11868809
    Abstract: A processor includes a task scheduling unit and a compute unit coupled to the task scheduling unit. The task scheduling unit performs a task dependency assessment of a task dependency graph and task data requirements that correspond to each task of a plurality of tasks. Based on the task dependency assessment, the task scheduling unit schedules a first task of the plurality of tasks and a second proxy object of a plurality of proxy objects specified by the task data requirements such that a memory transfer of the second proxy object occurs while the first task is being executed.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Muhammad Amber Hassaan, Anirudh Mohan Kaushik, Sooraj Puthoor, Gokul Subramanian Ravi, Bradford Beckmann, Ashwin Aji
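An illustrative asyncio sketch, not the hardware scheduler, of the overlap described above: while the first task executes, the memory transfer for a proxy object needed by a later task proceeds concurrently, as implied by the task-dependency assessment. Task and proxy names are invented:

```python
import asyncio

async def run_task(name, seconds):
    print(f"executing {name}")
    await asyncio.sleep(seconds)
    print(f"{name} done")

async def transfer_proxy_object(name, seconds):
    print(f"transferring {name} to device memory")
    await asyncio.sleep(seconds)
    print(f"{name} transfer complete")

async def main():
    # Dependency assessment (assumed): task_b needs proxy_b, which task_a does not touch,
    # so proxy_b's memory transfer can overlap task_a's execution.
    await asyncio.gather(run_task("task_a", 0.2), transfer_proxy_object("proxy_b", 0.1))
    await run_task("task_b", 0.1)

asyncio.run(main())
```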
  • Patent number: 11868818
    Abstract: Techniques for selectively executing a lock instruction speculatively or non-speculatively based on lock address prediction and/or temporal lock prediction, including methods and devices for locking an entry in a memory device. In some techniques, a lock instruction executed by a thread for a particular memory entry of a memory device is detected. Whether contention occurred for the particular memory entry during an earlier speculative lock is detected on a condition that the lock instruction comprises a speculative lock instruction. The lock is executed non-speculatively if contention occurred for the particular memory entry during an earlier speculative lock. The lock is executed speculatively if contention did not occur for the particular memory entry during an earlier speculative lock.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gregory W. Smaus, John M. King, Matthew A. Rafacz, Matthew M. Crum
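A compact sketch, with invented structures, of the prediction rule: remember whether an address saw contention during an earlier speculative lock, and execute the next lock to that address speculatively only if it did not:

```python
contended_before = set()   # addresses whose earlier speculative lock encountered contention

def execute_lock(address: int, speculative_hint: bool) -> str:
    if speculative_hint and address not in contended_before:
        return "execute lock speculatively"
    return "execute lock non-speculatively"

def record_contention(address: int):
    contended_before.add(address)

print(execute_lock(0x80, speculative_hint=True))   # speculative: no prior contention recorded
record_contention(0x80)                            # contention observed during the speculative lock
print(execute_lock(0x80, speculative_hint=True))   # non-speculative this time
```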
  • Patent number: 11868778
    Abstract: Compacted addressing for transaction layer packets, including: determining, for a first epoch, one or more low entropy address bits in a plurality of first transaction layer packets; removing, from one or more memory addresses of one or more second transaction layer packets, the one or more low entropy address bits; and sending the one or more second transaction layer packets.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: January 9, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Ganesh Dasika, Sergey Blagodurov, Seyedmohammad Seyedzadehdelcheh
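A simplified sketch of the compaction step, with an invented low-entropy test: find address bits that never varied across the packets of a first epoch, then strip those bit positions from the addresses of subsequent packets before sending them:

```python
epoch_addresses = [0xFFF01000, 0xFFF01040, 0xFFF010C0]   # addresses seen in first-epoch packets

# Bits that never change across the epoch are "low entropy" and need not be transmitted.
constant_mask = ~0
for addr in epoch_addresses:
    constant_mask &= ~(addr ^ epoch_addresses[0])

def compact(address: int, mask: int, width: int = 32) -> int:
    """Pack only the varying (high-entropy) bits of an address into a smaller value."""
    out, out_pos = 0, 0
    for bit in range(width):
        if not (mask >> bit) & 1:            # keep only the bits that varied during the epoch
            out |= ((address >> bit) & 1) << out_pos
            out_pos += 1
    return out

print(hex(compact(0xFFF01080, constant_mask)))   # only the varying bits of the address remain
```

The receiver would need the same mask (or a negotiated one per epoch) to reinsert the stripped bits; that exchange is outside this sketch.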