Patents Assigned to Advanced Micro Devices
  • Patent number: 11880769
    Abstract: A system is described that performs training operations for a neural network, the system including a memory, an analog circuit element functional block with an array of analog circuit elements, and a controller. The controller monitors error values computed using an output from each of one or more initial iterations of a neural network training operation, the one or more initial iterations being performed using neural network data acquired from the memory. When one or more error values are less than a threshold, the controller uses the neural network data from the memory to configure the analog circuit element functional block to perform remaining iterations of the neural network training operation. The controller then causes the analog circuit element functional block to perform the remaining iterations.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Sudhanva Gurumurthi
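    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of the error-monitored handoff the abstract describes, where digital training runs until the error drops below a threshold and the remaining iterations are handed to the analog block; the helper names, threshold, and iteration counts are assumptions for illustration only.
      import random

      def run_digital_iteration(step):
          # Stand-in for one digital training iteration; returns a decreasing error.
          return 1.0 / (step + 1) + random.uniform(0.0, 0.01)

      class AnalogArray:
          # Stand-in for the analog circuit element functional block.
          def __init__(self, network_data):
              self.network_data = network_data
          def run(self, iterations):
              print(f"analog block performing {iterations} remaining iterations")

      def train(network_data, total_iters=100, threshold=0.05, initial_iters=40):
          for i in range(initial_iters):
              error = run_digital_iteration(i)
              if error < threshold:
                  # Configure the analog block with the current network data and
                  # let it perform the remaining iterations.
                  AnalogArray(network_data).run(total_iters - (i + 1))
                  return
          print("error never fell below threshold; training stays digital")

      train(network_data={"weights": [0.0] * 8})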
  • Patent number: 11880310
    Abstract: A processor includes a cache having two or more test regions and a larger non-test region. The processor further includes a cache controller that applies different cache replacement policies to the different test regions of the cache, and a performance monitor that measures performance metrics for the different test regions, such as a cache hit rate at each test region. Based on the performance metrics, the cache controller selects a cache replacement policy for the non-test region, such as selecting the replacement policy associated with the test region having the better performance metrics among the different test regions. The processor deskews the memory access measurements in response to a difference in the number of accesses to the different test regions exceeding a threshold.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Paul Moyer, John Kelley
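    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of the selection idea in the abstract, with two tiny test regions running different replacement policies (LRU vs. FIFO stand-ins), a hit-rate comparison, and the deskew check reduced to a simple access-count comparison; all names and thresholds are assumptions.
      from collections import OrderedDict

      class Region:
          """Tiny test region: 'lru' moves hits to the back, 'fifo' does not."""
          def __init__(self, policy, capacity=4):
              self.policy, self.capacity = policy, capacity
              self.data = OrderedDict()
              self.hits = self.accesses = 0
          def access(self, addr):
              self.accesses += 1
              if addr in self.data:
                  self.hits += 1
                  if self.policy == "lru":
                      self.data.move_to_end(addr)
              else:
                  if len(self.data) >= self.capacity:
                      self.data.popitem(last=False)   # evict oldest entry
                  self.data[addr] = True
          def hit_rate(self):
              return self.hits / self.accesses if self.accesses else 0.0

      lru, fifo = Region("lru"), Region("fifo")
      for addr in [1, 2, 3, 1, 4, 5, 1, 2, 3, 1, 4, 5]:
          lru.access(addr)
          fifo.access(addr)

      # Crude analogue of the deskew check: only pick a winner when the two
      # regions have seen a comparable number of accesses.
      if abs(lru.accesses - fifo.accesses) <= 2:
          winner = "LRU" if lru.hit_rate() >= fifo.hit_rate() else "FIFO"
          print("non-test region adopts", winner)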
  • Patent number: 11880277
    Abstract: Selecting an error correction code type for a memory device includes: selecting, by the memory device in dependence upon predefined selection criteria, one of a plurality of error correction code types and carrying out memory access requests utilizing the selected error correction code type.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sudhanva Gurumurthi, Vilas Sridharan
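    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of selecting an ECC type from predefined criteria and carrying out accesses with the selected type; the ECC names, the error-rate criterion, and the threshold are assumptions for illustration.
      # Hypothetical selection criterion: use a stronger code once the observed
      # raw bit error rate crosses a threshold; otherwise use a lighter one.
      ECC_TYPES = {
          "SEC-DED": {"correctable_bits": 1, "overhead": "low"},
          "DEC-TED": {"correctable_bits": 2, "overhead": "high"},
      }

      def select_ecc_type(raw_bit_error_rate, threshold=1e-6):
          return "DEC-TED" if raw_bit_error_rate > threshold else "SEC-DED"

      def handle_access(request, raw_bit_error_rate):
          ecc = select_ecc_type(raw_bit_error_rate)
          # A real device would encode/decode with the selected code here.
          return f"{request} carried out using {ecc} ({ECC_TYPES[ecc]['correctable_bits']}-bit correction)"

      print(handle_access("read 0x1000", raw_bit_error_rate=3e-6))
      print(handle_access("write 0x2000", raw_bit_error_rate=1e-8))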
  • Patent number: 11880312
    Abstract: A method includes storing a function representing a set of data elements stored in a backing memory and, in response to a first memory read request for a first data element of the set of data elements, calculating a function result representing the first data element based on the function.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Kishore Punniyamurthy, SeyedMohammad SeyedzadehDelcheh, Sergey Blagodurov, Ganesh Dasika, Jagadish B Kotra
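    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of representing data elements with a stored function and answering a read by evaluating that function; the assumption that the data fits a simple linear function of its index is purely for illustration.
      def fit_function(elements):
          # Assume the data is well described by value(i) = base + stride * i
          # (e.g., a regular index or pointer sequence).
          base = elements[0]
          stride = elements[1] - elements[0]
          return {"base": base, "stride": stride}

      def read_element(func, index):
          # Service a read request by evaluating the stored function instead of
          # fetching the element from backing memory.
          return func["base"] + func["stride"] * index

      data = [100, 104, 108, 112, 116, 120]     # elements in backing memory
      func = fit_function(data)                  # stored in place of the data
      print(read_element(func, 3), data[3])      # both print 112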
  • Patent number: 11881393
    Abstract: A system and method for efficiently creating layout for memory bit cells are described. In various implementations, cells of a library use Cross field-effect transistors (FETs) that include vertically stacked gate-all-around (GAA) transistors whose conducting channels are oriented orthogonally to one another. The channels of the vertically stacked transistors use opposite doping polarities. A first category of cells includes devices where each of the two devices in a particular vertical stack receives the same input signal. The second category of cells includes devices where the two devices in a particular vertical stack receive different input signals. The cells of the second category have a larger height dimension than the cells of the first category.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Richard T. Schultz
  • Patent number: 11881450
    Abstract: A system and method for fabricating on-die metal-insulator-metal capacitors capable of supporting relatively high voltage applications and increasing capacitance per area are described. In various implementations, an integrated circuit includes multiple metal-insulator-metal (MIM) capacitors. The MIM capacitors are formed between two signal nets such as two different power rails, two different control signals, or two different data signals. The integrated circuit includes multiple intermediate metal layers (or metal plates) formed between two signal nets. In high voltage regions, a MIM capacitor has one or more intermediate metal plates formed as floating plates between electrode metal plates. The floating plates have no connection to any power supply reference voltage level used by the integrated circuit.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Regina Tien Schmidt
  • Patent number: 11880926
    Abstract: A method, computer system, and a non-transitory computer-readable storage medium for performing primitive batch binning are disclosed. The method, computer system, and non-transitory computer-readable storage medium include techniques for generating a primitive batch from a plurality of primitives, computing respective bin intercepts for each of the plurality of primitives in the primitive batch, and shading the primitive batch by iteratively processing each of the respective bin intercepts computed until all of the respective bin intercepts are processed.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 23, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Michael Mantor, Laurent Lefebvre, Mark Fowler, Timothy Kelley, Mikko Alho, Mika Tuomi, Kiia Kallio, Patrick Klas Rudolf Buss, Jari Antero Komppa, Kaj Tuomi
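    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of primitive batch binning as the abstract outlines it, computing which screen-space bins each primitive's bounding box intercepts and then shading the batch bin by bin; the bin size, coordinates, and function names are assumptions.
      def bin_intercepts(primitive, bin_size=16):
          """Return the set of (x, y) bins that a primitive's bounding box touches."""
          xs = [v[0] for v in primitive]
          ys = [v[1] for v in primitive]
          return {(bx, by)
                  for bx in range(min(xs) // bin_size, max(xs) // bin_size + 1)
                  for by in range(min(ys) // bin_size, max(ys) // bin_size + 1)}

      def shade_batch(batch):
          # Compute intercepts for every primitive in the batch, then process the
          # batch one bin at a time until all intercepts have been consumed.
          intercepts = {i: bin_intercepts(p) for i, p in enumerate(batch)}
          for b in sorted(set().union(*intercepts.values())):
              hit = [i for i, bins in intercepts.items() if b in bins]
              print(f"bin {b}: shading primitives {hit}")

      triangles = [[(2, 2), (30, 4), (10, 28)], [(40, 40), (60, 44), (50, 70)]]
      shade_batch(triangles)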
  • Patent number: 11880610
    Abstract: A cluster compute server stores different types of data at different storage volumes in order to reduce data duplication at the storage volumes. The storage volumes are categorized into two classes: common storage volumes and dedicated storage volumes, wherein the common storage volumes store data to be accessed and used by multiple compute nodes (or multiple virtual servers) of the cluster compute server. The dedicated storage volumes, in contrast, store data to be accessed only by a corresponding compute node (or virtual server).
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mauricio Breternitz, Jr., Leonardo Piga
  • Patent number: 11880683
    Abstract: Systems, apparatuses, and methods for efficiently processing arithmetic operations are disclosed. A computing system includes a processor capable of executing single precision mathematical instructions on data sizes of M bits and half precision mathematical instructions on data sizes of N bits, which is less than M bits. At least two source operands with M bits indicated by a received instruction are read from a register file. If the instruction is a packed math instruction, at least a first source operand with a size of N bits less than M bits is selected from either a high portion or a low portion of one of the at least two source operands read from the register file. The instruction includes fields storing bits, each bit indicating the high portion or the low portion of a given source operand associated with a register identifier specified elsewhere in the instruction.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jiasheng Chen, Bin He, Yunxiao Zou, Michael J. Mantor, Radhakrishna Giduthuri, Eric J. Finger, Brian D. Emberling
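    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of selecting the high or low N-bit portion of an M-bit source register for a packed-math operation, driven by per-operand selection bits as the abstract describes; the choice of M=32, N=16, register names, and the opsel encoding are assumptions.
      M, N = 32, 16   # assumed: 32-bit registers holding two packed 16-bit values

      def select_half(reg_value, use_high):
          # Pick the high or low N-bit portion of an M-bit register value.
          return (reg_value >> N) & 0xFFFF if use_high else reg_value & 0xFFFF

      # A packed-math instruction names two source registers plus one selection
      # bit per operand (here opsel = [1, 0]: high half of v0, low half of v1).
      registers = {"v0": 0xAAAA1111, "v1": 0xBBBB2222}
      opsel = [1, 0]
      src0 = select_half(registers["v0"], opsel[0])
      src1 = select_half(registers["v1"], opsel[1])
      print(hex(src0), hex(src1))   # 0xaaaa 0x2222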
  • Patent number: 11880260
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power and delivering lower performance than the first processor.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Elliot H. Mednick, Edward McLellan
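    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of placing a thread on the lower-feature processor only when that processor supports every ISA feature the thread uses, otherwise falling back to the full-feature processor; the feature names and core labels are made up for illustration.
      BIG_CORE_FEATURES = {"base", "simd128", "simd256", "crypto"}   # full ISA
      LITTLE_CORE_FEATURES = {"base", "simd128"}                     # subset

      def place_thread(features_used):
          # Run on the lower-power core when it supports everything the thread
          # needs; otherwise use the core that implements the full feature set.
          if features_used <= LITTLE_CORE_FEATURES:
              return "lower-feature core (less power, lower performance)"
          return "full-feature core"

      print(place_thread({"base", "simd128"}))
      print(place_thread({"base", "simd256"}))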
  • Publication number: 20240020173
    Abstract: Methods and systems are disclosed for distribution of a workload among nodes of a NUMA architecture. Techniques disclosed include receiving the workload and data batches, the data batches to be processed by the workload. Techniques disclosed further include assigning workload processes to the nodes according to a determined distribution, and, then, executing the workload according to the determined distribution. The determined distribution is selected out of a set of distributions, so that the execution time of the workload, when executed according to the determined distribution, is minimal.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 18, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Aditya Chatterjee
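    Illustrative sketch (not part of the publication): a minimal, runnable Python sketch of picking, from a candidate set of process-to-node distributions, the one with minimal predicted execution time; the cost model (slowest node dominates, small remote-node penalty) and the candidate distributions are assumptions for illustration.
      def predicted_runtime(distribution, work_per_process=1.0):
          # Made-up cost model: the most loaded node determines the runtime, and
          # every process placed off node 0 pays a small remote-access penalty.
          per_node = {}
          for process, node in enumerate(distribution):
              penalty = 1.1 if node != 0 else 1.0
              per_node[node] = per_node.get(node, 0.0) + work_per_process * penalty
          return max(per_node.values())

      # Candidate distributions: element i is the NUMA node assigned to process i.
      candidates = [
          (0, 0, 0, 0),       # everything on node 0
          (0, 0, 1, 1),       # split across two nodes
          (0, 1, 2, 3),       # one process per node
      ]
      best = min(candidates, key=predicted_runtime)
      print("chosen distribution:", best)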
  • Patent number: 11875197
    Abstract: Systems, apparatuses, and methods for managing a number of wavefronts permitted to concurrently execute in a processing system. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. The control unit is configured to allow no more than a second number of wavefronts to execute concurrently on the plurality of compute units, wherein the second number is less than the first number, in response to detection that thrashing of the register file is above a threshold. The control unit is configured to detect said thrashing based at least in part on a number of registers in use by executing wavefronts that spill to memory.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bradford Michael Beckmann, Steven Tony Tye, Brian L. Sumner, Nicolai Hähnle
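    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of lowering the permitted number of concurrent wavefronts when register spills to memory indicate register-file thrashing; the limits and threshold values are assumptions for illustration.
      def allowed_wavefronts(spilled_registers, normal_limit=40, reduced_limit=24,
                             thrash_threshold=64):
          # If executing wavefronts are spilling too many registers to memory,
          # treat the register file as thrashing and lower the concurrency cap.
          if spilled_registers > thrash_threshold:
              return reduced_limit
          return normal_limit

      print(allowed_wavefronts(spilled_registers=12))    # 40: no thrashing detected
      print(allowed_wavefronts(spilled_registers=200))   # 24: throttled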
  • Patent number: 11875425
    Abstract: Implementing heterogeneous wavefronts on a graphics processing unit (GPU) is disclosed. A scheduler assigns heterogeneous wavefronts for execution on a compute unit of a processing device. The heterogeneous wavefronts include different types of wavefronts such as vector compute wavefronts and service-level wavefronts that vary in resource requirements and instruction sets. As one example, heterogeneous wavefronts may include scalar wavefronts and vector compute wavefronts that execute on scalar units and vector units, respectively. Distinct sets of instructions are executed for the heterogeneous wavefronts on the compute unit. Heterogeneous wavefronts are processed in the same pipeline of the processing device.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sooraj Puthoor, Bradford Beckmann, Nuwan Jayasena, Anthony Gutierrez
  • Patent number: 11875875
    Abstract: Methods and systems are disclosed for calibrating, by a memory interface system, an interface with dynamic random-access memory (DRAM) using a dynamically changing training clock. The disclosed techniques comprise receiving a system clock having a clock signal at a first pulse rate and then, during training of the interface, generating a training clock from the clock signal at the first pulse rate, the training clock having a clock signal at a second pulse rate, and sending, based on the generated training clock, command signals, including address data, to the DRAM.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anwar Kashem, Craig Daniel Eaton, Pouya Najafi Ashtiani
  • Patent number: 11874739
    Abstract: A memory module includes one or more programmable ECC engines that may be programmed by a host processing element with a particular ECC implementation. As used herein, the term “ECC implementation” refers to ECC functionality for performing error detection and subsequent processing, for example using the results of the error detection to perform error correction and to encode corrupted data that cannot be corrected, etc. The approach allows an SoC designer or company to program and reprogram ECC engines in memory modules in a secure manner without having to disclose the particular ECC implementations used by the ECC engines to memory vendors or third parties.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sudhanva Gurumurthi, Vilas Sridharan, Shaizeen Aga, Nuwan Jayasena, Michael Ignatowski, Shrikanth Ganapathy, John Kalamatianos
  • Patent number: 11876718
    Abstract: Graded throttling for network-on-chip traffic, including: calculating, by an agent of a network-on-chip, a number of outstanding transactions issued by the agent; determining that the number of outstanding transactions meets a threshold; and implementing, by the agent, in response to the number of outstanding transactions meeting the threshold, a traffic throttling policy.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Narendra Kamat
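    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of graded throttling, where an agent counts its outstanding transactions and applies an increasingly aggressive throttling policy as the count crosses successive thresholds; the thresholds and policy names are assumptions.
      def throttle_policy(outstanding):
          # Graded throttling: the more outstanding transactions an agent has,
          # the more it restricts new requests (thresholds are illustrative).
          if outstanding >= 48:
              return "stall new requests"
          if outstanding >= 32:
              return "issue at 1/4 rate"
          if outstanding >= 16:
              return "issue at 1/2 rate"
          return "no throttling"

      for count in (4, 20, 40, 60):
          print(count, "outstanding ->", throttle_policy(count))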
  • Patent number: 11874783
    Abstract: A coherent memory fabric includes a plurality of coherent master controllers and a coherent slave controller. The plurality of coherent master controllers each include a response data buffer. The coherent slave controller is coupled to the plurality of coherent master controllers. The coherent slave controller, responsive to determining a selected coherent block read command is guaranteed to have only one data response, sends a target request globally ordered message to the selected coherent master controller and transmits responsive data. The selected coherent master controller, responsive to receiving the target request globally ordered message, blocks any coherent probes to an address associated with the selected coherent block read command until receipt of the responsive data is acknowledged by a requesting client.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte, Eric Christopher Morton, Ganesh Balakrishnan, Ann M. Ling
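    Illustrative sketch (not part of the patent record): a much-simplified, runnable Python sketch of the master-side behavior the abstract describes, in which a "target request globally ordered" notice causes coherent probes to an address to be deferred until the requesting client acknowledges the response data; class and method names are assumptions.
      class CoherentMaster:
          def __init__(self):
              self.blocked_addresses = set()
          def on_target_request_globally_ordered(self, address):
              # Block coherent probes to this address until the requesting
              # client acknowledges receipt of the response data.
              self.blocked_addresses.add(address)
          def on_probe(self, address):
              if address in self.blocked_addresses:
                  return "probe deferred"
              return "probe processed"
          def on_client_ack(self, address):
              self.blocked_addresses.discard(address)

      master = CoherentMaster()
      master.on_target_request_globally_ordered(0x80)   # slave's ordering message
      print(master.on_probe(0x80))                       # probe deferred
      master.on_client_ack(0x80)                         # data receipt acknowledged
      print(master.on_probe(0x80))                       # probe processed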
  • Patent number: 11874774
    Abstract: A method includes, in response to each write request of a plurality of write requests received at a memory-side cache device coupled with a memory device, writing payload data specified by the write request to the memory-side cache device, and when a first bandwidth availability condition is satisfied, performing a cache write-through by writing the payload data to the memory device, and recording an indication that the payload data written to the memory-side cache device matches the payload data written to the memory device.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: January 16, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ravindra N. Bhargava, Ganesh Balakrishnan, Joe Sargunaraj, Chintan S. Patel, Girish Balaiah Aswathaiya, Vydhyanathan Kalyanasundharam
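    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of the opportunistic write-through in the abstract, where every write lands in the memory-side cache and, when bandwidth is available, is also written to memory with a record that the two copies match; the class layout and flag names are assumptions.
      class MemorySideCache:
          def __init__(self):
              self.lines = {}    # address -> {"data": ..., "matches_memory": bool}
              self.memory = {}
          def write(self, address, payload, bandwidth_available):
              self.lines[address] = {"data": payload, "matches_memory": False}
              if bandwidth_available:
                  # Opportunistic write-through: copy to memory now and record
                  # that the cached copy matches the memory copy.
                  self.memory[address] = payload
                  self.lines[address]["matches_memory"] = True

      cache = MemorySideCache()
      cache.write(0x100, "A", bandwidth_available=True)    # written through
      cache.write(0x200, "B", bandwidth_available=False)   # stays dirty in cache
      print(cache.lines[0x100]["matches_memory"], cache.lines[0x200]["matches_memory"])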
  • Patent number: 11869140
    Abstract: Improvements to graphics processing pipelines are disclosed. More specifically, the vertex shader stage, which performs vertex transformations, is combined with the hull or geometry shader stage. If tessellation is disabled and geometry shading is enabled, then the graphics processing pipeline includes a combined vertex and geometry shader stage. If tessellation is enabled, then the graphics processing pipeline includes a combined vertex and hull shader stage. If tessellation and geometry shading are both disabled, then the graphics processing pipeline does not use a combined shader stage. The combined shader stages improve efficiency by reducing the number of executing instances of shader programs and the associated resources reserved.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mangesh P. Nijasure, Randy W. Ramsey, Todd Martin
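    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of the stage-selection logic the abstract states, returning which front-end shader stage configuration is used for each combination of tessellation and geometry-shading enables; downstream pipeline stages are deliberately omitted.
      def front_end_stage(tessellation_enabled, geometry_enabled):
          # Mirrors the selection described in the abstract.
          if tessellation_enabled:
              return "combined vertex + hull shader stage"
          if geometry_enabled:
              return "combined vertex + geometry shader stage"
          return "separate vertex shader stage (no combined stage)"

      for tess, geom in [(True, False), (True, True), (False, True), (False, False)]:
          print(f"tess={tess}, geom={geom} -> {front_end_stage(tess, geom)}")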
  • Patent number: 11868306
    Abstract: A processing system includes a processing unit and a memory device. The memory device includes a processing-in-memory (PIM) module that performs processing operations on behalf of the processing unit. An instruction set architecture (ISA) of the PIM module has fewer instructions than an ISA of the processing unit. Instructions received from the processing unit are translated such that processing resources of the PIM module are virtualized. As a result, the PIM module concurrently performs processing operations for multiple threads or applications of the processing unit.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael L. Chu, Ashwin Aji, Muhammad Amber Hassaan
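    Illustrative sketch (not part of the patent record): a minimal, runnable Python sketch of one way PIM resources could be virtualized, mapping each thread's virtual register indices onto a disjoint physical range so multiple threads can use the module concurrently; the class, register-file size, and operation names are assumptions for illustration.
      class PIMModule:
          """Stand-in PIM module with a small register file shared by threads."""
          REGS_PER_THREAD = 4
          def __init__(self):
              self.registers = [0] * 16
              self.base_for_thread = {}      # thread id -> physical base register
          def translate(self, thread_id, virtual_reg):
              # Virtualize PIM resources: each thread's virtual registers map to
              # a disjoint physical range, so threads can run concurrently.
              if thread_id not in self.base_for_thread:
                  self.base_for_thread[thread_id] = len(self.base_for_thread) * self.REGS_PER_THREAD
              return self.base_for_thread[thread_id] + virtual_reg
          def execute(self, thread_id, op, virtual_reg, value):
              phys = self.translate(thread_id, virtual_reg)
              if op == "add":
                  self.registers[phys] += value
              return phys, self.registers[phys]

      pim = PIMModule()
      print(pim.execute(thread_id=0, op="add", virtual_reg=1, value=5))   # (1, 5)
      print(pim.execute(thread_id=1, op="add", virtual_reg=1, value=7))   # (5, 7)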