Patents Assigned to Advanced Micro Devices, Inc.
  • Patent number: 11893502
    Abstract: A system assigns experts of a mixture-of-experts artificial intelligence model to processing devices in an automated manner. The system includes an orchestrator component that maintains priority data that stores, for each of a set of experts, and for each of a set of execution parameters, ranking information that ranks different processing devices for the particular execution parameter. In one example, for the execution parameter of execution speed, and for a first expert, the priority data indicates that a central processing unit (“CPU”) executes the first expert faster than a graphics processing unit (“GPU”). In this example, for the execution parameter of power consumption, and for the first expert, the priority data indicates that a GPU uses less power than a CPU. The priority data stores such information for one or more processing devices, one or more experts, and one or more execution characteristics.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: February 6, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nicholas Malaya, Nuwan Jayasena
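    The following minimal Python sketch illustrates the kind of lookup the abstract describes: per-expert, per-parameter rankings of processing devices that an orchestrator consults when placing an expert. The table contents, device names, and function are illustrative assumptions, not taken from the patent.
      # Hypothetical priority data: for each (expert, execution parameter),
      # processing devices are ranked from most to least preferred.
      priority_data = {
          ("expert_0", "speed"): ["CPU", "GPU"],   # CPU runs expert_0 faster
          ("expert_0", "power"): ["GPU", "CPU"],   # GPU uses less power here
          ("expert_1", "speed"): ["GPU", "CPU"],
          ("expert_1", "power"): ["GPU", "CPU"],
      }

      def assign_expert(expert: str, parameter: str) -> str:
          """Return the highest-ranked device for the given expert and parameter."""
          return priority_data[(expert, parameter)][0]

      # Optimizing expert_0 for power consumption picks the GPU.
      print(assign_expert("expert_0", "power"))  # -> "GPU"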
  • Publication number: 20240037031
    Abstract: A technique for operating a device is disclosed. The technique includes recording log data for the device; analyzing the log data to determine one or more performance settings adjustments to apply to the device; and applying the one or more performance settings adjustments to the device.
    Type: Application
    Filed: September 28, 2023
    Publication date: February 1, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Christopher J. Brennan, Akshay Lahiry
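    A hedged Python sketch of the record/analyze/apply loop in the abstract; the log fields, temperature threshold, and setting name are assumptions made for illustration.
      # Hypothetical log entries: (timestamp, temperature_C, utilization_pct)
      log = [(0, 78, 95), (1, 84, 98), (2, 86, 99)]

      def analyze(entries):
          """Derive performance-setting adjustments from recorded log data."""
          avg_temp = sum(t for _, t, _ in entries) / len(entries)
          # Assumed policy: back off the power limit when the device runs hot.
          return {"power_limit_w": -5} if avg_temp > 80 else {}

      def apply_adjustments(settings, adjustments):
          for key, delta in adjustments.items():
              settings[key] = settings.get(key, 0) + delta
          return settings

      settings = apply_adjustments({"power_limit_w": 45}, analyze(log))
      print(settings)  # -> {'power_limit_w': 40}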
  • Publication number: 20240036748
    Abstract: A phase training update circuit operates to perform a phase training update on individual bit lanes. The phase training update circuit adjusts a bit lane transmit phase offset forward by a designated number of phase steps, transmits a training pattern, and determines a first number of errors in the transmission. It also adjusts the bit lane transmit phase offset backward by the designated number of phase steps, transmits the training pattern, and determines a second number of errors in the transmission. Responsive to a difference between the first number of errors and the second number of errors, the phase training update circuit adjusts a center phase position for the bit lane transmit phase offset of the selected bit lane.
    Type: Application
    Filed: October 11, 2023
    Publication date: February 1, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Scott P. Murphy, Huuhau M. Do
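    A simplified Python model of the per-lane search the abstract describes: step the transmit phase forward and backward by the same number of steps, count training-pattern errors each way, and nudge the center phase toward the side with fewer errors. The error-count function is a stand-in for a real link measurement.
      def count_errors_at(phase_offset: int) -> int:
          """Stand-in for sending the training pattern at a phase offset and
          counting bit errors; assumes an ideal phase of 32 for the example."""
          return abs(phase_offset - 32)

      def phase_training_update(center: int, steps: int = 4) -> int:
          """One phase-training update for a single bit lane."""
          errors_fwd = count_errors_at(center + steps)
          errors_back = count_errors_at(center - steps)
          if errors_fwd < errors_back:     # fewer errors ahead: move forward
              return center + 1
          if errors_back < errors_fwd:     # fewer errors behind: move backward
              return center - 1
          return center                    # balanced: center stays put

      center = 28
      for _ in range(8):
          center = phase_training_update(center)
      print(center)  # converges toward the assumed ideal phase of 32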
  • Patent number: 11886878
    Abstract: An integrated coprocessor such as an accelerated processing unit (APU) generates commands for execution on a discrete coprocessor such as a discrete graphics processing unit (dGPU). Power distribution circuitry selectively provides power to the APU and the dGPU based on characteristics of workloads executing on the APU and the dGPU and based on a platform power limit that is shared by the APU and the dGPU. In some cases, the power distribution circuitry determines a first power provided to the APU and a second power provided to the dGPU. The power distribution circuitry increases the second power provided to the dGPU in response to a sum of the first and second powers being less than the platform power limit. In some cases, the power distribution circuitry modifies the power provided to the APU, the dGPU, or both in response to changes in temperatures measured by a set of sensors.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 30, 2024
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Sukesh Shenoy, Adam N. C. Clark, Indrani Paul
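    A rough Python sketch of the budgeting rule described above: track the power granted to the APU and the dGPU against a shared platform limit and raise the dGPU allocation only while the combined total stays under that limit. The wattages are illustrative.
      PLATFORM_LIMIT_W = 65.0   # shared platform power limit (assumed value)

      def redistribute(apu_w: float, dgpu_w: float, step: float = 1.0):
          """Grow the dGPU budget while the sum remains below the limit."""
          if apu_w + dgpu_w + step <= PLATFORM_LIMIT_W:
              dgpu_w += step
          return apu_w, dgpu_w

      apu, dgpu = 20.0, 35.0
      for _ in range(20):
          apu, dgpu = redistribute(apu, dgpu)
      print(apu, dgpu)  # dGPU rises until the 65 W platform limit is reached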
  • Patent number: 11886370
    Abstract: A semiconductor package includes multiple dies that share the same package pin. An output enable register provided on each die is used to select the die that drives an output to the shared pin. A hardware arbitration circuit ensures that two or more dies do not drive an output to the shared pin at the same time.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: January 30, 2024
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Yulei Shen, Tyrone Tung Huang, Chen-Kuan Hong
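    A small Python model of the arbitration rule: each die exposes an output-enable bit, and the arbiter grants the shared pin to at most one die at a time. The lowest-index-wins policy is an assumption for the sketch.
      def drive_shared_pin(output_enables):
          """Grant the shared package pin to at most one requesting die."""
          for die, enabled in enumerate(output_enables):
              if enabled:
                  return die        # this die drives the pin
          return None               # no die requested the pin; leave it undriven

      print(drive_shared_pin([False, True, True]))  # -> 1; die 2 is held off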
  • Patent number: 11886224
    Abstract: A processing unit of a processing system compiles a priority queue listing of a plurality of processor cores to run a workload based on a cost of running the workload on each of the processor cores. The cost is based on at least one of a system usage policy, characteristics of the workload, and one or more physical constraints of each processor core. The processing unit selects a processor core based on the cost to run the workload and communicates an identifier of the selected processor core to an operating system of the processing system.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: January 30, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Leonardo De Paula Rosa Piga, Karthik Rao, Indrani Paul, Mahesh Subramony, Kenneth Mitchell, Dana Glenn Lewis, Sriram Sambamurthy, Wonje Choi
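    A compact Python sketch of building a cost-ordered queue of candidate cores and reporting the cheapest one to the operating system; the cost model combining workload intensity with per-core frequency and temperature is entirely assumed.
      import heapq

      cores = [
          # (core_id, max_freq_ghz, temperature_C)
          (0, 4.4, 70),
          (1, 4.7, 85),
          (2, 4.1, 55),
      ]

      def cost(core, workload_intensity: float) -> float:
          """Assumed cost model: prefer fast, cool cores for heavy workloads."""
          _, freq, temp = core
          return workload_intensity / freq + temp / 100.0

      def select_core(workload_intensity: float) -> int:
          queue = [(cost(c, workload_intensity), c[0]) for c in cores]
          heapq.heapify(queue)        # priority queue ordered by cost
          return queue[0][1]          # core identifier reported to the OS

      print(select_core(workload_intensity=2.0))  # -> 2 with these numbers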
  • Publication number: 20240029336
    Abstract: Techniques for executing computing work by a plurality of chiplets are provided. The techniques include assigning workgroups of a kernel dispatch packet to the chiplets; by each chiplet, executing the workgroups assigned to that chiplet; for each chiplet, upon completion of all workgroups assigned to that chiplet for the kernel dispatch packet, notifying the other chiplets of such completion; and upon completion of all workgroups of the kernel dispatch packet, notifying a client of such completion and proceeding to a subsequent kernel dispatch packet.
    Type: Application
    Filed: October 3, 2023
    Publication date: January 25, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Milind N. Nemlekar, Maxim V. Kazakov, Prerit Dak
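    A hedged Python sketch of the completion protocol: each chiplet executes its assigned workgroups, records completion of its share, and only when every chiplet has reported does the dispatch finish and the client get notified. The round-robin assignment is an assumed policy.
      NUM_CHIPLETS = 4

      def assign_workgroups(num_workgroups: int):
          """Round-robin assignment of workgroup indices to chiplets."""
          return {c: [w for w in range(num_workgroups) if w % NUM_CHIPLETS == c]
                  for c in range(NUM_CHIPLETS)}

      def run_dispatch(num_workgroups: int):
          assignments = assign_workgroups(num_workgroups)
          done = set()
          for chiplet, workgroups in assignments.items():
              for wg in workgroups:
                  pass                # execute workgroup wg on this chiplet
              done.add(chiplet)       # tell the other chiplets this share is done
          if done == set(range(NUM_CHIPLETS)):
              print("all workgroups complete: notify client, start next packet")

      run_dispatch(num_workgroups=10)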
  • Patent number: 11880924
    Abstract: A method of tiled rendering is provided which comprises dividing a frame to be rendered into a plurality of tiles, receiving commands to execute a plurality of subpasses of the tiles, and interleaving execution of the same subpasses of multiple tiles of the frame. Interleaving execution of the same subpasses of multiple tiles comprises executing a previously ordered first subpass of a second tile between execution of the previously ordered first subpass of a first tile and execution of a subsequently ordered second subpass of the first tile. The interleaving is performed, for example, by executing the plurality of subpasses in an order different from the order in which the commands to execute the plurality of subpasses are stored and issued. Alternatively, interleaving is performed by executing one or more subpasses as skip operations such that the plurality of subpasses are executed in the same order.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ruijin Wu, Mika Tuomi, Paavo Sampo Ilmari Pessi, Anirudh R. Acharya
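    A Python sketch of the interleaved order the abstract describes: instead of finishing every subpass of one tile before starting the next tile, the same subpass is executed across the tiles before any tile moves on to its next subpass. Tile and subpass counts are arbitrary.
      def interleaved_schedule(num_tiles: int, num_subpasses: int):
          """Yield (subpass, tile) pairs with each subpass run across all
          tiles before the next subpass of any tile begins."""
          for subpass in range(num_subpasses):
              for tile in range(num_tiles):
                  yield subpass, tile

      for subpass, tile in interleaved_schedule(num_tiles=2, num_subpasses=2):
          print(f"execute subpass {subpass} of tile {tile}")
      # subpass 0 of tile 0
      # subpass 0 of tile 1   <- falls between tile 0's subpass 0 and subpass 1
      # subpass 1 of tile 0
      # subpass 1 of tile 1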
  • Patent number: 11880248
    Abstract: A docking station for secondary external cooling for mobile computing devices, including: one or more ports for docking a mobile computing device; a docking platform for supporting the mobile computing device; a cooling element; and a thermal interface housed in the docking platform for transferring heat between the mobile computing device and the cooling element.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: January 23, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Christopher M. Jaggers, Christopher M. Helberg
  • Patent number: 11880310
    Abstract: A processor includes a cache having two or more test regions and a larger non-test region. The processor further includes a cache controller that applies different cache replacement policies to the different test regions of the cache, and a performance monitor that measures performance metrics for the different test regions, such as a cache hit rate at each test region. Based on the performance metrics, the cache controller selects a cache replacement policy for the non-test region, such as selecting the replacement policy associated with the test region having the better performance metrics among the different test regions. The processor deskews the memory access measurements in response to a difference in the number of accesses to the different test regions exceeding a threshold.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Paul Moyer, John Kelley
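    A compact Python model of the selection mechanism: two small test regions each run a different replacement policy, their hit rates are compared, and the better policy is applied to the large non-test region; a skew check flags measurements when the regions see very different access counts. All numbers are made up.
      # Measured hit rates for the two test regions (illustrative values).
      test_region_hits = {"policy_A": 0.91, "policy_B": 0.94}

      def pick_policy_for_non_test_region(metrics):
          """Choose the replacement policy whose test region performed better."""
          return max(metrics, key=metrics.get)

      def needs_deskew(accesses_a: int, accesses_b: int, threshold: int = 1000):
          """Flag the measurements when access counts diverge too far."""
          return abs(accesses_a - accesses_b) > threshold

      print(pick_policy_for_non_test_region(test_region_hits))  # -> policy_B
      print(needs_deskew(12_000, 10_500))                       # -> True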
  • Patent number: 11880715
    Abstract: Methods and systems for load balancing in a neural network system using metadata are disclosed. Any one or a combination of one or more kernels, one or more neurons, and one or more layers of the neural network system are tagged with metadata. A scheduler detects whether there are neurons that are available to execute. The scheduler uses the metadata to schedule and load balance computations across the available compute resources.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nicholas Malaya, Yasuko Eckert
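    A rough Python sketch of metadata-driven scheduling: neurons carry metadata tags (here an estimated cost), and the scheduler assigns each ready neuron to the least-loaded compute resource. The tag names and balancing rule are assumptions.
      # Neurons tagged with metadata; "cost" is an assumed tag.
      neurons = [
          {"id": "n0", "ready": True,  "cost": 3},
          {"id": "n1", "ready": True,  "cost": 1},
          {"id": "n2", "ready": False, "cost": 2},
      ]
      resources = {"cu0": 0, "cu1": 0}   # compute unit -> accumulated load

      def schedule(neurons, resources):
          """Assign each ready neuron to the currently least-loaded resource."""
          placement = {}
          for n in (n for n in neurons if n["ready"]):
              target = min(resources, key=resources.get)
              resources[target] += n["cost"]
              placement[n["id"]] = target
          return placement

      print(schedule(neurons, resources))  # -> {'n0': 'cu0', 'n1': 'cu1'}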
  • Patent number: 11880312
    Abstract: A method includes storing a function representing a set of data elements stored in a backing memory and, in response to a first memory read request for a first data element of the set of data elements, calculating a function result representing the first data element based on the function.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Kishore Punniyamurthy, SeyedMohammad SeyedzadehDelcheh, Sergey Blagodurov, Ganesh Dasika, Jagadish B Kotra
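    A minimal Python illustration of serving a read by evaluating a stored function rather than fetching from backing memory; the linear form is only one possible representation of the data set.
      # Assume the stored data set is well represented by data[i] = 2*i + 5.
      stored_function = {"slope": 2, "intercept": 5}

      def read(index: int) -> int:
          """Answer a memory read request by evaluating the stored function."""
          f = stored_function
          return f["slope"] * index + f["intercept"]

      print(read(10))  # -> 25, computed instead of read from backing memory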
  • Patent number: 11880769
    Abstract: A system is described that performs training operations for a neural network, the system including an analog circuit element functional block with an array of analog circuit elements, and a controller. The controller monitors error values computed using an output from each of one or more initial iterations of a neural network training operation, the one or more initial iterations being performed using neural network data acquired from the memory. When one or more error values are less than a threshold, the controller uses the neural network data from the memory to configure the analog circuit element functional block to perform remaining iterations of the neural network training operation. The controller then causes the analog circuit element functional block to perform the remaining iterations.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Sudhanva Gurumurthi
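    A hedged Python outline of the control flow in the abstract: the first iterations run using network data from memory while the controller monitors the error, and once the error falls below a threshold the remaining iterations are handed to the analog array. The error curve and threshold are invented for the example.
      ERROR_THRESHOLD = 0.1

      def run_initial_iteration(step: int) -> float:
          """Stand-in for one training iteration driven from memory; returns
          the error value the controller monitors (shrinks as training runs)."""
          return 1.0 / (step + 1)

      def train(max_steps: int = 100):
          for step in range(max_steps):
              if run_initial_iteration(step) < ERROR_THRESHOLD:
                  # Configure the analog circuit element array with the current
                  # network data and let it perform the remaining iterations.
                  print(f"switching to the analog array at step {step}")
                  return
          print("threshold never reached; stayed on the initial path")

      train()  # -> switching to the analog array at step 10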
  • Patent number: 11881450
    Abstract: A system and method for fabricating on-die metal-insulator-metal capacitors capable of supporting relatively high voltage applications and increasing capacitance per area are described. In various implementations, an integrated circuit includes multiple metal-insulator-metal (MIM) capacitors. The MIM capacitors are formed between two signal nets such as two different power rails, two different control signals, or two different data signals. The integrated circuit includes multiple intermediate metal layers (or metal plates) formed between two signal nets. In high voltage regions, a MIM capacitor has one or more intermediate metal plates formed as floating plates between electrode metal plates. The floating plates have no connection to any power supply reference voltage level used by the integrated circuit.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Regina Tien Schmidt
  • Patent number: 11880277
    Abstract: Selecting an error correction code type for a memory device includes: selecting, by the memory device in dependence upon predefined selection criteria, one of a plurality of error correction code types; and carrying out memory access requests utilizing the selected error correction code type.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 23, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Sudhanva Gurumurthi, Vilas Sridharan
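    A small Python sketch of the selection step: the device picks one of several error correction code types according to predefined criteria and then services memory requests with it. The criterion and the code-type names are assumptions.
      ECC_TYPES = ("lightweight", "strong")

      def select_ecc(observed_error_rate: float) -> str:
          """Assumed criterion: choose the stronger code once errors are common."""
          return "strong" if observed_error_rate > 1e-6 else "lightweight"

      selected = select_ecc(observed_error_rate=5e-6)
      print(f"carrying out memory accesses with the {selected} ECC type")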
  • Patent number: 11880610
    Abstract: A cluster compute server stores different types of data at different storage volumes in order to reduce data duplication at the storage volumes. The storage volumes are categorized into two classes: common storage volumes and dedicated storage volumes, wherein the common storage volumes store data to be accessed and used by multiple compute nodes (or multiple virtual servers) of the cluster compute server. The dedicated storage volumes, in contrast, store data to be accessed only by a corresponding compute node (or virtual server).
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mauricio Breternitz, Jr., Leonardo Piga
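    An illustrative Python mapping of the two volume classes: data shared by many compute nodes goes to a common volume, node-private data to that node's dedicated volume. The paths and routing rule are invented.
      COMMON_VOLUME = "/vol/common"       # data used by multiple compute nodes

      def dedicated_volume(node_id: int) -> str:
          return f"/vol/node{node_id}"    # data used only by one compute node

      def place(data_kind: str, node_id: int) -> str:
          """Route data to the common or the dedicated volume by kind."""
          return COMMON_VOLUME if data_kind == "shared" else dedicated_volume(node_id)

      print(place("shared", 3))   # -> /vol/common
      print(place("private", 3))  # -> /vol/node3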
  • Patent number: 11881393
    Abstract: A system and method for efficiently creating layout for memory bit cells are described. In various implementations, cells of a library use Cross field effect transistors (FETs) that include vertically stacked gate all around (GAA) transistors with conducting channels oriented in an orthogonal direction between them. The channels of the vertically stacked transistors use opposite doping polarities. A first category of cells includes devices where each of the two devices in a particular vertical stack receives the same input signal. The second category of cells includes devices where the two devices in a particular vertical stack receive different input signals. The cells of the second category have a larger height dimension than the cells of the first category.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Richard T. Schultz
  • Patent number: 11880260
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes an entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread by consuming less power and with lower performance than the first processor.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Elliot H. Mednick, Edward McLellan
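    A Python sketch of the placement idea: a thread that uses only features present on the lower-power core can run there, while a thread needing the full feature set must run on the full-feature core. The feature names are placeholders, not real ISA extensions.
      big_core_features   = {"baseline", "vec256", "vec512"}
      small_core_features = {"baseline", "vec256"}   # subset of the big core's ISA

      def pick_core(thread_features: set) -> str:
          """Use the lower-power core whenever it supports every feature the
          thread needs; otherwise fall back to the full-feature core."""
          if thread_features <= small_core_features:
              return "small core (less power, lower performance)"
          return "big core (full ISA feature set)"

      print(pick_core({"baseline", "vec256"}))   # -> small core
      print(pick_core({"baseline", "vec512"}))   # -> big core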
  • Patent number: 11880926
    Abstract: A method, computer system, and a non-transitory computer-readable storage medium for performing primitive batch binning are disclosed. The method, computer system, and non-transitory computer-readable storage medium include techniques for generating a primitive batch from a plurality of primitives, computing respective bin intercepts for each of the plurality of primitives in the primitive batch, and shading the primitive batch by iteratively processing each of the respective bin intercepts computed until all of the respective bin intercepts are processed.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 23, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Michael Mantor, Laurent Lefebvre, Mark Fowler, Timothy Kelley, Mikko Alho, Mika Tuomi, Kiia Kallio, Patrick Klas Rudolf Buss, Jari Antero Komppa, Kaj Tuomi
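    A reduced Python sketch of batch binning: primitives are collected into a batch, the screen-space bins each primitive intercepts are computed, and the bins are then walked with every primitive that touches the current bin being shaded. The bin size and bounding-box primitive format are assumptions.
      BIN_SIZE = 32  # pixels per bin edge (assumed)

      def bin_intercepts(prim):
          """Bins (bx, by) covered by a primitive's bounding box (x0,y0,x1,y1)."""
          x0, y0, x1, y1 = prim
          return {(bx, by)
                  for bx in range(x0 // BIN_SIZE, x1 // BIN_SIZE + 1)
                  for by in range(y0 // BIN_SIZE, y1 // BIN_SIZE + 1)}

      def shade_batch(batch):
          intercepts = {i: bin_intercepts(p) for i, p in enumerate(batch)}
          for b in sorted(set.union(*intercepts.values())):   # iterate bins
              for i, bins in intercepts.items():
                  if b in bins:
                      print(f"shade primitive {i} in bin {b}")

      shade_batch([(0, 0, 40, 40), (30, 30, 90, 60)])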
  • Patent number: 11880683
    Abstract: Systems, apparatuses, and methods for efficiently processing arithmetic operations are disclosed. A computing system includes a processor capable of executing single precision mathematical instructions on data sizes of M bits and half precision mathematical instructions on data sizes of N bits, which is less than M bits. At least two source operands with M bits indicated by a received instruction are read from a register file. If the instruction is a packed math instruction, at least a first source operand with a size of N bits less than M bits is selected from either a high portion or a low portion of one of the at least two source operands read from the register file. The instruction includes fields storing bits, each bit indicating the high portion or the low portion of a given source operand associated with a register identifier specified elsewhere in the instruction.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jiasheng Chen, Bin He, Yunxiao Zou, Michael J. Mantor, Radhakrishna Giduthuri, Eric J. Finger, Brian D. Emberling
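    A Python illustration of selecting an N-bit half out of an M-bit source operand using a per-operand selector bit, in the spirit of the packed-math operand selection described above; M = 32 and N = 16 are assumed values.
      M, N = 32, 16          # full register width and packed half width (assumed)

      def select_half(register_value: int, use_high: bool) -> int:
          """Return the high or low N-bit portion of an M-bit source operand."""
          half_mask = (1 << N) - 1
          return (register_value >> N) & half_mask if use_high else register_value & half_mask

      src = 0xAAAA5555       # 32-bit value read from the register file
      print(hex(select_half(src, use_high=False)))  # -> 0x5555 (low half)
      print(hex(select_half(src, use_high=True)))   # -> 0xaaaa (high half)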