Patents Assigned to Advanced Micro Devices, Inc.
  • Patent number: 11809558
    Abstract: A method of packet attribute confirmation includes receiving, at a command processor of a parallel processor, a command packet including a received packet attribute, such as a packet size, of the command packet. The command processor compares the received packet attribute of the command packet with an expected packet attribute of the command packet. The command processor passes one or more commands to a prefetch parser such that the summed total size of the one or more commands is equal to the received packet size of the command packet. Based at least on determining a match between the received packet size and the expected packet size, the command processor passes the received command packet to the prefetch parser. Otherwise, based at least on determining a mismatch between the received packet size and the expected packet size, the command processor passes one or more no-operation instructions to the prefetch parser.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: November 7, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Harry J. Wise, Alexander Fuad Ashkar, Manu Rastogi
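The size-validation flow in the abstract above lends itself to a short sketch. The following is a minimal Python illustration, not AMD's implementation; the opcode table, packet layout, and NOP handling are assumptions made purely to show the match/mismatch behavior.

```python
# Illustrative sketch: validate a command packet's reported size against the
# size expected for its opcode, forwarding either the packet or NOPs.
NOP = "NOP"
EXPECTED_SIZE = {"DRAW": 4, "DISPATCH": 3}  # hypothetical opcode -> dword count

def forward_packet(opcode, received_size, payload, prefetch_queue):
    expected = EXPECTED_SIZE.get(opcode)
    if expected is not None and received_size == expected:
        # Sizes match: pass the command packet through to the prefetch parser.
        prefetch_queue.extend(payload)
    else:
        # Mismatch: pass no-operation instructions whose total size equals
        # the received size, so downstream parsing stays aligned.
        prefetch_queue.extend([NOP] * received_size)

queue = []
forward_packet("DRAW", 4, ["hdr", "a", "b", "c"], queue)            # forwarded as-is
forward_packet("DISPATCH", 5, ["hdr", "a", "b", "c", "d"], queue)   # replaced by NOPs
print(queue)
```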
  • Patent number: 11809743
    Abstract: A memory controller includes a command queue having a first input for receiving memory access requests, and a memory interface queue having an output for coupling to a memory channel adapted for connecting to at least one dynamic random access memory (DRAM) module. A refresh control circuit monitors activate commands to be sent over the memory channel. In response to an activate command meeting a designated condition, the refresh control circuit identifies a candidate aggressor row associated with the activate command. A command is sent to the DRAM requesting that the candidate aggressor row be queued for mitigation in a future refresh or refresh management event.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: November 7, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Kevin M. Brandl
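A minimal sketch of the activate-command monitoring described above. The activation-count threshold, per-row counters, and mitigation queue are illustrative assumptions; the patent does not specify the "designated condition" modelled here.

```python
from collections import Counter

ACTIVATE_THRESHOLD = 3  # hypothetical: activations before a row is flagged

class RefreshControl:
    def __init__(self):
        self.activate_counts = Counter()
        self.mitigation_queue = []

    def on_activate(self, row):
        """Monitor ACT commands; flag rows that meet the designated condition."""
        self.activate_counts[row] += 1
        if self.activate_counts[row] >= ACTIVATE_THRESHOLD:
            # Candidate aggressor row: ask the DRAM to queue it for mitigation
            # (e.g., refresh of neighbouring victim rows) at a future refresh
            # or refresh-management event.
            self.mitigation_queue.append(row)
            self.activate_counts[row] = 0

rc = RefreshControl()
for row in [7, 7, 7, 3, 7]:
    rc.on_activate(row)
print(rc.mitigation_queue)  # [7]
```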
  • Patent number: 11810891
    Abstract: Various chip stacks and methods and structures of interconnecting the same are disclosed. In one aspect, an apparatus is provided that includes a first semiconductor chip that has a first glass layer and plural first groups of plural conductor pads in the first glass layer. Each of the plural first groups of conductor pads is configured to bumplessly connect to a corresponding second group of plural conductor pads of a second semiconductor chip to make up a first interconnect of a plurality of interconnects that connect the first semiconductor chip to the second semiconductor chip. The first glass layer is configured to bond to a second glass layer of the second semiconductor chip.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: November 7, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Priyal Shah, Milind S. Bhagavat
  • Publication number: 20230351187
    Abstract: Systems, methods, and devices for pruning a convolutional neural network (CNN). A subset of layers of the CNN is chosen, and for each layer of the subset of layers, how salient each filter in the layer is to an output of the CNN is determined, a subset of the filters in the layer is determined based on the salience of each filter in the layer, and the subset of filters in the layer is pruned. In some implementations, the layers of the subset of layers of the CNN are non-contiguous. In some implementations, the subset of layers includes odd numbered layers of the CNN and excludes even numbered layers of the CNN. In some implementations, the subset of layers includes even numbered layers of the CNN and excludes odd numbered layers of the CNN.
    Type: Application
    Filed: June 30, 2023
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arun Coimbatore Ramachandran, Chandra Kumar Ramasamy, Prakash Sathyanath Raghavendra, Keerthan Shagrithaya
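The pruning loop in this publication can be sketched in a few lines. The saliency measure below (mean absolute weight) and the odd-layer selection are assumptions for illustration; the claims cover other saliency criteria and layer subsets.

```python
def filter_saliency(filt):
    """Hypothetical saliency score: mean absolute weight (an L1-norm proxy)."""
    return sum(abs(w) for w in filt) / len(filt)

def prune_cnn(layers, keep_ratio=0.5):
    """Prune the least-salient filters in a non-contiguous subset of layers
    (here: odd-numbered layers, counting from 1)."""
    pruned = []
    for idx, filters in enumerate(layers, start=1):
        if idx % 2 == 1:  # subset of layers; even-numbered layers are untouched
            ranked = sorted(filters, key=filter_saliency, reverse=True)
            keep = max(1, int(len(ranked) * keep_ratio))
            filters = ranked[:keep]
        pruned.append(filters)
    return pruned

layers = [[[0.9, -0.8], [0.01, 0.02], [0.5, 0.4]],   # layer 1 (pruned)
          [[0.3, 0.3], [0.2, 0.1]]]                   # layer 2 (kept as-is)
print(prune_cnn(layers))
```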
  • Publication number: 20230350830
    Abstract: An apparatus and method for performing memory operations in memory stacks. The method comprises receiving, at a first memory controller, a memory operation request from a processor via a first bus, where the first memory controller is included in a first logic die in communication with a first memory die of a first memory technology. The method further comprises, on a condition that the memory operation request is associated with a second memory technology, communicating the memory operation request to a second memory controller via a side bus, where the second memory controller is included in a second logic die in communication with a second memory die of the second memory technology, and, on a condition that the memory operation request is associated with the first memory technology, performing the memory operation request. The first and second logic dies and the first and second memory dies are stacked on the processor.
    Type: Application
    Filed: March 13, 2023
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Dmitri Yudanov, Michael Ignatowski
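A small Python sketch of the request-routing decision described above, with the two controllers modelled as objects connected by a "side bus" reference. The technology labels and request format are hypothetical.

```python
class MemoryController:
    def __init__(self, technology, side_bus_peer=None):
        self.technology = technology        # e.g. "DRAM" or "NVM"
        self.side_bus_peer = side_bus_peer  # controller in the other logic die
        self.log = []

    def handle(self, request):
        if request["technology"] == self.technology:
            # Request targets this die's memory technology: perform it locally.
            self.log.append(("performed", request["op"], request["addr"]))
        elif self.side_bus_peer is not None:
            # Otherwise forward it over the side bus to the other controller.
            self.side_bus_peer.handle(request)

dram_ctrl = MemoryController("DRAM")
nvm_ctrl = MemoryController("NVM")
dram_ctrl.side_bus_peer, nvm_ctrl.side_bus_peer = nvm_ctrl, dram_ctrl

dram_ctrl.handle({"op": "read", "addr": 0x100, "technology": "DRAM"})
dram_ctrl.handle({"op": "write", "addr": 0x200, "technology": "NVM"})
print(dram_ctrl.log, nvm_ctrl.log)
```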
  • Publication number: 20230350715
    Abstract: Various timing parameter values for a memory system are changed and a workload is run using the changed timing parameter values, resulting in workload performance values. The workload is run multiple times with different timing parameter values, and the performance values generated by the workload are used to generate and output a performance indication that identifies how sensitive performance of the physical memory is to the one or more timing parameters. The performance values generated by the workload are optionally used to predict what performance value the workload would have generated for user-selected timing parameter values (e.g., without running the workload).
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Joshua Taylor Knight, Jayesh Hari Joshi, Anil Harwani, Grant Evan Ley, Jerry Anton Ahrens, William Robert Alverson, Amitabh Mehra
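The sensitivity measurement described above amounts to a parameter sweep. Below is a minimal sketch assuming a single timing parameter and a stand-in workload function; a real system would run an actual benchmark and could also interpolate to predict performance for values that were never run.

```python
def measure_sensitivity(run_workload, parameter_values):
    """Run the workload once per candidate value of a timing parameter and
    report how much measured performance varies across the sweep."""
    results = {v: run_workload(v) for v in parameter_values}
    scores = list(results.values())
    sensitivity = (max(scores) - min(scores)) / max(scores)
    return results, sensitivity

# Stand-in workload: performance degrades as the (hypothetical) tRCD grows.
fake_workload = lambda trcd: 1000.0 / (1.0 + 0.05 * trcd)

results, sensitivity = measure_sensitivity(fake_workload, [14, 16, 18, 22])
print(results)
print(f"performance is {sensitivity:.1%} sensitive to this parameter")
```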
  • Publication number: 20230350484
    Abstract: A processing device and method for efficient transitioning to and from a reduced power state is provided. The processing device comprises a plurality of components having assigned registers used to store data to execute a program and a power management controller in communication with the plurality of components. The power management controller receives an indication that the plurality of components are idle, executes a process to enter a component into a reduced power state in response to receiving an acknowledgement from the component of a request from the power management controller to remove power from the component, and executes a process to exit the component from the reduced power state in response to the component becoming active.
    Type: Application
    Filed: April 21, 2023
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Mihir Shaileshbhai Doctor, Alexander J. Branover, Benjamin Tsien, Indrani Paul, Christopher T. Weaver, Thomas J. Gibney, Stephen V. Kosonocky, John P. Petry
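A toy model of the enter/exit handshake described above. The idle flags, the acknowledgement return value, and the omission of register save/restore are simplifications for illustration.

```python
class Component:
    def __init__(self, name):
        self.name, self.idle, self.powered = name, True, True

    def request_power_down(self):
        # A real device could refuse if it just became active; the returned
        # value stands in for the acknowledgement in the abstract.
        return self.idle

class PowerManager:
    def enter_reduced_power(self, components):
        for c in components:
            if c.idle and c.request_power_down():   # ack received
                c.powered = False                   # remove power
                # (register state would be saved before this in a fuller model)

    def exit_reduced_power(self, component):
        component.powered, component.idle = True, False  # restore on activity

gpu, dsp = Component("gpu"), Component("dsp")
pm = PowerManager()
pm.enter_reduced_power([gpu, dsp])
pm.exit_reduced_power(gpu)          # component becomes active again
print(gpu.powered, dsp.powered)     # True False
```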
  • Publication number: 20230350480
    Abstract: Platform power management includes boosting performance in a platform power boost mode or restricting performance to keep power or temperature under a desired threshold in a platform power cap mode. Platform power management exploits the mutually exclusive nature of activities and the associated headroom created in a temperature and/or power budget of a server platform to boost performance of a particular component while also keeping temperature and/or power below a threshold or budget.
    Type: Application
    Filed: June 23, 2023
    Publication date: November 2, 2023
    Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Indrani Paul, Sriram Sambamurthy, Larry David Hewitt, Kevin M. Lepak, Samuel D. Naffziger, Adam Neil Calder Clark, Aaron Joseph Grenat, Steven Frederick Liepe, Sandhya Shyamasundar, Wonje Choi, Dana Glenn Lewis, Leonardo de Paula Rosa Piga
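One way to picture the headroom reuse described above is as a simple budget rebalancer: idle components drop to a floor and the freed power is granted to the active component, while the platform total stays under the cap. All wattages and the allocation rule below are hypothetical.

```python
PLATFORM_POWER_BUDGET_W = 200.0   # hypothetical platform-level cap

def rebalance(components):
    """Platform cap mode with boost: idle components drop to their floor and
    the power they give up is handed to the active components, so the busy
    part can run faster while the platform total stays at or below the cap."""
    idle = [c for c in components if not c["active"]]
    busy = [c for c in components if c["active"]]
    for c in idle:
        c["limit_w"] = c["floor_w"]
    remaining = PLATFORM_POWER_BUDGET_W - sum(c["limit_w"] for c in idle)
    for c in busy:
        c["limit_w"] = remaining / max(1, len(busy))  # boosted above nominal
    return components

parts = [{"name": "cpu", "nominal_w": 120, "floor_w": 20, "active": True},
         {"name": "gpu", "nominal_w": 80, "floor_w": 15, "active": False}]
for part in rebalance(parts):
    print(part["name"], part["limit_w"])
```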
  • Publication number: 20230350591
    Abstract: Profile switching for memory overclocking is described. In accordance with the described techniques, a memory is operated according to a first memory profile. During operation of the memory according to the first memory profile, a request is received to operate the memory according to a second memory profile. Responsive to the request, operation of the memory is switched to operate according to the second memory profile without rebooting. In one or more implementations, at least one of the first memory profile or the second memory profile comprises an overclocking memory profile that configures the memory to operate in an overclocking mode. In one or more implementations, the memory is trained to operate according to the overclocking memory profile prior to operating the memory according to the first memory profile.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Grant Evan Ley, Jayesh Hari Joshi, Amitabh Mehra, Jerry Anton Ahrens, Joshua Taylor Knight, Anil Harwani, William Robert Alverson
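A minimal sketch of run-time profile switching as described above, with prior training modelled as a precondition for overclocking profiles. The profile contents and trained-set bookkeeping are assumptions.

```python
class MemoryProfile:
    def __init__(self, name, frequency_mhz, timings, overclocking=False):
        self.name, self.frequency_mhz = name, frequency_mhz
        self.timings, self.overclocking = timings, overclocking

class Memory:
    def __init__(self, profiles, trained):
        self.profiles, self.trained = profiles, set(trained)
        self.active = None

    def switch_profile(self, name):
        """Switch the active profile at run time (no reboot); an overclocking
        profile must already have been trained."""
        profile = self.profiles[name]
        if profile.overclocking and name not in self.trained:
            raise RuntimeError(f"profile {name!r} has not been trained")
        self.active = profile

stock = MemoryProfile("stock", 4800, {"tCL": 40})
oc = MemoryProfile("oc", 6000, {"tCL": 30}, overclocking=True)
mem = Memory({"stock": stock, "oc": oc}, trained={"oc"})
mem.switch_profile("stock")
mem.switch_profile("oc")            # switches without rebooting
print(mem.active.frequency_mhz)
```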
  • Publication number: 20230351667
    Abstract: A technique for building a bounding volume hierarchy is disclosed. The technique includes performing a nearest neighbor search for a set of clusters to generate a set of nearest neighbors; without performing a global barrier operation, performing a merge operation for the set of clusters, based on the set of nearest neighbors to generate merge results for the set of clusters; and without performing a global barrier operation, outputting clusters for a level of the bounding volume hierarchy, based on the merge results.
    Type: Application
    Filed: September 30, 2022
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventor: John Alexandre Tsakok
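A rough sketch of one level of the cluster-merging step described above, using mutual nearest neighbours. It runs sequentially, so it does not capture the publication's key point of avoiding global barriers between the search and merge phases; it only illustrates what a level of the hierarchy produces.

```python
import math

def centroid_distance(a, b):
    return math.dist(a["centroid"], b["centroid"])

def build_level(clusters):
    """Merge each cluster with its nearest neighbour when the choice is
    mutual, producing the clusters for the next level of the hierarchy."""
    nearest = [min((j for j in range(len(clusters)) if j != i),
                   key=lambda j: centroid_distance(clusters[i], clusters[j]))
               for i in range(len(clusters))]
    merged, out = set(), []
    for i, j in enumerate(nearest):
        if i in merged or j in merged:
            continue
        if nearest[j] == i:                       # mutual nearest neighbours
            cx = [(a + b) / 2 for a, b in zip(clusters[i]["centroid"],
                                              clusters[j]["centroid"])]
            out.append({"centroid": cx, "children": (i, j)})
            merged.update((i, j))
    out.extend(c for i, c in enumerate(clusters) if i not in merged)
    return out

leaves = [{"centroid": (0, 0)}, {"centroid": (0.1, 0)},
          {"centroid": (5, 5)}, {"centroid": (5.2, 5)}]
print(build_level(leaves))
```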
  • Publication number: 20230350696
    Abstract: Real time workload-based system adjustment is described. In accordance with the described techniques, a processor and a memory are operated according to first settings associated with a first workload. A second workload configured to utilize the processor and the memory is detected. The second workload is associated with second settings. Responsive to detecting the second workload, operation of the processor and the memory are adjusted to operate according to the second settings without rebooting.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Anil Harwani, William Robert Alverson, Amitabh Mehra, Jerry Anton Ahrens, Grant Evan Ley, Joshua Taylor Knight
  • Publication number: 20230350485
    Abstract: Systems, methods, devices, and computer-implemented instructions for processor power management implemented in a compiler. In some implementations, a characteristic of code is determined. An instruction based on the determined characteristic is inserted into the code. The code and inserted instruction are compiled to generate compiled code. The compiled code is output.
    Type: Application
    Filed: July 3, 2023
    Publication date: November 2, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Vedula Venkata Srikant Bharadwaj, Shomit Das, Anthony T. Gutierrez, Vignesh Adhinarayanan
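The compiler flow described above can be pictured as a pass over an instruction stream: detect a characteristic, insert an instruction, emit the result. The "memory-bound run" characteristic and the power_hint mnemonic below are invented for illustration.

```python
# Toy "compiler pass": scan straight-line code for a characteristic (here, a
# run of memory instructions) and insert a power-management hint before it,
# then emit the annotated code.
MEMORY_OPS = {"load", "store"}

def insert_power_hints(instructions, run_length=3):
    out, run = [], 0
    for ins in instructions:
        if ins.split()[0] in MEMORY_OPS:
            run += 1
            if run == run_length:
                # Memory-bound region detected: hint that the core's
                # frequency/voltage could be lowered (illustrative mnemonic).
                out.insert(len(out) - (run_length - 1), "power_hint low_freq")
        else:
            run = 0
        out.append(ins)
    return out

code = ["add r1, r2", "load r3, [r1]", "store [r4], r3", "load r5, [r4]", "mul r6, r5"]
print("\n".join(insert_power_hints(code)))
```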
  • Patent number: 11804479
    Abstract: Systems, apparatuses, and methods for routing traffic through vertically stacked semiconductor dies are disclosed. A first semiconductor die has a second die stacked vertically on top of it in a three-dimensional integrated circuit. The first die includes a through silicon via (TSV) interconnect that does not traverse the first die. The first die includes one or more metal layers above the TSV, which connect to a bonding pad interface through a bonding pad via. If the signals transferred through the TSV of the first die are shared by the second die, then the second die includes a TSV aligned with the bonding pad interface of the first die. If these signals are not shared by the second die, then the second die includes an insulated portion of a wafer backside aligned with the bonding pad interface.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John J. Wuu, Milind S. Bhagavat, Brett P. Wilkerson, Rahul Agarwal
  • Patent number: 11803999
    Abstract: Systems, methods, and techniques utilize reinforcement learning to efficiently schedule a sequence of jobs for execution by one or more processing threads. A first sequence of execution jobs associated with rendering a target frame of a sequence of frames is received. One or more reward metrics related to rendering the target frame are selected. A modified sequence of execution jobs for rendering the target frame is generated, such as by reordering the first sequence of execution jobs. The modified sequence is evaluated with respect to the selected reward metrics, and rendering of the target frame is initiated based at least in part on that evaluation.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: October 31, 2023
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Thomas Daniel Perry, Steven Tovey, Mehdi Saeedi, Andrej Zdravkovic, Zhuo Chen
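The patent pairs a learned policy with reward-metric evaluation; the sketch below substitutes a much simpler greedy random-swap search so the reorder-then-evaluate loop is visible. The job names, cost model, and reward definition are hypothetical.

```python
import random

def evaluate(sequence, frame_time_of):
    """Reward metric: negative simulated frame time (higher is better)."""
    return -frame_time_of(tuple(sequence))

def improve_schedule(jobs, frame_time_of, trials=200, seed=0):
    """Greedy stand-in for the learned policy: propose swapped orderings and
    keep any that improve the selected reward metric."""
    rng = random.Random(seed)
    best, best_reward = list(jobs), evaluate(jobs, frame_time_of)
    for _ in range(trials):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        reward = evaluate(cand, frame_time_of)
        if reward > best_reward:
            best, best_reward = cand, reward
    return best, -best_reward

# Hypothetical cost model: "shadow" jobs are cheaper when scheduled early.
def frame_time(seq):
    return sum((pos + 1) * (2 if job.startswith("shadow") else 1)
               for pos, job in enumerate(seq))

jobs = ["gbuffer", "shadow_a", "lighting", "shadow_b", "post"]
print(improve_schedule(jobs, frame_time))
```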
  • Patent number: 11803385
    Abstract: An array processor includes processor element arrays (PEAs) distributed in rows and columns. The PEAs are configured to perform operations on parameter values. A first sequencer receives a first direct memory access (DMA) instruction that includes a request to read data from at least one address in memory. A texture address (TA) engine requests the data from the memory based on the at least one address and a texture data (TD) engine provides the data to the PEAs. The PEAs provide first synchronization signals to the TD engine to indicate availability of registers for receiving the data. The TD engine provides second synchronization signals to the first sequencer in response to receiving acknowledgments that the PEAs have consumed the data.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sateesh Lagudu, Arun Vaidyanathan Ananthanarayan, Michael Mantor, Allen H. Rush
  • Patent number: 11803437
    Abstract: A memory includes a link training circuit with a pseudo-random bit sequence (PRBS) generator and a burst error detection counter. The burst error detection counter includes a comparator, a first input coupled to the data input, a second input coupled to the PRBS generator, and a counter operable to increase an error count value by one responsive to detecting any number of errors greater than zero in a sequence of symbols including a predetermined number of symbols.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Aaron D Willey, Karthik Gopalakrishnan
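A software model of the burst error counting described above, treating each bit as a symbol and a fixed-length bit window as the "sequence of symbols". The PRBS-7 polynomial and window length are assumptions; the point illustrated is that multiple errors in one window bump the count only once.

```python
def prbs7(state=0x7F):
    """PRBS-7 generator (x^7 + x^6 + 1), yielding one bit per step."""
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        yield bit

def count_burst_errors(received_bits, symbols_per_window=8):
    """Compare received bits against the local PRBS and bump the error count
    by at most one per window, however many bits in that window are wrong."""
    reference = prbs7()
    error_count, window_has_error = 0, False
    for i, bit in enumerate(received_bits):
        if bit != next(reference):
            window_has_error = True
        if (i + 1) % symbols_per_window == 0:
            error_count += window_has_error
            window_has_error = False
    return error_count

gen = prbs7()
clean = [next(gen) for _ in range(32)]
noisy = clean[:]
noisy[3] ^= 1; noisy[5] ^= 1; noisy[20] ^= 1   # two errors in window 0, one in window 2
print(count_burst_errors(noisy))                # prints 2, not 3
```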
  • Patent number: 11803473
    Abstract: Systems and techniques are described for dynamic selection of a policy that determines whether copies of shared cache lines in a processor core complex are to be stored and maintained in a level 3 (L3) cache of the processor core complex. According to various embodiments, the selection is based on one or more cache line sharing parameters or on a counter that tracks L3 cache misses and cache-to-cache (C2C) transfers in the processor core complex. Shared cache lines are shared between processor cores or between threads. By comparing either the cache line sharing parameters or the counter to corresponding thresholds, a policy is set that defines whether copies of shared cache lines at such indices are to be retained in the L3 cache.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kelley, Paul Moyer
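A minimal sketch of the counter-versus-threshold policy selection described above. The sampling interval and threshold values are invented; the patent also covers selection driven directly by cache line sharing parameters, which is not modelled here.

```python
class L3PolicySelector:
    """Choose whether shared lines are retained in L3 by comparing a
    sharing-activity counter against thresholds (hypothetical values)."""
    RETAIN_THRESHOLD = 100     # heavy C2C traffic -> keep shared copies in L3
    DROP_THRESHOLD = 20        # little sharing traffic -> don't spend L3 space

    def __init__(self):
        self.counter = 0
        self.retain_shared_in_l3 = True

    def on_event(self, event):
        if event in ("l3_miss", "c2c_transfer"):
            self.counter += 1

    def update_policy(self):
        if self.counter >= self.RETAIN_THRESHOLD:
            self.retain_shared_in_l3 = True
        elif self.counter <= self.DROP_THRESHOLD:
            self.retain_shared_in_l3 = False
        self.counter = 0           # start a new sampling interval

sel = L3PolicySelector()
for _ in range(150):
    sel.on_event("c2c_transfer")
sel.update_policy()
print(sel.retain_shared_in_l3)     # True
```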
  • Patent number: 11803311
    Abstract: Methods and apparatuses to control digital data transfer via a memory channel between a memory module and a processor are disclosed. At least one of the memory module or the processor coalesces a plurality of short data words into multicast coalesced block data comprising a single data block for transfer via the memory channel. Each of the plurality of short data words pertains to one of at least two partitioned memory submodules in the memory module. The multicast coalesced block data is communicated over the memory channel.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Johnathan Alsop, Nuwan Jayasena, Shaizeen Aga, Andrew McCrabb
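The coalescing idea above, reduced to packing and unpacking: one short word per partitioned submodule is placed into a single block that crosses the channel once. The word size, submodule count, and byte order are assumptions.

```python
SUBMODULES = 4          # partitioned memory submodules behind one channel
WORD_BYTES = 8          # hypothetical short-word size
BLOCK_BYTES = SUBMODULES * WORD_BYTES

def coalesce(short_words):
    """Pack one short word per submodule into a single block for one
    transfer over the memory channel (multicast coalesced block data)."""
    assert len(short_words) == SUBMODULES
    block = bytearray()
    for word in short_words:
        block += word.to_bytes(WORD_BYTES, "little")
    return bytes(block)

def split(block):
    """Each submodule extracts the word addressed to it from the block."""
    return [int.from_bytes(block[i * WORD_BYTES:(i + 1) * WORD_BYTES], "little")
            for i in range(SUBMODULES)]

block = coalesce([11, 22, 33, 44])
assert len(block) == BLOCK_BYTES
print(split(block))    # [11, 22, 33, 44]
```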
  • Patent number: 11805026
    Abstract: Systems, apparatuses, and methods for utilizing training sequences on a replica lane are described. A transmitter is coupled to a receiver via a communication channel with a plurality of lanes. One of the lanes is a replica lane used for tracking the drift in the optimal sampling point due to temperature variations, power supply variations, or other factors. While data is sent on the data lanes, test patterns are sent on the replica lane to determine if the optimal sampling point for the replica lane has drifted since a previous test. If the optimal sampling point has drifted for the replica lane, adjustments are made to the sampling point of the replica lane and to the sampling points of the data lanes.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: October 31, 2023
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Stanley Ames Lackey, Jr., Damon Tohidi, Gerald R. Talbot, Edoardo Prete
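A toy version of the replica-lane tracking described above: sweep sampling phases against a known test pattern, find the new optimum, and apply the same shift to the data lanes. The eye data and phase units are fabricated for illustration only.

```python
def best_sample_point(eye, test_pattern):
    """Sweep sampling phases on the replica lane and return the one with the
    widest margin (here: most correct bits for the known test pattern)."""
    scores = {phase: sum(bit == expected
                         for bit, expected in zip(samples, test_pattern))
              for phase, samples in eye.items()}
    return max(scores, key=scores.get)

def track_drift(current_phase, eye, test_pattern, data_lane_phases):
    """If the replica lane's optimum has drifted, shift every data lane's
    sampling point by the same amount."""
    new_phase = best_sample_point(eye, test_pattern)
    drift = new_phase - current_phase
    if drift:
        data_lane_phases = [p + drift for p in data_lane_phases]
    return new_phase, data_lane_phases

pattern = [1, 0, 1, 1]
eye = {3: [1, 0, 1, 0], 4: [1, 0, 1, 1], 5: [1, 1, 1, 1]}  # phase -> samples
print(track_drift(current_phase=3, eye=eye, test_pattern=pattern,
                  data_lane_phases=[10, 11, 12, 13]))
```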
  • Patent number: 11803734
    Abstract: Methods, devices, systems, and instructions for adaptive quantization in an artificial neural network (ANN) calculate a distribution of ANN information; select a quantization function from a set of quantization functions based on the distribution; apply the quantization function to the ANN information to generate quantized ANN information; load the quantized ANN information into the ANN; and generate an output based on the quantized ANN information. Some examples recalculate the distribution of ANN information and reselect the quantization function from the set of quantization functions based on the recalculated distribution if the output does not sufficiently correlate with a known correct output. In some examples, the ANN information includes a set of training data. In some examples, the ANN information includes a plurality of link weights.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Daniel I. Lowell, Sergey Voronov, Mayank Daga
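A small sketch of distribution-driven quantizer selection as described above. The two quantization functions and the heavy-tail test (any weight more than two standard deviations from the mean) are assumptions; the patent's set of quantization functions and its selection rule may differ.

```python
import math
import statistics

def quantize_uniform(xs, levels=16):
    lo, hi = min(xs), max(xs)
    step = (hi - lo) / (levels - 1) or 1.0
    return [round((x - lo) / step) * step + lo for x in xs]

def quantize_log(xs, levels=16):
    """Quantize magnitudes on a log scale; better suited to heavy tails."""
    signs = [1 if x >= 0 else -1 for x in xs]
    logs = quantize_uniform([math.log(abs(x) + 1e-9) for x in xs], levels)
    return [s * math.exp(q) for s, q in zip(signs, logs)]

def select_quantizer(weights):
    """Pick a quantization function from a small set based on the weight
    distribution: heavy-tailed weights get the logarithmic quantizer."""
    mean, stdev = statistics.mean(weights), statistics.pstdev(weights)
    heavy_tailed = any(abs(w - mean) > 2 * stdev for w in weights)
    return quantize_log if heavy_tailed else quantize_uniform

weights = [0.01, -0.02, 0.03, 0.05, -0.04, 2.5]   # one outlier -> heavy-tailed
quantize = select_quantizer(weights)
print(quantize.__name__, quantize(weights)[:3])
```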