Patents Assigned to Advanced Micro Devices, Inc.
  • Publication number: 20230325313
    Abstract: Disclosed is a system and method for use in a cache for suppressing modification of a cache line. The system and method include a processor and a memory operating cooperatively with a cache controller. The memory includes a coherence directory, stored within a cache, that is created to track at least one cache line in the cache via the cache controller. The processor instructs the cache controller to store first data in a cache line in the cache. The cache controller tags the cache line based on the first data. The processor instructs the cache controller to store second data in the cache line in the cache, causing eviction of the first data from the cache line. The processor compares the first data and the second data based on the tagging and suppresses modification of the cache line based on the comparison of the first data and the second data.
    Type: Application
    Filed: April 17, 2023
    Publication date: October 12, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Paul J. Moyer
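    Illustrative sketch: a minimal C model of suppressing a cache line modification when the incoming data matches the data already stored; the 64-byte line size and the full byte-for-byte compare (in place of the tag-based compare the abstract describes) are assumptions for illustration, not the patented implementation.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define LINE_BYTES 64

      typedef struct {
          uint8_t data[LINE_BYTES];
          bool    dirty;
      } cache_line_t;

      /* Returns true if the store was suppressed because the incoming data
       * matches what the line already holds. */
      static bool store_line(cache_line_t *line, const uint8_t *incoming) {
          if (memcmp(line->data, incoming, LINE_BYTES) == 0) {
              return true;            /* identical data: skip the modification */
          }
          memcpy(line->data, incoming, LINE_BYTES);
          line->dirty = true;         /* real change: the line is now modified */
          return false;
      }

      int main(void) {
          cache_line_t line = {{0}, false};
          uint8_t same[LINE_BYTES] = {0};
          printf("suppressed: %d\n", store_line(&line, same)); /* prints 1 */
          return 0;
      }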
  • Patent number: 11782848
    Abstract: Systems, apparatuses, and methods for implementing a speculative probe mechanism are disclosed. A system includes at least multiple processing nodes, a probe filter, and a coherent slave. The coherent slave includes an early probe cache to cache recent lookups to the probe filter. The early probe cache includes entries for regions of memory, wherein a region includes a plurality of cache lines. The coherent slave performs parallel lookups to the probe filter and the early probe cache responsive to receiving a memory request. An early probe is sent to a first processing node responsive to determining that a lookup to the early probe cache hits on a first entry identifying the first processing node as an owner of a first region targeted by the memory request and responsive to determining that a confidence indicator of the first entry is greater than a threshold.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Amit P. Apte, Ganesh Balakrishnan, Vydhyanathan Kalyanasundharam, Kevin M. Lepak
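    Illustrative sketch: a minimal C model of a region-granular early probe cache gated by a confidence threshold; the entry count, region size, and threshold are assumptions for illustration, and the parallel probe filter lookup is omitted. This is not the patented design.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define REGION_SHIFT 16          /* assume 64 KiB regions (many cache lines) */
      #define EPC_ENTRIES  8

      typedef struct {
          bool     valid;
          uint64_t region;             /* address >> REGION_SHIFT */
          int      owner_node;
          int      confidence;
      } epc_entry_t;

      static epc_entry_t epc[EPC_ENTRIES];
      static const int CONF_THRESHOLD = 2;

      static void send_early_probe(int node, uint64_t addr) {
          printf("early probe -> node %d for 0x%llx\n", node,
                 (unsigned long long)addr);
      }

      /* Called on a memory request; the probe filter lookup would be issued in
       * parallel with this check. */
      static void maybe_early_probe(uint64_t addr) {
          uint64_t region = addr >> REGION_SHIFT;
          for (int i = 0; i < EPC_ENTRIES; i++) {
              if (epc[i].valid && epc[i].region == region &&
                  epc[i].confidence > CONF_THRESHOLD) {
                  send_early_probe(epc[i].owner_node, addr);
                  return;
              }
          }
      }

      int main(void) {
          epc[0] = (epc_entry_t){true, 0x1234, 3, 5};
          maybe_early_probe((0x1234ULL << REGION_SHIFT) | 0x40);
          return 0;
      }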
  • Patent number: 11782494
    Abstract: A method and apparatus control power management of a graphics processing core, when multiple virtual machines are allocated to the graphics processing core, at a much finer-grained level than conventional systems. In one example, the method and apparatus process a plurality of virtual machine power control setting requests to determine a power control request for a power management unit of a graphics processing core. The method and apparatus then control power levels of the graphics processing core with the power management unit based on the determined power control request.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: October 10, 2023
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Oleksandr Khodorkovsky, Stephen D. Presant
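    Illustrative sketch: a minimal C model of resolving several per-VM power control setting requests into a single request for the graphics core's power management unit; picking the maximum requested level is an illustrative policy assumed here, not necessarily the patented one.
      #include <stdio.h>

      /* Resolve the requests of n virtual machines into one PMU power level. */
      static int resolve_power_request(const int *vm_requests, int n) {
          int resolved = 0;
          for (int i = 0; i < n; i++) {
              if (vm_requests[i] > resolved) {
                  resolved = vm_requests[i];   /* honor the most demanding VM */
              }
          }
          return resolved;
      }

      int main(void) {
          int requests[] = {1, 3, 2};          /* power levels requested by 3 VMs */
          printf("PMU power level: %d\n", resolve_power_request(requests, 3));
          return 0;
      }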
  • Patent number: 11782897
    Abstract: Described herein is a system and method for multiplexer tree (muxtree) indexing. Muxtree indexing performs hashing and row reduction in parallel by using at least one bit of a lookup address at least once along a particular path of the muxtree. The muxtree indexing generates a different final index than conventional hashed indexing but still results in a fair hash, where all table entries are used with equal distribution when the selects are uniformly random.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Steven R. Havlir, Patrick J. Shyvers
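    Illustrative sketch: a minimal C model of indexing an 8-entry table through a 3-level binary mux tree whose select bits are XORs of lookup-address bits, so the hashing happens along the same path as the row reduction; the particular bit pairings are assumptions for illustration and not the patented indexing scheme. With uniformly random selects, each entry is chosen with equal probability.
      #include <stdint.h>
      #include <stdio.h>

      /* 2:1 multiplexer. */
      static int mux2(int a, int b, int sel) { return sel ? b : a; }

      static int muxtree_index(const int table[8], uint32_t addr) {
          int s0 = ((addr >> 0) ^ (addr >> 3)) & 1;   /* level-0 select */
          int s1 = ((addr >> 1) ^ (addr >> 4)) & 1;   /* level-1 select */
          int s2 = ((addr >> 2) ^ (addr >> 5)) & 1;   /* level-2 select */

          int l0[4], l1[2];
          for (int i = 0; i < 4; i++)                 /* 8 entries -> 4 */
              l0[i] = mux2(table[2 * i], table[2 * i + 1], s0);
          l1[0] = mux2(l0[0], l0[1], s1);             /* 4 -> 2 */
          l1[1] = mux2(l0[2], l0[3], s1);
          return mux2(l1[0], l1[1], s2);              /* 2 -> 1 */
      }

      int main(void) {
          int table[8] = {10, 11, 12, 13, 14, 15, 16, 17};
          printf("selected entry: %d\n", muxtree_index(table, 0x2Au));
          return 0;
      }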
  • Patent number: 11783529
    Abstract: A technique for performing ray tracing operations is provided. The technique includes combining one or more common exponent values of a compressed box node with one or more minimum vertex mantissas of the compressed box node and one or more maximum vertex mantissas of the compressed box node to obtain one or more minimum vertices and one or more maximum vertices; and combining a minimum vertex of the one or more minimum vertices and a maximum vertex of the one or more maximum vertices to obtain a bounding box for a compressed box data item of the compressed box node.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Skyler Jonathon Saleh, Chen Huang
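    Illustrative sketch: a minimal C model of combining a shared per-axis exponent with per-vertex mantissas to rebuild the minimum and maximum vertices of a bounding box; the IEEE-754 single-precision layout with an 8-bit common exponent field and 23-bit mantissas is an assumption for illustration, not the compressed box node format of the patent.
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      typedef struct { float x[3]; } vec3;

      /* Reassemble a float from a shared biased exponent and a 23-bit mantissa. */
      static float combine(uint32_t shared_exp, uint32_t mantissa) {
          uint32_t bits = ((shared_exp & 0xFFu) << 23) | (mantissa & 0x7FFFFFu);
          float f;
          memcpy(&f, &bits, sizeof f);
          return f;
      }

      int main(void) {
          uint32_t exp_per_axis[3] = {127, 127, 128};   /* common exponent values */
          uint32_t min_mant[3]     = {0x000000, 0x400000, 0x200000};
          uint32_t max_mant[3]     = {0x700000, 0x600000, 0x500000};
          vec3 bmin, bmax;
          for (int i = 0; i < 3; i++) {
              bmin.x[i] = combine(exp_per_axis[i], min_mant[i]);
              bmax.x[i] = combine(exp_per_axis[i], max_mant[i]);
          }
          printf("box min (%g,%g,%g) max (%g,%g,%g)\n",
                 bmin.x[0], bmin.x[1], bmin.x[2],
                 bmax.x[0], bmax.x[1], bmax.x[2]);
          return 0;
      }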
  • Patent number: 11782838
    Abstract: Techniques for prefetching are provided. The techniques include receiving a first prefetch command; in response to determining that a history buffer indicates that first information associated with the first prefetch command has not already been prefetched, prefetching the first information into a memory; receiving a second prefetch command; and in response to determining that the history buffer indicates that second information associated with the second prefetch command has already been prefetched, avoiding prefetching the second information into the memory.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anirudh R. Acharya, Alexander Fuad Ashkar
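    Illustrative sketch: a minimal C model of gating prefetch commands with a history buffer so information that has already been prefetched is not fetched again; the direct-mapped history structure and 64-byte granularity are assumptions for illustration.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define HISTORY_SLOTS 256

      static uint64_t history[HISTORY_SLOTS];
      static bool     history_valid[HISTORY_SLOTS];

      static void do_prefetch(uint64_t addr) {
          printf("prefetching 0x%llx\n", (unsigned long long)addr);
      }

      static void handle_prefetch_command(uint64_t addr) {
          unsigned slot = (addr >> 6) % HISTORY_SLOTS;   /* 64-byte granularity */
          if (history_valid[slot] && history[slot] == addr) {
              return;                    /* already prefetched: skip */
          }
          do_prefetch(addr);
          history[slot] = addr;
          history_valid[slot] = true;
      }

      int main(void) {
          handle_prefetch_command(0x1000);   /* prefetches */
          handle_prefetch_command(0x1000);   /* suppressed by the history buffer */
          return 0;
      }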
  • Patent number: 11782640
    Abstract: A memory controller includes a command queue that receives and stores decoded memory commands and related information, including the type, priority, age, and targeted region of the memory system for each decoded memory command, and an arbiter, coupled to the command queue, that picks selected decoded memory commands from the command queue for dispatch to the memory system by comparing the priority and the age of decoded memory commands having a first type. The arbiter detects when the command queue receives a decoded memory command of a second type, opposite to the first type, that accesses a first memory region of the memory system, and in response elevates at least one of the priority and the age of a decoded command of the first type, already stored in the command queue, that accesses the first memory region.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Guanhao Shen, Ravindra Nath Bhargava
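    Illustrative sketch: a minimal C model of elevating the priority of queued commands of one type when a command of the opposite type arrives for the same memory region; the queue layout and the +1 elevation policy are assumptions for illustration.
      #include <stdio.h>

      typedef enum { CMD_READ, CMD_WRITE } cmd_type_t;

      typedef struct {
          cmd_type_t type;
          int        region;
          int        priority;
          int        age;
      } mem_cmd_t;

      #define QUEUE_DEPTH 4
      static mem_cmd_t queue[QUEUE_DEPTH] = {
          { CMD_READ, 7, 1, 0 },
          { CMD_READ, 3, 1, 2 },
      };
      static int queue_len = 2;

      static void enqueue(mem_cmd_t cmd) {
          /* Elevate already-queued commands of the opposite type that target the
           * same region, so they drain before the region switches command type. */
          for (int i = 0; i < queue_len; i++) {
              if (queue[i].type != cmd.type && queue[i].region == cmd.region) {
                  queue[i].priority += 1;
              }
          }
          if (queue_len < QUEUE_DEPTH) queue[queue_len++] = cmd;
      }

      int main(void) {
          enqueue((mem_cmd_t){ CMD_WRITE, 7, 1, 0 });
          printf("read to region 7 now has priority %d\n", queue[0].priority);
          return 0;
      }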
  • Patent number: 11784427
    Abstract: A socket actuation mechanism for package insertion and package-socket alignment, including: a socket frame comprising a plurality of first hinge portions; a carrier frame comprising: a center portion comprising one or more package interlocks; and a tab extending from a first end of the carrier frame, the tab comprising a second hinge portion couplable with the plurality of first hinge portions to form a hinge coupling the carrier frame to the socket frame.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: October 10, 2023
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Peng Li, Changwei Liang
  • Publication number: 20230315191
    Abstract: Core activation and deactivation for a multi-core processor is described. In accordance with the described techniques, a processor having multiple cores operates using a first core configuration. A request to switch from the first core configuration to a second core configuration is received. Responsive to the request, a switch from the first core configuration to the second core configuration occurs by adjusting a number of active cores of the processor without rebooting.
    Type: Application
    Filed: March 30, 2022
    Publication date: October 5, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: William Robert Alverson, Amitabh Mehra, Jerry Anton Ahrens, Grant Evan Ley, Anil Harwani, Joshua Taylor Knight
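    Illustrative sketch: a minimal C program that adjusts the number of active cores without rebooting by writing to the Linux CPU hotplug sysfs interface; this OS-level analogy (which typically requires root privileges) is an assumption for illustration and is not the hardware core-configuration switch the publication describes.
      #include <stdio.h>

      /* Bring core `cpu` online (1) or offline (0). Returns 0 on success. */
      static int set_core_online(int cpu, int online) {
          char path[64];
          snprintf(path, sizeof path, "/sys/devices/system/cpu/cpu%d/online", cpu);
          FILE *f = fopen(path, "w");
          if (!f) return -1;
          fprintf(f, "%d\n", online);
          return fclose(f);
      }

      int main(void) {
          /* Switch from a 4-active-core configuration to 2 active cores. */
          for (int cpu = 2; cpu < 4; cpu++) {
              if (set_core_online(cpu, 0) != 0) {
                  fprintf(stderr, "could not offline cpu%d\n", cpu);
              }
          }
          return 0;
      }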
  • Publication number: 20230315320
    Abstract: A page swapping memory protection system tracks accesses to physical memory pages, such as in a table in which each row stores a physical memory page address and a counter value. The counter value records the number of accesses (e.g., read accesses or write accesses) to the corresponding physical memory page. In response to one of the counters exceeding a threshold value, the corresponding physical memory page is swapped with another page in physical memory (e.g., the page the table indicates has the smallest number of accesses). Accordingly, unreliability introduced into the physical memory by frequent accesses to a particular physical memory page is mitigated.
    Type: Application
    Filed: March 24, 2022
    Publication date: October 5, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: SeyedMohammad SeyedzadehDelcheh, Sriseshan Srikanth
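    Illustrative sketch: a minimal C model of the access-count table: each tracked physical page has a counter, and a page whose counter crosses a threshold is swapped with the least-accessed tracked page; the table size, threshold, and the swap stub are assumptions for illustration.
      #include <stdint.h>
      #include <stdio.h>

      #define TRACKED_PAGES  4
      #define SWAP_THRESHOLD 1000

      typedef struct {
          uint64_t page_addr;
          uint64_t count;
      } page_entry_t;

      static page_entry_t table[TRACKED_PAGES] = {
          {0x1000, 0}, {0x2000, 0}, {0x3000, 0}, {0x4000, 0},
      };

      static void swap_pages(uint64_t a, uint64_t b) {
          printf("swapping page 0x%llx with 0x%llx\n",
                 (unsigned long long)a, (unsigned long long)b);
      }

      static void record_access(int idx) {
          if (++table[idx].count <= SWAP_THRESHOLD) return;
          int coldest = 0;                       /* least-accessed tracked page */
          for (int i = 1; i < TRACKED_PAGES; i++) {
              if (table[i].count < table[coldest].count) coldest = i;
          }
          swap_pages(table[idx].page_addr, table[coldest].page_addr);
          table[idx].count = 0;                  /* reset after the swap */
      }

      int main(void) {
          for (int i = 0; i <= SWAP_THRESHOLD; i++) record_access(0);
          return 0;
      }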
  • Publication number: 20230315188
    Abstract: Methods and systems are disclosed for transitioning, by a hardware-based controller, a system on a chip (SoC) into different power states. Techniques disclosed include tracking, by the controller, metrics associated with the SoC and transitioning, by the controller, the SoC from a first power state to a second power state based on the tracked metrics, where the total amount of power used by at least a portion of the transition from the first power state to the second power state and by the time spent in the second power state is less than the total amount of power that would have been used by remaining in the first power state.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Alexander J. Branover, Thomas J. Gibney, Mihir Shaileshbhai Doctor, Indrani Paul, Benjamin Tsien, Stephen V. Kosonocky, John P. Petry, Christopher T. Weaver
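    Illustrative sketch: a minimal C model of the energy comparison behind the transition decision: enter the deeper power state only if the energy spent transitioning plus the energy spent residing there is lower than staying in the current state for the same interval; the power and residency numbers are assumptions for illustration.
      #include <stdbool.h>
      #include <stdio.h>

      static bool should_transition(double transition_energy_j,
                                    double p_state2_w,
                                    double p_state1_w,
                                    double predicted_residency_s) {
          double energy_if_switch = transition_energy_j +
                                    p_state2_w * predicted_residency_s;
          double energy_if_stay   = p_state1_w * predicted_residency_s;
          return energy_if_switch < energy_if_stay;
      }

      int main(void) {
          /* 0.05 J to transition, 0.2 W in the deeper state, 1.5 W if we stay,
           * and a predicted 0.1 s of idle residency. */
          printf("transition: %d\n", should_transition(0.05, 0.2, 1.5, 0.1));
          return 0;
      }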
  • Publication number: 20230315171
    Abstract: Package lids with carveouts configured for processor connection and alignment are described. Lid carveouts are configured to align and mechanically secure a cooling device to the package lid by receiving protrusions of the cooling device. Because the lid carveouts ensure precise alignment and orientation of a cooling device relative to a package lid, the lid design enables targeted cooling of discrete portions of the lid. Lid carveouts are further configured to expose one or more connectors disposed on a surface that supports package internal components. When contacted by corresponding connectors of a cooling device, the lid carveouts enable direct connections between the package and the attached cooling device. By creating a direct connection between package components and an attached cooling device, the lid carveouts enable a high-speed connection for proactive and on-demand cooling actuation.
    Type: Application
    Filed: March 25, 2022
    Publication date: October 5, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Jerry Anton Ahrens, William Robert Alverson, Amitabh Mehra, Grant Evan Ley, Anil Harwani, Joshua Taylor Knight
  • Publication number: 20230315657
    Abstract: Methods and systems are disclosed for cross-chiplet performance data streaming. Techniques disclosed include accumulating, by a subservient chiplet, event data associated with an event indicative of a performance aspect of the subservient chiplet; sending, by the subservient chiplet, the event data over a chiplet bus to a master chiplet; and adding, by the master chiplet, the received event data to an event record, the event record containing previously received, from the subservient chiplet over the chiplet bus, event data associated with the event.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bryan Broussard, Pravesh Gupta, Benjamin Tsien, Vydhyanathan Kalyanasundharam
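    Illustrative sketch: a minimal C model of the accumulate/send/add flow for one performance event streamed from a subservient chiplet to a master chiplet; modeling the chiplet bus as a direct function call is a simplifying assumption for illustration.
      #include <stdint.h>
      #include <stdio.h>

      static uint64_t master_event_record;     /* running total on the master */

      /* Master side: fold newly received event data into the event record. */
      static void master_receive(uint64_t event_data) {
          master_event_record += event_data;
      }

      /* Subservient side: accumulate locally, then flush over the bus. */
      static uint64_t local_accumulator;

      static void chiplet_count_event(void) { local_accumulator++; }

      static void chiplet_flush(void) {
          master_receive(local_accumulator);   /* send over the chiplet bus */
          local_accumulator = 0;
      }

      int main(void) {
          for (int i = 0; i < 5; i++) chiplet_count_event();
          chiplet_flush();
          for (int i = 0; i < 3; i++) chiplet_count_event();
          chiplet_flush();
          printf("master record: %llu\n",
                 (unsigned long long)master_event_record);   /* prints 8 */
          return 0;
      }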
  • Patent number: 11776599
    Abstract: A processing device is provided which includes a processor and a data storage structure. The data storage structure comprises a data storage array comprising a plurality of lines. Each line comprises at least one A latch configured to store a data bit and a clock gater. The data storage structure also comprises a write data B latch configured to store, over different clock cycles, a different data bit, each to be written to the at least one A latch of one of the plurality of lines. The data storage structure also comprises a plurality of write index B latches shared by the clock gaters of the lines. The write index B latches are configured to store, over the different clock cycles, combinations of index bits having values which index one of the lines to which a corresponding data bit is to be stored.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Patrick J. Shyvers
  • Patent number: 11775349
    Abstract: One or more processors are operative to carry out neural network operations and include a plurality of compute units (CUs) configurable for neural network operations. The neural network compute unit remapping logic detects a condition to remap neural network compute units that are currently used in carrying out a neural network operation in a processor with at least one replacement compute unit that is not currently being used to carry out the neural network operation. In response to detecting the condition, the logic remaps a logical address of at least one currently used compute unit to a different physical address that corresponds to the replacement compute unit and causes the replacement compute unit to carry out neural network operations.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 3, 2023
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Greg Sadowski
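    Illustrative sketch: a minimal C model of remapping a logical compute unit to a spare physical compute unit once a remap condition is detected; the map layout, spare-CU bookkeeping, and the trigger are assumptions for illustration.
      #include <stdio.h>

      #define LOGICAL_CUS  4
      #define PHYSICAL_CUS 5             /* one spare physical compute unit */

      static int cu_map[LOGICAL_CUS] = {0, 1, 2, 3};   /* logical -> physical */
      static int spare_cu = 4;

      /* Redirect work destined for `logical_cu` to the spare physical CU. */
      static void remap_cu(int logical_cu) {
          int retired = cu_map[logical_cu];
          cu_map[logical_cu] = spare_cu;
          spare_cu = retired;            /* the retired CU becomes the new spare */
      }

      int main(void) {
          remap_cu(2);                   /* e.g., CU 2 flagged by the condition */
          for (int i = 0; i < LOGICAL_CUS; i++) {
              printf("logical CU %d -> physical CU %d\n", i, cu_map[i]);
          }
          return 0;
      }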
  • Patent number: 11775799
    Abstract: Systems, apparatuses, and methods for managing buffers in a neural network implementation with heterogeneous memory are disclosed. A system includes a neural network coupled to a first memory and a second memory. The first memory is a relatively low-capacity, high-bandwidth memory while the second memory is a relatively high-capacity, low-bandwidth memory. During a forward propagation pass of the neural network, a run-time manager monitors the usage of the buffers for the various layers of the neural network. During a backward propagation pass of the neural network, the run-time manager determines how to move the buffers between the first and second memories based on the monitored buffer usage during the forward propagation pass. As a result, the run-time manager is able to reduce memory access latency for the layers of the neural network during the backward propagation pass.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Georgios Mappouras, Amin Farmahini-Farahani, Sudhanva Gurumurthi, Abhinav Vishnu, Gabriel H. Loh
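    Illustrative sketch: a minimal C model of choosing, per layer buffer, whether it resides in the small fast memory or the large slow memory for the backward pass, based on sizes recorded during the forward pass; the greedy policy, capacities, and buffer sizes are assumptions for illustration, not the patented run-time manager.
      #include <stdio.h>

      #define LAYERS 4

      int main(void) {
          double fast_capacity_mb = 96.0;           /* low-capacity, high-bandwidth */
          double buffer_mb[LAYERS] = {64.0, 48.0, 16.0, 8.0}; /* from forward pass */

          /* The backward pass visits layers in reverse; greedily keep the buffers
           * needed soonest in fast memory until it fills up. */
          for (int layer = LAYERS - 1; layer >= 0; layer--) {
              const char *placement;
              if (buffer_mb[layer] <= fast_capacity_mb) {
                  fast_capacity_mb -= buffer_mb[layer];
                  placement = "fast memory";
              } else {
                  placement = "slow memory";
              }
              printf("layer %d buffer (%.0f MB) -> %s\n",
                     layer, buffer_mb[layer], placement);
          }
          return 0;
      }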
  • Patent number: 11775043
    Abstract: Systems and methods are disclosed for reducing the power consumption of a system. Techniques are described that queue a message, sent by a source engine of the system, in a queue of a destination engine of the system that is in a sleep mode. Then, a priority level associated with the queued message is determined. If the priority level is at a maximum level, the destination engine is brought into an active mode. If the priority level is at an intermediate level, the destination engine is brought into an active mode when a time, associated with the intermediate level, has elapsed. When the destination engine is brought into an active mode it processes all messages accumulated in its queue in an order determined by their associated priority levels.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Vidyashankar Viswanathan
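    Illustrative sketch: a minimal C model of the wake policy: a maximum-priority message wakes the sleeping destination engine immediately, an intermediate-priority message arms a delayed wake, and low-priority messages simply wait in the queue; the priority levels, delay value, and stubs are assumptions for illustration.
      #include <stdio.h>

      enum { PRIO_LOW, PRIO_INTERMEDIATE, PRIO_MAX };

      static void wake_engine_now(void)        { printf("waking engine now\n"); }
      static void arm_wake_timer(int delay_ms) { printf("wake in %d ms\n", delay_ms); }

      static void enqueue_message(int priority, int engine_asleep) {
          /* Queue the message here (omitted), then decide how to wake. */
          if (!engine_asleep) return;
          if (priority == PRIO_MAX) {
              wake_engine_now();
          } else if (priority == PRIO_INTERMEDIATE) {
              arm_wake_timer(500);       /* delay associated with this level */
          }
          /* PRIO_LOW: stay asleep until another message forces a wake. */
      }

      int main(void) {
          enqueue_message(PRIO_INTERMEDIATE, 1);
          enqueue_message(PRIO_MAX, 1);
          return 0;
      }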
  • Patent number: 11776085
    Abstract: A processing system includes a graphics pipeline that executes a first shader of a first type and a second shader of a second type. In some cases, the first shader is a geometry shader and the second shader is a pixel shader. The processing system also includes buffers that hold primitives generated by the first shader and provide the primitives to the second shader. The processing system also includes a primitive hub that monitors fullness of the buffers. Launching of waves from the first shader is throttled based on the fullness of the buffers. A shader processor input (SPI) selectively throttles the waves launched by the geometry shader based on a signal from the primitive hub indicating the fullness, an indication of relative resource usage of geometry waves and pixel waves in the graphics pipeline, or an indication of lifetimes of the geometry waves.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nishank Pathak, Randy Wayne Ramsey, Tad Litwiller, Rex Eldon McCrary
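    Illustrative sketch: a minimal C model of gating geometry-shader wave launches on the fullness of the buffers that feed the pixel shader; the 90% fullness threshold is an assumption for illustration and the additional resource-usage and lifetime inputs from the abstract are omitted.
      #include <stdbool.h>
      #include <stdio.h>

      static bool should_launch_geometry_wave(int buffer_entries_used,
                                              int buffer_entries_total) {
          /* Throttle when the primitive buffers are nearly full, leaving room
           * for the waves that are already in flight. */
          return buffer_entries_used * 10 < buffer_entries_total * 9;
      }

      int main(void) {
          printf("launch at 50%% full: %d\n", should_launch_geometry_wave(50, 100));
          printf("launch at 95%% full: %d\n", should_launch_geometry_wave(95, 100));
          return 0;
      }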
  • Patent number: 11778803
    Abstract: A system and method for efficiently creating layout for memory bit cells are described. In various implementations, a memory bit cell uses Cross field effect transistors (FETs) that include vertically stacked gate all around (GAA) transistors with conducting channels oriented in an orthogonal direction between them. The channels of the vertically stacked transistors use opposite doping polarities. The memory bit cell includes one of a read bit line and a write word line routed in no other metal layer other than a local interconnect layer. In addition, a six transistor (6T) random access data storage of the given memory bit cell consumes a planar area above a silicon substrate of four transistors.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Richard T. Schultz
  • Publication number: 20230305923
    Abstract: A memory system uses error detection codes to detect when errors have occurred in a region of memory. A count of the number of errors is kept, and a notification is output in response to the number of errors satisfying a threshold value. The notification is an indication to a host (e.g., a program accessing or managing a machine learning system) that the threshold number of errors have been detected in the region of memory. As long as the number of errors that have been detected in the region of memory remains under the threshold number, no notification need be output to the host.
    Type: Application
    Filed: March 25, 2022
    Publication date: September 28, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Sudhanva Gurumurthi, Ganesh Suryanarayan Dasika
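    Illustrative sketch: a minimal C model of counting detected errors per memory region and notifying the host only once the count reaches a threshold; the region bookkeeping, threshold value, and the notify stub are assumptions for illustration.
      #include <stdint.h>
      #include <stdio.h>

      #define ERROR_THRESHOLD 16

      typedef struct {
          uint64_t region_base;
          uint32_t error_count;
      } region_stats_t;

      static void notify_host(uint64_t region_base, uint32_t count) {
          printf("region 0x%llx reached %u detected errors\n",
                 (unsigned long long)region_base, count);
      }

      /* Called whenever the error detection code flags an error in the region. */
      static void record_error(region_stats_t *r) {
          r->error_count++;
          if (r->error_count == ERROR_THRESHOLD) {
              notify_host(r->region_base, r->error_count);  /* first notification */
          }
      }

      int main(void) {
          region_stats_t r = { 0x100000, 0 };
          for (int i = 0; i < ERROR_THRESHOLD; i++) record_error(&r);
          return 0;
      }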