Patents Assigned to Advanced Micro Devices, Inc.
-
Publication number: 20250112470
Abstract: The disclosed device includes power circuits that can communicate with a control circuit. In response to a power circuit communicating a low efficiency state, the control circuit can redistribute at least a portion of a load of the power circuit to one or more other power circuits. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: David King Wai Li, Indrani Paul
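A minimal Python sketch of the kind of load redistribution the abstract describes; this is an illustration only, not the patented design, and the PowerStage class, rebalance function, and shed_fraction rule are all assumptions.

from dataclasses import dataclass

@dataclass
class PowerStage:
    name: str
    load_amps: float
    capacity_amps: float
    low_efficiency: bool = False   # state reported by the power circuit

def rebalance(stages, shed_fraction=0.5):
    """Move part of the load away from stages reporting a low-efficiency state."""
    for src in stages:
        if not src.low_efficiency:
            continue
        shed = src.load_amps * shed_fraction
        sinks = [s for s in stages
                 if s is not src and not s.low_efficiency and s.capacity_amps > s.load_amps]
        for sink in sinks:
            if shed <= 0:
                break
            moved = min(sink.capacity_amps - sink.load_amps, shed)
            sink.load_amps += moved
            src.load_amps -= moved
            shed -= moved

stages = [PowerStage("VR0", 30.0, 40.0, low_efficiency=True),
          PowerStage("VR1", 10.0, 40.0),
          PowerStage("VR2", 15.0, 40.0)]
rebalance(stages)
print([(s.name, s.load_amps) for s in stages])   # VR0 sheds 15 A to VR1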
-
Publication number: 20250111586
Abstract: A technique for performing ray tracing operations is provided. The technique includes for a ray being tested for intersection with geometry associated with a bounding volume hierarchy, traversing to a pre-filtering node that includes information for filtering out triangles of a leaf node of the bounding volume hierarchy; evaluating a quantized ray that corresponds to the ray against quantized triangles of the pre-filtering node to filter out one or more triangles of the leaf node from consideration; and testing the triangles of the leaf node that are not filtered out and not testing the triangles of the leaf node that are filtered out.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Michael John Livesley, David William John Pankratz, Sean Keely, Andrew Erin Kensler
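A minimal Python sketch of a conservative pre-filter of this general kind; it is not the patented algorithm. The quantize_box, boxes_overlap, and prefilter helpers, the grid parameters, and the use of quantized bounding boxes as the coarse test are assumptions for illustration.

import math

def quantize_box(lo, hi, grid_origin, cell_size, bits=8):
    """Snap an axis-aligned box to a coarse integer grid, rounding outward so
    the quantized box always encloses the original (a conservative test)."""
    limit = (1 << bits) - 1
    def q(value, axis, rounder):
        return max(0, min(limit, rounder((value - grid_origin[axis]) / cell_size)))
    return ([q(lo[a], a, math.floor) for a in range(3)],
            [q(hi[a], a, math.ceil) for a in range(3)])

def boxes_overlap(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

def prefilter(quantized_ray_box, quantized_tri_boxes):
    """Indices of triangles that survive the coarse test; only these would be
    handed to the exact ray/triangle intersection test."""
    return [i for i, tri_box in enumerate(quantized_tri_boxes)
            if boxes_overlap(quantized_ray_box, tri_box)]

origin, cell = (0.0, 0.0, 0.0), 0.05
ray_q = quantize_box((0.1, 0.1, 0.1), (0.9, 0.2, 0.2), origin, cell)
tris_q = [quantize_box((0.5, 0.1, 0.1), (0.6, 0.3, 0.3), origin, cell),
          quantize_box((0.1, 0.8, 0.1), (0.2, 0.9, 0.2), origin, cell)]
print(prefilter(ray_q, tris_q))   # only the first triangle survives the pre-filter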
-
Publication number: 20250110894
Abstract: Scratchpad memory translation lookaside buffer techniques are described. In an implementation, the techniques described herein relate to a device including a memory management unit implemented in hardware of an integrated circuit to receive a mapping instruction from a mapping instruction source, the mapping instruction specifying a mapping between a virtual memory address and a physical memory address of a scratchpad memory and store a virtual-to-physical mapping entry in a translation lookaside buffer based on the mapping instruction.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventor: Mark Evan Wilkening
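A small Python model of the behavior the abstract describes, assuming a simple page-granular mapping; the ScratchpadTLB class, map_instruction method, and 4 KiB page size are illustrative assumptions, not the hardware design.

PAGE_SIZE = 4096

class ScratchpadTLB:
    def __init__(self):
        self.entries = {}   # virtual page number -> physical page number

    def map_instruction(self, virt_addr, phys_addr):
        """Models the mapping instruction: record a virtual-to-physical entry."""
        self.entries[virt_addr // PAGE_SIZE] = phys_addr // PAGE_SIZE

    def translate(self, virt_addr):
        vpn, offset = divmod(virt_addr, PAGE_SIZE)
        ppn = self.entries.get(vpn)
        if ppn is None:
            raise LookupError("no scratchpad mapping for this virtual address")
        return ppn * PAGE_SIZE + offset

tlb = ScratchpadTLB()
tlb.map_instruction(0x1000, 0x8000)   # map one scratchpad page
assert tlb.translate(0x1234) == 0x8234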
-
Publication number: 20250112389
Abstract: A data interface connector and method of manufacture and/or assembly thereof can include first electrical terminals at a first end of the data interface connector, the first electrical terminals being configured to interface with a mating data interface connector conforming to a first data interface specification. The data interface connector and method of manufacture and/or assembly thereof can include second electrical terminals at a second end of the data interface connector, the second electrical terminals being configured to interface with data interface pads on a circuit board; where the data interface pads have pitches and lengths according to a second data interface specification.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventor: HaiFeng Gu
-
Publication number: 20250110663
Abstract: A data processor that is operable to be coupled to a memory includes a memory operation array, a controller, a refresh logic circuit, and a selector. The memory operation array is for storing memory operations for a first power state of the memory. The controller is responsive to a power state change request to execute a plurality of memory operations from the memory operation array when the first power state is selected. The refresh logic circuit generates refresh cycles periodically for the memory. The selector is for multiplexing the refresh cycles with the memory operations during a power state change to the first power state.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Jean J. Chittilappilly, Kevin M. Brandl, Jing Wang, Kedarnath Balakrishnan
-
Publication number: 20250110864
Abstract: Enhanced methods for memory context restore are described. A device may include a physical layer (PHY) having an interface to support communication of command signals and data with a physical memory. The PHY implements a training mode to train the interface, detect values of a plurality of parameters as part of training the interface, and store the detected values as initial training data. The PHY also implements a retraining mode to use the initial training data as seed data to retrain the interface.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Anwar Parvez Kashem, Alicia Wen Ju Yurie Leong, Glennis Eliagh Covington
-
Publication number: 20250110525
Abstract: A computer-implemented method for enabling a feature of a semiconductor device can include receiving, by at least one processor of a semiconductor device, a command to enable a feature of the semiconductor device. The method can also include burning, by the at least one processor and in response to the command, an electronic fuse of the semiconductor device. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Paul Blinzer, Maulik Ojas Mankad, Victor Ignatski, Ashish Jain, Gia Phan, Ranjeet Kumar
-
Publication number: 20250110792
Abstract: In accordance with the described techniques, a host processor receives a task graph including tasks and indicating dependencies between the tasks. The host processor formats the task graph, in part, by sorting the tasks of the task graph in an order based on the dependencies between the tasks. Further, the host processor submits the formatted task graph to a scalable input/output virtualization (SIOV) device, which directs the SIOV device to process the tasks of the task graph based on the order.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Stephen Alexander Zekany, Anthony Thomas Gutierrez
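A short Python sketch of the formatting step as a dependency-based (topological) sort followed by in-order submission; the sort_tasks and submit helpers are assumptions for illustration, and the actual SIOV submission path is only stubbed.

from collections import deque

def sort_tasks(tasks, deps):
    """Order tasks so every task appears after everything it depends on.
    deps maps a task to the set of tasks it depends on."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for task, prereqs in deps.items():
        for prereq in prereqs:
            dependents[prereq].append(task)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("task graph contains a cycle")
    return order

def submit(formatted_order):
    for task in formatted_order:     # placeholder for handing work to the SIOV device
        print("submit", task)

submit(sort_tasks(["a", "b", "c"], {"b": {"a"}, "c": {"a", "b"}}))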
-
Publication number: 20250110664
Abstract: A memory controller includes a command queue for receiving memory access requests and an arbiter. The arbiter is operable to allow cross-mode activations during a streak of accesses of a current mode in response to a number of cross-mode accesses present in the command queue exceeding an adaptive threshold.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventor: Guanhao Shen
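A toy Python model of the arbitration decision described above; the Arbiter class, the threshold adaptation rule, and the queue representation are assumptions, not the hardware implementation.

class Arbiter:
    def __init__(self, base_threshold=4):
        self.threshold = base_threshold

    def allow_cross_mode_activate(self, current_mode, command_queue):
        """current_mode is 'read' or 'write'; command_queue is a list of request modes."""
        cross_mode = sum(1 for mode in command_queue if mode != current_mode)
        return cross_mode > self.threshold

    def adapt(self, streak_length):
        # One possible adaptation rule: longer streaks tolerate more queued
        # cross-mode requests before the streak is broken.
        self.threshold = max(2, min(16, streak_length // 2))

arb = Arbiter()
print(arb.allow_cross_mode_activate("read", ["write"] * 6 + ["read"] * 2))   # True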
-
Publication number: 20250110878
Abstract: Selectively bypassing cache directory lookups for processing-in-memory instructions is described. In one example, a system maintains information describing a status—clean or dirty—of a memory address, where a dirty status indicates that the memory address is modified in a cache and thus different than the memory address as represented in system memory. A processing-in-memory request involving the memory address is assigned a cache directory bypass bit based on the status of the memory address. The cache directory bypass bit for a processing-in-memory request controls whether a cache directory lookup is performed after the processing-in-memory request is issued by a processor core and before the processing-in-memory request is executed by a processing-in-memory component.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Travis Henry Boraten, Jagadish B. Kotra, David Andrew Werner
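A small Python model of the bypass decision; the CoherenceTracker class, the helper stubs, and the request handling are assumptions for illustration, not the actual coherence hardware.

class CoherenceTracker:
    def __init__(self):
        self.dirty = set()              # addresses currently modified in a cache

    def mark_dirty(self, addr):
        self.dirty.add(addr)

    def mark_clean(self, addr):
        self.dirty.discard(addr)

def directory_lookup_and_flush(addr):   # stand-in for the real directory path
    print(f"directory lookup + writeback for {addr:#x}")

def execute_in_memory(addr):            # stand-in for the processing-in-memory component
    print(f"PIM executes at {addr:#x}")

def issue_pim_request(addr, tracker):
    # Clean address: system memory already holds the latest copy, so the
    # directory lookup can be bypassed. Dirty address: look it up (and flush)
    # before the PIM component executes.
    bypass = addr not in tracker.dirty
    if not bypass:
        directory_lookup_and_flush(addr)
    execute_in_memory(addr)

tracker = CoherenceTracker()
tracker.mark_dirty(0x2000)
issue_pim_request(0x1000, tracker)      # bypasses the directory lookup
issue_pim_request(0x2000, tracker)      # performs the lookup first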
-
Publication number: 20250111006
Abstract: Fast Fourier transforms for processing-in-memory are described. In accordance with the described techniques, a computing device includes a memory, a host processing unit, and a processing-in-memory unit that operates on data of one or more banks of the memory. The host processing unit stores interacting elements of a fast Fourier transform at locations in the one or more banks. The locations are mapped to a lane of the processing-in-memory unit. The host processing unit issues processing-in-memory commands instructing the processing-in-memory unit to load the interacting elements from the locations into the lane of the processing-in-memory unit, and execute an operation on the interacting elements.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Mohamed Assem Abd ElMohsen Ibrahim, Shaizeen Dilawarhusen Aga
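A minimal Python radix-2 FFT showing which elements interact in each butterfly stage; the comments indicate where a host would place each interacting pair so both elements map to one processing-in-memory lane. The code itself is an ordinary software FFT, not the patented PIM implementation.

import cmath

def bit_reverse_permute(values):
    n = len(values)
    bits = n.bit_length() - 1
    return [values[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)]

def fft(values):
    """Iterative radix-2 decimation-in-time FFT. In each stage, elements i and
    i + stride form a butterfly; a host targeting a PIM unit would store each
    such pair at bank locations mapped to one lane, then issue PIM commands to
    load the pair into that lane and run the butterfly there."""
    n = len(values)
    data = bit_reverse_permute(list(values))
    stride = 1
    while stride < n:
        for i in range(n):
            if i & stride:
                continue                 # handled as the partner of i - stride
            j = i + stride
            tw = cmath.exp(-2j * cmath.pi * (i % stride) / (2 * stride))
            data[i], data[j] = data[i] + tw * data[j], data[i] - tw * data[j]
        stride *= 2
    return data

print(fft([1, 2, 3, 4]))   # approximately [10, -2+2j, -2, -2-2j], the DFT of the input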
-
Publication number: 20250112047
Abstract: A hybrid bonding method includes fabricating plural semiconductor devices in a region of a bottom wafer adjacent to a front surface thereof, fusion bonding the front surface to a carrier substrate, thinning the bottom wafer opposite to the front surface to expose conductive regions of the semiconductor devices, forming a dielectric layer over a backside of the semiconductor devices, forming openings in the dielectric layer to expose the conductive regions, forming metal pads within the openings, dicing the bottom wafer and the carrier substrate to singulate the plural semiconductor devices, bonding the dielectric layer overlying the backside of the semiconductor devices to a dielectric layer overlying a front surface of a top wafer, bonding the metal pads within the openings in the dielectric layer to metal pads overlying the front surface of the top wafer, and removing the carrier substrate from the front surface of the bottom wafer.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Chandra Sekhar Mandalapu, Raja Swaminathan, Liwei Wang, John Wuu
-
Publication number: 20250111195
Abstract: Disclosed is a computer-implemented method for model ensemble acceleration in an active learning loop. The method includes receiving a set of datapoint inputs, where each datapoint input is an unlabeled equivalent of the other datapoint inputs in the set and has a different applied weight value. The method then executes a set of neural network models, where the execution of each neural network model is based on the received set of datapoint inputs. The outputs from the set of neural network models are analyzed, where an inference computation is performed, and a label for the set of datapoints is determined. The method then stores the labeled set of datapoint inputs in a database. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventors: Karthik Ramu Sangaiah, Yao Cui Fehlis
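A toy Python sketch of one ensemble labeling step of this general shape; the models, weights, majority-vote aggregation, and the dictionary standing in for the database are all assumptions for illustration.

def ensemble_label(datapoint, models, weights):
    """Run each model on its own weighted copy of the datapoint, then vote."""
    votes = {}
    for model, weight in zip(models, weights):
        label = model(weight * datapoint)        # each copy carries a different weight
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)             # inference over the ensemble outputs

labeled_db = {}                                  # stands in for the database
models = [lambda x: int(x > 0.5), lambda x: int(x > 0.4), lambda x: int(x > 0.6)]
x = 0.55
labeled_db[x] = ensemble_label(x, models, weights=[1.0, 0.9, 1.1])
print(labeled_db)                                # {0.55: 1}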
-
Publication number: 20250111585
Abstract: A technique for building a bounding volume hierarchy is disclosed. The technique includes for a subject node, selecting a dimension along which to perform a split to form child nodes of the subject node; assigning primitives of the subject node to the child nodes; and updating bounds for the child nodes in a next split dimension and not in the other dimensions.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Applicant: Advanced Micro Devices, Inc.
Inventor: Leo Hendrik Reyes Lozano
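A small Python sketch of one split step under the stated idea of refreshing child bounds only along the next split dimension; the median split policy and the primitive/bounds data layout are assumptions, not the patented builder.

def split_node(node_bounds, primitives, split_dim, next_dim):
    """Partition primitives along split_dim at the median; refresh child bounds
    only along next_dim, inheriting the parent's extent on the other axes."""
    primitives = sorted(primitives, key=lambda p: p["centroid"][split_dim])
    mid = len(primitives) // 2
    children = []
    for prims in (primitives[:mid], primitives[mid:]):
        lo, hi = list(node_bounds[0]), list(node_bounds[1])       # inherit parent bounds
        lo[next_dim] = min(p["box_lo"][next_dim] for p in prims)  # update only next_dim
        hi[next_dim] = max(p["box_hi"][next_dim] for p in prims)
        children.append({"bounds": (lo, hi), "primitives": prims})
    return children

prims = [{"centroid": (x, x, x), "box_lo": (x, x, x), "box_hi": (x + 1, x + 1, x + 1)}
         for x in range(4)]
print(split_node(((0, 0, 0), (5, 5, 5)), prims, split_dim=0, next_dim=1))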
-
Publication number: 20250111600
Abstract: A technique for rendering is provided. The technique includes performing a visibility pass that designates portions of shade space textures visible in a scene, wherein the visibility pass generates tiles that cover shade space textures visible in the scene; performing a rate controller operation on output of the visibility pass using spatially-adaptive sampling; performing a sparse shade space shading operation on the tiles that cover the shade space textures visible in the scene based on a result of the spatially-adaptive sampling; performing a regularization operation based on an output of the sparse shade space shading operation; and performing a reconstruction operation using output from the regularization operation to produce a final scene.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicants: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Guennadi Riguer, Michal Adam Wozniak
-
Patent number: 12266030
Abstract: Systems and methods related to priority-based and performance-based selection of a render mode, such as a two-level binning mode, in which to execute workloads with a graphics processing unit (GPU) of a system are provided. A user mode driver (UMD) or kernel mode driver (KMD) executed at a central processing unit (CPU) configures low and medium priority workloads to be executed in a two-level binning mode and selects a binning mode for high priority workloads based on whether performance heuristics indicate that one or more binning conditions or override conditions have been met. High priority workloads are maintained in a high priority queue, while low and medium priority workloads are maintained in a low/medium priority queue, such that execution of low and medium priority workloads at the GPU can be preempted in favor of executing high priority workloads.
Type: Grant
Filed: April 15, 2021
Date of Patent: April 1, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Anirudh R. Acharya, Ruijin Wu, Young In Yeo
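A short Python sketch of the mode-selection decision as the abstract describes it; the heuristic flag names, the queue handling, and the label "direct" for the non-two-level mode are assumptions for illustration.

def select_binning_mode(priority, heuristics):
    """Low/medium priority: always two-level binning. High priority: two-level
    binning only when a binning or override condition is indicated."""
    if priority in ("low", "medium"):
        return "two_level"
    if heuristics.get("binning_condition") or heuristics.get("override_condition"):
        return "two_level"
    return "direct"                    # assumed name for the non-two-level mode

queues = {"high": [], "low_medium": []}

def enqueue(workload, priority, heuristics):
    workload["mode"] = select_binning_mode(priority, heuristics)
    key = "high" if priority == "high" else "low_medium"
    queues[key].append(workload)       # high-priority work can preempt the other queue

enqueue({"id": 1}, "high", {"binning_condition": False, "override_condition": True})
print(queues)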
-
Patent number: 12265470
Abstract: Selectively bypassing cache directory lookups for processing-in-memory instructions is described. In one example, a system maintains information describing a status—clean or dirty—of a memory address, where a dirty status indicates that the memory address is modified in a cache and thus different than the memory address as represented in system memory. A processing-in-memory request involving the memory address is assigned a cache directory bypass bit based on the status of the memory address. The cache directory bypass bit for a processing-in-memory request controls whether a cache directory lookup is performed after the processing-in-memory request is issued by a processor core and before the processing-in-memory request is executed by a processing-in-memory component.
Type: Grant
Filed: September 29, 2023
Date of Patent: April 1, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Travis Henry Boraten, Jagadish B. Kotra, David Andrew Werner
-
Patent number: 12265908
Abstract: Systems, apparatuses, and methods for achieving higher cache hit rates for machine learning models are disclosed. When a processor executes a given layer of a machine learning model, the processor generates and stores activation data in a cache subsystem in a forward or reverse manner. Typically, the entirety of the activation data does not fit in the cache subsystem. The processor records the order in which activation data is generated for the given layer. Next, when the processor initiates execution of a subsequent layer of the machine learning model, the processor processes the previous layer's activation data in a reverse order from how the activation data was generated. In this way, the processor alternates how the layers of the machine learning model process data by either starting from the front end or starting from the back end of the array.
Type: Grant
Filed: August 31, 2020
Date of Patent: April 1, 2025
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Benjamin Thomas Sander, Swapnil Sakharshete, Ashish Panday
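A small Python demonstration, using a toy LRU cache, of why consuming the previous layer's activations in reverse generation order keeps more of them resident; the cache model and sizes are assumptions, not the actual cache subsystem.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def touch(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)
        else:
            self.misses += 1
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict the least recently used entry

cache = LRUCache(capacity=4)
activations = list(range(10))          # layer N generates these front to back
for a in activations:                  # layer N stores them in generation order
    cache.touch(a)
for a in reversed(activations):        # layer N+1 consumes them back to front
    cache.touch(a)
print(cache.hits, cache.misses)        # 4 hits: the tail of the array was still cached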
-
Patent number: 12265915
Abstract: A technique for manipulating a generic tensor is provided. The technique includes receiving a first request to perform a first operation on a generic tensor descriptor associated with the generic tensor, responsive to the first request, performing the first operation on the generic tensor descriptor, receiving a second request to perform a second operation on generic tensor raw data associated with the generic tensor, and responsive to the second request, performing the second operation on the generic tensor raw data, the performing the second operation including mapping a tensor coordinate specified by the second request to a memory address, the mapping including evaluating a delta function to determine an address delta value to add to a previously determined address for a previously processed tensor coordinate.
Type: Grant
Filed: December 30, 2020
Date of Patent: April 1, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Chao Liu, Daniel Isamu Lowell, Wen Heng Chung, Jing Zhang
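A minimal Python sketch of one plausible reading of the descriptor and delta-function idea for a simple strided layout; the TensorDescriptor class and its fields are assumptions, not the patented implementation.

class TensorDescriptor:
    def __init__(self, lengths, strides, base=0):
        self.lengths = lengths         # extent of each dimension
        self.strides = strides         # elements to step per unit of each coordinate
        self.base = base

    def address(self, coord):
        """Full mapping from a tensor coordinate to a linear memory address."""
        return self.base + sum(c * s for c, s in zip(coord, self.strides))

    def address_delta(self, prev_coord, coord):
        """Delta function: only the change relative to the previously processed
        coordinate, added to the previously determined address."""
        return sum((c - p) * s for p, c, s in zip(prev_coord, coord, self.strides))

desc = TensorDescriptor(lengths=[4, 8], strides=[8, 1])
addr = desc.address((2, 3))
addr += desc.address_delta((2, 3), (2, 4))   # incremental update for the next element
assert addr == desc.address((2, 4))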
-
Patent number: 12265735
Abstract: An approach is provided for processing near-memory processing commands, e.g., PIM commands, using PIM register definition data that defines multiple combinations of source and/or destination registers to be used to process PIM commands. A particular combination of source and/or destination registers to be used to process a PIM command is specified by the PIM command or determined by a near-memory processing element processing the PIM command. According to another implementation, the PIM register definition data specifies an initial combination of source and/or destination registers and one or more update functions for each PIM command. A near-memory processing element processes a PIM command using the initial combination of source and/or destination registers and uses the one or more update functions to update the combination of source and/or destination registers to be used the next time the PIM command is processed.
Type: Grant
Filed: June 21, 2022
Date of Patent: April 1, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Shaizeen Aga, Nuwan Jayasena
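A small Python model of register definition data that pairs an initial source/destination combination with an update function per PIM command; the table layout, opcodes, and update rules are assumptions for illustration, not the patented format.

register_defs = {
    # opcode: initial (src, dst) register combination plus an update function
    "pim_add": {"regs": (0, 1), "update": lambda s, d: ((s + 2) % 8, (d + 2) % 8)},
    "pim_mul": {"regs": (4, 5), "update": lambda s, d: (s, d)},   # fixed combination
}

def process(opcode):
    entry = register_defs[opcode]
    src, dst = entry["regs"]
    print(f"{opcode}: src=r{src} dst=r{dst}")   # stand-in for the actual PIM operation
    entry["regs"] = entry["update"](src, dst)   # next use of this command gets the updated combination

process("pim_add")   # src=r0 dst=r1
process("pim_add")   # src=r2 dst=r3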