Patents by Inventor Rex Eldon MCCRARY

Rex Eldon MCCRARY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240320042
    Abstract: A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler that schedules one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
    Type: Application
    Filed: March 6, 2024
    Publication date: September 26, 2024
    Inventor: Rex Eldon MCCRARY
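
The abstract above describes a GPU-side scheduler that chains a follow-up command (here, a DMA write-back of results) after earlier commands finish, without round-tripping to the CPU. The sketch below is a minimal, hypothetical C++ simulation of that flow only; the names (Scheduler, DmaEngine, submit) are illustrative assumptions, not the patented implementation.

```cpp
// Hypothetical sketch: a GPU-side scheduler chains a DMA write-back command
// after the first commands complete, with no notification to the CPU.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct DmaEngine {
    // Stand-in for a DMA transfer of a results block to memory.
    void writeBlock(const std::vector<int>& results, std::vector<int>& memory) {
        memory = results;
    }
};

struct Scheduler {
    std::queue<std::function<void()>> pipeline;  // one pipeline, for brevity

    void submit(std::function<void()> cmd) { pipeline.push(std::move(cmd)); }

    // Runs the first commands, then the second (DMA) command it scheduled
    // itself, without signalling the second processing unit in between.
    void run() {
        while (!pipeline.empty()) {
            pipeline.front()();
            pipeline.pop();
        }
    }
};

int main() {
    DmaEngine dma;
    Scheduler sched;
    std::vector<int> gpuResults;
    std::vector<int> systemMemory;

    // "First commands" received from the user mode driver on the CPU.
    sched.submit([&] { gpuResults = {1, 2, 3}; });

    // "Second command": program the DMA engine to write the results block.
    sched.submit([&] { dma.writeBlock(gpuResults, systemMemory); });

    sched.run();
    std::printf("results in memory: %zu values\n", systemMemory.size());
    return 0;
}
```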
  • Patent number: 12051144
    Abstract: An apparatus such as a graphics processing unit (GPU) includes a set of shader engines and a set of front end (FE) circuits. Subsets of the set of FE circuits schedule geometry workloads for subsets of the set of shader engines based on a mapping. The apparatus also includes a set of physical paths that convey information from the set of FE circuits to a memory via the set of shader engines. Subsets of the set of physical paths are allocated to the subsets of the set of FE circuits and the subsets of the set of shader engines based on the mapping. The mapping determines information stored in a set of registers used to configure the apparatus. In some cases, the set of registers stores information indicating a spatial partitioning of the set of physical paths.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: July 30, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
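
As a rough illustration of the mapping described above, the hypothetical sketch below models configuration "registers" that record which subset of shader engines (and, implicitly, which physical path) each FE circuit drives. The structure and names (PartitionRegisters, kNumFe) are assumptions for illustration only.

```cpp
// Hypothetical sketch: a mapping held in configuration "registers" that
// allocates a subset of shader engines and a physical path to each front
// end (FE) circuit, i.e. a spatial partitioning.
#include <array>
#include <cstdio>
#include <vector>

constexpr int kNumFe = 2;

struct PartitionRegisters {
    // registers[fe] lists the shader engines allocated to that FE circuit.
    std::array<std::vector<int>, kNumFe> registers;
};

int main() {
    // Example mapping: FE0 drives engines {0, 1}, FE1 drives engines {2, 3}.
    PartitionRegisters cfg;
    cfg.registers[0] = {0, 1};
    cfg.registers[1] = {2, 3};

    for (int fe = 0; fe < kNumFe; ++fe) {
        std::printf("FE%d schedules geometry work on shader engines:", fe);
        for (int se : cfg.registers[fe]) std::printf(" %d", se);
        std::printf("\n");
    }
    return 0;
}
```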
  • Publication number: 20240202003
    Abstract: Systems, apparatuses, and methods for implementing hierarchical scheduling in a fixed-function graphics pipeline are disclosed. In various implementations, a processor includes a pipeline comprising a plurality of fixed-function units and a scheduler. The scheduler is configured to schedule a first operation for execution by one or more fixed-function units of the pipeline by scheduling the first operation with a first unit of the pipeline, responsive to a first mode of operation, and to schedule a second operation for execution by a selected fixed-function unit of the pipeline by scheduling the second operation directly to the selected fixed-function unit, independent of a sequential arrangement of the one or more fixed-function units in the pipeline, responsive to a second mode of operation.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 20, 2024
    Inventors: Matthäus G. Chajdas, Michael J. Mantor, Rex Eldon McCrary, Christopher J. Brennan, Robert Martin, Brian Kenneth Bennett
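
The abstract above contrasts two scheduling modes: handing work to the front of the fixed-function pipeline versus dispatching it directly to a chosen unit. The following hypothetical C++ sketch shows that distinction only; the Mode enum, unit names, and schedule() helper are illustrative assumptions.

```cpp
// Hypothetical sketch of the two modes: in PipelineOrder an operation flows
// through the fixed-function units in sequence; in DirectToUnit it bypasses
// the sequential arrangement and goes straight to a selected unit.
#include <cstdio>
#include <string>
#include <vector>

enum class Mode { PipelineOrder, DirectToUnit };

struct FixedFunctionUnit {
    std::string name;
    void execute(const std::string& op) const {
        std::printf("%s executes %s\n", name.c_str(), op.c_str());
    }
};

void schedule(const std::vector<FixedFunctionUnit>& pipeline, Mode mode,
              const std::string& op, size_t target = 0) {
    if (mode == Mode::PipelineOrder) {
        // Mode 1: hand the operation to the first unit of the pipeline.
        for (const auto& unit : pipeline) unit.execute(op);
    } else {
        // Mode 2: schedule directly to the selected fixed-function unit.
        pipeline.at(target).execute(op);
    }
}

int main() {
    std::vector<FixedFunctionUnit> pipeline = {{"vertex"}, {"raster"}, {"depth"}};
    schedule(pipeline, Mode::PipelineOrder, "op1");
    schedule(pipeline, Mode::DirectToUnit, "op2", /*target=*/2);
    return 0;
}
```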
  • Patent number: 12014442
    Abstract: A primary processing unit includes queues configured to store commands prior to execution in corresponding pipelines. The primary processing unit also includes a first table configured to store entries indicating dependencies between commands that are to be executed on different ones of a plurality of processing units that include the primary processing unit and one or more secondary processing units. The primary processing unit also includes a scheduler configured to release commands in response to resolution of the dependencies. In some cases, a first one of the secondary processing units schedules a first command for execution in response to resolution of a dependency on a second command executing in a second one of the secondary processing units. The second one of the secondary processing units notifies the primary processing unit in response to completing execution of the second command.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: June 18, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
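
To make the dependency-table idea above concrete, here is a minimal, hypothetical sketch of a table that records which commands wait on which others and releases a command once the secondary unit reports completion. The DependencyTable type and its methods are illustrative assumptions, not the patented design.

```cpp
// Hypothetical sketch: the primary unit tracks cross-device dependencies and
// releases a queued command only after the command it depends on (running on
// a secondary unit) reports completion.
#include <cstdio>
#include <map>
#include <set>

struct DependencyTable {
    // command id -> ids of commands it is still waiting on
    std::map<int, std::set<int>> waitingOn;

    void addDependency(int cmd, int dependsOn) { waitingOn[cmd].insert(dependsOn); }

    // Called when a secondary processing unit notifies the primary that
    // `completed` finished; returns true if `cmd` may now be released.
    bool notifyCompleted(int cmd, int completed) {
        waitingOn[cmd].erase(completed);
        return waitingOn[cmd].empty();
    }
};

int main() {
    DependencyTable table;
    table.addDependency(/*cmd=*/42, /*dependsOn=*/7);

    // The secondary unit finishes command 7 and notifies the primary unit.
    if (table.notifyCompleted(42, 7))
        std::printf("command 42 released for scheduling\n");
    return 0;
}
```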
  • Publication number: 20240192994
    Abstract: Techniques for implementing accelerated draw indirect fetching are disclosed. A fetch accelerator enables streamlined data fetching by looping internally and filling a draw queue for a micro engine. By using a dedicated fetch accelerator rather than processing data fetches separately and individually using a conventional processor, significant processing overhead is eliminated and computational latency is reduced. Additionally, different types of aligned or unaligned data structures are usable with equivalent or nearly equivalent performance.
    Type: Application
    Filed: March 28, 2023
    Publication date: June 13, 2024
    Inventors: Alexander Fuad Ashkar, Michael Mantor, Rex Eldon McCrary, Yi Luo, Manu Rastogi, James Robert Klobcar
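
The entry above describes a fetch accelerator that loops over indirect draw argument records and fills a draw queue for the micro engine. The sketch below is a plain C++ approximation of that looping behavior only; the DrawArgs layout and fetchAccelerator() function are assumed names for illustration.

```cpp
// Hypothetical sketch: a fetch accelerator loops internally over indirect
// draw argument records in memory and fills a draw queue for the micro
// engine, rather than fetching each record individually.
#include <cstdio>
#include <queue>
#include <vector>

struct DrawArgs {            // one indirect draw record
    unsigned vertexCount;
    unsigned instanceCount;
};

// Loops over `count` records starting at `base` and enqueues them.
void fetchAccelerator(const std::vector<DrawArgs>& memory, size_t base,
                      size_t count, std::queue<DrawArgs>& drawQueue) {
    for (size_t i = 0; i < count && base + i < memory.size(); ++i)
        drawQueue.push(memory[base + i]);
}

int main() {
    std::vector<DrawArgs> argBuffer = {{3, 1}, {6, 2}, {36, 1}};
    std::queue<DrawArgs> drawQueue;

    fetchAccelerator(argBuffer, /*base=*/0, /*count=*/3, drawQueue);

    while (!drawQueue.empty()) {
        std::printf("draw: %u vertices x %u instances\n",
                    drawQueue.front().vertexCount, drawQueue.front().instanceCount);
        drawQueue.pop();
    }
    return 0;
}
```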
  • Patent number: 11977933
    Abstract: A processing unit such as a graphics processing unit (GPU) includes a set of queues that stores command buffers prior to execution in a corresponding plurality of pipelines. The processing unit also implements a kernel mode driver that allocates a first subset of the set of queues to a first application in response to receiving registration requests from the first application. The processing unit further includes a scheduler that schedules command buffers in the first subset of the set of queues for concurrent execution on a first subset of the set of pipelines. In some cases, an interrupt is generated in response to execution of a first command in a first command buffer in the first queue or the second queue. The interrupt includes an address indicating a location of a routine to be executed by a second subset of the plurality of pipelines.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: May 7, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
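
As a loose illustration of the queue allocation described above, the hypothetical sketch below shows a kernel mode driver handing each registering application its own subset of hardware queues. The KernelModeDriver type and registerApp() call are assumptions for illustration; the interrupt-routine aspect of the abstract is not modeled.

```cpp
// Hypothetical sketch: a kernel mode driver allocates a subset of queues to
// each application that sends a registration request; a scheduler could then
// run those queues concurrently on a subset of pipelines.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct KernelModeDriver {
    std::vector<int> freeQueues = {0, 1, 2, 3, 4, 5, 6, 7};
    std::map<std::string, std::vector<int>> allocations;

    // Handles a registration request by granting `count` queues to the app.
    std::vector<int> registerApp(const std::string& app, size_t count) {
        std::vector<int> granted;
        while (granted.size() < count && !freeQueues.empty()) {
            granted.push_back(freeQueues.back());
            freeQueues.pop_back();
        }
        allocations[app] = granted;
        return granted;
    }
};

int main() {
    KernelModeDriver kmd;
    for (int q : kmd.registerApp("app_a", 2)) std::printf("app_a owns queue %d\n", q);
    for (int q : kmd.registerApp("app_b", 2)) std::printf("app_b owns queue %d\n", q);
    return 0;
}
```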
  • Publication number: 20240111574
    Abstract: Systems, apparatuses, and methods for implementing a hierarchical scheduler are disclosed. In various implementations, a processor includes a global scheduler and a plurality of independent local schedulers, with each of the local schedulers coupled to a plurality of processors. In one implementation, the processor is a graphics processing unit and the processors are computation units. The processor further includes a shared cache that is shared by the plurality of local schedulers. Each of the local schedulers also includes a local cache used by the local scheduler and the processors coupled to it. To schedule work items for execution, the global scheduler is configured to store one or more work items in the shared cache and convey an indication to a first local scheduler of the plurality of local schedulers, which causes the first local scheduler to retrieve the one or more work items from the shared cache.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Inventors: Matthäus G. Chajdas, Michael J. Mantor, Rex Eldon McCrary, Christopher J. Brennan, Robert Martin, Dominik Baumeister, Fabian Robert Sebastian Wildgrube
  • Publication number: 20240111575
    Abstract: Systems, apparatuses, and methods for implementing a message passing system to schedule work in a computing system are disclosed. In various implementations, a processor includes a global scheduler and a plurality of local schedulers, with each of the local schedulers coupled to a plurality of processors. The processor further includes a shared cache that is shared by the plurality of local schedulers. Also, a plurality of mailboxes are implemented to enable communication between the local schedulers and the global scheduler. To schedule work items for execution, the global scheduler is configured to store one or more work items in the shared cache and store an indication in a mailbox for a first local scheduler of the plurality of local schedulers. Responsive to detecting the indication in the mailbox, the first local scheduler identifies the location of the one or more work items in the shared cache and retrieves them for scheduling locally.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Inventors: Matthäus G. Chajdas, Michael J. Mantor, Rex Eldon McCrary, Christopher J. Brennan, Robert Martin, Dominik Baumeister, Fabian Robert Sebastian Wildgrube
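
The two entries above describe a global scheduler handing work to local schedulers through a shared cache, in the second case signalled via per-scheduler mailboxes. The hypothetical sketch below models that mailbox handoff in plain C++; SharedCache, Mailbox, and the scheduler functions are assumed names, not the patented mechanism.

```cpp
// Hypothetical sketch: the global scheduler places a work item in a shared
// cache and drops an indication into a local scheduler's mailbox; the local
// scheduler, on seeing the indication, pulls the item and schedules it.
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct SharedCache { std::vector<std::string> slots; };

struct Mailbox { std::optional<size_t> indication; };  // index into the cache

void globalScheduler(SharedCache& cache, Mailbox& box, std::string work) {
    cache.slots.push_back(std::move(work));     // store the work item
    box.indication = cache.slots.size() - 1;    // signal its location
}

void localScheduler(SharedCache& cache, Mailbox& box) {
    if (!box.indication) return;                // nothing to do
    const std::string& item = cache.slots[*box.indication];
    std::printf("local scheduler dispatches: %s\n", item.c_str());
    box.indication.reset();
}

int main() {
    SharedCache cache;
    Mailbox mailboxForLocal0;
    globalScheduler(cache, mailboxForLocal0, "draw batch 17");
    localScheduler(cache, mailboxForLocal0);
    return 0;
}
```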
  • Patent number: 11934873
    Abstract: A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler that schedules one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: March 19, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
  • Patent number: 11776085
    Abstract: A processing system includes a graphics pipeline that executes a first shader of a first type and a second shader of a second type. In some cases, the first shader is a geometry shader and the second shader is a pixel shader. The processing system also includes buffers that hold primitives generated by the first shader and provide the primitives to the second shader. The processing system also includes a primitive hub that monitors fullness of the buffers. Launching of waves from the first shader is throttled based on the fullness of the buffers. A shader processor input (SPI) selectively throttles the waves launched by the geometry shader based on a signal from the primitive hub indicating the fullness, an indication of relative resource usage of geometry waves and pixel waves in the graphics pipeline, or an indication of lifetimes of the geometry waves.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: October 3, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nishank Pathak, Randy Wayne Ramsey, Tad Litwiller, Rex Eldon McCrary
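
The abstract above describes throttling geometry-wave launches based on how full the primitive buffers are. The following hypothetical sketch shows one way such a fullness-threshold decision could look; the PrimitiveHub struct, threshold value, and mayLaunchGeometryWave() function are illustrative assumptions, not the SPI's actual policy.

```cpp
// Hypothetical sketch: a launch decision based on primitive buffer fullness,
// as reported by a primitive hub, so that geometry waves are held back when
// the buffers feeding the pixel shader are nearly full.
#include <cstdio>

struct PrimitiveHub {
    unsigned used;
    unsigned capacity;
    double fullness() const { return capacity ? double(used) / capacity : 1.0; }
};

// Throttle policy: stop launching geometry waves above a fullness threshold
// so the pixel shader can drain the buffers.
bool mayLaunchGeometryWave(const PrimitiveHub& hub, double threshold = 0.75) {
    return hub.fullness() < threshold;
}

int main() {
    PrimitiveHub hub{/*used=*/60, /*capacity=*/100};
    std::printf("launch allowed: %s\n", mayLaunchGeometryWave(hub) ? "yes" : "no");
    hub.used = 90;
    std::printf("launch allowed: %s\n", mayLaunchGeometryWave(hub) ? "yes" : "no");
    return 0;
}
```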
  • Publication number: 20230094639
    Abstract: A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler that schedules one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
    Type: Application
    Filed: September 16, 2022
    Publication date: March 30, 2023
    Inventor: Rex Eldon MCCRARY
  • Publication number: 20230055695
    Abstract: Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task.
    Type: Application
    Filed: October 7, 2022
    Publication date: February 23, 2023
    Inventors: Anirudh R. Acharya, Michael J. Mantor, Rex Eldon McCrary, Anthony Asaro, Jeffrey Gongxian Cheng, Mark Fowler
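
This entry (and the granted patent below it) describes mapping a task-transparent first ID to a second and third ID that identify the task's application and operating system, which are then used to select the right page tables. The hypothetical sketch below shows that lookup chain only; the table names and ID values are assumptions for illustration.

```cpp
// Hypothetical sketch: a task-transparent first ID is mapped to an
// application ID and an OS ID, which together select the page tables used
// to complete a memory access for that task.
#include <cstdio>
#include <map>
#include <utility>

struct IdMapping { int appId; int osId; };

int main() {
    // first ID (e.g. a container slot) -> (application ID, OS ID)
    std::map<int, IdMapping> idTable = {{5, {/*appId=*/12, /*osId=*/3}}};

    // page tables keyed by (application ID, OS ID)
    std::map<std::pair<int, int>, const char*> pageTables = {
        {{12, 3}, "page tables for app 12 under OS 3"}};

    int firstId = 5;  // retrieved when the first task issues a memory access
    const IdMapping& ids = idTable.at(firstId);
    std::printf("%s\n", pageTables.at({ids.appId, ids.osId}));
    return 0;
}
```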
  • Patent number: 11467870
    Abstract: Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: October 11, 2022
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Anirudh R. Acharya, Michael J. Mantor, Rex Eldon McCrary, Anthony Asaro, Jeffrey Gongxian Cheng, Mark Fowler
  • Patent number: 11461137
    Abstract: A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler that schedules one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: October 4, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
  • Patent number: 11403729
    Abstract: An apparatus such as a graphics processing unit (GPU) includes shader engines and front end (FE) circuits. Subsets of the FE circuits are configured to schedule commands for execution on corresponding subsets of the shader engines. The apparatus also includes a set of physical paths configured to convey information from the FE circuits to a memory via the shader engines. Subsets of the physical paths are allocated to the subsets of the FE circuits and the corresponding subsets of the shader engines. The apparatus further includes a scheduler configured to receive a reconfiguration request and modify the set of physical paths based on the reconfiguration request. In some cases, the reconfiguration request is provided by a central processing unit (CPU) that requests the modification based on characteristics of applications generating the commands.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: August 2, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
  • Publication number: 20220188963
    Abstract: A processing system includes a graphics pipeline that executes a first shader of a first type and a second shader of a second type. In some cases, the first shader is a geometry shader and the second shader is a pixel shader. The processing system also includes buffers that hold primitives generated by the first shader and provide the primitives to the second shader. The processing system also includes a primitive hub that monitors fullness of the buffers. Launching of waves from the first shader is throttled based on the fullness of the buffers. A shader processor input (SPI) selectively throttles the waves launched by the geometry shader based on a signal from the primitive hub indicating the fullness, an indication of relative resource usage of geometry waves and pixel waves in the graphics pipeline, or an indication of lifetimes of the geometry waves.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Inventors: Nishank PATHAK, Randy Wayne RAMSEY, Tad LITWILLER, Rex Eldon MCCRARY
  • Publication number: 20220091847
    Abstract: In response to executing a specified command packet, a processing unit prefetches commands stored at an indirect buffer to a command queue for execution, prior to executing a command that initiates execution of the commands stored at the indirect buffer. By prefetching the data prior to executing the indirect buffer execution command, the processing unit reduces delays in processing the commands stored at the indirect buffer.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 24, 2022
    Inventors: Alexander Fuad ASHKAR, Harry J. WISE, Rex Eldon MCCRARY, Hans FERNLUND
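
The abstract above describes pulling an indirect buffer's commands into the command queue ahead of the packet that actually executes them. The hypothetical sketch below models that ordering in plain C++; the CommandProcessor type and its two handlers are assumed names, not the hardware interface.

```cpp
// Hypothetical sketch: on seeing a prefetch packet, the command processor
// copies the indirect buffer's commands into the command queue early, so the
// later "execute indirect buffer" packet finds them already resident.
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

struct CommandProcessor {
    std::deque<std::string> commandQueue;

    void onPrefetchPacket(const std::vector<std::string>& indirectBuffer) {
        // Pull the indirect buffer contents in now, not at execution time.
        for (const auto& cmd : indirectBuffer) commandQueue.push_back(cmd);
    }

    void onExecuteIndirectBuffer() {
        // The commands are already queued; just drain them.
        while (!commandQueue.empty()) {
            std::printf("executing %s\n", commandQueue.front().c_str());
            commandQueue.pop_front();
        }
    }
};

int main() {
    std::vector<std::string> ib = {"set_state", "draw", "copy_results"};
    CommandProcessor cp;
    cp.onPrefetchPacket(ib);       // specified command packet seen first
    cp.onExecuteIndirectBuffer();  // no fetch delay at this point
    return 0;
}
```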
  • Patent number: 11281495
    Abstract: A system and method for providing security of sensitive information within chips using SIMD micro-architecture are described. A command processor within a parallel data processing unit, such as a graphics processing unit (GPU), schedules commands across multiple compute units based on state information. When the command processor determines that a rescheduling condition is satisfied, it causes the overwriting of at least a portion of data stored in each of the one or more local memories used by the multiple compute units. The command processor also stores in a secure memory a copy of state information associated with a given group of commands and later checks it to ensure that corruption by a malicious or careless program is prevented.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: March 22, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
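
To illustrate the two protections described above, the hypothetical sketch below scrubs the compute units' local memories when a rescheduling condition fires and compares a saved state copy against live state. The class names, the zero-fill scrub, and the equality check are assumptions for illustration only.

```cpp
// Hypothetical sketch: on a rescheduling condition, the command processor
// overwrites the compute units' local memories; it also keeps a copy of the
// group's state in secure memory and checks it against the live state.
#include <algorithm>
#include <cstdio>
#include <vector>

struct ComputeUnit { std::vector<int> localMemory = std::vector<int>(8, 0); };

struct CommandProcessor {
    std::vector<int> liveState{1, 2, 3};
    std::vector<int> secureCopy;  // copy kept in secure memory

    void saveState() { secureCopy = liveState; }

    void onReschedule(std::vector<ComputeUnit>& cus) {
        // Overwrite (at least a portion of) each local memory.
        for (auto& cu : cus)
            std::fill(cu.localMemory.begin(), cu.localMemory.end(), 0);
    }

    bool stateIntact() const { return liveState == secureCopy; }
};

int main() {
    std::vector<ComputeUnit> cus(4);
    CommandProcessor cp;
    cp.saveState();
    cp.onReschedule(cus);
    std::printf("state intact after reschedule: %s\n", cp.stateIntact() ? "yes" : "no");
    return 0;
}
```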
  • Patent number: 11169811
    Abstract: A method of context bouncing includes receiving, at a command processor of a graphics processing unit (GPU), a conditional execute packet providing a hash identifier corresponding to an encapsulated state. The encapsulated state includes one or more context state packets following the conditional execute packet. A command packet following the encapsulated state is executed based at least in part on determining whether the hash identifier of the encapsulated state matches one of a plurality of hash identifiers of active context states currently stored at the GPU.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: November 9, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Rex Eldon McCrary, Yi Luo, Harry J. Wise, Alexander Fuad Ashkar, Michael Mantor
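
One plausible reading of the context-bouncing abstract above is that the hash carried by the conditional execute packet lets the GPU skip reloading an encapsulated state that is already active. The hypothetical sketch below shows only that hash comparison; the packet layout and function names are illustrative assumptions.

```cpp
// Hypothetical sketch: a conditional-execute packet carries the hash of an
// encapsulated context state; the GPU checks it against the hashes of
// context states that are currently active before running the next packet.
#include <cstdint>
#include <cstdio>
#include <unordered_set>

struct ConditionalExecutePacket { uint64_t stateHash; };

bool encapsulatedStateIsActive(const ConditionalExecutePacket& pkt,
                               const std::unordered_set<uint64_t>& activeHashes) {
    // A match means the encapsulated state is already resident on the GPU.
    return activeHashes.count(pkt.stateHash) != 0;
}

int main() {
    std::unordered_set<uint64_t> activeContextHashes = {0xabcdef12ull};
    ConditionalExecutePacket pkt{0xabcdef12ull};

    if (encapsulatedStateIsActive(pkt, activeContextHashes))
        std::printf("reuse active context, then execute the command packet\n");
    else
        std::printf("load the encapsulated state, then execute the command packet\n");
    return 0;
}
```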
  • Patent number: 11144329
    Abstract: A processing unit employs microcode wherein the jump table associated with the microcode is embedded in the microcode itself. When the microcode is compiled based on a set of programmer instructions, the compiler prepares the jump table for the microcode and stores the jump table in the same file or other storage unit as the microcode. When the processing unit is initialized to execute a program, such as an operating system, the processing unit retrieves the microcode corresponding to the program from memory, stores the microcode in a cache or other memory module for execution, and automatically loads the embedded jump table from the microcode to a specified set of jump table registers, thereby preparing the processing unit to process received packets.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 12, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Alexander Fuad Ashkar, Rakan Khraisha, Rex Eldon McCrary, Harry J. Wise
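
The final entry above describes microcode that carries its own jump table, which the processing unit loads into jump table registers at initialization. The hypothetical sketch below mimics that load step only; the MicrocodeImage layout and loadMicrocode() function are assumed names, not the actual firmware format.

```cpp
// Hypothetical sketch: the microcode image embeds the jump table the compiler
// prepared; at initialization the loader copies that table into the jump
// table registers before any packets are processed.
#include <cstdint>
#include <cstdio>
#include <vector>

struct MicrocodeImage {
    std::vector<uint32_t> code;       // instructions, as compiled
    std::vector<uint32_t> jumpTable;  // embedded in the same file by the compiler
};

struct ProcessingUnit {
    std::vector<uint32_t> instructionCache;
    std::vector<uint32_t> jumpTableRegisters;

    void loadMicrocode(const MicrocodeImage& img) {
        instructionCache = img.code;         // stage microcode for execution
        jumpTableRegisters = img.jumpTable;  // auto-load the embedded table
    }
};

int main() {
    MicrocodeImage img{{0x10, 0x20, 0x30}, {/*packet 0 ->*/ 0, /*packet 1 ->*/ 2}};
    ProcessingUnit pu;
    pu.loadMicrocode(img);
    std::printf("jump table entries loaded: %zu\n", pu.jumpTableRegisters.size());
    return 0;
}
```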