Patents by Inventor Brian Emberling

Brian Emberling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12229570
    Abstract: Block data load with transpose techniques are described. In one example, an input is received, at a control unit, specifying an instruction to load a block of data to at least one memory module using a transpose operation. Responsive to the control unit receiving the input, the block of data is caused to be loaded to the at least one memory module by transposing the block of data to form a transposed block of data and storing the transposed block of data in the at least one memory module.
    Type: Grant
    Filed: September 25, 2022
    Date of Patent: February 18, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bin He, Michael John Mantor, Brian Emberling, Liang Huang, Chao Liu
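The core operation in this patent is loading a 2D block into memory in transposed order. Below is a minimal, illustrative Python sketch of that idea; the flat `memory` list and the `load_block_transposed` helper are invented here for illustration and are not the patent's interface.

```python
def load_block_transposed(block, memory, base):
    """Write a 2D block into a flat memory buffer in transposed (column-major) order."""
    rows, cols = len(block), len(block[0])
    for c in range(cols):
        for r in range(rows):
            # Element (r, c) of the source block lands at the (c, r) position
            # of the stored block, i.e. the block is transposed as it is loaded.
            memory[base + c * rows + r] = block[r][c]

block = [[1, 2, 3],
         [4, 5, 6]]
memory = [0] * 16
load_block_transposed(block, memory, base=0)
print(memory[:6])  # [1, 4, 2, 5, 3, 6] -- the transposed block
```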
  • Publication number: 20240355044
    Abstract: A method, system, and computer-readable medium for executing a task is disclosed. The method includes receiving input data and computing instructions, launching a workgroup including wavefronts to execute the task, wherein the launching causes the wavefronts to process the input data by sharing intermediate results and resources, and adjusting the operation based on characteristics of the wavefronts. The characteristics include data dependencies, computational load, memory usage, and execution timing requirements. The wavefronts execute the task in stages, where each stage processes portions of input data and data generated by other wavefronts.
    Type: Application
    Filed: July 2, 2024
    Publication date: October 24, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Michael Y. Chow
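As a rough illustration of the staged, result-sharing execution the abstract describes, here is a toy single-threaded Python model; the `run_workgroup` function, its stages, and the `shared` dictionary are hypothetical stand-ins, not the claimed mechanism.

```python
# A toy model of wavefronts executing a task in stages and sharing
# intermediate results through a common dictionary.
def run_workgroup(input_data, num_wavefronts=4):
    shared = {}  # intermediate results visible to all wavefronts
    chunks = [input_data[i::num_wavefronts] for i in range(num_wavefronts)]

    # Stage 1: each wavefront processes its own slice of the input.
    for wave_id, chunk in enumerate(chunks):
        shared[wave_id] = sum(chunk)

    # Stage 2: each wavefront consumes data generated by the other wavefronts.
    total = sum(shared.values())
    return [shared[w] / total for w in range(num_wavefronts)]

print(run_workgroup(list(range(1, 9))))
```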
  • Publication number: 20240311199
    Abstract: Program code executing on a processing system includes one or more instructions, each identifying a workgroup that includes a plurality of waves and identifying resource allocations for the plurality of waves of the workgroup. In response to receiving an instruction identifying a workgroup and resource allocations for the plurality of waves of the workgroup, a processor allocates a first set of processing resources to a compute unit of the processor based on the resource allocations for the plurality of waves. The compute unit then performs operations for the workgroup using the allocated set of processing resources.
    Type: Application
    Filed: March 13, 2023
    Publication date: September 19, 2024
    Inventors: Nicolai Haehnle, Mark Leather, Brian Emberling, Michael John Bedy, Daniel Schneider
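A possible way to picture the launch instruction the abstract describes is a descriptor that names the waves of a workgroup and the resources each wave needs, checked against a compute unit's free resources. The sketch below is purely illustrative; the `WorkgroupLaunch` fields and the capacity numbers are assumptions.

```python
from dataclasses import dataclass

# Hypothetical descriptor for an instruction that identifies a workgroup's
# waves and the resources each wave needs; the field names are illustrative.
@dataclass
class WorkgroupLaunch:
    num_waves: int
    vgprs_per_wave: int
    lds_bytes: int

def can_allocate(launch: WorkgroupLaunch, free_vgprs: int, free_lds: int) -> bool:
    """Check whether a compute unit can hold the resources for all waves up front."""
    need_vgprs = launch.num_waves * launch.vgprs_per_wave
    return need_vgprs <= free_vgprs and launch.lds_bytes <= free_lds

print(can_allocate(WorkgroupLaunch(num_waves=4, vgprs_per_wave=64, lds_bytes=8192),
                   free_vgprs=512, free_lds=65536))  # True
```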
  • Patent number: 12033275
    Abstract: Methods and systems are disclosed for executing a collaborative task in a shader system. Techniques disclosed include receiving, by the system, input data and computing instructions associated with the collaborative task, as well as a configuration setting that causes the system to operate in a takeover mode. The system then launches, exclusively in one workgroup processor, a workgroup including wavefronts configured to execute the collaborative task.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: July 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Michael Y. Chow
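The distinguishing element here is the configuration setting that forces the whole workgroup onto a single workgroup processor. The toy scheduler below illustrates that contrast; the `takeover_mode` flag and WGP numbering are illustrative only.

```python
# When takeover mode is enabled, every wavefront of the collaborative workgroup
# is pinned to a single workgroup processor instead of being spread out.
def place_wavefronts(num_wavefronts, num_wgps, takeover_mode):
    if takeover_mode:
        return {wave: 0 for wave in range(num_wavefronts)}  # all on one WGP
    return {wave: wave % num_wgps for wave in range(num_wavefronts)}

print(place_wavefronts(8, 4, takeover_mode=True))
print(place_wavefronts(8, 4, takeover_mode=False))
```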
  • Publication number: 20240168719
    Abstract: A processing system executes wavefronts at multiple arithmetic logic unit (ALU) pipelines of a single instruction multiple data (SIMD) unit in a single execution cycle. The ALU pipelines each include a number of ALUs that execute instructions on wavefront operands that are collected from vector general purpose register (VGPR) banks at a cache and output results of the instructions executed on the wavefronts at a buffer. By storing wavefronts supplied by the VGPR banks at the cache, a greater number of wavefronts can be made available to the SIMD unit without increasing the VGPR bandwidth, enabling multiple ALU pipelines to execute instructions during a single execution cycle.
    Type: Application
    Filed: January 16, 2024
    Publication date: May 23, 2024
    Inventors: Bin He, Brian Emberling, Mark Leather, Michael Mantor
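One way to visualize the abstract's single-cycle dual-issue idea is an operand cache feeding two ALU pipelines in the same cycle. The model below is a loose sketch; the cache layout, instruction tuples, and pipeline names are invented for illustration.

```python
# A highly simplified model of one execution cycle: operands gathered from the
# VGPR banks into a small cache feed two ALU pipelines in the same cycle.
def execute_cycle(operand_cache, instructions):
    results = []
    # Issue up to two instructions in the same cycle, one to each ALU pipeline.
    for pipe, (op, a, b) in zip(("pipe0", "pipe1"), instructions[:2]):
        x, y = operand_cache[a], operand_cache[b]
        results.append((pipe, x + y if op == "add" else x * y))
    return results

cache = {"v0": 2, "v1": 3, "v2": 5, "v3": 7}
print(execute_cycle(cache, [("add", "v0", "v1"), ("mul", "v2", "v3")]))
```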
  • Publication number: 20240103879
    Abstract: Block data load with transpose techniques are described. In one example, an input is received, at a control unit, specifying an instruction to load a block of data to at least one memory module using a transpose operation. Responsive to the control unit receiving the input, the block of data is caused to be loaded to the at least one memory module by transposing the block of data to form a transposed block of data and storing the transposed block of data in the at least one memory module.
    Type: Application
    Filed: September 25, 2022
    Publication date: March 28, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bin He, Michael John Mantor, Brian Emberling, Liang Huang, Chao Liu
  • Patent number: 11847462
    Abstract: A software-based instruction scoreboard indicates dependencies between closely-issued instructions issued to an arithmetic logic unit (ALU) pipeline. The software-based instruction scoreboard inserts one or more control words into the command stream between the dependent instructions, which is then executed by the ALU pipeline. The control words identify the instruction(s) upon which the dependent instructions depend (parent instructions) so that the GPU hardware can ensure that the ALU pipeline does not stall while the dependent instruction waits for results from the parent instruction.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: December 19, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Brian Emberling
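The mechanism is a software (compiler-side) pass that inserts control words naming the parent instruction a dependent instruction must wait on. The sketch below shows that insertion step in Python; the `WAIT_ON` encoding and the instruction strings are hypothetical.

```python
# Insert a control word before each dependent instruction so the hardware
# knows which earlier (parent) instruction's result it must wait for.
def insert_control_words(instructions, dependencies):
    """dependencies maps a dependent instruction index to its parent index."""
    stream = []
    for idx, inst in enumerate(instructions):
        if idx in dependencies:
            # Control word tells the ALU pipeline which producer to wait on.
            stream.append(("WAIT_ON", dependencies[idx]))
        stream.append(inst)
    return stream

prog = ["v_mul v0, v1, v2", "v_add v3, v0, v4"]
print(insert_control_words(prog, {1: 0}))
```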
  • Publication number: 20230205608
    Abstract: A disclosed technique includes executing, for a first wavefront, a barrier arrival notification instruction, for a first barrier, indicating arrival at a first barrier point; performing, for the first wavefront, work prior to the first barrier point; executing, for the first wavefront, a barrier check instruction; and executing, for the first wavefront, a control flow path based on a result of the barrier check instruction.
    Type: Application
    Filed: December 27, 2021
    Publication date: June 29, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Joseph L. Greathouse
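The abstract describes a split barrier: a wavefront signals arrival, keeps doing independent work, and only later checks whether the barrier has been satisfied. The CPU-thread analogue below illustrates that ordering; the function names and the busy-wait check are illustrative, not the hardware interface.

```python
import threading

NUM_WAVES = 4
barrier_count = 0
lock = threading.Lock()

def barrier_arrive():
    global barrier_count
    with lock:
        barrier_count += 1

def barrier_check():
    with lock:
        return barrier_count >= NUM_WAVES  # all waves have arrived

def wavefront(results, i):
    barrier_arrive()          # notify arrival at the barrier point early
    results[i] = i * i        # independent work before the barrier point
    while not barrier_check():
        pass                  # take the "not yet satisfied" control flow path

results = [0] * NUM_WAVES
threads = [threading.Thread(target=wavefront, args=(results, i)) for i in range(NUM_WAVES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 4, 9]
```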
  • Patent number: 11675568
    Abstract: A processing system executes wavefronts at multiple arithmetic logic unit (ALU) pipelines of a single instruction multiple data (SIMD) unit in a single execution cycle. The ALU pipelines each include a number of ALUs that execute instructions on wavefront operands that are collected from vector general purpose register (VGPR) banks at a cache and output results of the instructions executed on the wavefronts at a buffer. By storing wavefronts supplied by the VGPR banks at the cache, a greater number of wavefronts can be made available to the SIMD unit without increasing the VGPR bandwidth, enabling multiple ALU pipelines to execute instructions during a single execution cycle.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: June 13, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bin He, Brian Emberling, Mark Leather, Michael Mantor
  • Publication number: 20230097279
    Abstract: Methods and systems are disclosed for executing operations on single-instruction-multiple-data (SIMD) units. Techniques disclosed perform a dot product operation on input data during one compute cycle, including convolving the input data, generating intermediate data, and applying one or more transitional operations to the intermediate data to generate output data. In aspects described, the input data is an input to a layer of a convolutional neural network and the generated output data is the output of the layer.
    Type: Application
    Filed: September 29, 2021
    Publication date: March 30, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Michael Mantor, Michael Y. Chow, Bin He
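A rough scalar analogue of the fused operation described here is a dot product followed immediately by a transitional operation such as ReLU. The snippet below is only a conceptual sketch; the choice of ReLU and the function name are assumptions.

```python
# Dot product followed by a "transitional" operation (here ReLU) in one pass,
# producing a layer output value directly from the inputs.
def fused_dot_relu(weights, activations, bias=0.0):
    acc = bias
    for w, a in zip(weights, activations):
        acc += w * a               # the dot-product (convolution) part
    return max(acc, 0.0)           # the transitional operation applied in the same pass

print(fused_dot_relu([0.5, -1.0, 2.0], [1.0, 2.0, 0.25]))  # 0.5 - 2.0 + 0.5 = -1.0 -> 0.0
```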
  • Publication number: 20230102767
    Abstract: Methods and systems are disclosed for executing a collaborative task in a shader system. Techniques disclosed include receiving, by the system, input data and computing instructions associated with the collaborative task, as well as a configuration setting that causes the system to operate in a takeover mode. The system then launches, exclusively in one workgroup processor, a workgroup including wavefronts configured to execute the collaborative task.
    Type: Application
    Filed: September 29, 2021
    Publication date: March 30, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Michael Y. Chow
  • Patent number: 11386518
    Abstract: The address of the draw or dispatch packet responsible for creating an exception ties a shader/wavefront back to the draw command from which it originated. In various embodiments, a method of operating a graphics pipeline and exception handling includes receiving, at a command processor of a graphics processing unit (GPU), an exception signal indicating an occurrence of a pipeline exception at a shader stage of a graphics pipeline. The shader stage generates an exception signal in response to a pipeline exception and transmits the exception signal to the command processor. The command processor determines, based on the exception signal, an address of a command packet responsible for the occurrence of the pipeline exception.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: July 12, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael Mantor, Alexander Fuad Ashkar, Randy Ramsey, Mangesh P. Nijasure, Brian Emberling
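The bookkeeping implied by the abstract is a command processor that can map an exception back to the draw or dispatch packet that caused it. The sketch below models that mapping; the class, method names, and launch IDs are invented for illustration.

```python
# The command processor remembers the packet address behind each in-flight
# shader launch, so an exception signal can be resolved back to that packet.
class CommandProcessor:
    def __init__(self):
        self.launch_to_packet = {}

    def dispatch(self, launch_id, packet_address):
        self.launch_to_packet[launch_id] = packet_address

    def on_exception(self, launch_id):
        # Determine the address of the command packet responsible for the fault.
        return self.launch_to_packet.get(launch_id)

cp = CommandProcessor()
cp.dispatch(launch_id=7, packet_address=0x1000)
print(hex(cp.on_exception(7)))  # 0x1000
```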
  • Publication number: 20220197649
    Abstract: A processing unit includes a first memory device and a second memory device. The first memory device includes a first plurality of general purpose registers (GPRs) and the second memory device includes a second plurality of GPRs. The second memory device includes fewer GPRs than the first memory device. Program data is stored at the first memory device and the second memory device based on expected frequency of accesses associated with the program data.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 23, 2022
    Inventors: Prasanna Balasundaram, Dipayan Karmakar, Brian Emberling
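In spirit, this is a two-tier register file with placement driven by expected access frequency. The toy allocator below illustrates the placement decision; the capacities and value names are made up.

```python
# Values expected to be accessed often go in the small, fast GPR file;
# everything else goes in the larger, slower one.
def place_values(expected_accesses, fast_capacity=4):
    by_frequency = sorted(expected_accesses, key=expected_accesses.get, reverse=True)
    fast = by_frequency[:fast_capacity]   # hottest values -> small GPR file
    slow = by_frequency[fast_capacity:]   # everything else -> large GPR file
    return fast, slow

print(place_values({"loop_counter": 100, "accum": 80, "addr": 60,
                    "mask": 40, "temp1": 5, "temp2": 2}))
```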
  • Publication number: 20220188076
    Abstract: A processing system executes wavefronts at multiple arithmetic logic unit (ALU) pipelines of a single instruction multiple data (SIMD) unit in a single execution cycle. The ALU pipelines each include a number of ALUs that execute instructions on wavefront operands that are collected from vector general purpose register (VGPR) banks at a cache and output results of the instructions executed on the wavefronts at a buffer. By storing wavefronts supplied by the VGPR banks at the cache, a greater number of wavefronts can be made available to the SIMD unit without increasing the VGPR bandwidth, enabling multiple ALU pipelines to execute instructions during a single execution cycle.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 16, 2022
    Inventors: Bin He, Brian Emberling, Mark Leather, Michael Mantor
  • Publication number: 20220188120
    Abstract: A software-based instruction scoreboard indicates dependencies between closely-issued instructions issued to an arithmetic logic unit (ALU) pipeline. The software-based instruction scoreboard inserts one or more control words into the command stream between the dependent instructions, which is then executed by the ALU pipeline. The control words identify the instruction(s) upon which the dependent instructions depend (parent instructions) so that the GPU hardware can ensure that the ALU pipeline does not stall while the dependent instruction waits for results from the parent instruction.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventor: Brian Emberling
  • Patent number: 11226819
    Abstract: A processing unit includes a plurality of processing elements and one or more caches. A first thread executes a program that includes one or more prefetch instructions to prefetch information into a first cache. Prefetching is selectively enabled when executing the first thread on a first processing element dependent upon whether one or more second threads previously executed the program on the first processing element. The first thread is then dispatched to execute the program on the first processing element. In some cases, a dispatcher receives the first thread for dispatching to the first processing element. The dispatcher modifies the prefetch instruction to disable prefetching into the first cache in response to the one or more second threads having previously executed the program on the first processing element.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: January 18, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Brian Emberling, Michael Mantor
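The key decision is made at dispatch time: if the program has already run on the target processing element, prefetching is disabled for the new thread because the caches are likely warm. The sketch below models that decision; the `Dispatcher` class and its history set are illustrative assumptions.

```python
# If another thread already ran this program on the target processing element,
# its code and data are likely cached, so prefetch is disabled for the new thread.
class Dispatcher:
    def __init__(self):
        self.history = set()  # (processing_element, program) pairs already executed

    def dispatch(self, thread, program, pe):
        thread["prefetch_enabled"] = (pe, program) not in self.history
        self.history.add((pe, program))
        return thread

d = Dispatcher()
print(d.dispatch({"id": 0}, "shader_A", pe=3))  # first run: prefetch enabled
print(d.dispatch({"id": 1}, "shader_A", pe=3))  # warm caches: prefetch disabled
```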
  • Publication number: 20210090205
    Abstract: The address of the draw or dispatch packet responsible for creating an exception ties a shader/wavefront back to the draw command from which it originated. In various embodiments, a method of operating a graphics pipeline and exception handling includes receiving, at a command processor of a graphics processing unit (GPU), an exception signal indicating an occurrence of a pipeline exception at a shader stage of a graphics pipeline. The shader stage generates an exception signal in response to a pipeline exception and transmits the exception signal to the command processor. The command processor determines, based on the exception signal, an address of a command packet responsible for the occurrence of the pipeline exception.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 25, 2021
    Inventors: Michael Mantor, Alexander Fuad Ashkar, Randy Ramsey, Mangesh P. Nijasure, Brian Emberling
  • Publication number: 20190278605
    Abstract: A system includes a processor configured to operate in at least a first mode and a second mode. In the first mode the processor operates to execute an instruction for an entire wavefront before executing a next instruction for the entire wavefront. In the second mode the processor operates to execute a set of instructions for a portion of a wavefront before executing the set of instructions for another portion of the same wavefront. The system further includes a memory coupled to the processor. The memory is configured to store a shader program for execution by the processor, wherein the shader program includes at least one indication associated with one of the first mode or the second mode. The processor implements one of the first mode or the second mode while executing the shader program responsive to the at least one indication present in the shader program.
    Type: Application
    Filed: May 29, 2019
    Publication date: September 12, 2019
    Inventors: Brian Emberling, Michael Mantor
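The two modes contrast instruction-major execution (each instruction runs across the whole wavefront) with portion-major execution (a set of instructions runs on one portion of the wavefront at a time). The toy interpreter below illustrates the difference; lane counts, the `portion` size, and the instruction strings are invented.

```python
# In the first mode each instruction runs across the whole wavefront before
# the next one; in the second mode a set of instructions runs to completion
# on one portion of the wavefront at a time.
def run(instructions, lanes, whole_wavefront_mode, portion=2):
    trace = []
    if whole_wavefront_mode:
        for inst in instructions:
            trace.append((inst, tuple(lanes)))            # one instruction, all lanes
    else:
        for start in range(0, len(lanes), portion):
            part = tuple(lanes[start:start + portion])
            for inst in instructions:
                trace.append((inst, part))                # all instructions, one portion
    return trace

print(run(["add", "mul"], [0, 1, 2, 3], whole_wavefront_mode=True))
print(run(["add", "mul"], [0, 1, 2, 3], whole_wavefront_mode=False))
```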
  • Publication number: 20190155604
    Abstract: A processing unit includes a plurality of processing elements and one or more caches. A first thread executes a program that includes one or more prefetch instructions to prefetch information into a first cache. Prefetching is selectively enabled when executing the first thread on a first processing element dependent upon whether one or more second threads previously executed the program on the first processing element. The first thread is then dispatched to execute the program on the first processing element. In some cases, a dispatcher receives the first thread for dispatching to the first processing element. The dispatcher modifies the prefetch instruction to disable prefetching into the first cache in response to the one or more second threads having previously executed the program on the first processing element.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 23, 2019
    Inventors: Brian Emberling, Michael Mantor
  • Patent number: 10140123
    Abstract: A graphics processing unit is disclosed, the graphics processing unit having a processor with one or more SIMD processing units, a local data share corresponding to one of the one or more SIMD processing units, the local data share comprising one or more low latency accessible memory regions for each group of threads assigned to one or more execution wavefronts, and a global data share comprising one or more low latency memory regions for each group of threads.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: November 27, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael J. Mantor, Brian Emberling
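The structure described is a local data share per SIMD unit, carved into low-latency regions per thread group, alongside a global data share. The sketch below builds that layout as nested Python dictionaries; the region sizes and counts are arbitrary placeholders.

```python
# Each SIMD unit gets a local data share with a region per thread group;
# a global data share provides regions visible to all groups.
def build_data_shares(num_simds, groups_per_simd, lds_region_bytes=1024, gds_region_bytes=512):
    local_shares = {
        simd: {group: bytearray(lds_region_bytes) for group in range(groups_per_simd)}
        for simd in range(num_simds)
    }
    global_share = {group: bytearray(gds_region_bytes)
                    for group in range(num_simds * groups_per_simd)}
    return local_shares, global_share

lds, gds = build_data_shares(num_simds=2, groups_per_simd=4)
print(len(lds[0][0]), len(gds[0]))  # 1024 512
```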