Patents by Inventor Andrea Deidda

Andrea Deidda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230259467
    Abstract: A DNN accelerator includes a DMA engine that can execute tasks in parallel. A task includes a sequence of stages, such as a sequence including a source stage, response stage, destination stage, and post stage. The DMA engine may include a channel having a pipelined structure that includes a sequence of control modules and a sequence of data processing modules. A control module may correspond to a data processing module. A pair of a control module and a data processing module may constitute a stage of the channel, which processes the corresponding stage of a task. The channel may execute multiple tasks in parallel. For instance, the second stage of a first task may be processed simultaneously with the first stage of a second task. The parallel execution of multiple tasks can reduce or remove the impact of memory latencies on the performance of the DNN accelerator.
    Type: Application
    Filed: September 12, 2022
    Publication date: August 17, 2023
    Applicant: Intel Corporation
    Inventor: Andrea Deidda
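    The overlap described in this abstract can be illustrated with a minimal pipeline-schedule sketch, assuming one cycle per stage and one new task entering the channel each cycle; the stage names come from the abstract, everything else is illustrative:

    ```python
    # Stage names as listed in the abstract; the schedule model is an assumption.
    STAGES = ["source", "response", "destination", "post"]

    def pipeline_schedule(num_tasks):
        """Map (task, stage) -> cycle in which that stage executes.

        Task t enters the channel one cycle after task t-1, so stage i of
        task t runs in cycle t + i, overlapping with other tasks' stages.
        """
        return {(t, s): t + i
                for t in range(num_tasks)
                for i, s in enumerate(STAGES)}

    sched = pipeline_schedule(3)
    # Task 1's "source" stage runs in the same cycle as task 0's "response" stage.
    assert sched[(1, "source")] == sched[(0, "response")]
    # Three 4-stage tasks finish in 6 cycles instead of 12 serial cycles.
    assert max(sched.values()) + 1 == 6
    ```

    Under this model the memory-latency cycles of one task's stage are hidden behind useful work on the neighboring tasks, which is the benefit the abstract claims.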
  • Publication number: 20230072082
    Abstract: A system includes a first memory, a compiler, and a DNN accelerator. The DNN accelerator includes a DMA engine, an acceleration module, and a compute block. The compute block includes a second memory. The compiler may generate a task for transferring activations from the second memory to the first memory. The DMA engine may receive the task and read the activations from the second memory. The acceleration module may compress the activations to generate compressed activation data and write the compressed activation data into the external memory (the first memory). The acceleration module may also store the size of the compressed activation data in the local memory, which the DMA engine may later use to read the activations back from the first memory into the second memory. The compressed activation data may include non-zero activations and sparsity bitmaps. The compressed activation data may also include a header or zero-point marker.
    Type: Application
    Filed: October 28, 2022
    Publication date: March 9, 2023
    Inventors: Sudheendra Kadri, Andrea Deidda, Hassan Kamal, Martin-Thomas Grymel, Alfonso Tarazona Martinez, David Thomas Bernard
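    The compressed format this abstract describes, non-zero activations packed densely alongside a sparsity bitmap, can be sketched as follows; the function names and layout are assumptions for illustration, not the patented format:

    ```python
    import numpy as np

    def compress_activations(acts):
        """Return (non-zero activations, sparsity bitmap, compressed size)."""
        acts = np.asarray(acts)
        bitmap = (acts != 0).astype(np.uint8)  # one flag per activation position
        nonzero = acts[acts != 0]              # dense packing of the non-zeros
        return nonzero, bitmap, nonzero.size

    def decompress_activations(nonzero, bitmap):
        """Rebuild the dense tensor by scattering non-zeros per the bitmap."""
        out = np.zeros(bitmap.shape, dtype=nonzero.dtype)
        out[bitmap.astype(bool)] = nonzero
        return out

    acts = np.array([0, 3, 0, 0, 7, 1, 0, 2])
    nz, bm, size = compress_activations(acts)
    assert size == 4 and list(bm) == [0, 1, 0, 0, 1, 1, 0, 1]
    assert (decompress_activations(nz, bm) == acts).all()
    ```

    Storing the compressed size separately, as the abstract notes, lets the DMA engine know how many bytes to fetch when reading the activations back, since the compressed length varies with sparsity.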
  • Publication number: 20230017662
    Abstract: A DNN accelerator includes a DMA engine that can rearrange weight data layout. The DMA engine may read a weight tensor from a memory (e.g., DRAM). The weight tensor includes weights arranged in a 3D matrix. The DMA engine may partition the weight tensor into a plurality of virtual banks based on a structure of a PE array, e.g., based on the number of activated PE columns in the PE array. Then the DMA engine may partition a virtual bank into a plurality of virtual sub-banks. The DMA engine may also identify data blocks from different ones of the plurality of virtual sub-banks. A data block may include a plurality of input channels and may have a predetermined spatial size and storage size. The DMA engine may form a linear data structure by interleaving the data blocks. The DMA engine can write the linear data structure into another memory (e.g., SRAM).
    Type: Application
    Filed: September 16, 2022
    Publication date: January 19, 2023
    Inventors: Sudheendra Kadri, Darren Crews, Deepak Abraham Mathaikutty, Andrea Deidda, Arnab Raha, Kevin Brady, David Thomas Bernard
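    The bank/sub-bank partitioning and block interleaving this abstract describes can be sketched on a flat weight buffer; the partition counts, block size, and round-robin interleaving order here are illustrative assumptions, not the claimed layout:

    ```python
    def split(seq, n):
        """Split seq into n contiguous equal parts (len(seq) divisible by n)."""
        step = len(seq) // n
        return [seq[i * step:(i + 1) * step] for i in range(n)]

    def rearrange_weights(weights, num_banks, subbanks_per_bank, block_size):
        """Partition weights into virtual banks, then sub-banks, then
        interleave fixed-size blocks from the sub-banks into one linear
        structure (as the DMA engine would write it to SRAM)."""
        linear = []
        for bank in split(weights, num_banks):
            subbanks = split(bank, subbanks_per_bank)
            blocks = [split(sb, len(sb) // block_size) for sb in subbanks]
            # Round-robin: take one block from each sub-bank in turn.
            for group in zip(*blocks):
                for block in group:
                    linear.extend(block)
        return linear

    out = rearrange_weights(list(range(16)), num_banks=2,
                            subbanks_per_bank=2, block_size=2)
    assert out == [0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15]
    ```

    The interleaved order places blocks destined for different PE columns adjacently, so a single linear burst from SRAM can feed the activated columns of the PE array without further address arithmetic.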