Patents by Inventor Fabio Checconi

Fabio Checconi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250247337
    Abstract: Examples described herein relate to a forwarding element. In some examples, a circuitry, in the forwarding element, is to: receive telemetry data; cause storage of the telemetry data in a buffer; and forward the telemetry data to a network device. In some examples, bandwidth and buffer space in the buffer are exclusively allocated for forwarding the telemetry data, and the telemetry data comprises at least one of: management commands, device error reporting data, device performance data, device error data, or device debug data.
    Type: Application
    Filed: March 17, 2025
    Publication date: July 31, 2025
    Inventors: Gurpreet Singh Kalsi, Kartik Lakhotia, Hossein Farrokhbakht, Fabio Checconi, Fabrizio Petrini
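    For illustration only (not part of the filing): a minimal C sketch of the dedicated telemetry buffering the abstract above describes, using a fixed-size ring reserved exclusively for telemetry records before they are forwarded. All identifiers (tele_record, tele_push, tele_forward) are hypothetical.

      #include <stdio.h>
      #include <string.h>

      /* Hypothetical telemetry record: error, performance, or debug data. */
      struct tele_record {
          unsigned kind;              /* e.g. 0=error, 1=performance, 2=debug */
          char payload[56];
      };

      /* Buffer space reserved exclusively for telemetry; ordinary forwarded
       * traffic never shares these slots in this sketch. */
      #define TELE_SLOTS 64
      static struct tele_record telemetry_ring[TELE_SLOTS];
      static unsigned head, tail;

      /* Store a telemetry record in the dedicated buffer. */
      static int tele_push(const struct tele_record *r)
      {
          if ((head + 1) % TELE_SLOTS == tail)
              return -1;                        /* dedicated buffer full */
          telemetry_ring[head] = *r;
          head = (head + 1) % TELE_SLOTS;
          return 0;
      }

      /* Drain the buffer toward a network device; stands in for a transmit
       * path with its own reserved bandwidth. */
      static void tele_forward(void)
      {
          while (tail != head) {
              printf("forwarding telemetry kind=%u: %s\n",
                     telemetry_ring[tail].kind, telemetry_ring[tail].payload);
              tail = (tail + 1) % TELE_SLOTS;
          }
      }

      int main(void)
      {
          struct tele_record r = { .kind = 1 };
          strcpy(r.payload, "port0 throughput sample");
          tele_push(&r);
          tele_forward();
          return 0;
      }
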
  • Patent number: 12360824
    Abstract: A memory architecture may provide support for any number of direct memory access (DMA) operations at least partially independent of the CPU coupled to the memory. DMA operations may involve data movement between two or more memory locations and may involve minor computations. At least some DMA operations may include any number of atomic functions, and at least some of the atomic functions may include a corresponding return value. A system includes a first direct memory access (DMA) engine to request a DMA operation. The DMA operation includes an atomic operation. The system also includes a second DMA engine to receive a return value associated with the atomic operation and store the return value at a source memory.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: July 15, 2025
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Fabio Checconi, Fabrizio Petrini
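    For illustration only (not part of the patent): a minimal C sketch of the pattern the abstract describes, where one DMA engine issues an atomic operation at a destination and a second engine stores the returned old value back at the source memory, keeping the CPU out of the loop. All names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      /* Toy memories: a source buffer near the requesting engine and a
       * destination buffer elsewhere in the system. */
      static uint64_t source_mem[4];
      static uint64_t dest_mem[4];

      /* Atomic fetch-and-add at the destination; the old value stands in
       * for the atomic function's return value. */
      static uint64_t atomic_fetch_add_u64(uint64_t *addr, uint64_t val)
      {
          uint64_t old = *addr;
          *addr = old + val;
          return old;
      }

      /* First DMA engine: requests the DMA operation containing the atomic. */
      static uint64_t dma_request_atomic(size_t dst_idx, uint64_t val)
      {
          return atomic_fetch_add_u64(&dest_mem[dst_idx], val);
      }

      /* Second DMA engine: receives the return value and stores it at the
       * source memory. */
      static void dma_store_return(size_t src_idx, uint64_t ret)
      {
          source_mem[src_idx] = ret;
      }

      int main(void)
      {
          dest_mem[0] = 100;
          uint64_t ret = dma_request_atomic(0, 7);   /* dest_mem[0] -> 107  */
          dma_store_return(0, ret);                  /* source_mem[0] -> 100 */
          printf("dest=%llu returned=%llu\n",
                 (unsigned long long)dest_mem[0],
                 (unsigned long long)source_mem[0]);
          return 0;
      }
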
  • Publication number: 20250110918
    Abstract: Techniques for offloading function streams are described. In some examples, a function is a sequence of instructions and a stream is a sequence of functions. In some examples, a co-processor is to handle functions and/or function streams provided by a main processor. In some examples, the co-processor includes a plurality of execution resources that at least include one or more of a direct memory access (DMA) engine, an atomic engine, and a collectives engine.
    Type: Application
    Filed: September 30, 2023
    Publication date: April 3, 2025
    Inventors: Robert Pawlowski, Vincent Cave, Fabio Checconi, Scott Cline, Shruti Sharma
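    For illustration only (not from the filing): a minimal C sketch of a function stream, i.e. an ordered sequence of function descriptors handed to a co-processor that dispatches each one to a DMA, atomic, or collectives engine. The descriptor layout and names are hypothetical.

      #include <stdio.h>

      /* Hypothetical execution resources on the co-processor. */
      enum engine { ENG_DMA, ENG_ATOMIC, ENG_COLLECTIVE };

      /* A "function" is one descriptor; a stream is an ordered array of
       * them offloaded by the main processor in a single hand-off. */
      struct func_desc {
          enum engine target;
          const char *what;
      };

      static void offload_stream(const struct func_desc *stream, int n)
      {
          for (int i = 0; i < n; i++) {
              switch (stream[i].target) {
              case ENG_DMA:        printf("DMA engine: %s\n", stream[i].what); break;
              case ENG_ATOMIC:     printf("atomic engine: %s\n", stream[i].what); break;
              case ENG_COLLECTIVE: printf("collectives engine: %s\n", stream[i].what); break;
              }
          }
      }

      int main(void)
      {
          struct func_desc stream[] = {
              { ENG_DMA,        "copy block A -> B" },
              { ENG_ATOMIC,     "fetch-and-add shared counter" },
              { ENG_COLLECTIVE, "all-reduce partial sums" },
          };
          offload_stream(stream, 3);
          return 0;
      }
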
  • Publication number: 20240241645
    Abstract: Systems, apparatuses and methods may provide for technology that includes a plurality of hash management buffers corresponding to a plurality of pipelines, wherein each hash management buffer in the plurality of hash management buffers is adjacent to a pipeline in the plurality of pipelines, and wherein a first hash management buffer is to issue one or more hash packets associated with one or more hash operations on a hash table. The technology may also include a plurality of hash engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each hash engine in the plurality of hash engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the hash engines is to initialize a target memory destination associated with the hash table and conduct the one or more hash operations in response to the one or more hash packets.
    Type: Application
    Filed: March 29, 2024
    Publication date: July 18, 2024
    Inventors: Robert Pawlowski, Shruti Sharma, Fabio Checconi, Sriram Aananthakrishnan, Jesmin Jahan Tithi, Jordi Wolfson-Pou, Joshua B. Fryman
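    For illustration only (not from the filing): a minimal C sketch of the near-memory hashing idea above, where a key is routed to the DRAM slice, and hence the adjacent hash engine, that owns it, and the engine performs the operation locally. The partitioning function and names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define NUM_DRAMS 4            /* one hash engine per DRAM        */
      #define BUCKETS   256          /* buckets held by each DRAM slice */

      /* Each DRAM slice holds its share of the distributed hash table. */
      static uint64_t table[NUM_DRAMS][BUCKETS];

      /* A "hash packet" issued by a pipeline-side hash management buffer. */
      struct hash_packet { uint64_t key; uint64_t value; };

      /* Pick the DRAM (and adjacent hash engine) that owns this key. */
      static unsigned owner_dram(uint64_t key) { return (unsigned)((key >> 8) % NUM_DRAMS); }

      /* DRAM-side hash engine: conducts the hash operation locally. */
      static void hash_engine_insert(unsigned dram, struct hash_packet p)
      {
          table[dram][p.key % BUCKETS] = p.value;
      }

      int main(void)
      {
          memset(table, 0, sizeof table);   /* engines initialize the target memory */

          struct hash_packet p = { .key = 0xBEEF, .value = 42 };
          hash_engine_insert(owner_dram(p.key), p);
          printf("stored %llu via hash engine %u\n",
                 (unsigned long long)p.value, owner_dram(p.key));
          return 0;
      }
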
  • Publication number: 20240069921
    Abstract: Technology described herein provides a dynamically reconfigurable processing core. The technology includes a plurality of pipelines comprising a core, where the core is reconfigurable into one of a plurality of core modes, a core network to provide inter-pipeline connections for the pipelines, and logic to receive a morph instruction including a target core mode from an application running on the core, determine a present core state for the core, and morph, based on the present core state, the core to the target core mode. In embodiments, to morph the core, the logic is to select, based on the target core mode, which inter-pipeline connections are active, where each pipeline includes at least one multiplexor via which the inter-pipeline connections are selected to be active. In embodiments, to morph the core, the logic is further to select, based on the target core mode, which memory access paths are active.
    Type: Application
    Filed: September 29, 2023
    Publication date: February 29, 2024
    Inventors: Scott Cline, Robert Pawlowski, Joshua Fryman, Ivan Ganev, Vincent Cave, Sebastian Szkoda, Fabio Checconi
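    For illustration only (not from the filing): a minimal C sketch of a morph request, where the core checks its present state before switching to a target mode and then selects which inter-pipeline connections and memory access paths are active. The mode names and selector fields are hypothetical.

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical core modes an application could request via a morph
       * instruction. */
      enum core_mode { MODE_SCALAR, MODE_WIDE_SIMD, MODE_MULTI_THREAD };

      struct core_state {
          enum core_mode mode;
          bool pipelines_idle;       /* present core state checked before morphing  */
          unsigned mux_select;       /* which inter-pipeline connections are active */
          unsigned mem_path_select;  /* which memory access paths are active        */
      };

      /* Morph the core to the target mode, but only from a safe state. */
      static int morph(struct core_state *c, enum core_mode target)
      {
          if (!c->pipelines_idle)
              return -1;                           /* defer until in-flight work drains */
          c->mode            = target;
          c->mux_select      = (target == MODE_WIDE_SIMD)    ? 1u : 0u;
          c->mem_path_select = (target == MODE_MULTI_THREAD) ? 2u : 0u;
          return 0;
      }

      int main(void)
      {
          struct core_state core = { MODE_SCALAR, true, 0, 0 };
          if (morph(&core, MODE_WIDE_SIMD) == 0)
              printf("core morphed: mux=%u mem=%u\n", core.mux_select, core.mem_path_select);
          return 0;
      }
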
  • Publication number: 20240020253
    Abstract: Systems, apparatuses and methods may provide for technology that detects, by an operation engine, a plurality of sub-instruction requests from a first memory engine in a plurality of memory engines, wherein the plurality of sub-instruction requests are associated with a direct memory access (DMA) data type conversion request from a first pipeline, wherein each sub-instruction request corresponds to a data element in the DMA data type conversion request, and wherein the first memory engine is to correspond to the first pipeline, decodes the plurality of sub-instruction requests to identify one or more arguments, loads a source array from a dynamic random access memory (DRAM) in a plurality of DRAMs, wherein the operation engine is to correspond to the DRAM, and conducts a conversion of the source array from a first data type to a second data type in accordance with the one or more arguments.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 18, 2024
    Inventors: Shruti Sharma, Robert Pawlowski, Fabio Checconi, Jesmin Jahan Tithi
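    For illustration only (not from the filing): a minimal C sketch of the end result of a DMA data type conversion, where each element of a source array is converted to a second data type, one sub-instruction per element. The function and type choices are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      /* Convert a source array from one data type to another, as the
       * operation engine would after decoding the request's arguments
       * (source address, destination address, element count). */
      static void dma_convert_i32_to_f32(const int32_t *src, float *dst, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              dst[i] = (float)src[i];      /* one data element per sub-instruction */
      }

      int main(void)
      {
          int32_t src[4] = { 1, -2, 3, -4 };
          float dst[4];
          dma_convert_i32_to_f32(src, dst, 4);
          for (int i = 0; i < 4; i++)
              printf("%.1f\n", dst[i]);
          return 0;
      }
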
  • Publication number: 20230333998
    Abstract: Systems, apparatuses and methods may provide for technology that includes a plurality of memory engines corresponding to a plurality of pipelines, wherein each memory engine in the plurality of memory engines is adjacent to a pipeline in the plurality of pipelines, and wherein a first memory engine is to request one or more direct memory access (DMA) operations associated with a first pipeline, and a plurality of operation engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each operation engine in the plurality of operation engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the plurality of operation engines is to conduct the one or more DMA operations based on one or more bitmaps.
    Type: Application
    Filed: May 5, 2023
    Publication date: October 19, 2023
    Inventors: Shruti Sharma, Robert Pawlowski, Fabio Checconi, Jesmin Jahan Tithi
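    For illustration only (not from the filing): a minimal C sketch of a DMA operation driven by a bitmap, here a gather that copies only the source elements whose bit is set and packs them into the destination. The operation and names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      /* Bitmap-driven gather: copy element i only if bit i is set,
       * packing the selected elements into dst. Returns the count. */
      static size_t dma_bitmap_gather(const uint64_t *bitmap,
                                      const int *src, int *dst, size_t n)
      {
          size_t out = 0;
          for (size_t i = 0; i < n; i++)
              if (bitmap[i / 64] & (1ULL << (i % 64)))
                  dst[out++] = src[i];
          return out;
      }

      int main(void)
      {
          int src[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
          int dst[8];
          uint64_t bitmap[1] = { 0xA5 };   /* bits 0, 2, 5, 7 set */
          size_t n = dma_bitmap_gather(bitmap, src, dst, 8);
          for (size_t i = 0; i < n; i++)
              printf("%d ", dst[i]);       /* prints 10 12 15 17 */
          printf("\n");
          return 0;
      }
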
  • Publication number: 20230325185
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed for performance of sparse matrix times dense matrix operations. Example instructions cause programmable circuitry to control execution of the sparse matrix times dense matrix operation using a sparse matrix and a dense matrix stored in memory, and transmit a plurality of instructions to execute the sparse matrix times dense matrix operation to DMA engine circuitry, the plurality of instructions to cause DMA engine circuitry to create an output matrix in the memory, the creation of the output matrix in the memory performed without the programmable circuitry computing the output matrix.
    Type: Application
    Filed: March 31, 2023
    Publication date: October 12, 2023
    Inventors: Jesmin Jahan Tithi, Fabio Checconi, Ahmed Helal, Fabrizio Petrini
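    For illustration only (not from the filing): a minimal C sketch of the arithmetic being offloaded, a sparse (CSR) matrix times a dense matrix producing a dense output. In the patent this work is driven by DMA engine circuitry rather than the host loop shown here; the CSR layout and names are conventional, not taken from the filing.

      #include <stdio.h>

      /* C = A * B, where A is M x K sparse in CSR form and B is K x N dense. */
      static void spmm_csr(int M, int N,
                           const int *rowptr, const int *colidx, const double *val,
                           const double *B, double *C)
      {
          for (int i = 0; i < M; i++)
              for (int p = rowptr[i]; p < rowptr[i + 1]; p++)
                  for (int j = 0; j < N; j++)
                      C[i * N + j] += val[p] * B[colidx[p] * N + j];
      }

      int main(void)
      {
          /* A = [[2 0],[0 3]] in CSR; B = [[1 1],[1 1]]; expect C = [[2 2],[3 3]]. */
          int rowptr[] = { 0, 1, 2 }, colidx[] = { 0, 1 };
          double val[] = { 2.0, 3.0 };
          double B[] = { 1, 1, 1, 1 }, C[4] = { 0 };
          spmm_csr(2, 2, rowptr, colidx, val, B, C);
          printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
          return 0;
      }
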
  • Publication number: 20230315451
    Abstract: Systems, apparatuses and methods may provide for technology that detects, by an operation engine, a plurality of sub-instruction requests from a first memory engine in a plurality of memory engines, wherein the plurality of sub-instruction requests are associated with a direct memory access (DMA) bitmap manipulation request from a first pipeline, wherein each sub-instruction request corresponds to a data element in the DMA bitmap manipulation request, and wherein the first memory engine is to correspond to the first pipeline. The technology also detects, by the operation engine, one or more arguments in the plurality of sub-instruction requests, sends, by the operation engine, one or more load requests to a DRAM in the plurality of DRAMs in accordance with the one or more arguments, and sends, by the operation engine, one or more store requests to the DRAM in accordance with the one or more arguments, wherein the operation engine is to correspond to the DRAM.
    Type: Application
    Filed: May 31, 2023
    Publication date: October 5, 2023
    Inventors: Shruti Sharma, Robert Pawlowski, Fabio Checconi, Jesmin Jahan Tithi
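    For illustration only (not from the filing): a minimal C sketch of a bitmap manipulation carried out as load-modify-store near memory, here setting a range of bits word by word. The operation and names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      /* Set bits [first, first + count): load the affected word, modify
       * the bit, and store the word back, as an engine sitting next to
       * the DRAM would. */
      static void bitmap_set_range(uint64_t *bitmap, size_t first, size_t count)
      {
          for (size_t i = first; i < first + count; i++) {
              uint64_t word = bitmap[i / 64];   /* load   */
              word |= 1ULL << (i % 64);         /* modify */
              bitmap[i / 64] = word;            /* store  */
          }
      }

      int main(void)
      {
          uint64_t bitmap[2] = { 0, 0 };
          bitmap_set_range(bitmap, 3, 5);       /* set bits 3..7 */
          printf("0x%llx\n", (unsigned long long)bitmap[0]);   /* 0xf8 */
          return 0;
      }
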
  • Publication number: 20230095207
    Abstract: A memory architecture may provide support for any number of direct memory access (DMA) operations at least partially independent of the CPU coupled to the memory. DMA operations may involve data movement between two or more memory locations and may involve minor computations. At least some DMA operations may include any number of atomic functions, and at least some of the atomic functions may include a corresponding return value. A system includes a first direct memory access (DMA) engine to request a DMA operation. The DMA operation includes an atomic operation. The system also includes a second DMA engine to receive a return value associated with the atomic operation and store the return value at a source memory.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Robert Pawlowski, Fabio Checconi, Fabrizio Petrini
  • Patent number: 10877812
    Abstract: A plurality of hardware accelerators are interconnected and include a special processing unit and accelerator memory. At least one host computer is coupled to each of the plurality of hardware accelerators and includes a general processing unit and host memory. The plurality of hardware accelerators exchange data in a ring communication pattern in computing a linear layer of a neural network.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: December 29, 2020
    Assignee: International Business Machines Corporation
    Inventors: Patrick D.M. Siegl, Fabio Checconi, Daniele Buono, Alessandro Morari
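    For illustration only (not from the patent): a minimal C sketch of the ring communication pattern, simulated sequentially, where each accelerator's partial result for the linear layer travels once around the ring so every device ends with the full sum without routing through the host. The function name and step schedule are hypothetical.

      #include <stdio.h>

      #define NUM_ACCEL 4

      /* Each accelerator holds one partial sum; after NUM_ACCEL - 1 ring
       * steps every accelerator has accumulated all of them. */
      static void ring_exchange(const double partial[NUM_ACCEL], double result[NUM_ACCEL])
      {
          for (int a = 0; a < NUM_ACCEL; a++)
              result[a] = partial[a];

          for (int step = 1; step < NUM_ACCEL; step++)
              for (int a = 0; a < NUM_ACCEL; a++) {
                  int from = (a - step + NUM_ACCEL) % NUM_ACCEL;   /* ring neighbour */
                  result[a] += partial[from];
              }
      }

      int main(void)
      {
          double partial[NUM_ACCEL] = { 1.0, 2.0, 3.0, 4.0 };
          double result[NUM_ACCEL];
          ring_exchange(partial, result);
          for (int a = 0; a < NUM_ACCEL; a++)
              printf("accelerator %d sees %g\n", a, result[a]);    /* all print 10 */
          return 0;
      }
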
  • Publication number: 20200081744
    Abstract: A plurality of hardware accelerators are interconnected and include a special processing unit and accelerator memory. At least one host computer is coupled to each of the plurality of hardware accelerators and includes a general processing unit and host memory. The plurality of hardware accelerators exchange data in a ring communication pattern in computing a linear layer of a neural network.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 12, 2020
    Inventors: Patrick D.M. Siegl, Fabio Checconi, Daniele Buono, Alessandro Morari