Patents by Inventor Ronen Gabbai

Ronen Gabbai has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief, informal sketches of the recurring mechanisms described in these abstracts follow the listing.

  • Patent number: 11847497
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20230333913
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 19, 2023
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Patent number: 11675630
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20220261357
    Abstract: Systems, apparatuses, and methods include technology that determines, with a neural network, that a first eviction node stored in a cache will be evicted from the cache based on a cache policy. The first eviction node is part of a plurality of nodes associated with a graph. Further, a subset of nodes of the plurality of nodes remains in the cache after the eviction of the first eviction node from the cache. The technology further tracks a number of cache hits on the cache during an aggregation operation associated with a hardware accelerator, where the aggregation operation is executed on the subset of nodes that remain in the cache after the eviction of the eviction node from the cache. The technology executes a training process on the neural network to adjust the cache policy based on the number of the cache hits.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 18, 2022
    Inventors: Ronen Gabbai, Amit Bleiweiss, Ohad Falik, Amit Gur, Almog Tzabary
  • Publication number: 20220197703
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Application
    Filed: December 23, 2021
    Publication date: June 23, 2022
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Patent number: 11231963
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: January 25, 2022
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20190370074
    Abstract: An apparatus includes a communication processor to receive configuration information from a producing compute building block; a credit generator to generate a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer; a source identifier to analyze a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block; and a duplicator to, when the returned credit originates from the producing compute building block, multiply the returned credit by a first factor, the first factor indicative of a number of consuming compute building blocks identified in the configuration information.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Roni Rosner, Moshe Maor, Michael Behar, Ronen Gabbai, Zigi Walter, Oren Agam
  • Publication number: 20190370076
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable dynamic processing of a predefined workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to obtain a workload node, the workload node associated with a first amount of data, the workload node to be executed at a first one of the one or more computational building blocks; an analyzer to: determine whether the workload node is a candidate for early termination; and in response to determining that the workload node is a candidate for early termination, set a flag associated with a tile of the first amount of data; and a dispatcher to, in response to the tile being transmitted from the first one of the one or more computational building blocks to a buffer, stop execution of the workload node.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Oren Agam, Ronen Gabbai, Zigi Walter, Roni Rosner, Moshe Maor
  • Publication number: 20190370073
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20190370084
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
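
Patent 11847497 and the related filings 11231963, 20220197703, and 20190370073 all describe the same credit-based dispatch scheme: credits are loaded into memory, compared against a threshold tied to buffer availability, and a workload node is dispatched to a compute building block only when the threshold is met. The Python sketch below is a minimal illustration of that control decision under simplified assumptions; the class and method names (CreditDispatcher, try_dispatch, credits_required) are invented for illustration and are not taken from the patents.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class WorkloadNode:
    name: str
    credits_required: int  # hypothetical: credits covering the buffer space this node consumes

class CreditDispatcher:
    """Minimal sketch: dispatch a workload node to a compute building block
    only when the available credits meet a threshold tied to buffer availability."""

    def __init__(self, threshold: int):
        self.threshold = threshold  # credits corresponding to free space in the buffer
        self.credits = 0            # credits currently loaded into "memory"
        self.pending = deque()      # workload nodes waiting to run

    def load_credits(self, n: int) -> None:
        # Stands in for the interface that loads a first number of credits into memory.
        self.credits += n

    def submit(self, node: WorkloadNode) -> None:
        self.pending.append(node)

    def try_dispatch(self):
        # Comparator: does the available credit count meet the threshold?
        if not self.pending or self.credits < self.threshold:
            return None
        node = self.pending.popleft()
        self.credits -= node.credits_required
        return node  # would be handed to the selected compute building block

if __name__ == "__main__":
    d = CreditDispatcher(threshold=4)
    d.submit(WorkloadNode("conv0", credits_required=4))
    d.load_credits(2)
    print(d.try_dispatch())   # None: 2 credits < threshold of 4
    d.load_credits(3)
    print(d.try_dispatch())   # dispatches conv0
```

In the patents this comparison gates out-of-order pipelined execution across building blocks; the single counter and queue here only show the threshold decision, not the pipelining.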
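Patent 11675630 and publications 20190370084 and 20230333913 describe a graph compiler that generates a per-node selector, obtains a compute building block's input and output conditions from that selector, and maps the workload node to the block accordingly. The sketch below shows that mapping step only; make_selector, map_workload, and the string-valued conditions are invented stand-ins for whatever condition representation the filings use.

```python
from dataclasses import dataclass

@dataclass
class ComputeBuildingBlock:
    name: str
    input_condition: str    # e.g. expected input layout or type
    output_condition: str   # e.g. produced output layout or type

@dataclass
class WorkloadNode:
    name: str
    needs_input: str
    produces_output: str

def make_selector(node: WorkloadNode):
    """Hypothetical selector generated per workload node: reports whether a
    building block's input/output conditions are compatible with the node."""
    def selector(cbb: ComputeBuildingBlock) -> bool:
        return (cbb.input_condition == node.needs_input
                and cbb.output_condition == node.produces_output)
    return selector

def map_workload(nodes, building_blocks):
    """Sketch of the compiler step: for each node, generate a selector and map
    the node to the first building block whose conditions the selector accepts."""
    mapping = {}
    for node in nodes:
        selector = make_selector(node)
        for cbb in building_blocks:
            if selector(cbb):
                mapping[node.name] = cbb.name
                break
    return mapping

if __name__ == "__main__":
    blocks = [ComputeBuildingBlock("dsp0", "tensor_nhwc", "tensor_nhwc"),
              ComputeBuildingBlock("conv_engine", "tensor_nchw", "tensor_nchw")]
    nodes = [WorkloadNode("conv1", "tensor_nchw", "tensor_nchw")]
    print(map_workload(nodes, blocks))   # {'conv1': 'conv_engine'}
```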
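Publication 20220261357 describes choosing an eviction node with a neural network, counting cache hits during an aggregation operation over the graph nodes that remain cached, and training the network from that hit count. The sketch below replaces the neural network with a one-parameter scorer purely to stay self-contained; the cache layout, the feature used, and the update rule are all assumptions, not the disclosed method.

```python
import random

class LearnedEvictionCache:
    """Minimal sketch of a cache whose eviction choice is driven by a learned
    score (a trivial one-weight scorer stands in for the neural network) and
    whose parameter is nudged from the cache-hit count observed during an
    aggregation pass over graph nodes."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}          # node_id -> access count (a stand-in feature)
        self.weight = 0.5        # "policy" parameter the training step adjusts

    def score(self, node_id: int) -> float:
        # Lower score -> better eviction candidate.
        return self.weight * self.store[node_id]

    def access(self, node_id: int) -> bool:
        hit = node_id in self.store
        if hit:
            self.store[node_id] += 1
        else:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=self.score)  # eviction node chosen by the scorer
                del self.store[victim]
            self.store[node_id] = 1
        return hit

    def aggregate(self, neighbor_ids) -> int:
        # Aggregation over a node's neighbors; count hits on nodes still cached.
        return sum(self.access(n) for n in neighbor_ids)

    def train_step(self, hits: int, accesses: int) -> None:
        # Toy stand-in for training the policy network from the hit count.
        hit_rate = hits / max(accesses, 1)
        self.weight += 0.1 * (hit_rate - 0.5)

if __name__ == "__main__":
    cache = LearnedEvictionCache(capacity=8)
    neighbors = [random.randrange(16) for _ in range(64)]
    hits = cache.aggregate(neighbors)
    cache.train_step(hits, len(neighbors))
    print(hits, round(cache.weight, 3))
```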
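Publication 20190370074 describes generating credits from a producing compute building block's buffer configuration, identifying whether a returned credit came from the producer or a consumer, and multiplying producer-origin credits by the number of consumers. A rough sketch of that bookkeeping, with BufferConfig and CreditManager as invented names:

```python
from dataclasses import dataclass

@dataclass
class BufferConfig:
    slots: int          # characteristic of the buffer: number of slots
    num_consumers: int  # consuming compute building blocks fed by the producer

class CreditManager:
    """Sketch of the credit generator / source identifier / duplicator idea:
    credits are generated from the producer's buffer configuration, and a credit
    returned by the producer is multiplied by the number of consumers."""

    def __init__(self):
        self.config = None
        self.available = 0

    def configure(self, config: BufferConfig) -> None:
        # Communication processor receives configuration from the producing block.
        self.config = config
        self.available = config.slots  # credit generator: one credit per slot (assumed)

    def credit_returned(self, source: str) -> None:
        # Source identifier: producer-origin credits are duplicated per consumer.
        if source == "producer":
            self.available += self.config.num_consumers
        else:  # a consuming compute building block returned the credit
            self.available += 1

if __name__ == "__main__":
    mgr = CreditManager()
    mgr.configure(BufferConfig(slots=4, num_consumers=3))
    mgr.credit_returned("producer")
    mgr.credit_returned("consumer")
    print(mgr.available)   # 4 + 3 + 1 = 8
```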
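Publication 20190370076 describes determining that a workload node is an early-termination candidate, setting a flag on a tile of its data, and stopping execution once the flagged tile has been transmitted to a buffer. The sketch below invents a simple "needed tiles" criterion just to show the control flow; the analysis that actually decides when a node may terminate early is not specified here.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    index: int
    last_needed: bool = False   # flag set when the node can terminate early

@dataclass
class WorkloadNode:
    name: str
    tiles: list
    early_termination_candidate: bool

def analyze(node: WorkloadNode, needed_tiles: int) -> None:
    """Analyzer sketch: if the node qualifies for early termination, flag the
    last tile that is actually needed (needed_tiles is a hypothetical input)."""
    if node.early_termination_candidate and needed_tiles < len(node.tiles):
        node.tiles[needed_tiles - 1].last_needed = True

def execute(node: WorkloadNode, buffer: list) -> int:
    """Dispatcher sketch: stop executing the node once a flagged tile has been
    transmitted from the compute building block to the buffer."""
    produced = 0
    for tile in node.tiles:
        buffer.append(tile)      # tile transmitted to the buffer
        produced += 1
        if tile.last_needed:
            break                # early termination of the workload node
    return produced

if __name__ == "__main__":
    node = WorkloadNode("argmax_top1", [Tile(i) for i in range(10)], True)
    analyze(node, needed_tiles=3)
    out = []
    print(execute(node, out))    # 3 tiles produced instead of 10
```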