Patents by Inventor Zigi Walter

Zigi Walter has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847497
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
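
The credit-based dispatch in the entry above can be sketched in a few lines of software. This is a minimal illustrative model, assuming hypothetical names (CreditDispatcher, credits_required) that do not come from the patent; the real mechanism is hardware inside the accelerator:

```python
# Hypothetical software model of the credit-based, out-of-order dispatch
# described in the abstract of US 11847497. All names are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WorkloadNode:
    name: str
    credits_required: int  # threshold tied to buffer availability

class CreditDispatcher:
    def __init__(self) -> None:
        self.credits = 0

    def load_credits(self, n: int) -> None:
        """Interface: load a first number of credits into memory."""
        self.credits += n

    def dispatch(self, nodes: List[WorkloadNode]) -> Optional[WorkloadNode]:
        """Comparator + dispatcher: pick any node whose threshold is met."""
        for node in nodes:
            if self.credits >= node.credits_required:  # comparator
                self.credits -= node.credits_required
                return node  # sent to a computational building block
        return None  # stall until buffer space (credits) frees up

d = CreditDispatcher()
d.load_credits(4)
pending = [WorkloadNode("conv0", 3), WorkloadNode("relu0", 1)]
print(d.dispatch(pending).name)  # conv0: ready nodes run regardless of order
```
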
  • Publication number: 20230333913
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 19, 2023
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
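
The selector/compiler interplay above lends itself to a short sketch. The shape of this mechanism is a guess, with invented names (ComputeBuildingBlock, make_selector); the abstract does not specify an implementation:

```python
# Illustrative sketch of mapping a workload node to a compute building block
# (CBB) via a generated selector that reports the CBB's I/O conditions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ComputeBuildingBlock:
    name: str
    input_condition: str    # e.g. expected tensor layout
    output_condition: str

def make_selector(cbb: ComputeBuildingBlock):
    """Selector generated per workload node: exposes the CBB's conditions."""
    def selector() -> Dict[str, str]:
        return {"in": cbb.input_condition, "out": cbb.output_condition}
    return selector

def map_node(node_io: Dict[str, str],
             cbbs: List[ComputeBuildingBlock]) -> ComputeBuildingBlock:
    """Graph compiler: map the node to a CBB whose conditions match."""
    for cbb in cbbs:
        if make_selector(cbb)() == node_io:
            return cbb
    raise ValueError("no matching compute building block")

blocks = [ComputeBuildingBlock("dsp", "nchw", "nchw"),
          ComputeBuildingBlock("nn_engine", "nhwc", "nhwc")]
print(map_node({"in": "nhwc", "out": "nhwc"}, blocks).name)  # nn_engine
```
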
  • Publication number: 20230281435
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: February 24, 2023
    Publication date: September 7, 2023
    Applicant: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren
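
The traverse-score-select loop in the abstract above is essentially search over a scheduling space. Below is a toy version with an invented cost model; the patent's actual scoring function is not given in the abstract:

```python
# Toy traversal of a scheduling solution space: enumerate orderings of three
# layers, score each, keep the best. The cost model is a made-up stand-in.
from itertools import permutations
from typing import Dict, Tuple

layer_cost: Dict[str, int] = {"conv": 5, "pool": 2, "fc": 3}

def score(schedule: Tuple[str, ...]) -> int:
    """Lower is better: run time plus a crude penalty for cost transitions."""
    run = sum(layer_cost[layer] for layer in schedule)
    switch = sum(abs(layer_cost[a] - layer_cost[b])
                 for a, b in zip(schedule, schedule[1:]))
    return run + switch

solutions = list(permutations(layer_cost))   # traverse the solution space
best = min(solutions, key=score)             # score and select
print(best, score(best))
```
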
  • Patent number: 11675630
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Patent number: 11599777
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren
  • Publication number: 20220197703
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Application
    Filed: December 23, 2021
    Publication date: June 23, 2022
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Patent number: 11347551
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to manage memory allocation. An example apparatus includes a memory detector to scan a platform for available memory devices. The example apparatus also includes a memory size checker to retrieve a virtual memory layout of the available memory devices associated with the platform and to determine whether the virtual address boundaries of respective ones of those devices generate a virtual address gap therebetween. The example apparatus also includes an address assigner to reassign the virtual addresses of at least one of the available memory devices to remove the virtual address gap.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: May 31, 2022
    Assignee: Intel Corporation
    Inventors: Zigi Walter, Anat Heilper
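
The gap-removal step above is easy to visualize with a sketch. The data structures are assumptions only; actual hardware would reprogram address-translation tables rather than edit Python objects:

```python
# Sketch of closing virtual address gaps between detected memory devices so
# they form one contiguous range. Structures and names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class MemDevice:
    name: str
    base: int   # virtual base address
    size: int

def remove_gaps(devices: List[MemDevice]) -> List[MemDevice]:
    """Address assigner: each device now starts where the previous ends."""
    devices = sorted(devices, key=lambda d: d.base)
    cursor = devices[0].base
    for dev in devices:
        dev.base = cursor      # reassign to close any gap
        cursor += dev.size     # next device boundary
    return devices

devs = [MemDevice("ddr", 0x0000, 0x4000), MemDevice("sram", 0x9000, 0x1000)]
for d in remove_gaps(devs):
    print(f"{d.name}: 0x{d.base:04x}-0x{d.base + d.size:04x}")
# ddr: 0x0000-0x4000, sram: 0x4000-0x5000 -> gap removed
```
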
  • Publication number: 20220066923
    Abstract: Systems, apparatuses and methods may provide for technology that determines runtime memory requirements of an artificial intelligence (AI) application, defines a remote address range for a plurality of memories based on the runtime memory requirements, wherein each memory in the plurality of memories corresponds to a processor in a plurality of processors, and defines a shared address range for the plurality of memories based on the runtime memory requirements, wherein the shared address range is aliased. In one example, the technology configures memory mapping hardware to access the remote address range in a linear sequence and access the shared address range in a hashed sequence.
    Type: Application
    Filed: November 10, 2021
    Publication date: March 3, 2022
    Inventors: Zigi Walter, Roni Rosner, Michael Behar
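
The two address ranges described above, linear remote accesses versus hashed shared accesses, can be modeled as a routing function. The striping scheme below is an assumption; the abstract says only that the shared range is aliased and accessed in a hashed sequence:

```python
# Hypothetical router for a flat address space over four per-processor
# memories: the remote range maps linearly, the shared range stripes
# (hashes) across all memories. Constants and the hash are assumptions.
NUM_MEMS = 4
MEM_SIZE = 0x1000
SHARED_BASE = NUM_MEMS * MEM_SIZE   # remote range occupies [0, SHARED_BASE)

def route(addr: int):
    """Return (memory index, local offset) for a flat address."""
    if addr < SHARED_BASE:                       # remote range: linear
        return addr // MEM_SIZE, addr % MEM_SIZE
    off = addr - SHARED_BASE                     # shared range: hashed
    return off % NUM_MEMS, (off // NUM_MEMS) % MEM_SIZE

print(route(0x0123))            # -> (0, 0x123): linear, memory 0
print(route(SHARED_BASE + 5))   # -> (1, 1): consecutive addresses interleave
```
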
  • Patent number: 11231963
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: January 25, 2022
    Assignee: Intel Corporation
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Patent number: 11151074
    Abstract: Methods and apparatus to implement multiple inference compute engines are disclosed herein. A disclosed example apparatus includes a first inference compute engine, a second inference compute engine, and an accelerator on coherent fabric to couple the first inference compute engine and the second inference compute engine to a converged coherency fabric of a system-on-chip, the accelerator on coherent fabric to arbitrate requests from the first inference compute engine and the second inference compute engine to utilize a single in-die interconnect port.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: October 19, 2021
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Roni Rosner, Ravi Venkatesan, Shlomi Shua, Oz Shitrit, Henrietta Bezbroz, Alexander Gendler, Ohad Falik, Zigi Walter, Michael Behar, Shlomi Alkalay
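
Arbitrating two engines onto a single in-die interconnect port, as above, is a classic arbiter problem. A round-robin policy is assumed below purely for illustration; the abstract does not name the policy:

```python
# Round-robin arbiter sketch: N inference engines share one in-die
# interconnect (IDI) port. The policy and all names are assumptions.
from collections import deque
from typing import Deque, List, Optional, Tuple

class CoherentFabricArbiter:
    def __init__(self, n_engines: int) -> None:
        self.queues: List[Deque[str]] = [deque() for _ in range(n_engines)]
        self.next = 0  # rotating priority pointer

    def request(self, engine: int, req: str) -> None:
        self.queues[engine].append(req)

    def grant(self) -> Optional[Tuple[int, str]]:
        """Issue at most one request per cycle on the single port."""
        for i in range(len(self.queues)):
            e = (self.next + i) % len(self.queues)
            if self.queues[e]:
                self.next = (e + 1) % len(self.queues)
                return e, self.queues[e].popleft()
        return None

arb = CoherentFabricArbiter(2)
arb.request(0, "read A"); arb.request(1, "read B"); arb.request(0, "read C")
print([arb.grant() for _ in range(3)])  # engines alternate on the shared port
```
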
  • Patent number: 11036277
    Abstract: Methods and apparatus to dynamically throttle compute engines are disclosed. A disclosed example apparatus includes one or more compute engines to perform calculations, where the one or more compute engines are to cause a total power request to be issued based on the calculations. The example apparatus also includes a power management unit to receive the total power request and respond to the total power request. The apparatus also includes a throttle manager to adjust a throttle speed of at least one of the one or more compute engines based on comparing a minimum of the power request and a granted power to a total used power of the one or more compute engines prior to the power management unit responding to the total power request.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: June 15, 2021
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Avital Paz, Eran Nevet, Zigi Walter
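
The throttle comparison above, a minimum of requested and granted power against used power, suggests a simple control rule. The proportional scaling and ramp-up factor below are invented for illustration; the abstract specifies only the comparison:

```python
# Behavioral sketch of the throttle decision: compare min(requested, granted)
# power to power actually used, then scale engine speed. The scaling rule and
# numbers are assumptions, not taken from the patent.
def throttle_speed(requested_w: float, granted_w: float, used_w: float,
                   current_speed: float) -> float:
    budget = min(requested_w, granted_w)         # the comparison above
    if used_w > budget:
        return current_speed * budget / used_w   # over budget: slow down
    return min(1.0, current_speed * 1.1)         # headroom: ramp back up

speed = 1.0
for used in (8.0, 12.0, 9.0):  # watts drawn by the compute engines
    speed = throttle_speed(10.0, 9.0, used, speed)
    print(f"used={used}W -> speed={speed:.2f}")
```
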
  • Publication number: 20190370074
    Abstract: An apparatus includes a communication processor to receive configuration information from a producing compute building block; a credit generator to generate a number of credits for the producing compute building block corresponding to the configuration information, the configuration information including characteristics of a buffer; a source identifier to analyze a returned credit to determine whether the returned credit originates from the producing compute building block or a consuming compute building block; and a duplicator to, when the returned credit originates from the producing compute building block, multiply the returned credit by a first factor, the first factor indicative of a number of consuming compute building blocks identified in the configuration information.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Roni Rosner, Moshe Maor, Michael Behar, Ronen Gabbai, Zigi Walter, Oren Agam
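
The producer/consumer credit plumbing above reduces to two small rules: issue credits against buffer capacity, and fan a returned producer credit out to every consumer. A sketch under assumed names:

```python
# Sketch of the credit generator, source identifier and duplicator from the
# abstract. Field and function names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Config:
    buffer_slots: int    # characteristics of the buffer
    num_consumers: int   # consuming compute building blocks

def generate_credits(cfg: Config) -> int:
    """Credit generator: one credit per buffer slot for the producer."""
    return cfg.buffer_slots

def on_credit_return(cfg: Config, source: str) -> int:
    """Source identifier + duplicator: producer returns fan out."""
    if source == "producer":
        return 1 * cfg.num_consumers  # multiply by the consumer count
    return 1                          # consumer return passes through as-is

cfg = Config(buffer_slots=4, num_consumers=3)
print(generate_credits(cfg))              # 4 credits issued to the producer
print(on_credit_return(cfg, "producer"))  # one return becomes 3 credits
```
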
  • Publication number: 20190370084
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to configure heterogenous components in an accelerator. An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, wherein the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20190370072
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to manage memory allocation. An example apparatus includes a memory detector to scan a platform for available memory devices. The example apparatus also includes a memory size checker to retrieve a virtual memory layout of the available memory devices associated with the platform and to determine whether the virtual address boundaries of respective ones of those devices generate a virtual address gap therebetween. The example apparatus also includes an address assigner to reassign the virtual addresses of at least one of the available memory devices to remove the virtual address gap.
    Type: Application
    Filed: August 13, 2019
    Publication date: December 5, 2019
    Inventors: Zigi Walter, Anat Heilper
  • Publication number: 20190370209
    Abstract: Methods and apparatus to implement multiple inference compute engines are disclosed herein. A disclosed example apparatus includes a first inference compute engine, a second inference compute engine, and an accelerator on coherent fabric to couple the first inference compute engine and the second inference compute engine to a converged coherency fabric of a system-on-chip, the accelerator on coherent fabric to arbitrate requests from the first inference compute engine and the second inference compute engine to utilize a single in-die interconnect port.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Israel Diamand, Roni Rosner, Ravi Venkatesan, Shlomi Shua, Oz Shitrit, Henrietta Bezbroz, Alexander Gendler, Ohad Falik, Zigi Walter, Michael Behar, Shlomi Alkalay
  • Publication number: 20190370076
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable dynamic processing of a predefined workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to obtain a workload node, the workload node associated with a first amount of data, the workload node to be executed at a first one of the one or more computational building blocks; an analyzer to: determine whether the workload node is a candidate for early termination; and in response to determining that the workload node is a candidate for early termination, set a flag associated with a tile of the first amount of data; and a dispatcher to, in response to the tile being transmitted from the first one of the one or more computational building blocks to a buffer, stop execution of the workload node.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Oren Agam, Ronen Gabbai, Zigi Walter, Roni Rosner, Moshe Maor
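
The early-termination flow above, flag a tile and then stop the node once that tile reaches the buffer, behaves like a sentinel. A sketch with invented names; the patent describes hardware, not Python:

```python
# Illustrative model of early termination: the analyzer flags a cut-off tile,
# and the dispatcher stops the workload node once the flagged tile is
# transmitted to the buffer. Names and structure are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tile:
    data: bytes
    last: bool = False   # flag set by the analyzer

@dataclass
class WorkloadNode:
    name: str
    early_term_candidate: bool
    tiles: List[Tile] = field(default_factory=list)

def run(node: WorkloadNode, stop_after: int) -> List[Tile]:
    """Dispatcher loop: halt once a flagged tile lands in the buffer."""
    if node.early_term_candidate:           # analyzer's decision
        node.tiles[stop_after].last = True  # flag the cut-off tile
    buffer: List[Tile] = []
    for tile in node.tiles:
        buffer.append(tile)                 # transmit tile to the buffer
        if tile.last:
            break                           # stop execution early
    return buffer

node = WorkloadNode("argmax", True, [Tile(b"t%d" % i) for i in range(8)])
print(len(run(node, stop_after=2)))  # 3 tiles reach the buffer, 5 skipped
```
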
  • Publication number: 20190369694
    Abstract: Methods and apparatus to dynamically throttle compute engines are disclosed. A disclosed example apparatus includes one or more compute engines to perform calculations, where the one or more compute engines are to cause a total power request to be issued based on the calculations. The example apparatus also includes a power management unit to receive the total power request and respond to the total power request. The apparatus also includes a throttle manager to adjust a throttle speed of at least one of the one or more compute engines based on comparing a minimum of the power request and a granted power to a total used power of the one or more compute engines prior to the power management unit responding to the total power request.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Israel Diamand, Avital Paz, Eran Nevet, Zigi Walter
  • Publication number: 20190370073
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
    Type: Application
    Filed: August 15, 2019
    Publication date: December 5, 2019
    Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
  • Publication number: 20180314934
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 1, 2018
    Applicant: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren