Patents by Inventor Or Danon

Or Danon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12248367
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 11, 2025
    Inventors: Avi Baum, Daniel Chibotero, Roi Seznayov, Or Danon, Ori Katz, Guy Kaminitz
  • Patent number: 11874900
    Abstract: A novel and useful system and methods of functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The NN processor incorporates functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well. The safety mechanisms cover data stream fault detection, software defined redundant allocation, cluster interlayer safety, cluster intralayer safety, layer control unit (LCU) instruction addressing, weights storage safety, and neural network intermediate results storage safety.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: January 16, 2024
    Inventors: Guy Kaminitz, Ori Katz, Or Danon, Daniel Chibotero, Roi Seznayov, Nir Engelberg, Avi Baum, Itai Resh
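The abstract above names weights storage safety among the covered mechanisms but publishes no implementation details. The sketch below is only a minimal illustration of the general idea of flagging weight-storage corruption promptly: a checksum is recorded when weights are loaded and re-verified on every read. The names (WeightStore, SafetyError) and the CRC32 choice are assumptions for illustration, not the patented design.

```python
# Illustrative sketch only; not the patented weights-storage safety mechanism.
import zlib
import numpy as np

class SafetyError(RuntimeError):
    """Raised when a stored-weight integrity check fails."""

class WeightStore:
    """Keeps a checksum alongside each weight tensor and re-verifies it on read."""

    def __init__(self):
        self._weights = {}    # layer name -> weight array
        self._checksums = {}  # layer name -> CRC32 of the stored bytes

    def load(self, name, weights):
        arr = np.ascontiguousarray(weights)
        self._weights[name] = arr
        self._checksums[name] = zlib.crc32(arr.tobytes())

    def read(self, name):
        arr = self._weights[name]
        if zlib.crc32(arr.tobytes()) != self._checksums[name]:
            # Detect and promptly flag the error instead of letting it go unnoticed.
            raise SafetyError(f"weight corruption detected in layer {name!r}")
        return arr

store = WeightStore()
store.load("fc1", np.random.randn(128, 64).astype(np.float32))
w = store.read("fc1")  # raises SafetyError if the stored bytes no longer match the checksum
```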
  • Patent number: 11811421
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The mechanisms address ANN system-level safety in situ, as a system-level strategy tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that detect, promptly flag, and report an error, and some mechanisms are capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 7, 2023
    Inventors: Guy Kaminitz, Roi Seznayov, Daniel Chibotero, Ori Katz, Nir Engelberg, Yuval Adelstein, Or Danon, Avi Baum
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
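As a rough illustration of what spreading a layered ANN model across multiple devices involves, the sketch below greedily partitions layers into contiguous groups of roughly equal compute. The balancing heuristic and the names (Layer, partition_layers) are assumptions for illustration; the abstract does not describe the chip-to-chip mapping at this level of detail.

```python
# Minimal sketch, not the patented compiler: it only illustrates partitioning a
# layered model across devices so each device receives roughly equal work.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    macs: int  # rough compute cost of the layer

def partition_layers(layers, num_devices):
    """Assign contiguous runs of layers to devices, balancing total compute."""
    total = sum(l.macs for l in layers)
    target = total / num_devices
    devices, current, acc = [], [], 0
    for layer in layers:
        current.append(layer)
        acc += layer.macs
        if acc >= target and len(devices) < num_devices - 1:
            devices.append(current)
            current, acc = [], 0
    devices.append(current)
    return devices

model = [Layer("conv1", 90), Layer("conv2", 160), Layer("conv3", 160),
         Layer("conv4", 80), Layer("fc", 10)]
for i, chunk in enumerate(partition_layers(model, 2)):
    print(f"device {i}: {[l.name for l in chunk]}")
```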
  • Patent number: 11615297
    Abstract: A novel and useful system and method that improves power performance and lowers memory requirements for an artificial neural network by packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 28, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero
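The three structured-sparsity entries in this listing share one abstract. The hedged sketch below shows one common way to impose structured sparsity on static weights, by keeping only the highest-norm blocks of each output row; the block shape, the pruning criterion, and the function name are illustrative assumptions rather than the patented compiler mechanism.

```python
# Hedged sketch of structured (block) weight pruning; not the patented SDK pass.
import numpy as np

def prune_structured(weights, block=4, keep_ratio=0.5):
    """Zero out whole blocks of `block` consecutive input channels per output row,
    keeping only the blocks with the largest L2 norm."""
    out_dim, in_dim = weights.shape
    assert in_dim % block == 0
    blocks = weights.reshape(out_dim, in_dim // block, block)
    norms = np.linalg.norm(blocks, axis=2)                   # (out, in/block)
    k = max(1, int(keep_ratio * norms.shape[1]))
    keep = np.argsort(norms, axis=1)[:, -k:]                 # indices of blocks to keep
    mask = np.zeros_like(norms, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    pruned = blocks * mask[:, :, None]
    return pruned.reshape(out_dim, in_dim), mask             # sparse weights + packing mask

w = np.random.randn(8, 16).astype(np.float32)
sparse_w, mask = prune_structured(w, block=4, keep_ratio=0.5)
print("kept blocks per row:", mask.sum(axis=1))              # 2 of 4 blocks survive per row
```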
  • Patent number: 11551028
    Abstract: A novel and useful system and method that improves power performance and lowers memory requirements for an artificial neural network by packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 10, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero, Gilad Nahor
  • Patent number: 11544545
    Abstract: A novel and useful system and method that improves power performance and lowers memory requirements for an artificial neural network by packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 3, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero, Gilad Nahor
  • Patent number: 11514291
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating processing circuits having compute and local memory elements. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: November 29, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11461615
    Abstract: A novel and useful system and method of accessing multi-dimensional data in memory. The invention is applicable to neural network (NN) processing engines adapted to implement artificial neural networks (ANNs). The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon
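The abstract above gives no detail on the multi-dimensional access scheme itself. As background only, the sketch below shows the basic operation any such scheme builds on: computing a flat memory address from an N-dimensional index with row-major strides. It is a generic illustration, not the patented method.

```python
# Generic illustration of multi-dimensional addressing; not the patented scheme.
def row_major_strides(shape, element_size=1):
    strides, acc = [], element_size
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return list(reversed(strides))

def flat_address(index, shape, base=0, element_size=1):
    strides = row_major_strides(shape, element_size)
    return base + sum(i * s for i, s in zip(index, strides))

# A (channels=3, height=4, width=5) tensor of 2-byte elements starting at address 0x1000:
print(hex(flat_address((1, 2, 3), (3, 4, 5), base=0x1000, element_size=2)))  # 0x1042
```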
  • Patent number: 11461614
    Abstract: A novel and useful system and method of data driven quantization optimization of weights and input data in an artificial neural network (ANN). The system reduces quantization implications (i.e., error) in a limited resource system by employing the information available in the data actually observed by the system. Data counters in the layers of the network observe the data input thereto. The distribution of the data is used to determine an optimum quantization scheme to apply to the weights, input data, or both. The mechanism is sensitive to the data observed at the input layer of the network. As a result, the network auto-tunes to optimize the instance-specific representation of the network. The network becomes customized (i.e., specialized) to the inputs it observes and better fits itself to the subset of the sample space that is applicable to its actual data flow. As a result, nominal process noise is reduced and detection accuracy improves.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu, Mark Grobman, Alex Finkelstein
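The abstract above describes a concrete flow: data counters observe the inputs to each layer, and the observed distribution selects the quantization scheme. The sketch below is a hedged interpretation of that flow for symmetric int8 quantization; the percentile-based rule and all names (ActivationObserver, int8_scale) are assumptions for illustration, not the patented method.

```python
# Hedged sketch of data-driven quantization calibration.
import numpy as np

class ActivationObserver:
    """Accumulates observed activation magnitudes for one layer."""
    def __init__(self):
        self.samples = []

    def observe(self, activations):
        self.samples.append(np.abs(np.asarray(activations)).ravel())

    def int8_scale(self, percentile=99.9):
        # Clip to a high percentile of what was actually observed so rare outliers
        # do not waste the limited 8-bit range.
        max_val = np.percentile(np.concatenate(self.samples), percentile)
        return max_val / 127.0

def quantize_int8(x, scale):
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

obs = ActivationObserver()
for _ in range(100):                         # calibration data the layer actually sees
    obs.observe(np.random.randn(64) * 3.0)
scale = obs.int8_scale()
q = quantize_int8(np.random.randn(64) * 3.0, scale)
print("scale:", scale, "quantized range:", q.min(), q.max())
```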
  • Patent number: 11354563
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating configurable and programmable sliding window based memory access. The memory mapping and allocation scheme trades off random and full access in favor of high parallelism and static mapping to a subset of the overall address space. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 7, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
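To make the sliding-window trade-off in the abstract above concrete, the sketch below models a consumer that can only address a small window which slides over a statically mapped region, rather than the full address space. The parameters and names are illustrative assumptions, not the patented memory controller.

```python
# Minimal sketch of sliding-window based memory access; illustrative only.
class SlidingWindow:
    def __init__(self, base, region_size, window_size, stride):
        self.base = base
        self.region_size = region_size
        self.window_size = window_size
        self.stride = stride
        self.offset = 0

    def addresses(self):
        """Addresses currently visible to the compute element (no full random access)."""
        return [self.base + (self.offset + i) % self.region_size
                for i in range(self.window_size)]

    def advance(self):
        self.offset = (self.offset + self.stride) % self.region_size

win = SlidingWindow(base=0x4000, region_size=256, window_size=8, stride=4)
for step in range(3):
    print(f"step {step}: {[hex(a) for a in win.addresses()]}")
    win.advance()
```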
  • Publication number: 20220103186
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Guy Kaminitz, Roi Seznayov, Daniel Chibotero, Ori Katz, Nir Engelberg, Yuval Adelstein, Or Danon, Avi Baum
  • Publication number: 20220101043
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Ori Katz, Roi Seznayov, Daniel Chibotero, Avi Baum, Guy Kaminitz, Amir Shmul, Nir Engelberg, Yuval Adelstein, Or Danon
  • Publication number: 20220100601
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Avi Baum, Daniel Chibotero, Roi Seznayov, Or Danon, Ori Katz, Guy Kaminitz
  • Publication number: 20220101042
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Guy Kaminitz, Ori Katz, Or Danon, Daniel Chibotero, Roi Seznayov, Nir Engelberg, Avi Baum, Itai Resh
  • Patent number: 11263512
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating strictly separate control and data planes. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided, which can be adjusted as required depending on resource availability and the capacity of the device.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 1, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11263077
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 1, 2022
    Inventors: Roi Seznayov, Guy Kaminitz, Daniel Chibotero, Ori Katz, Amir Shmul, Yuval Adelstein, Nir Engelberg, Or Danon, Avi Baum
  • Patent number: 11237894
    Abstract: A novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied, involving redundancy by design, redundancy through spatial mapping, and self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms detect, promptly flag, and report the occurrence of an error, and some mechanisms are capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: February 1, 2022
    Inventors: Avi Baum, Roi Seznayov, Daniel Chibotero, Ori Katz, Guy Kaminitz, Nir Engelberg, Yuval Adelstein, Itai Resh, Or Danon
  • Patent number: 11238331
    Abstract: A novel and useful augmented artificial neural network (ANN) incorporating an existing artificial neural network (ANN) coupled to a supplemental ANN and a first-in first-out (FIFO) stack for storing historical output values of the network. The augmented ANN exploits the redundant nature of information present in an input data stream. The addition of the supplemental ANN along with a FIFO enables the augmented network to look back into the past in making a decision for the current frame. It provides context-aware object presence and lowers the rate of false detections and misdetections. The output of the existing ANN is stored in a FIFO to create a lookahead system in which both past output values of the supplemental ANN and ‘future’ values of the output of the existing ANN are used in making a decision for the current frame. In addition, the mechanism does not require retraining the entire neural network, nor does it require data set labeling.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: February 1, 2022
    Inventors: Avi Baum, Or Danon
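The lookahead structure in the abstract above is concrete enough to sketch: per-frame outputs of the existing ANN enter a FIFO, and the decision for the "current" frame (the middle of the buffer) also sees newer frames already computed. In the sketch below, a simple average stands in for the supplemental ANN, which the abstract does not specify; the class scores, FIFO depth, and names are illustrative assumptions.

```python
# Hedged sketch of FIFO-based lookahead decision making; averaging is a stand-in
# for the supplemental ANN.
from collections import deque
import numpy as np

class LookaheadDecider:
    def __init__(self, past=3, future=3):
        self.depth = past + 1 + future
        self.future = future
        self.fifo = deque(maxlen=self.depth)

    def push(self, frame_scores):
        """Feed one frame of existing-ANN output; returns a decision once enough
        'future' frames are buffered, else None."""
        self.fifo.append(np.asarray(frame_scores))
        if len(self.fifo) < self.depth:
            return None
        window = np.stack(list(self.fifo))    # past + current + future outputs
        smoothed = window.mean(axis=0)        # placeholder for the supplemental ANN
        return int(np.argmax(smoothed))       # class decision for the middle frame

decider = LookaheadDecider()
for t in range(10):
    scores = np.random.rand(5)                # existing ANN output for frame t
    decision = decider.push(scores)
    if decision is not None:
        print(f"frame {t - decider.future}: class {decision}")
```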
  • Patent number: 11238334
    Abstract: A novel and useful system and method of input alignment for streamlining vector operations that reduce the required memory read bandwidth. The input aligner, as deployed in the NN processor, functions to facilitate the reuse of data read from memory and to avoid having to re-read that data in the context of neural network calculations. The input aligner functions to distribute input data (or weights) to the appropriate compute elements while consuming input data in a single cycle. Thus, the input aligner is operative to lower the required read bandwidth of layer input in an ANN. This reflects the fact that, in practice, a vector multiplication is normally performed at every time instance and that, in many native calculations that take place in an ANN, the same data point is involved in multiple calculations.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: February 1, 2022
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu
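The data-reuse argument behind the input aligner can be quantified with a small experiment: for a 1-D convolution, compare the memory reads of a naive per-window implementation against one that reads each input element once and forwards it to every window (compute element) that needs it. This is only an illustration of the reuse principle, not the hardware input aligner itself; the function names are assumptions.

```python
# Illustrative comparison of memory-read counts with and without input data reuse.
import numpy as np

def conv1d_naive(x, w):
    """Each output re-reads its whole window from 'memory'."""
    reads = 0
    out = []
    for i in range(len(x) - len(w) + 1):
        window = x[i:i + len(w)]
        reads += len(w)                       # every tap is a fresh memory read
        out.append(float(np.dot(window, w)))
    return np.array(out), reads

def conv1d_aligned(x, w):
    """Each input element is read once and forwarded to all windows that use it."""
    out = np.zeros(len(x) - len(w) + 1)
    reads = 0
    for i, value in enumerate(x):             # single pass over the input
        reads += 1
        for k in range(len(w)):
            j = i - k
            if 0 <= j < len(out):
                out[j] += value * w[k]        # same data point feeds multiple MACs
    return out, reads

x, w = np.random.randn(1024), np.random.randn(9)
(out_a, naive_reads), (out_b, aligned_reads) = conv1d_naive(x, w), conv1d_aligned(x, w)
assert np.allclose(out_a, out_b)
print(f"naive reads: {naive_reads}, aligned reads: {aligned_reads}")
```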