Patents by Inventor Avi Baum

Avi Baum has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11874900
    Abstract: Novel and useful system and methods of functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The NN processor incorporates functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well. The safety mechanisms cover data stream fault detection, software defined redundant allocation, cluster interlayer safety, cluster intralayer safety, layer control unit (LCU) instruction addressing, weights storage safety, and neural network intermediate results storage safety.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: January 16, 2024
    Inventors: Guy Kaminitz, Ori Katz, Or Danon, Daniel Chibotero, Roi Seznayov, Nir Engelberg, Avi Baum, Itai Resh
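
The safety mechanisms above (and in the similar filings listed below that share this abstract) are hardware-level, but two of the named ideas, weights storage safety and redundant computation, can be illustrated in a few lines. The following is a minimal sketch of the general technique, not code from the patent; the CRC choice and the duplicated matrix multiply are assumptions made for illustration.

    import zlib
    import numpy as np

    def store_weights(weights: np.ndarray):
        """Weights storage safety: keep a CRC32 alongside the packed weights."""
        blob = weights.astype(np.float32).tobytes()
        return blob, zlib.crc32(blob)

    def load_weights(blob: bytes, crc: int, shape):
        """Recompute the CRC on every read; a mismatch flags a storage fault."""
        if zlib.crc32(blob) != crc:
            raise RuntimeError("weights storage fault detected")
        return np.frombuffer(blob, dtype=np.float32).reshape(shape)

    def redundant_layer(x: np.ndarray, w: np.ndarray) -> np.ndarray:
        """Redundancy by design: evaluate the layer twice and compare the results.
        On the actual processor the second pass would run on different hardware."""
        y1 = x @ w
        y2 = x @ w
        if not np.array_equal(y1, y2):
            raise RuntimeError("activation mismatch: transient fault detected")
        return y1
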
  • Publication number: 20240015651
    Abstract: A controller is arranged to: receive a first decoded beacon frame which includes a first indication of a first data transmission; receive a second decoded beacon frame which includes a second indication of a second data transmission; compare the first and second decoded beacon frames to determine common bytes in the first and second decoded beacon frames; determine an expected time of receiving the common bytes in a third beacon frame; control a device to enter into a low power mode; and control the device to wake up from the low power mode at a time to receive and decode at least a portion of the third beacon frame, in which the time to wake up is based on the expected time to receive the common bytes instead of based on an expected time to receive a preamble at a start of the third beacon frame.
    Type: Application
    Filed: September 20, 2023
    Publication date: January 11, 2024
    Inventors: Oran Naftali, Avi Baum, Yuval Jakira, Asaf Even-Chen
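
This application, together with the granted patents 11800448 and 11445437 below, describes waking the receiver based on when a run of bytes common to previous beacons is expected, rather than at the start of the next beacon's preamble. A rough sketch of that scheduling step follows; the beacon interval, per-byte air time, traffic-indication offset and guard time are assumed values, not figures from the filing.

    BEACON_INTERVAL_US = 102_400   # assumed nominal beacon interval
    BYTE_TIME_US = 8               # assumed air time per decoded byte
    GUARD_US = 200                 # assumed early-wake margin

    def common_bytes_offset(beacon1: bytes, beacon2: bytes, tim_offset: int, span: int = 8) -> int:
        """Offset of a short run of bytes, just before the (assumed) traffic-indication
        element, that is identical in both decoded beacons. Those bytes let the
        receiver synchronize mid-frame instead of listening for the preamble."""
        start = tim_offset - span
        if beacon1[start:tim_offset] != beacon2[start:tim_offset]:
            raise ValueError("beacons differ too early; fall back to preamble wake-up")
        return start

    def next_wake_time_us(beacon1, beacon2, tim_offset, last_beacon_start_us):
        """Time to leave low-power mode: just before the common bytes are expected
        in the third beacon, rather than at that beacon's preamble."""
        start = common_bytes_offset(beacon1, beacon2, tim_offset)
        return last_beacon_start_us + BEACON_INTERVAL_US + start * BYTE_TIME_US - GUARD_US
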
  • Patent number: 11811421
    Abstract: Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The mechanisms address ANN system level safety in situ, as a system level strategy tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that function to detect and promptly flag and report an error with some mechanisms capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 7, 2023
    Inventors: Guy Kaminitz, Roi Seznayov, Daniel Chibotero, Ori Katz, Nir Engelberg, Yuval Adelstein, Or Danon, Avi Baum
  • Patent number: 11800448
    Abstract: A circuit includes a controller configured to: receive a first decoded beacon frame which includes a first indication of a first data transmission; receive a second decoded beacon frame which includes a second indication of a second data transmission; compare the first and second decoded beacon frames to determine common bytes in the first and second decoded beacon frames; determine an expected time of receiving the common bytes in a third beacon frame; control a device to enter into a low power mode; and control the device to wake up from the low power mode at a time to receive and decode at least a portion of the third beacon frame, in which the time to wake up is based on the expected time to receive the common bytes instead of based on an expected time to receive a preamble at a start of the third beacon frame.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: October 24, 2023
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Oran Naftali, Avi Baum, Yuval Jakira, Asaf Even-Chen
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, where additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
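
The chip-to-chip interface described above spreads one ANN model across several devices. As a purely illustrative sketch (the partitioning policy and the pipeline below are assumptions, not the patented interface), a model can be cut into contiguous groups of layers, each group held on one device, with activations streamed from device to device.

    import numpy as np

    class Device:
        """Stand-in for one NN processor holding a contiguous slice of the model."""
        def __init__(self, layers):
            self.layers = layers                    # weight matrices kept on-chip

        def run(self, x: np.ndarray) -> np.ndarray:
            for w in self.layers:
                x = np.maximum(x @ w, 0.0)          # linear layer followed by ReLU
            return x                                # forwarded over the chip-to-chip link

    def spread_model(weights, num_devices):
        """Partition the layer list into contiguous slices, one slice per device."""
        per_dev = -(-len(weights) // num_devices)   # ceiling division
        return [Device(weights[i:i + per_dev]) for i in range(0, len(weights), per_dev)]

    def run_pipeline(devices, x):
        for dev in devices:                         # activations hop from device to device
            x = dev.run(x)
        return x
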
  • Publication number: 20230129637
    Abstract: A Wi-Fi device includes a controller coupled to a writeable memory implementing a MAC and PHY layer and to a transceiver. Connection data stored in the writeable memory includes Wi-Fi connection parameters including ≥1 of router MAC level information or a most recently utilized (MRU) channel, and IP addresses including ≥1 of an IP address of the Wi-Fi device, an IP address of the MRU router, an IP address of an MRU target server, and an IP address of a network connected device. An accelerated Wi-Fi network reconnection algorithm is implemented by the processor: starting from a network disconnected state, it establishes current connection parameters for a current Wi-Fi network connection using the stored Wi-Fi connection parameters for at least one MAC layer parameter of the MAC layer.
    Type: Application
    Filed: December 23, 2022
    Publication date: April 27, 2023
    Inventors: Yaniv Tzoreff, Gilboa Shveki, Avi Baum, Barak Cherches
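
The accelerated reconnection in this application (and in the granted patent 11539589 below, which shares this abstract) amounts to caching the last connection's parameters, router MAC/BSSID, MRU channel and IP addresses, and trying them first instead of starting with a full scan and DHCP exchange. A hypothetical sketch; the wifi object and its methods are placeholders, not a real driver API.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConnectionCache:
        """Connection data held in writeable memory across disconnects (illustrative fields)."""
        bssid: str                 # MAC-level information of the last router
        channel: int               # most recently utilized (MRU) channel
        device_ip: str
        router_ip: str
        server_ip: Optional[str] = None

    def reconnect(wifi, cache: Optional[ConnectionCache]) -> bool:
        """Try the cached parameters first; fall back to the ordinary association path."""
        if cache is not None:
            # Skip scanning: tune directly to the MRU channel and known BSSID.
            if wifi.join(bssid=cache.bssid, channel=cache.channel):
                # Reuse the previous IP configuration instead of a fresh DHCP exchange.
                wifi.configure_ip(cache.device_ip, gateway=cache.router_ip)
                return True
        # Cache empty or stale: full scan, association and DHCP.
        return wifi.scan_and_join()
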
  • Patent number: 11615297
    Abstract: A novel and useful system and method of improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in a considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 28, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero
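
The memory-packing idea in this abstract (shared with 11551028 and 11544545 below) can be illustrated with block sparsity: whole weight tiles are driven to zero during training, and only the surviving tiles plus a bitmap are stored. A rough sketch under those assumptions; the tile size and pruning rule are illustrative, not the patented mechanisms.

    import numpy as np

    BLOCK = 4  # assumed tile size; weight dimensions assumed divisible by BLOCK

    def prune_tiles(w: np.ndarray, keep_ratio: float):
        """Zero whole BLOCK x BLOCK tiles with the smallest L1 norm (a simple
        stand-in for the sparsity-guided training described in the abstract)."""
        h, wid = w.shape
        tiles = w.reshape(h // BLOCK, BLOCK, wid // BLOCK, BLOCK)
        norms = np.abs(tiles).sum(axis=(1, 3))                 # one norm per tile
        mask = norms >= np.quantile(norms, 1.0 - keep_ratio)   # tiles that survive
        pruned = (tiles * mask[:, None, :, None]).reshape(h, wid)
        return pruned, mask

    def pack_tiles(pruned: np.ndarray, mask: np.ndarray):
        """Store only the surviving tiles plus the bitmap, shrinking weight memory."""
        h, wid = pruned.shape
        tiles = pruned.reshape(h // BLOCK, BLOCK, wid // BLOCK, BLOCK)
        rows, cols = np.nonzero(mask)
        packed = tiles[rows, :, cols, :]                       # shape (kept, BLOCK, BLOCK)
        return packed, mask
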
  • Patent number: 11551028
    Abstract: A novel and useful system and method of improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in a considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 10, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero, Gilad Nahor
  • Publication number: 20230007586
    Abstract: A circuit includes a controller configured to: receive a first decoded beacon frame which includes a first indication of a first data transmission; receive a second decoded beacon frame which includes a second indication of a second data transmission; compare the first and second decoded beacon frames to determine common bytes in the first and second decoded beacon frames; determine an expected time of receiving the common bytes in a third beacon frame; control a device to enter into a low power mode; and control the device to wake up from the low power mode at a time to receive and decode at least a portion of the third beacon frame, in which the time to wake up is based on the expected time to receive the common bytes instead of based on an expected time to receive a preamble at a start of the third beacon frame.
    Type: Application
    Filed: September 12, 2022
    Publication date: January 5, 2023
    Inventors: Oran Naftali, Avi Baum, Yuval Jakira, Asaf Even-Chen
  • Patent number: 11544545
    Abstract: A novel and useful system and method of improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in a considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK) manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 3, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero, Gilad Nahor
  • Patent number: 11539589
    Abstract: A Wi-Fi device includes a controller coupled to a writeable memory implementing a MAC and PHY layer and to a transceiver. Connection data stored in the writeable memory includes Wi-Fi connection parameters including ≥1 of router MAC level information or a most recently utilized (MRU) channel, and IP addresses including ≥1 of an IP address of the Wi-Fi device, an IP address of the MRU router, an IP address of an MRU target server, and an IP address of a network connected device. An accelerated Wi-Fi network reconnection algorithm is implemented by the processor: starting from a network disconnected state, it establishes current connection parameters for a current Wi-Fi network connection using the stored Wi-Fi connection parameters for at least one MAC layer parameter of the MAC layer.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: December 27, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Yaniv Tzoreff, Gilboa Shveki, Avi Baum, Barak Cherches
  • Patent number: 11514291
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating processing circuits having compute and local memory elements. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, where additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: November 29, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11461615
    Abstract: A novel and useful system and method of accessing multi-dimensional data in memory. The invention is applicable to neural network (NN) processing engines adapted to implement artificial neural networks (ANNs). The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, where additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon
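
Patent 11461615 deals with accessing multi-dimensional data in memory. Independent of the hardware details, the bookkeeping at the heart of any such scheme is translating an N-dimensional tensor index into a flat address through strides; a small, generic sketch (not the patented address generator) follows.

    def row_major_strides(shape):
        """Strides, in elements, of a row-major layout for a tensor of this shape."""
        strides = [1] * len(shape)
        for i in range(len(shape) - 2, -1, -1):
            strides[i] = strides[i + 1] * shape[i + 1]
        return strides

    def flat_address(base, index, strides, element_size=1):
        """Translate an N-dimensional index into a flat memory address."""
        offset = sum(i * s for i, s in zip(index, strides))
        return base + offset * element_size

    # Example: element [2, 1, 3] of a 4 x 8 x 16 tensor of 1-byte elements at base 0x1000
    strides = row_major_strides((4, 8, 16))          # -> [128, 16, 1]
    addr = flat_address(0x1000, (2, 1, 3), strides)  # -> 0x1000 + 2*128 + 16 + 3 = 0x1113
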
  • Patent number: 11461614
    Abstract: A novel and useful system and method of data driven quantization optimization of weights and input data in an artificial neural network (ANN). The system reduces quantization implications (i.e. error) in a limited resource system by employing the information available in the data actually observed by the system. Data counters in the layers of the network observe the data input thereto. The distribution of the data is used to determine an optimum quantization scheme to apply to the weights, input data, or both. The mechanism is sensitive to the data observed at the input layer of the network. As a result, the network auto-tunes to optimize the instance specific representation of the network. The network becomes customized (i.e. specialized) to the inputs it observes and better fits itself to the subset of the sample space that is applicable to its actual data flow. As a result, nominal process noise is reduced and detection accuracy improves.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu, Mark Grobman, Alex Finkelstein
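
The data-driven quantization in patent 11461614 uses counters that observe the data actually flowing through each layer and derives the quantization scheme from that observed distribution. A minimal, generic sketch of the idea (track the observed range, then pick a symmetric fixed-point scale); it does not reproduce the specific scheme claimed in the patent.

    import numpy as np

    class LayerObserver:
        """Accumulates statistics of the values actually seen at a layer."""
        def __init__(self):
            self.max_abs = 0.0

        def observe(self, x: np.ndarray):
            self.max_abs = max(self.max_abs, float(np.abs(x).max()))

        def scale(self, bits: int = 8) -> float:
            """Quantization step derived from observed data, not a fixed assumption."""
            qmax = 2 ** (bits - 1) - 1
            return self.max_abs / qmax if self.max_abs else 1.0

    def quantize(x: np.ndarray, scale: float, bits: int = 8) -> np.ndarray:
        """Map real values to signed fixed-point using the data-derived scale."""
        qmax = 2 ** (bits - 1) - 1
        return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
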
  • Patent number: 11445437
    Abstract: In a described example, an integrated circuit includes an input coupled to receive a plurality of beacon frames, the beacon frames include an indication of data transmissions available for a device that includes the integrated circuit. The integrated circuit also includes a controller configured to compare the plurality of beacon frames to determine a plurality of bytes prior to the indication of data transmission available that is present in each of the plurality of beacon frames and is configured to provide a signal indicating a low power mode in which the device does not receive transmissions and to provide a signal indicating a wake mode at a selected time before transmission of the plurality of bytes in a subsequent beacon transmission.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: September 13, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Oran Naftali, Avi Baum, Yuval Jakira, Asaf Even-Chen
  • Patent number: 11354563
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating configurable and programmable sliding window based memory access. The memory mapping and allocation scheme trades off random and full access in favor of high parallelism and static mapping to a subset of the overall address space. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, where additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 7, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
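
Patent 11354563 adds configurable sliding-window memory access, trading full random access for a static mapping onto a small local window. A toy model of that trade-off, with the window implemented as a circular mapping of global addresses onto a fixed local memory (sizes and policy are assumptions, not the patented design):

    class SlidingWindowMemory:
        """Local memory exposing only a sliding window of a larger address space.
        Toy model: accesses outside the current window are rejected, and the
        mapping into local storage is static and circular."""
        def __init__(self, window_size: int = 256):
            self.window_size = window_size
            self.local = [0] * window_size
            self.window_base = 0               # global address where the window starts

        def slide(self, new_base: int):
            """Advance the window; in hardware this is a pointer update, not a copy."""
            self.window_base = new_base

        def _slot(self, addr: int) -> int:
            if not self.window_base <= addr < self.window_base + self.window_size:
                raise IndexError("address outside the mapped window")
            return addr % self.window_size     # static, circular mapping to a local slot

        def read(self, addr: int) -> int:
            return self.local[self._slot(addr)]

        def write(self, addr: int, value: int):
            self.local[self._slot(addr)] = value
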
  • Publication number: 20220100601
    Abstract: Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Avi Baum, Daniel Chibotero, Roi Seznayov, Or Danon, Ori Katz, Guy Kaminitz
  • Publication number: 20220101043
    Abstract: Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Ori Katz, Roi Seznayov, Daniel Chibotero, Avi Baum, Guy Kaminitz, Amir Shmul, Nir Engelberg, Yuval Adelstein, Or Danon
  • Publication number: 20220101042
    Abstract: Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Guy Kaminitz, Ori Katz, Or Danon, Daniel Chibotero, Roi Seznayov, Nir Engelberg, Avi Baum, Itai Resh
  • Publication number: 20220103186
    Abstract: Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Guy Kaminitz, Roi Seznayov, Daniel Chibotero, Ori Katz, Nir Engelberg, Yuval Adelstein, Or Danon, Avi Baum