Patents by Inventor Eri RUBIN

Eri RUBIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961003
    Abstract: A device, system, and method are provided for training a new neural network to mimic a target neural network without access to the target neural network or its original training dataset. The target neural network and the new neural network may be probed with input data to generate corresponding target and new output data. Input data may be detected that generate a maximum or above threshold difference between the corresponding target and new output data. A divergent probe training dataset may be generated comprising the input data that generate the maximum or above threshold difference and the corresponding target output data. The new neural network may be trained using the divergent probe training dataset to generate the target output data. The new neural network may be iteratively trained using an updated divergent probe training dataset dynamically adjusted as the new neural network changes during training.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: April 16, 2024
    Assignee: NANO DIMENSION TECHNOLOGIES, LTD.
    Inventors: Eli David, Eri Rubin
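The divergent-probe training described in the abstract above (a student network mimicking a target by repeatedly seeking out the inputs where the two disagree most) can be illustrated in a few lines. The sketch below is purely illustrative and not the patented implementation: `probe_divergent`, the toy `target`/`student` functions, and all sizes are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_divergent(target_fn, student_fn, n_probes=256, dim=8, keep=32):
    """Probe both networks with random inputs and keep the inputs whose
    outputs diverge the most, paired with the target's outputs."""
    x = rng.normal(size=(n_probes, dim))
    diff = np.abs(target_fn(x) - student_fn(x)).sum(axis=1)
    top = np.argsort(diff)[-keep:]       # indices of the most divergent probes
    return x[top], target_fn(x[top])     # divergent probe training dataset

# Toy stand-ins for the target and student networks (hypothetical).
target = lambda x: np.tanh(x @ np.ones((8, 2)))
student = lambda x: np.tanh(x @ np.full((8, 2), 0.5))

inputs, labels = probe_divergent(target, student)
```

In the full scheme, the student would be trained on `(inputs, labels)` and the probe set regenerated as the student changes, so the dataset tracks wherever the two networks still diverge.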
  • Publication number: 20230196061
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
    Type: Application
    Filed: February 13, 2023
    Publication date: June 22, 2023
    Applicant: Nano Dimension Technologies, Ltd.
    Inventors: Eli DAVID, Eri RUBIN
  • Patent number: 11580352
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: February 14, 2023
    Assignee: Nano Dimension Technologies, Ltd.
    Inventors: Eli David, Eri Rubin
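The sparse-storage abstract above boils down to keeping only non-zero weights, each keyed by a unique index identifying the connected neuron pair. As a rough illustration only (the function name and dense-matrix input are hypothetical, not the patented format), this resembles a coordinate-style sparse map:

```python
def store_sparse(weights):
    """Store only non-zero weights, each keyed by a unique (from, to)
    index identifying the pair of connected neurons; zero weights
    (absent connections) are simply not stored."""
    return {(i, j): w
            for i, row in enumerate(weights)
            for j, w in enumerate(row)
            if w != 0.0}

dense = [[0.0, 1.5, 0.0],
         [0.0, 0.0, -2.0]]
sparse = store_sparse(dense)   # {(0, 1): 1.5, (1, 2): -2.0}
```

Because a minority of neuron pairs are connected, the index-weight map is far smaller than the dense matrix it replaces.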
  • Publication number: 20220147828
    Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights or convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network of intra-cluster weights or filters with a majority of pairs of neurons or channels connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network of inter-cluster weights or filters with a minority of pairs of neurons or channels separated by a cluster border across different clusters connected by inter-cluster weights or filters. Training or predicting is performed using the cluster-connected neural network.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 12, 2022
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN
  • Publication number: 20220012595
    Abstract: A device, system, and method are provided for training a new neural network to mimic a target neural network without access to the target neural network or its original training dataset. The target neural network and the new neural network may be probed with input data to generate corresponding target and new output data. Input data may be detected that generate a maximum or above threshold difference between the corresponding target and new output data. A divergent probe training dataset may be generated comprising the input data that generate the maximum or above threshold difference and the corresponding target output data. The new neural network may be trained using the divergent probe training dataset to generate the target output data. The new neural network may be iteratively trained using an updated divergent probe training dataset dynamically adjusted as the new neural network changes during training.
    Type: Application
    Filed: July 8, 2020
    Publication date: January 13, 2022
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
  • Publication number: 20210406692
    Abstract: A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of a total number of iterations proportional to the historical metric of activity independently determined for that individual or group of synapses or filters. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network.
    Type: Application
    Filed: June 1, 2021
    Publication date: December 30, 2021
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN
  • Patent number: 11164084
    Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights or convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network of intra-cluster weights or filters with a majority of pairs of neurons or channels connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network of inter-cluster weights or filters with a minority of pairs of neurons or channels separated by a cluster border across different clusters connected by inter-cluster weights or filters. Training or predicting is performed using the cluster-connected neural network.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: November 2, 2021
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
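The cluster-connected topology described above (locally dense inside each cluster, globally sparse between clusters) is easiest to picture as a connectivity mask. The sketch below is an illustrative toy, not the patented construction; `cluster_mask` and the chosen densities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster_mask(n, cluster_size, inter_density=0.05):
    """Boolean connectivity mask: every pair inside a cluster is connected
    (dense diagonal blocks), while only a small random minority of
    cross-cluster pairs is connected (sparse off-diagonal entries)."""
    mask = rng.random((n, n)) < inter_density      # sparse inter-cluster links
    for start in range(0, n, cluster_size):
        end = start + cluster_size
        mask[start:end, start:end] = True          # dense intra-cluster block
    return mask

m = cluster_mask(16, 4)   # 4 clusters of 4 neurons each
```

Multiplying a weight matrix elementwise by such a mask during training or prediction co-activates each intra-cluster block while keeping cross-cluster traffic sparse.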
  • Patent number: 11055617
    Abstract: A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of a total number of iterations proportional to the historical metric of activity independently determined for that individual or group of synapses or filters. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: July 6, 2021
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
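The partial-activation scheme in the abstract above activates each synapse in a fraction of iterations proportional to its historical activity metric. A minimal sketch of one such selection step, with a hypothetical `pick_active_synapses` helper and made-up activity values (not the patented method):

```python
import numpy as np

rng = np.random.default_rng(2)

def pick_active_synapses(activity, fraction=0.25):
    """Choose the subset of synapses to activate this iteration, each
    synapse sampled with probability proportional to its historical
    activity metric, so busy synapses are computed more often."""
    p = activity / activity.sum()
    k = max(1, int(fraction * activity.size))
    return rng.choice(activity.size, size=k, replace=False, p=p)

# Hypothetical per-synapse historical activity metrics.
activity = np.array([8.0, 4.0, 2.0, 1.0, 1.0, 0.5, 0.25, 0.25])
active = pick_active_synapses(activity)   # indices of synapses to activate
```

Over many iterations, each synapse's share of activations converges toward its share of the total activity metric, while every pass computes only a subset of the network.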
  • Publication number: 20210117759
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially-activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating an output generated by fully-activating the neural network by executing a single instance of all N synapses or filters of the neural network. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Application
    Filed: December 28, 2020
    Publication date: April 22, 2021
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
  • Patent number: 10878321
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially-activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating an output generated by fully-activating the neural network by executing a single instance of all N synapses or filters of the neural network. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: December 29, 2020
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
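The partial-pathway approximation above executes M thin input-to-output paths through the network and averages their outputs instead of activating all N synapses at once. The following is an illustrative sketch under simplifying assumptions (random unit pruning per hidden layer, tanh activations, and the hypothetical names `partial_pathway`/`aggregate_pathways`), not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def partial_pathway(weights, x, keep=2):
    """One partial pathway: after each hidden layer, keep only `keep`
    randomly chosen units, so a thin continuous path of synapses links
    the input layer to the output layer."""
    h = x
    for w in weights[:-1]:
        h = np.tanh(h @ w)
        cols = rng.choice(h.size, size=keep, replace=False)
        mask = np.zeros(h.size)
        mask[cols] = 1.0
        h = h * mask                       # all other units are not computed on
    return np.tanh(h @ weights[-1])

def aggregate_pathways(weights, x, m=16):
    """Average M partial outputs to approximate a full activation."""
    return sum(partial_pathway(weights, x) for _ in range(m)) / m

# Toy two-layer network with random weights (hypothetical sizes).
weights = [rng.normal(size=(4, 6)), rng.normal(size=(6, 3))]
x = rng.normal(size=4)
out = aggregate_pathways(weights, x)
```

The cumulative cost depends on M and the pathway width rather than on N, which is the point of the approximation.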
  • Publication number: 20200279167
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially-activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating an output generated by fully-activating the neural network by executing a single instance of all N synapses or filters of the neural network. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Application
    Filed: December 20, 2019
    Publication date: September 3, 2020
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
  • Patent number: 10515306
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially-activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating an output generated by fully-activating the neural network by executing a single instance of all N synapses or filters of the neural network. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: December 24, 2019
    Assignee: DeepCube Ltd.
    Inventors: Eli David, Eri Rubin
  • Publication number: 20190347536
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
    Type: Application
    Filed: July 29, 2019
    Publication date: November 14, 2019
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
  • Patent number: 10366322
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: July 30, 2019
    Assignee: DeepCube Ltd.
    Inventors: Eli David, Eri Rubin
  • Publication number: 20190108436
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
    Type: Application
    Filed: July 20, 2018
    Publication date: April 11, 2019
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN