Patents Assigned to DeepCube LTD.
  • Publication number: 20220147828
    Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights, or of convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network in which a majority of pairs of neurons or channels are connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network in which only a minority of pairs of neurons or channels, separated by a cluster border across different clusters, are connected by inter-cluster weights or filters. Training or prediction is performed using the cluster-connected neural network. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 28, 2021
    Publication date: May 12, 2022
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN
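    The following is a minimal sketch of the cluster-connected idea, assuming a single fully-connected layer whose input and output neurons are split into equal-size clusters; it is illustrative, not the patented method. All names, sizes, and the 5% inter-cluster density are assumptions.

      # Minimal numpy sketch (illustrative): dense intra-cluster blocks,
      # sparse inter-cluster connections. Sizes and densities are assumed.
      import numpy as np

      rng = np.random.default_rng(0)

      def cluster_mask(n_in, n_out, n_clusters, inter_density=0.05):
          """Boolean mask: dense blocks on the cluster 'diagonal', sparse elsewhere."""
          in_cl = np.arange(n_in) * n_clusters // n_in        # cluster id per input neuron
          out_cl = np.arange(n_out) * n_clusters // n_out     # cluster id per output neuron
          same = in_cl[:, None] == out_cl[None, :]            # intra-cluster pairs: all kept
          sparse = rng.random((n_in, n_out)) < inter_density  # minority of inter-cluster pairs
          return np.where(same, True, sparse)

      mask = cluster_mask(128, 128, n_clusters=8)
      W = rng.standard_normal((128, 128)) * mask              # inter-cluster weights mostly zero
      y = np.maximum(rng.standard_normal(128) @ W, 0.0)       # forward pass through masked layer
      print(f"connected pairs: {mask.mean():.1%}")            # ~17% of a dense layer's weights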
  • Publication number: 20220012595
    Abstract: A device, system, and method are provided for training a new neural network to mimic a target neural network without access to the target neural network or its original training dataset. The target neural network and the new neural network may be probed with input data to generate corresponding target and new output data. Input data may be detected that generate a maximal or above-threshold difference between the corresponding target and new output data. A divergent probe training dataset may be generated comprising the input data that generate the maximal or above-threshold difference and the corresponding target output data. The new neural network may be trained using the divergent probe training dataset to generate the target output data. The new neural network may be iteratively trained using an updated divergent probe training dataset that is dynamically adjusted as the new neural network changes during training. (A toy sketch follows this entry.)
    Type: Application
    Filed: July 8, 2020
    Publication date: January 13, 2022
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
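    As a toy illustration of the divergent-probe loop, here is a numpy sketch under deliberately strong assumptions: both the target (a black box) and the new network are linear maps, and training is one least-squares gradient step per iteration. Shapes, batch sizes, and the 0.1 learning rate are invented for the example.

      # Toy numpy sketch (assumed setup): probe both models, keep the probes
      # where the new model disagrees most with the target, train on those.
      import numpy as np

      rng = np.random.default_rng(0)
      W_target = rng.standard_normal((8, 3))        # stand-in for the inaccessible target
      W_student = np.zeros((8, 3))                  # the new network being trained

      def target(x): return x @ W_target            # black box: only outputs are visible
      def student(x): return x @ W_student

      for step in range(50):
          probes = rng.standard_normal((256, 8))    # candidate probe inputs
          gap = np.linalg.norm(student(probes) - target(probes), axis=1)
          top = probes[np.argsort(gap)[-32:]]       # divergent probe training set
          labels = target(top)                      # corresponding target output data
          grad = top.T @ (student(top) - labels) / len(top)
          W_student -= 0.1 * grad                   # one SGD step on the divergent set
      print("max remaining gap:", np.abs(W_student - W_target).max())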
  • Publication number: 20210406692
    Abstract: A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a fraction of the total number of iterations proportional to the historical metric of activity independently determined for that individual or group. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network. (A hypothetical sketch follows this entry.)
    Type: Application
    Filed: June 1, 2021
    Publication date: December 30, 2021
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN
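    A hypothetical numpy rendering of the idea for one dense layer follows: every weight keeps a running activity score, and each iteration activates only a 25% subset of weights sampled in proportion to that score. The exponential-moving-average metric, the budget, and the small floor keeping idle weights eligible are assumptions, not details from the filing.

      # Hypothetical sketch: partial activation proportional to historical activity.
      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.standard_normal((64, 16))
      activity = np.ones_like(W)                    # historical metric, uniform at first

      def partial_forward(x, keep=0.25):
          global activity
          p = (activity / activity.sum()).ravel()   # selection probability per weight
          idx = rng.choice(W.size, size=int(keep * W.size), replace=False, p=p)
          mask = np.zeros(W.size, bool)
          mask[idx] = True
          mask = mask.reshape(W.shape)
          y = x @ (W * mask)                        # only the chosen synapses fire
          contrib = np.abs(x[:, None] * W) * mask   # each active weight's contribution
          activity = np.maximum(0.9 * activity + 0.1 * contrib, 1e-3)  # update metric
          return y

      for _ in range(100):                          # iterated partial activations
          y = partial_forward(rng.standard_normal(64))
      print("activity spread:", activity.min(), activity.max())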
  • Patent number: 11164084
    Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights, or of convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network in which a majority of pairs of neurons or channels are connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network in which only a minority of pairs of neurons or channels, separated by a cluster border across different clusters, are connected by inter-cluster weights or filters. Training or prediction is performed using the cluster-connected neural network.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: November 2, 2021
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
  • Patent number: 11055617
    Abstract: A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a fraction of the total number of iterations proportional to the historical metric of activity independently determined for that individual or group. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: July 6, 2021
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
  • Publication number: 20210117759
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output. (A minimal sketch follows this entry.)
    Type: Application
    Filed: December 28, 2020
    Publication date: April 22, 2021
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
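    The sketch below is a minimal numpy version assuming a two-layer perceptron: each "partial pathway" keeps a random eighth of the hidden neurons (a thin input-to-output sub-network) with inverted-dropout rescaling, and averaging M partial outputs approximates the fully activated output. The sizes, M=8, and the rescaling choice are illustrative assumptions.

      # Minimal sketch (assumed architecture): M thin pathways approximate
      # the full network; each pathway touches only a subset of synapses.
      import numpy as np

      rng = np.random.default_rng(0)
      W1, W2 = rng.standard_normal((32, 64)), rng.standard_normal((64, 10))

      def full_output(x):
          return np.maximum(x @ W1, 0) @ W2         # activates all N synapses

      def pathway_output(x, keep=0.125):
          hid = rng.choice(64, size=int(64 * keep), replace=False)
          h = np.maximum(x @ W1[:, hid], 0) / keep  # thin pathway, rescaled to stay unbiased
          return h @ W2[hid]                        # only these synapses are computed

      x = rng.standard_normal(32)
      M = 8
      approx = np.mean([pathway_output(x) for _ in range(M)], axis=0)
      rel = np.linalg.norm(approx - full_output(x)) / np.linalg.norm(full_output(x))
      print(f"relative error with M={M} pathways: {rel:.2f}")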
  • Patent number: 10878321
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: December 29, 2020
    Assignee: DEEPCUBE LTD.
    Inventors: Eli David, Eri Rubin
  • Publication number: 20200320400
    Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to that input data, thereby mimicking the pre-trained target model. (A minimal sketch follows this entry.)
    Type: Application
    Filed: June 24, 2020
    Publication date: October 8, 2020
    Applicant: DeepCube Ltd.
    Inventor: Eli DAVID
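    A minimal sketch under a deliberately simple assumption (the remote target happens to be linear): random probes are labeled with the black box's outputs, and the new model is fit to that probe dataset by least squares. The names and the least-squares step stand in for real network training.

      # Minimal sketch (assumed linear target): build a random probe training
      # dataset from black-box outputs, then fit a new model to it.
      import numpy as np

      rng = np.random.default_rng(0)
      W_remote = rng.standard_normal((16, 4))       # hidden inside the "remote device"

      def remote_predict(x):                        # the only access we have
          return x @ W_remote

      probes = rng.standard_normal((1024, 16))      # random input data
      labels = remote_predict(probes)               # corresponding output data
      W_new, *_ = np.linalg.lstsq(probes, labels, rcond=None)  # train the new model
      print("max weight gap:", np.abs(W_new - W_remote).max()) # ~0 for a linear target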
  • Publication number: 20200279167
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Application
    Filed: December 20, 2019
    Publication date: September 3, 2020
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
  • Patent number: 10699194
    Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to that input data, thereby mimicking the pre-trained target model.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: June 30, 2020
    Assignee: DeepCube Ltd.
    Inventor: Eli David
  • Patent number: 10515306
    Abstract: A device, system, and method for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: December 24, 2019
    Assignee: DeepCube Ltd.
    Inventors: Eli David, Eri Rubin
  • Publication number: 20190370665
    Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to that input data, thereby mimicking the pre-trained target model.
    Type: Application
    Filed: December 6, 2018
    Publication date: December 5, 2019
    Applicant: DeepCube Ltd.
    Inventor: Eli DAVID
  • Publication number: 20190347536
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only the non-zero weights, which represent connections between pairs of neurons, may be stored (zero weights, which represent the absence of connections, need not be stored). (A hypothetical sketch follows this entry.)
    Type: Application
    Filed: July 29, 2019
    Publication date: November 14, 2019
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri Rubin
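    The scheme lends itself to a compact index-value table; here is a hypothetical numpy version in which each non-zero weight is stored with one integer that uniquely encodes its (from-neuron, to-neuron) pair, and a forward pass runs directly off that representation. The row-major index formula and the 5% density are assumptions.

      # Hypothetical sketch: store only non-zero weights, each keyed by a
      # unique index encoding its neuron pair; zero weights are not stored.
      import numpy as np

      n_in, n_out = 1024, 512
      rng = np.random.default_rng(0)
      dense = rng.standard_normal((n_in, n_out))
      dense[rng.random(dense.shape) > 0.05] = 0.0   # minority of pairs connected

      rows, cols = np.nonzero(dense)
      index = rows * n_out + cols                   # unique index = from * n_out + to
      value = dense[rows, cols]                     # only the non-zero weights
      print(f"stored {value.size} of {dense.size} weights ({value.size / dense.size:.1%})")

      x = rng.standard_normal(n_in)                 # forward pass from (index, value) only
      y = np.zeros(n_out)
      np.add.at(y, index % n_out, value * x[index // n_out])   # y[to] += w * x[from]
      assert np.allclose(y, x @ dense)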
  • Publication number: 20190325317
    Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored representing the weights of artificial neuron connections of the respective CNNs. Pairs of the chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each chromosome of the pair (e.g., “filter-by-filter” recombination). The weights of each of the new or original chromosomes may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: July 1, 2019
    Publication date: October 24, 2019
    Applicant: DeepCube Ltd.
    Inventor: Eli DAVID
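    As a sketch of the recombination step only (the gradient-based error-correction mutation is omitted here), assume each chromosome is one convolutional layer's filter bank: a child inherits every filter intact from one parent or the other, and a small random sample of individual weights is then perturbed. Shapes, rates, and names are invented for the example.

      # Illustrative sketch: filter-by-filter recombination plus sparse
      # random mutation; each filter is inherited as an inseparable group.
      import numpy as np

      rng = np.random.default_rng(0)
      shape = (16, 3, 3, 8)                         # 16 filters of 3x3x8 weights each
      parent_a, parent_b = rng.standard_normal(shape), rng.standard_normal(shape)

      def recombine(a, b):
          pick = rng.random(len(a)) < 0.5           # one coin flip per whole filter
          return np.where(pick[:, None, None, None], a, b)

      def mutate(chrom, frac=0.01, scale=0.1):
          out = chrom.copy()
          flat = out.ravel()                        # view into the copy
          idx = rng.choice(flat.size, size=int(frac * flat.size), replace=False)
          flat[idx] += scale * rng.standard_normal(idx.size)  # small random sample mutated
          return out

      child = mutate(recombine(parent_a, parent_b))
      # every filter of the child matches one parent exactly, up to sparse mutation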
  • Patent number: 10366322
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only the non-zero weights, which represent connections between pairs of neurons, may be stored (zero weights, which represent the absence of connections, need not be stored).
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: July 30, 2019
    Assignee: DeepCube Ltd.
    Inventors: Eli David, Eri Rubin
  • Patent number: 10339450
    Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored representing the weights of artificial neuron connections of the respective CNNs. Pairs of the chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each chromosome of the pair (e.g., “filter-by-filter” recombination). The weights of each of the new or original chromosomes may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: July 2, 2019
    Assignee: DeepCube Ltd.
    Inventor: Eli David
  • Publication number: 20190108436
    Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only the non-zero weights, which represent connections between pairs of neurons, may be stored (zero weights, which represent the absence of connections, need not be stored).
    Type: Application
    Filed: July 20, 2018
    Publication date: April 11, 2019
    Applicant: DeepCube Ltd.
    Inventors: Eli DAVID, Eri RUBIN
  • Publication number: 20190080243
    Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored representing the weights of artificial neuron connections of the respective CNNs. Pairs of the chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each chromosome of the pair (e.g., “filter-by-filter” recombination). The weights of each of the new or original chromosomes may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values.
    Type: Application
    Filed: September 4, 2018
    Publication date: March 14, 2019
    Applicant: DeepCube LTD.
    Inventor: Eli DAVID