Patents Assigned to DeepCube Ltd.
-
Publication number: 20220147828
Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights, or of convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network of intra-cluster weights or filters, in which a majority of pairs of neurons or channels are connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network of inter-cluster weights or filters, in which only a minority of pairs of neurons or channels separated by a cluster border are connected by inter-cluster weights or filters. Training or prediction is performed using the cluster-connected neural network.
Type: Application
Filed: October 28, 2021
Publication date: May 12, 2022
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
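The abstract describes a concrete connectivity structure, so a short sketch may help. The following is a minimal NumPy sketch of that pattern, not the patented implementation; the function name `cluster_connected_mask`, the parameter values, and the random cluster assignment are all illustrative assumptions.

```python
import numpy as np

def cluster_connected_mask(n_in, n_out, n_clusters, inter_density=0.01, seed=0):
    """Binary connectivity mask between two layers of a cluster-connected
    network: locally dense inside each cluster, globally sparse across
    cluster borders."""
    rng = np.random.default_rng(seed)
    in_labels = rng.integers(0, n_clusters, size=n_in)    # cluster of each input neuron
    out_labels = rng.integers(0, n_clusters, size=n_out)  # cluster of each output neuron
    same_cluster = in_labels[:, None] == out_labels[None, :]
    # Locally dense: every intra-cluster pair is connected.
    mask = same_cluster.astype(np.float32)
    # Globally sparse: only a small minority of inter-cluster pairs connect.
    inter = (~same_cluster) & (rng.random((n_in, n_out)) < inter_density)
    mask[inter] = 1.0
    return mask

# Masking a dense weight matrix keeps only intra-cluster weights plus the
# few inter-cluster links; masked-out weights need never be stored or computed.
mask = cluster_connected_mask(n_in=512, n_out=512, n_clusters=16)
weights = np.random.randn(512, 512).astype(np.float32) * mask
```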
-
Publication number: 20220012595
Abstract: A device, system, and method are provided for training a new neural network to mimic a target neural network without access to the target neural network or its original training dataset. The target neural network and the new neural network may be probed with input data to generate corresponding target and new output data. Input data may be detected that generate a maximum or above-threshold difference between the corresponding target and new output data. A divergent probe training dataset may be generated comprising the input data that generate the maximum or above-threshold difference and the corresponding target output data. The new neural network may be trained using the divergent probe training dataset to generate the target output data. The new neural network may be iteratively trained using an updated divergent probe training dataset that is dynamically adjusted as the new neural network changes during training.
Type: Application
Filed: July 8, 2020
Publication date: January 13, 2022
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
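A minimal PyTorch sketch of one divergent-probe iteration, under the assumption that the target network is available only as a callable returning outputs. The top-k selection stands in for the abstract's "maximum or above-threshold difference" criterion; the name `divergent_probe_step` and the hyperparameters are hypothetical.

```python
import torch

def divergent_probe_step(target, student, optimizer,
                         n_candidates=1024, k=128, in_dim=784):
    """One iteration: probe both networks, keep the most divergent inputs,
    and train the student to reproduce the target's outputs on them."""
    x = torch.randn(n_candidates, in_dim)          # candidate probe inputs
    with torch.no_grad():
        y_target = target(x)                       # corresponding target outputs
        divergence = (y_target - student(x)).abs().sum(dim=1)
    idx = divergence.topk(k).indices               # inputs with the largest difference
    x_hard, y_hard = x[idx], y_target[idx]         # divergent probe training dataset
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(student(x_hard), y_hard)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling `divergent_probe_step` in a loop regenerates the probe set each round, which is how the sketch reflects the dynamically adjusted dataset: as the student improves, the inputs on which it still diverges from the target change.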
-
Publication number: 20210406692
Abstract: A device, system, and method are provided for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of the total number of iterations proportional to the historical metric of activity independently determined for that individual or group. Training or prediction of the neural network may be performed based on the plurality of partial activations.
Type: Application
Filed: June 1, 2021
Publication date: December 30, 2021
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
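A minimal NumPy sketch of activity-proportional partial activation, assuming the historical metric is an exponential moving average of per-synapse contribution magnitudes; the class name `PartialActivator` and the `decay`/`budget` parameters are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

class PartialActivator:
    """Tracks a historical activity metric per synapse and, on each
    iteration, activates synapses with probability proportional to it."""
    def __init__(self, shape, decay=0.9, budget=0.1):
        self.metric = np.full(shape, 1e-3)  # historical metric of activity
        self.decay = decay
        self.budget = budget                # expected fraction active per iteration

    def sample_mask(self):
        # Activation probability proportional to the metric, scaled so the
        # expected fraction of active synapses equals the budget.
        p = self.metric / self.metric.sum() * self.metric.size * self.budget
        return rng.random(self.metric.shape) < np.clip(p, 0.0, 1.0)

    def update(self, activity):
        # Fold the latest observed activity into the moving average.
        self.metric = self.decay * self.metric + (1 - self.decay) * np.abs(activity)

pa = PartialActivator(shape=(512, 256))
W, x = np.random.randn(512, 256), np.random.randn(512)
m = pa.sample_mask()
y = x @ (W * m)                        # only the sampled synapses are computed
pa.update(np.abs(x[:, None] * W) * m)  # observed per-synapse contribution
```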
-
Patent number: 11164084
Abstract: A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights, or of convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network of intra-cluster weights or filters, in which a majority of pairs of neurons or channels are connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network of inter-cluster weights or filters, in which only a minority of pairs of neurons or channels separated by a cluster border are connected by inter-cluster weights or filters. Training or prediction is performed using the cluster-connected neural network.
Type: Grant
Filed: November 11, 2020
Date of Patent: November 2, 2021
Assignee: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Patent number: 11055617
Abstract: A device, system, and method are provided for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of the total number of iterations proportional to the historical metric of activity independently determined for that individual or group. Training or prediction of the neural network may be performed based on the plurality of partial activations.
Type: Grant
Filed: June 30, 2020
Date of Patent: July 6, 2021
Assignee: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Publication number: 20210117759
Abstract: A device, system, and method are provided for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
Type: Application
Filed: December 28, 2020
Publication date: April 22, 2021
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
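A minimal NumPy sketch of aggregating partial-pathway outputs for a plain MLP. Here a "pathway" is realized as a thin, randomly sampled chain of `width` neurons per hidden layer, and aggregation is a simple average with no rescaling correction; this interpretation, and the names `partial_pathway_output` and `approximate_forward`, are assumptions, not the patented method.

```python
import numpy as np

def partial_pathway_output(weights, x, width, rng):
    """Forward pass along one partial pathway: a continuous chain of
    `width` sampled neurons per hidden layer linking input to output."""
    h, prev = x, np.arange(weights[0].shape[0])         # start from all inputs
    for W in weights[:-1]:
        keep = rng.choice(W.shape[1], size=width, replace=False)
        h = np.maximum(h @ W[np.ix_(prev, keep)], 0.0)  # ReLU on sampled neurons
        prev = keep                                     # pathway continues from them
    return h @ weights[-1][prev, :]                     # project to the output layer

def approximate_forward(weights, x, m_pathways=8, width=32, seed=0):
    """Aggregate M partial outputs; only the sampled synapses are executed,
    approximating full activation of all N synapses."""
    rng = np.random.default_rng(seed)
    outs = [partial_pathway_output(weights, x, width, rng)
            for _ in range(m_pathways)]
    return np.mean(outs, axis=0)

rng = np.random.default_rng(0)
weights = [rng.standard_normal(s) / np.sqrt(s[0])
           for s in [(784, 256), (256, 256), (256, 10)]]
y_approx = approximate_forward(weights, rng.standard_normal(784))
```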
-
Patent number: 10878321
Abstract: A device, system, and method are provided for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
Type: Grant
Filed: December 20, 2019
Date of Patent: December 29, 2020
Assignee: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Publication number: 20200320400
Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to said input data, thereby mimicking the pre-trained target model.
Type: Application
Filed: June 24, 2020
Publication date: October 8, 2020
Applicant: DeepCube Ltd.
Inventor: Eli David
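A minimal PyTorch sketch of random-probe mimicry, with a local module standing in for the remote pre-trained target; `build_random_probe_dataset` and `train_mimic` are hypothetical names, and in practice the probe call would be a network request to the remote device rather than a local forward pass.

```python
import torch

def build_random_probe_dataset(query_target, n_probes=4096, in_dim=784):
    """Randomly probe the (black-box) target and record its outputs."""
    x = torch.randn(n_probes, in_dim)     # random probe inputs
    with torch.no_grad():
        y = query_target(x)               # corresponding target outputs
    return torch.utils.data.TensorDataset(x, y)

def train_mimic(new_model, dataset, epochs=5, lr=1e-3):
    """Train the new model to reproduce the target's outputs on the probes."""
    opt = torch.optim.Adam(new_model.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(new_model(x), y)
            loss.backward()
            opt.step()
    return new_model

target = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                             torch.nn.Linear(256, 10))  # stands in for the remote model
student = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 10))
train_mimic(student, build_random_probe_dataset(target))
```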
-
Publication number: 20200279167
Abstract: A device, system, and method are provided for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
Type: Application
Filed: December 20, 2019
Publication date: September 3, 2020
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Patent number: 10699194
Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to said input data, thereby mimicking the pre-trained target model.
Type: Grant
Filed: December 6, 2018
Date of Patent: June 30, 2020
Assignee: DeepCube Ltd.
Inventor: Eli David
-
Patent number: 10515306
Abstract: A device, system, and method are provided for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output that would be generated by fully activating the neural network, executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
Type: Grant
Filed: February 28, 2019
Date of Patent: December 24, 2019
Assignee: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Publication number: 20190370665
Abstract: A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to said input data, thereby mimicking the pre-trained target model.
Type: Application
Filed: December 6, 2018
Publication date: December 5, 2019
Applicant: DeepCube Ltd.
Inventor: Eli David
-
Publication number: 20190347536
Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. Only a minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only non-zero weights, which represent actual connections between pairs of neurons, may be stored; zero weights, which represent the absence of a connection, need not be stored.
Type: Application
Filed: July 29, 2019
Publication date: November 14, 2019
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
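A minimal NumPy sketch of the index-keyed sparse storage described above; encoding the (from, to) neuron pair as `row * n_out + col` is one obvious choice of unique index, assumed here for illustration.

```python
import numpy as np

def store_sparse(W):
    """Store only the non-zero weights, each keyed by a unique index
    that identifies the (from, to) neuron pair it connects."""
    rows, cols = np.nonzero(W)
    indices = rows * W.shape[1] + cols   # unique index per neuron pair
    return indices.astype(np.int64), W[rows, cols], W.shape

def load_sparse(indices, values, shape):
    """Rebuild the dense matrix; absent indices mean no connection (zero)."""
    W = np.zeros(shape, dtype=values.dtype)
    W[indices // shape[1], indices % shape[1]] = values
    return W

W = np.random.randn(64, 32)
W[np.random.rand(64, 32) < 0.9] = 0.0    # sparsify: minority of pairs connected
idx, vals, shape = store_sparse(W)       # stores only ~10% of the entries
assert np.array_equal(load_sparse(idx, vals, shape), W)
```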
-
Publication number: 20190325317
Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored, each representing the weights of the artificial neuron connections of one of the respective CNNs. Pairs of chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each of the pair of chromosomes (e.g., “filter-by-filter” recombination). A plurality of weights of each new or original chromosome may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values.
Type: Application
Filed: July 1, 2019
Publication date: October 24, 2019
Applicant: DeepCube Ltd.
Inventor: Eli David
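A minimal NumPy sketch of the filter-by-filter recombination and the optional random mutation. Note that the abstract's primary mutation operator is gradient-based (recursive error corrections propagated through the CNN), which is omitted here; the shapes, probabilities, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def recombine_filter_by_filter(parent_a, parent_b):
    """Cross over two chromosomes of conv-layer weights, taking entire
    filters as inseparable groups from one parent or the other.
    Each parent has shape (n_filters, channels, kh, kw)."""
    from_a = rng.random(parent_a.shape[0]) < 0.5
    return np.where(from_a[:, None, None, None], parent_a, parent_b)

def mutate(chromosome, p_zero=0.001, p_noise=0.001, scale=0.01):
    """Optionally mutate a small random sampling of weights: some are
    zeroed, others get a random value added to their current value."""
    c = chromosome.copy()
    flat = c.reshape(-1)
    flat[rng.random(flat.size) < p_zero] = 0.0
    noisy = rng.random(flat.size) < p_noise
    flat[noisy] += rng.normal(0.0, scale, size=noisy.sum())
    return c

parent_a = rng.normal(size=(64, 3, 3, 3))  # one conv layer's 64 filters
parent_b = rng.normal(size=(64, 3, 3, 3))
child = mutate(recombine_filter_by_filter(parent_a, parent_b))
```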
-
Patent number: 10366322
Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. Only a minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only non-zero weights, which represent actual connections between pairs of neurons, may be stored; zero weights, which represent the absence of a connection, need not be stored.
Type: Grant
Filed: July 20, 2018
Date of Patent: July 30, 2019
Assignee: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Patent number: 10339450
Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored, each representing the weights of the artificial neuron connections of one of the respective CNNs. Pairs of chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each of the pair of chromosomes (e.g., “filter-by-filter” recombination). A plurality of weights of each new or original chromosome may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values.
Type: Grant
Filed: September 4, 2018
Date of Patent: July 2, 2019
Assignee: DeepCube Ltd.
Inventor: Eli David
-
Publication number: 20190108436
Abstract: A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. Only a minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only non-zero weights, which represent actual connections between pairs of neurons, may be stored; zero weights, which represent the absence of a connection, need not be stored.
Type: Application
Filed: July 20, 2018
Publication date: April 11, 2019
Applicant: DeepCube Ltd.
Inventors: Eli David, Eri Rubin
-
Publication number: 20190080243
Abstract: An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored, each representing the weights of the artificial neuron connections of one of the respective CNNs. Pairs of chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters as inseparable groups of weights from each of the pair of chromosomes (e.g., “filter-by-filter” recombination). A plurality of weights of each new or original chromosome may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to the sum of the current and random values.
Type: Application
Filed: September 4, 2018
Publication date: March 14, 2019
Applicant: DeepCube Ltd.
Inventor: Eli David