Patents by Inventor Subutai Ahmad
Subutai Ahmad has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11966831
Abstract: Embodiments relate to a first processing node that processes input data having a temporal sequence of spatial patterns by retaining a higher-level context of the temporal sequence. The first processing node performs temporal processing based at least on feedback inputs received from a second processing node. The first processing node determines whether learned temporal sequences are included in the input data based on sequence inputs transmitted within the same level of a hierarchy of processing nodes and the feedback inputs received from an upper level of the hierarchy.
Type: Grant
Filed: November 5, 2021
Date of Patent: April 23, 2024
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad
-
Publication number: 20230274150
Abstract: An inference system trains and performs inference using a sparse neural network. The sparse neural network may include one or more layers, and each layer may be associated with a set of sparse weights that represent sparse connections between nodes of a layer and nodes of a previous layer. A layer output may be generated by applying the set of sparse weights associated with the layer to the layer output of a previous layer. Moreover, the one or more layers of the sparse neural network may generate sparse layer outputs. By using sparse representations of weights and layer outputs, robustness and stability of the neural network can be significantly improved, while maintaining competitive accuracy.
Type: Application
Filed: May 4, 2023
Publication date: August 31, 2023
Inventors: Subutai Ahmad, Luiz Scheinkman
-
Patent number: 11681922
Abstract: An inference system trains and performs inference using a sparse neural network. The sparse neural network may include one or more layers, and each layer may be associated with a set of sparse weights that represent sparse connections between nodes of a layer and nodes of a previous layer. A layer output may be generated by applying the set of sparse weights associated with the layer to the layer output of a previous layer. Moreover, the one or more layers of the sparse neural network may generate sparse layer outputs. By using sparse representations of weights and layer outputs, robustness and stability of the neural network can be significantly improved, while maintaining competitive accuracy.
Type: Grant
Filed: November 26, 2019
Date of Patent: June 20, 2023
Assignee: Numenta, Inc.
Inventors: Subutai Ahmad, Luiz Scheinkman
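The sparse-weight idea in the abstract above can be sketched in a few lines of numpy. This is an illustrative sketch only, not code from the patent; the layer sizes and the 25% connection density are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense weights for a layer with 8 inputs and 4 output nodes.
dense_w = rng.normal(size=(4, 8))

# Sparse connections: keep a random ~25% of the weights, zero the rest.
mask = rng.random(dense_w.shape) < 0.25
sparse_w = np.where(mask, dense_w, 0.0)

# A layer output is the sparse weights applied to the previous
# layer's output.
prev_output = rng.random(8)
layer_output = sparse_w @ prev_output
```

In a trained network the mask would be learned or imposed as a structural constraint rather than drawn at random; the matrix multiply itself is unchanged, which is what lets sparse layers stay drop-in compatible with dense ones.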
-
Patent number: 11651277
Abstract: A processing node in a temporal memory system includes a spatial pooler and a sequence processor. The spatial pooler generates a spatial pooler signal representing similarity between received spatial patterns in an input signal and stored co-occurrence patterns. The spatial pooler signal is represented by a combination of elements that are active or inactive. Each co-occurrence pattern is mapped to different subsets of elements of an input signal. The spatial pooler signal is fed to a sequence processor, where it is processed to learn, recognize, and predict temporal sequences in the input signal. The sequence processor includes one or more columns, each column including one or more cells. A subset of columns may be selected by the spatial pooler signal, causing one or more cells in these columns to activate.
Type: Grant
Filed: November 26, 2019
Date of Patent: May 16, 2023
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Ronald Marianetti, II, Anosh Raj, Subutai Ahmad
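The spatial pooler's overlap-and-select step described above can be illustrated with a toy numpy example. The sizes, the pattern density, and the fixed number of winning columns are assumptions for the sketch, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_columns = 64, 16

# Binary input signal: a spatial pattern over the input elements.
input_signal = (rng.random(n_inputs) < 0.3).astype(int)

# Each column stores a co-occurrence pattern: a subset of input elements.
patterns = (rng.random((n_columns, n_inputs)) < 0.2).astype(int)

# Overlap = similarity between the input and each stored pattern.
overlap = patterns @ input_signal

# Spatial pooler signal: activate the columns with the highest overlap.
k = 4
active_columns = np.argsort(overlap)[-k:]
signal = np.zeros(n_columns, dtype=int)
signal[active_columns] = 1
```

The resulting `signal` is the sparse active/inactive combination of elements the abstract describes; in the sequence processor it would then drive cell activation within the selected columns.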
-
Publication number: 20230111841
Abstract: Embodiments relate to a processing node in a temporal memory system that performs temporal pooling or processing by activating cells, where a cell's activation is maintained longer if the activation was previously predicted, or if the activation of more than a certain portion of associated cells in a lower node was correctly predicted. An active cell that was correctly predicted to activate, or that has connections to lower-node active cells that were correctly predicted to become active, contributes to accurate prediction and hence is maintained active longer than cells that were activated but not previously predicted to become active. Embodiments also relate to a temporal memory system for detecting, learning, and predicting spatial patterns and temporal sequences in input data by using action information.
Type: Application
Filed: November 18, 2022
Publication date: April 13, 2023
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Chetan Surpur
-
Publication number: 20230004788
Abstract: A hardware accelerator that is efficient at performing computations related to tensors. The hardware accelerator may store a complementary dense process tensor that is combined from a plurality of sparse process tensors. The plurality of sparse process tensors have non-overlapping locations of active values. The hardware accelerator may perform elementwise operations between the complementary dense process tensor and an activation tensor to generate a product tensor. The hardware accelerator may re-arrange the product tensor based on a permutation logic to separate the products into groups. Each group corresponds to one of the sparse process tensors. Each group may be accumulated separately to generate a plurality of output values. The output values may be selected in an activation selection. The activation selection may be a dense activation or a sparse activation, such as a k-winner activation that sets non-winners to zero.
Type: Application
Filed: July 1, 2022
Publication date: January 5, 2023
Inventors: Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
-
Publication number: 20230004352
Abstract: A hardware accelerator that is efficient at performing computations related to tensors. The hardware accelerator may store a complementary dense process tensor that is combined from a plurality of sparse process tensors. The plurality of sparse process tensors have non-overlapping locations of active values. The hardware accelerator may perform elementwise operations between the complementary dense process tensor and an activation tensor to generate a product tensor. The hardware accelerator may re-arrange the product tensor based on a permutation logic to separate the products into groups. Each group corresponds to one of the sparse process tensors. Each group may be accumulated separately to generate a plurality of output values. The output values may be selected in an activation selection. The activation selection may be a dense activation or a sparse activation, such as a k-winner activation that sets non-winners to zero.
Type: Application
Filed: July 1, 2022
Publication date: January 5, 2023
Inventors: Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
-
Publication number: 20230004800
Abstract: A hardware accelerator that is efficient at performing computations related to tensors. The hardware accelerator may store a complementary dense process tensor that is combined from a plurality of sparse process tensors. The plurality of sparse process tensors have non-overlapping locations of active values. The hardware accelerator may perform elementwise operations between the complementary dense process tensor and an activation tensor to generate a product tensor. The hardware accelerator may re-arrange the product tensor based on a permutation logic to separate the products into groups. Each group corresponds to one of the sparse process tensors. Each group may be accumulated separately to generate a plurality of output values. The output values may be selected in an activation selection. The activation selection may be a dense activation or a sparse activation, such as a k-winner activation that sets non-winners to zero.
Type: Application
Filed: July 1, 2022
Publication date: January 5, 2023
Inventors: Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
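The complementary-tensor scheme that these accelerator filings describe can be illustrated with a toy numpy example: two sparse tensors with non-overlapping active locations are packed into one dense tensor, multiplied elementwise against an activation tensor, and the products are then separated by a permutation and accumulated per group. The vector sizes and the even/odd permutation are assumptions chosen for the sketch, not details from the filings.

```python
import numpy as np

# Two sparse process tensors with non-overlapping active locations.
a = np.array([1.0, 0.0, 2.0, 0.0, 3.0, 0.0])
b = np.array([0.0, 4.0, 0.0, 5.0, 0.0, 6.0])
assert not np.any((a != 0) & (b != 0))  # active locations do not overlap

# Complementary dense process tensor: the sparse tensors combined.
dense = a + b

activation = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# One elementwise multiply covers both sparse tensors at once.
product = dense * activation

# Permutation logic: separate the products into per-tensor groups
# (here the groups happen to be the even and odd indices).
group_a = product[[0, 2, 4]]
group_b = product[[1, 3, 5]]

# Each group is accumulated separately into its own output value.
out_a, out_b = group_a.sum(), group_b.sum()
```

The payoff is that the hardware performs a single dense elementwise pass instead of one sparse pass per tensor; `out_a` and `out_b` match the dot products `a @ activation` and `b @ activation` that two separate sparse operations would have produced.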
-
Patent number: 11537922
Abstract: Embodiments relate to a processing node in a temporal memory system that performs temporal pooling or processing by activating cells, where a cell's activation is maintained longer if the activation was previously predicted, or if the activation of more than a certain portion of associated cells in a lower node was correctly predicted. An active cell that was correctly predicted to activate, or that has connections to lower-node active cells that were correctly predicted to become active, contributes to accurate prediction and hence is maintained active longer than cells that were activated but not previously predicted to become active. Embodiments also relate to a temporal memory system for detecting, learning, and predicting spatial patterns and temporal sequences in input data by using action information.
Type: Grant
Filed: April 26, 2019
Date of Patent: December 27, 2022
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Chetan Surpur
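The persistence idea in this abstract — predicted activations outlive unpredicted ones — can be sketched with simple per-cell counters. This is a conceptual toy, not the patented mechanism; the step counts are illustrative assumptions.

```python
# How many more time steps each activation persists (assumed values).
PREDICTED_STEPS, UNPREDICTED_STEPS = 3, 1

persistence = {}

def activate(cell, was_predicted):
    # A correctly predicted activation is maintained longer than an
    # unpredicted one.
    persistence[cell] = PREDICTED_STEPS if was_predicted else UNPREDICTED_STEPS

activate("a", was_predicted=True)
activate("b", was_predicted=False)

# Advance one time step: decrement counters and drop expired cells.
persistence = {c: t - 1 for c, t in persistence.items() if t > 1}

active_cells = set(persistence)
```

After one step only the predicted cell "a" remains active, which is the pooling effect the abstract describes: cells that contribute to accurate prediction stay active across more of the sequence.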
-
Publication number: 20220237465
Abstract: A sparse neural network is trained such that weights or layer outputs of the neural network satisfy sparsity constraints. The sparsity is controlled by pruning one or more subsets of weights based on their signal-to-noise ratio (SNR). During the training process, an inference system generates outputs for a current layer by applying a set of weights for the current layer to a layer output of a previous layer. The set of weights for the current layer may be modeled as random variables sampled from probability distributions. The inference system determines a loss function and updates the set of weights by backpropagating error terms obtained from the loss function. This process is repeated until a convergence criterion is reached. One or more subsets of weights are then pruned based on their SNR depending on sparsity constraints for the weights of the neural network.
Type: Application
Filed: April 20, 2021
Publication date: July 28, 2022
Inventors: Marcus Anthony Lewis, Subutai Ahmad
-
Publication number: 20220108156
Abstract: A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. One of the nodes includes one or more sparse tensors. The accelerator may compress the sparse tensor to a dense tensor. The sparse tensor may also be structured so that the dense locations in the tensor are blocked or partitioned. The accelerator may transpose the weight tensor and align the partitions of the tensor with the hardware architecture. The structured tensor has a balanced number of active values so that the active values can be processed by an efficient number of operating cycles of the accelerator. The accelerator may also perform a bitwise AND operation to determine the locations of dense pairs in two sparse tensors, reducing the number of computations.
Type: Application
Filed: May 27, 2021
Publication date: April 7, 2022
Inventors: Kevin Lee Hunter, Subutai Ahmad
-
Publication number: 20220108157
Abstract: A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. An artificial intelligence (AI) accelerator stores, at a memory circuit, a weight tensor and an input activation tensor that corresponds to a node of the neural network. The AI accelerator performs a computation such as convolution between the weight tensor and the input activation tensor to generate an output activation tensor. The AI accelerator introduces sparsity to the output activation tensor by reducing the number of active values in the output activation tensor. The sparsity activation may be a K-winner approach, which selects the K largest values in the output activation tensor and sets the remaining values to zero.
Type: Application
Filed: May 27, 2021
Publication date: April 7, 2022
Inventors: Kevin Lee Hunter, Subutai Ahmad
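The K-winner approach mentioned above is simple to state in numpy: keep the K largest values of the output activation tensor and zero the rest. The function name and the example values are illustrative, not from the filing.

```python
import numpy as np

def k_winner(activation: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest values of the tensor; set the rest to zero."""
    flat = activation.ravel()
    # Indices of the k largest values (unordered among themselves).
    winners = np.argpartition(flat, -k)[-k:]
    out = np.zeros_like(flat)
    out[winners] = flat[winners]
    return out.reshape(activation.shape)

x = np.array([[0.1, 0.9],
              [0.5, 0.3]])
y = k_winner(x, 2)
# The two largest values (0.9 and 0.5) survive; the rest become zero.
```

Because only K values per tensor are ever nonzero, downstream layers and the accelerator's compute units can skip the zeroed positions entirely.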
-
Patent number: 11270202
Abstract: A processing node in a temporal memory system includes a spatial pooler and a sequence processor. The spatial pooler generates a spatial pooler signal representing similarity between received spatial patterns in an input signal and stored co-occurrence patterns. The spatial pooler signal is represented by a combination of elements that are active or inactive. Each co-occurrence pattern is mapped to different subsets of elements of an input signal. The spatial pooler signal is fed to a sequence processor, where it is processed to learn, recognize, and predict temporal sequences in the input signal. The sequence processor includes one or more columns, each column including one or more cells. A subset of columns may be selected by the spatial pooler signal, causing one or more cells in these columns to activate.
Type: Grant
Filed: March 4, 2019
Date of Patent: March 8, 2022
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Ronald Marianetti, II, Anosh Raj, Subutai Ahmad
-
Publication number: 20220067488
Abstract: Embodiments relate to a first processing node that processes input data having a temporal sequence of spatial patterns by retaining a higher-level context of the temporal sequence. The first processing node performs temporal processing based at least on feedback inputs received from a second processing node. The first processing node determines whether learned temporal sequences are included in the input data based on sequence inputs transmitted within the same level of a hierarchy of processing nodes and the feedback inputs received from an upper level of the hierarchy.
Type: Application
Filed: November 5, 2021
Publication date: March 3, 2022
Inventors: Jeffrey C. Hawkins, Subutai Ahmad
-
Patent number: 11195082
Abstract: Embodiments relate to a first processing node that processes input data having a temporal sequence of spatial patterns by retaining a higher-level context of the temporal sequence. The first processing node performs temporal processing based at least on feedback inputs received from a second processing node. The first processing node determines whether learned temporal sequences are included in the input data based on sequence inputs transmitted within the same level of a hierarchy of processing nodes and the feedback inputs received from an upper level of the hierarchy.
Type: Grant
Filed: December 3, 2019
Date of Patent: December 7, 2021
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad
-
Publication number: 20210374578
Abstract: One or more multi-layer systems are used to perform inference. A multi-layer system may correspond to a node that receives a set of sensory input data for hierarchical processing, and may be grouped to perform processing for sensory input data. Inference systems at lower layers of a multi-layer system pass representations of objects to inference systems at higher layers. Each inference system can perform inference and form its own representations of objects, regardless of the level and layer of the inference system. The set of candidate objects for each inference system is updated to those consistent with feature-location representations for the sensors as well as object representations at lower layers. The set of candidate objects is also updated to those consistent with candidate objects from other inference systems, such as inference systems at other layers of the hierarchy or inference systems included in other multi-layer systems.
Type: Application
Filed: July 20, 2021
Publication date: December 2, 2021
Inventors: Jeffrey C. Hawkins, Subutai Ahmad
-
Patent number: 11100414
Abstract: One or more multi-layer systems are used to perform inference. A multi-layer system may correspond to a node that receives a set of sensory input data for hierarchical processing, and may be grouped to perform processing for sensory input data. Inference systems at lower layers of a multi-layer system pass representations of objects to inference systems at higher layers. Each inference system can perform inference and form its own representations of objects, regardless of the level and layer of the inference system. The set of candidate objects for each inference system is updated to those consistent with feature-location representations for the sensors as well as object representations at lower layers. The set of candidate objects is also updated to those consistent with candidate objects from other inference systems, such as inference systems at other layers of the hierarchy or inference systems included in other multi-layer systems.
Type: Grant
Filed: February 5, 2019
Date of Patent: August 24, 2021
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad
-
Publication number: 20210201181
Abstract: Embodiments relate to performing inference, such as object recognition, based on sensory inputs received from sensors and location information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The location information describes known or potential locations of the sensors generating the sensory inputs. An inference system learns representations of objects by characterizing a plurality of feature-location representations of the objects, and then performs inference by identifying or updating candidate objects consistent with feature-location representations observed from the sensory input data and location information. In one instance, the inference system learns representations of objects for each sensor. The set of candidate objects for each sensor is updated to those consistent with candidate objects for other sensors, as well as the observed feature-location representations for the sensor.
Type: Application
Filed: March 11, 2021
Publication date: July 1, 2021
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Marcus Anthony Lewis
-
Publication number: 20210158168
Abstract: An inference system trains and performs inference using a sparse neural network. The sparse neural network may include one or more layers, and each layer may be associated with a set of sparse weights that represent sparse connections between nodes of a layer and nodes of a previous layer. A layer output may be generated by applying the set of sparse weights associated with the layer to the layer output of a previous layer. Moreover, the one or more layers of the sparse neural network may generate sparse layer outputs. By using sparse representations of weights and layer outputs, robustness and stability of the neural network can be significantly improved, while maintaining competitive accuracy.
Type: Application
Filed: November 26, 2019
Publication date: May 27, 2021
Inventors: Subutai Ahmad, Luiz Scheinkman
-
Patent number: 10977566
Abstract: Embodiments relate to performing inference, such as object recognition, based on sensory inputs received from sensors and location information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The location information describes known or potential locations of the sensors generating the sensory inputs. An inference system learns representations of objects by characterizing a plurality of feature-location representations of the objects, and then performs inference by identifying or updating candidate objects consistent with feature-location representations observed from the sensory input data and location information. In one instance, the inference system learns representations of objects for each sensor. The set of candidate objects for each sensor is updated to those consistent with candidate objects for other sensors, as well as the observed feature-location representations for the sensor.
Type: Grant
Filed: May 12, 2017
Date of Patent: April 13, 2021
Assignee: Numenta, Inc.
Inventors: Jeffrey C. Hawkins, Subutai Ahmad, Yuwei Cui, Marcus Anthony Lewis