Patents by Inventor Dharmendra S. Modha
Dharmendra S. Modha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250028534
Abstract: According to embodiments of the present disclosure, processor chips adapted for efficient massively-concurrent conditional computation are provided. In various embodiments, a chip comprises at least one processing core; a controller operatively coupled to the at least one processing core; and an instruction memory in communication with the controller. The controller is configured to: concurrently compute a plurality of relational operators on a plurality of inputs, resulting in a plurality of results; combine the plurality of results to determine an index; select an operation based on the index; and cause the at least one processing core to execute the selected operation.
Type: Application
Filed: October 21, 2020
Publication date: January 23, 2025
Inventors: Nathaniel Joseph McClatchey, Andrew Stephen Cassidy, Arnon Amir, Dharmendra S. Modha, Jun Sawada, Pallab Datta, Rathinakumar Appuswamy
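The compute-combine-select flow this abstract describes can be illustrated with a short, purely hypothetical Python sketch; the binary packing of results into an index and the example operations below are assumptions for illustration, not the patented hardware design.

```python
def conditional_dispatch(inputs, relational_ops, operations):
    # Evaluate each relational operator on its input (the chip would do this concurrently).
    results = [op(x) for op, x in zip(relational_ops, inputs)]
    # Combine the boolean results into a single index (here: simple binary packing).
    index = 0
    for bit in results:
        index = (index << 1) | int(bool(bit))
    # Select the operation addressed by the index and execute it.
    return operations[index]()

# Example: two comparisons form a 2-bit index into a table of four operations.
ops = {0: lambda: "op00", 1: lambda: "op01", 2: lambda: "op10", 3: lambda: "op11"}
print(conditional_dispatch([3, 7], [lambda a: a > 5, lambda a: a % 2 == 0], ops))  # op00
```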
-
Patent number: 12182687
Abstract: Systems for neural network computation are provided. A neural network processor comprises a plurality of neural cores. The neural network processor has one or more processor precisions per activation. The processor is configured to accept data having a processor feature dimension. A transformation circuit is coupled to the neural network processor, and is adapted to: receive an input data tensor having an input precision per channel at one or more features; transform the input data tensor from the input precision to the processor precision; divide the input data into a plurality of blocks, each block conforming to one of the processor feature dimensions; and provide each of the plurality of blocks to one of the plurality of neural cores. The neural network processor is adapted to compute, by the plurality of neural cores, output of one or more neural network layers.
Type: Grant
Filed: October 11, 2018
Date of Patent: December 31, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John V. Arthur, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
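As a loose illustration of the transform-and-block step described in this abstract, the NumPy sketch below rescales a float tensor to an assumed 8-bit "processor precision" and splits its channel dimension into fixed-size blocks; the block width, rounding scheme, and tensor shapes are assumptions, not the patented transformation circuit.

```python
import numpy as np

PROCESSOR_FEATURE_DIM = 4   # assumed number of features one core handles

def to_processor_precision(x, bits=8):
    # Rescale and round a float tensor to signed integers ("processor precision").
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale).astype(np.int8)

def split_into_blocks(x, block=PROCESSOR_FEATURE_DIM):
    # Divide the feature (channel) dimension into blocks, one per neural core.
    return [x[..., i:i + block] for i in range(0, x.shape[-1], block)]

activations = np.random.randn(2, 2, 8)                  # (height, width, channels)
blocks = split_into_blocks(to_processor_precision(activations))
print(len(blocks), blocks[0].shape)                      # 2 blocks of 4 channels each
```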
-
Patent number: 12182686
Abstract: Networks and encodings therefor are provided that are adapted to provide increased energy efficiency and speed for convolutional operations. In various embodiments, a neural network comprises a plurality of neural cores. Each of the plurality of neural cores comprises a memory. A network interconnects the plurality of neural cores. The memory of each of the plurality of neural cores comprises at least a portion of a weight tensor. The weight tensor comprises a plurality of weights. Each neural core is adapted to retrieve locally or receive a portion of an input image, apply the portion of the weight tensor thereto, and store locally or send a result therefrom via the network to others of the plurality of neural cores.
Type: Grant
Filed: April 30, 2018
Date of Patent: December 31, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Dharmendra S. Modha
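A rough NumPy sketch of the distribution idea in this abstract, assuming each of two cores stores one slice of the weight tensor, applies it to its slice of the input, and the partial results are combined over the interconnect; the slicing and the dot-product reduction are illustrative assumptions only.

```python
import numpy as np

weights = np.random.randn(8, 4)                       # full weight tensor
image_patch = np.random.randn(8)                      # a portion of the input image

core_weight_slices = np.split(weights, 2, axis=0)     # portion held by each of two cores
core_input_slices = np.split(image_patch, 2)          # portion of the input seen by each core

partials = [x @ w for x, w in zip(core_input_slices, core_weight_slices)]  # per-core results
result = sum(partials)                                # partial results combined via the network
print(result.shape)                                   # (4,)
```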
-
Patent number: 12165050
Abstract: Networks for distributing parameters and data to neural network compute cores. In various embodiments, a neural inference chip comprises a plurality of neural cores and at least one network interconnecting the plurality of neural cores. Each of the plurality of neural cores is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The at least one network is adapted to simultaneously deliver synaptic weights and/or input activations to the plurality of neural cores.
Type: Grant
Filed: October 11, 2018
Date of Patent: December 10, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John V. Arthur, Brian Taba, Rathinakumar Appuswamy, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada
-
Patent number: 12067472
Abstract: Defect resistant designs for location-sensitive neural network processor arrays are provided. In various embodiments, a plurality of neural network processor cores are arrayed in a grid. The grid has a plurality of rows and a plurality of columns. A network interconnects at least those of the plurality of neural network processor cores that are adjacent within the grid. The network is adapted to bypass a defective core of the plurality of neural network processor cores by providing a connection between two non-adjacent rows or columns of the grid, and transparently routing messages between the two non-adjacent rows or columns, past the defective core.
Type: Grant
Filed: March 30, 2018
Date of Patent: August 20, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
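The bypass idea can be pictured as a logical-to-physical row map that skips a defective row so traffic flows transparently between the rows on either side of it; the grid size and defect set below are hypothetical, and the real design operates on the interconnect, not on software maps.

```python
def build_row_map(num_physical_rows, defective_rows):
    # Map logical rows to physical rows, skipping any defective row so that
    # messages are routed transparently between the rows on either side of it.
    mapping, logical, physical = {}, 0, 0
    while physical < num_physical_rows:
        if physical in defective_rows:
            physical += 1            # bypass the defective row
            continue
        mapping[logical] = physical
        logical += 1
        physical += 1
    return mapping

print(build_row_map(5, {2}))         # {0: 0, 1: 1, 2: 3, 3: 4}
```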
-
Patent number: 12056598
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
Type: Grant
Filed: October 13, 2022
Date of Patent: August 6, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
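A minimal functional sketch of the datapath this abstract describes: a vector-matrix multiply followed by an element-wise vector function. The shapes and the choice of ReLU as the vector function are assumptions for illustration.

```python
import numpy as np

weight_matrix = np.random.randn(4, 3)            # read from the weight memory
activation_vector = np.random.randn(4)           # read from the activation memory

partial_sums = activation_vector @ weight_matrix  # vector-matrix multiplier
output_vector = np.maximum(partial_sums, 0.0)     # vector processor (ReLU chosen here)
print(output_vector.shape)                        # (3,)
```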
-
Patent number: 11847553
Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
Type: Grant
Filed: June 14, 2018
Date of Patent: December 19, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
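One way to picture partitioning cores from layer dimensions and vector units is the toy calculation below; the specific partitioning rule (ceiling division of the layer's feature dimension by the per-core vector width) is an assumption made purely for illustration.

```python
def partition_cores(num_cores, layer_feature_dim, vector_units_per_core):
    # Cores needed so that grouped vector units cover the layer's feature dimension.
    cores_per_partition = -(-layer_feature_dim // vector_units_per_core)  # ceiling division
    cores_per_partition = min(cores_per_partition, num_cores)
    num_partitions = num_cores // cores_per_partition
    return num_partitions, cores_per_partition

print(partition_cores(num_cores=16, layer_feature_dim=64, vector_units_per_core=16))  # (4, 4)
```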
-
Patent number: 11823054
Abstract: Learned step size quantization in artificial neural networks is provided. In various embodiments, a system comprises an artificial neural network and a computing node. The artificial neural network comprises: a quantizer having a configurable step size, the quantizer adapted to receive a plurality of input values and quantize the plurality of input values according to the configurable step size to produce a plurality of quantized input values, at least one matrix multiplier configured to receive the plurality of quantized input values from the quantizer and to apply a plurality of weights to the quantized input values to determine a plurality of output values having a first precision, and a multiplier configured to scale the output values to a second precision.
Type: Grant
Filed: February 20, 2020
Date of Patent: November 21, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Steve Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha
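The quantize-multiply-rescale flow in this abstract can be sketched as a minimal forward pass, assuming a fixed step size, symmetric clipping, and weights quantized with the same step; none of these choices is specified by the abstract, and the learned (trained) step size itself is not shown.

```python
import numpy as np

def quantize(x, step, n_levels=127):
    # Quantize values to integers according to the configurable step size.
    return np.clip(np.round(x / step), -n_levels, n_levels)

step = 0.05                                  # the configurable step size
x = np.random.randn(8)
w = np.random.randn(8, 4)

q_in = quantize(x, step)                     # quantized input values
y_first = q_in @ quantize(w, step)           # matrix-multiplier output at the first precision
y_second = y_first * step * step             # multiplier scales outputs to the second precision
print(y_second.shape)                        # (4,)
```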
-
Patent number: 11663461
Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
Type: Grant
Filed: July 5, 2018
Date of Patent: May 30, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
-
Patent number: 11636317
Abstract: Long-short term memory (LSTM) cells on spiking neuromorphic hardware are provided. In various embodiments, such systems comprise a spiking neurosynaptic core. The neurosynaptic core comprises a memory cell, an input gate operatively coupled to the memory cell and adapted to selectively admit an input to the memory cell, and an output gate operatively coupled to the memory cell and adapted to selectively release an output from the memory cell. The memory cell is adapted to maintain a value in the absence of input.
Type: Grant
Filed: February 16, 2017
Date of Patent: April 25, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rathinakumar Appuswamy, Michael Beyeler, Pallab Datta, Myron Flickner, Dharmendra S. Modha
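A behavioral sketch of the gated memory cell described here: the input gate admits a candidate value, the output gate releases the stored value, and the value persists when no input is admitted. The gate semantics are deliberately simplified, and the spiking implementation on neurosynaptic hardware is not shown.

```python
class GatedMemoryCell:
    def __init__(self):
        self.value = 0.0

    def step(self, candidate, input_gate, output_gate):
        if input_gate:                  # input gate selectively admits the candidate
            self.value = candidate
        # with no admitted input, the stored value is maintained unchanged
        return self.value if output_gate else None   # output gate selectively releases it

cell = GatedMemoryCell()
cell.step(3.0, input_gate=True, output_gate=False)    # store 3.0, release nothing
print(cell.step(9.0, input_gate=False, output_gate=True))   # 3.0: value was maintained
```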
-
Publication number: 20230062217
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
Type: Application
Filed: October 13, 2022
Publication date: March 2, 2023
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
-
Patent number: 11586893
Abstract: Core utilization optimization by dividing computational blocks across neurosynaptic cores is provided. In some embodiments, a neural network description describing a neural network is read. The neural network comprises a plurality of functional units on a plurality of cores. A functional unit is selected from the plurality of functional units. The functional unit is divided into a plurality of subunits. The plurality of subunits are connected to the neural network in place of the functional unit. The plurality of functional units and the plurality of subunits are reallocated between the plurality of cores. One or more unused cores are removed from the plurality of cores. An optimized neural network description is written based on the reallocation.
Type: Grant
Filed: March 30, 2020
Date of Patent: February 21, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Arnon Amir, Pallab Datta, Nimrod Megiddo, Dharmendra S. Modha
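A toy sketch of the divide-and-reallocate idea: a functional unit too large for one core is split into subunits, then all units are packed onto as few cores as possible so unused cores can be removed. The core capacity, the greedy packing rule, and the unit sizes are assumptions, not the patented optimization procedure.

```python
CORE_CAPACITY = 256      # assumed per-core capacity (e.g., neuron count)

def split_unit(size, limit=CORE_CAPACITY):
    # Divide a functional unit that exceeds one core's capacity into subunits.
    full, rest = divmod(size, limit)
    return [limit] * full + ([rest] if rest else [])

def reallocate(unit_sizes):
    # Greedily pack all units and subunits onto as few cores as possible;
    # cores that receive nothing are simply never allocated (and can be removed).
    cores, load = [[]], 0
    for s in sorted((u for size in unit_sizes for u in split_unit(size)), reverse=True):
        if load + s > CORE_CAPACITY:
            cores.append([])
            load = 0
        cores[-1].append(s)
        load += s
    return cores

print(reallocate([300, 100, 120]))           # [[256], [120, 100], [44]]
```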
-
Patent number: 11580366
Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
Type: Grant
Filed: October 28, 2019
Date of Patent: February 14, 2023
Assignee: International Business Machines Corporation
Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
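The integrate-and-fire behavior described here (integrate input spikes, emit a spike event when the integrated value exceeds a threshold) can be sketched in a few lines; the threshold value, weights, and reset rule are illustrative assumptions, and the scheduler's deterministic delivery is not modeled.

```python
class Neuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold

    def receive(self, weight):
        # Integrate the incoming spike into the membrane potential.
        self.potential += weight
        if self.potential > self.threshold:
            self.potential = 0.0       # reset after firing
            return True                # generate a spike event
        return False

n = Neuron()
print([n.receive(0.6) for _ in range(3)])   # [False, True, False]
```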
-
Patent number: 11537859
Abstract: Neural inference chips are provided. A neural core of the neural inference chip comprises a vector-matrix multiplier; a vector processor; and an activation unit operatively coupled to the vector processor. The vector-matrix multiplier, vector processor, and/or activation unit is adapted to operate at variable precision.
Type: Grant
Filed: December 6, 2019
Date of Patent: December 27, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steve Esser, Myron D. Flickner, Jeffrey McKinstry, Dharmendra S. Modha, Jun Sawada, Brian Taba
-
Patent number: 11521085
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer.
Type: Grant
Filed: April 7, 2020
Date of Patent: December 6, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
-
Patent number: 11501140
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
Type: Grant
Filed: June 19, 2018
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
-
Patent number: 11481621
Abstract: The present invention relates to unsupervised, supervised and reinforced learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module.
Type: Grant
Filed: August 16, 2019
Date of Patent: October 25, 2022
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 11410017
Abstract: Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit.
Type: Grant
Filed: August 16, 2019
Date of Patent: August 9, 2022
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Publication number: 20220180177
Abstract: A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more lookup tables.
Type: Application
Filed: December 8, 2020
Publication date: June 9, 2022
Inventors: Jun Sawada, Myron D. Flickner, Andrew Stephen Cassidy, John Vernon Arthur, Pallab Datta, Dharmendra S. Modha, Steven Kyle Esser, Brian Seisho Taba, Jennifer Klamo, Rathinakumar Appuswamy, Filipp Akopyan, Carlos Ortega Otero
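One way to read the lookup-table-driven activation is sketched below: per-mode re-ranging and scaling terms come from small tables and parameterize the activation function. The table contents, the shift-then-scale interpretation, and the ReLU-like shape are all assumptions invented for illustration.

```python
import numpy as np

# The re-ranging term shifts the input (setting the range of the activations)
# and the scaling term sets their scale; both are read from lookup tables.
RERANGE_LUT = {"relu": 0.0, "shifted_relu": 1.0}
SCALE_LUT = {"relu": 1.0, "shifted_relu": 0.5}

def configurable_activation(x, mode="relu"):
    rerange = RERANGE_LUT[mode]
    scale = SCALE_LUT[mode]
    return scale * np.maximum(x + rerange, 0.0)

print(configurable_activation(np.array([-2.0, 0.5, 2.0]), mode="shifted_relu"))
# approximately [0.0, 0.75, 1.5]
```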
-
Patent number: 11341401
Abstract: Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state.
Type: Grant
Filed: February 28, 2019
Date of Patent: May 24, 2022
Assignee: International Business Machines Corporation
Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Paul A. Merolla, Dharmendra S. Modha
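A toy sketch of the lookup-table-driven update this abstract describes: the controller consults a table keyed by the current neuronal state and the incoming spike event to pick the next state. The state names and transition rules below are invented purely for illustration.

```python
# (current state, spike received?) -> next state
STATE_TRANSITIONS = {
    ("resting", True): "active",
    ("resting", False): "resting",
    ("active", True): "refractory",
    ("active", False): "resting",
    ("refractory", True): "refractory",
    ("refractory", False): "resting",
}

neuron_states = {"n0": "resting", "n1": "active"}   # memory device of neuronal states

def update(neuron, spike_arrived):
    # Controller unit: update the neuron's state from the transition lookup table.
    neuron_states[neuron] = STATE_TRANSITIONS[(neuron_states[neuron], spike_arrived)]

update("n0", True)
update("n1", False)
print(neuron_states)     # {'n0': 'active', 'n1': 'resting'}
```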