Patents by Inventor Dharmendra S. Modha

Dharmendra S. Modha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847553
    Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: December 19, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
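
To make the core-partitioning idea concrete, here is a minimal Python sketch (not IBM's implementation; the function name, the even-split policy, and all parameters are illustrative assumptions) of dividing a pool of cores into partitions sized from a layer's dimensions and the vector width:

```python
def partition_cores(num_cores, layer_height, layer_width, vector_width):
    """Split a pool of cores into partitions sized to a layer's dimensions."""
    # Number of vector-width groups the layer's elements decompose into.
    groups = max(1, (layer_height * layer_width) // vector_width)
    # Use no more partitions than there are groups or cores.
    num_partitions = min(num_cores, groups)
    base, extra = divmod(num_cores, num_partitions)
    partitions, start = [], 0
    for i in range(num_partitions):
        size = base + (1 if i < extra else 0)
        partitions.append(list(range(start, start + size)))
        start += size
    return partitions

# Example: 6 cores over an 8x8 layer with 16-wide vector units.
print(partition_cores(6, 8, 8, 16))   # [[0, 1], [2, 3], [4], [5]]
```
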
  • Patent number: 11823054
    Abstract: Learned step size quantization in artificial neural networks is provided. In various embodiments, a system comprises an artificial neural network and a computing node. The artificial neural network comprises: a quantizer having a configurable step size, the quantizer adapted to receive a plurality of input values and quantize the plurality of input values according to the configurable step size to produce a plurality of quantized input values, at least one matrix multiplier configured to receive the plurality of quantized input values from the quantizer and to apply a plurality of weights to the quantized input values to determine a plurality of output values having a first precision, and a multiplier configured to scale the output values to a second precision.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: November 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steve Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha
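
The quantize-then-rescale data path in this abstract fits in a few lines. Below is a minimal sketch assuming a symmetric uniform quantizer; the function name, the toy layer, and all values are illustrative, not the patented circuit:

```python
import numpy as np

def quantize(x, step, n_bits=8):
    """Quantize real values to signed integers with a configurable step size."""
    q_max = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(x / step), -q_max, q_max).astype(np.int32)

# Toy layer: quantize inputs, matrix-multiply at integer precision,
# then scale outputs back to real-valued (second) precision.
step = 0.05                               # the configurable step size
x = np.array([0.12, -0.31, 0.07])
w = np.array([[1, -2, 3], [0, 4, -1]])

x_q = quantize(x, step)      # quantized input values: [2, -6, 1]
y_int = w @ x_q              # integer outputs at first precision
y = y_int * step             # rescaled outputs at second precision
print(y)                     # [ 0.85 -1.25]
```
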
  • Patent number: 11663461
    Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: May 30, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
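
A toy model of the distribute-then-execute flow may help. In this sketch, all names are hypothetical; the same microcode is handed to every core and then run in lockstep, loosely mirroring the synchronous execution the abstract describes:

```python
# All names here are hypothetical; this mimics only the distribute-then-
# execute-synchronously flow, not the on-chip network itself.
class NeuralCore:
    def __init__(self):
        self.microcode = None

    def load(self, microcode):
        self.microcode = microcode   # microcode arrives over the "network"

    def step(self, x):
        return self.microcode(x)     # execute the loaded tensor op

def run_layer(cores, microcode, inputs):
    for core in cores:               # distribute to all cores
        core.load(microcode)
    # synchronous execution: every core runs the same op in lockstep
    return [core.step(x) for core, x in zip(cores, inputs)]

cores = [NeuralCore() for _ in range(4)]
print(run_layer(cores, lambda x: max(0, x), [-2, -1, 1, 2]))  # [0, 0, 1, 2]
```
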
  • Patent number: 11636317
    Abstract: Long-short term memory (LSTM) cells on spiking neuromorphic hardware are provided. In various embodiments, such systems comprise a spiking neurosynaptic core. The neurosynaptic core comprises a memory cell, an input gate operatively coupled to the memory cell and adapted to selectively admit an input to the memory cell, and an output gate operatively coupled to the memory cell and adapted to selectively release an output from the memory cell. The memory cell is adapted to maintain a value in the absence of input.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: April 25, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rathinakumar Appuswamy, Michael Beyeler, Pallab Datta, Myron Flickner, Dharmendra S. Modha
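
The gating behavior described above can be illustrated directly. The following sketch is a plain Python analogue, not spiking hardware; the class and parameter names are assumptions:

```python
class GatedMemoryCell:
    """Toy analogue of the gated memory cell described above."""
    def __init__(self):
        self.value = 0.0   # maintained in the absence of input

    def step(self, x, input_gate, output_gate):
        if input_gate:                 # input gate: selectively admit
            self.value = x
        return self.value if output_gate else None   # output gate: release

cell = GatedMemoryCell()
cell.step(3.0, input_gate=True, output_gate=False)    # store 3.0
cell.step(9.9, input_gate=False, output_gate=False)   # gate closed; keeps 3.0
print(cell.step(0.0, input_gate=False, output_gate=True))   # 3.0
```
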
  • Publication number: 20230062217
    Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
    Type: Application
    Filed: October 13, 2022
    Publication date: March 2, 2023
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
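
As a rough illustration of this core's data path, here is a NumPy sketch; the class and method names are hypothetical, and the vector processor is reduced to a single elementwise function:

```python
import numpy as np

class NeuralCore:
    """Toy core: weight memory, activation memory, VMM, vector processor."""
    def __init__(self, weights, activations):
        self.weight_memory = weights           # weight matrix W
        self.activation_memory = activations   # activation vector a

    def vector_matrix_multiply(self):
        return self.weight_memory @ self.activation_memory

    def vector_process(self, vectors, fn=np.add):
        return fn(*vectors)   # one vector function over the input vectors

core = NeuralCore(np.eye(3) * 2.0, np.array([1.0, 2.0, 3.0]))
z = core.vector_matrix_multiply()                 # [2. 4. 6.]
print(core.vector_process([z, np.full(3, 0.5)]))  # [2.5 4.5 6.5]
```
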
  • Patent number: 11586893
    Abstract: Core utilization optimization by dividing computational blocks across neurosynaptic cores is provided. In some embodiments, a neural network description describing a neural network is read. The neural network comprises a plurality of functional units on a plurality of cores. A functional unit is selected from the plurality of functional units. The functional unit is divided into a plurality of subunits. The plurality of subunits are connected to the neural network in place of the functional unit. The plurality of functional units and the plurality of subunits are reallocated between the plurality of cores. One or more unused cores are removed from the plurality of cores. An optimized neural network description is written based on the reallocation.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: February 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Arnon Amir, Pallab Datta, Nimrod Megiddo, Dharmendra S. Modha
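
The divide-reallocate-remove pipeline can be sketched in a few lines. Everything here (the capacity constant, the round-robin packing, the list encoding of functional units) is an illustrative assumption, not the patented optimizer:

```python
CORE_CAPACITY = 4

def optimize(units, num_cores):
    # divide oversized functional units into capacity-sized subunits
    subunits = [u[i:i + CORE_CAPACITY]
                for u in units for i in range(0, len(u), CORE_CAPACITY)]
    # reallocate subunits across the plurality of cores
    cores = [[] for _ in range(num_cores)]
    for k, s in enumerate(subunits):
        cores[k % num_cores].append(s)
    # remove unused cores from the plurality of cores
    return [c for c in cores if c]

units = [list(range(10)), list(range(3))]
print(optimize(units, num_cores=8))   # 4 occupied cores remain
```
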
  • Patent number: 11580366
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: February 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
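
The integrate-and-fire behavior in this abstract is easy to illustrate. A minimal sketch follows, with hypothetical threshold and leak parameters; the deterministic scheduler is omitted:

```python
class SpikingNeuron:
    """Integrate input spikes; emit a spike event above threshold."""
    def __init__(self, threshold=4.0, leak=0.0):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, weighted_spikes):
        self.potential += sum(weighted_spikes) - self.leak
        if self.potential > self.threshold:
            self.potential = 0.0       # reset after the spike event
            return True
        return False

neuron = SpikingNeuron(threshold=4.0)
for tick, spikes in enumerate([[1.5], [2.0], [1.0], []]):
    if neuron.step(spikes):
        print(f"spike event at tick {tick}")   # fires at tick 2
```
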
  • Patent number: 11537859
    Abstract: Neural inference chips are provided. A neural core of the neural inference chip comprises a vector-matrix multiplier; a vector processor; and an activation unit operatively coupled to the vector processor. The vector-matrix multiplier, vector processor, and/or activation unit is adapted to operate at variable precision.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: December 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steve Esser, Myron D. Flickner, Jeffrey McKinstry, Dharmendra S. Modha, Jun Sawada, Brian Taba
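
To illustrate variable precision, the sketch below rounds the same data onto signed grids of different bit widths; the scaling rule is an assumption chosen for brevity, not the chip's arithmetic:

```python
import numpy as np

def to_precision(x, n_bits):
    """Round values onto a signed fixed grid of the requested bit width."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / q_max
    return np.round(x / scale) * scale

x = np.array([0.11, -0.73, 0.42])
for bits in (2, 4, 8):        # the same data at three precisions
    print(bits, to_precision(x, bits))
```
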
  • Patent number: 11521085
    Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: December 6, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
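
A rough software analogue of the advancing instruction buffer: instructions enter one end, shift one position per cycle, and an element consumes an instruction only when its memory holds the matching data. All names and the matching rule are illustrative assumptions:

```python
from collections import deque

def run(instructions, element_data, cycles):
    """Advance instructions one buffer position per cycle; an element
    consumes an instruction only when its memory holds matching data."""
    buffer = deque([None] * len(element_data))   # one slot per element
    pending, issued = deque(instructions), []
    for cycle in range(cycles):
        buffer.rotate(1)                          # advance between positions
        buffer[0] = pending.popleft() if pending else None
        for pos, instr in enumerate(buffer):
            if instr is not None and instr in element_data[pos]:
                issued.append((cycle, pos, instr))
    return issued

print(run(["ld_a", "ld_b"], [{"ld_a"}, {"ld_b"}, set()], cycles=4))
# [(0, 0, 'ld_a'), (2, 1, 'ld_b')]
```
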
  • Patent number: 11501140
    Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: November 15, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
  • Patent number: 11481621
    Abstract: The present invention relates to unsupervised, supervised and reinforced learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: October 25, 2022
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
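
The corresponding-neuron structure lends itself to a vectorized sketch; the encoding below (modules as vectors, an edge as one weight per neuron pair) is an illustrative assumption:

```python
import numpy as np

# Illustrative encoding: a neural module is a vector of neuron outputs;
# an edge carries one synaptic weight per neuron, connecting each neuron
# in the first module to its corresponding neuron in the second.
module_a = np.array([1.0, 0.0, 1.0])       # three digital neurons
edge_weights = np.array([0.5, 0.8, 0.2])   # one synapse per neuron pair

module_b_input = module_a * edge_weights   # corresponding-neuron coupling
print(module_b_input)                      # [0.5 0.  0.2]
```
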
  • Patent number: 11410017
    Abstract: Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
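
A connectivity core can be caricatured as a routing table from outgoing to incoming axons; the sketch below is only that caricature, with hypothetical axon indices:

```python
# Hypothetical routing table held by a connectivity core: it maps each
# outgoing axon of a functional core back to one of its incoming axons.
routing_table = {0: 2, 1: 0, 2: 1}

outgoing_spikes = {0: True, 1: False, 2: True}
incoming = {routing_table[axon]: fired
            for axon, fired in outgoing_spikes.items()}
print(incoming)   # {2: True, 0: False, 1: True}
```
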
  • Publication number: 20220180177
    Abstract: A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more look up tables.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 9, 2022
    Inventors: Jun Sawada, Myron D. Flickner, Andrew Stephen Cassidy, John Vernon Arthur, Pallab Datta, Dharmendra S. Modha, Steven Kyle Esser, Brian Seisho Taba, Jennifer Klamo, Rathinakumar Appuswamy, Filipp Akopyan, Carlos Ortega Otero
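
The two lookup-table terms map naturally to code. In this sketch the LUT contents, the function ids, and the clamp-then-scale form are illustrative assumptions:

```python
import numpy as np

# Hypothetical lookup tables, indexed by a per-layer function id: each
# entry supplies a re-ranging term (clamp bounds) and a scaling term.
RERANGE_LUT = {0: (0.0, 6.0), 1: (-1.0, 1.0)}
SCALE_LUT = {0: 0.5, 1: 1.0}

def activation(x, function_id):
    lo, hi = RERANGE_LUT[function_id]   # re-ranging: range of activations
    scale = SCALE_LUT[function_id]      # scaling: scale of activations
    return np.clip(x, lo, hi) * scale

intermediate = np.array([-3.0, 2.0, 9.0])
print(activation(intermediate, 0))      # ReLU6-like, halved: [0. 1. 3.]
```
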
  • Patent number: 11341401
    Abstract: Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: May 24, 2022
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Paul A. Merolla, Dharmendra S. Modha
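
A minimal sketch of table-driven state updates follows, with an invented three-state table standing in for the patent's transition information:

```python
# Hypothetical state-transition table: (current_state, spike_arrived) ->
# next_state. The controller updates each neuron from this table alone.
TRANSITIONS = {
    ("resting", True): "integrating",
    ("resting", False): "resting",
    ("integrating", True): "firing",
    ("integrating", False): "resting",
    ("firing", True): "integrating",
    ("firing", False): "resting",
}

neuron_states = ["resting", "integrating", "firing"]
incoming_spikes = [True, True, False]

neuron_states = [TRANSITIONS[(s, spike)]
                 for s, spike in zip(neuron_states, incoming_spikes)]
print(neuron_states)   # ['integrating', 'firing', 'resting']
```
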
  • Publication number: 20220129436
    Abstract: Systems are provided that can produce symbolic and numeric representations of the neural network outputs, such that these outputs can be used to validate correctness of the implementation of the neural network. In various embodiments, a description of an artificial neural network containing no data-dependent branching is read. Based on the description of the artificial neural network, a symbolic representation is constructed of an output of the artificial neural network, the symbolic representation comprising at least one variable. The symbolic representation is compared to a ground truth symbolic representation, thereby validating the neural network system.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Alexander Andreopoulos, Dharmendra S. Modha, Andrew Stephen Cassidy, Brian Seisho Taba, Carmelo Di Nolfo, Hartmut Penner, John Vernon Arthur, Jun Sawada, Myron D. Flickner, Pallab Datta, Rathinakumar Appuswamy
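
The symbolic-output idea can be demonstrated with SymPy. The two-weight network, the weights, and the ground-truth expression below are all invented for illustration:

```python
import sympy as sp

# With no data-dependent branching, the network's output is a closed-form
# expression over symbolic inputs. Weights and ground truth are invented.
x1, x2 = sp.symbols("x1 x2")

def network(a, b):
    return 2 * a + 3 * b + 1   # a tiny fixed linear layer

output = sp.expand(network(x1, x2))     # symbolic representation
ground_truth = 2 * x1 + 3 * x2 + 1

# Validation: the implementation matches the ground-truth expression.
print(sp.simplify(output - ground_truth) == 0)   # True
```
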
  • Publication number: 20220129769
    Abstract: Modular neural network computing apparatus are provided with distributed neural network storage. In various embodiments, a neural inference processor comprises a plurality of neural inference cores, at least one model network interconnecting the plurality of neural inference cores, and at least one activation network interconnecting the plurality of neural inference cores. Each of the plurality of neural inference cores comprises memory adapted to store input activations, output activations, and a neural network model. The neural network model comprises synaptic weights, neuron parameters, and neural network instructions. The at least one model network is configured to distribute the neural network model among the plurality of neural inference cores. Each of the plurality of neural inference cores is configured to apply the synaptic weights to input activations from its memory to produce a plurality of output activations to its memory.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Jun Sawada, Dharmendra S. Modha, John Vernon Arthur, Andrew Stephen Cassidy, Pallab Datta, Rathinakumar Appuswamy, Tapan Kumar Nayak, Brian Seisho Taba, Carlos Ortega Otero, Filipp Akopyan, Arnon Amir, Nathaniel Joseph McClatchey
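
A sketch of the distribute-then-compute flow, with hypothetical class names and NumPy arrays standing in for the on-chip model and activation networks:

```python
import numpy as np

class InferenceCore:
    """Toy core with its own memory for weights and activations."""
    def __init__(self):
        self.weights = None       # its slice of the neural network model
        self.activations = None   # input activations in local memory

    def compute(self):
        return self.weights @ self.activations   # output activations

weights = np.arange(12.0).reshape(4, 3)
cores = [InferenceCore() for _ in range(2)]
for core, w_slice in zip(cores, np.split(weights, 2)):  # "model network"
    core.weights = w_slice
for core in cores:                                      # "activation network"
    core.activations = np.ones(3)
print([core.compute() for core in cores])   # [array([3., 12.]), array([21., 30.])]
```
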
  • Publication number: 20220129742
    Abstract: Simulation and validation of neural network systems are provided. In various embodiments, a description of an artificial neural network is read. A directed graph is constructed comprising a plurality of edges and a plurality of nodes, each of the plurality of edges corresponding to a queue and each of the plurality of nodes corresponding to a computing function of the neural network system. A graph state is updated over a plurality of time steps according to the description of the neural network, the graph state being defined by the contents of each of the plurality of queues. Each of a plurality of assertions is tested at each of the plurality of time steps, each of the plurality of assertions being a function of a subset of the graph state. Invalidity of the neural network system is indicated for each violation of one of the plurality of assertions.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Alexander Andreopoulos, Dharmendra S. Modha, Carmelo Di Nolfo, Myron D. Flickner, Andrew Stephen Cassidy, Brian Seisho Taba, Pallab Datta, Rathinakumar Appuswamy, Jun Sawada
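
The queue-per-edge model is straightforward to sketch. The graph, its single compute node, and both assertions below are invented examples of the mechanism the abstract describes:

```python
from collections import deque

# Invented example: nodes are computing functions, edges are queues, and
# the graph state is the contents of all queues, checked every time step.
edges = {"in->double": deque([1, 2, 3]), "double->out": deque()}

def double_node():
    if edges["in->double"]:
        edges["double->out"].append(2 * edges["in->double"].popleft())

assertions = [
    lambda: all(v >= 0 for q in edges.values() for v in q),  # no negatives
    lambda: len(edges["double->out"]) <= 3,                  # bounded queue
]

for step in range(3):
    double_node()                       # update the graph state
    for i, check in enumerate(assertions):
        if not check():                 # any violation marks invalidity
            print(f"invalid: assertion {i} violated at step {step}")
```
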
  • Publication number: 20220129743
    Abstract: Neural network accelerator output ranking is provided. In various embodiments, a system comprises a data memory; a memory controller configured to access the data memory; a plurality of comparators configured in a tree; a register; and a two-way comparator. The memory controller is configured to provide a first plurality of values from the data memory to the comparator tree. The comparator tree is configured to perform a plurality of concurrent pairwise comparisons of the first plurality of values to arrive at a first greatest value of the first plurality of values. The two-way comparator is configured to output the greater of the greatest value from the comparator tree and a stored value from the register. The register is configured to store the output of the two-way comparator.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 28, 2022
    Inventors: Jun Sawada, Rathinakumar Appuswamy, John Vernon Arthur, Andrew Stephen Cassidy, Pallab Datta, Michael Vincent DeBole, Steven Kyle Esser, Dharmendra S. Modha
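
A software rendering of the comparator tree and register may clarify the data flow; the block sizes and values are illustrative:

```python
def tree_max(values):
    """Concurrent pairwise comparisons, expressed as a reduction tree."""
    while len(values) > 1:
        paired = [max(values[i], values[i + 1])
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # odd element passes through
            paired.append(values[-1])
        values = paired
    return values[0]

register = float("-inf")               # the stored value
for block in ([3, 7, 2, 9], [5, 1, 8, 4]):   # value blocks from data memory
    # two-way comparator: greater of the tree output and the register
    register = max(tree_max(block), register)
print(register)   # 9
```
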
  • Publication number: 20220121925
    Abstract: Chips supporting constant time program control of nested loops are provided. In various embodiments, a chip comprises at least one arithmetic-logic computing unit and a controller operatively coupled to the at least one arithmetic-logic computing unit. The controller is configured according to a program configuration, the program configuration comprising at least one inner loop and at least one outer loop. The controller is configured to cause the at least one arithmetic computing unit to execute a plurality of operations according to the program configuration. The controller is configured to maintain at least a first loop counter and a second loop counter, the first loop counter configured to count a number of executed iterations of the at least one outer loop, and the second loop counter configured to count a number of executed iterations of the at least one inner loop.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventors: Arnon Amir, Andrew Stephen Cassidy, Nathaniel Joseph McClatchey, Jun Sawada, Dharmendra S. Modha, Rathinakumar Appuswamy
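
A minimal sketch of controller-maintained loop counters for one inner and one outer loop; the function and its parameters are illustrative assumptions:

```python
def run_program(outer_iters, inner_iters, op):
    """Drive a two-level nested loop from explicit loop counters."""
    outer_count = 0   # executed iterations of the outer loop
    inner_count = 0   # executed iterations of the inner loop
    while outer_count < outer_iters:
        op(outer_count, inner_count)
        inner_count += 1
        if inner_count == inner_iters:   # inner loop wraps; carry outward
            inner_count = 0
            outer_count += 1

run_program(2, 3, lambda o, i: print(f"outer={o} inner={i}"))
```
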
  • Publication number: 20220121951
    Abstract: Conflict-free, stall-free, broadcast networks on neural inference chips are provided. In various embodiments, a neural inference chip comprises a plurality of network nodes and a network on chip interconnecting the plurality of network nodes. The network comprises at least one pair of directional paths. The paths of each pair have opposite directions and a common end. The network is configured to accept data at any of the plurality of nodes. The network is configured to propagate data along a first of the pair of directional paths from a source node to the common end of the pair of directional paths and along a second of the pair of directional paths from the common end of the pair of directional paths to one or more destination nodes.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventors: Andrew Stephen Cassidy, Rathinakumar Appuswamy, John Vernon Arthur, Jun Sawada, Dharmendra S. Modha, Michael Vincent DeBole, Pallab Datta, Tapan Kumar Nayak
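
The up-then-down broadcast over a pair of opposite directional paths can be sketched on a linear network of five nodes; the node count, names, and delivery rule are assumptions:

```python
def broadcast(source, destinations, data, num_nodes=5):
    """Inject at any node; travel the first path to the common end, then
    the opposite path from the common end toward every destination."""
    common_end = num_nodes - 1
    up_hops = list(range(source, common_end + 1))   # first directional path
    down_hops = list(range(common_end, -1, -1))     # opposite path back
    delivered = {n: data for n in down_hops if n in destinations}
    return up_hops, down_hops, delivered

up, down, out = broadcast(source=1, destinations={0, 3}, data="pkt")
print(up)     # [1, 2, 3, 4]
print(down)   # [4, 3, 2, 1, 0]
print(out)    # {3: 'pkt', 0: 'pkt'}
```
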