Patents by Inventor Dharmendra S. Modha

Dharmendra S. Modha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10528843
    Abstract: Embodiments of the invention provide a method of visual saliency estimation comprising receiving an input video of image frames. Each image frame has one or more channels, and each channel has one or more pixels. The method further comprises, for each channel of each image frame, generating corresponding neural spiking data based on a pixel intensity of each pixel of the channel, generating a corresponding multi-scale data structure based on the corresponding neural spiking data, and extracting a corresponding map of features from the corresponding multi-scale data structure. The multi-scale data structure comprises one or more data layers, wherein each data layer represents a spike representation of pixel intensities of a channel at a corresponding scale. The method further comprises encoding each map of features extracted as neural spikes.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Alexander Andreopoulos, Steven K. Esser, Dharmendra S. Modha
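
As a rough, non-authoritative illustration of the rate-coding and multi-scale steps this entry describes, the Python sketch below converts one channel's pixel intensities to spike counts and builds a pyramid of spike maps. The binomial rate code, the 2x2 average pooling, and all function names are assumptions for illustration, not the patented method.

```python
# Illustrative only: rate-coded spiking plus a multi-scale pyramid.
# The coding scheme and pooling rule are assumptions, not the patent's.
import numpy as np

def intensity_to_spikes(channel, steps=16):
    """Rate-code pixel intensities in [0, 1] as spike counts over `steps` ticks."""
    return np.random.binomial(steps, np.clip(channel, 0.0, 1.0))

def multiscale(spikes, levels=3):
    """Build data layers by 2x2 average pooling; one layer per scale."""
    pyramid = [spikes.astype(float)]
    for _ in range(levels - 1):
        s = pyramid[-1]
        h, w = s.shape[0] // 2 * 2, s.shape[1] // 2 * 2
        pyramid.append(s[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyramid

frame = np.random.rand(64, 64)            # one channel of one image frame
print([p.shape for p in multiscale(intensity_to_spikes(frame))])
# [(64, 64), (32, 32), (16, 16)]
```
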
  • Publication number: 20200004678
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Application
    Filed: September 11, 2019
    Publication date: January 2, 2020
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
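
A minimal sketch of the write-path translation and read-side buffer this entry describes. The linear address layout (a fixed number of inputs per core), the class names, and the stub network are all assumptions; the patent does not specify them.

```python
# Hypothetical memory-map-to-network translation; the address layout
# (contiguous inputs per core) is an assumption for illustration.
class MemoryMappedInterface:
    def __init__(self, base, inputs_per_core):
        self.base = base                      # start of the mapped window
        self.inputs_per_core = inputs_per_core
        self.buffer = {}                      # read side: address -> data

    def translate(self, write_address):
        """Translate a memory address into a (core, input) network address."""
        return divmod(write_address - self.base, self.inputs_per_core)

    def write(self, write_address, write_data, network):
        core, axon = self.translate(write_address)
        network.send(core, axon, write_data)  # deliver to the destination network

    def on_message(self, address, data):
        """Buffer a message from a source network where the memory map can see it."""
        self.buffer[address] = data

class StubNetwork:                            # stands in for the real fabric
    def send(self, core, axon, data):
        print(f"deliver {data!r} to core {core}, input {axon}")

mmio = MemoryMappedInterface(base=0x1000, inputs_per_core=256)
mmio.write(0x1105, 1, StubNetwork())          # lands at core 1, input 5
```
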
  • Publication number: 20200005155
    Abstract: Mapping of logical neural cores to physical neural cores is provided. In various embodiments, a neural network description describing a plurality of logical cores is read. A plurality of precedence relationships is determined among the plurality of logical cores. Based on the plurality of precedence relationships, a directed acyclic graph among the plurality of logical cores is generated. By breadth first search of the directed acyclic graph, a schedule is generated. The schedule maps each of the plurality of logical cores to one of a plurality of physical cores at one of a plurality of time slices. Execution of the schedule is simulated.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Inventors: Pallab Datta, Dharmendra S. Modha
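
The precedence-DAG and breadth-first scheduling steps above map naturally to a short sketch: each BFS level of the DAG becomes a time slice. The round-robin assignment of logical cores to physical cores within a level is an assumption, and the simulation step is omitted.

```python
# Sketch of level-order scheduling of logical cores onto physical cores.
from collections import defaultdict, deque

def schedule(num_logical, precedences, num_physical):
    succs, indeg = defaultdict(list), [0] * num_logical
    for before, after in precedences:         # precedence relationships
        succs[before].append(after)
        indeg[after] += 1
    frontier = deque(c for c in range(num_logical) if indeg[c] == 0)
    plan, t = {}, 0
    while frontier:                           # breadth-first over the DAG
        level, frontier = list(frontier), deque()
        for slot, core in enumerate(level):
            plan[core] = (slot % num_physical, t)   # (physical core, time slice)
            for nxt in succs[core]:
                indeg[nxt] -= 1
                if indeg[nxt] == 0:
                    frontier.append(nxt)
        t += 1
    return plan

print(schedule(5, [(0, 2), (1, 2), (2, 3), (2, 4)], num_physical=2))
# {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (0, 2), 4: (1, 2)}
```
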
  • Patent number: 10521714
    Abstract: Embodiments of the invention provide a neural core circuit comprising a synaptic interconnect network including plural electronic synapses for interconnecting one or more source electronic neurons with one or more target electronic neurons. The interconnect network further includes multiple axon paths and multiple dendrite paths. Each synapse is at a cross-point junction of the interconnect network between a dendrite path and an axon path. The core circuit further comprises a routing module maintaining routing information. The routing module routes output from a source electronic neuron to one or more selected axon paths. Each synapse provides a configurable level of signal conduction from an axon path of a source electronic neuron to a dendrite path of a target electronic neuron.
    Type: Grant
    Filed: January 21, 2016
    Date of Patent: December 31, 2019
    Assignee: International Business Machines Corporation
    Inventors: Steven K. Esser, Dharmendra S. Modha
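
As a toy model of the crossbar this entry describes: rows act as axon paths, columns as dendrite paths, and each cross-point entry is a configurable conduction level. The integrate-and-fire dynamics and reset rule are simplified assumptions.

```python
# Toy crossbar: each synapse sits at an (axon path, dendrite path) junction.
import numpy as np

class CrossbarCore:
    def __init__(self, num_axons, num_neurons, threshold=1.0):
        self.synapses = np.zeros((num_axons, num_neurons))  # cross-point weights
        self.potential = np.zeros(num_neurons)
        self.threshold = threshold

    def inject(self, axon_spikes):
        """axon_spikes: binary vector, one entry per axon path."""
        self.potential += axon_spikes @ self.synapses
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0                          # reset after firing
        return fired

core = CrossbarCore(num_axons=4, num_neurons=3)
core.synapses[0, 1] = 0.6
core.synapses[2, 1] = 0.5
print(core.inject(np.array([1, 0, 1, 0])))  # neuron 1 crosses threshold
```
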
  • Publication number: 20190385046
    Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on the dimensions of each layer and the vector units.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
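
One way to picture the core partitioning: size each group of cores to a layer's output shape relative to the vector width. The sizing heuristic below is purely an assumption; the patent covers the reconfigurable mechanism, not this rule.

```python
# Hypothetical partitioning heuristic driven by layer dimensions.
def partition_cores(num_cores, layer_dims, vector_width):
    """Split `num_cores` into groups sized to a layer's output elements."""
    elements = 1
    for d in layer_dims:
        elements *= d
    lanes_needed = -(-elements // vector_width)      # ceiling division
    group = max(1, min(num_cores, lanes_needed))
    return [list(range(i, min(i + group, num_cores)))
            for i in range(0, num_cores, group)]

print(partition_cores(8, layer_dims=(4, 4, 16), vector_width=64))
# [[0, 1, 2, 3], [4, 5, 6, 7]] -- two partitions of four cores for this layer
```
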
  • Publication number: 20190385048
    Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
    Type: Application
    Filed: June 19, 2018
    Publication date: December 19, 2019
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
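
A minimal datapath sketch of the core described above: a vector-matrix multiply followed by a vector function. Plain arrays stand in for the weight and activation memories, and all names are illustrative.

```python
# Datapath sketch: vector-matrix multiply, then a vector-processor stage.
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

class NeuralCore:
    def __init__(self, weight_matrix, activation_fn=relu):
        self.weights = weight_matrix        # from the weight memory
        self.activation_fn = activation_fn  # one of the vector functions

    def step(self, activation_vector):
        partial = activation_vector @ self.weights   # vector-matrix multiplier
        return self.activation_fn(partial)           # vector processor

core = NeuralCore(np.array([[0.5, -1.0], [0.25, 2.0]]))
print(core.step(np.array([1.0, 2.0])))  # [1. 3.]
```
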
  • Publication number: 20190377997
    Abstract: Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 12, 2019
    Inventor: Dharmendra S. Modha
  • Patent number: 10504021
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array having multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: December 10, 2019
    Assignees: International Business Machines Corporation, Cornell University
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
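
The scheduler's deterministic delivery can be sketched as a time wheel: each spike event is enqueued for a fixed future tick and delivered to its axon exactly then. The wheel structure and delay bound are assumptions for illustration.

```python
# Time-wheel sketch of deterministic spike-event delivery.
from collections import defaultdict

class SpikeScheduler:
    def __init__(self, max_delay=16):
        self.wheel = defaultdict(list)   # tick -> axons receiving spikes
        self.now = 0
        self.max_delay = max_delay

    def submit(self, axon, delay=1):
        """Schedule a spike for delivery exactly `delay` ticks in the future."""
        assert 0 < delay <= self.max_delay
        self.wheel[self.now + delay].append(axon)

    def tick(self):
        """Advance one tick and return the axons receiving spikes this tick."""
        self.now += 1
        return self.wheel.pop(self.now, [])

sched = SpikeScheduler()
sched.submit(axon=7, delay=2)
print(sched.tick(), sched.tick())  # [] [7]
```
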
  • Publication number: 20190370655
    Abstract: The present invention relates to unsupervised, supervised and reinforcement learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 5, 2019
    Inventor: Dharmendra S. Modha
  • Publication number: 20190372831
    Abstract: Embodiments of the invention provide a neurosynaptic network circuit comprising multiple neurosynaptic devices including a plurality of neurosynaptic core circuits for processing one or more data packets. The neurosynaptic devices further include a routing system for routing the data packets between the core circuits. At least one of the neurosynaptic devices is faulty. The routing system is configured for selectively bypassing each faulty neurosynaptic device when processing and routing the data packets.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 5, 2019
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Publication number: 20190332925
    Abstract: Networks and encodings therefor are provided that are adapted to provide increased energy efficiency and speed for convolutional operations. In various embodiments, a neural network comprises a plurality of neural cores. Each of the plurality of neural cores comprises a memory. A network interconnects the plurality of neural cores. The memory of each of the plurality of neural cores comprises at least a portion of a weight tensor, the weight tensor comprising a plurality of weights. Each neural core is adapted to retrieve locally or receive a portion of an input image, apply the portion of the weight tensor thereto, and store locally or send a result therefrom via the network to others of the plurality of neural cores.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 31, 2019
    Inventor: Dharmendra S. Modha
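
To make the distribution scheme concrete, the sketch below gives each "core" the same small weight tensor (one filter) and a tile of the input image to convolve locally. Tiling, halo exchange, and the inter-core sends are simplified away; everything here is hypothetical.

```python
# Each core convolves its own image tile with the shared weight tensor.
import numpy as np

def conv2d_valid(tile, kernel):
    """Plain 'valid' 2-D convolution (no padding), standing in for a core's work."""
    kh, kw = kernel.shape
    h, w = tile.shape[0] - kh + 1, tile.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(tile[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.ones((3, 3)) / 9.0                      # the shared weight tensor
tiles = [image[:, :4], image[:, 4:]]                # one tile per neural core
results = [conv2d_valid(t, kernel) for t in tiles]  # "each core" works locally
print([r.shape for r in results])                   # [(6, 2), (6, 2)]
```
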
  • Publication number: 20190332924
    Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
    Type: Application
    Filed: April 27, 2018
    Publication date: October 31, 2019
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
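
A toy model of the split this entry describes: a global scheduler pushes shared weights from the model memory to every core, and each core then computes on its own activation memory. Class and attribute names are assumptions.

```python
# Global scheduler distributes weights; each core computes locally.
import numpy as np

class Core:
    def __init__(self):
        self.weights = None                  # filled by the global scheduler
        self.activations = None              # local activation memory

    def load(self, weights):
        self.weights = weights

    def compute(self):
        self.activations = self.activations @ self.weights
        return self.activations

model_memory = np.eye(2) * 2.0               # network parameters (synaptic weights)
cores = [Core() for _ in range(4)]
for i, core in enumerate(cores):             # the scheduler's distribution loop
    core.load(model_memory)
    core.activations = np.full(2, float(i))
print([core.compute().tolist() for core in cores])
# [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0], [6.0, 6.0]]
```
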
  • Patent number: 10460228
    Abstract: Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit.
    Type: Grant
    Filed: November 18, 2015
    Date of Patent: October 29, 2019
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Publication number: 20190325295
    Abstract: Neural inference chips and cores adapted to provide time, space, and energy efficient neural inference via parallelism and on-chip memory are provided. In various embodiments, the neural inference chips comprise: a plurality of neural cores interconnected by an on-chip network; a first on-chip memory for storing a neural network model, the first on-chip memory being connected to each of the plurality of cores by the on-chip network; and a second on-chip memory for storing input and output data, the second on-chip memory being connected to each of the plurality of cores by the on-chip network.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Dharmendra S. Modha, John V. Arthur, Jun Sawada, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Andrew S. Cassidy, Pallab Datta, Myron D. Flickner, Hartmut Penner, Jennifer Klamo
  • Patent number: 10452540
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 10454759
    Abstract: Embodiments of the invention provide a neurosynaptic network circuit comprising multiple neurosynaptic devices including a plurality of neurosynaptic core circuits for processing one or more data packets. The neurosynaptic devices further include a routing system for routing the data packets between the core circuits. At least one of the neurosynaptic devices is faulty. The routing system is configured for selectively bypassing each faulty neurosynaptic device when processing and routing the data packets.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 10445642
    Abstract: The present invention relates to unsupervised, supervised and reinforcement learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: October 15, 2019
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Publication number: 20190303749
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. Each of a plurality of adders is operatively coupled to one of the groups of multipliers and is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. Each of a plurality of function blocks is operatively coupled to one of the plurality of adders and is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
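
The multiply, group-add, function-apply pipeline above reduces to a few array operations; numpy stands in for the parallel hardware, and the group size and activation function are arbitrary choices.

```python
# Multipliers -> per-group adders -> per-adder function blocks.
import numpy as np

def inference_element(weights, activations, group_size, fn=np.tanh):
    """weights, activations: flat arrays whose length is a multiple of group_size."""
    products = weights * activations             # all multipliers, in parallel
    groups = products.reshape(-1, group_size)    # equal-sized multiplier groups
    partial_sums = groups.sum(axis=1)            # one adder per group
    return fn(partial_sums)                      # one function block per adder

w = np.array([0.5, 0.5, 1.0, -1.0])
a = np.array([1.0, 1.0, 2.0, 2.0])
print(inference_element(w, a, group_size=2))     # tanh([1.0, 0.0])
```
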
  • Publication number: 20190303741
    Abstract: Defect resistant designs for location-sensitive neural network processor arrays are provided. In various embodiments, a plurality of neural network processor cores is arrayed in a grid. The grid has a plurality of rows and a plurality of columns. A network interconnects at least those of the plurality of neural network processor cores that are adjacent within the grid. The network is adapted to bypass a defective core of the plurality of neural network processor cores by providing a connection between two non-adjacent rows or columns of the grid, and transparently routing messages between the two non-adjacent rows or columns, past the defective core.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
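
One plausible realization of the bypass (the mapping-table approach below is an assumption, not the claimed circuit): keep a logical-to-physical row map that skips any row containing a defective core, so messages route transparently past it.

```python
# Logical rows are remapped onto the physical rows that remain usable.
def build_row_map(num_rows, defective_rows):
    usable = [r for r in range(num_rows) if r not in defective_rows]
    return {logical: physical for logical, physical in enumerate(usable)}

row_map = build_row_map(num_rows=6, defective_rows={2})
print(row_map)  # {0: 0, 1: 1, 2: 3, 3: 4, 4: 5}
# A message addressed to logical row 2 is delivered to physical row 3,
# bypassing the defective row entirely.
```
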
  • Publication number: 20190303740
    Abstract: Block transfer of neuron output values through data memory for neurosynaptic processors is provided; in some embodiments this includes time-multiplexing. A neurosynaptic core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. Synaptic weights for one of a plurality of logical cores are read. The neurosynaptic core is configured to implement the one of the plurality of logical cores using the synaptic weights. At least one data block is provided as contiguous input activations to the neurosynaptic core. The input activations are processed by the neurosynaptic core to determine at least one contiguous block of output activations.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: John V. Arthur, Pallab Datta, Steven K. Esser, Dharmendra S. Modha, Jun Sawada
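
The time-multiplexing loop above can be sketched directly: for each logical core, load its synaptic weights onto the physical core, process the contiguous input block, and append to a contiguous output block. Names and the per-core compute are assumptions.

```python
# Time-multiplexing logical cores on one physical neurosynaptic core.
import numpy as np

def run_logical_cores(weight_store, input_block, core_fn):
    """Configure the physical core with each logical core's weights in turn."""
    outputs = [core_fn(weights, input_block) for weights in weight_store]
    return np.concatenate(outputs)       # one contiguous block of outputs

weight_store = [np.eye(3), -np.eye(3)]   # two logical cores' synaptic weights
block = np.array([1.0, 2.0, 3.0])        # contiguous input activations
print(run_logical_cores(weight_store, block, lambda w, x: x @ w))
# [ 1.  2.  3. -1. -2. -3.]
```
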