Patents by Inventor John V. Arthur

John V. Arthur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200019836
    Abstract: Networks of distributed neural cores are provided with hierarchical parallelism. In various embodiments, a plurality of neural cores is provided. Each of the plurality of neural cores comprises a plurality of vector compute units configured to operate in parallel. Each of the plurality of neural cores is configured to compute in parallel output activations by applying its plurality of vector compute units to input activations. Each of the plurality of neural cores is assigned a subset of output activations of a layer of a neural network for computation. Upon receipt of a subset of input activations of the layer of the neural network, each of the plurality of neural cores computes a partial sum for each of its assigned output activations, and computes its assigned output activations from at least the computed partial sums.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 16, 2020
    Inventors: John V. Arthur, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
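A minimal NumPy sketch of the partial-sum scheme the abstract above describes: each core owns a subset of a layer's output activations, receives subsets of input activations, accumulates one partial sum per owned output, and finishes its assigned outputs from those partial sums. Core counts, layer sizes, and the final ReLU are illustrative assumptions, not details from the filing.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the filing).
N_IN, N_OUT, N_CORES = 8, 6, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((N_OUT, N_IN))
x = rng.standard_normal(N_IN)

# Each core is assigned a subset of output activations and receives
# subsets of input activations, producing one partial sum per output.
out_slices = np.array_split(np.arange(N_OUT), N_CORES)
in_slices = np.array_split(np.arange(N_IN), N_CORES)

partials = {}
for core, outs in enumerate(out_slices):
    for ins in in_slices:  # each arriving input-activation subset
        psum = W[np.ix_(outs, ins)] @ x[ins]  # vector units in parallel
        partials[core] = partials.get(core, 0) + psum

# Assigned outputs are computed from the accumulated partial sums.
y = np.concatenate([np.maximum(partials[c], 0) for c in range(N_CORES)])
assert np.allclose(y, np.maximum(W @ x, 0))
```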
  • Publication number: 20200012929
    Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 9, 2020
    Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
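One way to picture the distribute-then-execute flow in the entry above: the chip broadcasts a single microcode program to every core over the on-chip network, and all cores then step through it in lockstep to compute a layer. The Core class and the operation names are hypothetical stand-ins, not the chip's actual microcode format.

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    core_id: int
    program: list = field(default_factory=list)  # received core microcode

    def execute(self, step):
        return f"core {self.core_id}: {self.program[step]}"

def initialize_chip(cores, microcode):
    # Distribute the same core microcode to every core via the
    # (abstracted) on-chip network.
    for core in cores:
        core.program = list(microcode)

cores = [Core(i) for i in range(4)]
initialize_chip(cores, ["load_weights", "matmul", "activate", "store"])

# Synchronous execution: every core performs the same step together.
for step in range(4):
    results = [core.execute(step) for core in cores]
```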
  • Publication number: 20200004678
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Application
    Filed: September 11, 2019
    Publication date: January 2, 2020
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
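A toy model of the address translation described above: a write into a region of the memory map is translated to an input location (here a core/axon pair) of the destination network, and inbound messages from a source network land at buffer locations derived from their addresses, readable through the memory map. The layout constants are invented for illustration.

```python
# Hypothetical memory-map layout (illustrative constants).
INPUT_BASE, AXONS_PER_CORE = 0x4000_0000, 256

def translate(write_addr):
    """Map a memory-map write address to a neural-network input location."""
    offset = write_addr - INPUT_BASE
    return divmod(offset, AXONS_PER_CORE)  # -> (core, axon)

buffer = {}  # memory-mapped buffer for messages from source networks

def on_message(data, addr):
    # A buffer location is determined from the message address; the
    # host later reads the data back through the memory map.
    buffer[addr % 1024] = data

core, axon = translate(0x4000_0105)  # -> core 1, axon 5
on_message(data=0xAB, addr=0x2A)
```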
  • Publication number: 20190385046
    Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
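A sketch of shape-driven partitioning as the abstract above frames it: given a layer's dimensions and each core's vector width, split the core array so each partition covers one tile of the layer. The tiling rule below is a plausible reading for illustration, not the patented method itself.

```python
import math

def partition_cores(layer_dims, vector_width, n_cores):
    """Tile a (height, width, features) layer across a pool of cores.

    Assumption: each core's vector units cover `vector_width` features
    at once, so feature tiles are sized to the vector width.
    """
    h, w, f = layer_dims
    f_tiles = math.ceil(f / vector_width)
    spatial = max(1, n_cores // f_tiles)
    return {"feature_tiles": f_tiles, "spatial_partitions": spatial}

print(partition_cores(layer_dims=(32, 32, 128), vector_width=32, n_cores=16))
# -> 4 feature tiles, 4 spatial partitions of the 16 cores
```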
  • Publication number: 20190385048
    Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
    Type: Application
    Filed: June 19, 2018
    Publication date: December 19, 2019
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
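In NumPy terms, the core datapath in the entry above reads as below; the bias add and ReLU are stand-ins for whichever vector functions the vector processor actually applies.

```python
import numpy as np

rng = np.random.default_rng(1)
weight_memory = rng.standard_normal((16, 32))  # weight matrix W
activation_memory = rng.standard_normal(32)    # activation vector x
bias = rng.standard_normal(16)                 # a second vector source

# Vector-matrix multiplier: W @ x.
partial = weight_memory @ activation_memory

# Vector processor: apply vector functions (bias add + ReLU, chosen
# for illustration) to one or more input vectors.
output = np.maximum(partial + bias, 0.0)
```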
  • Patent number: 10504021
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array having multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers it to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: December 10, 2019
    Assignees: International Business Machines Corporation, Cornell University
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
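A compact event-driven model of the grant above: the scheduler holds spike events until their scheduled delivery tick, then drives the selected axons; neurons integrate the weighted inputs and fire on threshold crossing. Delay, threshold, and array sizes are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

N_AXONS, N_NEURONS, THRESHOLD, DELAY = 4, 3, 1.0, 2
rng = np.random.default_rng(2)
synapses = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))  # digital synapse array
potential = np.zeros(N_NEURONS)
schedule = defaultdict(list)  # scheduler: delivery tick -> axons

def send_spike(axon, now):
    schedule[now + DELAY].append(axon)  # deterministic event delivery

send_spike(axon=0, now=0)
send_spike(axon=2, now=0)
for t in range(4):
    for axon in schedule.pop(t, []):
        potential += synapses[axon]   # integrate delivered input spikes
    fired = potential > THRESHOLD     # spike event on threshold crossing
    potential[fired] = 0.0
```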
  • Publication number: 20190372831
    Abstract: Embodiments of the invention provide a neurosynaptic network circuit comprising multiple neurosynaptic devices including a plurality of neurosynaptic core circuits for processing one or more data packets. The neurosynaptic devices further include a routing system for routing the data packets between the core circuits. At least one of the neurosynaptic devices is faulty. The routing system is configured for selectively bypassing each faulty neurosynaptic device when processing and routing the data packets.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 5, 2019
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
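The selective-bypass behavior above can be pictured as path selection over a device graph that simply excludes faulty nodes; breadth-first search here is an illustrative routing policy, not the circuit's actual algorithm.

```python
from collections import deque

def route(links, faulty, src, dst):
    """Shortest-hop path from src to dst that bypasses faulty devices."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in faulty:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no fault-free path exists

# A 2x2 mesh of neurosynaptic devices where device "B" is faulty.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(route(links, faulty={"B"}, src="A", dst="D"))  # ['A', 'C', 'D']
```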
  • Publication number: 20190332924
    Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
    Type: Application
    Filed: April 27, 2018
    Publication date: October 31, 2019
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
  • Publication number: 20190325295
    Abstract: Neural inference chips and cores adapted to provide time-, space-, and energy-efficient neural inference via parallelism and on-chip memory are provided. In various embodiments, the neural inference chips comprise: a plurality of neural cores interconnected by an on-chip network; a first on-chip memory for storing a neural network model, the first on-chip memory being connected to each of the plurality of cores by the on-chip network; and a second on-chip memory for storing input and output data, the second on-chip memory being connected to each of the plurality of cores by the on-chip network.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Dharmendra S. Modha, John V. Arthur, Jun Sawada, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Andrew S. Cassidy, Pallab Datta, Myron D. Flickner, Hartmut Penner, Jennifer Klamo
  • Patent number: 10454759
    Abstract: Embodiments of the invention provide a neurosynaptic network circuit comprising multiple neurosynaptic devices including a plurality of neurosynaptic core circuits for processing one or more data packets. The neurosynaptic devices further include a routing system for routing the data packets between the core circuits. At least one of the neurosynaptic devices is faulty. The routing system is configured for selectively bypassing each faulty neurosynaptic device when processing and routing the data packets.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 10452540
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Publication number: 20190303741
    Abstract: Defect-resistant designs for location-sensitive neural network processor arrays are provided. In various embodiments, a plurality of neural network processor cores is arrayed in a grid. The grid has a plurality of rows and a plurality of columns. A network interconnects at least those of the plurality of neural network processor cores that are adjacent within the grid. The network is adapted to bypass a defective core of the plurality of neural network processor cores by providing a connection between two non-adjacent rows or columns of the grid, and transparently routing messages between the two non-adjacent rows or columns, past the defective core.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
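One way to see the row/column bypass above: a logical-to-physical row map that skips defective rows, so messages addressed to a logical row transparently reach the next good physical row. The mapping function is an assumption, not the patented routing circuit.

```python
def row_map(n_physical_rows, defective_rows):
    """Map logical rows onto the physical grid, skipping defective rows."""
    good = [r for r in range(n_physical_rows) if r not in defective_rows]
    return {logical: physical for logical, physical in enumerate(good)}

# A 5-row grid whose row 2 holds a defective core: logical rows 2 and 3
# shift down, making physical rows 1 and 3 (non-adjacent) neighbors.
print(row_map(5, defective_rows={2}))  # {0: 0, 1: 1, 2: 3, 3: 4}
```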
  • Publication number: 20190303740
    Abstract: Block transfer of neuron output values through data memory for neurosynaptic processors is provided, in some embodiments with time-multiplexing. A neurosynaptic core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. Synaptic weights for one of a plurality of logical cores are read. The neurosynaptic core is configured to implement the one of the plurality of logical cores using the synaptic weights. At least one data block is provided as contiguous input activations to the neurosynaptic core. The input activations are processed by the neurosynaptic core to determine at least one contiguous block of output activations.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: John V. Arthur, Pallab Datta, Steven K. Esser, Dharmendra S. Modha, Jun Sawada
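The time-multiplexing loop above reads roughly as follows: one physical core serially takes on each logical core's weights and processes a contiguous block of input activations into a contiguous block of outputs. Shapes and the plain matrix product are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
logical_weights = [rng.standard_normal((4, 4)) for _ in range(3)]
data_memory = rng.standard_normal((3, 4))  # contiguous input blocks

output_blocks = []
for W, block in zip(logical_weights, data_memory):
    # Configure the single physical core as this logical core, then
    # process one contiguous block of input activations.
    output_blocks.append(W @ block)

# Outputs return to data memory as one contiguous block.
outputs = np.stack(output_blocks)
```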
  • Publication number: 20190303749
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. Each of a plurality of adders is operatively coupled to one of the groups of multipliers and is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. Each of a plurality of function blocks is operatively coupled to one of the plurality of adders and is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
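The three-stage dataflow above (grouped multipliers, one adder per group, one function block per adder) maps directly onto array operations; the group size and the sigmoid are illustrative choices.

```python
import numpy as np

N_GROUPS, GROUP_SIZE = 4, 8  # equal-sized multiplier groups (sizes assumed)
rng = np.random.default_rng(4)
weights = rng.standard_normal((N_GROUPS, GROUP_SIZE))
activations = rng.standard_normal((N_GROUPS, GROUP_SIZE))

products = weights * activations               # all multipliers, in parallel
partial_sums = products.sum(axis=1)            # one adder per group
outputs = 1.0 / (1.0 + np.exp(-partial_sums))  # one function block per adder
```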
  • Publication number: 20190294950
    Abstract: Embodiments of the invention provide a system and circuit interconnecting peripheral devices to neurosynaptic core circuits. The neurosynaptic system includes an interconnect that includes different types of communication channels. A device connects to the neurosynaptic system via the interconnect.
    Type: Application
    Filed: May 30, 2019
    Publication date: September 26, 2019
    Inventors: Filipp A. Akopyan, Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 10410109
    Abstract: Embodiments of the invention provide a system and circuit interconnecting peripheral devices to neurosynaptic core circuits. The neurosynaptic system includes an interconnect that includes different types of communication channels. A device connects to the neurosynaptic system via the interconnect.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: September 10, 2019
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Publication number: 20190228289
    Abstract: Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: John V. Arthur, Bernard V. Brezzo, Leland Chang, Daniel J. Friedman, Paul A. Merolla, Dharmendra S. Modha, Robert K. Montoye, Jae-sun Seo, Jose A. Tierno
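A bare-bones picture of the time-division multiplexing described above: one shared update datapath walks the stored attributes of many neurons each time step, integrating that step's incoming firing events, after which all maintained attributes are updated. The attribute layout and event list are invented for illustration.

```python
import numpy as np

N_NEURONS, THRESHOLD = 8, 2.0
potentials = np.zeros(N_NEURONS)            # neuron attributes in memory
events = {0: [1, 3], 1: [3], 2: [1, 3, 5]}  # time step -> target neurons

for t in range(3):
    # One physical circuit serves every neuron in turn (time-division
    # multiplexing): integrate this step's incoming firing events...
    for target in events.get(t, []):
        potentials[target] += 1.0
    # ...then update the maintained attributes for the step.
    fired = potentials >= THRESHOLD
    potentials[fired] = 0.0
```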
  • Publication number: 20190197394
    Abstract: Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state.
    Type: Application
    Filed: February 28, 2019
    Publication date: June 27, 2019
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Paul A. Merolla, Dharmendra S. Modha
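The lookup-table mechanism above can be sketched as a transition table keyed by (current state, incoming spike?): the controller reads a neuron's stored state, consults the table, and writes the next state back. The states and transitions here are invented for illustration.

```python
# Hypothetical neuronal states and transition table (illustrative only).
REST, READY, FIRED = "rest", "ready", "fired"
TRANSITIONS = {  # (current state, received spike?) -> next state
    (REST, True): READY, (REST, False): REST,
    (READY, True): FIRED, (READY, False): REST,
    (FIRED, True): READY, (FIRED, False): REST,
}

neuron_states = {0: REST, 1: READY, 2: FIRED}  # memory device contents

def controller_update(neuron, spiked):
    # Controller: read state, look up the transition, write state back.
    neuron_states[neuron] = TRANSITIONS[(neuron_states[neuron], spiked)]

controller_update(neuron=1, spiked=True)  # neuron 1: ready -> fired
```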
  • Patent number: 10331998
    Abstract: Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: June 25, 2019
    Assignee: International Business Machines Corporation
    Inventors: John V. Arthur, Bernard V. Brezzo, Leland Chang, Daniel J. Friedman, Paul A. Merolla, Dharmendra S. Modha, Robert K. Montoye, Jae-sun Seo, Jose A. Tierno
  • Publication number: 20190156209
    Abstract: Embodiments of the invention relate to a scalable neural hardware for the noisy-OR model of Bayesian networks. One embodiment comprises a neural core circuit including a pseudo-random number generator for generating random numbers. The neural core circuit further comprises a plurality of incoming electronic axons, a plurality of neural modules, and a plurality of electronic synapses interconnecting the axons to the neural modules. Each synapse interconnects an axon with a neural module. Each neural module receives incoming spikes from interconnected axons. Each neural module represents a noisy-OR gate. Each neural module spikes probabilistically based on at least one random number generated by the pseudo-random number generator unit.
    Type: Application
    Filed: November 30, 2018
    Publication date: May 23, 2019
    Inventors: John V. Arthur, Steven K. Esser, Paul A. Merolla, Dharmendra S. Modha
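The noisy-OR behavior above reduces to a single rule per neural module: the module spikes with probability 1 - prod(1 - p_i) over its active input axons, realized here by comparing per-axon probabilities against pseudo-random draws. The probabilities are illustrative, and NumPy's generator stands in for the on-core PRNG.

```python
import numpy as np

rng = np.random.default_rng(5)  # stands in for the core's PRNG unit
p = np.array([0.3, 0.5, 0.2])   # per-axon success probabilities (assumed)
active = np.array([True, True, False])  # incoming spikes this step

# Noisy-OR: the module fires if any active cause independently succeeds,
# i.e. with probability 1 - prod(1 - p_i) over the active axons.
succeeds = active & (rng.random(p.shape) < p)
spike = bool(succeeds.any())

analytic = 1 - np.prod(1 - p[active])  # = 0.65 for these values
```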