Patents by Inventor John V. Arthur

John V. Arthur has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10929747
    Abstract: One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
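
A minimal Python sketch of the dual-mode neuron update described in the entry above: integration is deterministic, while an optional stochastic mode perturbs the leak and threshold with drawn random values. The update rule, field names, and scaling constants are illustrative assumptions, not the patented circuit.

```python
import random

def update_neuron(potential, syn_input, leak, threshold,
                  stochastic=False, rng=random):
    """Advance one digital neuron by one tick, deterministically or stochastically."""
    potential += syn_input                    # deterministic integration of inputs
    if stochastic:
        # Stochastic leak (assumed scheme): apply the leak only when a
        # drawn random number falls below the leak magnitude.
        if rng.random() < abs(leak) / 256.0:
            potential += leak
        # Stochastic threshold (assumed scheme): jitter the firing threshold.
        threshold += rng.randint(0, 15)
    else:
        potential += leak                     # deterministic leak every tick
    spiked = potential >= threshold
    if spiked:
        potential = 0                         # reset after firing
    return potential, spiked
```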
  • Publication number: 20200379841
    Abstract: A computer-implemented method according to one embodiment includes: prior to an execution of a deterministic program, determining a pre-computed check sequence for a first plurality of values associated with the execution of the deterministic program; during the execution of the deterministic program, determining a runtime check sequence for a second plurality of values associated with the execution of the deterministic program; comparing the pre-computed check sequence to the runtime check sequence; and identifying one or more errors associated with the execution of the deterministic program, based on the comparing.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Andrew S. Cassidy, Dharmendra S. Modha, John V. Arthur, Jun Sawada
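
A hedged sketch of the check-sequence idea in the entry above (granted below as patent 10831595): digest the values a deterministic program should produce ahead of time, digest what it actually produces at runtime, and flag any chunk where the two disagree. The chunk size and the SHA-256 digest are assumptions for illustration.

```python
import hashlib

def check_sequence(values, chunk=4):
    """Digest successive chunks of values into a sequence of checksums."""
    seq = []
    for i in range(0, len(values), chunk):
        block = ",".join(str(v) for v in values[i:i + chunk]).encode()
        seq.append(hashlib.sha256(block).hexdigest())
    return seq

expected = check_sequence([1, 2, 3, 4, 5, 6, 7, 8])   # pre-computed before execution
observed = check_sequence([1, 2, 3, 4, 5, 99, 7, 8])  # gathered during execution
errors = [i for i, (e, o) in enumerate(zip(expected, observed)) if e != o]
print("chunks with errors:", errors)                  # -> chunks with errors: [1]
```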
  • Publication number: 20200364535
    Abstract: Embodiments of the invention relate to a globally asynchronous and locally synchronous neuromorphic network. One embodiment comprises generating a synchronization signal that is distributed to a plurality of neural core circuits. In response to the synchronization signal, in at least one core circuit, incoming spike events maintained by said at least one core circuit are processed to generate an outgoing spike event. Spike events are asynchronously communicated between the core circuits via a routing fabric comprising multiple asynchronous routers.
    Type: Application
    Filed: October 29, 2018
    Publication date: November 19, 2020
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Paul A. Merolla, Dharmendra S. Modha
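
An illustrative Python sketch of the globally asynchronous, locally synchronous scheme described above: a global synchronization signal tells every core to process its buffered spike events, while delivery between cores goes through router queues with no shared clock. The toy firing rule stands in for the core's actual computation.

```python
from collections import defaultdict, deque

class Core:
    def __init__(self, targets):
        self.inbox = deque()                  # incoming spike events
        self.targets = targets                # cores this core projects to

    def on_sync(self, routers):
        """Local synchronous step: integrate buffered spikes, maybe fire."""
        fired = len(self.inbox) >= 1          # toy integrate-and-fire rule
        self.inbox.clear()
        if fired:
            for t in self.targets:
                routers[t].append("spike")    # asynchronous send via the fabric

cores = {0: Core(targets=[1]), 1: Core(targets=[])}
routers = defaultdict(deque)
routers[0].append("spike")                    # seed an input event
for tick in range(3):                         # the synchronization signal
    for cid, core in cores.items():
        core.inbox.extend(routers.pop(cid, ()))
        core.on_sync(routers)
```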
  • Patent number: 10838860
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: November 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
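
A hedged sketch of the memory-mapped interface in the entry above: a write into a reserved address window is translated to a (core, axon) network address, and inbound network messages are stored in a buffer that the same memory map exposes. The window base and address layout are assumptions for illustration.

```python
AXONS_PER_CORE = 256
WINDOW_BASE = 0x4000_0000

def translate(write_addr):
    """Translate a memory-map write address into a network input location."""
    offset = write_addr - WINDOW_BASE
    return offset // AXONS_PER_CORE, offset % AXONS_PER_CORE   # (core, axon)

inbound_buffer = {}                       # accessible through the memory map

def on_message(buffer_addr, data):
    """Store data from a source network at a location derived from its address."""
    inbound_buffer[WINDOW_BASE + buffer_addr] = data

print(translate(0x4000_0105))             # -> (1, 5): core 1, axon 5
on_message(0x20, b"\x01")
print(inbound_buffer[WINDOW_BASE + 0x20]) # -> b'\x01'
```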
  • Patent number: 10839287
    Abstract: Embodiments of the invention relate to a globally asynchronous and locally synchronous neuromorphic network. One embodiment comprises generating a synchronization signal that is distributed to a plurality of neural core circuits. In response to the synchronization signal, in at least one core circuit, incoming spike events maintained by said at least one core circuit are processed to generate an outgoing spike event. Spike events are asynchronously communicated between the core circuits via a routing fabric comprising multiple asynchronous routers.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: November 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Paul A. Merolla, Dharmendra S. Modha
  • Patent number: 10831595
    Abstract: A computer-implemented method according to one embodiment includes: prior to an execution of a deterministic program, determining a pre-computed check sequence for a first plurality of values associated with the execution of the deterministic program; during the execution of the deterministic program, determining a runtime check sequence for a second plurality of values associated with the execution of the deterministic program; comparing the pre-computed check sequence to the runtime check sequence; and identifying one or more errors associated with the execution of the deterministic program, based on the comparing.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Andrew S. Cassidy, Dharmendra S. Modha, John V. Arthur, Jun Sawada
  • Patent number: 10785745
    Abstract: Embodiments of the invention provide a system for scaling multi-core neurosynaptic networks. The system comprises multiple network circuits. Each network circuit comprises a plurality of neurosynaptic core circuits. Each core circuit comprises multiple electronic neurons interconnected with multiple electronic axons via a plurality of electronic synapse devices. An interconnect fabric couples the network circuits. Each network circuit has at least one network interface. Each network interface enables data exchange between its network circuit and another network circuit by tagging each outgoing data packet with corresponding routing information.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
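
A minimal sketch of the interface tagging described above: each packet leaving a network circuit is wrapped with routing information naming the destination circuit so the interconnect fabric can forward it, and the tag is stripped on arrival. The header fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TaggedPacket:
    dest_circuit: int    # routing information added by the network interface
    src_circuit: int
    payload: bytes       # the original intra-circuit spike packet

def egress(circuit_id, packet, dest_circuit):
    """Sending interface: tag an outbound packet with routing information."""
    return TaggedPacket(dest_circuit, circuit_id, packet)

def ingress(circuit_id, tagged):
    """Receiving interface: strip the tag and hand the payload to the local fabric."""
    assert tagged.dest_circuit == circuit_id, "fabric misrouted the packet"
    return tagged.payload

pkt = egress(0, b"\x2a", dest_circuit=3)
print(ingress(3, pkt))   # -> b'*'
```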
  • Patent number: 10769519
    Abstract: One embodiment of the invention provides a system comprising at least one data-to-spike converter unit for converting input numeric data received by the system to spike event data. Each data-to-spike converter unit is configured to support one or more spike codes.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: September 8, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Steven K. Esser, Myron D. Flickner, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada, Benjamin G. Shaw
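
As one example of a spike code such a converter might support, here is a rate-coding sketch: a numeric input becomes a spike train whose spike count over a window is proportional to the value. The window length and scaling are assumptions; other codes (time-to-spike, population) would slot in the same way.

```python
def rate_encode(value, window=10, max_value=255):
    """Convert a number in [0, max_value] into a binary spike train."""
    n_spikes = round(value / max_value * window)
    step = window / max(n_spikes, 1)
    ticks = {int(i * step) for i in range(n_spikes)}   # spread spikes evenly
    return [1 if t in ticks else 0 for t in range(window)]

print(rate_encode(128))   # -> [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```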
  • Patent number: 10755165
    Abstract: One embodiment of the invention provides a system comprising at least one spike-to-data converter unit for converting spike event data generated by neurons to output numeric data. Each spike-to-data converter unit is configured to support one or more spike codes.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Steven K. Esser, Myron D. Flickner, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada, Benjamin G. Shaw
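
The converse of the rate-coding sketch above, matching this entry's direction: count the spikes a neuron emitted over the window and rescale them back to a numeric value. Again an illustrative code choice, not the patented unit.

```python
def rate_decode(spike_train, max_value=255):
    """Convert a binary spike train back into a number in [0, max_value]."""
    return round(sum(spike_train) / len(spike_train) * max_value)

print(rate_decode([1, 0, 1, 0, 1, 0, 1, 0, 1, 0]))   # -> 128
```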
  • Patent number: 10740282
    Abstract: Embodiments of the invention relate to processor arrays, and in particular, a processor array with interconnect circuits for bonding semiconductor dies. One embodiment comprises multiple semiconductor dies and at least one interconnect circuit for exchanging signals between the dies. Each die comprises at least one processor core circuit. Each interconnect circuit corresponds to a die of the processor array. Each interconnect circuit comprises one or more attachment pads for interconnecting a corresponding die with another die, and at least one multiplexor structure configured for exchanging bus signals in a reversed order.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, John E. Barth, Jr., Andrew S. Cassidy, Subramanian S. Iyer, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
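
A hedged model of the bus-reversal idea in this entry: when a die is flipped for bonding, its neighbor sees the bus wires in mirror order, so a multiplexor structure selects between straight and reversed wiring. The bus is modeled as a plain list of bit values.

```python
def bus_mux(bus, reverse):
    """Pass bus signals through straight, or exchange them in reversed order."""
    return bus[::-1] if reverse else bus[:]

signals = [0, 1, 1, 0, 1, 0, 0, 0]
print(bus_mux(signals, reverse=True))    # -> [0, 0, 0, 1, 0, 1, 1, 0]
```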
  • Patent number: 10713561
    Abstract: Embodiments of the invention relate to a multiplexed neural core circuit. One embodiment comprises a core circuit including a memory device that maintains neuronal attributes for multiple neurons. The memory device has multiple entries. Each entry maintains neuronal attributes for a corresponding neuron. The core circuit further comprises a controller for managing the memory device. In response to neuronal firing events targeting one of said neurons, the controller retrieves neuronal attributes for the target neuron from a corresponding entry of the memory device, and integrates said firing events based on the retrieved neuronal attributes to generate a firing event for the target neuron.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Paul A. Merolla, Dharmendra S. Modha
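
A minimal sketch of the multiplexing described above: one controller and one memory serve many neurons by fetching the target neuron's entry, integrating the firing events aimed at it, and writing the entry back. The attribute fields and the update rule are assumptions.

```python
memory = [                                       # one entry per neuron
    {"potential": 0, "threshold": 3, "weight": 1},
    {"potential": 2, "threshold": 3, "weight": 2},
]

def integrate(target_neuron, n_events):
    """Controller: fetch one neuron's attributes, integrate events, write back."""
    entry = memory[target_neuron]                # retrieve neuronal attributes
    entry["potential"] += n_events * entry["weight"]
    fired = entry["potential"] >= entry["threshold"]
    if fired:
        entry["potential"] = 0                   # reset on firing
    return fired

print(integrate(1, 1))   # -> True (2 + 2 crosses the threshold of 3)
```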
  • Publication number: 20200202205
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. Each of a plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. Each of a plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Application
    Filed: March 4, 2020
    Publication date: June 25, 2020
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
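
A minimal sketch of the datapath in this entry (which also appears below as granted patent 10621489): each group of multipliers applies weights to activations, a per-group adder reduces the products to a partial sum, and a function block applies an activation function. The group size and the ReLU choice are assumptions.

```python
GROUP_SIZE = 4

def group_compute(weights, activations):
    """One multiplier group, its adder, and its function block."""
    products = [w * a for w, a in zip(weights, activations)]   # multipliers
    partial_sum = sum(products)                                # adder
    return max(0, partial_sum)                                 # function block (ReLU)

weights     = [1, -2, 3, -4, 2, 0, 1, 1]
activations = [5,  1, 2,  1, 3, 9, 1, 1]
outputs = [group_compute(weights[i:i + GROUP_SIZE], activations[i:i + GROUP_SIZE])
           for i in range(0, len(weights), GROUP_SIZE)]
print(outputs)   # -> [5, 8], one output value per group
```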
  • Publication number: 20200167158
    Abstract: A device for controlling neural inference processor cores is provided, including a compound instruction set architecture. The device comprises an instruction memory, which comprises a plurality of instructions for controlling a neural inference processor core. Each of the plurality of instructions comprises a control operation. The device further comprises a program counter. The device further comprises at least one loop counter register. The device is adapted to execute the plurality of instructions. Executing the plurality of instructions comprises: reading an instruction from the instruction memory based on a value of the program counter; updating the at least one loop counter register according to the control operation of the instruction; and updating the program counter according to the control operation of the instruction and a value of the at least one loop counter register.
    Type: Application
    Filed: November 28, 2018
    Publication date: May 28, 2020
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Michael V. Debole, Steven K. Esser, Myron D. Flickner, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
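
An illustrative interpreter for the control scheme in this entry: every instruction carries a control operation that can update a loop counter register and steer the program counter, so a short program can express a long, repetitive inference loop. The opcodes are invented for the sketch.

```python
def run(program):
    pc, loop_counter = 0, 0                    # program counter, loop counter register
    while pc < len(program):
        op, arg = program[pc]                  # read instruction at the program counter
        if op == "SET_LOOP":
            loop_counter = arg
            pc += 1
        elif op == "LOOP_BACK":
            loop_counter -= 1                  # update the loop counter register
            pc = arg if loop_counter > 0 else pc + 1
        else:                                  # a compute operation, e.g. a matmul step
            print(op, "with", loop_counter, "iterations remaining")
            pc += 1

run([("SET_LOOP", 3), ("MATMUL", None), ("LOOP_BACK", 1)])   # runs MATMUL 3 times
```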
  • Patent number: 10650301
    Abstract: Embodiments of the invention provide a neurosynaptic system comprising a delay unit for receiving and buffering axonal inputs, and a neural computation unit for generating neuronal outputs by performing a set of computations based on at least one axonal input received by the delay unit. The system further comprises a permutation unit for receiving external inputs to the system, and transmitting external outputs from the system. The permutation unit maps each external input received as either an axonal input to the delay unit or an external output from the system. The permutation unit maps each neuronal output generated by the neural computation unit as either an axonal input to the delay unit or an external output from the system. The neural computation unit comprises multiple electronic neurons, multiple electronic axons, and a plurality of electronic synapse devices interconnecting the neurons with the axons.
    Type: Grant
    Filed: May 8, 2014
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
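
A small sketch of the permutation unit's routing role in this entry: every arriving line, whether an external input or a neuronal output, is mapped either onto an axonal input of the delay unit or onto an external output. The routing table contents are invented for illustration.

```python
DELAY, EXTERNAL = "delay-unit axon", "external output"

routing = {                                 # the permutation (assumed contents)
    ("external_input", 0): (DELAY, 5),      # external input 0 -> delay-unit axon 5
    ("neuron_output", 0):  (DELAY, 2),      # neuron 0 feeds back through the delay unit
    ("neuron_output", 1):  (EXTERNAL, 0),   # neuron 1 leaves the system
}

def permute(source, index):
    """Map one incoming line to its configured destination."""
    return routing[(source, index)]

print(permute("neuron_output", 1))   # -> ('external output', 0)
```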
  • Publication number: 20200117465
    Abstract: Multi-agent instruction execution engines for neural inference processing are provided. In various embodiments, a neural core is provided. The neural core includes an instruction memory. The instruction memory comprises a plurality of instruction streams, each instruction stream associated with one of a plurality of agents. The neural core further comprises a plurality of shared functional units. The neural core is adapted to concurrently execute the plurality of instruction streams on the plurality of associated agents. The execution includes maintaining a separate program counter for each of the plurality of agents, determining a plurality of operations from the instructions of each instruction stream, and directing the operations to the shared functional units. The instructions of each instruction stream are statically scheduled prior to runtime to ensure their execution is conflict free.
    Type: Application
    Filed: October 16, 2018
    Publication date: April 16, 2020
    Inventors: Andrew S. Cassidy, Simon J. Hollis, Hartmut Penner, Jun Sawada, Pallab Datta, John V. Arthur
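
A hedged sketch of the multi-agent execution in this entry: each agent keeps its own program counter over its own instruction stream, and each cycle every agent issues one operation to a shared functional unit. The streams below are pre-arranged so no two agents claim the same unit in the same cycle, mirroring the conflict-free static schedule described; the opcodes are invented.

```python
streams = {                              # one statically scheduled stream per agent
    "agent0": ["MUL", "ADD", "NOP"],
    "agent1": ["NOP", "MUL", "ADD"],
}
pcs = {agent: 0 for agent in streams}    # a separate program counter per agent

for cycle in range(3):
    units_in_use = set()
    for agent, stream in streams.items():
        op = stream[pcs[agent]]
        if op != "NOP":
            assert op not in units_in_use, "static schedule was not conflict-free"
            units_in_use.add(op)         # claim the shared functional unit this cycle
        pcs[agent] += 1
    print(f"cycle {cycle}: units busy = {sorted(units_in_use)}")
```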
  • Publication number: 20200117988
    Abstract: Networks for distributing parameters and data to neural network compute cores. In various embodiments, a neural inference chip comprises a plurality of neural cores and at least one network interconnecting the plurality of neural cores. Each of the plurality of neural cores is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The at least one network is adapted to simultaneously deliver synaptic weights and/or input activations to the plurality of neural cores.
    Type: Application
    Filed: October 11, 2018
    Publication date: April 16, 2020
    Inventors: John V. Arthur, Brian Taba, Rathinakumar Appuswamy, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada
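
A minimal sketch of the distribution step in this entry: one network push delivers the same synaptic weights (or input activations) to every core at once, instead of core by core. The core container is an assumption for illustration.

```python
cores = [{"weights": None, "activations": None} for _ in range(4)]

def broadcast(field, payload):
    """Deliver the same payload to all cores in a single network step."""
    for core in cores:
        core[field] = payload

broadcast("weights", [[1, 0], [0, 1]])
broadcast("activations", [3, 7])
print(cores[2])   # every core now holds identical weights and activations
```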
  • Publication number: 20200117981
    Abstract: Systems for neural network computation are provided. A neural network processor comprises a plurality of neural cores. The neural network processor has one or more processor precisions per activation. The processor is configured to accept data having a processor feature dimension. A transformation circuit is coupled to the neural network processor, and is adapted to: receive an input data tensor having an input precision per channel at one or more features; transform the input data tensor from the input precision to the processor precision; divide the input data into a plurality of blocks, each block conforming to one of the processor feature dimensions; provide each of the plurality of blocks to one of the plurality of neural cores. The neural network processor is adapted to compute, by the plurality of neural cores, output of one or more neural network layers.
    Type: Application
    Filed: October 11, 2018
    Publication date: April 16, 2020
    Inventors: John V. Arthur, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
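
A sketch of the transformation circuit in this entry: quantize an input row from its native precision to an assumed 8-bit processor precision, then divide it along the feature dimension into blocks matching an assumed 4-feature core width, padding the tail.

```python
CORE_FEATURES = 4                  # assumed processor feature dimension

def to_processor_precision(row, in_max=1.0, levels=256):
    """Quantize floats in [0, in_max] to the processor's (assumed 8-bit) precision."""
    return [min(round(v / in_max * (levels - 1)), levels - 1) for v in row]

def to_blocks(row):
    """Divide a feature row into core-sized blocks, zero-padding the tail."""
    padded = row + [0] * (-len(row) % CORE_FEATURES)
    return [padded[i:i + CORE_FEATURES] for i in range(0, len(padded), CORE_FEATURES)]

row = to_processor_precision([0.0, 0.5, 1.0, 0.25, 0.75, 0.1])
print(to_blocks(row))   # -> [[0, 128, 255, 64], [191, 26, 0, 0]], one block per core
```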
  • Patent number: 10621489
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. Each of a plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. Each of a plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
  • Publication number: 20200104718
    Abstract: Parallel processing among arrays of physical neural cores is provided. An array of neural cores is adapted to compute, in parallel, an output activation tensor of a neural network layer. A network is operatively connected to each of the neural cores. The output activation tensor is distributed across the neural cores. An input activation tensor is distributed across the neural cores. A weight tensor is distributed across the neural cores. Each neural core's computation comprises multiplying elements of a portion of the input activation tensor at that core with elements of a portion of the weight tensor at that core, and storing the summed products in a partial sum corresponding to an element of the output activation tensor. Each element of the output activation tensor is computed by accumulating all of the partial sums corresponding to that element via the network.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 2, 2020
    Inventors: Brian Taba, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Jennifer Klamo
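
A small worked example of the accumulation in this entry: each core multiplies its slice of the input activations with its slice of the weights into a local partial sum, and the partial sums for the same output element are accumulated via the network. The even slicing is illustrative.

```python
input_act = [1, 2, 3, 4]          # slice of the input activation tensor
weights   = [5, 6, 7, 8]          # matching slice of the weight tensor
n_cores   = 2

slice_len = len(input_act) // n_cores
partials = []
for c in range(n_cores):          # each core computes its partial sum in parallel
    lo, hi = c * slice_len, (c + 1) * slice_len
    partials.append(sum(x * w for x, w in zip(input_act[lo:hi], weights[lo:hi])))

output_element = sum(partials)    # accumulation of partial sums via the network
print(partials, output_element)   # -> [17, 53] 70
```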
  • Publication number: 20200065658
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array having multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Application
    Filed: October 28, 2019
    Publication date: February 27, 2020
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
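
Finally, a hedged sketch of the scheduler role in the last entry: arriving spike events are held and released to their target axon at a fixed later tick, so delivery is deterministic regardless of when an event physically arrived. The fixed two-tick delay is an assumed simplification.

```python
from collections import defaultdict

schedule = defaultdict(list)      # tick -> [(axon, event), ...]

def enqueue(now, axon, event, delay=2):
    """Scheduler: book the event for deterministic delivery at now + delay."""
    schedule[now + delay].append((axon, event))

def deliver(tick):
    """Release every event booked for this tick to its selected axon."""
    for axon, event in schedule.pop(tick, []):
        print(f"tick {tick}: axon {axon} receives {event}")

enqueue(0, axon=7, event="spike")
for t in range(4):
    deliver(t)                    # the spike appears exactly at tick 2
```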