Patents by Inventor Jun Sawada

Jun Sawada has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210312305
    Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer.
    Type: Application
    Filed: April 7, 2020
    Publication date: October 7, 2021
    Inventors: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
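    The mechanism above, an instruction sliding through buffer positions and firing at an element whose memory holds the matching data, can be pictured with a minimal Python sketch. All names here (MemoryElement, InstructionBuffer, data_id) are hypothetical illustrations, not taken from the patent:

    ```python
    class MemoryElement:
        """One element of the memory array, holding identified data blocks."""
        def __init__(self, data_ids):
            self.data = set(data_ids)    # IDs of the data blocks this element holds
            self.executed = []           # instructions this element has consumed

    class InstructionBuffer:
        """One buffer position per memory-array element; instructions advance
        position by position and are handed to an element only when that
        element's memory contains the associated data."""
        def __init__(self, elements):
            self.elements = elements
            self.slots = [None] * len(elements)

        def advance(self, new_instruction=None):
            # Shift every pending instruction one position; inject at slot 0.
            self.slots = [new_instruction] + self.slots[:-1]
            for pos, instr in enumerate(self.slots):
                element = self.elements[pos]
                if instr is not None and instr["data_id"] in element.data:
                    element.executed.append(instr)

    elements = [MemoryElement({i}) for i in range(4)]
    buf = InstructionBuffer(elements)
    buf.advance({"op": "read", "data_id": 2})
    buf.advance()
    buf.advance()                # instruction reaches position 2 and fires
    print(elements[2].executed)  # -> [{'op': 'read', 'data_id': 2}]
    ```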
  • Publication number: 20210209450
    Abstract: A neural inference chip includes a global weight memory; a neural core; and a network connecting the global weight memory to the neural core. The neural core comprises a local weight memory. The local weight memory comprises a plurality of memory banks. Each of the plurality of memory banks is uniquely addressable by at least one index. The neural inference chip is adapted to store in the global weight memory a compressed weight block comprising at least one compressed weight matrix. The neural inference chip is adapted to transmit the compressed weight block from the global weight memory to the core via the network. The core is adapted to decode the at least one compressed weight matrix into a decoded weight matrix and store the decoded weight matrix in its local weight memory. The core is adapted to apply the decoded weight matrix to a plurality of input activations to produce a plurality of output activations.
    Type: Application
    Filed: January 3, 2020
    Publication date: July 8, 2021
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steve Esser, Myron D. Flickner, Dharmendra S. Modha, Jun Sawada
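    As a rough illustration of the decode-and-store flow, the sketch below uses a sparse coordinate list as a stand-in for the patent's compressed weight block; the abstract does not specify the actual compression scheme, and NeuralCore, load, and infer are hypothetical names:

    ```python
    import numpy as np

    def decode_weight_block(block):
        """Decode a (shape, [(row, col, value), ...]) coordinate encoding,
        standing in for the patent's compressed weight matrix, into a
        dense weight matrix."""
        shape, entries = block
        W = np.zeros(shape)
        for r, c, v in entries:
            W[r, c] = v
        return W

    class NeuralCore:
        def __init__(self, n_banks):
            self.n_banks = n_banks
            self.banks = {}      # bank index -> decoded weight matrix

        def load(self, bank, compressed_block):
            # Decode on arrival and store in the uniquely addressed bank.
            self.banks[bank % self.n_banks] = decode_weight_block(compressed_block)

        def infer(self, bank, activations):
            # Apply the decoded weights to the input activations.
            return self.banks[bank] @ activations

    core = NeuralCore(n_banks=4)
    core.load(0, ((2, 3), [(0, 0, 1.0), (1, 2, -0.5)]))
    print(core.infer(0, np.array([1.0, 2.0, 3.0])))   # -> [ 1.  -1.5]
    ```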
  • Patent number: 11049001
    Abstract: The present invention provides a system comprising multiple core circuits. Each core circuit comprises multiple electronic axons for receiving event packets, multiple electronic neurons for generating event packets, and a fanout crossbar including multiple electronic synapse devices for interconnecting the neurons with the axons. The system further comprises a routing system for routing event packets between the core circuits. The routing system virtually connects each neuron with one or more programmable target axons for the neuron by routing each event packet generated by the neuron to the target axons. Each target axon for each neuron of each core circuit is an axon located on the same core circuit as, or a different core circuit than, the neuron.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
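    A toy Python model of the virtual neuron-to-axon connectivity described above. The routing-table representation is an assumption made for illustration, not the patented routing system:

    ```python
    # Hypothetical routing table: each (core, neuron) pair maps to a list of
    # programmable target axons, which may live on the same or another core.
    routing_table = {
        ("core0", 3): [("core0", 7), ("core1", 2)],   # fan out to two axons
        ("core1", 0): [("core0", 1)],
    }

    def deliver(core, axon):
        print(f"event -> {core}, axon {axon}")

    def route_event(source_core, neuron):
        """Deliver one spike event packet to every target axon of the neuron."""
        for target_core, axon in routing_table.get((source_core, neuron), []):
            deliver(target_core, axon)

    route_event("core0", 3)   # fires axon 7 on core0 and axon 2 on core1
    ```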
  • Publication number: 20210174176
    Abstract: Neural inference chips are provided. A neural core of the neural inference chip comprises a vector-matrix multiplier; a vector processor; and an activation unit operatively coupled to the vector processor. The vector-matrix multiplier, vector processor, and/or activation unit is adapted to operate at variable precision.
    Type: Application
    Filed: December 6, 2019
    Publication date: June 10, 2021
    Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steve Esser, Myron D. Flickner, Jeffrey McKinstry, Dharmendra S. Modha, Jun Sawada, Brian Taba
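    One way to picture variable-precision operation is software quantization of both operands before a vector-matrix multiply, as in this sketch; the abstract does not specify the hardware mechanism, so the fixed-point scheme below is purely illustrative:

    ```python
    import numpy as np

    def quantize(x, bits):
        """Round to a symmetric signed fixed-point grid of the given bit width."""
        scale = 2.0 ** (bits - 1) - 1
        return np.clip(np.round(x * scale), -scale, scale) / scale

    def vector_matrix_multiply(W, x, weight_bits=8, act_bits=8):
        # Same datapath, different operating precision per call.
        return quantize(W, weight_bits) @ quantize(x, act_bits)

    W = np.array([[0.31, -0.47], [0.92, 0.05]])
    x = np.array([0.8, -0.3])
    print(vector_matrix_multiply(W, x))                              # near full precision
    print(vector_matrix_multiply(W, x, weight_bits=2, act_bits=2))   # coarse
    ```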
  • Publication number: 20210166107
    Abstract: The present invention provides a system comprising multiple core circuits. Each core circuit comprises multiple electronic axons for receiving event packets, multiple electronic neurons for generating event packets, and a fanout crossbar including multiple electronic synapse devices for interconnecting the neurons with the axons. The system further comprises a routing system for routing event packets between the core circuits. The routing system virtually connects each neuron with one or more programmable target axons for the neuron by routing each event packet generated by the neuron to the target axons. Each target axon for each neuron of each core circuit is an axon located on the same core circuit as, or a different core circuit than, the neuron.
    Type: Application
    Filed: July 30, 2018
    Publication date: June 3, 2021
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 11010662
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: May 18, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
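    The multiply / group-sum / function pipeline in the abstract maps naturally onto array operations. A minimal sketch, with inference_stage a hypothetical name and tanh standing in for an unspecified function block:

    ```python
    import numpy as np

    def inference_stage(weights, activations, group_size, f=np.tanh):
        """Multiply in parallel, sum within each equal-sized group, then
        apply a function block to each partial sum."""
        products = weights * activations              # one multiplier per element
        groups = products.reshape(-1, group_size)     # equal-sized groups
        partial_sums = groups.sum(axis=1)             # one adder per group
        return f(partial_sums)                        # one function block per adder

    w = np.array([0.5, -1.0, 2.0, 0.25])
    a = np.array([1.0, 1.0, 0.5, 4.0])
    print(inference_stage(w, a, group_size=2))        # tanh([-0.5, 2.0])
    ```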
  • Publication number: 20210125040
    Abstract: Three-dimensional neural inference processing units are provided. A first tier comprises a plurality of neural cores. Each core comprises a neural computation unit. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. A second tier comprises a first neural network model memory adapted to store the plurality of synaptic weights. A communication network is operatively coupled to the first neural network model memory and to each of the plurality of neural cores, and adapted to provide the synaptic weights from the first neural network model memory to each of the plurality of neural cores.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Inventors: Andrew S. Cassidy, Filipp A. Akopyan, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Michael V. DeBole, Steve K. Esser, Myron D. Flickner, Dharmendra S. Modha, Carlos O. Otero, Jun Sawada
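    A compact sketch of the tiered arrangement: one shared model memory serving synaptic weights to every core over a communication network, here reduced to a plain function call. The names and the broadcast discipline are assumptions for illustration:

    ```python
    import numpy as np

    model_memory = {"layer0": np.eye(3)}      # second tier: shared weights

    class Core:
        """First-tier neural core: applies weights to input activations."""
        def compute(self, weights, activations):
            return weights @ activations

    def broadcast(layer, cores):
        """Communication network: provide the same weights to each core."""
        return [core.compute(model_memory[layer], np.ones(3)) for core in cores]

    print(broadcast("layer0", [Core() for _ in range(4)]))
    ```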
  • Patent number: 10990872
    Abstract: A multiplexed neural core circuit according to one embodiment comprises, for an integer multiplexing factor T that is greater than zero, T sets of electronic neurons, T sets of electronic axons, where each set of the T sets of electronic axons corresponds to one of the T sets of electronic neurons, and a synaptic interconnection network comprising a plurality of electronic synapses that each interconnect a single electronic axon to a single electronic neuron, where the interconnection network interconnects each set of the T sets of electronic axons to its corresponding set of electronic neurons.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: April 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, Rodrigo Alvarez-Icaza, John V. Arthur, Andrew S. Cassidy, Steven K. Esser, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
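    In software terms, the multiplexing factor T means one physical core serves T logical neuron/axon sets in successive time slots. A minimal sketch under that reading, with illustrative names and binary synapses:

    ```python
    import numpy as np

    T, axons_per_set, neurons_per_set = 4, 8, 8
    rng = np.random.default_rng(seed=0)
    # One binary synaptic matrix per set: axon a drives neuron n iff entry is 1.
    synapses = rng.integers(0, 2, size=(T, axons_per_set, neurons_per_set))

    def tick(axon_spikes):
        """axon_spikes: (T, axons_per_set) array of 0/1 spikes, one row per
        set. The single physical core is reused T times, once per set."""
        return np.stack([axon_spikes[t] @ synapses[t] for t in range(T)])

    spikes = rng.integers(0, 2, size=(T, axons_per_set))
    print(tick(spikes))   # (T, neurons_per_set) synaptic input totals per set
    ```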
  • Patent number: 10984307
    Abstract: Embodiments of the invention provide a system and circuit interconnecting peripheral devices to neurosynaptic core circuits. The neurosynaptic system includes an interconnect that includes different types of communication channels. A device connects to the neurosynaptic system via the interconnect.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Publication number: 20210110245
    Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, the neural inference chip is adapted to: receive an input activation tensor comprising a plurality of input activations; receive a weight tensor comprising a plurality of weights; Booth recode each of the plurality of weights into a plurality of Booth-coded weights, each Booth-coded value having an order; multiply the input activation tensor by the Booth-coded weights, yielding a plurality of results for each input activation, each of the plurality of results corresponding to the orders of the Booth-coded weights; for each order of the Booth-coded weights, sum the corresponding results, yielding a plurality of partial sums, one for each order; and compute a neural activation from a sum of the plurality of partial sums.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 15, 2021
    Inventors: Jun Sawada, Filipp A. Akopyan, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Dharmendra S. Modha, Tapan K. Nayak, Carlos O. Otero
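    The per-order accumulation can be checked with a small numeric sketch. The recoding below is a non-redundant radix-4 signed-digit scheme in the spirit of Booth recoding (digits in {-2, -1, 0, 1}), chosen to keep the example short; the patent's exact recoding is not given in the abstract:

    ```python
    def booth_recode(w, n_digits=4):
        """Recode an integer weight into radix-4 signed digits; digit k has
        order k, i.e. place value 4**k."""
        digits = []
        for _ in range(n_digits):
            d = w % 4
            if d >= 2:
                d -= 4                 # 2 -> -2, 3 -> -1
            digits.append(d)
            w = (w - d) // 4
        assert w == 0, "weight out of range for n_digits"
        return digits

    def booth_dot(xs, ws, n_digits=4):
        """Dot product via per-order partial sums, then a final weighted sum."""
        coded = [booth_recode(w, n_digits) for w in ws]
        # One partial sum per order: sum of x_i * digit_{i,k} over all i.
        partials = [sum(x * c[k] for x, c in zip(xs, coded))
                    for k in range(n_digits)]
        return sum(p * 4 ** k for k, p in enumerate(partials))

    assert booth_dot([3, -2], [7, 5]) == 3 * 7 + (-2) * 5   # -> 11
    ```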
  • Patent number: 10929747
    Abstract: One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
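    A minimal sketch of a neuron update mixing the two modes: a deterministic path (stored membrane state, fixed leak) and a stochastic path (randomized leak). The leak-with-probability-one-half choice is an illustrative assumption, not the patented circuit:

    ```python
    import random

    class Neuron:
        """Leaky integrate-and-fire update with selectable deterministic or
        stochastic leak."""
        def __init__(self, threshold=100, leak=2, stochastic=False, seed=42):
            self.v = 0                       # deterministic neural data (state)
            self.threshold = threshold
            self.leak = leak
            self.stochastic = stochastic
            self.rng = random.Random(seed)   # source of stochastic neural data

        def step(self, synaptic_input):
            self.v += synaptic_input                          # deterministic
            if self.stochastic:
                self.v -= self.leak * self.rng.randint(0, 1)  # stochastic leak
            else:
                self.v -= self.leak                           # deterministic leak
            if self.v >= self.threshold:                      # spike and reset
                self.v = 0
                return 1
            return 0

    n = Neuron(threshold=10, leak=1, stochastic=True)
    print([n.step(3) for _ in range(10)])
    ```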
  • Publication number: 20200379841
    Abstract: A computer-implemented method according to one embodiment includes, prior to an execution of a deterministic program, determining a pre-computed check sequence for a first plurality of values associated with the execution of the deterministic program, during the execution of the deterministic program, determining a runtime check sequence for a second plurality of values associated with the execution of the deterministic program, comparing the pre-computed check sequence to the runtime check sequence; and identifying one or more errors associated with the execution of the deterministic program, based on the comparing.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Andrew S. Cassidy, Dharmendra S. Modha, John V. Arthur, Jun Sawada
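    In software terms the scheme resembles comparing a golden running checksum, computed before execution, against one gathered during the run; the hash choice below is illustrative only:

    ```python
    import hashlib

    def check_sequence(values):
        """Running hash over the observable values of a deterministic program."""
        seq = []
        h = hashlib.sha256()
        for v in values:
            h.update(repr(v).encode())
            seq.append(h.hexdigest()[:8])
        return seq

    def find_first_error(precomputed, runtime):
        """Index of the first divergence, or None if the run checks out."""
        for i, (a, b) in enumerate(zip(precomputed, runtime)):
            if a != b:
                return i
        return None

    golden = check_sequence([1, 2, 3])        # pre-computed before execution
    observed = check_sequence([1, 2, 99])     # gathered during execution
    print(find_first_error(golden, observed)) # -> 2
    ```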
  • Patent number: 10838860
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: November 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
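    The write path can be sketched as plain address arithmetic: a window of the memory map is backed by neural-network input locations rather than RAM. The base address and window layout below are hypothetical constants for illustration:

    ```python
    NN_WINDOW_BASE = 0x4000_0000
    INPUTS_PER_NETWORK = 256

    def translate(write_address):
        """Map a memory-map address to (network id, input location)."""
        offset = write_address - NN_WINDOW_BASE
        return divmod(offset, INPUTS_PER_NETWORK)

    def mmio_write(address, data, send):
        network, location = translate(address)
        send(network, location, data)   # forward over the on-chip network

    mmio_write(0x4000_0105, 0xAB,
               lambda n, loc, d: print(f"net {n}, input {loc}: {d:#x}"))
    # -> net 1, input 5: 0xab
    ```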
  • Patent number: 10834024
    Abstract: According to one embodiment, a computer program product for performing selective multicast delivery includes a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, and where the program instructions are executable by a selector of an intelligent processing unit (IPU) to cause the selector to perform a method comprising identifying, by the selector, an address header appended to an instance of data, comparing, by the selector, address data in the address header to identifier data stored at the selector, and conditionally delivering, by the selector, the instance of data, based on the comparing.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Simon J. Hollis, Hartmut Penner, Andrew S. Cassidy, Jun Sawada, Pallab Datta
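    A minimal sketch of the selector's compare-and-deliver behavior, assuming a simple equality match between the address header and the stored identifier (the patent may match more elaborately):

    ```python
    class Selector:
        """Delivers a multicast instance of data only if the address header
        matches this selector's stored identifier."""
        def __init__(self, identifier, sink):
            self.identifier = identifier
            self.sink = sink

        def offer(self, packet):
            header, payload = packet
            if header == self.identifier:   # compare header to stored ID
                self.sink(payload)          # conditional delivery

    received = []
    selectors = [Selector(i, received.append) for i in range(4)]
    for s in selectors:
        s.offer((2, "weights"))             # multicast to all selectors
    print(received)                         # -> ['weights'] (only selector 2)
    ```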
  • Patent number: 10831595
    Abstract: A computer-implemented method according to one embodiment includes, prior to an execution of a deterministic program, determining a pre-computed check sequence for a first plurality of values associated with the execution of the deterministic program, during the execution of the deterministic program, determining a runtime check sequence for a second plurality of values associated with the execution of the deterministic program, comparing the pre-computed check sequence to the runtime check sequence; and identifying one or more errors associated with the execution of the deterministic program, based on the comparing.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Andrew S. Cassidy, Dharmendra S. Modha, John V. Arthur, Jun Sawada
  • Patent number: 10785745
    Abstract: Embodiments of the invention provide a system for scaling multi-core neurosynaptic networks. The system comprises multiple network circuits. Each network circuit comprises a plurality of neurosynaptic core circuits. Each core circuit comprises multiple electronic neurons interconnected with multiple electronic axons via a plurality of electronic synapse devices. An interconnect fabric couples the network circuits. Each network circuit has at least one network interface. Each network interface for each network circuit enables data exchange between the network circuit and another network circuit by tagging each data packet from the network circuit with corresponding routing information.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
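    The tagging step at the network interface can be pictured as wrapping each outgoing packet with routing information for the interconnect fabric; the dictionary representation below is a hypothetical stand-in:

    ```python
    def tag(packet, dest_network, dest_core):
        """Prepend routing info so the fabric can deliver across circuits."""
        return {"route": (dest_network, dest_core), "payload": packet}

    def fabric_deliver(tagged, networks):
        dest_network, dest_core = tagged["route"]
        networks[dest_network][dest_core].append(tagged["payload"])

    networks = {0: {0: [], 1: []}, 1: {0: []}}
    fabric_deliver(tag("spike-event", dest_network=1, dest_core=0), networks)
    print(networks[1][0])   # -> ['spike-event']
    ```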
  • Patent number: 10769519
    Abstract: One embodiment of the invention provides a system comprising at least one data-to-spike converter unit for converting input numeric data received by the system to spike event data. Each data-to-spike converter unit is configured to support one or more spike codes.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: September 8, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Steven K. Esser, Myron D. Flickner, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada, Benjamin G. Shaw
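    Rate coding is one common spike code such a converter might support; the abstract only says the unit supports one or more codes, so the sketch below is a representative example rather than the patented converter:

    ```python
    import random

    def rate_encode(value, n_steps, rng=random.Random(0)):
        """Rate code: a value in [0, 1] becomes a spike train whose firing
        probability per step equals the value."""
        return [1 if rng.random() < value else 0 for _ in range(n_steps)]

    spikes = rate_encode(0.75, n_steps=16)
    print(spikes, sum(spikes) / len(spikes))   # spike rate approximates 0.75
    ```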
  • Patent number: 10755165
    Abstract: One embodiment of the invention provides a system comprising at least one spike-to-data converter unit for converting spike event data generated by neurons to output numeric data. Each spike-to-data converter unit is configured to support one or more spike codes.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Steven K. Esser, Myron D. Flickner, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada, Benjamin G. Shaw
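    Continuing the rate-coding example from the companion data-to-spike entry above, the converse conversion estimates the value from the spike rate (again one representative code, not the patented unit):

    ```python
    def rate_decode(spikes):
        """Estimate the encoded value from the observed spike rate."""
        return sum(spikes) / len(spikes)

    print(rate_decode([1, 0, 1, 1, 0, 1, 1, 1]))   # -> 0.75
    ```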
  • Patent number: 10740282
    Abstract: Embodiments of the invention relate to processor arrays, and in particular, a processor array with interconnect circuits for bonding semiconductor dies. One embodiment comprises multiple semiconductor dies and at least one interconnect circuit for exchanging signals between the dies. Each die comprises at least one processor core circuit. Each interconnect circuit corresponds to a die of the processor array. Each interconnect circuit comprises one or more attachment pads for interconnecting a corresponding die with another die, and at least one multiplexor structure configured for exchanging bus signals in a reversed order.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, John E. Barth, Jr., Andrew S. Cassidy, Subramanian S. Iyer, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
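    The reversed-order exchange is easy to picture: when two dies are bonded face to face, pad 0 of one die lines up with pad N-1 of the other, so the multiplexor structure can present the bus in reversed order. A one-function sketch, with reverse=True standing in for the multiplexor select:

    ```python
    def exchange_bus(signals, reverse=True):
        """Return the bus signals in the order seen by the opposing die."""
        return signals[::-1] if reverse else list(signals)

    bus_from_die_a = [0, 1, 1, 0, 1, 0, 0, 1]
    print(exchange_bus(bus_from_die_a))   # order as seen at die B's pads
    ```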
  • Publication number: 20200202205
    Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
    Type: Application
    Filed: March 4, 2020
    Publication date: June 25, 2020
    Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba