Patents by Inventor John Vernon Arthur

John Vernon Arthur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220180177
    Abstract: A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more lookup tables.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 9, 2022
    Inventors: Jun Sawada, Myron D. Flickner, Andrew Stephen Cassidy, John Vernon Arthur, Pallab Datta, Dharmendra S. Modha, Steven Kyle Esser, Brian Seisho Taba, Jennifer Klamo, Rathinakumar Appuswamy, Filipp Akopyan, Carlos Ortega Otero
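As a minimal sketch of the idea in this abstract (not the patented circuit), a configurable activation can fetch its re-ranging and scaling terms from lookup tables indexed by a configuration id. The LUT contents, configuration ids, and the ReLU-style base nonlinearity below are illustrative assumptions.

```python
import numpy as np

# Hypothetical lookup tables, indexed by a per-layer configuration id.
RERANGE_LUT = {0: 0.0, 1: -1.0}   # re-ranging terms: shift the output range
SCALE_LUT = {0: 1.0, 1: 0.5}      # scaling terms: set the output scale

def configurable_activation(intermediate, cfg):
    """Clamp the intermediate outputs, then scale and re-range them."""
    r = RERANGE_LUT[cfg]          # re-ranging term from its lookup table
    s = SCALE_LUT[cfg]            # scaling term from its lookup table
    clipped = np.maximum(intermediate, 0.0)  # illustrative base nonlinearity
    return s * clipped + r

acts = configurable_activation(np.array([-2.0, 1.0, 3.0]), cfg=1)
```

Swapping `cfg` changes the output range and scale without changing the datapath, which is the point of keeping both terms in tables.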
  • Publication number: 20220129436
    Abstract: Systems are provided that can produce symbolic and numeric representations of the neural network outputs, such that these outputs can be used to validate correctness of the implementation of the neural network. In various embodiments, a description of an artificial neural network containing no data-dependent branching is read. Based on the description of the artificial neural network, a symbolic representation is constructed of an output of the artificial neural network, the symbolic representation comprising at least one variable. The symbolic representation is compared to a ground truth symbolic representation, thereby validating the neural network system.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Alexander Andreopoulos, Dharmendra S. Modha, Andrew Stephen Cassidy, Brian Seisho Taba, Carmelo Di Nolfo, Hartmut Penner, John Vernon Arthur, Jun Sawada, Myron D. Flickner, Pallab Datta, Rathinakumar Appuswamy
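One way to illustrate the validation idea (a sketch, not the patented system): because the network contains no data-dependent branching, a symbolic value can be propagated through it and the resulting expression compared to a ground truth. The affine representation, weights, and network below are invented for the example.

```python
class Affine:
    """Symbolic value a*x + b in a single input variable x."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def scale(self, w):   # apply a synaptic weight symbolically
        return Affine(self.a * w, self.b * w)

    def shift(self, c):   # apply a bias symbolically
        return Affine(self.a, self.b + c)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

def run_network(x):
    # Hypothetical branch-free description: two weights and one bias.
    return x.scale(3).scale(2).shift(1)

# Feed in the variable x itself (1*x + 0) to obtain a symbolic output.
symbolic_out = run_network(Affine(1, 0))
ground_truth = Affine(6, 1)   # hand-derived: 2*(3*x) + 1
```

Equality of the symbolic output and the ground-truth expression validates the implementation for every input value at once, not just sampled inputs.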
  • Publication number: 20220129743
    Abstract: Neural network accelerator output ranking is provided. In various embodiments, a system comprises a data memory; a memory controller configured to access the data memory; a plurality of comparators configured in a tree; a register; and a two-way comparator. The memory controller is configured to provide a first plurality of values from the data memory to the comparator tree. The comparator tree is configured to perform a plurality of concurrent pairwise comparisons of the first plurality of values to arrive at a first greatest value of the first plurality of values. The two-way comparator is configured to output the greater of the greatest value from the comparator tree and a stored value from the register. The register is configured to store the output of the two-way comparator.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 28, 2022
    Inventors: Jun Sawada, Rathinakumar Appuswamy, John Vernon Arthur, Andrew Stephen Cassidy, Pallab Datta, Michael Vincent DeBole, Steven Kyle Esser, Dharmendra S. Modha
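The comparator-tree-plus-register structure described above can be sketched in a few lines (the block sizes and values are invented for the example): each block of values is reduced to its maximum by pairwise comparisons, and a two-way comparator merges that result with a register holding the running maximum.

```python
def comparator_tree(values):
    """Reduce a block of values to its maximum via rounds of pairwise
    comparisons, mirroring the levels of a tree of comparators."""
    while len(values) > 1:
        values = [max(values[i:i + 2]) for i in range(0, len(values), 2)]
    return values[0]

# Two-way comparator: greater of the tree's output and the stored value.
register = float("-inf")
for block in ([7, 3, 9, 1], [4, 12, 2, 8]):
    register = max(register, comparator_tree(block))
```

In hardware the pairwise comparisons within a level happen concurrently, so a block of n values resolves in about log2(n) comparator delays rather than n.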
  • Publication number: 20220129769
    Abstract: Modular neural network computing apparatus are provided with distributed neural network storage. In various embodiments, a neural inference processor comprises a plurality of neural inference cores, at least one model network interconnecting the plurality of neural inference cores, and at least one activation network interconnecting the plurality of neural inference cores. Each of the plurality of neural inference cores comprises memory adapted to store input activations, output activations, and a neural network model. The neural network model comprises synaptic weights, neuron parameters, and neural network instructions. The at least one model network is configured to distribute the neural network model among the plurality of neural inference cores. Each of the plurality of neural inference cores is configured to apply the synaptic weights to input activations from its memory, producing a plurality of output activations that are written back to its memory.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Jun Sawada, Dharmendra S. Modha, John Vernon Arthur, Andrew Stephen Cassidy, Pallab Datta, Rathinakumar Appuswamy, Tapan Kumar Nayak, Brian Seisho Taba, Carlos Ortega Otero, Filipp Akopyan, Arnon Amir, Nathaniel Joseph McClatchey
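A small sketch of the distribution scheme (the model contents and core count are invented for the example): a model network broadcasts the weights, neuron parameters, and instructions to every core, and each core then applies its local copy of the weights to activations in its own memory.

```python
# Hypothetical model: synaptic weights, neuron parameters, instructions.
MODEL = {
    "weights": [[1, 2], [3, 4]],
    "params": {"bias": 0.5},
    "instructions": ["mac", "act"],
}

class Core:
    def __init__(self):
        self.memory = {}           # holds activations and the model locally

    def load_model(self, model):   # received over the model network
        self.memory.update(model)

    def infer(self, activations):
        w = self.memory["weights"]
        return [sum(wi * a for wi, a in zip(row, activations)) for row in w]

cores = [Core() for _ in range(4)]
for core in cores:
    core.load_model(MODEL)         # model network distributes to all cores

outputs = cores[0].infer([1.0, 1.0])
```

Keeping the model in each core's own memory is what makes the storage distributed: inference reads weights locally, and only activations need to cross the activation network.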
  • Publication number: 20220121951
    Abstract: Conflict-free, stall-free, broadcast networks on neural inference chips are provided. In various embodiments, a neural inference chip comprises a plurality of network nodes and a network on chip interconnecting the plurality of network nodes. The network comprises at least one pair of directional paths. The paths of each pair have opposite directions and a common end. The network is configured to accept data at any of the plurality of nodes. The network is configured to propagate data along a first of the pair of directional paths from a source node to the common end of the pair of directional paths and along a second of the pair of directional paths from the common end of the pair of directional paths to one or more destination nodes.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventors: Andrew Stephen Cassidy, Rathinakumar Appuswamy, John Vernon Arthur, Jun Sawada, Dharmendra S. Modha, Michael Vincent DeBole, Pallab Datta, Tapan Kumar Nayak
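The up-then-down propagation pattern can be sketched as follows (the node names and topology are invented for the example): data from any source first travels along one directional path to the common end, then back along the opposite path past every node, reaching each destination.

```python
# Nodes listed from the far end toward the common end of the path pair.
NODES = ["n0", "n1", "n2", "n3"]  # common end is NODES[-1]

def broadcast(source, destinations):
    """Carry data up the first path to the common end, then down the
    opposite path, delivering it to every destination node passed."""
    up_hops = NODES[NODES.index(source):]   # source -> common end
    down_hops = list(reversed(NODES))       # common end -> far end
    delivered = [n for n in down_hops if n in destinations]
    return up_hops[-1], delivered

common_end, reached = broadcast("n1", {"n0", "n2"})
```

Because every transfer turns around at the same common end, destinations upstream and downstream of the source are both reachable on the return pass, without per-pair routing decisions.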
  • Publication number: 20220101108
    Abstract: A neural network processor system is provided comprising at least one neural network processing core, an activation memory, an instruction memory, and at least one control register, with the neural network processing core adapted to implement neural network computation, control, and communication primitives. A memory map is included which comprises regions corresponding to each of the activation memory, the instruction memory, and the at least one control register. Additionally, an interface operatively connected to the neural network processor system is included, the interface being adapted to communicate with a host and to expose the memory map.
    Type: Application
    Filed: September 30, 2020
    Publication date: March 31, 2022
    Inventors: Filipp Akopyan, John Vernon Arthur, Andrew Stephen Cassidy, Michael Vincent DeBole, Carmelo Di Nolfo, Myron D. Flickner, Jeffrey A. Kusnitz, Dharmendra S. Modha, Carlos Ortega Otero, Jun Sawada, Benjamin Gordon Shaw, Brian Seisho Taba
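A memory map with regions for each resource might look like the following sketch; the region names, base addresses, and sizes are invented for illustration, not taken from the patent.

```python
# Hypothetical address regions exposed to a host over the interface.
MEMORY_MAP = {
    "activation_memory":  (0x0000, 0x4000),
    "instruction_memory": (0x4000, 0x6000),
    "control_register":   (0x6000, 0x6004),
}

def region_for(address):
    """Resolve a host address to the memory-map region it falls in."""
    for name, (start, end) in MEMORY_MAP.items():
        if start <= address < end:
            return name
    raise ValueError(f"address {address:#06x} is outside the memory map")

target = region_for(0x4100)
```

Exposing such a map lets the host drive the core with ordinary reads and writes: loading instructions, staging activations, and poking control registers through one uniform address space.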
  • Patent number: 7834527
    Abstract: Disclosed are electroactive polymer fibers, processes of preparing electroactive polymer fibers, and devices containing electroactive polymer fibers. The devices can be used as actuators, sensors, generators, and transducers. Applications include, inter alia, artificial muscles, prosthetics, and robotics.
    Type: Grant
    Filed: May 5, 2006
    Date of Patent: November 16, 2010
    Assignee: SmartMotion Technologies, Inc.
    Inventors: Rodrigo Alvarez Icaza Rivera, Juan Manuel Alvarez Sanches, Kevin Chalgren Galloway, Howard Scott Katzenberg, Rahul Kothari, John Vernon Arthur
  • Publication number: 20090085444
    Abstract: Disclosed are electroactive polymer fibers, processes of preparing electroactive polymer fibers, and devices containing electroactive polymer fibers. The devices can be used as actuators, sensors, generators, and transducers. Applications include, inter alia, artificial muscles, prosthetics, and robotics.
    Type: Application
    Filed: May 5, 2006
    Publication date: April 2, 2009
    Inventors: Rodrigo Alvarez Icaza Rivera, Juan Manuel Alvarez Sanches, Kevin Chalgren Galloway, Howard Scott Katzenberg, Rahul Kothari, John Vernon Arthur