Patents by Inventor Christopher David Eliasmith

Christopher David Eliasmith has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230359861
    Abstract: The present invention relates to methods and systems for improving the training and inference speed of recurrently connected artificial neural networks by parallelizing application of one or more network layers’ recurrent connection weights across all items in the layer’s input sequence. More specifically, the present invention specifies methods and systems for carrying out this parallelization for any recurrent network layer that implements a linear time-invariant (LTI) dynamical system. The method of parallelization involves first computing the impulse response of a recurrent layer, and then convolving this impulse response with all items in the layer’s input sequence, thereby producing all of the layer’s outputs simultaneously. Systems composed of one or more parallelized linear recurrent layers and one or more nonlinear layers are then operated to perform pattern classification, signal processing, data representation, or data generation tasks.
    Type: Application
    Filed: October 1, 2021
    Publication date: November 9, 2023
    Inventors: Narsimha Chilkuri, Christopher David Eliasmith
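
The core trick of this application (20230359861) is easy to demonstrate outside any neural-network framework. Below is a minimal numpy sketch, not the patented system: it computes the impulse response of a toy discretized LTI recurrence and reproduces the sequential outputs with a convolution. The matrices, sizes, and input are illustrative placeholders.

```python
import numpy as np

# Parallelize x[t] = A x[t-1] + B u[t] by convolving the layer's impulse
# response with the whole input sequence at once. In practice the
# convolution would use an FFT so all T states come out in O(T log T).

rng = np.random.default_rng(0)
d, T = 4, 32                        # state dimension, sequence length
A = rng.normal(size=(d, d)) * 0.3   # illustrative stable LTI state matrix
B = rng.normal(size=(d, 1))
u = rng.normal(size=T)              # scalar input sequence

# Impulse response of the recurrence: h[k] = A^k B, for k = 0..T-1.
h = np.stack([np.linalg.matrix_power(A, k) @ B for k in range(T)])  # (T, d, 1)

# Parallel path: x[t] = sum_{k<=t} A^k B u[t-k], i.e. a convolution.
x_parallel = np.array([
    sum(h[k, :, 0] * u[t - k] for k in range(t + 1)) for t in range(T)
])

# Sequential path, for comparison.
x_seq, x = [], np.zeros(d)
for t in range(T):
    x = A @ x + B[:, 0] * u[t]
    x_seq.append(x)

assert np.allclose(x_parallel, np.array(x_seq))
```
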
  • Patent number: 11741098
    Abstract: The present invention relates to methods and systems for storing and querying database entries with neuromorphic computers. The system comprises a plurality of encoding subsystems that convert database entries and search keys into vector representations, a plurality of associative memory subsystems that match vector representations of search keys to vector representations of database entries using spike-based comparison operations, a plurality of binding subsystems that update retrieved vector representations during the execution of hierarchical queries, a plurality of unbinding subsystems that extract information from retrieved vector representations, a plurality of cleanup subsystems that remove noise from these retrieved representations, and one or more input search key representations that propagate spiking activity through the associative memory, binding, unbinding, cleanup, and readout subsystems to retrieve database entries matching the search key.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: August 29, 2023
    Assignee: Applied Brain Research Inc.
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith, Peter Blouw
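
As a rough illustration of the bind/unbind/cleanup pipeline this patent (11741098) describes, here is a plain-numpy sketch using circular convolution, a common vector-symbolic binding operator. It does not reproduce the patent's spike-based comparison operations, and the field names and codebook are hypothetical.

```python
import numpy as np

# Toy non-spiking analogue of the query pipeline: bind key/value pairs into
# one vector, unbind with a search key, then "clean up" against a codebook.

rng = np.random.default_rng(1)
D = 512

def vec():                          # random unit vector
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):                     # circular convolution binding
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(s, a):                   # circular correlation (approx. inverse)
    return np.fft.irfft(np.fft.rfft(s) * np.fft.rfft(a).conj(), n=D)

# A two-field "database entry": name (*) Alice + age (*) A42
name, age, alice, a42 = vec(), vec(), vec(), vec()
entry = bind(name, alice) + bind(age, a42)

# Query the "name" field, then clean up the noisy result.
noisy = unbind(entry, name)
codebook = {"Alice": alice, "A42": a42}
print(max(codebook, key=lambda k: codebook[k] @ noisy))  # -> "Alice"
```
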
  • Patent number: 11537856
    Abstract: The present invention relates to digital circuits for evaluating neural engineering framework style neural networks. The circuits comprise at least one on-chip memory, a plurality of non-linear components, an external system, a first spatially parallel matrix multiplication, a second spatially parallel matrix multiplication, an error signal, a plurality of sets of factorized network weights, and an input signal. The plurality of sets of factorized network weights comprises a first set of factorized network weights and a second set of factorized network weights. The first spatially parallel matrix multiplication combines the input signal with the first set of factorized network weights, called the encoder weight matrix, to produce an encoded value. The non-linear components are hardware-simulated neurons which accept said encoded value to produce a distributed neural activity.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: December 27, 2022
    Assignee: Applied Brain Research Inc.
    Inventors: Benjamin Jacob Morcos, Christopher David Eliasmith, Nachiket Ganesh Kapre
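
The factorization this patent (11537856) exploits can be sketched in a few lines. The toy example below, with illustrative shapes and a ReLU standing in for the hardware-simulated neurons, shows the two spatially parallel matrix multiplications and why factorized weights are cheaper than a full weight matrix.

```python
import numpy as np

# Evaluate encode -> nonlinearity -> decode with factorized weights instead
# of one large N x N connection matrix. The random decoder here is only a
# stand-in for a least-squares-fit decoder.

rng = np.random.default_rng(2)
N, d = 256, 8                       # neurons, represented dimensions

E = rng.normal(size=(N, d))         # encoder weight matrix
Dec = rng.normal(size=(d, N)) / N   # decoder weight matrix

x = rng.normal(size=d)              # input signal

encoded = E @ x                     # first spatially parallel multiply
activity = np.maximum(encoded, 0)   # neural nonlinearity (rate stand-in)
decoded = Dec @ activity            # second spatially parallel multiply

# The full weight matrix W = E @ Dec (N x N) is never materialized:
# memory and compute drop from O(N^2) to O(N d).
print(decoded.shape)                # (d,)
```
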
  • Publication number: 20220172053
    Abstract: A method is described for designing systems that provide efficient implementations of feed-forward, recurrent, and deep networks that process dynamic signals using temporal filters and static or time-varying nonlinearities. A system design methodology is described that provides an engineered architecture. This architecture defines a core set of network components and operations for efficient computation of dynamic signals using temporal filters and static or time-varying nonlinearities. These methods apply to a wide variety of connected nonlinearities that include temporal filters in the connections. Here we apply the methods to synaptic models coupled with spiking and/or non-spiking neurons whose connection parameters are determined using a variety of methods of optimization.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 2, 2022
    Applicant: Applied Brain Research Inc.
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
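
A minimal sketch of the "temporal filter plus nonlinearity" building block this application (20220172053) is organized around: a crude integrate-to-threshold spiking neuron whose output spikes are smoothed by a first-order lowpass synapse. All constants are illustrative; the patented methodology optimizes the connection parameters rather than hand-picking them.

```python
import numpy as np

# A spiking nonlinearity followed by a lowpass synaptic filter
# h(t) = exp(-t/tau)/tau, stepped with forward Euler.

dt, tau = 0.001, 0.02               # time step (s), synaptic time constant (s)
T = 1000
t = np.arange(T) * dt
u = np.maximum(np.sin(2 * np.pi * 2 * t), 0)   # nonnegative input signal

v, filtered = 0.0, 0.0              # membrane state, synapse (filter) state
out = np.zeros(T)
for i in range(T):
    v += dt * 20.0 * u[i]           # integrate input (gain 20 -> up to 20 Hz)
    spike = v > 1.0
    if spike:
        v -= 1.0
    # Euler step of tau * dy/dt = x - y, with spikes as area-1 impulses.
    x = (1.0 / dt) if spike else 0.0
    filtered += (dt / tau) * (x - filtered)
    out[i] = filtered               # smoothed firing rate, tracks ~20 * u
```
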
  • Publication number: 20220138382
    Abstract: The present invention relates to methods and systems for simulating and predicting dynamical systems with vector symbolic representations of continuous spaces. More specifically, the present invention specifies methods for simulating and predicting such dynamics through the definition of temporal fractional binding, collection, and decoding subsystems that collectively function to both create vector symbolic representations of multi-object trajectories, and decode these representations to simulate or predict the future states of these trajectories. Systems composed of one or more of these temporal fractional binding, collection, and decoding subsystems are combined to simulate or predict the behavior of at least one dynamical system that involves the motion of at least one object.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 5, 2022
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith, Peter Blouw, Terrence Stewart, Nicole Sandra-Yaffe Dumont
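
A toy sketch of the idea in this application (20220138382), under the common fractional power encoding construction: (time, position) pairs are bound into a single vector, and the position at a queried time is recovered by unbinding and a similarity search. The base vectors, trajectory, and grid are illustrative, and the patent's dedicated subsystems are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 1024

def unitary():
    # Random unitary vector: unit-magnitude spectrum, with the DC and
    # Nyquist bins pinned to 1 so fractional powers stay real-valued.
    f = np.fft.fft(rng.normal(size=D))
    f /= np.abs(f)
    f[0], f[D // 2] = 1.0, 1.0
    return np.fft.ifft(f).real

def fpow(b, x):                     # fractional binding power: b ** x
    return np.fft.ifft(np.fft.fft(b) ** x).real

def bind(a, b):                     # circular convolution
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(s, a):                   # exact inverse for unitary vectors
    return np.fft.ifft(np.fft.fft(s) * np.fft.fft(a).conj()).real

T_ax, X_ax = unitary(), unitary()

# Encode a trajectory p(t) = 2t as a sum of bound (time, position) pairs.
memory = sum(bind(fpow(T_ax, t), fpow(X_ax, 2 * t)) for t in (0.0, 0.5, 1.0, 1.5))

# Decode the position at t = 1.0: unbind the time, then similarity-search.
query = unbind(memory, fpow(T_ax, 1.0))
grid = np.linspace(-1, 4, 101)
print(grid[np.argmax([query @ fpow(X_ax, g) for g in grid])])  # ~ 2.0
```
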
  • Publication number: 20220083867
    Abstract: The present invention relates to methods and systems for using neural networks to simulate dynamical systems for purposes of solving optimization problems. More specifically, the present invention defines methods and systems that perform a process of “synaptic descent”, wherein the state of a given synapse in a neural network is a variable being optimized, the input to the synapse is a gradient defined with respect to this state, and the synapse implements the computations of an optimizer that performs gradient descent over time. Synapse models regulate the dynamics of a given neural network by governing how the output of one neuron is passed as input to another, and since the process of synaptic descent performs gradient descent with respect to state variables defining these dynamics, it can be harnessed to evolve the neural network towards a state or sequence of states that encodes the solution to an optimization problem.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 17, 2022
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
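
The synaptic-descent loop in this application (20220083867) reduces, in its simplest form, to treating a synapse's state as the optimization variable and its input as a gradient. A minimal sketch with an illustrative quadratic objective:

```python
# The synapse's state is the variable being optimized, its input carries the
# gradient with respect to that state, and the synapse model itself applies
# a gradient-descent update. Objective and step size are illustrative.

def objective_grad(s):              # gradient of f(s) = (s - 3)^2
    return 2.0 * (s - 3.0)

s = 0.0                             # synapse state = optimization variable
lr = 0.1                            # synapse dynamics = optimizer step size
for _ in range(100):
    g = objective_grad(s)           # "presynaptic input" carries the gradient
    s = s - lr * g                  # synapse update performs gradient descent

print(s)                            # -> approximately 3.0, the minimizer
```
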
  • Patent number: 11238345
    Abstract: Neural network architectures, with connection weights determined using Legendre Memory Unit equations, are trained while optionally keeping the determined weights fixed. Networks may use spiking or non-spiking activation functions, may be stacked or recurrently coupled with other neural network architectures, and may be implemented in software and hardware. Embodiments of the invention provide systems for pattern classification, data representation, and signal processing, that compute using orthogonal polynomial basis functions that span sliding windows of time.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 1, 2022
    Assignee: Applied Brain Research Inc.
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
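
For reference, the published Legendre Memory Unit state-space matrices this patent (11238345) builds on can be written down directly; here is a short sketch with an Euler-discretized update. The memory order, window length, and input are illustrative.

```python
import numpy as np

# LMU equations: the state m approximates a sliding window of length theta
# of the input u(t) in a Legendre-polynomial basis.

q, theta, dt = 6, 1.0, 0.001        # memory order, window length (s), step (s)

A = np.zeros((q, q))
B = np.zeros(q)
for i in range(q):
    B[i] = (2 * i + 1) * (-1) ** i
    for j in range(q):
        A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))

m = np.zeros(q)                     # Legendre memory state
for t in range(2000):
    u = np.sin(2 * np.pi * t * dt)  # example input
    m = m + dt * (A @ m + B * u) / theta   # Euler-discretized LTI update

# m now encodes u over the last `theta` seconds; a nonlinear layer reading
# m (and u) completes an LMU cell.
```
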
  • Patent number: 11238337
    Abstract: A method is described for designing systems that provide efficient implementations of feed-forward, recurrent, and deep networks that process dynamic signals using temporal filters and static or time-varying nonlinearities. A system design methodology is described that provides an engineered architecture. This architecture defines a core set of network components and operations for efficient computation of dynamic signals using temporal filters and static or time-varying nonlinearities. These methods apply to a wide variety of connected nonlinearities that include temporal filters in the connections. Here we apply the methods to synaptic models coupled with spiking and/or non-spiking neurons whose connection parameters are determined using a variety of methods of optimization.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: February 1, 2022
    Assignee: Applied Brain Research Inc.
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
  • Publication number: 20210342668
    Abstract: Recurrent neural networks are efficiently mapped to hardware computation blocks specifically designed for Legendre Memory Unit (LMU) cells, Projected LSTM cells, and Feed Forward cells. Iterative resource allocation algorithms are used to partition recurrent neural networks and time multiplex them onto a spatial distribution of computation blocks, guided by multivariable optimizations for power, performance, and accuracy. Embodiments of the invention provide systems for low power, high performance deployment of recurrent neural networks for battery sensitive applications such as automatic speech recognition (ASR), keyword spotting (KWS), biomedical signal processing, and other applications that involve processing time-series data.
    Type: Application
    Filed: April 29, 2021
    Publication date: November 4, 2021
    Inventors: Gurshaant Singh Malik, Aaron Russell Voelker, Christopher David Eliasmith
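
The allocation step in this application (20210342668) can be caricatured as load balancing. The greedy partitioner below is only a toy stand-in for the patent's iterative, multivariable-optimization-guided algorithms; the layer names, costs, and block count are made up.

```python
import heapq

# Greedily time-multiplex network layers onto a fixed set of hardware
# blocks, always assigning the next-largest layer to the least-loaded block.

layer_costs = {"lmu_0": 7, "lmu_1": 5, "ffwd_0": 3, "lstm_0": 8, "ffwd_1": 2}
num_blocks = 2

blocks = [(0, b, []) for b in range(num_blocks)]   # (load, block id, layers)
heapq.heapify(blocks)

for name, cost in sorted(layer_costs.items(), key=lambda kv: -kv[1]):
    load, b, assigned = heapq.heappop(blocks)
    assigned.append(name)
    heapq.heappush(blocks, (load + cost, b, assigned))

for load, b, assigned in sorted(blocks, key=lambda x: x[1]):
    print(f"block {b}: load={load}, layers={assigned}")
```
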
  • Patent number: 11126913
    Abstract: A method for implementing spiking neural network computations, the method including defining a dynamic node response function that exhibits spikes, where spikes are temporal nonlinearities for representing state over time; defining a static representation of said node response function; and using the static representation of the node response function to train a neural network. A system for implementing the method is also disclosed.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: September 21, 2021
    Assignee: Applied Brain Research Inc.
    Inventors: Eric Gordon Hunsberger, Christopher David Eliasmith
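
The central move in this patent (11126913) is to train against a static, time-averaged version of the spiking nonlinearity. Below is a sketch under common LIF assumptions, with a softplus smoothing so the static curve is differentiable everywhere; all constants are illustrative.

```python
import numpy as np

# Static (rate) representation of a spiking LIF response function, plus a
# smoothed variant that can stand in for the spiking neuron during training.

tau_rc, tau_ref, gamma = 0.02, 0.002, 0.05

def lif_rate(J):
    # Steady-state LIF firing rate: the time-average of the spiking behavior.
    j = np.maximum(J - 1.0, 0.0)
    return np.where(
        j > 0,
        1.0 / (tau_ref + tau_rc * np.log1p(1.0 / np.maximum(j, 1e-12))),
        0.0,
    )

def soft_lif_rate(J):
    # Softplus-smoothed version: finite gradients everywhere, so standard
    # gradient-based training applies to the static curve.
    j = gamma * np.log1p(np.exp((J - 1.0) / gamma))
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / j))

J = np.linspace(0.5, 3.0, 6)
print(soft_lif_rate(J))             # smooth approximation of lif_rate(J)
```
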
  • Publication number: 20210133190
    Abstract: The present invention relates to methods and systems for storing and querying database entries with neuromorphic computers. The system comprises a plurality of encoding subsystems that convert database entries and search keys into vector representations, a plurality of associative memory subsystems that match vector representations of search keys to vector representations of database entries using spike-based comparison operations, a plurality of binding subsystems that update retrieved vector representations during the execution of hierarchical queries, a plurality of unbinding subsystems that extract information from retrieved vector representations, a plurality of cleanup subsystems that remove noise from these retrieved representations, and one or more input search key representations that propagate spiking activity through the associative memory, binding, unbinding, cleanup, and readout subsystems to retrieve database entries matching the search key.
    Type: Application
    Filed: July 15, 2020
    Publication date: May 6, 2021
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith, Peter Blouw
  • Patent number: 10963785
    Abstract: Methods, systems and apparatus that provide for perceptual, cognitive, and motor behaviors in an integrated system implemented using neural architectures. Components of the system communicate using artificial neurons that implement neural networks. The connections between these networks form representations—referred to as semantic pointers—which model the various firing patterns of biological neural network connections. Semantic pointers can be thought of as elements of a neural vector space, and can implement a form of abstraction level filtering or compression, in which high-dimensional structures can be abstracted one or more times thereby reducing the number of dimensions needed to represent a particular structure.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: March 30, 2021
    Assignee: Applied Brain Research Inc.
    Inventors: Christopher David Eliasmith, Terrence Charles Stewart, Feng-Xuan Choo, Trevor William Bekolay, Travis Crncich-DeWolf, Yichuan Tang, Daniel Halden Rasmussen
  • Publication number: 20210089912
    Abstract: Neural network architectures, with connection weights determined using Legendre Memory Unit equations, are trained while optionally keeping the determined weights fixed. Networks may use spiking or non-spiking activation functions, may be stacked or recurrently coupled with other neural network architectures, and may be implemented in software and hardware. Embodiments of the invention provide systems for pattern classification, data representation, and signal processing, that compute using orthogonal polynomial basis functions that span sliding windows of time.
    Type: Application
    Filed: March 6, 2020
    Publication date: March 25, 2021
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
  • Patent number: 10860630
    Abstract: A system for generating and performing inference over graphs of sentences standing in directed discourse relations to one another, comprising a computer processor and a computer readable medium having computer executable instructions for providing: tree-structured encoder networks that convert an input sentence or a query into a vector representation; tree-structured decoder networks that convert a vector representation into a predicted sentence standing in a specified discourse relation to the input sentence; couplings of encoder and decoder networks that permit an input sentence and a “query” sentence to constrain a decoder network to predict a novel sentence that satisfies a specific discourse relation and thereby implements an instance of graph traversal; couplings of encoder and decoder networks that implement traversal over graphs of multiple linguistic relations, including entailment, contradiction, explanation, elaboration, contrast, and parallelism, for the purposes of answering questions or performing inference.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: December 8, 2020
    Assignee: Applied Brain Research Inc.
    Inventors: Peter Blouw, Christopher David Eliasmith
  • Publication number: 20200302281
    Abstract: The present invention relates to methods and systems for encoding and processing representations that include continuous structures using vector-symbolic representations. The system comprises a plurality of binding subsystems that implement a fractional binding operation, a plurality of unbinding subsystems that implement a fractional unbinding operation, and at least one input symbol representation that propagates activity through a binding subsystem and an unbinding subsystem to produce a high-dimensional vector representation of a continuous space.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 24, 2020
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith, Brent Komer, Terrence Stewart
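
Fractional binding as described in this application (20200302281) is commonly implemented by exponentiating a unitary base vector's Fourier coefficients. A sketch with illustrative dimensions, showing that integer exponents recover repeated binding and that similarity decays smoothly with the encoded distance:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 1024

f = np.fft.fft(rng.normal(size=D))
f /= np.abs(f)
f[0], f[D // 2] = 1.0, 1.0          # pin DC/Nyquist so powers stay real
base = np.fft.ifft(f).real          # unitary base vector

def fpow(b, x):                     # fractional binding: b ** x
    return np.fft.ifft(np.fft.fft(b) ** x).real

def bind(a, b):                     # ordinary circular convolution
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Integer exponents recover repeated self-binding...
assert np.allclose(fpow(base, 2), bind(base, base))

# ...and similarity falls off with the distance between encoded values,
# so nearby points on the continuous axis get similar vectors.
for delta in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(delta, round(fpow(base, 1.0) @ fpow(base, 1.0 + delta), 3))
```
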
  • Publication number: 20200050919
    Abstract: The present invention relates to methods and systems for encoding and processing symbol structures using vector-derived transformation binding. The system comprises a plurality of binding subsystems that implement a vector-derived transformation binding operation, a plurality of unbinding subsystems that implement a vector-derived transformation unbinding operation, a plurality of cleanup subsystems that match noisy or corrupted vectors to their uncorrupted counterparts, and at least one input symbol representation that propagates activity through the binding subsystem, the unbinding subsystem, and the cleanup subsystem to produce high-dimensional vector representations of symbolic structures. The binding, the unbinding, and the cleanup subsystems are artificial neural networks implemented in network layers.
    Type: Application
    Filed: August 9, 2019
    Publication date: February 13, 2020
    Inventors: Jan Gosmann, Christopher David Eliasmith
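
A plain-numpy sketch in the spirit of the vector-derived transformation binding this application (20200050919) describes: one vector is reshaped into a block-diagonal transformation of the other, the transpose approximately unbinds, and a nearest-neighbor lookup stands in for the cleanup subsystem. Sizes and the codebook are illustrative, and the patent's neural-network implementation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 16
D = m * m                           # vector dimension must be a square

def vec():
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def vtb_matrix(y):
    # Reshape y into an m x m block, scale so the matrix is approximately
    # orthogonal, and tile it down the diagonal.
    Y = np.sqrt(m) * y.reshape(m, m)
    M = np.zeros((D, D))
    for k in range(m):
        M[k * m:(k + 1) * m, k * m:(k + 1) * m] = Y
    return M

def bind(x, y):
    return vtb_matrix(y) @ x

def unbind(z, y):                   # transpose approximately inverts
    return vtb_matrix(y).T @ z

x, y = vec(), vec()
noisy = unbind(bind(x, y), y)

# Cleanup: match the noisy result back to the closest known vector.
codebook = {"x": x, "y": y}
print(max(codebook, key=lambda k: codebook[k] @ noisy))  # -> "x"
```
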
  • Publication number: 20200050926
    Abstract: The present invention relates to digital circuits for evaluating neural engineering framework style neural networks. The circuits comprise at least one on-chip memory, a plurality of non-linear components, an external system, a first spatially parallel matrix multiplication, a second spatially parallel matrix multiplication, an error signal, a plurality of sets of factorized network weights, and an input signal. The plurality of sets of factorized network weights comprises a first set of factorized network weights and a second set of factorized network weights. The first spatially parallel matrix multiplication combines the input signal with the first set of factorized network weights, called the encoder weight matrix, to produce an encoded value. The non-linear components are hardware-simulated neurons which accept said encoded value to produce a distributed neural activity.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 13, 2020
    Inventors: Benjamin Jacob Morcos, Christopher David Eliasmith, Nachiket Ganesh Kapre
  • Publication number: 20200019837
    Abstract: Methods and apparatus for spiking neural network computing based on e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one exemplary embodiment, a multi-layer mixed-signal kernel is disclosed that uses different characteristics of its constituent stages to perform neuromorphic computing. Specifically, analog domain processing inexpensively provides diversity, speed, and efficiency, whereas digital domain processing enables a variety of complex logical manipulations (e.g., digital noise rejection, error correction, arithmetic manipulations, etc.). Isolating different processing techniques into different stages between the layers of a multi-layer kernel results in substantial operational efficiencies.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Charles Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
  • Publication number: 20200019838
    Abstract: Methods and apparatus for spiking neural network computing based on e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. A shared dendrite is disclosed that represents the encoding weights of a spiking neural network as tap locations within a mesh of resistive elements. Instead of calculating encoded digital spikes with arithmetic operations, the shared dendrite attenuates current signals as an inherent physical property of tap distance. The disclosed embodiments can approach a desired distribution (e.g., uniform distribution on the D-dimensional unit hypersphere's surface) given a large enough population of computational primitives.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Charles Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
  • Publication number: 20200019839
    Abstract: Methods and apparatus for spiking neural network computing based on e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one embodiment, a thresholding accumulator is disclosed that reduces spiking activity between different stages of a neuromorphic processor. Spiking activity can be directly related to power consumption and signal-to-noise ratio (SNR); thus, various embodiments trade-off the costs and benefits associated with threshold accumulation. For example, reducing spiking activity (e.g., by a factor of 10) during an encoding stage can have minimal impact on downstream fidelity (SNR) for a decoding stage, while yielding substantial improvements in power consumption.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
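
The thresholding accumulator in this application (20200019839) can be sketched in a few lines: weighted input spikes accumulate, and an output event is emitted only on threshold crossings, so raising the threshold trades output traffic (power) against fidelity (SNR). The rates, weight, and threshold below are illustrative.

```python
import numpy as np

# Accumulate weighted input spikes; fire one output event per threshold
# crossing, decrementing the accumulator so the residual is carried over.

rng = np.random.default_rng(6)
T = 10000
in_spikes = (rng.random(T) < 0.2).astype(float)   # input spike train
w = 0.7                                           # encoding weight
threshold = 10.0                                  # raise to emit fewer events

acc, out_events = 0.0, 0
for t in range(T):
    acc += w * in_spikes[t]
    if acc >= threshold:
        acc -= threshold
        out_events += 1

print(in_spikes.sum(), out_events)  # ~2000 input spikes -> ~140 output events
```
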