Digital Neuron Processor Patents (Class 706/43)
  • Patent number: 11775805
    Abstract: A log circuit for piecewise linear approximation is disclosed. The log circuit identifies an input associated with a logarithm operation to be performed using piecewise linear approximation. The log circuit then identifies a range that the input falls within from various ranges associated with piecewise linear approximation (PLA) equations for the logarithm operation, where the identified range corresponds to one of the PLA equations. The log circuit computes a result of the corresponding PLA equation based on the respective operands of the equation. The log circuit then returns an output associated with the logarithm operation, which is based at least partially on the result of the PLA equation.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: October 3, 2023
    Assignee: Intel Corporation
    Inventors: Kamlesh Pillai, Gurpreet S. Kalsi, Amit Mishra
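    Illustrative sketch: a minimal Python model of the range-selection-plus-linear-equation flow in the abstract above, approximating log2(x). The four-segment chord fit, the use of frexp for exponent extraction, and all names are assumptions for illustration, not the circuit's actual parameters.

      import math

      # Hypothetical PLA table for log2(m), mantissa m in [1, 2): each entry is
      # (range_start, range_end, slope, intercept) from a chord fit on that range.
      def _chord(a, b):
          slope = (math.log2(b) - math.log2(a)) / (b - a)
          return a, b, slope, math.log2(a) - slope * a

      PLA_SEGMENTS = [_chord(1.0 + i * 0.25, 1.25 + i * 0.25) for i in range(4)]

      def pla_log2(x):
          """Approximate log2(x): identify the range the mantissa falls in, then
          evaluate that range's linear equation."""
          m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
          m, e = 2.0 * m, e - 1         # normalize mantissa to [1, 2)
          for lo, hi, slope, intercept in PLA_SEGMENTS:
              if lo <= m < hi:
                  return e + slope * m + intercept
          raise ValueError("input must be positive and finite")

      if __name__ == "__main__":
          for x in (1.5, 3.0, 10.0, 1000.0):
              print(x, round(pla_log2(x), 4), round(math.log2(x), 4))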
  • Patent number: 11687765
    Abstract: In one aspect, a system for analog in-memory compute for a neural network includes an array of neurons. Each neuron of the array of neurons receives a pulse of magnitude xi and duration t, wherein a product xi*yi provides a current proportional to the input for a time duration t, which is a charge associated with a particular neuron in response to the input being presented to that particular neuron. A reference cell includes a gate-drain connected flash cell configuration and is coupled with the array of neurons. The reference cell is programmed to a pre-determined threshold voltage Vt-ref. The reference cell receives a pre-determined current, Iref, wherein, based on the Iref and a pre-determined threshold voltage Vt-ref, a voltage is created at a drain of the reference cell.
    Type: Grant
    Filed: February 22, 2020
    Date of Patent: June 27, 2023
    Inventors: Vishal Sarin, Vikram Kowshik, Sankha Subhra Saha
  • Patent number: 11354562
    Abstract: Numerous embodiments for processing the current output of a vector-by-matrix multiplication (VMM) array in an artificial neural network are disclosed. The embodiments comprise a summer circuit and an activation function circuit. The summer circuit and/or the activation function circuit comprise circuit elements that can be adjusted in response to the total possible current received from the VMM to optimize power consumption.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 7, 2022
    Assignee: SILICON STORAGE TECHNOLOGY, INC.
    Inventors: Hieu Van Tran, Stanley Hong, Anh Ly, Thuan Vu, Hien Pham, Kha Nguyen, Han Tran
  • Patent number: 11270196
    Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, the neural inference chip is adapted to: receive an input activation tensor comprising a plurality of input activations; receive a weight tensor comprising a plurality of weights; Booth recode each of the plurality of weights into a plurality of Booth-coded weights, each Booth coded value having an order; multiply the input activation tensor by the Booth coded weights, yielding a plurality of results for each input activation, each of the plurality of results corresponding to the orders of the Booth-coded weights; for each order of the Booth-coded weights, sum the corresponding results, yielding a plurality of partial sums, one for each order; and compute a neural activation from a sum of the plurality of partial sums.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 8, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jun Sawada, Filipp A. Akopyan, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Dharmendra S. Modha, Tapan K. Nayak, Carlos O. Otero
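    Illustrative sketch: a plain-Python/NumPy model of the per-order partial-sum computation described in the abstract above, using radix-4 Booth recoding. The 8-bit weight width, the radix, and the function names are assumptions made for the example.

      import numpy as np

      def booth_radix4(w, bits=8):
          """Recode a two's-complement integer into radix-4 Booth digits in
          {-2,-1,0,1,2}; w == sum(d * 4**k for k, d in enumerate(digits))."""
          if w < 0:
              w += 1 << bits                                   # two's-complement bit pattern
          b = [0] + [(w >> i) & 1 for i in range(bits)]        # b[0]: implicit bit below the LSB
          return [-2 * b[2 * k + 2] + b[2 * k + 1] + b[2 * k] for k in range(bits // 2)]

      def booth_neuron(x, w, bits=8):
          """Dot product via order-wise partial sums: one small product per (input,
          order), one partial sum per order, then a weighted sum of partial sums."""
          digits = np.array([booth_radix4(int(wi), bits) for wi in w])   # shape (n, orders)
          results = digits * np.asarray(x)[:, None]                      # per-input, per-order results
          partial_sums = results.sum(axis=0)                             # one partial sum per order
          return int(sum(int(ps) * 4 ** k for k, ps in enumerate(partial_sums)))

      if __name__ == "__main__":
          x, w = [1, 2, 3], [-3, 5, 127]
          assert booth_neuron(x, w) == int(np.dot(x, w))                 # 388
          print(booth_neuron(x, w))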
  • Patent number: 11080593
    Abstract: An implementation of neural networks on silicon for the processing of various signals, including multidimensional signals such as images, is disclosed. An efficient implementation on silicon of a complete processing chain for the signal via the neural network approach is provided. The circuit comprises at least: a series of neuro-blocks grouped together in branches composed of a group of neuro-blocks and a broadcasting bus, the neuro-blocks connected to the broadcasting bus; a routing unit connected to the broadcasting bus of the branches, carrying out the routing and broadcasting of data to and from the branches; a transformation module connected to the routing unit via an internal bus and designed to be connected at the input of the circuit to an external data bus, the module carrying out the transformation of input data into serial coded data. The processing operations internal to the circuit are carried out according to a serial communications protocol.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: August 3, 2021
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Marc Duranton, Jean-Marc Philippe, Michel Paindavoine
  • Patent number: 10769527
    Abstract: Systems and methods for accelerating artificial neural network computation are disclosed. An example may comprise selecting, by a controller communicatively coupled to a selector and an arithmetic unit and based on a criterion, an input value from the stream of input values of a neuron, configuring, by the controller, the selector to provide, dynamically, the selected input value to the arithmetic unit, providing, by the controller to the arithmetic unit, information about the selected input value, acquiring, by the arithmetic unit and based on the information, a weight from a set of weights, and performing, by the arithmetic unit, a mathematical operation on the selected input value and the weight to obtain a result, wherein the result is to be used to compute an output of the neuron. The criterion may include a comparison between the input value and a reference value. The reference value may include zero.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: September 8, 2020
    Assignee: Mipsology SAS
    Inventors: Sebastien Delerse, Ludovic Larzul, Benoit Chappet de Vangel, Taoufik Chouta
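    Illustrative sketch: a behavioral Python model of the selection step in the abstract above, where input values equal to a reference (zero by default) are skipped, so no multiply is issued for them and the selected value's position fetches the matching weight. The names and the equality criterion are assumptions.

      import numpy as np

      def sparse_neuron_output(inputs, weights, reference=0.0):
          """Accumulate w*x only for selected inputs; values equal to the
          reference are skipped."""
          acc = 0.0
          for index, x in enumerate(inputs):          # stream of input values of a neuron
              if x == reference:                      # criterion: comparison with the reference value
                  continue
              acc += float(x) * weights[index]        # operation on the selected input and its weight
          return acc

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.integers(-2, 3, size=16) * (rng.random(16) > 0.5)    # sparse activations
          w = rng.standard_normal(16)
          assert np.isclose(sparse_neuron_output(x, w), float(np.dot(x, w)))
          print(sparse_neuron_output(x, w))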
  • Patent number: 10255205
    Abstract: Systems and methods of implementing a mixed-signal integrated circuit include sourcing, by a reference signal source, a plurality of analog reference signals along a shared signal communication path to a plurality of local accumulators; producing an electrical charge, at each of the plurality of local accumulators, based on each of the plurality of analog reference signals; adding or subtracting, by each of the plurality of local accumulators, the electrical charge to an energy storage device of each of the plurality of local accumulators over a predetermined period; summing along the shared communication path the electrical charge from the energy storage device of each of the plurality of local accumulators at an end of the predetermined period; and generating an output based on a sum of the electrical charge from each of the plurality of local accumulators.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: April 9, 2019
    Assignee: Mythic, Inc.
    Inventors: Laura Fick, Manar El-Chammas, Skylar Skrzyniarz, David Fick
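    Illustrative sketch: a discrete-time behavioral model of the charge-domain flow in the abstract above: shared reference samples drive every local accumulator, each accumulator adds or subtracts the resulting charge onto its storage over the period, and the stored charges are summed at the end. The numeric treatment and all names are modeling assumptions, not the actual circuit.

      import numpy as np

      def mixed_signal_sum(ref_samples, signs):
          """Sum of charges accumulated by local accumulators driven from a shared
          reference-signal path; `signs` holds +1 (add) or -1 (subtract) per accumulator."""
          ref = np.asarray(ref_samples, dtype=float)        # samples over the predetermined period
          signs = np.asarray(signs, dtype=float)
          stored = np.zeros_like(signs)
          for sample in ref:                                # one step of the accumulation period
              stored += signs * sample                      # add/subtract charge on each storage device
          return stored.sum()                               # summed along the shared path at period end

      if __name__ == "__main__":
          ramp = np.linspace(0.0, 1.0, 8)                   # a simple analog reference ramp
          print(mixed_signal_sum(ramp, [+1, -1, +1, +1]))   # -> 2 * ramp.sum() = 8.0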
  • Patent number: 9628517
    Abstract: An approach is provided that, upon receiving a keyboard event, reduces a volume of an audio input channel from a first volume level to a lower volume level. After the volume of the audio input channel is reduced, the approach waits until a system event occurs, with the system event based at least in part on the occurrence of a nondeterministic event.
    Type: Grant
    Filed: March 30, 2010
    Date of Patent: April 18, 2017
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Carlos Munoz-Bustamante, Joseph Michael Pennisi, Randall Scott Springfield, Ephraim D. Starr, Yasushi Tsukamoto, Rod D. Waltermann
  • Patent number: 9129220
    Abstract: Certain embodiments of the present disclosure support implementation of a digital neural processor with discrete-level synapses and probabilistic synapse weight training.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: September 8, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Vladimir Aparin, Subramaniam Venkatraman
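    Illustrative sketch: one common reading of probabilistic training of discrete-level synapses is stochastic rounding, where the fractional part of a real-valued update becomes the probability of moving one extra level, so the expected change matches the update. This reading, and all names and values, are assumptions offered for illustration; the patent's actual rule may differ.

      import numpy as np

      def probabilistic_update(level, delta, num_levels, rng):
          """Apply real-valued update `delta` to a synapse stored as one of
          `num_levels` discrete levels; E[change] == delta before clipping."""
          whole = int(np.floor(delta))
          frac = delta - whole
          step = whole + int(rng.random() < frac)        # take the extra level with probability frac
          return int(np.clip(level + step, 0, num_levels - 1))

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          moves = [probabilistic_update(8, 0.3, 16, rng) - 8 for _ in range(100000)]
          print(sum(moves) / len(moves))                 # close to 0.3 on average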
  • Patent number: 8676734
    Abstract: Certain embodiments of the present disclosure support techniques for storing synaptic weights separately from a neuro-processor chip into a replaceable storage. The replaceable synaptic memory gives a unique functionality to the neuro-processor and improves its flexibility for supporting a large variety of applications. In addition, the replaceable synaptic storage can provide more choices for the type of memory used, and might decrease the area and implementation cost of the overall neuro-processor chip.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: March 18, 2014
    Assignee: QUALCOMM, Incorporated
    Inventor: Vladimir Aparin
  • Patent number: 7908235
    Abstract: A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) to model a large number of neural elements. The FPGAs or similar programmable device can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model.
    Type: Grant
    Filed: November 17, 2009
    Date of Patent: March 15, 2011
    Assignee: Neurosciences Research Foundation, Inc.
    Inventors: James A. Snook, Richard W. Schermerhorn
  • Publication number: 20100161533
    Abstract: A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) to model a large number of neural elements. The FPGAs or similar programmable device can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model.
    Type: Application
    Filed: November 17, 2009
    Publication date: June 24, 2010
    Applicant: NEUROSCIENCES RESEARCH FOUNDATION, INC.
    Inventors: James A. Snook, Richard W. Schermerhorn
  • Patent number: 7620819
    Abstract: We develop a system consisting of a neural architecture resulting in classifying regions corresponding to users' keystroke patterns. We extend the adaptation properties to classification phase resulting in learning of changes over time. Classification results on login attempts of 43 users (216 valid, 657 impersonation samples) show considerable improvements over existing methods.
    Type: Grant
    Filed: September 29, 2005
    Date of Patent: November 17, 2009
    Assignees: The Penn State Research Foundation, Louisiana Tech University Foundation, Inc.
    Inventors: Vir V. Phoha, Sunil Babu, Asok Ray, Shashi P. Phoha
  • Patent number: 7516152
    Abstract: A computing system and method for generating and selecting data mining models. The computing system comprises a computer readable medium and computing devices electrically coupled through an interface apparatus. A data mining modeling algorithm is stored on the computer readable medium. Each of the computing devices comprises at least one central processing unit (CPU) and an associated memory device. Each of the associated memory devices comprises a data subset from a plurality of data subsets. A technique is selected for generating a data mining model applied to each of the data subsets. The data mining modeling algorithm is run simultaneously, on each of the computing devices, using the selected technique to generate an associated data mining model on each of the computing devices. A best data mining model from the generated data mining models is determined in accordance with the selected technique.
    Type: Grant
    Filed: July 5, 2005
    Date of Patent: April 7, 2009
    Assignee: International Business Machines Corporation
    Inventors: Milind Chitgupakar, Mark S. Ramsey, David A. Selby
  • Patent number: 7512572
    Abstract: A memory configuration for use in a computer system includes a plurality of address decoders, each of which is allocated an identifier having a predetermined number of bits, each bit having first and second selectable states. A data memory having a plurality of word lines of predetermined length is also included, and each of the address decoders is activatable to select one of the plurality of word lines. The address decoders receive an input address having a predetermined number of bits and compare the identifier of an address decoder with the input address, wherein the memory further activates an address decoder if at least a predetermined minimum number of bits set to the first selectable state in the input address correspond to bits set to the first selectable state in the decoder identifier.
    Type: Grant
    Filed: October 14, 2002
    Date of Patent: March 31, 2009
    Assignee: Cogniscience Limited
    Inventor: Stephen B. Furber
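    Illustrative sketch: a NumPy model of the activation rule in the abstract above, where a decoder fires when at least a minimum number of 1-bits in the input address coincide with 1-bits in its identifier. Representing addresses as 0/1 arrays and the names used are assumptions.

      import numpy as np

      def active_decoders(input_address, identifiers, min_matches):
          """Indices of address decoders activated by `input_address`: a decoder
          fires when at least `min_matches` bits set to 1 in the input address
          are also set to 1 in its identifier."""
          addr = np.asarray(input_address)
          ids = np.asarray(identifiers)                 # shape: (num_decoders, address_bits)
          matches = (ids & addr).sum(axis=1)            # count coinciding 1-bits per decoder
          return np.flatnonzero(matches >= min_matches)

      if __name__ == "__main__":
          ids = np.array([[1, 0, 1, 1, 0, 1],
                          [0, 1, 1, 0, 1, 0],
                          [1, 1, 0, 1, 1, 1]])
          print(active_decoders([1, 0, 1, 0, 0, 1], ids, min_matches=2))   # decoders 0 and 2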
  • Patent number: 7430546
    Abstract: An information processing system having neuron-like signal processors that are interconnected by synapse-like processing junctions simulates and extends the capabilities of biological neural networks. The information processing system uses integrate-and-fire neurons and Temporally Asymmetric Hebbian learning (spike timing-dependent learning) to adapt the synaptic strengths. The synaptic strengths of each neuron are guaranteed to become optimal during the course of learning either for estimating the parameters of a dynamic system (system identification) or for computing the first principal component. This neural network is well-suited for hardware implementations, since the learning rule for the synaptic strengths only requires computing either spike-time differences or correlations. Such hardware implementation may be used for predicting and recognizing audiovisual information or for improving cortical processing by a prosthetic device.
    Type: Grant
    Filed: June 2, 2004
    Date of Patent: September 30, 2008
    Inventor: Roland Erwin Suri
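    Illustrative sketch: a minimal spike-timing-dependent (temporally asymmetric Hebbian) update of the kind the abstract above relies on, computed from a single spike-time difference. The exponential window and the constants are conventional choices, not values taken from the patent.

      import math

      def stdp_update(weight, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          """Potentiate when the presynaptic spike precedes the postsynaptic one
          (dt > 0), depress otherwise; magnitude decays with |dt| / tau."""
          dt = t_post - t_pre                      # spike-time difference in ms
          if dt > 0:
              return weight + a_plus * math.exp(-dt / tau)
          return weight - a_minus * math.exp(dt / tau)

      if __name__ == "__main__":
          print(stdp_update(0.5, t_pre=10.0, t_post=15.0))   # pre before post: weight increases
          print(stdp_update(0.5, t_pre=15.0, t_post=10.0))   # post before pre: weight decreases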
  • Patent number: 7313551
    Abstract: The invention relates to a processing device for an automatic perception system that involves the use of STN calculation units (1) which receive data from a data bus (7) and which are interconnected by a backannotation bus (6). According to the invention, the units are grouped together in hierarchical sets, in which the set of order 0 is formed by a single unit, the set of order 1 is formed by the combination of several order-0 sets, and the sets of order P greater than 1 are formed by a combination of sets of the lower order P-1, the hierarchised sets of a given order P sharing a backannotation bus. The backannotation buses between a lower order P and a greater order P+1 are interconnected by means of a connection unit. The invention also relates to the method of using the device.
    Type: Grant
    Filed: August 9, 2002
    Date of Patent: December 25, 2007
    Inventor: Patrick Pirim
  • Patent number: 7174325
    Abstract: Disclosed is a digital neural processor comprising at least one neural processing element. Each neural processing element includes at least one simulated dendrite and a simulated axon. Each of the simulated dendrites may include: a dendrite input capable of receiving at least one dendrite input signal and a dendrite signal propagation function capable of calculating a dendrite output signal in discrete time steps from each dendrite input signal. The signal propagation function may further include a delay parameter; a duration parameter; and an amplitude parameter. The simulated axon includes an axon input capable of receiving dendrite output signals, an axon function capable of calculating an axon output signal from dendrite output signal(s), and an axon output capable of outputting the axon output signal.
    Type: Grant
    Filed: May 29, 2003
    Date of Patent: February 6, 2007
    Assignee: George Mason Intellectual Properties, Inc.
    Inventor: Giorgio A. Ascoli
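    Illustrative sketch: a discrete-time Python model of the dendrite/axon structure in the abstract above: each dendrite turns an input spike into a pulse defined by delay, duration, and amplitude parameters, and the axon function combines the dendrite outputs. The thresholding axon function and all parameter values are assumptions.

      def dendrite_output(spike_times, t, delay=2, duration=3, amplitude=1.0):
          """Dendrite signal at discrete time step t: each input spike contributes
          `amplitude` during the window [spike + delay, spike + delay + duration)."""
          return sum(amplitude for s in spike_times if delay <= t - s < delay + duration)

      def axon_output(dendrite_signals, threshold=1.5):
          """A simple axon function: emit 1 when the combined dendrite signals
          reach the threshold, else 0."""
          return 1 if sum(dendrite_signals) >= threshold else 0

      if __name__ == "__main__":
          spikes_a, spikes_b = [0, 5], [1]
          for t in range(10):
              d = [dendrite_output(spikes_a, t),
                   dendrite_output(spikes_b, t, delay=1, duration=4, amplitude=0.8)]
              print(t, d, axon_output(d))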
  • Patent number: 7080054
    Abstract: An artificial neuron is formed from an input subcircuit, a capacitor free leaky integrator subcircuit, and an output switching subcircuit. The input subcircuit is configured to supply a pulsed input signal. The capacitor free leaky integrator subcircuit is configured to supply a parasitic capacitance and to utilize the parasitic capacitance to provide differing time constants for the rising and falling edges of an output signal produced in response to the pulsed input signal. The output switching subcircuit is configured to, upon receipt of a sufficient output signal from the capacitor free leaky integrator subcircuit, switch off the input subcircuit and to release a neuron firing signal.
    Type: Grant
    Filed: July 16, 2004
    Date of Patent: July 18, 2006
    Assignee: Idaho Research Foundation, Inc.
    Inventors: Richard B. Wells, Bruce Calvert Barnes
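    Illustrative sketch: a discrete-time behavioral stand-in for the circuit in the abstract above: a first-order state charges toward the pulsed input with a fast time constant and decays with a slower one, and crossing a threshold emits a firing event and resets the state, loosely mirroring the output switching subcircuit. All constants and the reset behavior are assumptions.

      def leaky_integrator(pulses, dt=1e-3, tau_rise=2e-3, tau_fall=10e-3, threshold=0.8):
          """Return the time steps at which the modeled neuron fires, given a
          0/1 pulse train; rising and falling edges use different time constants."""
          v, firings = 0.0, []
          for k, x in enumerate(pulses):
              tau = tau_rise if x > v else tau_fall      # asymmetric charge/discharge
              v += (x - v) * (dt / tau)                  # first-order step toward the input
              if v >= threshold:
                  firings.append(k)                      # neuron firing signal released
                  v = 0.0                                # state reset, input effectively cut off
          return firings

      if __name__ == "__main__":
          train = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
          print(leaky_integrator(train))                 # fires part-way through each pulse burst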
  • Patent number: 6947916
    Abstract: A computing machine capable of performing multiple operations using a universal computing unit is provided. The universal computing unit maps an input signal to an output signal. The mapping is initiated using an instruction that includes the input signal, a weight matrix, and an activation function. Using the instruction, the universal computing unit may perform multiple operations using the same hardware configuration. The computation that is performed by the universal computing unit is determined by the weight matrix and activation function used. Accordingly, the universal computing unit does not require any programming to perform a type of computing operation because the type of operation is determined by the parameters of the instruction, specifically, the weight matrix and the activation function.
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: September 20, 2005
    Assignee: Quicksilver Technology, Inc.
    Inventors: Fa-Long Luo, Bohumir Uvacek
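    Illustrative sketch: the mapping in the abstract above, where a single unit applies whatever weight matrix and activation function arrive with the instruction, so the same code (standing in for the same hardware) performs different operations. The names and the NumPy formulation are assumptions.

      import numpy as np

      def universal_unit(x, weight_matrix, activation):
          """One 'instruction': map the input signal to an output signal using the
          supplied weight matrix and activation function."""
          return activation(np.asarray(weight_matrix) @ np.asarray(x, dtype=float))

      if __name__ == "__main__":
          x = [0.5, -1.0, 2.0]
          # Same unit, two different operations chosen purely by the parameters:
          averaging = universal_unit(x, np.full((1, 3), 1 / 3), lambda v: v)
          relu_neuron = universal_unit(x, [[1.0, 2.0, -0.5]], lambda v: np.maximum(v, 0.0))
          print(averaging, relu_neuron)                  # [0.5] and [0.]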
  • Patent number: 6501294
    Abstract: A neuron circuit that can serve as a building block for a neural network implemented in an integrated circuit is disclosed. The neuron circuit includes a synapse circuit block and a neuron body circuit block. The synapse circuit block has three transistors, and the body of one of the three transistors is controlled by a weighted input. The neuron body circuit block includes a current mirror circuit, a summing circuit, and an inverter circuit. The neuron body circuit is coupled to the synapse circuit block to generate an output pulse.
    Type: Grant
    Filed: April 26, 2001
    Date of Patent: December 31, 2002
    Assignee: International Business Machines Corporation
    Inventors: Kerry Bernstein, Norman Jay Rohrer
  • Patent number: 6434541
    Abstract: An engine diagnostic system includes a bit-serial based recurrent neuroprocessor for processing data from an internal combustion engine in order to diagnose misfires in real-time and reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a candidate pool of neurons to achieve the successive hidden layers of the recurrent network topology.
    Type: Grant
    Filed: April 21, 1999
    Date of Patent: August 13, 2002
    Assignee: Ford Global Technologies, Inc.
    Inventors: Raoul Tawel, Nazeeh Aranki, Lee A. Feldkamp, Gintaras V. Puskorius, Kenneth A. Marko, John V. James
  • Patent number: 6151594
    Abstract: An artificial neuron, which may be implemented either in hardware or software, has only one significant processing element in the form of a multiplier. Inputs are first fed through gating functions to produce gated inputs. These gated inputs are then multiplied together to produce a product which is multiplied by a weight to produce the neuron output.
    Type: Grant
    Filed: August 22, 1994
    Date of Patent: November 21, 2000
    Assignee: Motorola, Inc.
    Inventor: Shay-Ping Thomas Wang
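    Illustrative sketch: a Python rendering of the single-multiplier neuron in the abstract above, reading the gating functions as small integer powers (a common polynomial-neuron formulation; this reading and all names are assumptions). The gated inputs are multiplied together and the product is scaled by one weight.

      def gated_neuron(inputs, gates, weight):
          """Neuron output: weight * product of gated inputs, where gate g maps an
          input x to x**g (g == 0 effectively gates the input out)."""
          product = 1.0
          for x, g in zip(inputs, gates):
              product *= x ** g
          return weight * product

      if __name__ == "__main__":
          # One polynomial term, e.g. w * x1**2 * x3:
          print(gated_neuron([2.0, 5.0, 3.0], [2, 0, 1], weight=0.25))   # 0.25 * 4 * 3 = 3.0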
  • Patent number: 6078190
    Abstract: The threshold value logic has a non-inverting circuit path (S) and an inverting circuit path (S') that are connected to at least one comparative weighting subcircuit (BC, BS). The non-inverting circuit path and the inverting circuit path preferably are of identical construction and each contain at least one neuron transistor (NT1, NT1'). The corresponding neuron transistor gates in the non-inverting circuit path and in the inverting circuit path are driven inversely with respect to one another.
    Type: Grant
    Filed: August 6, 1998
    Date of Patent: June 20, 2000
    Assignee: Siemens Aktiengesellschaft
    Inventors: Werner Weber, Roland Thewes, Andreas Luck
  • Patent number: 6041322
    Abstract: A digital artificial neural network (ANN) reduces memory requirements by storing a sample transfer function representing output values for multiple nodes. Each node receives an input value representing the information to be processed by the network. Additionally, the node determines threshold values indicative of boundaries for application of the sample transfer function for the node. From the input value received, the node generates an intermediate value. Based on the threshold values and the intermediate value, the node determines an output value in accordance with the sample transfer function.
    Type: Grant
    Filed: April 18, 1997
    Date of Patent: March 21, 2000
    Assignee: Industrial Technology Research Institute
    Inventors: Wan-Yu Meng, Cheng-Kai Chang, Hwai-Tsu Chang, Fang-Ru Hsu, Ming-Rong Lee
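    Illustrative sketch: a NumPy model of the shared sampled transfer function in the abstract above: one table of activation samples serves every node, threshold values mark the table boundaries, and a node maps its intermediate value to an output by saturating outside the thresholds and picking the nearest sample inside them. Table size, range, and the nearest-sample rule are assumptions.

      import numpy as np

      SAMPLES = 1.0 / (1.0 + np.exp(-np.linspace(-4.0, 4.0, 33)))   # shared sampled sigmoid
      LOWER, UPPER = -4.0, 4.0                                       # threshold values (table bounds)

      def node_output(intermediate):
          """Output value for a node's intermediate value, using the shared table."""
          if intermediate <= LOWER:
              return float(SAMPLES[0])                               # saturate below the lower threshold
          if intermediate >= UPPER:
              return float(SAMPLES[-1])                              # saturate above the upper threshold
          idx = round((intermediate - LOWER) / (UPPER - LOWER) * (len(SAMPLES) - 1))
          return float(SAMPLES[idx])                                 # nearest stored sample

      if __name__ == "__main__":
          for v in (-10.0, -1.0, 0.0, 2.5, 10.0):
              print(v, round(node_output(v), 4))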
  • Patent number: 5963930
    Abstract: A method and apparatus is presented for synthesizing a network for use with pulse frequency encoded signals that has a smoothly saturating transfer characteristic for large signals based on the use of delay and an OR-gate. When connected to the output of a pulse frequency type of neuron, it results in a sigmoidal activation function.
    Type: Grant
    Filed: June 30, 1995
    Date of Patent: October 5, 1999
    Assignees: Ricoh Company Ltd., Ricoh Corporation
    Inventors: David Geoffrey Stork, Ronald Craig Keesing
  • Patent number: 5857177
    Abstract: A neural network has a plurality of network neurons and a plurality of network connections which connect each network neuron to one or more other network neurons. Each network neuron has control parameters in the form of an associated threshold value and/or signal distribution when a signal is supplied to other network neurons, and supplies a signal to the output in response to a comparison between said threshold value and a signal received on the input. One or more network neurons serve as network inputs, which supply an output representation in dependence on the sensed parameters applied to the network input. The network has associated with it sensor neurons which register changes in the conditions under which the network works, and the control parameters of the network neurons are regulated in dependence on this.
    Type: Grant
    Filed: November 5, 1996
    Date of Patent: January 5, 1999
    Inventors: Preben Alstrøm, Dimitris Elias Stassinopoulos
  • Patent number: 5835682
    Abstract: A dynamical system analyser (10) incorporates a computer (22) to perform a singular value decomposition of a time series of signals from a nonlinear (possibly chaotic) dynamical system (14). Relatively low-noise singular vectors from the decomposition are loaded into a finite impulse response filter (34). The time series is formed into Takens' vectors each of which is projected onto each of the singular vectors by the filter (34). Each Takens' vector thereby provides the co-ordinates of a respective point on a trajectory of the system (14) in a phase space. A heuristic processor (44) is used to transform delayed co-ordinates by QR decomposition and least squares fitting so that they are fitted to non-delayed co-ordinates. The heuristic processor (44) generates a mathematical model to implement this transformation, which predicts future system states on the basis of respective current states. A trial system is employed to generate like co-ordinates for transformation in the heuristic processor (44).
    Type: Grant
    Filed: November 1, 1995
    Date of Patent: November 10, 1998
    Assignee: The Secretary of State for Defence in Her Britannic Majesty's Government of the United Kingdom of Great Britain and Northern Ireland
    Inventors: David S. Broomhead, Robin Jones, Martin Johnson
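    Illustrative sketch: a NumPy version of the numerical pipeline in the abstract above: form Takens' vectors from the time series, take an SVD, project onto the leading (relatively low-noise) singular vectors to obtain phase-space coordinates, and fit current coordinates to the next ones by least squares to get a one-step predictor. The embedding dimension, the number of retained vectors, and the names are assumptions; the QR-based heuristic processor is replaced here by numpy.linalg.lstsq.

      import numpy as np

      def takens_matrix(series, dim):
          """Stack delay (Takens') vectors of length `dim` from a scalar series."""
          return np.stack([series[i:i + dim] for i in range(len(series) - dim + 1)])

      def fit_phase_space_model(series, dim=7, keep=3):
          """SVD projection plus least-squares one-step predictor y[t+1] ~ y[t] @ A."""
          X = takens_matrix(np.asarray(series, dtype=float), dim)
          _, _, vt = np.linalg.svd(X, full_matrices=False)
          basis = vt[:keep].T                       # retained singular vectors (the FIR filter taps)
          Y = X @ basis                             # trajectory coordinates in phase space
          A, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
          return basis, A

      if __name__ == "__main__":
          t = np.arange(0.0, 40.0, 0.05)
          series = np.sin(t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
          basis, A = fit_phase_space_model(series)
          Y = takens_matrix(series, basis.shape[0]) @ basis
          print(np.max(np.abs(Y[1:] - Y[:-1] @ A)))   # small one-step prediction error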
  • Patent number: 5812993
    Abstract: A digital neural network architecture includes a forward cascade of layers of neurons, having one input channel and one output channel, for forward processing of data examples that include many data packets, and a backward cascade of layers of neurons, having one input channel and one output channel, for backward propagation learning of errors of the processed data examples, each packet being of a given size. The forward cascade is adapted to be fed, through the input channel, with a succession of data examples and to deliver a succession of partially and fully processed data examples each consisting of a plurality of packets. The fully processed data examples are delivered through the one output channel. Each one of the layers is adapted to receive as input in its input channel a first number of data packets per time unit and to deliver as output in its output channel a second number of data packets per time unit.
    Type: Grant
    Filed: February 27, 1997
    Date of Patent: September 22, 1998
    Assignee: Technion Research and Development Foundation Ltd.
    Inventors: Ran Ginosar, Nitzan Weinberg
  • Patent number: 5784536
    Abstract: A neural processor includes neural calculation apparatus (30, NQ, RQ) which normalize an input data X with respect to another input data Y. It performs a division of X by Y in order to determine a quotient Q. The calculation apparatus are programmed to calculate (30), by iteration, a series of contributions ΔQ_i which are used (NQ, RQ) to update a partial quotient QP which becomes the quotient Q at the end of calculation. The calculation can be performed on an arbitrary arithmetic base which determines the number of neurons utilized and also the accuracy of calculation. It is also possible to utilize a partial remainder RP. Several programming modes are presented.
    Type: Grant
    Filed: June 5, 1995
    Date of Patent: July 21, 1998
    Assignee: U.S. Philips Corporation
    Inventor: Yannick Deville
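    Illustrative sketch: a plain numerical rendering of division by accumulated contributions, in the spirit of the abstract above: at each iteration a digit contribution is chosen in a selectable base, added to the partial quotient QP, and removed from the partial remainder RP. The base trades iteration count against work per step, loosely mirroring the accuracy/neuron-count trade-off described; the neural formulation itself is not modeled, and all names are assumptions.

      def iterative_divide(x, y, base=4, digits=8):
          """Integer division of x by y (assuming x < y * base**digits) built from
          per-order contributions dq_i = d * base**(digits-1-i)."""
          qp, rp = 0, x                              # partial quotient QP, partial remainder RP
          for i in range(digits):
              scale = base ** (digits - 1 - i)
              d = rp // (y * scale)                  # digit for this order, in [0, base)
              qp += d * scale                        # add the contribution to QP
              rp -= d * y * scale                    # remove it from RP
          return qp, rp                              # x == qp * y + rp with 0 <= rp < y

      if __name__ == "__main__":
          assert iterative_divide(1234, 7) == divmod(1234, 7)
          print(iterative_divide(1234, 7))           # (176, 2)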