Modifiable Weight Patents (Class 706/39)
  • Patent number: 11715031
    Abstract: An information processing method includes acquiring first output data for input data of a first learning model, reference data for the input data, and second output data for the input data of a second learning model obtained by converting the first learning model; calculating first difference data corresponding to a difference between the first output data and the reference data and second difference data corresponding to a difference between the second output data and the reference data; and training the first learning model with use of the first difference data and the second difference data.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 1, 2023
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Yasunori Ishii, Yohei Nakata, Hiroaki Urabe
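The two-difference objective in the abstract above resembles a distillation-style loss: the original model is trained against both its own error and the error of its converted (e.g. quantized) counterpart. A minimal NumPy sketch, with all function and variable names being illustrative assumptions rather than the patent's terminology:

```python
import numpy as np

def two_difference_loss(first_out, second_out, reference):
    """Combine the error of the first learning model and of its
    converted counterpart into one training signal.
    The squared-error form is an illustrative assumption."""
    first_diff = first_out - reference    # first difference data
    second_diff = second_out - reference  # second difference data
    # Training the first model on both terms encourages the
    # converted model to track the reference data as well.
    return np.mean(first_diff ** 2) + np.mean(second_diff ** 2)

reference = np.array([1.0, 0.0])
loss = two_difference_loss(np.array([0.9, 0.1]),
                           np.array([0.8, 0.2]), reference)
```

In practice the gradient of this combined loss would be applied only to the first model's parameters, matching the abstract's "training first learning model" step.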
  • Patent number: 11636316
    Abstract: Broadly speaking, the present techniques exploit the properties of correlated electron materials for artificial neural networks and neuromorphic computing. In particular, the present techniques provide apparatuses/devices that comprise at least one correlated electron switch (CES) element and which may be used as, or to form, an artificial neuron or an artificial synapse.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: April 25, 2023
    Assignee: Cerfe Labs, Inc.
    Inventors: Lucian Shifren, Shidhartha Das, Naveen Suda, Carlos Alberto Paz de Araujo
  • Patent number: 11112785
    Abstract: Systems and methods for data collection and signal processing are disclosed, including a plurality of variable groups of analog sensor inputs, the analog sensors operationally coupled to an industrial environment. The inputs of the sensors may be received by an analog crosspoint switch, where the signals are monitored, data collection may be adaptively scheduled, front end signal conditioning may occur, and a noise value determined.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: September 7, 2021
    Assignee: Strong Force IoT Portfolio 2016, LLC
    Inventors: Charles Howard Cella, Gerald William Duffy, Jr., Jeffrey P. McGuckin, Mehul Desai
  • Patent number: 11074493
    Abstract: Boltzmann machine includes a plurality of circuit units each having an adder that adds weighted input signals and a comparison unit that compares an output signal of the adder with a threshold signal to output a binary output signal; and digital arithmetic units each generating the weighted input signals by weighting the binary output signal of the circuit units with a weight. The comparison unit has a first comparator that compares a thermal noise with a reference voltage to output a binary digital random signal, a DA converter that converts the digital random signal to an analog random signal and varies a magnitude of the analog random signal, and a second comparator that compares the output signal of the adder with the analog random signal to generate the binary output signal with a predetermined probability function.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: July 27, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Takumi Danjo, Sanroku Tsukamoto, Hirotaka Tamura
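The comparator scheme above realizes a stochastic Boltzmann-machine neuron: the weighted-sum output is compared not against a fixed threshold but against a scaled random analog signal, so the binary output follows a probability function. A software sketch, where the uniform noise model and all names are illustrative assumptions standing in for the hardware's thermal-noise path:

```python
import random

def stochastic_neuron(weighted_sum, rng, noise_scale=1.0):
    """Emit 1 when the adder output exceeds a random analog threshold.
    Scaling the noise varies the steepness of the probability function,
    playing the role of the DA converter's magnitude control."""
    analog_random = rng.uniform(-noise_scale, noise_scale)  # stands in for scaled thermal noise
    return 1 if weighted_sum > analog_random else 0

rng = random.Random(0)
# A strongly positive input fires essentially always; a strongly
# negative one essentially never, since the noise is bounded here.
fires = sum(stochastic_neuron(2.0, rng) for _ in range(1000))
rests = sum(stochastic_neuron(-2.0, rng) for _ in range(1000))
```

Inputs inside the noise range fire with intermediate probability, which is what lets the network anneal rather than settle greedily.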
  • Patent number: 11055608
    Abstract: A convolutional neural network is provided comprising artificial neurons arranged in layers, each comprising output matrices. An output matrix comprises output neurons and is connected to an input matrix, comprising input neurons, by synapses associated with a convolution matrix comprising weight coefficients associated with the output neurons of an output matrix. Each synapse consists of a set of memristive devices storing a weight coefficient of the convolution matrix. In response to a change of the output value of an input neuron, the neural network dynamically associates each set of memristive devices with an output neuron connected to the input neuron. The neural network comprises accumulator(s) for each output neuron; to accumulate the values of the weight coefficients stored in the sets of memristive devices dynamically associated with the output neuron, the output value of the output neuron being determined from the value accumulated in the accumulator(s).
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: July 6, 2021
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventor: Olivier Bichler
  • Patent number: 10628699
    Abstract: Event-based image feature extraction includes reducing an accumulated magnitude of a leaky integrate and fire (LIF) neuron based on a difference between a current time and a previous time; receiving an event input from a dynamic vision sensor (DVS) pixel at the current time; weighting the received input; adding the weighted input to the reduced magnitude to form an accumulated magnitude of the LIF neuron at the current time; and, if the accumulated magnitude reaches a threshold, firing the neuron and decreasing the accumulated magnitude.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: April 21, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Lior Zamir, Nathan Henri Levy
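The event-driven LIF update above can be written as a few lines of state arithmetic. The exponential leak and the parameter values below are illustrative assumptions; the abstract only requires that the accumulated magnitude is reduced according to the elapsed time before the weighted event is added:

```python
import math

def lif_update(state, event_weight, t_now, t_prev, tau=10.0, threshold=1.0):
    """One DVS event applied to a leaky integrate-and-fire neuron:
    decay the stored magnitude for the elapsed time, add the weighted
    event, and fire (decreasing the magnitude) on reaching threshold."""
    leaked = state * math.exp(-(t_now - t_prev) / tau)  # leak since last event
    accumulated = leaked + event_weight                 # add weighted input
    if accumulated >= threshold:
        return accumulated - threshold, True            # fire and decrease
    return accumulated, False

state, fired = lif_update(0.9, 0.5, t_now=1.0, t_prev=0.0)
```

Because the decay is computed lazily from timestamps, no work is done between events, which is the point of pairing LIF neurons with an event-based sensor.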
  • Patent number: 10565500
    Abstract: A spiking neural network (SNN) is implemented on a neuromorphic computer and includes a plurality of neurons and a plurality of synapses, a first subset of the synapses defining feed-forward connections from a first subset of the neurons to a second subset of the neurons, a second subset of the synapses defining recurrent connections between the second subset of neurons, and a third subset of the synapses defining feedback connections from the second subset of neurons to the first subset of neurons. A set of input vectors is provided to iteratively modify weight values of the plurality of synapses. Each iteration involves selectively enabling and disabling the third subset of synapses with a different one of the input vectors applied to the SNN. The weight values are iteratively adjusted to derive a solution to an equation comprising an unknown matrix variable and an unknown vector variable.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventor: Tsung-Han Lin
  • Patent number: 9092736
    Abstract: Certain embodiments of the present disclosure support techniques for training of synapses in biologically inspired networks. Only one device based on a memristor can be used as a synaptic connection between a pair of neurons. The training of synaptic weights can be achieved with a low current consumption. A proposed synapse training circuit may be shared by a plurality of incoming/outgoing connections, while only one digitally implemented pulse-width modulation (PWM) generator can be utilized per neuron circuit for generating synapse-training pulses. Only up to three phases of a slow clock can be used for both the neuron-to-neuron communications and synapse training. Some special control signals can be also generated for setting up synapse training events. By means of these signals, the synapse training circuit can be in a high-impedance state outside the training events, thus the synaptic resistance (i.e., the synaptic weight) is not affected outside the training process.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: July 28, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Vladimir Aparin, Yi Tang
  • Patent number: 8959040
    Abstract: A spike timing dependent plasticity (STDP) apparatus, neuromorphic synapse system and a method provide STDP processing of spike signals. The STDP apparatus includes a first leaky integrator to receive a first spike signal and a second leaky integrator to receive a second spike signal. An output of the first leaky integrator is gated according to the second spike signal to produce a first gated integrated signal and an output of the second leaky integrator is gated according to the first spike signal to produce a second gated integrated signal. The STDP apparatus further includes an output integrator to integrate a difference of the first and second gated integrated signals to produce a weighted signal. The system includes a synapse core and the STDP apparatus. The method includes integrating the spike signals, gating the integrated signals and integrating a difference of the gated integrated signals.
    Type: Grant
    Filed: March 8, 2012
    Date of Patent: February 17, 2015
    Assignee: HRL Laboratories, LLC
    Inventors: Jose Cruz-Albrecht, Peter Petre, Narayan Srinivasa
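The STDP apparatus above has a direct discrete-time analogue: each leaky integrator becomes a decaying spike trace, gating becomes sampling one trace at the other signal's spike times, and the output integrator accumulates the difference into a weight. Parameter values and names below are illustrative assumptions:

```python
def stdp_step(trace_pre, trace_post, weight, pre_spike, post_spike,
              leak=0.9, lr=0.1):
    """One time step of gated-leaky-integrator STDP: pre-before-post
    ordering potentiates the weight, the reverse ordering depresses it."""
    trace_pre = leak * trace_pre + pre_spike     # first leaky integrator
    trace_post = leak * trace_post + post_spike  # second leaky integrator
    gated_pre = trace_pre * post_spike           # gate pre trace by post spike
    gated_post = trace_post * pre_spike          # gate post trace by pre spike
    weight += lr * (gated_pre - gated_post)      # integrate the difference
    return trace_pre, trace_post, weight

# Pre fires at t=0 and post at t=2: a causal pairing, so the weight grows.
tp, tq, w = 0.0, 0.0, 0.5
for pre, post in [(1, 0), (0, 0), (0, 1)]:
    tp, tq, w = stdp_step(tp, tq, w, pre, post)
```

Swapping the spike order in the example would make `gated_post` the nonzero term and shrink the weight instead, reproducing the antisymmetric STDP curve.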
  • Patent number: 8768921
    Abstract: Embodiments of the present invention relate to an approach for reusing information/knowledge. Specifically, embodiments of the present invention provide an approach for retrieving previously stored data to satisfy queries (e.g., jobs/tickets) for solutions to problems while maintaining privacy/security of the data as well as ensuring the quality of the results. In a typical embodiment, a query for a solution to a problem is received and details are extracted therefrom. Using the details, a search is performed on a set of data stored in at least one computer storage device. Based on the search, a set of results will be generated and classified into a set of categories. In any event, the quality of each of the set of results will be assessed based on the usefulness of the set of results.
    Type: Grant
    Filed: October 20, 2011
    Date of Patent: July 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Sugata Ghosal, Anup K. Ghosh, Nandakishore Kambhatla, Rose C. Kanjirathinkal, Asidhara Lahiri, Debapriyo Majumdar, Shajith I. Mohamed, Karthik Visweswariah
  • Patent number: 8706624
    Abstract: Disclosed is a system and method for Facilitating Credit Transactions, which may allow for the division of a given purchase or cash-withdrawal transaction amount, into periodical installments by enabling the financing of said transaction.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: April 22, 2014
    Assignee: Pay It Simple Ltd.
    Inventors: Gil Don, Alon Feit, Victoria Niel Kraine
  • Patent number: 7904398
    Abstract: A neuron component and method for use in artificial neural networks (ANNs) with input synapses (204, 204b . . . 204n), where each synapse includes multiple weights called synapse weights (206-1, 206-2, 206-3). Each synapse further includes a facility to modulate, or gate, an input signal connected to the synapse by each of the respective synapse weights within the synapse, supplying the result of each modulating operation. The neuron also sums the results of all modulating operations and subjects the results to a transfer function. Each of the multiple weights associated with a given synapse may be specified to have its own weight-adjustment facility (214, 214b, 214c), with its own error-values (216, 216b, 216c) and its own specified learning rate. One aspect (1000) includes a separate sum (1018, 1018b) and transfer function (1020, 1020b) for each synapse weight.
    Type: Grant
    Filed: March 22, 2007
    Date of Patent: March 8, 2011
    Inventor: Dominic John Repici
  • Patent number: 7730086
    Abstract: A method of allocating a computer to service a request for a data set in a system having a plurality of computers. The method is implemented on a neural network having only an input layer having input nodes and an output layer having output nodes, where each output node is associated with a specific computer. Weights w(j,k) connect the input nodes to the output nodes. The method includes the steps of receiving a request for data set "I", inputting to the input layer a vector R(I) dependent upon the number of requests for the requested data over a predetermined period of time, and selecting a computer assignment associated with one of the output nodes to service the data request, where the output node selected is associated with a specific weight selected to minimize a predetermined metric measuring the distance between the vector R(I) and the weights w(I,k).
    Type: Grant
    Filed: February 1, 2007
    Date of Patent: June 1, 2010
    Assignees: Louisiana Tech University Foundation, Inc., Board of Supervisors of Louisiana State University Agricultural and Mechanical College on Behalf of the Louisiana State University Health Sciences Center
    Inventors: Vir V. Phoha, Sitharama S. Iyengar, Rajgopal Kannan
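The allocation rule above is a nearest-prototype selection: pick the output node whose weight vector lies closest to the request-frequency vector R(I). A minimal sketch, with Euclidean distance as an illustrative choice of the patent's unspecified metric:

```python
def allocate_computer(request_vector, weights):
    """Select the output node (computer) whose weight vector minimizes
    the distance to the request vector R(I). All names are illustrative."""
    def dist2(w):
        # Squared Euclidean distance; monotone in the true distance,
        # so minimizing it selects the same node.
        return sum((r - x) ** 2 for r, x in zip(request_vector, w))
    return min(range(len(weights)), key=lambda k: dist2(weights[k]))

weights = [[0.1, 0.9], [0.8, 0.2]]   # one weight vector per computer
computer = allocate_computer([0.7, 0.3], weights)
```

With only input and output layers, the whole "network" reduces to this competitive lookup, which is what makes the scheme cheap enough for per-request dispatch.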
  • Patent number: 7620819
    Abstract: We develop a system consisting of a neural architecture resulting in classifying regions corresponding to users' keystroke patterns. We extend the adaptation properties to classification phase resulting in learning of changes over time. Classification results on login attempts of 43 users (216 valid, 657 impersonation samples) show considerable improvements over existing methods.
    Type: Grant
    Filed: September 29, 2005
    Date of Patent: November 17, 2009
    Assignees: The Penn State Research Foundation, Louisiana Tech University Foundation, Inc.
    Inventors: Vir V. Phoha, Sunil Babu, Asok Ray, Shashi P. Phoha
  • Publication number: 20080243740
    Abstract: An apparatus includes a circuit element that requires calibration, a calibration circuit for use in calibrating the circuit element, and a clamping diode electrically connectable in a first path that includes the calibration circuit and electrically connectable in a second path that excludes the calibration circuit. The first path is for electrically connecting the calibration circuit and the circuit element, and the second path is for use in protecting the apparatus from electrostatic discharge. A switching circuit is used to switch the clamping diode between the first path and the second path.
    Type: Application
    Filed: April 2, 2007
    Publication date: October 2, 2008
    Inventor: Steven L. Hauptman
  • Patent number: 7398260
    Abstract: An Effector machine is a new kind of computing machine. When implemented in hardware, the Effector machine can execute multiple instructions simultaneously because every one of its computing elements is active. This greatly enhances the computing speed. By executing a meta program whose instructions change the connections in a dynamic Effector machine, the Effector machine can perform tasks that digital computers are unable to compute.
    Type: Grant
    Filed: March 2, 2004
    Date of Patent: July 8, 2008
    Assignee: Fiske Software LLC
    Inventor: Michael Stephen Fiske
  • Patent number: 7222112
    Abstract: A method, system and machine-readable storage medium for monitoring an engine using a cascaded neural network that includes a plurality of neural networks is disclosed. In operation, the method, system and machine-readable storage medium store data corresponding to the cascaded neural network. Signals generated by a plurality of engine sensors are then inputted into the cascaded neural network. Next, a second neural network is updated at a first rate, with an output of a first neural network, wherein the output is based on the inputted signals. In response, the second neural network outputs at a second rate, at least one engine control signal, wherein the second rate is faster than the first rate.
    Type: Grant
    Filed: January 27, 2006
    Date of Patent: May 22, 2007
    Assignee: Caterpillar Inc.
    Inventor: Evan Earl Jacobson
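The two-rate cascade above can be sketched as a nested loop: the first network digests sensor signals and refreshes the second network at a slow rate, while the second network emits control signals several times per slow update. The averaging "first network" and linear control law below are placeholders, not the patent's actual networks:

```python
def run_cascade(sensor_batches, fast_steps=4):
    """Two-rate cascade: one slow update of the second network per
    sensor batch, several fast control outputs per slow update.
    The averaging and the 2x control law are illustrative stand-ins."""
    controls = []
    slow_state = 0.0
    for batch in sensor_batches:
        slow_state = sum(batch) / len(batch)   # first network: slow update
        for _ in range(fast_steps):            # second network: fast outputs
            controls.append(2.0 * slow_state)  # placeholder control law
    return controls

controls = run_cascade([[1.0, 3.0], [2.0, 2.0]], fast_steps=2)
```

The structural point is the rate mismatch: the engine control signal must be produced faster than the sensor-fusion stage can be recomputed.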
  • Patent number: 7143073
    Abstract: The invention relates to generating a test suite of instructions for testing the operation of a processor. A fuzzy finite state machine with a plurality of states 2 and transitions 4 determined by weights W1, W2 . . . W10 is used to generate a sequence of instructions. The weights determine the next state as well as an instruction and operands for each state. The weights may be adapted based on the generated sequence and further sequences are generated.
    Type: Grant
    Filed: April 4, 2002
    Date of Patent: November 28, 2006
    Assignee: Broadcom Corporation
    Inventor: Geoff Barrett
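A fuzzy finite state machine of the kind described can be sketched as a weighted random walk in which the weights bias both the next state and the instruction emitted at each state. The transition tables and instruction strings below are illustrative assumptions:

```python
import random

def generate_sequence(transitions, instructions, start, length, rng):
    """Walk a weighted (fuzzy) FSM, emitting one instruction per
    visited state; the weights steer the generated test suite toward
    interesting instruction mixes."""
    state, sequence = start, []
    for _ in range(length):
        sequence.append(instructions[state])
        states, weights = zip(*transitions[state].items())
        state = rng.choices(states, weights=weights)[0]  # weighted next state
    return sequence

transitions = {0: {0: 1, 1: 3}, 1: {0: 2, 1: 2}}   # state -> {next: weight}
instructions = {0: "ADD r1, r2", 1: "LOAD r1, [r2]"}
seq = generate_sequence(transitions, instructions, 0, 5, random.Random(42))
```

Adapting the weights based on coverage of previously generated sequences, as the abstract suggests, would amount to updating the `transitions` table between runs.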
  • Patent number: 7143071
    Abstract: A method for changing the CPU frequency under control of a neural network. The neural network has m basis functions and n basis points that are connected together. The learning capability of the neural network is used to deduce basis weights from dummy environmental parameters and a dummy output vector. In an application procedure, environmental parameters are input to the basis points and basis vectors are calculated based on the basis functions. By integrating the multiplication of each basis vector and its corresponding basis weight, an output vector can be generated to determine a control signal so that the CPU can be controlled to raise or lower its operating frequency. In addition, if the parameters must be changed to follow a user's behavior, the fast learning function of a radial neural network can be used to adapt to each user's behavior.
    Type: Grant
    Filed: March 8, 2002
    Date of Patent: November 28, 2006
    Assignee: Via Technologies, Inc.
    Inventors: I-Larn Chen, Yuh-Dar Tseng
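The output computation described, basis responses to the environmental parameters multiplied by learned basis weights and integrated, is the standard radial-basis-function form. A sketch with Gaussian basis functions as an illustrative choice; all names and parameter values are assumptions:

```python
import math

def rbf_output(env_params, centers, basis_weights, sigma=1.0):
    """Radial-basis output: each basis point responds to the distance
    between the environmental parameters and its center, and the
    control output integrates response times basis weight."""
    out = 0.0
    for center, bw in zip(centers, basis_weights):
        d2 = sum((e - c) ** 2 for e, c in zip(env_params, center))
        out += bw * math.exp(-d2 / (2 * sigma ** 2))  # basis response x weight
    return out

centers = [[0.0], [1.0]]          # one center per basis point
basis_weights = [1.0, -1.0]       # deduced during learning
out = rbf_output([0.0], centers, basis_weights)
```

The sign or magnitude of `out` would then be thresholded into the raise/lower frequency control signal; that mapping is left unspecified by the abstract.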
  • Patent number: 7080053
    Abstract: A method for evolving appropriate connections among units in a neural network includes a) calculating weight changes at each existing connection and at incipient connections between units for each training example; and b) determining a K ratio using the weight changes, wherein the K ratio compares the weight change of existing connections with the weight change of incipient connections, and wherein if the K ratio exceeds a threshold, the method further includes b1) increasing a weight of the existing connection; and b2) creating new connections at the incipient connections. The method further includes c) pruning weak connections between the units.
    Type: Grant
    Filed: August 16, 2001
    Date of Patent: July 18, 2006
    Assignee: Research Foundation of State University of New York
    Inventors: Paul Adams, Kingsley J. A. Cox, John D. Pinezich
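The K-ratio rule can be sketched as a single comparison per connection site. This is a simplified reading of the claim language, ratio of existing to incipient weight change, with all names and the threshold value being assumptions:

```python
def evolve_connection(dw_existing, dw_incipient, weight, k_threshold=2.0):
    """Compare the correlated weight change at an existing connection
    with that at an incipient (potential) connection; when the K ratio
    exceeds the threshold, strengthen the existing connection and
    sprout new connections at the incipient sites."""
    k_ratio = dw_existing / dw_incipient
    if k_ratio > k_threshold:
        return weight + dw_existing, True   # strengthen and create
    return weight, False                    # leave topology unchanged

w, created = evolve_connection(0.3, 0.1, weight=1.0)
```

A full implementation would follow this with the pruning pass of step c), removing connections whose weights have decayed below some floor.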
  • Patent number: 7035834
    Abstract: A method, system and machine-readable storage medium for monitoring an engine using a cascaded neural network that includes a plurality of neural networks is disclosed. In operation, the method, system and machine-readable storage medium store data corresponding to the cascaded neural network. Signals generated by a plurality of engine sensors are then inputted into the cascaded neural network. Next, a second neural network is updated at a first rate, with an output of a first neural network, wherein the output is based on the inputted signals. In response, the second neural network outputs at a second rate, at least one engine control signal, wherein the second rate is faster than the first rate.
    Type: Grant
    Filed: May 15, 2002
    Date of Patent: April 25, 2006
    Assignee: Caterpillar Inc.
    Inventor: Evan Earl Jacobson
  • Patent number: 7016886
    Abstract: An artificial neuron includes inputs and dendrites, a respective one of which is associated with a respective one of the inputs. A respective dendrite includes a respective power series of weights. The weights in a given power of the power series represent a maximal projection. A respective power also may include at least one switch, to identify holes in the projections. By providing maximal projections, linear scaling may be provided for the maximal projections, and quasi-linear scaling may be provided for the artificial neuron, while allowing a lossless compression of the associations. Accordingly, hetero-associative and/or auto-associative recall may be accommodated for large numbers of inputs, without requiring geometric scaling as a function of input.
    Type: Grant
    Filed: August 9, 2002
    Date of Patent: March 21, 2006
    Assignee: Saffron Technology Inc.
    Inventors: David R. Cabana, Manuel Aparicio, IV, James S. Fleming
  • Patent number: 6999953
    Abstract: An analog neural computing medium, neuron and neural networks are disclosed. The neural computing medium includes a phase change material that has the ability to cumulatively respond to multiple input signals. Input signals induce transformations among a plurality of accumulation states of the disclosed neural computing medium. The accumulation states are characterized by a high electrical resistance. Upon cumulative receipt of energy from one or more input signals that equals or exceeds a threshold value, the neural computing medium fires by transforming to a low resistance state. The disclosed neural computing medium may also be configured to perform a weighting function whereby it weights incoming signals. The disclosed neurons may also include activation units for further transforming signals transmitted by the accumulation units according to a mathematical operation. The artificial neurons, weighting units, accumulation units and activation units may be connected to form artificial neural networks.
    Type: Grant
    Filed: July 3, 2002
    Date of Patent: February 14, 2006
    Assignee: Energy Conversion Devices, Inc.
    Inventor: Stanford R. Ovshinsky
  • Patent number: 6876989
    Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and to provide weight change data information to the weight updating circuit.
    Type: Grant
    Filed: February 13, 2002
    Date of Patent: April 5, 2005
    Assignee: Winbond Electronics Corporation
    Inventors: Bingxue Shi, Chun Lu, Lu Chen
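The circuit above pairs each neuron's activation with its first derivative so the back-propagation network can form weight changes directly. Reduced to a single sigmoid neuron with one weight, the arithmetic looks as follows; the sigmoid choice, learning rate, and names are illustrative assumptions:

```python
import math

def train_step(x, target, w, lr=0.5):
    """One forward/derivative/back-propagation pass: the neuron
    supplies both sigmoid(y) and its derivative, and the weight-
    updating stage applies the resulting weight change."""
    y = w * x
    act = 1.0 / (1.0 + math.exp(-y))   # activation function
    dact = act * (1.0 - act)           # its first derivative
    error = act - target               # back-propagated error
    return w - lr * error * dact * x   # weight change data applied

w = train_step(1.0, 1.0, w=0.0)
```

Producing `dact` alongside `act` in the same circuit is the patent's point: the backward pass never has to recompute or approximate the derivative.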
  • Patent number: 6844582
    Abstract: A learning method of a semiconductor device of the present invention uses a neuro device having a multiplier as a synapse, in which a weight varies according to an input weight voltage, functioning as a neural network system that processes analog data. The method comprises a step A of inputting predetermined input data to the neuro device and calculating an error between a target value of the output of the neuro device with respect to the input data and the actual output; a step B of calculating the variation in the error caused by varying a weight of the multiplier; and a step C of varying the weight of the multiplier based on the variation in the error. In the steps B and C, after a reset voltage for setting the weight to a substantially constant value is input to the multiplier as the weight voltage, the weight is varied by inputting the weight voltage corresponding to the weight to be varied.
    Type: Grant
    Filed: May 9, 2003
    Date of Patent: January 18, 2005
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Michihito Ueda, Kenji Toyoda, Takashi Ohtsuka, Kiyoyuki Morita
  • Patent number: 6792412
    Abstract: A system and method for controlling information output based on user feedback about the information that includes a plurality of information sources. At least one neural network module selects one or more of a plurality of objects to receive information from the plurality of information sources based on a plurality of inputs and a plurality of weight values during that epoch. At least one server, associated with the neural network module, provides one or more of the objects to a plurality of recipients. The recipients provide feedback during an epoch. At the conclusion of an epoch, the neural network takes the feedback that has been provided from the recipients and generates a rating value for each of the objects. Based on the rating value and selections made, the neural network redetermines the weight values. The neural network then selects the objects to receive information during a subsequent epoch.
    Type: Grant
    Filed: February 2, 1999
    Date of Patent: September 14, 2004
    Inventors: Alan Sullivan, Ivan Pope
  • Patent number: 6768927
    Abstract: The invention relates to a control system to which a state vector representing the states of a controlled system is applied. The control system provides a correcting variables vector of optimized correcting variables. The relation between state vector and correcting variables vector is defined by a matrix of weights. These weights depend on the solution of the state dependent Riccati equation. An equation solving system for solving the state dependent Riccati equation in real time is provided. The state vector is applied to this equation solving system. The solution of the state dependent Riccati equation is transferred to the control system to determine the weights.
    Type: Grant
    Filed: March 29, 2001
    Date of Patent: July 27, 2004
    Assignee: Bodenseewerk Geratetechnik GmbH
    Inventor: Uwe Krogmann
  • Patent number: 6735578
    Abstract: A system and a method for automated intelligent information mining include receiving product-related queries and respective product-related information from various text sources; extracting multiple key-phrases from the product-related information and the received queries; and generating two or more layers of contextual relation maps by mapping the extracted key-phrases to two-dimensional maps using a self-organizing map together with a technique combining a Hessian matrix and a perturbation technique to enhance the learning process and to categorize the extracted key-phrases based on contextual meaning. Further, the technique includes forming word clusters and constructing corresponding key-phrase frequency histograms for each of the generated contextual relation maps.
    Type: Grant
    Filed: May 10, 2001
    Date of Patent: May 11, 2004
    Assignee: Honeywell International Inc.
    Inventors: Ravindra K. Shetty, Venkatesan Thyagarajan
  • Patent number: 6665651
    Abstract: A feedback control system for automatic on-line training of a controller for a plant, the system having a reinforcement learning agent connected in parallel with the controller. The learning agent comprises an actor network and a critic network operatively arranged to carry out at least one sequence of a stability phase followed by a learning phase. During the stability phase, a multi-dimensional boundary of values is determined. During the learning phase, a plurality of updated weight values is generated in connection with the on-line training, if and until one of the updated weight values reaches the boundary, at which time a next sequence is carried out to determine a next multi-dimensional boundary of values followed by a next learning phase.
    Type: Grant
    Filed: July 18, 2002
    Date of Patent: December 16, 2003
    Assignee: Colorado State University Research Foundation
    Inventors: Peter M. Young, Charles Anderson, Douglas C. Hittle, Matthew Kretchmar
  • Patent number: 6665639
    Abstract: A method and apparatus are described that allow inexpensive speech recognition in applications where this capability is not otherwise feasible because of cost or technical reasons, or because of inconvenience to the user. A relatively simple speaker independent recognition algorithm, capable of recognizing a limited number of utterances at any one time, is associated with the base unit of an electronics product. To function, the product requires information from an external medium and this medium also provides the data required to recognize several sets of utterances pertinent to other information provided by the external medium.
    Type: Grant
    Filed: January 16, 2002
    Date of Patent: December 16, 2003
    Assignee: Sensory, Inc.
    Inventors: Todd F. Mozer, Forrest S. Mozer, Thomas North
  • Patent number: 6654730
    Abstract: When neuron operations are computed in parallel using a large number of arithmetic units, arithmetic units for neuron operations and arithmetic units for error signal operations need not be provided separately, and a neural network arithmetic apparatus that consumes less bus bandwidth is provided for updating synapse connection weights. Operation results of the arithmetic units and setting information of a master node are exchanged between them through a local bus. During neuron operations, partial sums of neuron output values from the arithmetic units are accumulated by the master node to generate and output a neuron output value, and the arithmetic unit to which the operations of a specific neuron are assigned receives and stores the neuron output value outputted from the master node.
    Type: Grant
    Filed: November 1, 2000
    Date of Patent: November 25, 2003
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Noriji Kato, Hirotsugu Kashimura, Hitoshi Ikeda, Nobuaki Miyakawa
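The master node's role above is a reduction: each arithmetic unit computes a slice of the weighted-input sum, and the master accumulates the partial sums before the neuron output is formed. A minimal sketch, with ReLU as an illustrative choice for the unstated output function:

```python
def neuron_output(partial_sums, activation=lambda s: max(0.0, s)):
    """Accumulate partial sums produced by parallel arithmetic units
    (as the master node does over the local bus) and apply an
    activation to produce the neuron output value."""
    return activation(sum(partial_sums))

# Three units each computed one slice of the weighted-input sum.
out = neuron_output([0.4, -0.1, 0.3])
```

Only partial sums and the final output value cross the bus, which is where the bandwidth saving over per-weight traffic comes from.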
  • Patent number: 6581049
    Abstract: An artificial neuron includes inputs and dendrites, a respective one of which is associated with a respective one of the inputs. Each dendrite includes a power series of weights, and each weight in a power series includes an associated count for the associated power. The power series of weights preferably is a base-two power series of weights, each weight in the base-two power series including an associated count that represents a bit position. The counts for the associated power preferably are statistical counts. More particularly, the dendrites preferably are sequentially ordered, and the power series of weights preferably includes a pair of first and second power series of weights. Each weight in the first power series includes a first count that is a function of associations of prior dendrites, and each weight of the second power series includes a second count that is a function of associations of next dendrites.
    Type: Grant
    Filed: November 8, 1999
    Date of Patent: June 17, 2003
    Assignee: Saffron Technology, Inc.
    Inventors: Manuel Aparicio, IV, James S. Fleming, Dan Ariely
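The base-two power series representation above, each weight carrying a count per bit position, can be illustrated by reconstructing a scalar weight from its counts. This is a simplified reading of the representation, ignoring the prior/next-dendrite association structure:

```python
def weight_from_counts(counts):
    """Reconstruct a synaptic weight from a base-two power series:
    each count occupies one bit position, so the weight is the sum of
    count * 2**position over the series."""
    return sum(c * (2 ** p) for p, c in enumerate(counts))

w = weight_from_counts([1, 0, 1])  # counts at positions 0 and 2
```

Storing counts per power rather than raw association matrices is what gives the claimed compression: the series length grows with the logarithm of the count range, not with the number of input pairings.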
  • Patent number: 6513023
    Abstract: A neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits each coupled to at least one of the plurality of charge storage circuits and constructed to generate an output in accordance with a neuron transfer function. Each of a plurality of circuits is coupled to one of the plurality of neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits. In preferred embodiments, separate training and validation networks share the same set of charge storage circuits and may operate concurrently.
    Type: Grant
    Filed: October 1, 1999
    Date of Patent: January 28, 2003
    Assignee: The United States of America as represented by the Administrator of the National Aeronautics and Space Administration
    Inventor: Tuan A. Duong
  • Patent number: 6366897
    Abstract: A cortronic neural network defines connections between neurons in a number of regions using target lists, which identify the output connections of each neuron and the connection strength. Neurons are preferably sparsely interconnected between regions. Training of connection weights employs a three stage process, which involves computation of the contribution to the input intensity of each neuron by every currently active neuron, a competition process that determines the next set of active neurons based on their current input intensity, and a weight adjustment process that updates and normalizes the connection weights based on which neurons won the competition process, and their connectivity with other winning neurons.
    Type: Grant
    Filed: July 26, 1999
    Date of Patent: April 2, 2002
    Assignee: HNC Software, Inc.
    Inventors: Robert W. Means, Richard Calmbach
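The three-stage cortronic training process, intensity computation, competition, then weight adjustment with normalization, can be sketched in a few lines. The k-winners rule, the fixed increment, and unit-sum normalization below are illustrative assumptions about details the abstract leaves open:

```python
def cortronic_step(intensities, weights, k_winners=2, lr=0.1):
    """One training step: (1) input intensities are given, (2) the k
    most strongly driven neurons win the competition, (3) weights
    between co-active winners are reinforced and each winner's
    outgoing weights are normalized to unit sum."""
    winners = sorted(range(len(intensities)),
                     key=lambda i: intensities[i], reverse=True)[:k_winners]
    for i in winners:                      # reinforce winner-to-winner links
        for j in winners:
            if i != j:
                weights[i][j] += lr
        total = sum(weights[i])            # normalize outgoing weights
        weights[i] = [w / total for w in weights[i]]
    return winners, weights

weights = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
winners, weights = cortronic_step([0.9, 0.2, 0.7], weights)
```

The normalization keeps total outgoing strength constant, so reinforcing winning connections implicitly weakens the rest, consistent with the sparse target-list connectivity the abstract describes.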
  • Patent number: 6151592
    Abstract: A recognition apparatus and method using a neural network are provided. A neuron-like element stores a value of its internal state. The neuron-like element updates that value on the basis of an output from the neuron-like element itself, outputs from other neuron-like elements, and an external input; an output value generator converts the value of the internal state into an external output. Accordingly, the neuron-like element itself can retain the history of input data. This enables time-series data, such as speech, to be processed without providing any special devices in the neural network.
    Type: Grant
    Filed: January 20, 1998
    Date of Patent: November 21, 2000
    Assignee: Seiko Epson Corporation
    Inventor: Mitsuhiro Inazumi
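The stateful neuron-like element described above can be sketched as a small class whose internal state persists across time steps, so history influences each output. The leaky-integration update rule, tanh output generator, and all parameter names are assumptions for illustration, not the patent's exact formulation:

```python
import math

class StatefulNeuron:
    """Neuron-like element that retains history in an internal state value."""

    def __init__(self, w_self, w_in, decay=0.9):
        self.state = 0.0
        self.w_self = w_self   # weight on the neuron's own previous output
        self.w_in = w_in       # weight on the external input
        self.decay = decay     # how strongly past state persists

    def step(self, external_input, other_outputs=()):
        # The internal state is updated from the neuron's own output, the
        # outputs of other neuron-like elements, and the external input.
        own = math.tanh(self.state)
        self.state = (self.decay * self.state
                      + self.w_self * own
                      + sum(other_outputs)
                      + self.w_in * external_input)
        # The output value generator converts internal state to an output.
        return math.tanh(self.state)

# Because the state persists, the same input yields different outputs over time.
n = StatefulNeuron(w_self=0.5, w_in=1.0)
y1 = n.step(1.0)
y2 = n.step(1.0)
```

Feeding a speech-like sequence sample by sample through `step` lets the element respond to temporal context without any external memory device.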
  • Patent number: 6064997
    Abstract: A family of novel multi-layer discrete-time neural net controllers is presented for the control of a multi-input multi-output (MIMO) dynamical system. No learning phase is needed. The structure of the neural net (NN) controller is derived using a filtered error/passivity approach. For guaranteed stability, the upper bound on the constant learning rate parameter for the delta rule employed in standard back propagation is shown to decrease with the number of hidden-layer neurons, so that learning must slow down. This major drawback is shown to be easily overcome by using a projection algorithm in each layer. The notion of persistency of excitation for multilayer NN is defined and explored. New on-line improved tuning algorithms for discrete-time systems are derived, which are similar to e-modification for the case of continuous-time systems, that include a modification to the learning rate parameter plus a correction term. These algorithms guarantee tracking as well as bounded NN weights.
    Type: Grant
    Filed: March 19, 1997
    Date of Patent: May 16, 2000
    Assignee: University of Texas System, The Board of Regents
    Inventors: Sarangapani Jagannathan, Frank Lewis
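The projection-algorithm remedy in the abstract above amounts to normalizing the learning rate by the magnitude of the layer's input vector, so the effective step size stays bounded no matter how many hidden-layer signals feed the update. A minimal sketch of one such projection-style step; the constants and names are illustrative, not the patent's:

```python
def projection_update(weights, phi, error, eta=1.0, c=0.001):
    """Delta-rule step with a projection-style normalized learning rate.

    phi   : the layer's input (regressor) vector
    error : scalar tracking error driving the update
    Dividing eta by (c + ||phi||^2) keeps the effective learning rate
    bounded regardless of the number of hidden-layer neurons, which is
    the stability drawback the projection algorithm addresses.
    """
    norm_sq = sum(p * p for p in phi)
    rate = eta / (c + norm_sq)
    return [w + rate * error * p for w, p in zip(weights, phi)]
```

With a fixed learning rate, quadrupling the layer width would quadruple the predicted output change per step; with the normalized rate it stays below the error magnitude for any width.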
  • Patent number: 5852815
    Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks which do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
    Type: Grant
    Filed: May 15, 1998
    Date of Patent: December 22, 1998
    Inventor: Stephen L. Thaler
  • Patent number: RE41658
    Abstract: A neural network including a number of synaptic weighting elements, and a neuron stage; each of the synaptic weighting elements having a respective synaptic input connection supplied with a respective input signal; and the neuron stage having inputs connected to the synaptic weighting elements, and being connected to an output of the neural network supplying a digital output signal. The accumulated weighted inputs are represented as conductances, and a conductance-mode neuron is used to apply nonlinearity and produce an output. The synaptic weighting elements are formed by memory cells programmable to different threshold voltage levels, so that each presents a respective programmable conductance; and the neuron stage provides for measuring conductance on the basis of the current through the memory cells, and for generating a binary output signal on the basis of the total conductance of the synaptic elements.
    Type: Grant
    Filed: July 31, 2003
    Date of Patent: September 7, 2010
    Assignee: STMicroelectronics S.r.l.
    Inventors: Vito Fabbrizio, Gianluca Colli, Alan Kramer
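The conductance-mode neuron described above has a simple functional model: active inputs place their memory cell's programmed conductance in parallel, the total conductance (read out as current) is the weighted sum, and the neuron stage thresholds it into a binary output. A behavioral sketch, with values and the thresholding rule chosen for illustration:

```python
def conductance_neuron(conductances, inputs, threshold):
    """Behavioral model of a conductance-mode neuron.

    conductances : programmed conductance of each synaptic memory cell
    inputs       : binary input signals selecting which cells conduct
    Active cells conduct in parallel, so total conductance is the weighted
    sum of the inputs; comparing it to a threshold yields the digital output.
    """
    total = sum(g for g, x in zip(conductances, inputs) if x)
    return 1 if total > threshold else 0
```

Programming a cell to a different threshold-voltage level corresponds to changing its entry in `conductances`, which is how the synaptic weights are made modifiable.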