Recurrent Patents (Class 706/30)
-
Patent number: 7324980
Abstract: This invention relates to an information processing device and method that enable generation of an unlearned new pattern. Data x_t corresponding to a predetermined time-series pattern is input to an input layer (11) of a recurrent neural network (1), and a prediction value x*_{t+1} is acquired from an output layer (13). The difference between teacher data x_{t+1} and the prediction value x*_{t+1} is learned by the back-propagation method, and the weighting coefficients of an intermediate layer (12) are set to predetermined values. After the recurrent neural network has learned plural time-series patterns, a parameter having a different value from the values used in learning is input to parametric bias nodes (11-2), and an unlearned time-series pattern corresponding to the parameter is generated from the output layer (13). This invention can be applied to a robot.
Type: Grant
Filed: January 21, 2003
Date of Patent: January 29, 2008
Assignees: Sony Corporation, Riken
Inventors: Masato Ito, Jun Tani
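The parametric-bias mechanism can be sketched as follows. All weights, sizes, and the functions `rnn_step`/`generate` below are illustrative stand-ins, not values from the patent; training is omitted, and the point is only that sweeping the bias input yields distinct generated sequences.

```python
import math

def rnn_step(x, h, pb):
    """One step of a tiny recurrent unit; pb is the parametric bias."""
    h_new = math.tanh(0.8 * h + 0.5 * x + pb)   # hidden/context update
    y = math.tanh(1.2 * h_new)                   # predicted next value
    return y, h_new

def generate(pb, steps=20):
    """Run the network closed-loop: feed its own prediction back in."""
    x, h, seq = 0.1, 0.0, []
    for _ in range(steps):
        x, h = rnn_step(x, h, pb)
        seq.append(x)
    return seq

# Different parametric-bias values yield different generated patterns,
# including for pb values never presented during (the omitted) training.
seq_a = generate(pb=-0.5)
seq_b = generate(pb=0.5)
```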
-
Patent number: 7321882
Abstract: A method for the supervised teaching of a recurrent neural network (RNN) is disclosed. A typical embodiment of the method utilizes a large (50 units or more), randomly initialized RNN with globally stable dynamics. During the training period, the output units of this RNN are teacher-forced to follow the desired output signal. During this period, activations from all hidden units are recorded. At the end of the teaching period, these recorded data are used as input to a method that computes new weights for those connections that feed into the output units. The method is distinguished from existing training methods for RNNs by the following characteristic: (1) only the weights of connections to output units are changed by learning, whereas existing methods for teaching recurrent networks adjust all network weights.
Type: Grant
Filed: October 5, 2001
Date of Patent: January 22, 2008
Assignee: Fraunhofer-Gesellschaft zur Foederung der Angewandten Forschung e.V.
Inventor: Herbert Jaeger
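The readout-only training idea can be sketched in a few lines: a fixed, randomly initialized reservoir is driven by the input, its hidden activations are recorded, and only a linear output layer is fitted to those recordings. This is a hedged toy (20 units rather than 50+, and plain gradient descent in place of the direct linear solve the patent describes); all sizes and constants are illustrative.

```python
import math, random

random.seed(0)
N = 20                      # reservoir size (the patent suggests 50+)
# Fixed, randomly initialised recurrent weights -- never trained.
W = [[random.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]

def reservoir_run(inputs):
    """Drive the fixed reservoir and record all hidden activations."""
    h = [0.0] * N
    states = []
    for u in inputs:
        h = [math.tanh(sum(W[i][j] * h[j] for j in range(N)) + W_in[i] * u)
             for i in range(N)]
        states.append(h)
    return states

# Teacher signal: the desired output is a lagged copy of the input.
inputs = [math.sin(0.3 * t) for t in range(100)]
targets = [0.0] + inputs[:-1]
states = reservoir_run(inputs)

# Train ONLY the output weights on the recorded states.
w_out = [0.0] * N
for _ in range(300):
    for s, y in zip(states, targets):
        err = y - sum(wi * si for wi, si in zip(w_out, s))
        w_out = [wi + 0.05 * err * si for wi, si in zip(w_out, s)]

mse = sum((y - sum(wi * si for wi, si in zip(w_out, s))) ** 2
          for s, y in zip(states, targets)) / len(states)
```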
-
Patent number: 7280989
Abstract: A neural network computer (20) includes a weighting network (21) coupled to a plurality of phase-locked loop circuits (25_1–25_N). The weighting network (21) has a plurality of weighting circuits (C_11, …, C_NN) having output terminals connected to a plurality of adder circuits (31_1–31_N). A single weighting element (C_kj) typically has a plurality of output terminals coupled to a corresponding adder circuit (31_k). Each adder circuit (31_k) is coupled to a corresponding bandpass filter circuit (31_k) which is in turn coupled to a corresponding phase-locked loop circuit (25_k). The weighting elements (C_1,1, …, C_N,N) are programmed with connection strengths, wherein the connection strengths have phase-encoded weights. The phase relationships are used to recognize an incoming pattern.
Type: Grant
Filed: January 26, 2001
Date of Patent: October 9, 2007
Assignee: Arizona Board of Regents
Inventors: Frank C. Hoppensteadt, Eugene M. Izhikevich
-
Patent number: 7089174
Abstract: A first model (10), such as an architectural-level model or an instruction set simulator model, makes calls to a second model (12), such as a pipeline simulator for a data processing device, which returns cycle-count or energy-consumption values. The calls to the second model are relatively slow. The system stores the returned behavioural characteristics from the second model (12) in a memo table (14), and when a sufficient number of these have been returned with sufficiently little variation between them, they are marked as valid for use in place of a call to the second model (12), thus speeding up modelling.
Type: Grant
Filed: February 21, 2003
Date of Patent: August 8, 2006
Assignee: ARM Limited
Inventors: Syed Samin Ishtiaq, Peter Neal, John Mark Burton
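The memo-table idea reduces to caching with a stability check before the cache is trusted. A minimal sketch, where `slow_model`, the lookup keys, and the thresholds are all illustrative assumptions rather than the patent's actual interface:

```python
CALLS = {"count": 0}

def slow_model(op):
    """Stand-in for the slow pipeline simulator: returns a cycle count."""
    CALLS["count"] += 1
    return {"add": 1, "mul": 3, "load": 4}[op]

class MemoTable:
    def __init__(self, needed=3, tol=0.0):
        self.needed, self.tol = needed, tol
        self.history = {}   # key -> list of values returned so far
        self.valid = {}     # key -> value marked valid for reuse

    def lookup(self, op):
        if op in self.valid:                       # fast path: no call
            return self.valid[op]
        v = slow_model(op)                         # slow path
        hist = self.history.setdefault(op, [])
        hist.append(v)
        if (len(hist) >= self.needed
                and max(hist) - min(hist) <= self.tol):
            self.valid[op] = v                     # enough agreement: mark valid
        return v

table = MemoTable(needed=3)
results = [table.lookup("add") for _ in range(10)]
# Only the first 3 lookups hit the slow model; the rest use the memo table.
```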
-
Patent number: 7089219
Abstract: An information processing apparatus includes a first recurrent neural network (RNN) for processing one time series and a second RNN for processing another, correlated time series. The difference between the context set output by the first RNN and the context set output by the second RNN is computed by a subtractor, and the obtained difference is used as a prediction error. Backpropagation is performed based on the prediction error, thus determining a coefficient for each neuron of an output layer, an intermediate layer, and an input layer.
Type: Grant
Filed: March 25, 2005
Date of Patent: August 8, 2006
Assignee: Sony Corporation
Inventor: Jun Tani
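The subtractor arrangement can be sketched as follows. The toy `rnn_context` function, its weights, and the example series are illustrative stand-ins (the backpropagation step driven by the error is omitted):

```python
import math

def rnn_context(seq, size=4):
    """Return the final context (hidden) vector after processing seq."""
    h = [0.0] * size
    for u in seq:
        # per-unit recurrent weight so the units differ; values illustrative
        h = [math.tanh(0.5 * (i + 1) / size * h[i] + 0.7 * u)
             for i in range(size)]
    return h

series_a = [0.1, 0.4, 0.2]
series_b = [0.1, 0.5, 0.2]      # a correlated but different time series

ctx_a = rnn_context(series_a)   # context set from the first RNN
ctx_b = rnn_context(series_b)   # context set from the second RNN

# Subtractor: the element-wise difference serves as the prediction error.
error = [a - b for a, b in zip(ctx_a, ctx_b)]
sq_error = sum(e * e for e in error)
```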
-
Patent number: 7082421
Abstract: An information processing apparatus includes a first recurrent neural network (RNN) for processing one time series and a second RNN for processing another, correlated time series. The difference between the context set output by the first RNN and the context set output by the second RNN is computed by a subtractor, and the obtained difference is used as a prediction error. Backpropagation is performed based on the prediction error, thus determining a coefficient for each neuron of an output layer, an intermediate layer, and an input layer.
Type: Grant
Filed: March 25, 2005
Date of Patent: July 25, 2006
Assignee: Sony Corporation
Inventor: Jun Tani
-
Patent number: 7072875
Abstract: An information processing apparatus includes a first recurrent neural network (RNN) for processing one time series and a second RNN for processing another, correlated time series. The difference between the context set output by the first RNN and the context set output by the second RNN is computed by a subtractor, and the obtained difference is used as a prediction error. Backpropagation is performed based on the prediction error, thus determining a coefficient for each neuron of an output layer, an intermediate layer, and an input layer.
Type: Grant
Filed: March 25, 2005
Date of Patent: July 4, 2006
Assignee: Sony Corporation
Inventor: Jun Tani
-
Patent number: 7028017
Abstract: A temporal summation device can be composed of one or more nanoconnections having an input and an output, wherein an input signal provided to the input causes one or more of the nanoconnections to experience an increase in connection strength over time. Additionally, a voltage divider is formed by the nanoconnection(s) and a resistor connected to the output of the nanoconnection(s), such that the voltage divider provides a voltage at the output of the nanoconnection(s) that is in direct proportion to the connection strength of the nanoconnection(s). An amplifier is also connected to the voltage divider; when the voltage provided by the voltage divider attains a desired threshold voltage, the amplifier attains a high voltage output, thereby providing temporal summation.
Type: Grant
Filed: January 31, 2005
Date of Patent: April 11, 2006
Assignee: Knowm Tech, LLC
Inventor: Alex Nugent
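The electrical behaviour can be sketched as a plain voltage-divider calculation: modelling the nanoconnection's strength as a conductance, the divider output rises monotonically with strength, and the amplifier fires once a threshold is crossed. All component values below are illustrative assumptions:

```python
V_IN = 1.0          # input pulse amplitude (volts)
R_LOAD = 1000.0     # fixed resistor in the divider (ohms)
THRESHOLD = 0.5     # amplifier threshold (volts)

def divider_out(g_nano):
    """Voltage across R_LOAD when the nanoconnection has conductance g_nano."""
    r_nano = 1.0 / g_nano
    return V_IN * R_LOAD / (r_nano + R_LOAD)

# Each input pulse strengthens the connection (conductance rises),
# so repeated pulses sum temporally until the amplifier trips.
g = 0.0002          # initial conductance (siemens), illustrative
outputs = []
for pulse in range(6):
    g *= 1.5                                  # strengthening over time
    v = divider_out(g)
    outputs.append((v, v >= THRESHOLD))       # (voltage, amplifier fired?)
```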
-
Patent number: 6963862
Abstract: A method for training a recurrent network represented by x(k+1) = f(W x(k)), where W is a weight matrix, x is the output of the network, and k is a time index, includes (a) determining the weight matrix at a first time increment, (b) incrementing the time index associated with a received data point, and (c) determining the change in the weight matrix at the incremented time index according to the formula:

ΔW(k) = ΔW(k-1) + ε(k) x^T(k-1) V^{-1}(k-1)
        - B(k-1) [V^{-1}(k-1) x(k-1)] [V^{-1}(k-1) x(k-1)]^T / (1 + x^T(k-1) V^{-1}(k-1) x(k-1))

Type: Grant
Filed: March 30, 2001
Date of Patent: November 8, 2005
Assignee: The Texas A&M University System
Inventors: Alexander G. Parlos, Amir F. Atiya
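The update above has the general shape of a recursive least-squares (RLS) recursion. As a hedged illustration only — this is the textbook RLS recursion for a two-parameter linear model, not the patent's exact algorithm — the sketch below recovers a line from noiseless samples; all data and the prior scale `lam` are illustrative:

```python
def rls_fit(samples, dim=2, lam=100.0):
    """samples: list of (x_vec, target). Returns the fitted weight vector."""
    w = [0.0] * dim
    # P plays the role of the inverse covariance term V^{-1} above.
    P = [[lam if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, d in samples:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(dim))
        gain = [v / denom for v in Px]             # Kalman-style gain
        err = d - sum(w[i] * x[i] for i in range(dim))
        w = [w[i] + gain[i] * err for i in range(dim)]
        xP = [sum(x[i] * P[i][j] for i in range(dim)) for j in range(dim)]
        P = [[P[i][j] - gain[i] * xP[j] for j in range(dim)]
             for i in range(dim)]
    return w

# Recover d = 2*x0 + 3*x1 from noiseless samples.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0), ([1.0, 1.0], 5.0),
        ([2.0, 1.0], 7.0), ([1.0, 2.0], 8.0)]
w = rls_fit(data)
```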
-
Patent number: 6915283
Abstract: An information processing apparatus includes a first recurrent neural network (RNN) for processing one time series and a second RNN for processing another, correlated time series. The difference between the context set output by the first RNN and the context set output by the second RNN is computed by a subtractor, and the obtained difference is used as a prediction error. Backpropagation is performed based on the prediction error, thus determining a coefficient for each neuron of an output layer, an intermediate layer, and an input layer.
Type: Grant
Filed: July 2, 2001
Date of Patent: July 5, 2005
Assignee: Sony Corporation
Inventor: Jun Tani
-
Patent number: 6898582
Abstract: A method and apparatus for processing a composite signal generated by a transient signal generation mechanism to extract a repetitive low-SNR transient signal, such as an evoked potential (EP) appearing in an electroencephalogram (EEG) generated in response to sensory stimulation, by: (a) dynamically identifying, via a learning process, the major transient signal types in the composite signal; (b) decomposing the identified major transient signal types into their respective constituent components; (c) synthesizing a parametric model emulating the transient signal generation mechanism; and (d) utilizing the model and the constituent components to identify and extract the low-SNR transient signal from the composite signal.
Type: Grant
Filed: September 26, 2001
Date of Patent: May 24, 2005
Assignee: Algodyne, Ltd.
Inventor: Daniel H. Lange
-
Patent number: 6832214
Abstract: Disclosed is a system, method, and program for generating a compiler to map a code set to object code capable of being executed on an operating system platform. At least one neural network is trained to convert the code set to object code. The at least one trained neural network can then be used to convert the code set to the object code.
Type: Grant
Filed: December 7, 1999
Date of Patent: December 14, 2004
Assignee: International Business Machines Corporation
Inventor: Chung T. Nguyen
-
Patent number: 6826550
Abstract: Provided is a compiler to map application program code to object code capable of being executed on an operating system platform. A first neural network module is trained to generate characteristic output based on input information describing attributes of the application program. A second neural network module is trained to receive as input the application program code and the characteristic output and, in response, generate object code. The first and second neural network modules are used to convert the application program code to object code.
Type: Grant
Filed: December 15, 2000
Date of Patent: November 30, 2004
Assignee: International Business Machines Corporation
Inventors: Michael Wayne Brown, Chung Tien Nguyen
-
Patent number: 6799171
Abstract: A neural network system including a plurality of tiers of interconnected computing elements. The plurality of tiers includes an input tier to which a sequence of input speech vectors is applied at a first rate. Two of the plurality of tiers are interconnected through a decimator configured to reduce the first rate of the sequence of input vectors. Alternatively, two of the plurality of tiers are interconnected through an interpolator configured to increase the first rate of the sequence of input vectors.
Type: Grant
Filed: March 1, 2001
Date of Patent: September 28, 2004
Assignee: Swisscom AG
Inventor: Robert Van Kommer
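The rate changes between tiers can be sketched directly: a decimator keeps every M-th vector, and an interpolator inserts linearly interpolated vectors to raise the rate. The factor, frame contents, and linear interpolation scheme below are illustrative assumptions:

```python
def decimate(frames, m):
    """Reduce the frame rate by keeping every m-th input vector."""
    return frames[::m]

def interpolate(frames, m):
    """Raise the frame rate by inserting m-1 linear steps between frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for t in range(m):
            out.append([ai + (bi - ai) * t / m for ai, bi in zip(a, b)])
    out.append(frames[-1])
    return out

frames = [[0.0, 0.0], [2.0, 4.0], [4.0, 8.0]]   # toy feature vectors
low = decimate(frames, 2)       # half the rate for a slower tier
high = interpolate(frames, 2)   # double the rate for a faster tier
```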
-
Patent number: 6792413
Abstract: This invention provides a data processing apparatus which can store and recall more complicated time-series data than those processed by related-art technologies. In the data processing apparatus, a recurrent neural network (RNN) of a higher layer generates a long-period parameter and supplies it to the input layer of an RNN of a lower layer via a computing block. The lower-layer RNN uses this value as a parameter in computing its short-period input.
Type: Grant
Filed: February 6, 2002
Date of Patent: September 14, 2004
Assignee: Sony Corporation
Inventor: Jun Tani
-
Patent number: 6751601
Abstract: The invention consists of a learning system (108) based on the Dynamical System Architecture (DSA). Every implementation of the DSA is composed of a dynamic system (112) and a static system (116). The dynamic system (112) generates an intermediate output (114), which together with an external input (100) is fed into the static system (116), which produces a generated output (106). Every time the dynamic system (112) is reset it produces the same trajectory: an intermediate output (114) that does not cross over itself during the temporal span of each of the events generated by a reference system (102) whose behavior has to be duplicated. The static system (116) can be anything that behaves like a trainable universal function approximator. Training uses the intermediate output (114), the external input (100), the observed output (104) produced by the reference system (102) whose behavior is going to be mimicked, and a self-organizing procedure that only modifies the parameters of the static system (116).
Type: Grant
Filed: July 20, 2001
Date of Patent: June 15, 2004
Inventor: Pablo Zegers
-
Patent number: 6735580
Abstract: A neural-network-based universal time-series prediction system for financial securities includes a pipelined recurrent ANN architecture having a plurality of identical modules that first adjust internal weights and biases in response to a first training set representing a nonlinear financial time series of samples of a financial quantity and a target value, and then determine and store an estimated prediction error of the ANN in order to adjust short-term stock-price predictions in accordance with the stored prediction error. The prediction system is also designed to output upper and lower prediction bounds within a confidence region.
Type: Grant
Filed: August 25, 2000
Date of Patent: May 11, 2004
Assignee: Westport Financial LLC
Inventors: Liang Li, Yi Tang, Bin Li, Xiaohua Wu
-
Patent number: 6728691
Abstract: Computation elements are connected to one another, with a first subsystem having a first input computation element to which time-series values, each describing one state of a system in a first state space at a time, can be supplied. The first input computation element is connected to a first intermediate computation element, by which a state of the system can be described in a second state space at a time. In a second subsystem, a second intermediate computation element, by which a state of the system can be described in the second state space at a time, is connected to a first output computation element, on which a first output signal can be tapped off. In a third subsystem, a third intermediate computation element, by which a state of the system can be described in the second state space at a time, is connected to a second output computation element, on which a second output signal can be tapped off.
Type: Grant
Filed: September 4, 2001
Date of Patent: April 27, 2004
Assignee: Siemens Aktiengesellschaft
Inventors: Ralf Neuneier, Hans-Georg Zimmermann
-
Patent number: 6708160
Abstract: A method, system and computer program product for implementing at least one of a learning-based diagnostics system and a control system (e.g., using a neural network). By using ObjectNets to model general object types, it is possible to design a control system that represents system components as relational structures rather than fixed vectors. Such an advance is possible by exploiting non-Euclidean principles of symmetry.
Type: Grant
Filed: April 6, 2000
Date of Patent: March 16, 2004
Inventor: Paul J. Werbos
-
Patent number: 6601051
Abstract: A neural system is disclosed for processing an exogenous input process to produce a good outward output process with respect to a performance criterion, even if the range of one or both of these processes is necessarily large and/or keeps necessarily expanding during the operation of the neural system. The disclosed neural system comprises a recurrent neural network (RNN) and at least one range extender or reducer, each of which is a dynamic transformer. A range reducer transforms dynamically at least one component of the exogenous input process into inputs to at least one input neuron of said RNN. A range extender transforms dynamically outputs of at least one output neuron of said RNN into at least one component of the outward output process. There are many types of range extender and reducer, which have different degrees of effectiveness and computational cost.
Type: Grant
Filed: July 11, 1997
Date of Patent: July 29, 2003
Assignee: Maryland Technology Corporation
Inventors: James Ting-Ho Lo, Lei Yu
-
Patent number: 6601054
Abstract: Active vibration control (AVC) systems without online path modeling and controller adjustment are provided that are able to adapt to an uncertain operating environment. The controller (250, 280, 315, 252, 282, 317, 254, 319) of such an AVC system is an adaptive recursive neural network whose weights are determined in offline training and are held fixed online during the operation of the system. AVC feedforward, feedback, and feedforward-feedback systems in accordance with the present invention are described. An AVC feedforward system has no error sensor, and an AVC feedback system has no reference sensor. All sensor outputs of an AVC system are processed by the controller to generate control signals that drive at least one secondary source (240). While an error sensor (480, 481) must be a vibrational sensor, a reference sensor (230, 270, 295, 305, 330) may be either a vibrational or nonvibrational sensor.
Type: Grant
Filed: August 11, 2000
Date of Patent: July 29, 2003
Assignee: Maryland Technology Corporation
Inventors: James T. Lo, Lei Yu
-
Patent number: 6463424
Abstract: There is provided a basic association unit for creating an information processing apparatus capable of performing information processing like that which actually occurs in the central nervous systems of animals, including human beings. The association unit is a unit for repeating input and output signals, having m input terminals and n output terminals. When a first input signal, which is a rectangular wave signal in the form of a pulse, is simultaneously input to fewer than m input terminals, an output signal having the same contents as the first input signal is output from particular output terminals which are associated with those input terminals in advance.
Type: Grant
Filed: April 23, 1998
Date of Patent: October 8, 2002
Inventors: Norio Ogata, Koji Ataka
-
Patent number: 6434541
Abstract: An engine diagnostic system includes a bit-serial recurrent neuroprocessor that processes data from an internal combustion engine in order to diagnose misfires in real time, and reduces the number of neurons required to perform the task by time-multiplexing groups of neurons from a candidate pool of neurons to achieve the successive hidden layers of the recurrent network topology.
Type: Grant
Filed: April 21, 1999
Date of Patent: August 13, 2002
Assignee: Ford Global Technologies, Inc.
Inventors: Raoul Tawel, Nazeeh Aranki, Lee A. Feldkamp, Gintaras V. Puskorius, Kenneth A. Marko, John V. James
-
Patent number: 6356884
Abstract: An artificial neural-network-based system and method for determining desired concepts and relationships within a predefined field of endeavor, including: a neural network portion, which includes an artificial neural network that has been previously trained in accordance with a set of given training exemplars; a monitor portion associated with the neural network portion to observe the data outputs produced by the previously trained artificial neural network; and a perturbation portion for perturbing the neural network portion to effect changes, subject to design constraints of the artificial neural network that remain unperturbed, in the outputs produced by the neural network portion, the perturbation portion operable such that production of an output by the neural network portion thereafter effects a perturbation of the neural network portion by the perturbation portion, the monitor portion responsive to detection of the data outputs being produced by the previously trained neural network…
Type: Grant
Filed: July 2, 1999
Date of Patent: March 12, 2002
Inventor: Stephen L. Thaler
-
Patent number: 6199057
Abstract: A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform a task by time-multiplexing groups of neurons from a fixed pool to achieve the successive hidden layers of a recurrent network topology.
Type: Grant
Filed: October 23, 1997
Date of Patent: March 6, 2001
Assignee: California Institute of Technology
Inventor: Raoul Tawel
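The time-multiplexing scheme can be sketched in software: a fixed pool of neuron circuits is reused in successive passes to evaluate a layer wider than the pool. Pool size, weights, and the `tanh` stand-in for the sigmoid ROM look-up table are illustrative assumptions:

```python
import math

POOL_SIZE = 4                      # physical neurons available

def layer(inputs, weights):
    """Evaluate one layer, POOL_SIZE neurons at a time (multiplexed)."""
    outputs = []
    for start in range(0, len(weights), POOL_SIZE):   # one pass per group
        for w_row in weights[start:start + POOL_SIZE]:
            s = sum(w * x for w, x in zip(w_row, inputs))
            outputs.append(math.tanh(s))   # stand-in for the sigmoid LUT
    return outputs

# A 6-neuron layer needs two passes over the 4-neuron pool.
w1 = [[0.1 * (i + j) for j in range(3)] for i in range(6)]
out = layer([1.0, 0.5, -0.5], w1)
```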
-
Patent number: 6151592
Abstract: A recognition apparatus and method using a neural network are provided. A neuron-like element stores the value of its internal condition. The neuron-like element updates the value of its internal status on the basis of its own output, the outputs of other neuron-like elements, and an external input, and an output value generator converts the value of its internal status into an external output. Accordingly, the neuron-like element itself can retain the history of input data. This enables time-series data, such as speech, to be processed without providing any special devices in the neural network.
Type: Grant
Filed: January 20, 1998
Date of Patent: November 21, 2000
Assignee: Seiko Epson Corporation
Inventor: Mitsuhiro Inazumi
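A neuron-like element with a stored internal condition can be sketched as follows; the `StatefulNeuron` class, its weights, and the decay term are illustrative assumptions. The point is that after a single input pulse the outputs stay non-zero, i.e. the element itself retains input history:

```python
import math

class StatefulNeuron:
    def __init__(self, w_self, w_others, w_in, decay=0.5):
        self.w_self, self.w_others = w_self, w_others
        self.w_in, self.decay = w_in, decay
        self.state = 0.0           # stored internal condition
        self.output = 0.0

    def update(self, others_out, ext_in):
        # Internal status updated from own output, other elements' outputs,
        # and the external input.
        self.state = (self.decay * self.state
                      + self.w_self * self.output
                      + self.w_others * sum(others_out)
                      + self.w_in * ext_in)
        self.output = math.tanh(self.state)   # output value generator
        return self.output

n1 = StatefulNeuron(0.3, 0.2, 0.8)
n2 = StatefulNeuron(0.3, 0.2, 0.8)
trace = []
for u in [1.0, 0.0, 0.0, 0.0]:     # a single pulse, then silence
    o1 = n1.update([n2.output], u)
    n2.update([n1.output], 0.0)
    trace.append(o1)
```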
-
Patent number: 6115701
Abstract: A system and process for readily determining, for a specified knowledge domain in a given field of endeavor, perturbations applicable to an artificial neural network embodying such a specified knowledge domain that will produce a desired output, comprising a first, previously trained, artificial neural network containing training in some problem domain, which neural network is responsive to the presentment of a set of data inputs at the input portion thereof to produce a set of data outputs at the output portion thereof, a monitoring portion which constantly monitors the outputs of the first neural network to identify the desired outputs, and a network perturbation portion for effecting the application of perturbations, either externally or internally, to the first neural network to thereby effect changes in the output thereof.
Type: Grant
Filed: August 9, 1999
Date of Patent: September 5, 2000
Inventor: Stephen L. Thaler
-
Patent number: 6058206
Abstract: A pattern recognition device having modifiable feature detectors (28) which respond to a transduced input signal (26) and communicate a feature activity signal (30) to allow classification and an appropriate output action (70). A memory (40) stores a set of comparison patterns and is used by an assigner (66) to find likely features, or parts, in the current input signal (26). Each part is assigned to the feature detector (28[m]) judged to be responsible for it. An updater (42) modifies each responsible feature detector (28[m]) so as to make its preferred feature more similar to its assigned part. The modification embodies a strong constraint on the feature-learning process, in particular an assumption that the ideal features for describing the pattern domain occur independently. This constraint allows improved learning speed and potentially improved scaling properties.
Type: Grant
Filed: December 1, 1997
Date of Patent: May 2, 2000
Inventor: Chris Alan Kortge
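The assign-then-update loop can be sketched as a competitive-learning pass: each part is assigned to the nearest (responsible) detector, whose preferred feature is then nudged toward the part. Treating parts and features as scalars, and the detectors, parts, and learning rate as illustrative assumptions:

```python
def assign_and_update(detectors, parts, rate=0.5):
    """One learning pass; returns the updated detector features."""
    detectors = list(detectors)
    for part in parts:
        # assigner: pick the responsible detector (nearest preferred feature)
        m = min(range(len(detectors)), key=lambda i: abs(detectors[i] - part))
        # updater: make its preferred feature more similar to the part
        detectors[m] += rate * (part - detectors[m])
    return detectors

feats = [0.0, 10.0]                    # initial preferred features
for _ in range(20):
    feats = assign_and_update(feats, [1.0, 1.2, 9.0, 9.4])
# Each detector settles near one cluster of parts.
```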
-
Patent number: 6018727
Abstract: A device for generating useful information employing: a first neural network trained to produce input-output maps within a predetermined initial knowledge domain; an apparatus for subjecting the neural network to perturbations which may produce changes in the predetermined knowledge domain; and an optional output for feeding the outputs of the first neural network to a second neural network that evaluates them based on training within the second neural network. The device may also include a reciprocal feedback connection from the output of the second neural network to the first neural network to further influence and change what takes place in the first neural network.
Type: Grant
Filed: August 13, 1997
Date of Patent: January 25, 2000
Inventor: Stephen L. Thaler
-
Patent number: 5963929
Abstract: A recursive neurofilter comprising a recursive neural network (NN) is disclosed for processing an information process to estimate a signal process with respect to an estimation error criterion. The information process either consists of a measurement process or, if the signal and measurement processes are time-variant, consists of the measurement process as well as a time variance process that describes the time-variant properties of the signal and measurement processes. The recursive neurofilter is synthesized from exemplary realizations of the signal and information processes. No assumptions such as Gaussian distribution, linear dynamics, additive noise, or the Markov property are required. The synthesis is performed essentially through training recursive NNs. The training criterion is constructed to reflect the mentioned estimation error criterion with the exemplary realizations.
Type: Grant
Filed: July 11, 1997
Date of Patent: October 5, 1999
Assignee: Maryland Technology Corporation
Inventor: James Ting-Ho Lo
-
Patent number: 5943659
Abstract: Based on the encoding of deterministic finite-state automata (DFA) in discrete-time, second-order recurrent neural networks, an algorithm constructs an augmented recurrent neural network that encodes an FFA (fuzzy finite-state automaton) and recognizes a given fuzzy regular language with arbitrary accuracy.
Type: Grant
Filed: October 3, 1995
Date of Patent: August 24, 1999
Assignee: NEC Research Institute, Inc.
Inventors: C. Lee Giles, Christian Walter Omlin, Karvel Kuhn Thornber
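The underlying crisp-DFA encoding (the fuzzy generalization is omitted here) can be sketched as follows: state and input are one-hot vectors, and a second-order weight tensor set to ±H for a large H makes the sigmoid units approximate the crisp transitions. The example automaton ("even number of a's") and the value of H are illustrative:

```python
import math

H = 10.0
STATES, SYMBOLS = 2, 2       # q0 (even, accepting), q1 (odd); symbols 'a','b'
# delta: (state, symbol_index) -> next state for the "even a's" DFA
delta = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Second-order weights: +H where the transition exists, -H elsewhere.
W = [[[H if delta[(j, k)] == i else -H for k in range(SYMBOLS)]
      for j in range(STATES)] for i in range(STATES)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run(word):
    s = [1.0, 0.0]                           # start in q0 (one-hot-ish)
    for ch in word:
        x = [1.0, 0.0] if ch == "a" else [0.0, 1.0]
        # second-order update: each unit sees products of state and input
        s = [sigmoid(sum(W[i][j][k] * s[j] * x[k]
                         for j in range(STATES) for k in range(SYMBOLS)))
             for i in range(STATES)]
    return s[0] > 0.5                        # accept if the q0 unit is high

accepts_aa = run("aab")   # two a's: accepted
accepts_a = run("ab")     # one a: rejected
```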