Multilayer Feedforward Patents (Class 706/31)
-
Patent number: 8121817
Abstract: Process control system for detecting abnormal events in a process having one or more independent variables and one or more dependent variables. The system includes a device for measuring values of the one or more independent and dependent variables, a process controller having a predictive model for calculating predicted values of the one or more dependent variables from the measured values of the one or more independent variables, a calculator for calculating residual values for the one or more dependent variables from the difference between the predicted and measured values of the one or more dependent variables, and an analyzer for performing a principal component analysis on the residual values. The process controller is a multivariable predictive control means, and the principal component analysis results in the output of one or more score values, T2 values and Q values.
Type: Grant
Filed: October 16, 2007
Date of Patent: February 21, 2012
Assignee: BP Oil International Limited
Inventors: Keith Landells, Zaid Rawi
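The monitoring scheme in this abstract (PCA on model residuals, scored with T2 and Q statistics) can be sketched as follows. This is an illustrative numpy sketch of the standard technique, not the patented implementation; the function names and the SVD-based fitting are my own choices.

```python
import numpy as np

def fit_pca_monitor(residuals, n_components):
    """Fit a PCA model to training residuals (rows = samples).

    Returns the mean, the retained principal directions, and the
    per-component variances needed to score new residual vectors.
    """
    mean = residuals.mean(axis=0)
    centered = residuals - mean
    # SVD of the centered data: rows of vt are the principal directions.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (len(residuals) - 1)
    return mean, vt[:n_components], variances[:n_components]

def t2_and_q(x, mean, components, variances):
    """Hotelling T2 (variation inside the PCA model) and Q, also called
    SPE (variation left outside the model), for one residual vector x."""
    centered = x - mean
    scores = components @ centered            # projection onto the PCs
    t2 = np.sum(scores ** 2 / variances)      # variance-scaled distance
    reconstruction = components.T @ scores
    q = np.sum((centered - reconstruction) ** 2)  # squared residual
    return t2, q
```

In use, the model is fitted on residuals from normal operation, and an abnormal event is flagged when T2 or Q exceeds a control limit derived from the training data.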
-
Patent number: 8103606
Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Grant
Filed: December 10, 2007
Date of Patent: January 24, 2012
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Patent number: 8065022
Abstract: Embodiments of the invention can include methods and systems for controlling clearances in a turbine. In one embodiment, a method can include applying at least one operating parameter as an input to at least one neural network model, modeling via the neural network model a thermal expansion of at least one turbine component, and taking a control action based at least in part on the modeled thermal expansion of the one or more turbine components. An example system can include a controller operable to determine and apply the operating parameters as inputs to the neural network model, model thermal expansion via the neural network model, and generate a control action based at least in part on the modeled thermal expansion.
Type: Grant
Filed: January 8, 2008
Date of Patent: November 22, 2011
Assignee: General Electric Company
Inventors: Karl Dean Minto, Jianbo Zhang, Erhan Karaca
-
Patent number: 8015130
Abstract: In a hierarchical neural network having a module structure, learning necessary for detection of a new feature class is executed by a processing module which has not finished learning yet and includes a plurality of neurons which should learn an unlearned feature class and have an undetermined receptor field structure, by presenting a predetermined pattern to a data input layer. Thus, a feature class necessary for subject recognition can be learned automatically and efficiently.
Type: Grant
Filed: January 29, 2010
Date of Patent: September 6, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Masakazu Matsugu, Katsuhiko Mori, Mie Ishii, Yusuke Mitarai
-
Patent number: 7979370
Abstract: A system for information searching includes a first layer and a second layer. The first layer includes a first plurality of neurons each associated with a word and with a first set of dynamic connections to at least some of the first plurality of neurons. The second layer includes a second plurality of neurons each associated with a document and with a second set of dynamic connections to at least some of the first plurality of neurons. The first set of dynamic connections and the second set of dynamic connections can be configured such that a query of at least one neuron of the first plurality of neurons excites at least one neuron of the second plurality of neurons. The excited at least one neuron of the second plurality of neurons can be contextually related to the queried at least one neuron of the first plurality of neurons.
Type: Grant
Filed: January 29, 2009
Date of Patent: July 12, 2011
Assignee: Dranias Development LLC
Inventor: Alexander V. Ershov
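The word-layer-to-document-layer excitation described here is a spreading-activation scheme, which can be sketched in a few lines. The vocabulary mapping and weight matrix below are hypothetical placeholders; how the dynamic connections are learned is not shown.

```python
import numpy as np

def excite(query_words, vocab, word_doc_weights):
    """Spread activation from queried word neurons to document neurons.

    vocab maps a word to its neuron index; word_doc_weights[i, j] is a
    (hypothetical) dynamic connection from word neuron i to document
    neuron j. Returns the excitation level of every document neuron.
    """
    word_activation = np.zeros(len(vocab))
    for word in query_words:
        word_activation[vocab[word]] = 1.0   # fire the queried neurons
    return word_activation @ word_doc_weights
```

Documents whose excitation passes a threshold would then be returned as contextually related to the query.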
-
Patent number: 7831416
Abstract: A method is provided for designing a product. The method may include obtaining data records relating to one or more input variables and one or more output parameters associated with the product; and pre-processing the data records based on characteristics of the input variables. The method may also include selecting one or more input parameters from the one or more input variables; and generating a computational model indicative of interrelationships between the one or more input parameters and the one or more output parameters based on the data records. Further, the method may include providing a set of constraints to the computational model representative of a compliance state for the product; and using the computational model and the provided set of constraints to generate statistical distributions for the one or more input parameters and the one or more output parameters, wherein the one or more input parameters and the one or more output parameters represent a design for the product.
Type: Grant
Filed: July 17, 2007
Date of Patent: November 9, 2010
Assignee: Caterpillar Inc
Inventors: Anthony J. Grichnik, Michael Seskin, Amit Jayachandran
-
Patent number: 7788196
Abstract: An artificial neural network comprises at least one input layer with a predetermined number of input nodes and at least one output layer with a predetermined number of output nodes, or also at least one intermediate hidden layer with a predetermined number of nodes between the input and the output layer. At least the nodes of the output layer and/or of the hidden layer and/or also of the input layer carry out a non-linear transformation of a first non-linear transformation of the input data for computing an output value to be fed as an input value to a following layer, or the output data if the output layer is considered.
Type: Grant
Filed: August 24, 2004
Date of Patent: August 31, 2010
Assignee: Semeion
Inventor: Paolo Massimo Buscema
-
Patent number: 7743004
Abstract: A pulse signal processing circuit, a parallel processing circuit, and a pattern recognition system including a plurality of arithmetic elements for outputting pulse signals and at least one modulation circuit, synaptic connection element(s), or synaptic connection means for modulating the pulse signals, the modulated pulse signals then being separately or exclusively output to corresponding signal lines.
Type: Grant
Filed: June 30, 2008
Date of Patent: June 22, 2010
Assignee: Canon Kabushiki Kaisha
Inventor: Masakazu Matsugu
-
Publication number: 20100088263
Abstract: There is described a method for computer-aided learning of a neural network with a plurality of neurons, in which the neurons of the neural network are divided into at least two layers, comprising a first layer and a second layer crosslinked with the first layer. In the first layer, input information is respectively represented by one or more characteristic values from one or several characteristics, wherein every characteristic value comprises one or more neurons of the first layer. A plurality of categories is stored in the second layer, wherein every category comprises one or more neurons of the second layer. For one or several pieces of input information, respectively at least one category in the second layer is assigned to the characteristic values of the input information in the first layer.
Type: Application
Filed: September 20, 2006
Publication date: April 8, 2010
Inventors: Gustavo Deco, Martin Stetter, Miruna Szabo
-
Patent number: 7496548
Abstract: A system, method and computer program product for information searching includes (a) a first layer with a first plurality of neurons, each of the first plurality of neurons being associated with a word and with a set of connections to at least some neurons of the first layer; (b) a second layer with a second plurality of neurons, each of the second plurality of neurons being associated with an object and with a set of connections to at least some neurons of the second layer, and with a set of connections to some neurons of the first layer; (c) a third layer with a third plurality of neurons, each of the third plurality of neurons being associated with a sentence and with a set of connections to at least some neurons of the third layer, and with a set of connections to at least some neurons of the first layer and to at least some neurons of the second layer; and (d) a fourth layer with a fourth plurality of neurons, each of the fourth plurality of neurons being associated with a document and with a set of connections…
Type: Grant
Filed: August 29, 2006
Date of Patent: February 24, 2009
Assignee: Quintura, Inc.
Inventor: Alexander V. Ershov
-
Publication number: 20080319933
Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Application
Filed: December 10, 2007
Publication date: December 25, 2008
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Publication number: 20080319934
Abstract: A neural network (100) comprising a plurality of neurons (101 to 106) and a plurality of wires (109) adapted for connecting the plurality of neurons (101 to 106), wherein at least a part of the plurality of wires (109) comprises a plurality of input connections and exactly one output connection.
Type: Application
Filed: September 27, 2006
Publication date: December 25, 2008
Inventor: Eugen Oetringer
-
Patent number: 7409374
Abstract: A method for discriminating between explosive events having their origins in High Explosive or Chemical/Biological detonation employing multiresolution analysis provided by a discrete wavelet transform. Original signatures of explosive events are broken down into subband components, thereby removing higher frequency noise features and creating two sets of coefficients at varying levels of decomposition. These coefficients are obtained each time the signal is passed through a lowpass and highpass filter bank whose impulse response is derived from Daubechies db5 wavelet. Distinct features are obtained through the process of isolating the details of the high oscillatory components of the signature. The ratio of energy contained within the details at varying levels of decomposition is sufficient to discriminate between explosive events such as High Explosive and Chemical/Biological.
Type: Grant
Filed: August 22, 2005
Date of Patent: August 5, 2008
Assignee: The United States of America as represented by the Secretary of the Army
Inventors: Myron Hohil, Sashi V. Desai
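The filter-bank decomposition and per-level detail energies described above can be sketched with a one-line-per-step numpy implementation. For brevity this sketch uses the Haar wavelet, whose two-tap filters are easy to write down; the patent specifies the longer Daubechies db5 filter bank, so treat this only as an illustration of the energy-ratio feature, not of the actual filters used.

```python
import numpy as np

# Haar analysis filters (the patent uses db5; Haar is used here for brevity).
LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)    # lowpass (approximation)
HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass (detail)

def dwt_level(signal):
    """One decomposition level: filter, then downsample by two."""
    approx = np.convolve(signal, LOW)[1::2]
    detail = np.convolve(signal, HIGH)[1::2]
    return approx, detail

def detail_energies(signal, levels):
    """Energy held in the detail coefficients at each decomposition level.

    Ratios between these energies are the kind of feature the abstract
    uses to discriminate between event types.
    """
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = dwt_level(approx)
        energies.append(float(np.sum(detail ** 2)))
    return energies
```

A smooth signature concentrates its energy in deep approximation levels, while a highly oscillatory one loads the early detail levels, so the level-to-level energy ratio separates the two classes.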
-
Patent number: 7395248
Abstract: The invention concerns a method for determining competing risks for objects following an initial event based on previously measured or otherwise objectifiable training data patterns, in which several signals obtained from a learning capable system are combined in an objective function in such a way that said learning capable system is rendered capable of detecting or forecasting the underlying probabilities of each of the said competing risks.
Type: Grant
Filed: December 7, 2001
Date of Patent: July 1, 2008
Inventors: Ronald E. Kates, Nadia Harbeck
-
Patent number: 7392231
Abstract: A user's preference structure in respect of alternative “objects” with which the user is presented is captured in a multi-attribute utility function. The user ranks these competing objects in order of the user's relative preference for such objects. A utility function that defines the user's preference structure is provided as output on the basis of this relative ranking. This technique can be used to assist a buyer in selecting between multi-attribute quotes or bids submitted by prospective suppliers to the buyer.
Type: Grant
Filed: December 3, 2002
Date of Patent: June 24, 2008
Assignee: International Business Machines Corporation
Inventors: Jayanta Basak, Manish Gupta
-
Patent number: 7293002
Abstract: A method for organizing processors to perform artificial neural network tasks is provided. The method provides a computer executable methodology for organizing processors in self-organizing, data driven, learning hardware with local interconnections. Training data is processed substantially in parallel by the locally interconnected processors. The local processors determine local interconnections between the processors based on the training data. The local processors then determine, substantially in parallel, transformation functions and/or entropy based thresholds for the processors based on the training data.
Type: Grant
Filed: June 18, 2002
Date of Patent: November 6, 2007
Assignee: Ohio University
Inventor: Janusz A. Starzyk
-
Patent number: 7143072
Abstract: A neural network having layers of neurons divided into sublayers of neurons. The values of target neurons in one layer are calculated from sublayers of source neurons in a second underlying layer. It is therefore always possible to use for this calculation the same group of weights to be multiplied by respective source neurons related thereto and situated in the underlying layer of the neural network.
Type: Grant
Filed: September 26, 2002
Date of Patent: November 28, 2006
Assignee: CSEM Centre Suisse d'Electronique et de Microtechnique SA
Inventors: Jean-Marc Masgonty, Philippe Vuilleumier, Peter Masa, Christian Piguet
-
Patent number: 7092922
Abstract: An adaptive learning method for automated maintenance of a neural net model is provided. The neural net model is trained with an initial set of training data. Partial products of the trained model are stored. When new training data are available, the trained model is updated by using the stored partial products and the new training data to compute weights for the updated model.
Type: Grant
Filed: May 21, 2004
Date of Patent: August 15, 2006
Assignee: Computer Associates Think, Inc.
Inventors: Zhuo Meng, Baofu Duan, Yoh-Han Pao
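One common reading of "stored partial products" is the normal-equation view: for a network with a linear output layer, the output weights solve the least-squares system built from A = HᵀH and B = HᵀY, and storing A and B lets new data be folded in without revisiting the old training set. The sketch below illustrates that idea under those assumptions; the class name, the ridge term, and the linear-output restriction are my own choices, not details from the patent.

```python
import numpy as np

class LinearOutputNet:
    """Least-squares output layer whose normal-equation partial
    products (A = H^T H, B = H^T Y) are stored, so the model can be
    updated with new batches without revisiting old training data.
    A small ridge term keeps A invertible before any data arrives."""

    def __init__(self, n_features, n_outputs, ridge=1e-8):
        self.A = ridge * np.eye(n_features)
        self.B = np.zeros((n_features, n_outputs))
        self.weights = np.zeros((n_features, n_outputs))

    def update(self, H, Y):
        # Fold the new batch into the stored partial products,
        # then re-solve for the output weights.
        self.A += H.T @ H
        self.B += H.T @ Y
        self.weights = np.linalg.solve(self.A, self.B)

    def predict(self, H):
        return H @ self.weights
```

Here H would be the (fixed) hidden-layer features of the inputs; only the sufficient statistics A and B grow with new data, so retraining cost is independent of how much old data has been seen.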
-
Patent number: 7080055
Abstract: Methods and apparatuses for backlash compensation. A dynamics inversion compensation scheme is designed for control of nonlinear discrete-time systems with input backlash. The techniques of this disclosure extend the dynamic inversion technique to discrete-time systems by using a filtered prediction, and show how to use a neural network (NN) for inverting the backlash nonlinearity in the feedforward path. The techniques provide a general procedure for using NN to determine the dynamics preinverse of an invertible discrete-time dynamical system. A discrete-time tuning algorithm is given for the NN weights so that the backlash compensation scheme guarantees bounded tracking and backlash errors, and also bounded parameter estimates. A rigorous proof of stability and performance is given and a simulation example verifies performance. Unlike standard discrete-time adaptive control techniques, no certainty equivalence (CE) or linear-in-the-parameters (LIP) assumptions are needed.
Type: Grant
Filed: October 2, 2001
Date of Patent: July 18, 2006
Inventors: Javier Campos, Frank L. Lewis
-
Patent number: 7054850
Abstract: A pattern detecting apparatus has a plurality of hierarchized neuron elements to detect a predetermined pattern included in input patterns. Pulse signals output from the plurality of neuron elements are given specific delays by synapse circuits associated with the individual elements. This makes it possible to transmit the pulse signals to the neuron elements of the succeeding layer through a common bus line so that they can be identified on a time base. The neuron elements of the succeeding layer output the pulse signals at output levels based on an arrival time pattern of the plurality of pulse signals received from the plurality of neuron elements of the preceding layer within a predetermined time window. Thus, the reliability of pattern detection can be improved, and the number of wires interconnecting the elements can be reduced by the use of the common bus line, leading to a small scale of circuit and reduced power consumption.
Type: Grant
Filed: June 12, 2001
Date of Patent: May 30, 2006
Assignee: Canon Kabushiki Kaisha
Inventor: Masakazu Matsugu
-
Patent number: 6876989
Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function, and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and to provide weight change data information to the weight updating circuit.
Type: Grant
Filed: February 13, 2002
Date of Patent: April 5, 2005
Assignee: Winbond Electronics Corporation
Inventors: Bingxue Shi, Chun Lu, Lu Chen
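The split this abstract describes (a forward path that emits both the activation and its first derivative, feeding a back-propagation path that produces weight changes) is the textbook backpropagation dataflow. A minimal software analogue of that dataflow, with a sigmoid activation and a linear output layer (my choices, not the circuit's), looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, w1, w2, lr=0.1):
    """One forward/backward pass of a two-layer network.

    The hidden activation h and its first derivative h*(1-h) are both
    produced in the forward pass, mirroring the dual-output neuron
    circuit; the backward pass turns them into weight changes.
    """
    h = sigmoid(x @ w1)            # activation function
    h_prime = h * (1.0 - h)        # its first derivative
    y = h @ w2                     # linear output layer
    err = y - target
    grad_w2 = np.outer(h, err)                   # output-layer change
    grad_w1 = np.outer(x, (w2 @ err) * h_prime)  # back-propagated change
    return w1 - lr * grad_w1, w2 - lr * grad_w2, float(np.sum(err ** 2))
```

Repeated calls play the role of the weight-updating circuit: each step consumes the stored derivative and emits updated weights.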
-
Patent number: 6856983
Abstract: A method and system are described that adaptively adjust an eService management system by using feedback control. Behavior experts are distributed at different levels of the hierarchy of the eService management system. Within the hierarchy, feed-forward reasoning is performed from lower-level behavior experts to the higher-level behavior experts. A method for identifying bottlenecks is described and utilized. The performance of these behavior experts is compared with various objective functions. The discrepancies are used to adjust the system.
Type: Grant
Filed: October 26, 2001
Date of Patent: February 15, 2005
Assignee: Panacya, Inc.
Inventors: Earl D. Cox, Xindong Wang, Shi-Yue Qiu
-
Patent number: 6826550
Abstract: Provided is a compiler to map application program code to object code capable of being executed on an operating system platform. A first neural network module is trained to generate characteristic output based on input information describing attributes of the application program. A second neural network module is trained to receive as input the application program code and the characteristic output and, in response, generate object code. The first and second neural network modules are used to convert the application program code to object code.
Type: Grant
Filed: December 15, 2000
Date of Patent: November 30, 2004
Assignee: International Business Machines Corporation
Inventors: Michael Wayne Brown, Chung Tien Nguyen
-
Patent number: 6820053
Abstract: Method of suppressing audible noise in speech transmission by means of a multi-layer self-organizing fed-back neural network comprising a minima detection layer, a reaction layer, a diffusion layer and an integration layer, said layers defining a filter function F(f,T) for noise filtering.
Type: Grant
Filed: October 6, 2000
Date of Patent: November 16, 2004
Inventor: Dietmar Ruwisch
-
Patent number: 6778986
Abstract: Computer method and apparatus identifies content owner of a Web site. A collecting step or element collects candidate names from the subject Web site. For each candidate name, a test module (or testing step) runs tests that provide quantitative/statistical evaluation of the candidate name being the content owner name of the subject Web site. The test results are combined mathematically, such as by a Bayesian network, into an indication of content owner name.
Type: Grant
Filed: November 1, 2000
Date of Patent: August 17, 2004
Assignee: Eliyon Technologies Corporation
Inventors: Jonathan Stern, Kosmas Karadimitriou, Michel Decary, Jeremy W. Rothman-Shore
-
Publication number: 20040107171
Abstract: A user's preference structure in respect of alternative “objects” with which the user is presented is captured in a multi-attribute utility function. The user ranks these competing objects in order of the user's relative preference for such objects. A utility function that defines the user's preference structure is provided as output on the basis of this relative ranking. This technique can be used to assist a buyer in selecting between multi-attribute quotes or bids submitted by prospective suppliers to the buyer.
Type: Application
Filed: December 3, 2002
Publication date: June 3, 2004
Inventors: Jayanta Basak, Manish Gupta
-
Publication number: 20040064427
Abstract: A PBNN for isolating faults in a plurality of components forming a physical system, comprising: a plurality of input nodes, each comprising a plurality of inputs comprising a measurement of the physical system, and an input transfer function comprising a hyperplane representation of at least one fault for converting the at least one input into a first layer output; a plurality of hidden layer nodes, each receiving at least one first layer output and comprising a hidden transfer function for converting the at least one first layer output into a hidden layer output comprising a root sum square of a plurality of distances of at least one of the first layer outputs; and a plurality of output nodes, each receiving at least one of the hidden layer outputs and comprising an output transfer function for converting the hidden layer outputs into an output.
Type: Application
Filed: September 30, 2002
Publication date: April 1, 2004
Inventors: Hans R. Depold, David John Sirag
-
Patent number: 6714924
Abstract: A method and apparatus for color matching are provided, in which paint recipe neural networks are utilized. The color of a standard is expressed as color values. The neural network includes an input layer having nodes for receiving input data related to paint bases. Weighted connections connect to the nodes of the input layer and have coefficients for weighting the input data. An output layer having nodes is either directly or indirectly connected to the weighted connections and generates output data related to color values. The data to the input layer and the data from the output layer are interrelated through the neural network's nonlinear relationship. The paint color matching neural network can be used for, but is not limited to, color formula correction, matching from scratch, effect pigment identification, selection of targets for color tools, searching existing formulas for the closest match, identification of formula mistakes, development of color tolerances and enhancing conversion routines.
Type: Grant
Filed: February 7, 2001
Date of Patent: March 30, 2004
Assignee: BASF Corporation
Inventor: Craig J. McClanahan
-
Publication number: 20040030664
Abstract: Two neural networks are used to adaptively control a vibration- and noise-producing plant. The first neural network, the emulator, models the complex, nonlinear output of the plant with respect to certain controls and stimuli applied to the plant. The second neural network, the controller, calculates a control signal which affects the vibration- and noise-producing characteristics of the plant. By using the emulator model to calculate the nonlinear plant gradient, the controller matrix coefficients can be adapted by backpropagation of the plant gradient to produce a control signal which results in the minimum vibration and noise possible, given the current operating characteristics of the plant.
Type: Application
Filed: November 5, 2002
Publication date: February 12, 2004
Inventors: Antonios N. Kotoulas, Charles Berezin, Michael S. Torok, Peter F. Lorber
-
Publication number: 20040010480
Abstract: A method for operating a neural network, and a program and apparatus that operate in accordance with the method. The method comprises the steps of applying data indicative of predetermined content, derived from an electronic signal including a representation of the predetermined content, to an input of at least one neural network, to cause the at least one network to generate at least one output indicative of either a detection or a non-detection of the predetermined content. Each neural network has an architecture specified by at least one corresponding parameter. The method also comprises a step of evolving the at least one parameter to modify the architecture of the at least one neural network, based on the at least one output, to increase the accuracy at which that at least one neural network detects the predetermined content indicated by the data.
Type: Application
Filed: July 9, 2002
Publication date: January 15, 2004
Inventors: Lalitha Agnihotri, James David Schaffer, Nevenka Dimitrova, Thomas McGee, Sylvie Jeannin
-
Publication number: 20040006545
Abstract: An analog neural computing medium, neuron and neural networks comprising same are disclosed. The neural computing medium includes a phase change material that has the ability to cumulatively respond to multiple synchronous or asynchronous input signals. The introduction of input signals induces transformations among a plurality of accumulation states of the disclosed neural computing medium. The accumulation states are characterized by a high electrical resistance that is substantially identical for all accumulation states. The high electrical resistance prevents the neural computing medium from transmitting signals. Upon cumulative receipt of energy from one or more input signals that equals or exceeds a threshold value, the neural computing medium fires by transforming to a low resistance state that is capable of transmitting signals. The neural computing medium thus closely mimics the neurosynaptic function of a biological neuron.
Type: Application
Filed: July 3, 2002
Publication date: January 8, 2004
Inventor: Stanford R. Ovshinsky
-
Publication number: 20040002928
Abstract: An RBF pattern recognition method for reducing classification errors is provided. An optimum RBF training approach is obtained for reducing an error calculated by an error function. The invention continuously generates the updated differences of parameters in the learning process of recognizing training samples. The modified parameters are employed to stepwise adjust the RBF neural network. The invention can distinguish different degrees of importance and learning contributions among the training samples, and evaluate the learning contribution of each training sample for obtaining differences of the parameters of the training samples. When the learning contribution is larger, the updated difference is larger, to speed up the learning. Thus, the difference of the parameters is zero when the training samples are classified as the correct pattern type.
Type: Application
Filed: October 25, 2002
Publication date: January 1, 2004
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventor: Yea-Shuan Huang
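The core idea (updates whose size scales with each sample's current error, shrinking to zero once a sample is classified correctly) can be illustrated with a minimal RBF network trained by error-proportional steps. This is a generic sketch of that idea, not the patented training rule; the fixed centers, Gaussian width, and learning rate are my own simplifications.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial-basis activations of input x for each center."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(samples, targets, centers, width, lr=0.5, epochs=20):
    """Error-driven output-weight updates.

    Each sample's step is scaled by its current error, a simple
    stand-in for the per-sample 'learning contribution' in the
    abstract: larger error means a larger update, and the update
    vanishes once the sample is fitted.
    """
    w = np.zeros((len(centers), targets.shape[1]))
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            phi = rbf_features(x, centers, width)
            err = t - phi @ w               # this sample's contribution
            w += lr * np.outer(phi, err)    # step scaled by the error
    return w
```

Classification takes the argmax of `rbf_features(x, centers, width) @ w` over the class outputs.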
-
Publication number: 20030225716
Abstract: A neural network includes a programmable template matching network and a winner-take-all (WTA) network. The programmable template matching network can be programmed with different templates. The WTA network has an output which can be reconfigured, and the scale of the WTA network can be expanded.
Type: Application
Filed: May 31, 2002
Publication date: December 4, 2003
Inventors: Bingxue Shi, Guoxing Li
-
Patent number: 6643627
Abstract: An information processing system having signal processors that are interconnected by processing junctions that simulate and extend biological neural networks. Each processing junction receives signals from one signal processor and generates a new signal to another signal processor. The response of each processing junction is determined by internal junction processes and is continuously changed with temporal variation in the received signal. Different processing junctions connected to receive a common signal from a signal processor respond differently to produce different signals to downstream signal processors. This transforms a temporal pattern of a signal train of spikes into a spatio-temporal pattern of junction events and provides an exponential computational power to signal processors. Each signal processing junction can receive a feedback signal from a downstream signal processor so that an internal junction process can be adjusted to learn certain characteristics embedded in received signals.
Type: Grant
Filed: March 26, 2002
Date of Patent: November 4, 2003
Assignee: University of Southern California
Inventors: Jim-Shih Liaw, Theodore W. Berger
-
Publication number: 20030191728
Abstract: A method is described for improving the prediction accuracy and generalization performance of artificial neural network models in the presence of input-output example data containing instrumental noise and/or measurement errors. The presence of noise and/or errors in the input-output example data used for training the network models creates difficulties in learning accurately the nonlinear relationships existing between the inputs and the outputs. To effectively learn the noisy relationships, the methodology envisages creation of a large-sized noise-superimposed sample input-output dataset using computer simulations. Here, a specific amount of Gaussian noise is added to each input/output variable in the example set, and the enlarged sample data set created thereby is used as the training set for constructing the artificial neural network model. The amount of noise to be added is specific to an input/output variable, and its optimal value is determined using a stochastic search and optimization technique, namely, gen…
Type: Application
Filed: March 27, 2002
Publication date: October 9, 2003
Inventors: Bhaskar Dattatray Kulkarni, Sanjeev Shrikrishna Tambe, Jayaram Budhaji Lonari, Neelamkumar Valecha, Sanjay Vasantrao Dheshmukh, Bhavanishankar Shenoy, Sivaraman Ravichandran
-
Publication number: 20030163436
Abstract: A neuronal network for modeling an output function that describes a physical system using functionally linked neurons (2), each of which is assigned a transfer function, allowing it to transfer an output value determined from said neuron to the next neuron that is functionally connected to it in series in the longitudinal direction (6) of the network (1), as an input value. The functional relations necessary for linking the neurons are provided within only one of at least two groups (21, 22, 23) of neurons arranged in a transverse direction (7) and between one input layer (3) and one output layer (5). The groups (21, 22, 23) include at least two intermediate layers (11, 12, 13) arranged sequentially in a longitudinal direction (5), each with at least one neuron.
Type: Application
Filed: January 13, 2003
Publication date: August 28, 2003
Inventor: Jost Seifert
-
Publication number: 20030154175
Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function, and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and to provide weight change data information to the weight updating circuit.
Type: Application
Filed: February 13, 2002
Publication date: August 14, 2003
Inventors: Bingxue Shi, Chun Lu, Lu Chen
-
Patent number: 6601052
Abstract: The present invention discloses an implementation of the selective attention mechanism occurring in the human brain, using a conventional neural network (multi-layer perceptron) and the error back-propagation method as a conventional learning method, and an application of the selective attention mechanism to perception of patterns such as voices or characters. In contrast to the conventional multi-layer perceptron and error back-propagation method, in which the weighted value of the network is changed based on a given input signal, the selective attention algorithm of the present invention involves learning a present input pattern to minimize the error of the output layer with the weighted value set to a fixed value, so that the network can receive only a desired input signal to simulate the selective attention mechanism in the aspect of biology.
Type: Grant
Filed: June 19, 2000
Date of Patent: July 29, 2003
Assignee: Korea Advanced Institute of Science and Technology
Inventors: Soo Young Lee, Ki Young Park
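The inversion described in this abstract (hold the trained weights fixed and back-propagate the output error all the way to the input, adapting the input pattern itself) can be sketched directly. The two-layer sigmoid network, learning rate, and step count below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attend(x, target, w1, w2, lr=0.5, steps=200):
    """Adapt the *input* (not the weights) of a trained two-layer
    perceptron so that the output error falls: gradient descent on the
    input with w1 and w2 held fixed, as in the selective-attention
    scheme."""
    x = x.copy()
    for _ in range(steps):
        h = sigmoid(x @ w1)
        y = sigmoid(h @ w2)
        err = y - target
        # Back-propagate the error through both layers to the input.
        delta_out = err * y * (1.0 - y)
        delta_hid = (w2 @ delta_out) * h * (1.0 - h)
        x -= lr * (w1 @ delta_hid)
    return x
```

The adapted input is the network's "attended" version of the stimulus: the components the fixed network needs in order to produce the desired output are amplified, and the rest are suppressed.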
-
Patent number: 6601051Abstract: A neural system is disclosed for processing an exogenous input process to produce a good outward output process with respect to a performance criterion, even if the range of one or both of these processes is necessarily large and/or keeps necessarily expanding during the operation of the neural system. The disclosed neural system comprises a recurrent neural network (RNN) and at least one range extender or reducer, each of which is a dynamic transformer. A range reducer transforms dynamically at least one component of the exogenous input process into inputs to at least one input neuron of said RNN. A range extender transforms dynamically outputs of at least one output neuron of said RNN into at least one component of the outward output process. There are many types of range extender and reducer, which have different degrees of effectiveness and computational costs.Type: GrantFiled: July 11, 1997Date of Patent: July 29, 2003Assignee: Maryland Technology CorporationInventors: James Ting-Ho Lo, Lei Yu
-
Patent number: 6526168Abstract: A neural classifier that allows visualization of the query, the training data and the decision regions in a single two-dimensional display, providing benefits for both the designer and the user. The visual neural classifier is formed from a set of experts and a visualization network. Visualization is accomplished by a funnel-shaped multilayer dimensionality reduction network configured to learn one or more classification tasks. If a single dimensionality reduction network does not provide sufficiently accurate classification results, a group of these dimensionality reduction networks may be arranged in a modular architecture. Among these dimensionality reduction networks, the experts receive the input data and the visualization network combines the decisions of the experts to form the final classification decision.Type: GrantFiled: March 18, 1999Date of Patent: February 25, 2003Assignee: The Regents of the University of CaliforniaInventors: Chester Ornes, Jack Sklansky
-
Patent number: 6473746Abstract: A method of verifying pretrained, static, feedforward neural network mapping software using Lipschitz constants for determining bounds on output values and estimation errors is disclosed. By way of example, two cases of interest from the point of view of safety-critical software, such as aircraft fuel gauging systems, are discussed. The first case is the simpler one, in which neural net mapping software is trained to replace look-up table mapping software. A detailed verification procedure is provided to establish functional equivalence of the neural net and look-up table mapping functions over the entire range of inputs accepted by the look-up table mapping function. The second case is when a neural net is trained to estimate a quantity of interest in the process (such as fuel mass) from redundant and noisy sensor signals.Type: GrantFiled: December 16, 1999Date of Patent: October 29, 2002Assignee: Simmonds Precision Products, Inc.Inventor: Radoslaw Romuald Zakrzewski
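For a feedforward net with 1-Lipschitz activations such as tanh, one simple (and conservative) global Lipschitz constant is the product of the spectral norms of the layer weight matrices; any such constant converts a verified input range into a guaranteed output range. An illustrative sketch of that bound with random toy weights (not the patent's verification procedure):

```python
import numpy as np

def lipschitz_bound(weights):
    """Product of layer spectral norms: a conservative global
    Lipschitz constant for a net with 1-Lipschitz activations."""
    L = 1.0
    for W in weights:
        L *= np.linalg.norm(W, 2)    # largest singular value
    return L

def forward(weights, x):
    for W in weights[:-1]:
        x = np.tanh(W @ x)
    return weights[-1] @ x           # linear output layer

rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 3)), rng.normal(size=(1, 8))]
L = lipschitz_bound(weights)

# The output can never move faster than L times the input moves:
x1, x2 = rng.normal(size=3), rng.normal(size=3)
gap = abs(forward(weights, x1)[0] - forward(weights, x2)[0])
```

Because the bound holds for every pair of inputs, it supports exactly the kind of worst-case certification that safety-critical software verification requires.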
-
Patent number: 6405122Abstract: A data estimation capability using an FNN to estimate engine state data for an engine control system is described. The data estimation capability makes data relating to the engine state available as control parameters in a simple, inexpensive manner. The data estimation includes using data from one or more sensors as inputs to an FNN to estimate unmeasured engine operating states. The data estimates are provided as control parameters to an engine control system. The FNN can be used to provide data estimates for engine state values (e.g., the exhaust air-fuel ratio, the exhaust NOx value, the combustion chamber temperature, etc.) that are too difficult or too expensive to measure directly. Each FNN can be configured using a genetic optimizer to select the input data used by the FNN and the coupling weights in the FNN.Type: GrantFiled: June 2, 1999Date of Patent: June 11, 2002Assignee: Yamaha Hatsudoki Kabushiki KaishaInventor: Masashi Yamaguchi
-
Patent number: 6381083Abstract: In a recording/playback system, increased information is achieved by 4 level biased magnetic recording where the maximum amplitude 4 level recording signal drives the medium's magnetization into a nonlinear region of its transfer function. The bias does not eliminate distortion at the maximum signal input level, however the system's signal to noise ratio is improved due to an increase in the amplitude of the playback signal resulting from the increased recording level. The nonlinear mapping capability of a neural network provides equalization of playback signals distorted due to the record/playback nonlinearity. The 4 level recorded signals provide a factor of 2 in information storage compared to binary recording, and quadrature amplitude modulation (QAM) combined with the 4 level recording technique provides an additional factor of 2, for a factor of 4 in the information content stored.Type: GrantFiled: July 30, 1999Date of Patent: April 30, 2002Assignee: Applied Nonlinear Sciences, LLCInventors: Henry D. I. Abarbanel, James U. Lemke, Lev S. Tsimring, Lev N. Korzinov, Paul H. Bryant, Mikhail M. Sushchik, Nikolai F. Rulkov
-
Patent number: 6363333Abstract: A time series established by a measured signal of a dynamic system, for example a quotation curve on the stock market, is modelled according to its probability density so that future values can be predicted. A non-linear Markov process of order m is suited to describing the conditioned probability densities. A neural network is trained according to the probabilities of the Markov process using the maximum likelihood principle, a training rule that maximizes the product of probabilities. The neural network predicts a future value from a prescribable number m of past values of the signal to be predicted. A number of steps into the future can be predicted by iteration. The order m of the non-linear Markov process, which corresponds to the number of past values that are important in modelling the conditioned probability densities, serves as a parameter for improving the reliability of the prediction.Type: GrantFiled: April 30, 1999Date of Patent: March 26, 2002Assignee: Siemens AktiengesellschaftInventors: Gustavo Deco, Christian Schittenkopf
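The prediction scheme — learn a one-step order-m predictor, then iterate it to look several steps ahead — can be sketched with a linear least-squares predictor standing in for the neural network (the data and the order m=4 are invented for illustration):

```python
import numpy as np

def fit_order_m(series, m):
    """Least-squares order-m predictor (a linear stand-in for the
    patent's neural network): predict x[t] from the previous m values."""
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(series, coef, steps):
    """Iterate one-step predictions: each new value is fed back into
    the window of past values, giving multi-step prediction."""
    window = list(series[-len(coef):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, window))
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

t = np.arange(200)
series = np.sin(0.1 * t)             # toy "measured signal"
coef = fit_order_m(series, m=4)
future = predict(series, coef, steps=5)
```

On this toy signal the order-4 window captures the dynamics exactly, so the iterated predictions track the true continuation; the abstract's point is that m controls how much of the past the model conditions on.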
-
Patent number: 6360193Abstract: In an intelligent object oriented agent system, a computer implemented or user assisted method of decision making in at least one situation. The method includes the step of configuring at least one tactical agent implemented by at least one tactical agent object that includes a plurality of resources corresponding to immediate certainties, near certainties, and longer-term possibilities characterizing the at least one situation. The method also includes the steps of processing the at least one situation using the at least one tactical agent, and implementing the decision making, by at least one user or independently by at least one intelligent agent responsive to the processing step. A computer readable tangible medium stores instructions for implementing the user assisted or computer implemented method of decision making, which instructions are executable by a computer. In a preferred embodiment, the situation comprises an aerial combat situation, or other situation with moving resources.Type: GrantFiled: March 29, 1999Date of Patent: March 19, 2002Assignee: 21st Century Systems, Inc.Inventor: Alexander D. Stoyen
-
Patent number: 6338052Abstract: A method for optimizing the matching network between an output impedance and an input impedance in a semiconductor process apparatus is disclosed. The method includes the steps of: providing a neural network capable of being trained through repeated learning; training the neural network on previously performed process conditions; setting up an initial value; comparing the initial value with a theoretically calculated value to obtain the error between the values; and repeating the training, setting, and comparing steps until the error becomes zero.Type: GrantFiled: June 25, 1998Date of Patent: January 8, 2002Assignee: Hyundai Electronics Industries Co., Ltd.Inventor: Koon Ho Bae
-
Patent number: 6278986Abstract: An integrated control for a machine such as an engine installed in a vehicle or vessel is conducted by the steps of: determining the characteristics of a user and/or using conditions; and changing characteristics of a control unit of the machine in accordance with the determined characteristics. Normally, the control unit includes: a reflection hierarchy for outputting a base value; a learning hierarchy for learning and operation; and an evolutionary-adaptation hierarchy for selecting the most adaptable module. The machine is “trained” to suit the characteristics of the user and/or the using conditions.Type: GrantFiled: January 15, 1999Date of Patent: August 21, 2001Assignee: Yamaha Hatsudoki Kabushiki KaishaInventors: Ichikai Kamihira, Masashi Yamaguchi
-
Patent number: 6192351Abstract: There is disclosed a pattern identifying neural network comprising at least an input and an output layer, the output layer having a plurality of principal nodes, each principal node trained to recognize a different class of patterns, and at least one fuzzy node trained to recognize all classes of patterns recognized by the principal nodes, but with outputs set at levels lower than the corresponding outputs of the principal nodes.Type: GrantFiled: January 27, 1998Date of Patent: February 20, 2001Assignee: Osmetech PLCInventor: Krishna Chandra Persaud
-
Patent number: 6064997Abstract: A family of novel multi-layer discrete-time neural net controllers is presented for the control of a multi-input multi-output (MIMO) dynamical system. No learning phase is needed. The structure of the neural net (NN) controller is derived using a filtered error/passivity approach. For guaranteed stability, the upper bound on the constant learning rate parameter for the delta rule employed in standard back propagation is shown to decrease with the number of hidden-layer neurons, so that learning must slow down. This major drawback is shown to be easily overcome by using a projection algorithm in each layer. The notion of persistency of excitation for multilayer NNs is defined and explored. New on-line improved tuning algorithms for discrete-time systems are derived, similar to e-modification for the case of continuous-time systems, that include a modification to the learning rate parameter plus a correction term. These algorithms guarantee tracking as well as bounded NN weights.Type: GrantFiled: March 19, 1997Date of Patent: May 16, 2000Assignee: University of Texas System, The Board of RegentsInventors: Sarangapani Jagannathan, Frank Lewis
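The projection algorithm referred to is essentially a normalized update: dividing the delta-rule step by the input energy keeps the effective gain bounded no matter how many hidden-layer signals feed a weight vector. A sketch of one such normalized rule (a generic NLMS-style update on a toy linear tracking problem, not the patent's exact tuning law):

```python
import numpy as np

def projection_step(w, x, e, alpha=0.5):
    """Projection-algorithm update: the delta-rule step is divided
    by the input energy, so the stable step size no longer shrinks
    as the number of hidden-layer signals grows."""
    return w + alpha * e * x / (1.0 + x @ x)

# Track an unknown linear map with a *wide* input vector (n = 200),
# where a fixed-rate delta rule would need a tiny learning rate:
rng = np.random.default_rng(2)
n = 200
w_true = rng.normal(size=n)          # toy "ideal" weights
w = np.zeros(n)
for _ in range(5000):
    x = rng.normal(size=n)
    e = w_true @ x - w @ x           # instantaneous output error
    w = projection_step(w, x, e)
```

With the normalization, the same alpha works at any layer width, which is the drawback the abstract says the projection algorithm removes.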
-
Patent number: 6058386Abstract: The invention relates to a device for designing a neural network. To determine the number of neurons (21 . . . 24) in the intermediate layer, the domain of each input signal (X1, X2) is subdivided into a predefinable number of subdomains; with a multiplicity n of input signals, the n-dimensional value space of the n input signals is correspondingly subdivided into n-dimensional partial spaces. The supporting values (xi, yi) of the training data are assigned to the subdomains or partial spaces, the subdomains or partial spaces having the most supporting values are selected, and for each selected subdomain or partial space a neuron is provided in the intermediate layer preceding the output layer. The device according to the invention can be used advantageously for designing neural networks whose training data are unevenly distributed.Type: GrantFiled: December 8, 1997Date of Patent: May 2, 2000Assignee: Siemens AktiengesellschaftInventor: Karl-Heinz Kirchberg
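The design rule — one hidden neuron per densely populated subdomain — can be sketched by gridding the input space and counting training points per cell (toy clustered 2-D data and an arbitrary grid size, not the Siemens device):

```python
import numpy as np

def place_neurons(X, n_bins, n_neurons):
    """Divide each input dimension into n_bins subdomains, count the
    training points falling into each grid cell, and reserve one
    hidden neuron for each of the n_neurons best-populated cells."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.clip(((X - lo) / (hi - lo) * n_bins).astype(int),
                  0, n_bins - 1)                  # cell index per point
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1][:n_neurons]  # densest cells first
    return cells[order], counts[order]

rng = np.random.default_rng(3)
# unevenly distributed training data: most points cluster in one corner
X = np.vstack([rng.uniform(0, 0.3, size=(80, 2)),
               rng.uniform(0, 1, size=(20, 2))])
cells, counts = place_neurons(X, n_bins=4, n_neurons=3)
```

On this data the selected cells fall in the densely sampled corner, so the hidden layer's capacity is spent where the training data actually lie, which is the stated advantage for unevenly distributed data.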