Multilayer Feedforward Patents (Class 706/31)
-
Patent number: 6820053
Abstract: Method of suppressing audible noise in speech transmission by means of a multi-layer self-organizing fed-back neural network comprising a minima detection layer, a reaction layer, a diffusion layer and an integration layer, said layers defining a filter function F(f,T) for noise filtering.
Type: Grant
Filed: October 6, 2000
Date of Patent: November 16, 2004
Inventor: Dietmar Ruwisch
-
Patent number: 6778986
Abstract: Computer method and apparatus identifies content owner of a Web site. A collecting step or element collects candidate names from the subject Web site. For each candidate name, a test module (or testing step) runs tests that provide quantitative/statistical evaluation of the candidate name being the content owner name of the subject Web site. The test results are combined mathematically, such as by a Bayesian network, into an indication of content owner name.
Type: Grant
Filed: November 1, 2000
Date of Patent: August 17, 2004
Assignee: Eliyon Technologies Corporation
Inventors: Jonathan Stern, Kosmas Karadimitriou, Michel Decary, Jeremy W. Rothman-Shore
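The mathematical combination of per-test evidence that this abstract describes can be sketched as a naive-Bayes update in log-odds form. This is an illustrative sketch, not the patent's method: the function name, the prior, and the likelihood-ratio values below are all invented for the example.

```python
import math

def combine_evidence(prior, likelihood_ratios):
    # Naive-Bayes fusion in log-odds form: each test contributes the log of
    # its likelihood ratio P(result | owner) / P(result | not owner).
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# A candidate name that passes three tests and weakly fails a fourth.
posterior = combine_evidence(0.2, [4.0, 3.0, 2.5, 0.8])
```

A likelihood ratio above 1 raises the posterior, below 1 lowers it, so independent tests compose by simple addition in log-odds space.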
-
Publication number: 20040107171
Abstract: A user's preference structure in respect of alternative “objects” with which the user is presented is captured in a multi-attribute utility function. The user ranks these competing objects in order of the user's relative preference for such objects. A utility function that defines the user's preference structure is provided as output on the basis of this relative ranking. This technique can be used to assist a buyer in selecting between multi-attribute quotes or bids submitted by prospective suppliers to the buyer.
Type: Application
Filed: December 3, 2002
Publication date: June 3, 2004
Inventors: Jayanta Basak, Manish Gupta
-
Publication number: 20040064427
Abstract: A PBNN for isolating faults in a plurality of components forming a physical system. The network comprises a plurality of input nodes, each comprising a plurality of inputs comprising a measurement of the physical system and an input transfer function comprising a hyperplane representation of at least one fault for converting the at least one input into a first layer output; a plurality of hidden layer nodes, each receiving at least one first layer output and comprising a hidden transfer function for converting it into a hidden layer output comprising a root sum square of a plurality of distances of the first layer outputs; and a plurality of output nodes, each receiving at least one hidden layer output and comprising an output transfer function for converting the hidden layer outputs into an output.
Type: Application
Filed: September 30, 2002
Publication date: April 1, 2004
Inventors: Hans R. Depold, David John Sirag
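The two transfer functions named in this abstract — a hyperplane distance at the input layer and a root sum square of distances at the hidden layer — can be sketched as follows. The function names and the unit-normal hyperplane encoding are illustrative assumptions, not the patent's exact formulation.

```python
import math

def input_node(measurements, normal, offset):
    # Input transfer: signed distance of a measurement vector from a fault
    # hyperplane given by a (unit) normal vector and an offset.
    return sum(m * n for m, n in zip(measurements, normal)) - offset

def hidden_node(first_layer_outputs):
    # Hidden transfer: root sum square of the incoming distances.
    return math.sqrt(sum(d * d for d in first_layer_outputs))

distance = input_node([1.0, 2.0], [1.0, 0.0], 0.5)  # distance from one fault plane
activation = hidden_node([3.0, 4.0])                # combines two distances
```

Small combined distances would indicate that the measurements lie close to a component's fault signature, which is what the output nodes then convert into a fault-isolation decision.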
-
Patent number: 6714924
Abstract: A method and apparatus for color matching are provided, in which paint recipe neural networks are utilized. The color of a standard is expressed as color values. The neural network includes an input layer having nodes for receiving input data related to paint bases. Weighted connections connect to the nodes of the input layer and have coefficients for weighting the input data. An output layer has nodes that are either directly or indirectly connected to the weighted connections and generates output data related to color values. The data to the input layer and the data from the output layer are interrelated through the neural network's nonlinear relationship. The paint color matching neural network can be used for, but is not limited to, color formula correction, matching from scratch, effect pigment identification, selection of targets for color tools, searching existing formulas for the closest match, identification of formula mistakes, development of color tolerances and enhancing conversion routines.
Type: Grant
Filed: February 7, 2001
Date of Patent: March 30, 2004
Assignee: BASF Corporation
Inventor: Craig J. McClanahan
-
Publication number: 20040030664
Abstract: Two neural networks are used to control adaptively a vibration and noise-producing plant. The first neural network, the emulator, models the complex, nonlinear output of the plant with respect to certain controls and stimuli applied to the plant. The second neural network, the controller, calculates a control signal which affects the vibration and noise producing characteristics of the plant. By using the emulator model to calculate the nonlinear plant gradient, the controller matrix coefficients can be adapted by backpropagation of the plant gradient to produce a control signal which results in the minimum vibration and noise possible, given the current operating characteristics of the plant.
Type: Application
Filed: November 5, 2002
Publication date: February 12, 2004
Inventors: Antonios N. Kotoulas, Charles Berezin, Michael S. Torok, Peter F. Lorber
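The gradient-descent adaptation this abstract describes can be shown in miniature. In this sketch a known linear plant stands in for both the real plant and the emulator network, so the "plant gradient" is simply its slope; the function names, learning rate, and plant model are all illustrative assumptions rather than the patent's design.

```python
def adapt_control(u, plant_gradient, vibration, rate=0.1):
    # One controller update: gradient descent on the squared vibration,
    # using d(vibration^2)/du = 2 * vibration * d(vibration)/du.
    return u - rate * 2.0 * vibration * plant_gradient

# Toy linear plant standing in for both the real plant and the emulator;
# its vibration is zero at u = 0.5 and its gradient is the constant 3.0.
def plant(u):
    return 3.0 * u - 1.5

u = 0.0
for _ in range(200):
    u = adapt_control(u, plant_gradient=3.0, vibration=plant(u))
```

In the patented system the gradient term would come from backpropagating through the trained emulator network instead of being known in closed form.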
-
Publication number: 20040010480
Abstract: A method for operating a neural network, and a program and apparatus that operate in accordance with the method. The method comprises the steps of applying data indicative of predetermined content, derived from an electronic signal including a representation of the predetermined content, to an input of at least one neural network, to cause the at least one network to generate at least one output indicative of either a detection or a non-detection of the predetermined content. Each neural network has an architecture specified by at least one corresponding parameter. The method also comprises a step of evolving the at least one parameter to modify the architecture of the at least one neural network, based on the at least one output, to increase an accuracy at which that at least one neural network detects the predetermined content indicated by the data.
Type: Application
Filed: July 9, 2002
Publication date: January 15, 2004
Inventors: Lalitha Agnihotri, James David Schaffer, Nevenka Dimitrova, Thomas McGee, Sylvie Jeannin
-
Publication number: 20040006545
Abstract: An analog neural computing medium, neuron and neural networks comprising same are disclosed. The neural computing medium includes a phase change material that has the ability to cumulatively respond to multiple synchronous or asynchronous input signals. The introduction of input signals induces transformations among a plurality of accumulation states of the disclosed neural computing medium. The accumulation states are characterized by a high electrical resistance that is substantially identical for all accumulation states. The high electrical resistance prevents the neural computing medium from transmitting signals. Upon cumulative receipt of energy from one or more input signals that equals or exceeds a threshold value, the neural computing medium fires by transforming to a low resistance state that is capable of transmitting signals. The neural computing medium thus closely mimics the neurosynaptic function of a biological neuron.
Type: Application
Filed: July 3, 2002
Publication date: January 8, 2004
Inventor: Stanford R. Ovshinsky
-
Publication number: 20040002928
Abstract: An RBF pattern recognition method for reducing classification errors is provided. An optimum RBF training approach is obtained for reducing an error calculated by an error function. The invention continuously generates the updated differences of parameters in the learning process of recognizing training samples. The modified parameters are employed to stepwise adjust the RBF neural network. The invention can distinguish different degrees of importance and learning contributions among the training samples and evaluate the learning contribution of each training sample for obtaining differences of the parameters of the training samples. When the learning contribution is larger, the updated difference is larger to speed up the learning. Thus, the difference of the parameters is zero when the training samples are classified as the correct pattern type.
Type: Application
Filed: October 25, 2002
Publication date: January 1, 2004
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventor: Yea-Shuan Huang
-
Publication number: 20030225716
Abstract: A neural network includes a programmable template matching network and a winner take all (WTA) network. The programmable template matching network can be programmed with different templates. The WTA network has an output which can be reconfigured, and the scale of the WTA network can be expanded.
Type: Application
Filed: May 31, 2002
Publication date: December 4, 2003
Inventors: Bingxue Shi, Guoxing Li
-
Patent number: 6643627
Abstract: An information processing system having signal processors that are interconnected by processing junctions that simulate and extend biological neural networks. Each processing junction receives signals from one signal processor and generates a new signal to another signal processor. The response of each processing junction is determined by internal junction processes and is continuously changed with temporal variation in the received signal. Different processing junctions connected to receive a common signal from a signal processor respond differently to produce different signals to downstream signal processors. This transforms a temporal pattern of a signal train of spikes into a spatio-temporal pattern of junction events and provides an exponential computational power to signal processors. Each signal processing junction can receive a feedback signal from a downstream signal processor so that an internal junction process can be adjusted to learn certain characteristics embedded in received signals.
Type: Grant
Filed: March 26, 2002
Date of Patent: November 4, 2003
Assignee: University of Southern California
Inventors: Jim-Shih Liaw, Theodore W. Berger
-
Publication number: 20030191728
Abstract: A method is described for improving the prediction accuracy and generalization performance of artificial neural network models in the presence of input-output example data containing instrumental noise and/or measurement errors. The presence of noise and/or errors in the input-output example data used for training the network models creates difficulties in learning accurately the nonlinear relationships existing between the inputs and the outputs. To learn the noisy relationships effectively, the methodology envisages creation of a large noise-superimposed sample input-output dataset using computer simulations: a specific amount of Gaussian noise is added to each input/output variable in the example set, and the enlarged sample data set created thereby is used as the training set for constructing the artificial neural network model. The amount of noise to be added is specific to an input/output variable, and its optimal value is determined using a stochastic search and optimization technique, namely, genetic algorithms.
Type: Application
Filed: March 27, 2002
Publication date: October 9, 2003
Inventors: Bhaskar Dattatray Kulkarni, Sanjeev Shrikrishna Tambe, Jayaram Budhaji Lonari, Neelamkumar Valecha, Sanjay Vasantrao Dheshmukh, Bhavanishankar Shenoy, Sivaraman Ravichandran
-
Publication number: 20030163436
Abstract: A neuronal network for modeling an output function that describes a physical system using functionally linked neurons (2), each of which is assigned a transfer function, allowing it to transfer an output value determined from said neuron to the next neuron that is functionally connected to it in series in the longitudinal direction (6) of the network (1), as an input value. The functional relations necessary for linking the neurons are provided within only one of at least two groups (21, 22, 23) of neurons arranged in a transverse direction (7) and between one input layer (3) and one output layer (5). The groups (21, 22, 23) include at least two intermediate layers (11, 12, 13) arranged sequentially in a longitudinal direction (5), each with at least one neuron.
Type: Application
Filed: January 13, 2003
Publication date: August 28, 2003
Inventor: Jost Seifert
-
Publication number: 20030154175
Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function, and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and to provide weight change data information to the weight updating circuit.
Type: Application
Filed: February 13, 2002
Publication date: August 14, 2003
Inventors: Bingxue Shi, Chun Lu, Lu Chen
-
Patent number: 6601051
Abstract: A neural system is disclosed for processing an exogenous input process to produce a good outward output process with respect to a performance criterion, even if the range of one or both of these processes is necessarily large and/or keeps necessarily expanding during the operation of the neural system. The disclosed neural system comprises a recurrent neural network (RNN) and at least one range extender or reducer, each of which is a dynamic transformer. A range reducer transforms dynamically at least one component of the exogenous input process into inputs to at least one input neuron of said RNN. A range extender transforms dynamically outputs of at least one output neuron of said RNN into at least one component of the outward output process. There are many types of range extender and reducer, which have different degrees of effectiveness and computational costs.
Type: Grant
Filed: July 11, 1997
Date of Patent: July 29, 2003
Assignee: Maryland Technology Corporation
Inventors: James Ting-Ho Lo, Lei Yu
-
Patent number: 6601052
Abstract: The present invention discloses an implementation of the selective attention mechanism occurring in the human brain using a conventional neural network, multi-layer perceptron and the error back-propagation method as a conventional learning method, and an application of the selective attention mechanism to perception of patterns such as voices or characters. In contrast to the conventional multi-layer perceptron and error back-propagation method, in which the weighted value of the network is changed based on a given input signal, the selective attention algorithm of the present invention involves learning a present input pattern to minimize the error of the output layer with the weighted value set to a fixed value, so that the network can receive only a desired input signal and thereby simulate the biological selective attention mechanism.
Type: Grant
Filed: June 19, 2000
Date of Patent: July 29, 2003
Assignee: Korea Advanced Institute of Science and Technology
Inventors: Soo Young Lee, Ki Young Park
-
Patent number: 6526168
Abstract: A neural classifier that allows visualization of the query, the training data and the decision regions in a single two-dimensional display, providing benefits for both the designer and the user. The visual neural classifier is formed from a set of experts and a visualization network. Visualization is accomplished by a funnel-shaped multilayer dimensionality reduction network configured to learn one or more classification tasks. If a single dimensionality reduction network does not provide sufficiently accurate classification results, a group of these dimensionality reduction networks may be arranged in a modular architecture. Among these dimensionality reduction networks, the experts receive the input data and the visualization network combines the decisions of the experts to form the final classification decision.
Type: Grant
Filed: March 18, 1999
Date of Patent: February 25, 2003
Assignee: The Regents of the University of California
Inventors: Chester Ornes, Jack Sklansky
-
Patent number: 6473746
Abstract: A method of verifying pretrained, static, feedforward neural network mapping software using Lipschitz constants for determining bounds on output values and estimation errors is disclosed. By way of example, two cases of interest from the point of view of safety-critical software, like aircraft fuel gauging systems, are discussed. The first case is the simpler case of when neural net mapping software is trained to replace look-up table mapping software. A detailed verification procedure is provided to establish functional equivalence of the neural net and look-up table mapping functions on the entire range of inputs accepted by the look-up table mapping function. The second case is when a neural net is trained to estimate the quantity of interest of the process (such as fuel mass, for example) from redundant and noisy sensor signals.
Type: Grant
Filed: December 16, 1999
Date of Patent: October 29, 2002
Assignee: Simmonds Precision Products, Inc.
Inventor: Radoslaw Romuald Zakrzewski
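One standard way to obtain Lipschitz-based output bounds of the kind this abstract mentions is to bound each layer's Lipschitz constant by an induced norm of its weight matrix and multiply the per-layer bounds. This is a generic sketch of that idea, not the patent's specific verification procedure; the weight values are invented for the example.

```python
def layer_lipschitz(weights):
    # Induced infinity norm (max absolute row sum) of the weight matrix;
    # a 1-Lipschitz activation such as tanh does not increase the bound.
    return max(sum(abs(w) for w in row) for row in weights)

def network_lipschitz(layers):
    # Whole-network bound: product of the per-layer bounds, giving
    # |f(x) - f(y)|_inf <= bound * |x - y|_inf for the full mapping.
    bound = 1.0
    for w in layers:
        bound *= layer_lipschitz(w)
    return bound

# Two-layer example: 2 inputs -> 2 hidden -> 1 output.
L = network_lipschitz([[[0.5, -1.0], [2.0, 0.25]], [[1.0, -0.5]]])
```

Such a bound lets a verifier limit how far the network output can drift between tested input points, which is the essence of establishing functional equivalence with a look-up table on a grid.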
-
Patent number: 6405122
Abstract: A data estimation capability using an FNN to estimate engine state data for an engine control system is described. The data estimation capability provides for making data relating to the engine state available as control parameters in a simple, inexpensive manner. The data estimation includes using data from one or more sensors as inputs to an FNN to estimate unmeasured engine operating states. The data estimates are provided as control parameters to an engine control system. The FNN can be used to provide data estimates for engine state values (e.g. the exhaust air fuel ratio, the exhaust NOx value, the combustion chamber temperature, etc.) that are too difficult or too expensive to measure directly. Each FNN can be configured using a genetic optimizer to select the input data used by the FNN and the coupling weights in the FNN.
Type: Grant
Filed: June 2, 1999
Date of Patent: June 11, 2002
Assignee: Yamaha Hatsudoki Kabushiki Kaisha
Inventor: Masashi Yamaguchi
-
Patent number: 6381083
Abstract: In a recording/playback system, increased information is achieved by 4 level biased magnetic recording where the maximum amplitude 4 level recording signal drives the medium's magnetization into a nonlinear region of its transfer function. The bias does not eliminate distortion at the maximum signal input level; however, the system's signal to noise ratio is improved due to an increase in the amplitude of the playback signal resulting from the increased recording level. The nonlinear mapping capability of a neural network provides equalization of playback signals distorted due to the record/playback nonlinearity. The 4 level recorded signals provide a factor of 2 in information storage compared to binary recording, and quadrature amplitude modulation (QAM) combined with the 4 level recording technique provides an additional factor of 2, for a factor of 4 in the information content stored.
Type: Grant
Filed: July 30, 1999
Date of Patent: April 30, 2002
Assignee: Applied Nonlinear Sciences, LLC
Inventors: Henry D. I. Abarbanel, James U. Lemke, Lev S. Tsimring, Lev N. Korzinov, Paul H. Bryant, Mikhail M. Sushchik, Nikolai F. Rulkov
-
Patent number: 6363333
Abstract: A time series that is established by a measured signal of a dynamic system, for example a quotation curve on the stock market, is modelled according to its probability density in order to be able to make a prediction of future values. A non-linear Markov process of the order m is suited for describing the conditioned probability densities. A neural network is trained according to the probabilities of the Markov process using the maximum likelihood principle, which is a training rule for maximizing the product of probabilities. The neural network predicts a value in the future for a prescribable number of values m from the past of the signal to be predicted. A number of steps in the future can be predicted by iteration. The order m of the non-linear Markov process, which corresponds to the number of values from the past that are important in the modelling of the conditioned probability densities, serves as parameter for improving the probability of the prediction.
Type: Grant
Filed: April 30, 1999
Date of Patent: March 26, 2002
Assignee: Siemens Aktiengesellschaft
Inventors: Gustavo Deco, Christian Schittenkopf
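The iterated multi-step prediction described here — feed the last m values to the one-step predictor, append its output, and repeat — can be sketched as follows. A trivial stand-in model replaces the trained network, so only the iteration scheme is illustrated; all names are assumptions.

```python
def predict_ahead(model, history, m, steps):
    # Feed the last m values to the one-step predictor, append its
    # prediction, and iterate to forecast several steps into the future.
    values = list(history)
    for _ in range(steps):
        values.append(model(values[-m:]))
    return values[len(history):]

# Stand-in for a trained network: predicts the mean of the last m values.
mean_model = lambda window: sum(window) / len(window)
forecast = predict_ahead(mean_model, [1.0, 2.0, 3.0], m=2, steps=3)
```

Note that later predictions are computed from earlier predictions, so forecast uncertainty compounds with each iterated step — which is why the order m matters for prediction quality.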
-
Patent number: 6360193
Abstract: In an intelligent object oriented agent system, a computer implemented or user assisted method of decision making in at least one situation. The method includes the step of configuring at least one tactical agent implemented by at least one tactical agent object that includes a plurality of resources corresponding to immediate certainties, near certainties, and longer-term possibilities characterizing the at least one situation. The method also includes the steps of processing the at least one situation using the at least one tactical agent, and implementing the decision making, by at least one user or independently by at least one intelligent agent responsive to the processing step. A computer readable tangible medium stores instructions for implementing the user assisted or computer implemented method of decision making, which instructions are executable by a computer. In a preferred embodiment, the situation comprises an aerial combat situation, or other situation with moving resources.
Type: Grant
Filed: March 29, 1999
Date of Patent: March 19, 2002
Assignee: 21st Century Systems, Inc.
Inventor: Alexander D. Stoyen
-
Patent number: 6338052
Abstract: A method for optimizing a matching network between an output impedance and an input impedance in a semiconductor process apparatus is disclosed. The method includes the steps of: providing a neural network capable of being trained through repeated learning; training the neural network from previously performed process conditions; setting up an initial value; comparing the initial value with a theoretically calculated value to obtain the error between the values; and repeating the training, setting, and comparing steps until the error becomes zero.
Type: Grant
Filed: June 25, 1998
Date of Patent: January 8, 2002
Assignee: Hyundai Electronics Industries Co., Ltd.
Inventor: Koon Ho Bae
-
Patent number: 6278986
Abstract: An integrated control for a machine such as an engine installed in a vehicle or vessel is conducted by the steps of: determining the characteristics of a user and/or using conditions; and changing characteristics of a control unit of a machine in accordance with the determined characteristics. Normally, the control unit includes: a reflection hierarchy for outputting a base value; a learning hierarchy for learning and operation; and an evolutionary-adaptation hierarchy for selecting the most adaptable module. The machine is “trained” to suit the characteristics of the user and/or the using conditions.
Type: Grant
Filed: January 15, 1999
Date of Patent: August 21, 2001
Assignee: Yamaha Hatsudoki Kabushiki Kaisha
Inventors: Ichikai Kamihira, Masashi Yamaguchi
-
Patent number: 6192351
Abstract: There is disclosed a pattern identifying neural network comprising at least an input and an output layer, the output layer having a plurality of principal nodes, each principal node trained to recognize a different class of patterns, and at least one fuzzy node trained to recognize all classes of patterns recognized by the principal nodes but with outputs set at levels lower than the corresponding outputs of the principal nodes.
Type: Grant
Filed: January 27, 1998
Date of Patent: February 20, 2001
Assignee: Osmetech PLC
Inventor: Krishna Chandra Persaud
-
Patent number: 6064997
Abstract: A family of novel multi-layer discrete-time neural net controllers is presented for the control of a multi-input multi-output (MIMO) dynamical system. No learning phase is needed. The structure of the neural net (NN) controller is derived using a filtered error/passivity approach. For guaranteed stability, the upper bound on the constant learning rate parameter for the delta rule employed in standard back propagation is shown to decrease with the number of hidden-layer neurons, so that learning must slow down. This major drawback is shown to be easily overcome by using a projection algorithm in each layer. The notion of persistency of excitation for multilayer NNs is defined and explored. New on-line improved tuning algorithms for discrete-time systems are derived, which are similar to e-modification for the case of continuous-time systems, and which include a modification to the learning rate parameter plus a correction term. These algorithms guarantee tracking as well as bounded NN weights.
Type: Grant
Filed: March 19, 1997
Date of Patent: May 16, 2000
Assignee: University of Texas System, The Board of Regents
Inventors: Sarangapani Jagannathan, Frank Lewis
-
Patent number: 6058386
Abstract: The invention relates to a device for designing a neural network. To determine the number of neurons (21 ... 24) in the intermediate layer, the domain of the input signal (X1, X2) in question is subdivided into a predefinable number of subdomains; in the case of a multiplicity n of input signals (X1, X2), the n-dimensional value space of the n input signals is subdivided, in conformance with the subdomains in question, into n-dimensional partial spaces. The supporting values (xi, yi) of the training data are assigned to the subdomains or partial spaces, and the subdomains or partial spaces having the most supporting values are selected; for each selected subdomain or partial space, provision is made for a neuron in the intermediate layer preceding the output layer. The device according to the invention can be advantageously used for designing neural networks where the training data are unevenly distributed.
Type: Grant
Filed: December 8, 1997
Date of Patent: May 2, 2000
Assignee: Siemens Aktiengesellschaft
Inventor: Karl-Heinz Kirchberg
-
Patent number: 6041322
Abstract: A digital artificial neural network (ANN) reduces memory requirements by storing a sample transfer function representing output values for multiple nodes. Each node receives an input value representing the information to be processed by the network. Additionally, the node determines threshold values indicative of boundaries for application of the sample transfer function for the node. From the input value received, the node generates an intermediate value. Based on the threshold values and the intermediate value, the node determines an output value in accordance with the sample transfer function.
Type: Grant
Filed: April 18, 1997
Date of Patent: March 21, 2000
Assignee: Industrial Technology Research Institute
Inventors: Wan-Yu Meng, Cheng-Kai Chang, Hwai-Tsu Chang, Fang-Ru Hsu, Ming-Rong Lee
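The memory-saving scheme described — one stored table of transfer-function samples shared by all nodes, with threshold values bounding where the table applies — might look roughly like this in software. The sampling grid and the clamping behavior outside the thresholds are assumptions for illustration, not the patent's circuit design.

```python
def make_sampled_transfer(samples, lo, hi):
    # Build a node transfer function from stored samples of an activation
    # curve on [lo, hi]; inputs beyond the thresholds are clamped, so one
    # small shared table can serve every node in the network.
    n = len(samples) - 1
    def transfer(x):
        if x <= lo:
            return samples[0]
        if x >= hi:
            return samples[-1]
        return samples[int((x - lo) / (hi - lo) * n)]
    return transfer

# Four samples of a step-like activation on [-1, 1].
act = make_sampled_transfer([0.0, 0.25, 0.75, 1.0], -1.0, 1.0)
```

Because saturating activations are nearly flat outside a narrow band, clamping at the thresholds loses little accuracy while keeping the stored table small.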
-
Patent number: 6028956
Abstract: Apparatus and method for determining a location and span of an object in an image. The determined location and span of the object are used to process the image to simplify a subsequent classification process.
Type: Grant
Filed: April 4, 1997
Date of Patent: February 22, 2000
Assignee: Kofile Inc.
Inventors: Alexander Shustorovich, Christopher W. Thrasher
-
Patent number: 6026178
Abstract: In order to realize a neural network for image processing by an inexpensive hardware arrangement, a neural network arranged in an image processing apparatus is constituted by an input layer having neurons for receiving information from picture elements in a 7×7 area including an interesting picture element in an image, an intermediate layer having one neuron connected to all the 49 neurons in the input layer and five groups of nine neurons, the nine neurons in each group being connected to nine neurons in the input layer, which receive information from picture elements in at least one of five 3×3 areas (1a to 1e), and an output layer having one neuron, which is connected to all the neurons in the intermediate layer and outputs information corresponding to the interesting picture element.
Type: Grant
Filed: September 5, 1996
Date of Patent: February 15, 2000
Assignee: Canon Kabushiki Kaisha
Inventor: Yukari Toda
-
Patent number: 5999643
Abstract: Disclosed is a two-layer switched-current type of Hamming neural network system. This Hamming network system includes: a matching rate computation circuit for modules on a first layer, used to compute a matching rate between a to-be-identified pattern and each one of a plurality of standard patterns; a matching rate comparison circuit on a second layer for ranking an order of the matching rates, including a switched-current type order-ranking circuit for receiving switched-current signals, finding a maximum value and outputting a time-division order-ranking output, and an identification-rejection judgment circuit for performing an absolute and a relative judgment; and a pulse-generating circuit for generating sequential clock pulses. The circuit construction of the Hamming network is simple and flexible due to a modular design with extendible circuit dimensions, and high precision, improved performance and enhanced reliability of the network system are achieved.
Type: Grant
Filed: April 6, 1998
Date of Patent: December 7, 1999
Assignee: Winbond Electronics Corp.
Inventors: Bingxue Shi, Gu Lin
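The two layers described — per-template matching rates, then a maximum-finding stage with an absolute rejection judgment — can be sketched behaviorally. The patent's implementation is an analog switched-current circuit, so this is only a behavioral model with assumed names and an assumed rejection threshold.

```python
def matching_rate(pattern, template):
    # First layer: fraction of positions where the binary patterns agree
    # (Hamming similarity between the unknown and a standard pattern).
    agree = sum(1 for p, t in zip(pattern, template) if p == t)
    return agree / len(template)

def winner_take_all(rates, threshold=0.0):
    # Second layer: index of the best-matching template, or None when even
    # the best rate falls below an absolute rejection threshold.
    best = max(range(len(rates)), key=lambda i: rates[i])
    return best if rates[best] >= threshold else None

templates = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1]]
rates = [matching_rate([1, 0, 1, 0], t) for t in templates]
winner = winner_take_all(rates, threshold=0.5)
```

The abstract's "relative judgment" could additionally compare the best and second-best rates to reject ambiguous ties; that refinement is omitted here.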
-
Patent number: 5933819
Abstract: A general neural network based method and system for identifying peptide binding motifs from limited experimental data. In particular, an artificial neural network (ANN) is trained with peptides with known sequence and function (i.e., binding strength) identified from a phage display library. The ANN is then challenged with unknown peptides, and predicts relative binding motifs. Analysis of the unknown peptides validates the predictive capability of the ANN.
Type: Grant
Filed: May 23, 1997
Date of Patent: August 3, 1999
Assignee: The Scripps Research Institute
Inventors: Jeffrey Skolnick, Mariusz Milik, Andrzej Kolinski
-
Patent number: 5852816
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user friendly neural networks which do not require algorithmic based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self training neural network form to achieve a device prototyping system.
Type: Grant
Filed: May 15, 1998
Date of Patent: December 22, 1998
Inventor: Stephen L. Thaler
-
Patent number: 5835901
Abstract: A real-time learning (RTL) neural network is capable of indicating when an input feature vector is novel with respect to feature vectors contained within its training data set, and is capable of learning to generate a correct response to a new data vector while maintaining correct responses to previously learned data vectors without requiring that the neural network be retrained on the previously learned data. The neural network has a sensor for inputting a feature vector, a first layer and a second layer. The feature vector is supplied to the first layer which may have one or more declared and unused nodes. During training, the input feature vector is clustered to a declared node only if it lies within a hypervolume defined by the declared node's automatically selectable reject radius; otherwise the input feature vector is clustered to an unused node. Clustering in overlapping hypervolumes is determined by a decision surface.
Type: Grant
Filed: July 30, 1997
Date of Patent: November 10, 1998
Assignee: Martin Marietta Corporation
Inventors: Herbert Duvoisin, III, Hal E. Beck, Joe R. Brown, Mark Bower
-
Patent number: 5832183
Abstract: An information recognition circuit comprises a plurality of recognition processing units, each composed of a neural network. Teacher signals and information signals to be processed are supplied to a plurality of the units individually so as to obtain output signals by executing individual learning. Thereafter, the plural units are connected to each other so as to construct a large scale information recognition system. Further, in the man-machine interface system, a plurality of operating instruction data are prepared. An operator's face is sensed by a TV camera to extract the factors related to the operator's facial expression. The neural network infers the operator's feeling on the basis of the extracted factors. In accordance with the inferred results, a specific sort of operating instruction is selected from a plurality of sorts of operating instructions, and the selected instruction is displayed as an appropriate instruction for the operator.
Type: Grant
Filed: January 13, 1997
Date of Patent: November 3, 1998
Assignee: Kabushiki Kaisha Toshiba
Inventors: Wataro Shinohara, Yasuo Takagi, Yutaka Iino, Shinji Hayashi, Junko Ohya, Yuichi Chida, Masahiko Murai
-
Patent number: 5822742
Abstract: A dynamically stable associative learning neural system includes a plurality of neural network architectural units. A neural network architectural unit has as input both condition stimuli and unconditioned stimulus, an output neuron for accepting the input, and patch elements interposed between each input and the output neuron. The patches in the architectural unit can be modified and added. A neural network can be formed from a single unit, a layer of units, or multiple layers of units.
Type: Grant
Filed: February 24, 1995
Date of Patent: October 13, 1998
Assignees: The United States of America as represented by the Secretary of Health & Human Services, ERIM International, Inc.
Inventors: Daniel L. Alkon, Thomas P. Vogl, Kim T. Blackwell, Garth S. Barbour
-
Patent number: 5799296
Abstract: A continuous logic system using a neural network is characterized by defining input and output variables that do not use a membership function, by employing production rules (IF/THEN rules) that relate the output variables to the input variables, and by using the neural network to compute or interpolate the outputs. The neural network first learns the given production rules and then produces the outputs in real time. The neural network is constructed of artificial neurons each having only one significant processing element in the form of a multiplier. The neural network utilizes a training algorithm which does not require repetitive training and which yields a global minimum to each given set of input vectors.
Type: Grant
Filed: February 13, 1997
Date of Patent: August 25, 1998
Assignee: Motorola, Inc.
Inventor: Shay-Ping Thomas Wang
-
Patent number: 5790761
Abstract: A process is set forth in which cancer of the colon is assessed in a patient. Assessing the probability of developing cancer involves the initial step of extracting a set of sample body fluids from the patient. The fluids can be evaluated to determine certain marker constituents in the body fluids; the fluids which are extracted have some relationship to the development of cancer, precancer, or a tendency toward cancerous conditions. The body fluid markers are measured and otherwise quantified. The marker data is then evaluated using a nonlinear technique exemplified through the use of a multiple input and multiple output neural network having a variable learning rate and training rate. The neural network is provided with data from other patients for the same or similar markers. Data from other patients who did and did not have cancer is used in the learning of the neural network, which thereby processes the data and provides a determination that the patient has a cancerous condition, precancer cells, or a tendency towards cancer.
Type: Grant
Filed: January 24, 1996
Date of Patent: August 4, 1998
Inventors: Gary L. Heseltine, Richard E. Warrington