Abstract: A digital artificial neural network (ANN) reduces memory requirements by storing a sample transfer function representing output values for multiple nodes. Each node receives an input value representing the information to be processed by the network. Additionally, the node determines threshold values indicative of boundaries for application of the sample transfer function for the node. From the input value received, the node generates an intermediate value. Based on the threshold values and the intermediate value, the node determines an output value in accordance with the sample transfer function.
Type:
Grant
Filed:
April 18, 1997
Date of Patent:
March 21, 2000
Assignee:
Industrial Technology Research Institute
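As a rough illustration of the memory-saving scheme described in the abstract above (not the patented implementation), the following Python sketch has many nodes share one pre-sampled transfer function table: a node forms a weighted-sum intermediate value, checks it against the threshold values that bound the table, and looks up its output in the shared table. The table size, the sigmoid shape, and all names are assumptions.

    import numpy as np

    # Shared, pre-sampled transfer function (a sigmoid sampled at 256 points).
    # Storing one table for many nodes is what reduces memory in the abstract.
    LOWER, UPPER = -6.0, 6.0                      # threshold values bounding the table
    SAMPLES = 256
    _x = np.linspace(LOWER, UPPER, SAMPLES)
    TRANSFER_TABLE = 1.0 / (1.0 + np.exp(-_x))    # sampled sigmoid outputs

    def node_output(inputs, weights, bias=0.0):
        """Hypothetical node: weighted sum -> threshold check -> table lookup."""
        intermediate = float(np.dot(inputs, weights)) + bias
        if intermediate <= LOWER:                 # below the lower boundary: saturate
            return float(TRANSFER_TABLE[0])
        if intermediate >= UPPER:                 # above the upper boundary: saturate
            return float(TRANSFER_TABLE[-1])
        # Map the intermediate value to the nearest stored sample.
        idx = int(round((intermediate - LOWER) / (UPPER - LOWER) * (SAMPLES - 1)))
        return float(TRANSFER_TABLE[idx])

    # Example: one node with three inputs.
    print(node_output(np.array([0.2, -0.5, 0.9]), np.array([1.0, 0.4, -0.3])))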
Abstract: Apparatus and method for determining a location and span of an object in an image. The determined location and span of the object are used to process the image to simplify a subsequent classification process.
Type:
Grant
Filed:
April 4, 1997
Date of Patent:
February 22, 2000
Assignee:
Kofile Inc.
Inventors:
Alexander Shustorovich, Christopher W. Thrasher
Abstract: In order to realize a neural network for image processing by an inexpensive hardware arrangement, a neural network arranged in an image processing apparatus is constituted by an input layer having neurons for receiving information from picture elements in a 7×7 area including the picture element of interest in an image, an intermediate layer having one neuron connected to all the 49 neurons in the input layer and five groups of nine neurons, the nine neurons in each group being connected to nine neurons in the input layer, which receive information from picture elements in at least one of five 3×3 areas (1a to 1e), and an output layer having one neuron, which is connected to all the neurons in the intermediate layer and outputs information corresponding to the picture element of interest.
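The connectivity in this abstract is concrete enough to sketch. The following Python/NumPy code builds the described topology for a single 7×7 window: one intermediate neuron connected to all 49 input values, five groups of nine neurons each connected to one 3×3 sub-area, and one output neuron connected to all 46 intermediate neurons. The positions of the five 3×3 areas, the random weights, and the sigmoid activation are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Five assumed 3x3 sub-areas (top-left row/column) inside the 7x7 window;
    # the abstract labels them 1a-1e but does not fix their positions here.
    AREAS = [(0, 0), (0, 4), (2, 2), (4, 0), (4, 4)]

    # Weights: 1 neuron seeing all 49 inputs, 5 groups of 9 neurons seeing 9 inputs each,
    # and 1 output neuron seeing all 1 + 5*9 = 46 intermediate neurons.
    w_full   = rng.normal(size=49)
    w_groups = [rng.normal(size=(9, 9)) for _ in AREAS]
    w_out    = rng.normal(size=1 + 9 * len(AREAS))

    def forward(window7x7):
        """Forward pass for one 7x7 window centred on the picture element of interest."""
        flat = window7x7.reshape(49)
        h = [sigmoid(w_full @ flat)]                        # neuron connected to all 49 inputs
        for (r, c), w in zip(AREAS, w_groups):
            patch = window7x7[r:r + 3, c:c + 3].reshape(9)  # one 3x3 area
            h.extend(sigmoid(w @ patch))                    # nine neurons for this area
        return sigmoid(w_out @ np.array(h))                 # single output neuron

    print(forward(rng.random((7, 7))))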
Abstract: Disclosed is a two-layer switched-current Hamming neural network system. The system includes: a matching-rate computation circuit, built from modules on a first layer, used to compute a matching rate between a to-be-identified pattern and each one of a plurality of standard patterns; a matching-rate comparison circuit on a second layer for ranking an order of the matching rates, comprising a switched-current order-ranking circuit that receives switched-current signals, finds a maximum value and outputs a time-division order-ranking output, and an identification-rejection judgment circuit that performs an absolute and a relative judgment; and a pulse-generating circuit for generating sequential clock pulses. The circuit construction of the Hamming network is simple and flexible owing to a modular design with extendible circuit dimensions, and high precision, improved performance and enhanced reliability of the network system are achieved.
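A digital approximation of what the two layers of such a Hamming network compute might look like the following Python sketch: the first layer scores the matching rate of the input pattern against each standard pattern, and the second layer ranks the rates and applies absolute and relative rejection tests. The switched-current analog circuitry itself is not modeled, and the judgment thresholds are assumptions.

    import numpy as np

    def hamming_layer(pattern, standards):
        """First layer: matching rate of the input against each standard (+/-1) pattern."""
        pattern = np.asarray(pattern)
        standards = np.asarray(standards)
        agreements = (standards == pattern).sum(axis=1)   # elements in agreement per standard
        return agreements / pattern.size                  # matching rate in [0, 1]

    def rank_and_judge(rates, abs_thresh=0.7, rel_thresh=0.1):
        """Second layer: rank the matching rates, then apply absolute and relative rejection."""
        order = np.argsort(rates)[::-1]                   # best match first
        best, runner_up = rates[order[0]], rates[order[1]]
        accepted = best >= abs_thresh and (best - runner_up) >= rel_thresh
        return order, accepted

    standards = [[1, -1, 1, -1, 1, 1], [1, 1, 1, -1, -1, -1], [-1, -1, -1, 1, 1, 1]]
    rates = hamming_layer([1, -1, 1, -1, 1, -1], standards)
    order, accepted = rank_and_judge(rates)
    print(rates, order, "accepted" if accepted else "rejected")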
Abstract: A general neural network based method and system for identifying peptide binding motifs from limited experimental data. In particular, an artificial neural network (ANN) is trained with peptides with known sequence and function (i.e., binding strength) identified from a phage display library. The ANN is then challenged with unknown peptides, and predicts relative binding motifs. Analysis of the unknown peptides validates the predictive capability of the ANN.
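One plausible, heavily simplified reading of the training-and-challenge procedure described above is sketched below in Python with scikit-learn; the one-hot peptide encoding, the toy binding strengths, and the small network are invented for illustration and are not the patented method.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def encode(peptide):
        """One-hot encode a fixed-length peptide (an assumed encoding, not the patent's)."""
        vec = np.zeros(len(peptide) * len(AMINO_ACIDS))
        for i, aa in enumerate(peptide):
            vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
        return vec

    # Toy training set: phage-display peptides with invented relative binding strengths.
    train_peptides = ["ACDEF", "ACDEY", "WYKLM", "WYKLA", "GHIKN", "GHIKF"]
    train_binding  = [0.9, 0.85, 0.2, 0.25, 0.55, 0.5]

    X = np.array([encode(p) for p in train_peptides])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, train_binding)

    # Challenge the trained network with unseen peptides and predict relative binding.
    for p in ["ACDEW", "WYKLY"]:
        print(p, float(net.predict(encode(p).reshape(1, -1))[0]))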
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks that do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
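To make the spreadsheet idea concrete, the following sketch lays out a single neuron the way cells and formulas might hold it, with ordinary Python standing in for the spreadsheet's recalculation engine; the cell layout and formulas are assumptions, not the patented arrangement.

    import math

    # A single neuron held as spreadsheet cells: inputs in A1:A3, weights in B1:B3,
    # the weighted sum in C1, and the activation in D1. Assumed cell formulas:
    #   C1: =SUMPRODUCT(A1:A3, B1:B3)
    #   D1: =TANH(C1)
    A = [0.5, -1.0, 0.25]                    # A1:A3  input cells
    B = [0.8, 0.3, -0.6]                     # B1:B3  weight cells
    C1 = sum(a * b for a, b in zip(A, B))    # =SUMPRODUCT(A1:A3, B1:B3)
    D1 = math.tanh(C1)                       # =TANH(C1)
    print(D1)
    # Cascading networks then amounts to pointing one sheet's input cells at another
    # sheet's output cells, which the spreadsheet recalculates automatically.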
Abstract: A real-time learning (RTL) neural network is capable of indicating when an input feature vector is novel with respect to feature vectors contained within its training data set, and is capable of learning to generate a correct response to a new data vector while maintaining correct responses to previously learned data vectors without requiring that the neural network be retrained on the previously learned data. The neural network has a sensor for inputting a feature vector, a first layer and a second layer. The feature vector is supplied to the first layer which may have one or more declared and unused nodes. During training, the input feature vector is clustered to a declared node only if it lies within a hypervolume defined by the declared node's automatically selectable reject radius, else the input feature vector is clustered to an unused node. Clustering in overlapping hypervolumes is determined by a decision surface.
Type:
Grant
Filed:
July 30, 1997
Date of Patent:
November 10, 1998
Assignee:
Martin Marietta Corporation
Inventors:
Herbert Duvoisin, III, Hal E. Beck, Joe R. Brown, Mark Bower
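The clustering rule stated in the abstract above (cluster to a declared node only if the input lies within the hypervolume defined by that node's reject radius, otherwise cluster to an unused node) can be sketched directly; the Euclidean distance metric, the fixed radius, and the class interface below are assumptions.

    import numpy as np

    class RTLFirstLayer:
        """Sketch of the first-layer clustering described in the abstract (details assumed)."""

        def __init__(self, default_radius=1.0):
            self.default_radius = default_radius
            self.centers = []   # declared nodes' centres
            self.radii = []     # each declared node's reject radius

        def present(self, x):
            """Cluster x to a declared node if it lies inside one; else declare a new node."""
            x = np.asarray(x, dtype=float)
            for i, (c, r) in enumerate(zip(self.centers, self.radii)):
                if np.linalg.norm(x - c) <= r:         # inside this node's hypervolume
                    return i, False                    # existing node, input is not novel
            self.centers.append(x)                     # otherwise take an unused node
            self.radii.append(self.default_radius)
            return len(self.centers) - 1, True         # new node, input flagged as novel

    layer = RTLFirstLayer(default_radius=0.5)
    for v in ([0.0, 0.0], [0.1, 0.2], [3.0, 3.0]):
        print(layer.present(v))   # the third vector is novel and declares a second node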
Abstract: An information recognition circuit comprises a plurality of recognition processing units, each composed of a neural network. Teacher signals and information signals to be processed are supplied to the units individually, so that output signals are obtained by executing individual learning. Thereafter, the units are connected to each other to construct a large-scale information recognition system. Further, in the man-machine interface system, a plurality of operating instruction data are prepared. An operator's face is sensed by a TV camera to extract factors related to the operator's facial expression. The neural network infers the operator's feelings on the basis of the extracted factors. In accordance with the inferred results, a specific operating instruction is selected from the plurality of operating instructions, and the selected instruction is displayed as an appropriate instruction for the operator.
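A minimal sketch of the modular idea in the first half of this abstract, assuming scikit-learn networks as the recognition processing units: each unit is trained individually on its own teacher signal and input slice, and the trained units are then connected through a combining network into a larger recognition system. The data, the unit count, and the combiner are invented for illustration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Toy data: 8 features, binary label; each recognition unit sees its own 4-feature slice.
    X = rng.random((200, 8))
    y = (X[:, 0] + X[:, 5] > 1.0).astype(int)      # invented teacher signal
    slices = [slice(0, 4), slice(4, 8)]

    # Step 1: each unit (its own small neural network) learns individually.
    units = []
    for s in slices:
        unit = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
        unit.fit(X[:, s], y)
        units.append(unit)

    # Step 2: connect the trained units into a larger recognition system by feeding
    # their individual outputs to a final combining network.
    unit_outputs = np.column_stack(
        [u.predict_proba(X[:, s])[:, 1] for u, s in zip(units, slices)])
    combiner = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
    combiner.fit(unit_outputs, y)
    print(combiner.score(unit_outputs, y))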
Abstract: A dynamically stable associative learning neural system includes a plurality of neural network architectural units. A neural network architectural unit has as input both conditioned stimuli and an unconditioned stimulus, an output neuron for accepting the input, and patch elements interposed between each input and the output neuron. Patches in the architectural unit can be modified, and new patches can be added. A neural network can be formed from a single unit, a layer of units, or multiple layers of units.
Type:
Grant
Filed:
February 24, 1995
Date of Patent:
October 13, 1998
Assignees:
The United States of America as represented by the Secretary of Health & Human Services, ERIM International, Inc.
Inventors:
Daniel L. Alkon, Thomas P. Vogl, Kim T. Blackwell, Garth S. Barbour
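A toy Python sketch of one such architectural unit follows, assuming a simple similarity measure and update rule that are not taken from the patent: patch elements sit between the conditioned-stimulus (CS) and unconditioned-stimulus (UCS) inputs and the output neuron, and a presented CS either modifies the best-matching patch or adds a new one.

    import numpy as np

    class AssociativeUnit:
        """One architectural unit: patches between the CS/UCS inputs and the output neuron."""

        def __init__(self, match_threshold=0.8, rate=0.2):
            self.match_threshold = match_threshold
            self.rate = rate
            self.patches = []   # each patch: (stored CS pattern, associated UCS value)

        def train(self, cs, ucs):
            """Modify the best-matching patch, or add a new patch if none matches well."""
            cs = np.asarray(cs, dtype=float)
            for i, (p_cs, p_ucs) in enumerate(self.patches):
                sim = 1.0 - np.abs(cs - p_cs).mean()           # assumed similarity measure
                if sim >= self.match_threshold:
                    new_cs = p_cs + self.rate * (cs - p_cs)    # move patch toward this CS
                    new_ucs = p_ucs + self.rate * (ucs - p_ucs)
                    self.patches[i] = (new_cs, new_ucs)
                    return
            self.patches.append((cs, float(ucs)))              # add a new patch

        def output(self, cs):
            """Output neuron: respond with the UCS value of the best-matching patch."""
            cs = np.asarray(cs, dtype=float)
            sims = [1.0 - np.abs(cs - p_cs).mean() for p_cs, _ in self.patches]
            return self.patches[int(np.argmax(sims))][1]

    unit = AssociativeUnit()
    unit.train([1, 0, 1, 0], ucs=1.0)
    unit.train([0, 1, 0, 1], ucs=0.0)
    print(unit.output([1, 0, 1, 1]))   # closest to the first patch -> 1.0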
Abstract: A continuous logic system using a neural network is characterized by defining input and output variables that do not use a membership function, by employing production rules (IF/THEN rules) that relate the output variables to the input variables, and by using the neural network to compute or interpolate the outputs. The neural network first learns the given production rules and then produces the outputs in real time. The neural network is constructed of artificial neurons each having only one significant processing element in the form of a multiplier. The neural network utilizes a training algorithm which does not require repetitive training and which yields a global minimum for each given set of input vectors.
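The rule-interpolation behaviour described above can be illustrated with a short sketch: production rules are stored as input/output pairs, and an output for a new input is produced by blending the rule outputs according to closeness to each rule. The multiplier-only neuron design and the non-repetitive training algorithm of the patent are not reproduced here; the rule points and the Gaussian weighting are assumptions.

    import numpy as np

    # Production rules given as (input vector, output) pairs,
    # e.g. temperature/humidity -> fan speed. The values are invented for illustration.
    rules_in  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    rules_out = np.array([ 0.0,        0.4,        0.6,        1.0      ])

    def interpolate(x, width=0.5):
        """Produce an output by blending the rule outputs, weighted by closeness to each rule."""
        x = np.asarray(x, dtype=float)
        d2 = ((rules_in - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * width ** 2))       # closeness to each production rule
        return float((w * rules_out).sum() / w.sum())

    print(interpolate([0.5, 0.5]))   # between the rules, so the output is interpolated
    print(interpolate([1.0, 1.0]))   # at a rule point, the output is close to that rule's value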
Abstract: A process is set forth in which cancer of the colon is assessed in a patient. Assessing the probability of developing cancer involves the initial step of extracting a set of sample body fluids from the patient. The fluids can be evaluated to determine certain marker constituents in the body fluids; the fluids which are extracted have some relationship to the development of cancer, precancer, or a tendency toward cancerous conditions. The body fluid markers are measured and otherwise quantified. The marker data are then evaluated using a nonlinear technique exemplified through the use of a multiple-input, multiple-output neural network having a variable learning rate and training rate. The neural network is provided with data from other patients for the same or similar markers. Data from other patients who did and did not have cancer are used in the learning of the neural network, which thereby processes the data and provides a determination that the patient has a cancerous condition, precancerous cells, or a tendency toward cancer.
Type:
Grant
Filed:
January 24, 1996
Date of Patent:
August 4, 1998
Inventors:
Gary L. Heseltine, Richard E. Warrington
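A hedged sketch of the kind of marker-based evaluation the abstract describes, using scikit-learn rather than the patented network: marker measurements from prior patients with known outcomes train a multiple-input network, which is then applied to a new patient's quantified markers. The marker values, outcome labels, and the adaptive learning-rate setting (standing in loosely for the abstract's variable learning rate) are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    # Invented marker measurements (e.g. concentrations of several body-fluid markers)
    # for previous patients, together with their known outcome:
    # 0 = no cancer, 1 = tendency toward cancer, 2 = precancerous cells, 3 = cancerous condition.
    X_prior = rng.random((120, 5))
    y_prior = np.minimum((X_prior[:, 0] * 2 + X_prior[:, 3] * 2).astype(int), 3)

    # Multiple-input network; an adaptive learning rate loosely mirrors the abstract's
    # variable learning rate, though the patent's exact training scheme is not reproduced.
    net = MLPClassifier(hidden_layer_sizes=(10,), learning_rate_init=0.01,
                        learning_rate="adaptive", solver="sgd",
                        max_iter=3000, random_state=0)
    net.fit(X_prior, y_prior)

    # Evaluate a new patient's quantified marker data.
    new_patient = rng.random((1, 5))
    print(net.predict(new_patient), net.predict_proba(new_patient))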