Architecture Patents (Class 706/27)
  • Patent number: 7729953
    Abstract: In an example embodiment, a method is provided. The method may comprise receiving an auction item identifier from a global positioning system (GPS) apparatus. Auction data associated with the auction item identifier is accessed and transmitted to a voice portal server. The voice portal server may call a telephone number and receive a request to acquire the auction item.
    Type: Grant
    Filed: October 11, 2007
    Date of Patent: June 1, 2010
    Assignee: eBay Inc.
    Inventor: Senthil K. Pandurangan
  • Patent number: 7707128
    Abstract: In a parallel pulse signal processing apparatus including a plurality of pulse output arithmetic elements (2), a plurality of connection elements (3) which parallelly connect predetermined arithmetic elements, and a gate circuit (5) which selectively passes pulse signals from the plurality of connection elements, the arithmetic element inputs a plurality of time series pulse signals, executes predetermined modulation processing on the basis of the plurality of time series pulse signals which are input, and outputs a pulse signal on the basis of a result of modulation processing, wherein the gate circuit selectively passes, of the signals from the plurality of connection elements, a finite number of pulse signals corresponding to predetermined upper output levels.
    Type: Grant
    Filed: March 16, 2005
    Date of Patent: April 27, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masakazu Matsugu
  • Patent number: 7698113
    Abstract: A method for self-controlled early detection and prediction of a performance shortage of an application is described. The method comprises the steps of monitoring at least one performance parameter of the application, storing performance data including a time dependency of said performance parameter, using said performance data to compute a mathematical function describing the time-dependent development of said performance parameter, using said mathematical function to compute a point in time when the performance parameter exceeds a certain threshold, and generating and outputting a prediction comprising information that a performance shortage of the application is expected at said computed point in time, if said point in time lies within a settable timeframe.
    Type: Grant
    Filed: May 11, 2006
    Date of Patent: April 13, 2010
    Assignee: International Business Machines Corporation
    Inventors: Torsten Steinbach, Michael Reichert
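The abstract above describes fitting a function to monitored samples and solving for the time at which it crosses a threshold. The following is a minimal sketch of that idea, assuming a simple least-squares line fit as the "mathematical function"; the patent does not specify the fit, and all names here are illustrative.

```python
def predict_shortage(samples, threshold, timeframe):
    """samples: list of (time, value) pairs; returns the predicted time at
    which the fitted trend crosses `threshold`, if that time falls within
    `timeframe` of the last sample, else None."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    if var == 0:
        return None
    # Ordinary least-squares line fit: value = slope * time + intercept.
    slope = sum((t - mean_t) * (v - mean_v) for t, v in samples) / var
    if slope <= 0:
        return None  # metric is flat or shrinking; no shortage predicted
    intercept = mean_v - slope * mean_t
    crossing = (threshold - intercept) / slope
    last_t = samples[-1][0]
    return crossing if last_t < crossing <= last_t + timeframe else None

# Memory use grows by 2 units per tick; threshold 100 is reached at t = 50.
samples = [(float(t), 2.0 * t) for t in range(10)]
print(predict_shortage(samples, threshold=100.0, timeframe=60.0))  # 50.0
```

With `timeframe=10.0` the same data would return `None`, since the predicted crossing at t = 50 lies beyond the settable window.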
  • Publication number: 20100076916
    Abstract: A hierarchical information processing system is disclosed having a plurality of artificial neurons, comprised of binary logic gates and interconnected through a second plurality of dynamic artificial synapses, intended to simulate or extend the function of a biological nervous system. The system is capable of approximation, autonomous learning, and strengthening of formerly learned input patterns. The system learns by simulated Synaptic Time Dependent Plasticity, commonly abbreviated to STDP. Each artificial neuron consists of a soma circuit and a plurality of synapse circuits, whereby the soma membrane potential, the soma threshold value, the synapse strength, and the Post Synaptic Potential at each synapse are expressed as values in binary registers, which are dynamically determined from certain aspects of input pulse timing, previous strength value, and output pulse feedback.
    Type: Application
    Filed: September 21, 2008
    Publication date: March 25, 2010
    Inventor: Peter AJ van der Made
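The abstract says synapse strengths live in binary registers updated from pulse timing. A rough sketch of a timing-dependent update of that kind is below; the register width, step size, and window are my assumptions, not values from the patent.

```python
def stdp_update(weight, dt, step=1, window=20, w_max=255, w_min=0):
    """One STDP-style update of an 8-bit weight register.
    dt = t_post - t_pre in ticks: a synapse firing shortly before the
    soma is strengthened, shortly after is weakened, else unchanged."""
    if 0 < dt <= window:        # pre before post: potentiate
        return min(w_max, weight + step)
    if -window <= dt < 0:       # post before pre: depress
        return max(w_min, weight - step)
    return weight               # outside the plasticity window

print(stdp_update(128, dt=5))    # 129
print(stdp_update(128, dt=-5))   # 127
print(stdp_update(128, dt=50))   # 128
```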
  • Patent number: 7660774
    Abstract: A system and method for fault detection is provided. The fault detection system provides the ability to detect symptoms of fault in turbine engines and other mechanical systems that have nonlinear relationships. The fault detection system uses a neural network to perform a data representation and feature extraction where the extracted features are analogous to principal components derived in a principal component analysis. This neural network data representation analysis can then be used to determine the likelihood of a fault in the system.
    Type: Grant
    Filed: June 24, 2005
    Date of Patent: February 9, 2010
    Assignee: Honeywell International Inc.
    Inventors: Joydeb Mukherjee, Sunil Menon, Venkataramana B. Kini, Dinkar Mylaraswamy
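The patent's neural network extracts features analogous to principal components and uses them to score fault likelihood. As an illustrative stand-in (not the patented network), this sketch scores a sample by its reconstruction error against the dominant principal component of healthy data, found by power iteration.

```python
def dominant_component(data, iters=50):
    """data: list of equal-length vectors; returns (means, unit vector)."""
    dim, n = len(data[0]), len(data)
    means = [sum(row[j] for row in data) / n for j in range(dim)]
    centred = [[row[j] - means[j] for j in range(dim)] for row in data]
    v = [1.0] * dim
    for _ in range(iters):
        # One power-iteration step: v <- C v / ||C v||, C the covariance.
        proj = [sum(r[k] * v[k] for k in range(dim)) for r in centred]
        w = [sum(centred[i][j] * proj[i] for i in range(n)) for j in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def fault_score(sample, means, v):
    """Distance of `sample` from the line spanned by `v`; higher means
    more anomalous relative to the healthy training data."""
    c = [sample[j] - means[j] for j in range(len(sample))]
    t = sum(cj * vj for cj, vj in zip(c, v))
    return sum((cj - t * vj) ** 2 for cj, vj in zip(c, v)) ** 0.5

# Healthy data lies on the line y = x; a point off that line scores high.
means, v = dominant_component([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(round(fault_score([0.0, 4.0], means, v), 3))  # 2.828
```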
  • Patent number: 7647287
    Abstract: A first total number of nodes in a first node set directly linked to a first node can be computed. A second total number of nodes in a second node set directly linked to a second node can be computed. A shared total number of nodes in a union of the first node set and the second node set can be computed. A mutual information metric can then be computed from the first total, the second total, and the shared total. A decision as to whether a new connection should be added between the first node and the second node, which were not previously directly connected, can be determined from the value of the mutual information metric.
    Type: Grant
    Filed: November 21, 2008
    Date of Patent: January 12, 2010
    Assignee: International Business Machines Corporation
    Inventors: Noam Slonim, Elad Yom-Tov
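The counting scheme above can be sketched as follows: the two neighbour-set sizes plus the size of their union yield an overlap count, from which a mutual-information-style score is formed. The patent does not give the exact metric; the pointwise-mutual-information form below is an illustrative stand-in, and all names are mine.

```python
import math

def mi_link_score(graph, a, b, n_total):
    """graph: dict node -> set of directly linked nodes;
    n_total: total number of nodes, used as the independence baseline."""
    first_total = len(graph[a])                  # |N(a)|
    second_total = len(graph[b])                 # |N(b)|
    union_total = len(graph[a] | graph[b])       # |N(a) U N(b)|
    shared = first_total + second_total - union_total  # overlap count
    if shared == 0:
        return float("-inf")
    # PMI: log of observed co-occurrence over the independence baseline.
    return math.log((shared * n_total) / (first_total * second_total))

def should_connect(graph, a, b, n_total, threshold=0.0):
    """Suggest a new edge when the score clears a (tunable) threshold."""
    return b not in graph[a] and mi_link_score(graph, a, b, n_total) > threshold

graph = {
    "a": {"x", "y", "z"},
    "b": {"x", "y", "w"},
    "c": {"w"},
}
print(should_connect(graph, "a", "b", n_total=7))  # strong overlap -> True
```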
  • Publication number: 20090240642
    Abstract: A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) or similar programmable device to model a large number of neural elements. The FPGAs can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model.
    Type: Application
    Filed: April 9, 2009
    Publication date: September 24, 2009
    Applicant: NEUROSCIENCES RESEARCH FOUNDATION, INC.
    Inventors: James A. Snook, Donald B. Hutson, Jeffrey L. Krichmar
  • Publication number: 20090228415
    Abstract: A method and system for training a connection network located between neuron layers within a multi-layer physical neural network. A multi-layer physical neural network can be formed having a plurality of inputs and a plurality of outputs thereof, wherein the multi-layer physical neural network comprises a plurality of layers, wherein each layer comprises one or more connection networks and associated neurons. Thereafter, a training wave can be initiated across the connection networks associated with an initial layer of the multi-layer physical neural network which propagates thereafter through succeeding connection networks of succeeding layers of the neural network by successively closing and opening switches associated with each layer. One or more feedback signals thereof can be automatically provided to strengthen or weaken nanoconnections associated with each connection network.
    Type: Application
    Filed: April 10, 2008
    Publication date: September 10, 2009
    Inventor: Alex Nugent
  • Patent number: 7574411
    Abstract: Management of a low-memory treelike data structure is described. The method according to the invention comprises steps for creating a decision tree including a parent node and at least one leaf node, and steps for searching data from said nodes. The nodes of the decision tree are stored sequentially in such a manner that child nodes follow their parent node in storage order, wherein the nodes refining the context of the searchable data can be reached without a link from their parent node. The method can preferably be utilized in speech-recognition systems, for example in text-to-phoneme mapping.
    Type: Grant
    Filed: April 29, 2004
    Date of Patent: August 11, 2009
    Assignee: Nokia Corporation
    Inventors: Janne Suontausta, Jilei Tian
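The sequential layout described above can be sketched in a few lines: nodes are stored in depth-first order so a node's children immediately follow it, and each entry keeps only its subtree size, letting a search skip over non-matching subtrees by offset instead of following child pointers. All names below are illustrative, not from the patent.

```python
def flatten(node):
    """node: (label, [children]); returns a flat list of (label, size),
    where size is the number of entries in that node's subtree."""
    label, children = node
    out = [(label, 0)]
    for child in children:
        out.extend(flatten(child))
    out[0] = (label, len(out))
    return out

def find(flat, path):
    """Follow `path` (a list of labels) from the root; returns the flat
    index of the final node, or None if the path does not exist."""
    i = 0
    for label in path:
        j, found = i + 1, None          # children start right after parent
        while j < i + flat[i][1]:       # scan this node's subtree span
            if flat[j][0] == label:
                found = j
                break
            j += flat[j][1]             # skip over a non-matching subtree
        if found is None:
            return None
        i = found
    return i

tree = ("root", [("a", [("a1", []), ("a2", [])]), ("b", [("b1", [])])])
flat = flatten(tree)
print(find(flat, ["b", "b1"]))  # 5 — index of "b1" in the flat array
```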
  • Publication number: 20090192958
    Abstract: A parallel processing device that computes a hierarchical neural network, the parallel processing device includes: a plurality of units that are identified by characteristic unit numbers, which are predetermined identification numbers, respectively; a distribution control section that, in response to an output value outputted from one of the plurality of units through a unit output bus being input as an input value, outputs control data, including the input value and a selection unit number that is an identification number to select one unit among the plurality of units, to the plurality of units through a unit input bus; and a common storage section that stores in advance coupling weights in a plurality of layers of the hierarchical neural network, the coupling weights being shared by plural ones of the plurality of units.
    Type: Application
    Filed: January 23, 2009
    Publication date: July 30, 2009
    Applicant: SEIKO EPSON CORPORATION
    Inventor: Masayoshi TODOROKIHARA
  • Publication number: 20090119236
    Abstract: A neural network comprising a plurality of neurons in which any one of the plurality of neurons is able to associate with itself or another neuron in the plurality of neurons via active connections to a further neuron in the plurality of neurons.
    Type: Application
    Filed: July 9, 2008
    Publication date: May 7, 2009
    Inventor: Robert George Hercus
  • Patent number: 7502769
    Abstract: Fractal memory systems and methods include a fractal tree that includes one or more fractal trunks. One or more object circuits are associated with the fractal tree. The object circuit(s) is configured from a plurality of nanotechnology-based components to provide a scalable distributed computing architecture for fractal computing. Additionally, a plurality of router circuits is associated with the fractal tree, wherein one or more fractal addresses output from a recognition circuit can be provided at a fractal trunk by the router circuits.
    Type: Grant
    Filed: November 7, 2005
    Date of Patent: March 10, 2009
    Assignee: Knowmtech, LLC
    Inventor: Alex Nugent
  • Patent number: 7463773
    Abstract: An initial search method uses the input image and the template to create an initial search result output. A high precision match uses the initial search result, the input image, and the template to create a high precision match result output. The high precision match method estimates high precision parameters by image interpolation and interpolation parameter optimization. The method also performs robust matching by limiting pixel contribution or pixel weighting. An invariant high precision match method estimates subpixel position and subsampling scale and rotation parameters by image interpolation and interpolation parameter optimization on the log-converted radial-angular transformation domain.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: December 9, 2008
    Assignee: DRVision Technologies LLC
    Inventors: Shih-Jong J. Lee, Seho Oh
  • Patent number: 7444308
    Abstract: The data mining platform comprises a plurality of system modules (500, 550), each formed from a plurality of components. Each module has an input data component (502, 552), a data analysis engine (504, 554) for processing the input data, an output data component (506, 556) for outputting the results of the data analysis, and a web server (510) to access and monitor the other modules within the unit and to provide communication to other units. Each module processes a different type of data, for example, a first module processes microarray (gene expression) data while a second module processes biomedical literature on the Internet for information supporting relationships between genes and diseases and gene functionality.
    Type: Grant
    Filed: June 17, 2002
    Date of Patent: October 28, 2008
    Assignee: Health Discovery Corporation
    Inventors: Isabelle Guyon, Edward P. Reiss, René Doursat, Jason Aaron Edward Weston
  • Publication number: 20080256009
    Abstract: Described is a system for temporal prediction. The system includes an extraction module, a mapping module, and a prediction module. The extraction module is configured to receive X(1), . . . , X(n) historical samples of a time series and utilize a genetic algorithm to extract deterministic features in the time series. The mapping module is configured to receive the deterministic features and utilize a learning algorithm to map the deterministic features to a predicted x̂(n+1) sample of the time series. Finally, the prediction module is configured to utilize a cascaded computing structure having k levels of prediction to generate a predicted x̂(n+k) sample. The predicted x̂(n+k) sample is a final temporal prediction for k future samples.
    Type: Application
    Filed: April 12, 2007
    Publication date: October 16, 2008
    Inventors: Qin Jiang, Narayan Srinivasa
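The cascaded structure above can be sketched as a one-step predictor applied k times, each level consuming the previous level's output as if it were an observed sample. The trivial trend-following predictor here is a placeholder for the patent's learned mapping; all names are mine.

```python
def one_step(history):
    """Toy one-step predictor: continue the last observed difference."""
    return history[-1] + (history[-1] - history[-2])

def cascade_predict(history, k):
    """Return the k-th future sample by chaining one-step predictions,
    feeding each level's output back in as the next level's input."""
    work = list(history)
    for _ in range(k):
        work.append(one_step(work))
    return work[-1]

# A linear series 0, 2, 4, 6 extends to 6 + 2*k.
print(cascade_predict([0, 2, 4, 6], k=3))  # 12
```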
  • Publication number: 20080201284
    Abstract: A computer-implemented model of the central nervous system includes at least one of a basal ganglia portion, a cerebral cortex portion coupled to the basal ganglia portion, or a cerebellum portion coupled to the cerebral cortex. Each one of the basal ganglia portion, the cerebral cortex portion, and the cerebellum portion is comprised of respective elements representative of real neuroanatomical structures of a central nervous system and the respective elements are adapted to perform functions representative of real neuroanatomical functions of the central nervous system. At least one of the basal ganglia portion, the cerebral cortex portion, or the cerebellum portion is adapted to control a plant.
    Type: Application
    Filed: August 15, 2006
    Publication date: August 21, 2008
    Inventors: Steven G. Massaquoi, Zhi-Hong Mao
  • Patent number: 7398260
    Abstract: An Effector machine is a new kind of computing machine. When implemented in hardware, the Effector machine can execute multiple instructions simultaneously because every one of its computing elements is active. This greatly enhances the computing speed. By executing a meta program whose instructions change the connections in a dynamic Effector machine, the Effector machine can perform tasks that digital computers are unable to compute.
    Type: Grant
    Filed: March 2, 2004
    Date of Patent: July 8, 2008
    Assignee: Fiske Software LLC
    Inventor: Michael Stephen Fiske
  • Patent number: 7392230
    Abstract: A physical neural network is disclosed, which comprises a liquid state machine. The physical neural network is configured from molecular connections located within a dielectric solvent between pre-synaptic and post-synaptic electrodes thereof, such that the molecular connections are strengthened or weakened according to an application of an electric field or a frequency thereof to provide physical neural network connections thereof. A supervised learning mechanism is associated with the liquid state machine, whereby connection strengths of the molecular connections are determined by pre-synaptic and post-synaptic activity respectively associated with the pre-synaptic and post-synaptic electrodes, wherein the liquid state machine comprises a dynamic fading memory mechanism.
    Type: Grant
    Filed: December 30, 2003
    Date of Patent: June 24, 2008
    Assignee: KnowmTech, LLC
    Inventor: Alex Nugent
  • Patent number: 7359888
    Abstract: A method for configuring nanoscale neural network circuits using molecular-junction-nanowire crossbars, and nanoscale neural networks produced by this method. Summing of weighted inputs within a neural-network node is implemented using variable-resistance resistors selectively configured at molecular-junction-nanowire-crossbar junctions. Thresholding functions for neural network nodes are implemented using pFET and nFET components selectively configured at molecular-junction-nanowire-crossbar junctions to provide an inverter. The output of one level of neural network nodes is directed, through selectively configured connections, to the resistor elements of a second level of neural network nodes via circuits created in the molecular-junction-nanowire crossbar. An arbitrary number of inputs, outputs, neural network node levels, nodes, weighting functions, and thresholding functions for any desired neural network are readily obtained by the methods of the present invention.
    Type: Grant
    Filed: January 31, 2003
    Date of Patent: April 15, 2008
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Greg Snider
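Per node, the crossbar described above realizes a weighted sum of inputs (through configured junction resistances) followed by a FET inverter acting as the threshold. The functional model below abstracts junctions to conductance numbers and the inverter to an active-low hard threshold; it is a sketch of the node behavior, not the patented circuit, and all names are illustrative.

```python
def crossbar_node(inputs, conductances, threshold):
    """Weighted sum via junction conductances, then an inverting hard
    threshold: output goes low when the summed current clears it."""
    summed = sum(x * g for x, g in zip(inputs, conductances))
    return 0 if summed >= threshold else 1

def crossbar_layer(inputs, conductance_rows, thresholds):
    """One level of nodes; its outputs feed the next level's resistors."""
    return [crossbar_node(inputs, row, t)
            for row, t in zip(conductance_rows, thresholds)]

# Two levels: the first node goes low only when both inputs are high
# (a NAND), and the single second-level node inverts it back.
level1 = crossbar_layer([1, 1], [[1.0, 1.0]], [2.0])
print(level1, crossbar_layer(level1, [[1.0]], [1.0]))  # [0] [1]
```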
  • Publication number: 20080065575
    Abstract: An evolutionary neural network and a method of generating such a neural network is disclosed. The evolutionary neural network comprises an input set consisting of at least one input neuron, said input neurons being adapted for receiving an input signal from an external system, an output set consisting of at least one output neuron, said output neurons being adapted for producing an output signal for said external system, an internal network composed of a plurality of internal neurons, each internal neuron being adapted for processing a signal received from at least one of said input neurons or other internal neurons and producing a signal for at least one of said output neurons or other internal neurons, and a plurality of synapses constituting connections between said neurons, each of said synapses having a value of strength that can be adjusted by a learning process.
    Type: Application
    Filed: August 30, 2007
    Publication date: March 13, 2008
    Inventors: Eors Szathmary, Zoltan Szatmary, Peter Ittzes, Szabolcs Szamado
  • Patent number: 7293002
    Abstract: A method for organizing processors to perform artificial neural network tasks is provided. The method provides a computer executable methodology for organizing processors in a self-organizing, data driven, learning hardware with local interconnections. A training data is processed substantially in parallel by the locally interconnected processors. The local processors determine local interconnections between the processors based on the training data. The local processors then determine, substantially in parallel, transformation functions and/or entropy based thresholds for the processors based on the training data.
    Type: Grant
    Filed: June 18, 2002
    Date of Patent: November 6, 2007
    Assignee: Ohio University
    Inventor: Janusz A. Starzyk
  • Patent number: 7254565
    Abstract: An improved Artificial Neural Network (ANN) is disclosed that comprises a conventional ANN, a database block, and a compare and update circuit. The conventional ANN is formed by a plurality of neurons, each neuron having a prototype memory dedicated to store a prototype and a distance evaluator to evaluate the distance between the input pattern presented to the ANN and the prototype stored therein. The database block has: all the prototypes arranged in slices, each slice being capable of storing up to a maximum number of prototypes; the input patterns or queries to be presented to the ANN; and the distances resulting from the evaluation performed during the recognition/classification phase. The compare and update circuit compares the distance with the distance previously found for the same input pattern and updates (or not) the previously stored distance.
    Type: Grant
    Filed: May 3, 2002
    Date of Patent: August 7, 2007
    Assignee: International Business Machines Corporation
    Inventors: Ghislain Imbert De Tremiolles, Pascal Tannhof
  • Patent number: 7221899
    Abstract: A customer support system for improving the skill of a person engaged in a business with reduced cost for technical training support and enhancing practical training. The customer support system includes a computer system for introducing training contents from a service providing side to a customer side through a computer network; an instructor training facility for training an instructor based on training contents; and a remote technique training support device for the instructor, for supporting technical training on the customer side, from the service providing side, through the computer network.
    Type: Grant
    Filed: January 30, 2003
    Date of Patent: May 22, 2007
    Assignees: Mitsubishi Denki Kabushiki Kaisha, Chugoku Electric Power Co., Inc.
    Inventors: Hiroaki Ohno, Mitsuo Katagiri, Isao Shiromaru, Noriaki Suginohara
  • Patent number: 7177743
    Abstract: A vehicle control system having an adaptive controller is provided that accomplishes unsupervised learning, such that no prior extensive training is needed for every situation. The controller system is based on a neural network evolved with a genetic algorithm. The genetic algorithm determines the parameters of the neurons, the connections between the neurons, and the associated weights to yield the best results. The genetic algorithm evaluates current candidate structures for accomplishing the desired result and develops new candidate structures by reproducing prior candidate structures with modifications, replacing the least-fit former candidate structures, until the system is well satisfied. The vehicle control system is well satisfied when the desired result is met or some failure condition is triggered for the vehicle control system, whereby the action is never repeated.
    Type: Grant
    Filed: June 2, 2004
    Date of Patent: February 13, 2007
    Assignee: Toyota Engineering & Manufacturing North America, Inc.
    Inventor: Rini Roy
  • Patent number: 7043466
    Abstract: The neural network processing system according to the present invention includes a memory circuit for storing neuron output values, connection weights, the desired values of outputs, and data necessary for learning; an input/output circuit for writing or reading data in or out of said memory circuit; a processing circuit for performing a processing for determining the neuron outputs such as the product, sum and nonlinear conversion of the data stored in said memory circuit, a comparison of the output value and its desired value, and a processing necessary for learning; and a control circuit for controlling the operations of said memory circuit, said input/output circuit and said processing circuit.
    Type: Grant
    Filed: December 20, 2000
    Date of Patent: May 9, 2006
    Assignee: Renesas Technology Corp.
    Inventors: Takao Watanabe, Katsutaka Kimura, Kiyoo Itoh, Yoshiki Kawajiri
  • Patent number: 7039619
    Abstract: An apparatus for maintaining components in a neural network formed utilizing nanotechnology is described herein. A connection gap can be formed between two terminals. A solution with a melting point at approximately room temperature can be provided, wherein the solution is maintained in the connection gap and comprises a plurality of nanoparticles forming nanoconnections having connection strengths thereof, wherein the solution and the connection gap are adapted for use with a neural network formed utilizing nanotechnology, such that when power is removed from the neural network, the solution freezes, thereby locking the connection strengths into place.
    Type: Grant
    Filed: January 31, 2005
    Date of Patent: May 2, 2006
    Assignee: Knowm Tech, LLC
    Inventor: Alex Nugent
  • Patent number: 7010513
    Abstract: The hardware of the present invention must be structured with the Brownian motion equation, Bayes' equation and its matrices as integral components. All data are input to a common bus bar. All data are then sent to all nodes simultaneously. Each node will have coded gates to admit the proper data to the appropriate matrix and Bayes' equation. Then, as the data are processed, they will be sent to a central data processing unit that integrates the data in the Brownian motion equation. The output is displayed in linguistic terms or in digital form by means of fuzzy logic.
    Type: Grant
    Filed: June 25, 2003
    Date of Patent: March 7, 2006
    Inventor: Raymond M. Tamura
  • Patent number: 6999953
    Abstract: An analog neural computing medium, neuron and neural networks are disclosed. The neural computing medium includes a phase change material that has the ability to cumulatively respond to multiple input signals. Input signals induce transformations among a plurality of accumulation states of the disclosed neural computing medium. The accumulation states are characterized by a high electrical resistance. Upon cumulative receipt of energy from one or more input signals that equals or exceeds a threshold value, the neural computing medium fires by transforming to a low resistance state. The disclosed neural computing medium may also be configured to perform a weighting function whereby it weights incoming signals. The disclosed neurons may also include activation units for further transforming signals transmitted by the accumulation units according to a mathematical operation. The artificial neurons, weighting units, accumulation units and activation units may be connected to form artificial neural networks.
    Type: Grant
    Filed: July 3, 2002
    Date of Patent: February 14, 2006
    Assignee: Energy Conversion Devices, Inc.
    Inventor: Stanford R. Ovshinsky
  • Patent number: 6983265
    Abstract: A method is described to improve the data transfer rate between a personal computer or a host computer and a neural network implemented in hardware by merging a plurality of input patterns into a single input pattern configured to globally represent the set of input patterns. A base consolidated vector (U′*n) representing the input pattern is defined to describe all the vectors (Un, . . . , Un+6) representing the input patterns derived thereof (U′n, . . . , U′n+6) by combining components having fixed and ‘don't care’ values. The base consolidated vector is provided only once with all the components of the vectors. An artificial neural network (ANN) is then configured as a combination of sub-networks operating in parallel. In order to compute the distances with an adequate number of components, the prototypes are to include also components having a definite value and ‘don't care’ conditions. During the learning phase, the consolidated vectors are stored as prototypes.
    Type: Grant
    Filed: December 10, 2002
    Date of Patent: January 3, 2006
    Assignee: International Business Machines Corporation
    Inventors: Pascal Tannhof, Ghislain Imbert de Tremiolles
  • Patent number: 6976013
    Abstract: System and method for performing one or more relevant measurements at a target site in an animal body, using a probe. One or more of a group of selected internal measurements is performed at the target site, is optionally combined with one or more selected external measurements, and is optionally combined with one or more selected heuristic information items, in order to reduce to a relatively small number the probable medical conditions associated with the target site. One or more of the internal measurements is optionally used to navigate the probe to the target site. Neural net information processing is performed to provide a reduced set of probable medical conditions associated with the target site.
    Type: Grant
    Filed: June 16, 2004
    Date of Patent: December 13, 2005
    Assignee: The United States of America as represented by the Administrator of the National Aeronautics and Space Administration
    Inventor: Robert W. Mah
  • Patent number: 6889217
    Abstract: An autonomous adaptive agent which can learn verbal as well as nonverbal behavior. The primary object of the system is to optimize a primary value function over time through continuously learning how to behave in an environment (which may be physical or electronic). Inputs may include verbal advice or information from sources of varying reliability as well as direct or preprocessed environmental inputs. Desired agent behavior may include motor actions and verbal behavior which may constitute a system output (and which may also function “internally” to guide external actions). A further aspect involves an efficient “training” process by which the agent can be taught to utilize verbal advice and information along with environmental inputs.
    Type: Grant
    Filed: May 10, 2004
    Date of Patent: May 3, 2005
    Inventor: William R. Hutchison
  • Patent number: 6862574
    Abstract: The customer segmentation software according to the present invention automatically finds or creates profiles of prototypical customers in a large e-commerce database. The software matches all existing customer data in the database to one or more of the prototypical customers. The resulting customer segmentation is an effective summarization of the database and is useful for a range of business applications. Applications of the customer segmentation system include the development of customized web sites, the creation of targeted promotional offers and the prediction of consumer behavior.
    Type: Grant
    Filed: July 27, 2000
    Date of Patent: March 1, 2005
    Assignee: NCR Corporation
    Inventors: Sreedhar Srikant, Ellen M. Boerger, Scott W. Cunningham
  • Patent number: 6832214
    Abstract: Disclosed is a system, method, and program for generating a compiler to map a code set to object code capable of being executed on an operating system platform. At least one neural network is trained to convert the code set to object code. The at least one trained neural network can then be used to convert the code set to the object code.
    Type: Grant
    Filed: December 7, 1999
    Date of Patent: December 14, 2004
    Assignee: International Business Machines Corporation
    Inventor: Chung T. Nguyen
  • Publication number: 20040193559
    Abstract: This invention provides an interconnecting neural network system capable of freely taking a network form for inputting a plurality of input vectors, and facilitating additional training of the artificial neural network structure. The artificial neural network structure is constructed by interconnecting RBF elements relating to each other, among all RBF elements, via a weight. Each RBF element outputs an excitation strength, according to a similarity between each input vector and a centroid vector based on a radial basis function, when the RBF element is excited by an input vector applied from outside, and outputs a pseudo excitation strength, obtained based on the excitation strength output from another RBF element, when the RBF element is excited in a chain reaction to the excitation of another neuron connected to it.
    Type: Application
    Filed: March 23, 2004
    Publication date: September 30, 2004
    Inventor: Tetsuya Hoya
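The RBF-element behavior described above can be sketched as follows: an element's direct excitation is a Gaussian radial basis function of the distance between the input and its centroid, and a pseudo excitation can arrive through weighted interconnections. The Gaussian form and the blending rule are my assumptions; the patent does not fix either.

```python
import math

def rbf_excitation(x, centroid, radius):
    """Excitation strength in (0, 1]: 1 when x equals the centroid,
    decaying with squared distance on the scale of `radius`."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, centroid))
    return math.exp(-d2 / (2.0 * radius ** 2))

def chained_excitation(direct, neighbour_strengths, weights):
    """Pseudo excitation: spread from connected elements through their
    interconnection weights, combined with the element's own direct
    excitation (the max-combination here is an illustrative choice)."""
    spread = sum(w * s for w, s in zip(weights, neighbour_strengths))
    return max(direct, spread)

print(rbf_excitation([1.0, 1.0], [1.0, 1.0], radius=0.5))  # 1.0
print(chained_excitation(0.2, [0.9], [0.5]))               # 0.45
```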
  • Patent number: 6799171
    Abstract: A neural network system including a plurality of tiers of interconnected computing elements. The plurality of tiers includes an input tier whereto a sequence of input speech vectors is applied at a first rate. Two of the plurality of tiers are interconnected through a decimator configured to reduce the first rate of the sequence of input vectors. Alternatively, two of the plurality of tiers are interconnected through an interpolator configured to increase the first rate of the sequence of input vectors.
    Type: Grant
    Filed: March 1, 2001
    Date of Patent: September 28, 2004
    Assignee: Swisscom AG
    Inventor: Robert Van Kommer
  • Patent number: 6782373
    Abstract: The method and circuits of the present invention aim to associate a norm to each component of an input pattern presented to an input space mapping algorithm based artificial neural network (ANN) during the distance evaluation process. The set of norms, referred to as the “component” norms is memorized in specific memorization means in the ANN. In a first embodiment, the ANN is provided with a global memory, common for all the neurons of the ANN, that memorizes all the component norms. For each component of the input pattern, all the neurons perform the elementary (or partial) distance calculation with the corresponding prototype components stored therein during the distance evaluation process using the associated component norm. The distance elementary calculations are then combined using a “distance” norm to determine the final distance between the input pattern and the prototypes stored in the neurons.
    Type: Grant
    Filed: July 12, 2001
    Date of Patent: August 24, 2004
    Assignee: International Business Machines Corporation
    Inventors: Ghislain Imbert de Tremiolles, Pascal Tannhof
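The two-level norm scheme, a per-component norm for each elementary distance followed by a global “distance” norm that combines them, can be sketched as follows. The `abs`/`match` component norms and the L1/Lsup combinations mirror common choices in this family of chips, but the concrete set of norms here is an assumption:

```python
def elementary(x, p, norm):
    """Elementary (per-component) distance under that component's own norm."""
    if norm == "abs":
        return abs(x - p)
    if norm == "match":  # 0 if the component matches the prototype, else 1
        return 0 if x == p else 1
    raise ValueError(norm)

def distance(pattern, prototype, component_norms, distance_norm="L1"):
    """Combine the elementary distances with a global 'distance' norm:
    L1 sums them, Lsup takes the maximum."""
    partials = [elementary(x, p, n)
                for x, p, n in zip(pattern, prototype, component_norms)]
    return max(partials) if distance_norm == "Lsup" else sum(partials)

print(distance([3, 1], [0, 1], ["abs", "match"]))  # 3 + 0 = 3
```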
  • Publication number: 20040162796
    Abstract: Methods and systems are disclosed herein in which a physical neural network can be configured utilizing nanotechnology. Such a physical neural network can comprise a plurality of molecular conductors (e.g., nanoconductors) which form neural connections between pre-synaptic and post-synaptic components of the physical neural network. Additionally, a learning mechanism can be applied for implementing Hebbian learning via the physical neural network. Such a learning mechanism can utilize a voltage gradient or voltage gradient dependencies to implement Hebbian and/or anti-Hebbian plasticity within the physical neural network. The learning mechanism can also utilize pre-synaptic and post-synaptic frequencies to provide Hebbian and/or anti-Hebbian learning within the physical neural network.
    Type: Application
    Filed: December 30, 2003
    Publication date: August 19, 2004
    Inventor: Alex Nugent
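The Hebbian/anti-Hebbian rule mentioned above, stated abstractly (the patent implements it physically via voltage gradients, not software), reduces to a weight change proportional to the product of pre- and post-synaptic activity:

```python
def hebbian_update(w, pre, post, eta=0.25, anti=False):
    """Hebbian (or anti-Hebbian) plasticity: the weight change is
    proportional to the product of pre- and post-synaptic activity."""
    dw = eta * pre * post
    return w - dw if anti else w + dw

w = hebbian_update(0.5, pre=1.0, post=1.0)           # strengthens to 0.75
w = hebbian_update(w, pre=1.0, post=1.0, anti=True)  # weakens back to 0.5
print(w)  # 0.5
```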
  • Publication number: 20040083193
    Abstract: An expandable neural network with on-chip back-propagation learning is provided in the present invention. The expandable neural network comprises at least one neuron array containing a plurality of neurons, at least one synapse array containing a plurality of synapses, and an error generator array containing a plurality of error generators. An improved Gilbert multiplier, whose output is a single-ended current, is provided in each synapse. The synapse array receives a voltage input and generates a summed current output and a summed neuron error. The summed current output is sent to the input of the neuron array, where the input current is transformed into a plurality of voltage outputs. These voltage outputs are sent to the error generator array for generating a weight error according to a control signal and a port signal.
    Type: Application
    Filed: October 29, 2002
    Publication date: April 29, 2004
    Inventors: Bingxue Shi, Chun Lu, Lu Chen
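Numerically, the synapse array's forward pass and the error generators' weight errors reduce to a dot product and a delta-times-input product. The sketch below is a software analogy of the analog circuit, with illustrative names; it is not the patented circuit itself:

```python
def synapse_forward(weights, voltages):
    """Summed current output of one synapse column: dot(weights, inputs)."""
    return sum(w * v for w, v in zip(weights, voltages))

def weight_errors(delta, voltages):
    """Back-propagation weight errors: each weight's error is the
    neuron's error signal times the input seen by that synapse."""
    return [delta * v for v in voltages]

print(synapse_forward([0.5, -1.0], [2.0, 1.0]))  # 0.0
print(weight_errors(0.1, [2.0, 1.0]))            # [0.2, 0.1]
```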
  • Publication number: 20040064425
    Abstract: A physics based neural network (PBNN) comprising a plurality of nodes each node comprising structure for receiving at least one input, and a transfer function for converting the at least one input into an output forming one of the at least one inputs to another one of the plurality of nodes, at least one training node set comprising the at least one input to one of the plurality of nodes, at least one input node set comprising the at least one input to the plurality of nodes, and a training algorithm for adjusting each of the plurality of nodes, wherein at least one of the transfer functions is different from at least one other of the transfer functions and wherein at least one of the plurality of nodes is a PBNN.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Hans R. Depold, David John Sirag
  • Publication number: 20040064426
    Abstract: A physics based neural network (PBNN) for validating data in a physical system, comprising: a plurality of input nodes, each receiving at least one input comprising an average measurement and a standard deviation measurement of a component of the physical system, and comprising a transfer function for converting the at least one input into an output; a plurality of intermediate nodes, each receiving at least one output from at least one of the plurality of input nodes and comprising a transfer function embedded with knowledge of the physical system for converting the at least one output into an intermediate output; and a plurality of output nodes, each receiving at least one intermediate output from the plurality of intermediate nodes and comprising a transfer function for outputting the average measurement of a component when the transfer function evaluates to a value greater than zero, wherein the PBNN is trained with a predetermined data set.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Hans R. Depold, David John Sirag
  • Publication number: 20040039717
    Abstract: A physical neural network synapse chip and a method for forming such a synapse chip. The synapse chip can be configured to include an input layer comprising a plurality of input electrodes and an output layer comprising a plurality of output electrodes, such that the output electrodes are located perpendicular to the input electrodes. A gap is generally formed between the input layer and the output layer. A solution can then be provided which is prepared from a plurality of nanoconductors and a dielectric solvent. The solution is located within the gap, such that an electric field is applied across the gap from the input layer to the output layer to form nanoconnections of a physical neural network implemented by the synapse chip. Such a gap can thus be configured as an electrode gap. The input electrodes can be configured as an array of input electrodes, while the output electrodes can be configured as an array of output electrodes.
    Type: Application
    Filed: August 22, 2002
    Publication date: February 26, 2004
    Inventor: Alex Nugent
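Electrically, a crossbar of nanoconnections computes a matrix-vector product: each output electrode sums the currents contributed by every input electrode through the conductance of their connection. The sketch below is a software analogy of that behavior; the actual device forms its connections in solution, and `G` here is just an assumed conductance matrix:

```python
def crossbar_output(conductances, input_voltages):
    """Currents on each output electrode of a crossbar synapse array:
    I_j = sum_i G[i][j] * V_i (Ohm's law plus Kirchhoff summation)."""
    n_out = len(conductances[0])
    return [sum(conductances[i][j] * input_voltages[i]
                for i in range(len(input_voltages)))
            for j in range(n_out)]

# 2 input electrodes x 3 output electrodes
G = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 2.0]]
print(crossbar_output(G, [1.0, 2.0]))  # [1.0, 1.5, 4.0]
```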
  • Patent number: 6618711
    Abstract: The invention herein provides a supervisory circuit which is adapted to monitor an input signal and produce as an output signal, a parametric signal corresponding to the input signal. The circuit includes an input for receiving the input signal, and a stochastic processor coupled to the input for receiving the input signal and processing it to derive a signal that represents a parametric measure of the input signal. An output connected to said stochastic processor provides the parametric output signal as an output for supervisory purposes.
    Type: Grant
    Filed: May 26, 1999
    Date of Patent: September 9, 2003
    Assignee: International Business Machines Corporation
    Inventor: Ravi S. Ananth
  • Patent number: 6578020
    Abstract: Disclosed is an integrated circuit method and system for generating a compiler to map a code set to object code capable of being executed on an operating system platform. The integrated circuit is encoded with logic including at least one neural network. The at least one neural network in the integrated circuit is trained to convert the code set to object code. The at least one trained neural network is then used to convert the code set to object code.
    Type: Grant
    Filed: December 7, 1999
    Date of Patent: June 10, 2003
    Assignee: International Business Machines Corporation
    Inventor: Chung T. Nguyen
  • Patent number: 6576951
    Abstract: Quantum computing systems and methods that use opposite magnetic moment states read the state of a qubit by applying current through the qubit and measuring a Hall effect voltage across the width of the current. For reading, the qubit is grounded to freeze the magnetic moment state, and the applied current is limited to pulses incapable of flipping the magnetic moment. Measurement of the Hall effect voltage can be achieved with an electrode system that is capacitively coupled to the qubit. An insulator or tunnel barrier isolates the electrode system from the qubit during quantum computing. The electrode system can include a pair of electrodes for each qubit. A readout control system uses a voltmeter or other measurement device that connects to the electrode system, a current source, and grounding circuits. For a multi-qubit system, selection logic can select which qubit or qubits are read.
    Type: Grant
    Filed: July 12, 2002
    Date of Patent: June 10, 2003
    Assignee: D-Wave Systems, Inc.
    Inventors: Zdravko Ivanov, Alexander Tzalentchuk, Jeremy P. Hilton, Alexander Maassen van den Brink
  • Patent number: 6560582
    Abstract: A dynamic memory processor for time variant pattern recognition and input data dimensionality reduction is provided, having a multi-layer harmonic neural network and a classifier network. The multi-layer harmonic neural network receives a fused feature vector of the pattern to be recognized from a neural sensor and generates output vectors which aid in discrimination between similar patterns. The fused feature vector and each output vector are separately provided to corresponding positional king-of-the-mountain (PKOM) circuits within the classifier network. Each PKOM circuit generates a positional output vector in which only one element has a value of one, that element corresponding to the element of its input vector making the highest contribution. The positional output vectors are mapped into a multidimensional memory space and read by a recognition vector array which generates a plurality of recognition vectors.
    Type: Grant
    Filed: January 5, 2000
    Date of Patent: May 6, 2003
    Assignee: The United States of America as represented by the Secretary of the Navy
    Inventor: Roger L. Woodall
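A PKOM circuit is essentially a one-hot winner-take-all: it emits a positional vector with a single 1 at the index of its input's largest element. A minimal sketch (the function name is illustrative):

```python
def pkom(vector):
    """Positional king-of-the-mountain: one-hot vector marking the
    element with the highest contribution (first winner on ties)."""
    winner = max(range(len(vector)), key=lambda i: vector[i])
    return [1 if i == winner else 0 for i in range(len(vector))]

print(pkom([0.2, 0.9, 0.1]))  # [0, 1, 0]
```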
  • Patent number: 6553357
    Abstract: The noise associated with conventional techniques for evolutionary improvement of neural network architectures is reduced, so that an optimum architecture can be determined more efficiently and more effectively. Parameters that affect the initialization of a neural network architecture are included within the encoding that is used by an evolutionary algorithm to optimize the neural network architecture. The example initialization parameters include an encoding that determines the initial nodal weights used in each architecture at the commencement of the training cycle. By including the initialization parameters within the encoding used by the evolutionary algorithm, the initialization parameters that have a positive effect on the performance of the resultant evolved network architecture are propagated and potentially improved from generation to generation. Conversely, initialization parameters that, for example, cause the resultant evolved network to be poorly trained will not be propagated.
    Type: Grant
    Filed: September 1, 1999
    Date of Patent: April 22, 2003
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Keith E. Mathias, Larry J. Eshelman, J. David Schaffer
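The key idea, carrying initialization parameters inside the evolved encoding so that good starting points propagate across generations, can be illustrated with a genome whose fields deterministically generate the initial weights. The field names below are assumptions, not the patent's encoding:

```python
import random

def init_weights(genome):
    """Initial nodal weights are derived from parameters carried in the
    genome itself, so a good initialisation is inherited, and can be
    mutated, just like any other evolved parameter."""
    rng = random.Random(genome["init_seed"])
    scale = genome["init_scale"]
    return [rng.uniform(-scale, scale) for _ in range(genome["n_weights"])]

g = {"init_seed": 42, "init_scale": 0.5, "n_weights": 4}
w1, w2 = init_weights(g), init_weights(g)
print(w1 == w2)  # True: same genome, same deterministic starting point
```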
  • Publication number: 20030055799
    Abstract: A method for organizing processors to perform artificial neural network tasks is provided. The method provides a computer-executable methodology for organizing processors into self-organizing, data-driven, learning hardware with local interconnections. Training data is processed substantially in parallel by the locally interconnected processors. The local processors determine local interconnections between the processors based on the training data. The local processors then determine, substantially in parallel, transformation functions and/or entropy-based thresholds for the processors based on the training data.
    Type: Application
    Filed: June 18, 2002
    Publication date: March 20, 2003
    Inventor: Janusz A. Starzyk
  • Patent number: 6523018
    Abstract: The neural semiconductor chip includes: a global register and control logic circuit block, an R/W memory block, and a plurality of neurons fed by buses transporting data, such as the input vector data and set-up parameters, and signals, such as the feedback and control signals. The R/W memory block, typically a RAM, is common to all neurons to avoid circuit duplication, thereby increasing the number of neurons integrated in the chip. The R/W memory stores the prototype components. Each neuron comprises a computation block, a register block, an evaluation block and a daisy chain block to chain the neurons. All these blocks (except the computation block) have a symmetric structure and are designed so that each neuron may operate in a dual manner, i.e. either as a single neuron (single mode) or as two independent neurons (dual mode). Each neuron generates local signals.
    Type: Grant
    Filed: December 22, 1999
    Date of Patent: February 18, 2003
    Assignee: International Business Machines Corporation
    Inventors: Didier Louis, Pascal Tannhof, André Steimle
  • Patent number: 6516309
    Abstract: A method of evolving a neural network that includes a plurality of processing elements interconnected by a plurality of weighted connections includes the step of obtaining a definition for the neural network by evolving a plurality of weights for the plurality of weighted connections, and evolving a plurality of activation function parameters associated with the plurality of processing elements. Another step of the method includes determining whether the definition for the neural network may be simplified based upon at least one activation function parameter of the plurality of activation function parameters. Yet another step of the method includes updating the definition for the neural network in response to determining that the definition for the neural network may be simplified. The method utilizes particle swarm optimization techniques to evolve the plurality of weights and the plurality of activation parameters.
    Type: Grant
    Filed: July 14, 1999
    Date of Patent: February 4, 2003
    Assignee: Advanced Research & Technology Institute
    Inventors: Russell C. Eberhart, Yuhui Shi
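A bare-bones particle swarm optimizer over a real-valued vector (which could hold both connection weights and activation-function parameters, as in the method above) might look like the sketch below. The hyperparameters and toy fitness function are illustrative, not the patented method:

```python
import random

def pso(fitness, dim, n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` over a `dim`-dimensional real vector with a
    basic particle swarm: inertia w, cognitive c1, social c2."""
    random.seed(0)  # deterministic for the demo
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy "network": two weights driven toward the target (0.3, -0.7).
best, err = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.7) ** 2, dim=2)
```

With the encoding extended by per-node activation parameters (as the patent describes), the same update loop evolves both the weights and the activation functions, and near-zero activation parameters can signal that a node is removable, allowing the network definition to be simplified.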
  • Patent number: 6502083
    Abstract: The improved neuron is connected to input buses which transport input data and control signals. It basically consists of a computation block, a register block, an evaluation block and a daisy chain block. All these blocks, except the computation block, have a substantially symmetric construction. Registers are used to store data: the local norm and context, the distance, the AIF value and the category. The improved neuron further needs some R/W memory capacity, which may be placed either in the neuron or outside. The evaluation circuit is connected to an output bus to generate global signals thereon. The daisy chain block allows the improved neuron to be chained with others to form an artificial neural network (ANN). The improved neuron may work either as a single neuron (single mode) or as two independent neurons (dual mode). In the latter case, the computation block, which is common to the two dual neurons, must operate sequentially to service one neuron after the other.
    Type: Grant
    Filed: December 22, 1999
    Date of Patent: December 31, 2002
    Assignee: International Business Machines Corporation
    Inventors: Didier Louis, Pascal Tannhof, Andre Steimle