Multilayer Feedforward Patents (Class 706/31)
  • Patent number: 10997497
    Abstract: A device includes a first divider circuit connected to a first data lane and configured to receive a first data lane value having a first index, to receive a second index corresponding to a second data lane value from a second data lane, and to selectively output a first adding value or the first data lane value based on whether the first index is equal to the second index. The device also includes a first adder circuit connected to the second data lane and the first divider circuit and configured to receive the first adding value from the first divider circuit, to receive the second data lane value, and to add the first adding value to the second data lane value to generate an addition result.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: May 4, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Jin-ook Song
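The selective-forwarding behavior the abstract describes can be sketched behaviorally. This is an illustrative software model of the claimed circuits, not the hardware design; all function and parameter names are made up:

```python
def divider_select(first_value, first_index, second_index, adding_value):
    # The "divider" outputs the adding value when the two lane indices
    # match, and otherwise passes the first lane's own value through.
    return adding_value if first_index == second_index else first_value

def adder(selected_value, second_value):
    # The "adder" sums the divider's output with the second lane's value.
    return selected_value + second_value
```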
  • Patent number: 10943582
    Abstract: A method and apparatus for training an acoustic feature extracting model, a device, and a computer storage medium are provided. The method comprises: taking first acoustic features, extracted respectively from speech data corresponding to user identifiers, as training data; training an initial model based on a deep neural network under a minimum-classification-error criterion until a preset first stop condition is reached; and replacing the Softmax layer in the initial model with a triplet loss layer to constitute an acoustic feature extracting model, then continuing to train that model until a preset second stop condition is reached. The acoustic feature extracting model is used to output a second acoustic feature of the speech data. The triplet loss layer is used to maximize similarity between the second acoustic features of the same user, and to minimize similarity between the second acoustic features of different users.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: March 9, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Bing Jiang, Xiaokong Ma, Chao Li, Xiangang Li
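The triplet objective the abstract describes can be sketched minimally. Cosine similarity and the margin value are assumptions here; the patent does not fix the similarity measure:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Encourage the anchor to be more similar to the positive (same
    # speaker) than to the negative (different speaker) by `margin`.
    return max(0.0, cosine_similarity(anchor, negative)
                    - cosine_similarity(anchor, positive) + margin)
```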
  • Patent number: 10929057
    Abstract: Provided are techniques for selecting a disconnect from different types of channel disconnects using a machine learning module. An Input/Output (I/O) operation is received from a host via a channel. Inputs are provided to a machine learning module. An output is received from the machine learning module. Based on the output, one of no disconnect from the channel, a logical disconnect from the channel, or a physical disconnect from the channel is selected.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth A. Peterson, Lokesh M. Gupta, Matthew R. Craig, Kevin J. Ash
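The three-way selection could be sketched as a simple mapping from the machine learning module's output to an action. The scalar output and the thresholds below are purely illustrative; the patent does not describe the module's output format:

```python
def select_disconnect(output):
    # Map a model output score in [0, 1] to one of the three actions
    # named in the abstract (threshold values are made up).
    if output < 0.33:
        return "no disconnect"
    if output < 0.66:
        return "logical disconnect"
    return "physical disconnect"
```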
  • Patent number: 10929749
    Abstract: An apparatus to facilitate optimization of a neural network (NN) is disclosed. The apparatus includes optimization logic to define a NN topology having one or more macro layers, adjust the one or more macro layers to adapt to input and output components of the NN, and train the NN based on the one or more macro layers.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: February 23, 2021
    Assignee: INTEL CORPORATION
    Inventors: Narayan Srinivasa, Joydeep Ray, Nicolas C. Galoppo Von Borries, Ben Ashbaugh, Prasoonkumar Surti, Feng Chen, Barath Lakshmanan, Elmoustapha Ould-Ahmed-Vall, Liwei Ma, Linda L. Hurd, Abhishek R. Appu, John C. Weast, Sara S. Baghsorkhi, Justin E. Gottschlich, Chandrasekaran Sakthivel, Farshad Akhbari, Dukhwan Kim, Altug Koker, Nadathur Rajagopalan Satish
  • Patent number: 10922610
    Abstract: Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Adam Procter, Vikram Saletore, Deepthi Karkada, Meenakshi Arunachalam
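One plausible reading of the policy update, sketched in Python: after timing a training window, add a synchronization barrier for any layer whose blockage time exceeded a budget, so that layer's parameter sync is deferred in the next window. The budget and decision rule are assumptions, not the patented method:

```python
def update_barrier_policy(blockage_times, budget):
    # Layers whose measured synchronization blockage exceeded the budget
    # in the first window get a barrier: their parameter synchronization
    # is impeded during the second window.
    return {layer for layer, t in blockage_times.items() if t > budget}
```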
  • Patent number: 10885424
    Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons to which the noruen's corresponding neuron sends its axon. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon, through a synapse, to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Patent number: 10817783
    Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: October 27, 2020
    Assignee: Facebook, Inc.
    Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
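The delta-update idea can be illustrated with a sparse diff over flattened weights, under the assumption that only weights changed beyond a threshold are shipped to the endpoint (a sketch, not Facebook's implementation):

```python
def delta_update(old_weights, new_weights, threshold=1e-6):
    # Record only the weights that changed meaningfully, as
    # (index, new_value) pairs -- a compact delta to send to endpoints.
    return [(i, new)
            for i, (old, new) in enumerate(zip(old_weights, new_weights))
            if abs(new - old) > threshold]

def apply_delta(weights, delta):
    # Endpoint side: patch the local weights with the received delta.
    weights = list(weights)
    for i, v in delta:
        weights[i] = v
    return weights
```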
  • Patent number: 10719613
    Abstract: The disclosed computer-implemented method may include (i) identifying a neural network that comprises an interconnected set of nodes organized in a set of layers represented by a plurality of matrices that each comprise a plurality of weights, where each weight represents a connection between a node in the interconnected set of nodes that resides in one layer in the set of layers and an additional node in the set of interconnected nodes that resides in a different layer in the set of layers, (ii) encrypting, using an encryption cipher, the plurality of weights, (iii) detecting that execution of the neural network has been initiated, and (iv) decrypting, using the encryption cipher, the plurality of weights in response to detecting that the execution of the neural network has been initiated. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: July 21, 2020
    Assignee: Facebook, Inc.
    Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Roman Levenstein
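A toy illustration of encrypting weights at rest and decrypting them when execution begins. The patent does not name a cipher, so the SHA-256 counter-mode keystream below is a stand-in, not the claimed scheme:

```python
import hashlib
import struct

def _keystream(key: bytes, n: int) -> bytes:
    # Counter-mode keystream derived from SHA-256 (illustrative cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_weights(weights, key: bytes) -> bytes:
    # Serialize the weights as doubles, then XOR with the keystream.
    raw = struct.pack(f"{len(weights)}d", *weights)
    ks = _keystream(key, len(raw))
    return bytes(a ^ b for a, b in zip(raw, ks))

def decrypt_weights(blob: bytes, key: bytes):
    # XOR is self-inverse, so decryption reuses the same keystream.
    raw = bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))
    return list(struct.unpack(f"{len(raw) // 8}d", raw))
```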
  • Patent number: 10699190
    Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 4, 2018
    Date of Patent: June 30, 2020
    Assignee: Facebook, Inc.
    Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
  • Patent number: 10459959
    Abstract: Methods and apparatus for performing top-k query processing include pruning a list of documents to identify a subset of the list of documents, where pruning includes, for a query term in the set of query terms, skipping a document in the list of documents based, at least in part, on the contribution of the query term to the score of the corresponding document and on the term upper bound for each other query term, in the set of query terms, that matches the document.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: October 29, 2019
    Assignee: Oath Inc.
    Inventors: David Carmel, Guy Gueta, Edward Bortnikov
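The skipping condition resembles WAND-style upper-bound pruning, which can be sketched as follows (an interpretation of the abstract, not the claimed method verbatim):

```python
def can_skip(partial_score, remaining_upper_bounds, kth_best_score):
    # Skip the document if its score so far, plus the best it could
    # still gain from the remaining query terms' upper bounds, cannot
    # beat the current k-th best score in the top-k heap.
    return partial_score + sum(remaining_upper_bounds) <= kth_best_score
```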
  • Patent number: 10460237
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished using many simple processing units (called neurons) and the data embodied by the connections between neurons (called synapses) and the strength of these connections (called synaptic weights). An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to code the synaptic weight. In this application, the non-idealities in the response of the NVM (such as nonlinearity, saturation, stochasticity and asymmetry in response to programming pulses) lead to reduced network performance compared to an ideal network implementation. Disclosed is a method that improves performance by implementing a learning rate parameter that is local to each synaptic connection, a method for tuning this local learning rate, and an implementation that does not compromise the ability to train many synaptic weights in parallel during learning.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: October 29, 2019
    Assignee: International Business Machines Corporation
    Inventors: Irem Boybat Kara, Geoffrey Burr, Carmelo di Nolfo, Robert Shelby
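A delta-bar-delta-style rule is one common way to realize a learning rate that is local to each synaptic connection: grow the rate while successive gradients agree in sign, shrink it when they flip. The sketch below illustrates only that idea; it is not the patent's tuning method:

```python
def update_synapses(weights, grads_prev, grads_now, rates,
                    rate_up=1.1, rate_down=0.5):
    # Each synapse carries its own learning rate, adapted from the sign
    # agreement of its two most recent gradients.
    new_w, new_r = [], []
    for w, gp, gn, r in zip(weights, grads_prev, grads_now, rates):
        r = r * rate_up if gp * gn > 0 else r * rate_down
        new_w.append(w - r * gn)
        new_r.append(r)
    return new_w, new_r
```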
  • Patent number: 10452540
    Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: October 22, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
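The write-address translation can be sketched as a base-plus-offset mapping from memory-map addresses into a network's input locations. The class name and the mapping scheme are assumptions, shown only to make the idea concrete:

```python
class MemoryMappedNetwork:
    """Translate memory-map write addresses into input locations of a
    destination network (behavioral sketch, not the patented hardware)."""

    def __init__(self, base_address, num_inputs):
        self.base = base_address
        self.inputs = [0] * num_inputs

    def write(self, address, data):
        # Translate the map address into an input slot of the network.
        slot = address - self.base
        if not 0 <= slot < len(self.inputs):
            raise ValueError("address outside the mapped input region")
        self.inputs[slot] = data
```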
  • Patent number: 10417563
    Abstract: An intelligent control system based on an explicit model of cognitive development (Table 1) performs high-level functions. It comprises up to O hierarchically stacked neural networks, Nm, . . . , Nm+(O-1), where m denotes the stage/order of tasks performed in the first neural network, Nm, and O denotes the highest stage/order of tasks performed in the highest-level neural network. The type of processing actions performed in a network, Nm, corresponds to the complexity for stage/order m. Thus N1 performs tasks at the level corresponding to stage/order 1, and N5 processes information at the level corresponding to stage/order 5. Stacked neural networks may begin and end at any stage/order, but information must be processed by each stage in ascending sequence; stages/orders cannot be skipped. Each neural network in a stack may use different architectures, interconnections, algorithms, and training methods, depending on the stage/order of the neural network and the type of intelligent control system implemented.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: September 17, 2019
    Inventors: Michael Lamport Commons, Mitzi Sturgeon White
  • Patent number: 10410119
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory. One of the methods includes providing an output derived from the neural network output for the time step as a system output for the time step; maintaining a current state of the external memory; determining, from the neural network output for the time step, memory state parameters for the time step; updating the current state of the external memory using the memory state parameters for the time step; reading data from the external memory in accordance with the updated state of the external memory; and combining the data read from the external memory with a system input for the next time step to generate the neural network input for the next time step.
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: September 10, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Edward Thomas Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Philip Blunsom
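One simplified time step of the memory read/update cycle described above, with a soft (weighted) read over memory cells. The append-style update and the normalized read weights are illustrative assumptions, not DeepMind's claimed mechanism:

```python
def memory_step(memory, write_value, read_weights):
    # Update the external memory with this step's written value, then
    # read back a weighted sum over all cells using the controller's
    # (assumed normalized) read weights.
    memory = memory + [write_value]
    read = sum(w * m for w, m in zip(read_weights, memory))
    return memory, read
```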
  • Patent number: 9658260
    Abstract: A power system grid is decomposed into several parts and decomposed state estimation steps are executed separately, on each part, using coordinated feedback regarding a boundary state. The achieved solution is the same that would be achieved with a simultaneous state estimation approach. With the disclosed approach, the state estimation problem can be distributed among decomposed estimation operations for each subsystem and a coordinating operation that yields the complete state estimate. The approach is particularly suited for estimating the state of power systems that are naturally decomposed into separate subsystems, such as separate AC and HVDC systems, and/or into separate transmission and distribution systems.
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: May 23, 2017
    Assignee: ABB SCHWEIZ AG
    Inventors: Xiaoming Feng, Vaibhav Donde, Ernst Scholtz
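The coordinated-feedback loop can be sketched abstractly: solve each subsystem's estimate separately against a shared boundary state, then coordinate and repeat. Averaging the subsystem solutions stands in for the real coordination operation, and the fixed iteration count is an assumption:

```python
def decomposed_estimate(solve_subsystem, subsystems, boundary, iters=30):
    # Each pass estimates every subsystem's state independently, then
    # feeds an updated boundary state back until the decomposed
    # solutions agree at the boundary.
    states = []
    for _ in range(iters):
        states = [solve_subsystem(s, boundary) for s in subsystems]
        boundary = sum(states) / len(states)
    return boundary, states
```

With a toy solver `(s + b) / 2`, two subsystems at 0 and 2 converge to a boundary state of 1.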
  • Patent number: 9563842
    Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons to which the noruen's corresponding neuron sends its axon. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon, through a synapse, to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: February 7, 2017
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Patent number: 9489623
    Abstract: Apparatus and methods for developing robotic controllers comprising parallel networks. In some implementations, a parallel network may comprise at least first and second neuron layers. The second layer may be configured to determine a measure of discrepancy (error) between a target network output and actual network output. The network output may comprise control signal configured to cause a task execution by the robot. The error may be communicated back to the first neuron layer in order to adjust efficacy of input connections into the first layer. The error may be encoded into spike latency using linear or nonlinear encoding. Error communication and control signal provision may be time multiplexed so as to enable target action execution. Efficacy associated with forward and backward/reverse connections may be stored in individual arrays. A synchronization mechanism may be employed to match forward/reverse efficacy in order to implement plasticity.
    Type: Grant
    Filed: October 15, 2013
    Date of Patent: November 8, 2016
    Assignee: BRAIN CORPORATION
    Inventors: Oleg Sinyavskiy, Vadim Polonichko
  • Patent number: 9189731
    Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons to which the noruen's corresponding neuron sends its axon. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon, through a synapse, to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: November 17, 2015
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Patent number: 9183495
    Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens; each noruen is interconnected via the interconnect network with those neurons to which the noruen's corresponding neuron sends its axon. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon, through a synapse, to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: November 10, 2015
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Patent number: 8892485
    Abstract: Certain embodiments of the present disclosure support implementation of a neural processor with synaptic weights, wherein training of the synapse weights is based on encouraging a specific output neuron to generate a spike. The implemented neural processor can be applied for classification of images and other patterns.
    Type: Grant
    Filed: July 8, 2010
    Date of Patent: November 18, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Vladimir Aparin, Jeffrey A. Levin
  • Publication number: 20140180989
    Abstract: A parallel convolutional neural network (CNN) is provided. The CNN is implemented by a plurality of convolutional neural networks, each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers are interconnected between processing nodes such that activations are fed forward across nodes; the remaining subset is not so interconnected.
    Type: Application
    Filed: September 18, 2013
    Publication date: June 26, 2014
    Applicant: Google Inc.
    Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
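The tower structure can be sketched with per-node layer stacks that exchange activations only at designated layer indices. Averaging the activations stands in for the real cross-node feed-forward, and all names are illustrative:

```python
def forward_parallel(inputs_per_node, layers_per_node, sync_layers):
    # Run one network "tower" per processing node. After each layer
    # whose index is in sync_layers, activations cross node boundaries;
    # all other layers keep their activations local to the node.
    states = list(inputs_per_node)
    for li in range(len(layers_per_node[0])):
        states = [layers_per_node[ni][li](s) for ni, s in enumerate(states)]
        if li in sync_layers:
            merged = sum(states) / len(states)
            states = [merged] * len(states)
    return states
```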
  • Patent number: 8712942
    Abstract: An active element machine is a new kind of computing machine. When implemented in hardware, the active element machine can execute multiple instructions simultaneously, because every one of its computing elements is active. This greatly enhances the computing speed. By executing a meta program whose instructions change the connections in a dynamic active element machine, the active element machine can perform tasks that a digital computer is unable to compute. In an embodiment, instructions in a computer language are translated into instructions in a register machine language. The instructions in the register machine language are translated into active element machine instructions. In an embodiment, an active element machine may be programmed using instructions for a register machine. The active element machine is not limited to these embodiments.
    Type: Grant
    Filed: April 24, 2007
    Date of Patent: April 29, 2014
    Assignee: AEMEA Inc.
    Inventor: Michael Stephen Fiske
  • Patent number: 8527542
    Abstract: User-generated input may be received to initiate generation of a message associated with an incident of a computing system that has a multi-layer architecture and requires support. Thereafter, context data associated with one or more operational parameters may be collected from each of at least two of the layers of the computing system. A message may then be generated based on at least a portion of the user-generated input and at least a portion of the collected context data. Related apparatuses, methods, computer program products, and computer systems are also described.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: September 3, 2013
    Assignee: SAP AG
    Inventors: Tilmann Haeberle, Lilia Kotchanovskaia, Zoltan Nagy, Berthold Wocher, Juergen Subat
  • Publication number: 20130212053
    Abstract: A feature extraction device according to the present invention includes a neural network comprising neurons, each including at least one expressed gene: an attribute value that determines whether transmission of a signal from one of the first neurons to one of the second neurons is possible. Each first neuron, which has input data derived from the target data to be subjected to feature extraction, outputs a first signal value to the corresponding second neuron(s) having the same expressed gene as the first neuron, the first signal value increasing as the value of the input data increases. Each second neuron calculates, as a feature quantity of the target data, a second signal value corresponding to the total sum of the first signal values input to it.
    Type: Application
    Filed: October 18, 2011
    Publication date: August 15, 2013
    Inventors: Takeshi Yagi, Takashi Kitsukawa
  • Patent number: 8468109
    Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
    Type: Grant
    Filed: December 28, 2011
    Date of Patent: June 18, 2013
    Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
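Selecting a degree of parallelization from hardware resources can be sketched as a feasibility search. The selection criteria below (even divisibility and a hardware budget) are assumptions used only to illustrate the idea:

```python
def choose_parallelization(num_neurons, multipliers, available_units):
    # Pick the largest degree of parallelization (neurons processed per
    # cycle) that divides the layer evenly and fits the hardware budget;
    # fall back to fully serial (degree 1) if none is feasible.
    feasible = [m for m in multipliers
                if num_neurons % m == 0 and m <= available_units]
    return max(feasible) if feasible else 1
```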
  • Publication number: 20120166374
    Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
    Type: Application
    Filed: December 28, 2011
    Publication date: June 28, 2012
    Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
  • Patent number: 8190542
    Abstract: A neural network includes neurons and wires adapted for connecting the neurons. Some of the wires comprise multiple input connections and exactly one output connection, and/or some of the wires comprise exactly one input connection and multiple output connections. Neurons are hierarchically arranged in groups. A lower group of neurons recognizes a pattern of information input to the neurons of this lower group; a higher group of neurons recognizes higher-level patterns. A strength value is associated with a connection between different neurons. The strength value of a particular connection is indicative of the likelihood that information input to the neurons propagates via that connection. The strength value of each connection is modifiable based on the amount of traffic of information that is input to the neurons and propagates via the particular connection, and/or based on a strength modification impulse.
    Type: Grant
    Filed: September 27, 2006
    Date of Patent: May 29, 2012
    Assignee: ComDys Holding B.V.
    Inventor: Eugen Oetringer
  • Patent number: 8121817
    Abstract: A process control system for detecting abnormal events in a process having one or more independent variables and one or more dependent variables. The system includes a device for measuring values of the one or more independent and dependent variables, a process controller having a predictive model for calculating predicted values of the one or more dependent variables from the measured values of the one or more independent variables, a calculator for calculating residual values for the one or more dependent variables from the difference between the predicted and measured values of the one or more dependent variables, and an analyzer for performing a principal component analysis on the residual values. The process controller is a multivariable predictive control means, and the principal component analysis results in the output of one or more score values, T2 values, and Q values.
    Type: Grant
    Filed: October 16, 2007
    Date of Patent: February 21, 2012
    Assignee: BP Oil International Limited
    Inventors: Keith Landells, Zaid Rawi
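The residual-then-PCA pipeline can be sketched as below, with the principal components and their variances assumed to come from a prior PCA of residuals collected under normal operation (a sketch of the T2 statistic, not the patented system):

```python
def residuals(measured, predicted):
    # Difference between measured and model-predicted dependent variables.
    return [m - p for m, p in zip(measured, predicted)]

def t2_statistic(residual, components, variances):
    # Hotelling T^2 of a residual vector: project onto each principal
    # component (a unit vector) and sum the variance-scaled squared
    # scores. Large T^2 flags an abnormal event.
    t2 = 0.0
    for comp, var in zip(components, variances):
        score = sum(r * c for r, c in zip(residual, comp))
        t2 += score * score / var
    return t2
```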
  • Patent number: 8103606
    Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
    Type: Grant
    Filed: December 10, 2007
    Date of Patent: January 24, 2012
    Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
  • Patent number: 8065022
    Abstract: Embodiments of the invention can include methods and systems for controlling clearances in a turbine. In one embodiment, a method can include applying at least one operating parameter as an input to at least one neural network model, modeling via the neural network model a thermal expansion of at least one turbine component, and taking a control action based at least in part on the modeled thermal expansion of the one or more turbine components. An example system can include a controller operable to determine and apply the operating parameters as inputs to the neural network model, model thermal expansion via the neural network model, and generate a control action based at least in part on the modeled thermal expansion.
    Type: Grant
    Filed: January 8, 2008
    Date of Patent: November 22, 2011
    Assignee: General Electric Company
    Inventors: Karl Dean Minto, Jianbo Zhang, Erhan Karaca
  • Patent number: 8015130
    Abstract: In a hierarchical neural network having a module structure, the learning necessary for detection of a new feature class is executed, by presenting a predetermined pattern to a data input layer, in a processing module that has not yet finished learning and that includes a plurality of neurons which should learn an unlearned feature class and have an undetermined receptor field structure. Thus, a feature class necessary for subject recognition can be learned automatically and efficiently.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: September 6, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masakazu Matsugu, Katsuhiko Mori, Mie Ishii, Yusuke Mitarai
  • Patent number: 7979370
    Abstract: A system for information searching includes a first layer and a second layer. The first layer includes a first plurality of neurons, each associated with a word and with a first set of dynamic connections to at least some of the first plurality of neurons. The second layer includes a second plurality of neurons, each associated with a document and with a second set of dynamic connections to at least some of the first plurality of neurons. The first set of dynamic connections and the second set of dynamic connections can be configured such that a query of at least one neuron of the first plurality of neurons excites at least one neuron of the second plurality of neurons. The excited at least one neuron of the second plurality of neurons can be contextually related to the queried at least one neuron of the first plurality of neurons.
    Type: Grant
    Filed: January 29, 2009
    Date of Patent: July 12, 2011
    Assignee: Dranias Development LLC
    Inventor: Alexander V. Ershov
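Spreading activation from queried word neurons to document neurons might look like the sketch below. The connection weights, the thresholding rule, and all names are illustrative assumptions:

```python
def query_documents(query_words, word_doc_connections, threshold=0.5):
    # Accumulate excitation in document neurons from each queried word
    # neuron's weighted connections; documents whose total excitation
    # crosses the threshold are considered "excited" (retrieved).
    excitation = {}
    for word in query_words:
        for doc, weight in word_doc_connections.get(word, []):
            excitation[doc] = excitation.get(doc, 0.0) + weight
    return {doc for doc, e in excitation.items() if e >= threshold}
```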
  • Patent number: 7831416
    Abstract: A method is provided for designing a product. The method may include obtaining data records relating to one or more input variables and one or more output parameters associated with the product; and pre-processing the data records based on characteristics of the input variables. The method may also include selecting one or more input parameters from the one or more input variables; and generating a computational model indicative of interrelationships between the one or more input parameters and the one or more output parameters based on the data records. Further, the method may include providing a set of constraints to the computational model representative of a compliance state for the product; and using the computational model and the provided set of constraints to generate statistical distributions for the one or more input parameters and the one or more output parameters, wherein the one or more input parameters and the one or more output parameters represent a design for the product.
    Type: Grant
    Filed: July 17, 2007
    Date of Patent: November 9, 2010
    Assignee: Caterpillar Inc.
    Inventors: Anthony J. Grichnik, Michael Seskin, Amit Jayachandran
  • Patent number: 7788196
    Abstract: An artificial neural network comprises at least one input layer with a predetermined number of input nodes, at least one output layer with a predetermined number of output nodes, and optionally at least one intermediate hidden layer with a predetermined number of nodes between the input and output layers. At least the nodes of the output layer and/or of the hidden layer, and possibly also of the input layer, carry out a non-linear transformation of a first non-linear transformation of the input data, computing an output value that is fed as an input value to the following layer, or that constitutes the output data if the output layer is considered.
    Type: Grant
    Filed: August 24, 2004
    Date of Patent: August 31, 2010
    Assignee: Semeion
    Inventor: Paolo Massimo Buscema
  • Patent number: 7743004
    Abstract: A pulse signal processing circuit, a parallel processing circuit, and a pattern recognition system including a plurality of arithmetic elements for outputting pulse signals and at least one modulation circuit, synaptic connection element(s), or synaptic connection means for modulating the pulse signals, the modulated pulse signals then being separately or exclusively output to corresponding signal lines.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: June 22, 2010
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masakazu Matsugu
  • Publication number: 20100088263
    Abstract: A method is described for computer-aided learning of a neural network with a plurality of neurons, in which the neurons are divided into at least two layers: a first layer and a second layer cross-linked with the first. In the first layer, input information is represented by one or more characteristic values of one or several characteristics, where every characteristic value comprises one or more neurons of the first layer. A plurality of categories is stored in the second layer, where every category comprises one or more neurons of the second layer. For one or several pieces of input information, at least one category in the second layer is assigned to the characteristic values of that input information in the first layer.
    Type: Application
    Filed: September 20, 2006
    Publication date: April 8, 2010
    Inventors: Gustavo Deco, Martin Stetter, Miruna Szabo
  • Patent number: 7496548
    Abstract: A system, method and computer program product for information searching includes (a) a first layer with a first plurality of neurons, each of the first plurality of neurons being associated with a word and with a set of connections to at least some neurons of the first layer; (b) a second layer with a second plurality of neurons, each of the second plurality of neurons being associated with an object and with a set of connections to at least some neurons of the second layer, and with a set of connections to some neurons of the first layer; (c) a third layer with a third plurality of neurons, each of the third plurality of neurons being associated with a sentence and with a set of connections to at least some neurons of the third layer, and with a set of connections to at least some neurons of the first layer and to at least some neurons of the second layer; and (d) a fourth layer with a fourth plurality of neurons, each of the fourth plurality of neurons being associated with a document and with a set of connections …
    Type: Grant
    Filed: August 29, 2006
    Date of Patent: February 24, 2009
    Assignee: Quintura, Inc.
    Inventor: Alexander V. Ershov
  • Publication number: 20080319933
    Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
    Type: Application
    Filed: December 10, 2007
    Publication date: December 25, 2008
    Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
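The "variable degree of parallelization" idea above can be sketched in software by splitting one layer's output neurons into independent chunks; the result is identical for any degree, only the concurrency changes. Sizes and weights below are illustrative, and threads stand in for the hardware parallelism the publication targets.

```python
from concurrent.futures import ThreadPoolExecutor

def neuron_chunk(weights_chunk, inputs):
    # Weighted sums for one chunk of output neurons (no activation, for brevity).
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights_chunk]

def layer_forward(weights, inputs, degree=2):
    # Split the output neurons into `degree` chunks computed independently.
    chunk = (len(weights) + degree - 1) // degree
    parts = [weights[i:i + chunk] for i in range(0, len(weights), chunk)]
    with ThreadPoolExecutor(max_workers=degree) as pool:
        results = pool.map(neuron_chunk, parts, [inputs] * len(parts))
    return [v for part in results for v in part]

W = [[1, 0], [0, 1], [1, 1], [2, -1]]      # 4 output neurons, 2 inputs
print(layer_forward(W, [3, 4], degree=2))  # same answer for any degree
```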
  • Publication number: 20080319934
    Abstract: A neural network (100) comprising a plurality of neurons (101 to 106) and a plurality of wires (109) adapted for connecting the plurality of neurons (101 to 106), wherein at least a part of the plurality of wires (109) comprises a plurality of input connections and exactly one output connection.
    Type: Application
    Filed: September 27, 2006
    Publication date: December 25, 2008
    Inventor: Eugen Oetringer
  • Patent number: 7409374
    Abstract: A method for discriminating between explosive events having their origins in High Explosive or Chemical/Biological detonation employing multiresolution analysis provided by a discrete wavelet transform. Original signatures of explosive events are broken down into subband components thereby removing higher frequency noise features and creating two sets of coefficients at varying levels of decomposition. These coefficients are obtained each time the signal is passed through a lowpass and highpass filter bank whose impulse response is derived from Daubechies db5 wavelet. Distinct features are obtained through the process of isolating the details of the high oscillatory components of the signature. The ratio of energy contained within the details at varying levels of decomposition is sufficient to discriminate between explosive events such as High Explosive and Chemical/Biological.
    Type: Grant
    Filed: August 22, 2005
    Date of Patent: August 5, 2008
    Assignee: The United States of America as represented by the Secretary of the Army
    Inventors: Myron Hohil, Sashi V. Desai
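The discrimination step above rests on the ratio of energy in the detail coefficients at each decomposition level. A minimal sketch follows, using a Haar filter bank as a simple stand-in for the Daubechies db5 wavelet named in the abstract; the signals are made up for illustration.

```python
import math

def haar_step(signal):
    # One analysis step of a Haar filter bank (stand-in for db5):
    # lowpass approximations and highpass details, downsampled by two.
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def detail_energy_ratios(signal, levels=2):
    # Energy of the detail coefficients at each decomposition level,
    # normalised by total signal energy.
    total = sum(x * x for x in signal) or 1.0
    ratios, a = [], signal
    for _ in range(levels):
        a, d = haar_step(a)
        ratios.append(sum(x * x for x in d) / total)
    return ratios

# A spiky signature puts far more energy into level-1 details than a
# smooth one -- the kind of feature used to separate event classes.
spiky = [0, 5, 0, -5, 0, 5, 0, -5]
smooth = [0, 1, 2, 3, 3, 2, 1, 0]
print(detail_energy_ratios(spiky), detail_energy_ratios(smooth))
```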
  • Patent number: 7395248
    Abstract: The invention concerns a method for determining competing risks for objects following an initial event, based on previously measured or otherwise objectifiable training data patterns, in which several signals obtained from a learning-capable system are combined in an objective function such that the learning-capable system is rendered capable of detecting or forecasting the underlying probabilities of each of the competing risks.
    Type: Grant
    Filed: December 7, 2001
    Date of Patent: July 1, 2008
    Inventors: Ronald E. Kates, Nadia Harbeck
  • Patent number: 7392231
    Abstract: A user's preference structure in respect of alternative “objects” with which the user is presented is captured in a multi-attribute utility function. The user ranks these competing objects in order of the user's relative preference for such objects. A utility function that defines the user's preference structure is provided as output on the basis of this relative ranking. This technique can be used to assist a buyer in selecting between multi-attribute quotes or bids submitted by prospective suppliers to the buyer.
    Type: Grant
    Filed: December 3, 2002
    Date of Patent: June 24, 2008
    Assignee: International Business Machines Corporation
    Inventors: Jayanta Basak, Manish Gupta
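The multi-attribute utility idea above can be sketched as a weighted sum over attributes, with the weights chosen so that scores agree with the buyer's relative ranking. The attribute names, weights, and supplier data below are hypothetical, and the additive form is only one possible shape for such a utility function.

```python
def utility(obj, weights):
    # Additive multi-attribute utility: weighted sum over attribute values.
    return sum(weights[k] * v for k, v in obj.items())

# Hypothetical quotes: price is negated so that cheaper scores higher.
quotes = {
    "supplier_a": {"price": -100, "quality": 8, "delivery": 5},
    "supplier_b": {"price": -120, "quality": 9, "delivery": 9},
}
weights = {"price": 0.01, "quality": 0.5, "delivery": 0.3}
best = max(quotes, key=lambda name: utility(quotes[name], weights))
print(best)
```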
  • Patent number: 7293002
    Abstract: A method for organizing processors to perform artificial neural network tasks is provided. The method provides a computer executable methodology for organizing processors in a self-organizing, data driven, learning hardware with local interconnections. A training data is processed substantially in parallel by the locally interconnected processors. The local processors determine local interconnections between the processors based on the training data. The local processors then determine, substantially in parallel, transformation functions and/or entropy based thresholds for the processors based on the training data.
    Type: Grant
    Filed: June 18, 2002
    Date of Patent: November 6, 2007
    Assignee: Ohio University
    Inventor: Janusz A. Starzyk
  • Patent number: 7143072
    Abstract: A neural network having layers of neurons divided into sublayers of neurons. The values of target neurons in one layer are calculated from sublayers of source neurons in a second, underlying layer. The same group of weights can therefore always be used for this calculation, multiplied by the respective related source neurons in the underlying layer of the neural network.
    Type: Grant
    Filed: September 26, 2002
    Date of Patent: November 28, 2006
    Assignee: CSEM Centre Suisse d'Electronique et de Microtechnique SA
    Inventors: Jean-Marc Masgonty, Philippe Vuilleumier, Peter Masa, Christian Piguet
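Reusing one group of weights across all source sublayers, as this abstract describes, is in modern terms a shared-weight (convolution-like) layer. A minimal sketch, with illustrative weights and inputs:

```python
def shared_weight_layer(source, weights):
    # Every target neuron applies the SAME weight group to its own
    # window of related source neurons (a 1-D convolution, no padding).
    k = len(weights)
    return [sum(w * source[i + j] for j, w in enumerate(weights))
            for i in range(len(source) - k + 1)]

targets = shared_weight_layer([1, 2, 3, 4, 5], [0.5, 0.25, 0.25])
print(targets)  # three target neurons from five source neurons
```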
  • Patent number: 7092922
    Abstract: An adaptive learning method for automated maintenance of a neural net model is provided. The neural net model is trained with an initial set of training data. Partial products of the trained model are stored. When new training data are available, the trained model is updated by using the stored partial products and the new training data to compute weights for the updated model.
    Type: Grant
    Filed: May 21, 2004
    Date of Patent: August 15, 2006
    Assignee: Computer Associates Think, Inc.
    Inventors: Zhuo Meng, Baofu Duan, Yoh-Han Pao
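The stored-partial-products idea above can be sketched for a linear (output-layer) model: keep the normal-equation accumulators XᵀX and Xᵀy, so new training data only extends the sums instead of forcing full retraining. The 2×2 direct solve is a toy illustration; the patent's neural-net details differ.

```python
def accumulate(XtX, Xty, X, y):
    # Fold new (input row, target) pairs into the stored partial products.
    n = len(XtX)
    for row, target in zip(X, y):
        for i in range(n):
            Xty[i] += row[i] * target
            for j in range(n):
                XtX[i][j] += row[i] * row[j]
    return XtX, Xty

def solve2(XtX, Xty):
    # Direct 2x2 solve via Cramer's rule (fine for this toy dimension).
    det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
    w0 = (Xty[0] * XtX[1][1] - Xty[1] * XtX[0][1]) / det
    w1 = (XtX[0][0] * Xty[1] - XtX[1][0] * Xty[0]) / det
    return [w0, w1]

# Initial training data for the target rule y = 2*x0 + 3*x1.
XtX, Xty = [[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0]
accumulate(XtX, Xty, [[1, 0], [0, 1]], [2, 3])
# New data arrives: only the stored partial products are updated.
accumulate(XtX, Xty, [[1, 1], [2, 1]], [5, 7])
print(solve2(XtX, Xty))
```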
  • Patent number: 7080055
    Abstract: Methods and apparatuses for backlash compensation. A dynamics inversion compensation scheme is designed for control of nonlinear discrete-time systems with input backlash. The techniques of this disclosure extend the dynamic inversion technique to discrete-time systems by using a filtered prediction, and show how to use a neural network (NN) for inverting the backlash nonlinearity in the feedforward path. The techniques provide a general procedure for using an NN to determine the dynamics preinverse of an invertible discrete-time dynamical system. A discrete-time tuning algorithm is given for the NN weights so that the backlash compensation scheme guarantees bounded tracking and backlash errors, and also bounded parameter estimates. A rigorous proof of stability and performance is given, and a simulation example verifies performance. Unlike standard discrete-time adaptive control techniques, no certainty equivalence (CE) or linear-in-the-parameters (LIP) assumptions are needed.
    Type: Grant
    Filed: October 2, 2001
    Date of Patent: July 18, 2006
    Inventors: Javier Campos, Frank L. Lewis
  • Patent number: 7054850
    Abstract: A pattern detecting apparatus has a plurality of hierarchized neuron elements to detect a predetermined pattern included in input patterns. Pulse signals output from the plurality of neuron elements are given specific delays by synapse circuits associated with the individual elements. This makes it possible to transmit the pulse signals to the neuron elements of the succeeding layer through a common bus line so that they can be identified on a time base. The neuron elements of the succeeding layer output the pulse signals at output levels based on an arrival-time pattern of the plurality of pulse signals received from the plurality of neuron elements of the preceding layer within a predetermined time window. Thus, the reliability of pattern detection can be improved, and the number of wires interconnecting the elements can be reduced by the use of the common bus line, leading to a small circuit scale and reduced power consumption.
    Type: Grant
    Filed: June 12, 2001
    Date of Patent: May 30, 2006
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masakazu Matsugu
  • Patent number: 6876989
    Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and to provide weight change data information to the weight updating circuit.
    Type: Grant
    Filed: February 13, 2002
    Date of Patent: April 5, 2005
    Assignee: Winbond Electronics Corporation
    Inventors: Bingxue Shi, Chun Lu, Lu Chen
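The abstract's neuron circuit emits both the activation and its first derivative, which is exactly what the error back-propagation network needs to produce weight-change data. A software sketch with a sigmoid unit (whose derivative is a·(1−a)) and a single delta-rule update; all values are illustrative:

```python
import math

def neuron(inputs, weights, bias):
    # Return both the activation and its first derivative,
    # mirroring the dual outputs of the patent's neuron circuit.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    a = 1.0 / (1.0 + math.exp(-z))
    return a, a * (1.0 - a)

def weight_update(inputs, weights, bias, target, lr=0.5):
    # One gradient step for a single output neuron (delta rule):
    # the derivative from the neuron feeds the weight-change computation.
    a, da = neuron(inputs, weights, bias)
    delta = (a - target) * da
    new_w = [w - lr * delta * x for w, x in zip(weights, inputs)]
    return new_w, bias - lr * delta

w, b = weight_update([1.0, 0.5], [0.2, -0.1], 0.0, target=1.0)
print(w, b)
```

After the update, the neuron's activation on the same inputs moves toward the target, which is the behavior the weight-updating circuit is there to produce.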
  • Patent number: 6856983
    Abstract: A method and system are described that adaptively adjust an eService management system using feedback control. Behavior experts are distributed at different levels of the hierarchy of the eService management system. Within the hierarchy, feed-forward reasoning is performed from lower-level behavior experts to higher-level behavior experts. A method for identifying bottlenecks is described and utilized. The performance of these behavior experts is compared with various objective functions, and the discrepancies are used to adjust the system.
    Type: Grant
    Filed: October 26, 2001
    Date of Patent: February 15, 2005
    Assignee: Panacya, Inc.
    Inventors: Earl D. Cox, Xindong Wang, Shi-Yue Qiu
  • Patent number: 6826550
    Abstract: Provided is a compiler to map application program code to object code capable of being executed on an operating system platform. A first neural network module is trained to generate characteristic output based on input information describing attributes of the application program. A second neural network module is trained to receive as input the application program code and the characteristic output and, in response, generate object code. The first and second neural network modules are used to convert the application program code to object code.
    Type: Grant
    Filed: December 15, 2000
    Date of Patent: November 30, 2004
    Assignee: International Business Machines Corporation
    Inventors: Michael Wayne Brown, Chung Tien Nguyen