Multilayer Feedforward Patents (Class 706/31)
-
Patent number: 10997497
Abstract: A device includes a first divider circuit connected to a first data lane and configured to receive a first data lane value having a first index, to receive a second index corresponding to a second data lane value from a second data lane, and to selectively output a first adding value or the first data lane value based on whether the first index is equal to the second index; and a first adder circuit connected to the second data lane and the first divider circuit and configured to receive the first adding value from the first divider circuit, to receive the second data lane value, and to add the first adding value to the second data lane value to generate an addition result.
Type: Grant
Filed: May 16, 2018
Date of Patent: May 4, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jin-ook Song
-
Patent number: 10943582
Abstract: A method and apparatus for training an acoustic-feature-extracting model, a device, and a computer storage medium. The method comprises: taking first acoustic features, extracted from speech data corresponding to respective user identifiers, as training data; training an initial model based on a deep neural network under a minimum-classification-error criterion until a preset first stop condition is reached; and replacing the Softmax layer in the initial model with a triplet loss layer to constitute the acoustic-feature-extracting model, then continuing to train that model until a preset second stop condition is reached, the model being used to output a second acoustic feature of the speech data. The triplet loss layer serves to maximize similarity between the second acoustic features of the same user and minimize similarity between the second acoustic features of different users.
Type: Grant
Filed: May 14, 2018
Date of Patent: March 9, 2021
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Bing Jiang, Xiaokong Ma, Chao Li, Xiangang Li
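The triplet loss objective described in this abstract can be illustrated with a minimal pure-Python sketch (the function name, margin value, and toy embeddings below are illustrative choices, not taken from the patent): the loss is zero once same-speaker embeddings are closer together than different-speaker embeddings by at least a margin.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors.

    Pushes the anchor-positive (same speaker) distance below the
    anchor-negative (different speaker) distance by at least `margin`.
    """
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)

# Embeddings for two utterances of speaker A and one of speaker B.
a1 = [1.0, 0.0]
a2 = [0.9, 0.1]
b1 = [0.0, 1.0]

loss_good = triplet_loss(a1, a2, b1)  # well separated: zero loss
loss_bad = triplet_loss(a1, b1, a2)   # speakers confused: positive loss
```

Training on such triplets pulls embeddings of the same user together without needing a fixed set of output classes, which is why the triplet layer can replace the Softmax layer once initial classification training has converged.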
-
Patent number: 10929057
Abstract: Provided are techniques for selecting a disconnect from different types of channel disconnects using a machine learning module. An Input/Output (I/O) operation is received from a host via a channel. Inputs are provided to a machine learning module. An output is received from the machine learning module. Based on the output, one of no disconnect from the channel, a logical disconnect from the channel, or a physical disconnect from the channel is selected.
Type: Grant
Filed: February 7, 2019
Date of Patent: February 23, 2021
Assignee: International Business Machines Corporation
Inventors: Beth A. Peterson, Lokesh M. Gupta, Matthew R. Craig, Kevin J. Ash
-
Patent number: 10929749
Abstract: An apparatus to facilitate optimization of a neural network (NN) is disclosed. The apparatus includes optimization logic to define a NN topology having one or more macro layers, adjust the one or more macro layers to adapt to input and output components of the NN, and train the NN based on the one or more macro layers.
Type: Grant
Filed: April 24, 2017
Date of Patent: February 23, 2021
Assignee: Intel Corporation
Inventors: Narayan Srinivasa, Joydeep Ray, Nicolas C. Galoppo Von Borries, Ben Ashbaugh, Prasoonkumar Surti, Feng Chen, Barath Lakshmanan, Elmoustapha Ould-Ahmed-Vall, Liwei Ma, Linda L. Hurd, Abhishek R. Appu, John C. Weast, Sara S. Baghsorkhi, Justin E. Gottschlich, Chandrasekaran Sakthivel, Farshad Akhbari, Dukhwan Kim, Altug Koker, Nadathur Rajagopalan Satish
-
Patent number: 10922610
Abstract: Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier.
Type: Grant
Filed: September 14, 2017
Date of Patent: February 16, 2021
Assignee: Intel Corporation
Inventors: Adam Procter, Vikram Saletore, Deepthi Karkada, Meenakshi Arunachalam
-
Patent number: 10885424
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: November 13, 2017
Date of Patent: January 5, 2021
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 10817783
Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: May 7, 2020
Date of Patent: October 27, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
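The delta-update idea in this abstract can be sketched in a few lines of pure Python (the sparse-dict encoding and function names below are illustrative assumptions, not the patent's actual update format): only weights that changed during the constrained training session are shipped to the endpoint.

```python
def make_delta(old, new, eps=1e-6):
    """Record only the weights that moved by more than eps.

    Returns a sparse dict {index: new_value}; a training session
    constrained to touch few weights keeps this delta small.
    """
    return {i: n for i, (o, n) in enumerate(zip(old, new)) if abs(n - o) > eps}

def apply_delta(weights, delta):
    """Rebuild the endpoint's copy of the weights from the delta."""
    out = list(weights)
    for i, v in delta.items():
        out[i] = v
    return out

old = [0.5, -0.2, 0.1, 0.9]
new = [0.5, -0.25, 0.1, 0.95]  # training changed only two weights
delta = make_delta(old, new)    # two entries instead of four values
```

Sending `delta` instead of the full matrices is what makes the update cheap when most weights are unchanged.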
-
Patent number: 10719613
Abstract: The disclosed computer-implemented method may include (i) identifying a neural network that comprises an interconnected set of nodes organized in a set of layers represented by a plurality of matrices that each comprise a plurality of weights, where each weight represents a connection between a node in the interconnected set of nodes that resides in one layer in the set of layers and an additional node in the set of interconnected nodes that resides in a different layer in the set of layers, (ii) encrypting, using an encryption cipher, the plurality of weights, (iii) detecting that execution of the neural network has been initiated, and (iv) decrypting, using the encryption cipher, the plurality of weights in response to detecting that the execution of the neural network has been initiated. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: February 23, 2018
Date of Patent: July 21, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Roman Levenstein
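The encrypt-at-rest, decrypt-at-execution flow can be sketched with a toy symmetric cipher (this keyed-hash XOR stream and all names are illustrative stand-ins; the patent does not specify a cipher, and a production system would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import struct

def keystream(key, n):
    """Derive n deterministic pseudo-random bytes from the key.
    Illustrative only, not cryptographically vetted."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data, key):
    """Symmetric: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

weights = [0.1, -0.7, 0.3]
plain = struct.pack("3f", *weights)       # serialize the weight matrix
stored = xor_cipher(plain, b"model-key")  # weights encrypted at rest

# When execution of the network is initiated, decrypt in memory:
restored = struct.unpack("3f", xor_cipher(stored, b"model-key"))
```

The point of the scheme is that the weight matrices, which embody the trained model's value, never sit on the endpoint's disk in plaintext.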
-
Patent number: 10699190
Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: March 4, 2018
Date of Patent: June 30, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
-
Patent number: 10459959
Abstract: Methods and apparatus for performing top-k query processing include pruning a list of documents to identify a subset of the list of documents, where pruning includes, for other query terms in the set of query terms, skipping a document in the list of documents based, at least in part, on the contribution of the query term to the score of the corresponding document and the term upper bound for each other query term, in the set of query terms, that matches the document.
Type: Grant
Filed: November 7, 2016
Date of Patent: October 29, 2019
Assignee: Oath Inc.
Inventors: David Carmel, Guy Gueta, Edward Bortnikov
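The pruning idea here, skipping a document once its partial score plus the upper bounds of the remaining terms cannot beat the current k-th best, is in the family of MaxScore/WAND-style techniques. A minimal sketch, with an assumed in-memory data layout rather than the patent's actual index structures:

```python
import heapq

def top_k(postings, upper_bounds, k):
    """Score documents for a multi-term query with upper-bound pruning.

    postings: {term: {doc_id: term_score}} (illustrative layout)
    upper_bounds: {term: maximum score that term can contribute}
    """
    heap = []  # min-heap holding the current top-k (score, doc) pairs
    docs = set().union(*postings.values())
    for doc in docs:
        threshold = heap[0][0] if len(heap) == k else 0.0
        score = 0.0
        remaining = sum(upper_bounds.values())
        skipped = False
        for term in postings:
            remaining -= upper_bounds[term]
            score += postings[term].get(doc, 0.0)
            # Even perfect scores on the remaining terms cannot beat
            # the current k-th best: skip this document early.
            if score + remaining < threshold:
                skipped = True
                break
        if not skipped:
            if len(heap) < k:
                heapq.heappush(heap, (score, doc))
            elif score > heap[0][0]:
                heapq.heapreplace(heap, (score, doc))
    return sorted(heap, reverse=True)

postings = {"cat": {"d1": 2.0, "d2": 0.5}, "dog": {"d1": 1.0, "d3": 3.0}}
upper_bounds = {"cat": 2.0, "dog": 3.0}
best = top_k(postings, upper_bounds, k=2)  # d1 and d3, each scoring 3.0
```

A skipped document's true score is bounded by the partial score plus the remaining upper bounds, so the final top-k set is exact even though many documents are never fully scored.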
-
Patent number: 10460237
Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished using many simple processing units (called neurons) and the data embodied by the connections between neurons (called synapses) and the strength of these connections (called synaptic weights). An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to code the synaptic weight. In this application, the non-idealities in the response of the NVM (such as nonlinearity, saturation, stochasticity and asymmetry in response to programming pulses) lead to reduced network performance compared to an ideal network implementation. Disclosed is a method that improves performance by implementing a learning rate parameter that is local to each synaptic connection, a method for tuning this local learning rate, and an implementation that does not compromise the ability to train many synaptic weights in parallel during learning.
Type: Grant
Filed: November 30, 2015
Date of Patent: October 29, 2019
Assignee: International Business Machines Corporation
Inventors: Irem Boybat Kara, Geoffrey Burr, Carmelo di Nolfo, Robert Shelby
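A per-synapse learning rate can be sketched as follows (the sign-based tuning rule and all names below are illustrative assumptions, not IBM's disclosed method): each synapse carries its own rate, which is damped when its gradient oscillates and grown when the gradient direction is stable.

```python
def update_synapses(weights, grads, local_rates, rate_decay=0.9, rate_grow=1.05):
    """Apply per-synapse learning rates, then tune each local rate.

    local_rates: list of (rate, previous_gradient) pairs, one per synapse.
    Shrinks the rate where the gradient sign flipped (oscillation),
    grows it where the sign is stable (sketch of a local tuning rule).
    """
    new_w, new_r = [], []
    for w, g, (rate, last_g) in zip(weights, grads, local_rates):
        new_w.append(w - rate * g)      # each synapse uses its own rate
        if g * last_g < 0:              # oscillating: damp this synapse
            rate *= rate_decay
        elif g * last_g > 0:            # consistent: speed it up
            rate *= rate_grow
        new_r.append((rate, g))
    return new_w, new_r

weights, rates = update_synapses([1.0, 1.0], [0.5, -0.5],
                                 [(0.1, 0.5), (0.1, 0.5)])
```

Because each update touches only that synapse's own state, all synapses can still be updated in parallel, which is the constraint the patent emphasizes for NVM crossbar hardware.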
-
Patent number: 10452540
Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
Type: Grant
Filed: October 20, 2017
Date of Patent: October 22, 2019
Assignee: International Business Machines Corporation
Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
-
Patent number: 10417563
Abstract: An intelligent control system based on an explicit model of cognitive development (Table 1) performs high-level functions. It comprises up to O hierarchically stacked neural networks, Nm, …, Nm+(O−1), where m denotes the stage/order of tasks performed in the first neural network, Nm, and O denotes the highest stage/order of tasks performed in the highest-level neural network. The type of processing actions performed in a network, Nm, corresponds to the complexity for stage/order m. Thus N1 performs tasks at the level corresponding to stage/order 1, and N5 processes information at the level corresponding to stage/order 5. Stacked neural networks may begin and end at any stage/order, but information must be processed by each stage in ascending order sequence; stages/orders cannot be skipped. Each neural network in a stack may use different architectures, interconnections, algorithms, and training methods, depending on the stage/order of the neural network and the type of intelligent control system implemented.
Type: Grant
Filed: April 7, 2017
Date of Patent: September 17, 2019
Inventors: Michael Lamport Commons, Mitzi Sturgeon White
-
Patent number: 10410119
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory. One of the methods includes providing an output derived from the neural network output for the time step as a system output for the time step; maintaining a current state of the external memory; determining, from the neural network output for the time step, memory state parameters for the time step; updating the current state of the external memory using the memory state parameters for the time step; reading data from the external memory in accordance with the updated state of the external memory; and combining the data read from the external memory with a system input for the next time step to generate the neural network input for the next time step.
Type: Grant
Filed: June 2, 2016
Date of Patent: September 10, 2019
Assignee: DeepMind Technologies Limited
Inventors: Edward Thomas Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Philip Blunsom
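The update-then-read memory cycle described above can be sketched in miniature (this is a heavily simplified illustration under assumed names; in the actual class of models, the write vector and slot weights are emitted by the network's output at each time step):

```python
def memory_step(memory, write_vector, write_weights, read_weights):
    """One external-memory step: update the slots, then read from them.

    memory: list of slot vectors.
    write_weights / read_weights: one scalar per slot, assumed in [0, 1].
    """
    # Update: move each slot toward the write vector by its write weight.
    new_memory = [
        [m + ww * (w - m) for m, w in zip(slot, write_vector)]
        for slot, ww in zip(memory, write_weights)
    ]
    # Read: a weighted mix of the *updated* slots, to be combined with
    # the next time step's input.
    read = [
        sum(rw * slot[i] for rw, slot in zip(read_weights, new_memory))
        for i in range(len(write_vector))
    ]
    return new_memory, read

memory = [[0.0, 0.0], [1.0, 1.0]]
new_mem, read = memory_step(memory, write_vector=[2.0, 2.0],
                            write_weights=[1.0, 0.0],
                            read_weights=[1.0, 0.0])
```

The key property the abstract describes is the ordering: the memory state is updated from the current output first, and the read that feeds the next time step sees the updated state.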
-
Patent number: 9658260
Abstract: A power system grid is decomposed into several parts and decomposed state estimation steps are executed separately, on each part, using coordinated feedback regarding a boundary state. The achieved solution is the same that would be achieved with a simultaneous state estimation approach. With the disclosed approach, the state estimation problem can be distributed among decomposed estimation operations for each subsystem and a coordinating operation that yields the complete state estimate. The approach is particularly suited for estimating the state of power systems that are naturally decomposed into separate subsystems, such as separate AC and HVDC systems, and/or into separate transmission and distribution systems.
Type: Grant
Filed: September 4, 2013
Date of Patent: May 23, 2017
Assignee: ABB Schweiz AG
Inventors: Xiaoming Feng, Vaibhav Donde, Ernst Scholtz
-
Patent number: 9563842
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: August 11, 2015
Date of Patent: February 7, 2017
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 9489623
Abstract: Apparatus and methods for developing robotic controllers comprising parallel networks. In some implementations, a parallel network may comprise at least first and second neuron layers. The second layer may be configured to determine a measure of discrepancy (error) between a target network output and actual network output. The network output may comprise a control signal configured to cause a task execution by the robot. The error may be communicated back to the first neuron layer in order to adjust efficacy of input connections into the first layer. The error may be encoded into spike latency using linear or nonlinear encoding. Error communication and control signal provision may be time multiplexed so as to enable target action execution. Efficacy associated with forward and backward/reverse connections may be stored in individual arrays. A synchronization mechanism may be employed to match forward/reverse efficacy in order to implement plasticity.
Type: Grant
Filed: October 15, 2013
Date of Patent: November 8, 2016
Assignee: Brain Corporation
Inventors: Oleg Sinyavskiy, Vadim Polonichko
-
Patent number: 9189731
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: March 24, 2014
Date of Patent: November 17, 2015
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 9183495
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: August 8, 2012
Date of Patent: November 10, 2015
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 8892485
Abstract: Certain embodiments of the present disclosure support implementation of a neural processor with synaptic weights, wherein training of the synaptic weights is based on encouraging a specific output neuron to generate a spike. The implemented neural processor can be applied to classification of images and other patterns.
Type: Grant
Filed: July 8, 2010
Date of Patent: November 18, 2014
Assignee: Qualcomm Incorporated
Inventors: Vladimir Aparin, Jeffrey A. Levin
-
Publication number: 20140180989
Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers are interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
Type: Application
Filed: September 18, 2013
Publication date: June 26, 2014
Applicant: Google Inc.
Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
-
Patent number: 8712942
Abstract: An active element machine is a new kind of computing machine. When implemented in hardware, the active element machine can execute multiple instructions simultaneously, because every one of its computing elements is active. This greatly enhances the computing speed. By executing a meta program whose instructions change the connections in a dynamic active element machine, the active element machine can perform tasks that a digital computer is unable to compute. In an embodiment, instructions in a computer language are translated into instructions in a register machine language. The instructions in the register machine language are translated into active element machine instructions. In an embodiment, an active element machine may be programmed using instructions for a register machine. The active element machine is not limited to these embodiments.
Type: Grant
Filed: April 24, 2007
Date of Patent: April 29, 2014
Assignee: AEMEA Inc.
Inventor: Michael Stephen Fiske
-
Patent number: 8527542
Abstract: User-generated input may be received to initiate a generation of a message associated with an incident of a computing system having a multi-layer architecture that requires support. Thereafter, context data associated with one or more operational parameters may be collected from each of at least two of the layers of the computing system. A message may then be generated based on at least a portion of the user-generated input and at least a portion of the collected context data. Related apparatuses, methods, computer program products, and computer systems are also described.
Type: Grant
Filed: December 30, 2005
Date of Patent: September 3, 2013
Assignee: SAP AG
Inventors: Tilmann Haeberle, Lilia Kotchanovskaia, Zoltan Nagy, Berthold Wocher, Juergen Subat
-
Publication number: 20130212053
Abstract: A feature extraction device according to the present invention includes a neural network of neurons, each including at least one expressed gene: an attribute value that determines whether transmission of a signal from one of the first neurons to one of the second neurons is possible. Each first neuron, having input data derived from target data to be subjected to feature extraction, outputs a first signal value to the corresponding second neuron(s) having the same expressed gene as the first neuron, the first signal value increasing as the value of the input data increases. Each second neuron calculates, as a feature quantity of the target data, a second signal value corresponding to the total sum of the first signal values input to it.
Type: Application
Filed: October 18, 2011
Publication date: August 15, 2013
Inventors: Takeshi Yagi, Takashi Kitsukawa
-
Patent number: 8468109
Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Grant
Filed: December 28, 2011
Date of Patent: June 18, 2013
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
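The "variable degree of parallelization" concept can be illustrated with a small sketch (the partitioning function below is an illustrative assumption, not the patent's mechanism): a layer's neurons are split into as many groups as the chosen degree, one group per hardware unit, and the degree can be picked to match available resources.

```python
def partition_layer(num_neurons, degree):
    """Split a layer's neurons into `degree` roughly equal groups.

    Each group would be assigned to one processing unit; choosing
    `degree` from hardware resources gives a variable degree of
    parallelization (sketch only).
    """
    base, extra = divmod(num_neurons, degree)
    groups, start = [], 0
    for i in range(degree):
        size = base + (1 if i < extra else 0)  # spread the remainder
        groups.append(list(range(start, start + size)))
        start += size
    return groups

# The same 10-neuron layer at two different degrees of parallelization:
groups4 = partition_layer(10, 4)  # four processing units
groups2 = partition_layer(10, 2)  # two processing units
```

Because the partition is a parameter rather than a fixed property of the architecture, the same network description can scale from a small FPGA to a larger one, which is the flexibility the abstract emphasizes.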
-
Publication number: 20120166374
Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Application
Filed: December 28, 2011
Publication date: June 28, 2012
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Patent number: 8190542
Abstract: A neural network includes neurons and wires adapted for connecting the neurons. Some of the wires comprise input connections and exactly one output connection, and/or a part of the wires comprise exactly one input connection and output connections. Neurons are hierarchically arranged in groups. A lower group of neurons recognizes a pattern of information input to the neurons of this lower group. A higher group of neurons recognizes higher level patterns. A strength value is associated with a connection between different neurons. The strength value of a particular connection is indicative of a likelihood that information which is input to the neurons propagates via the particular connection. The strength value of each connection is modifiable based on an amount of traffic of information which is input to the neurons and which propagates via the particular connection, and/or is modifiable based on a strength modification impulse.
Type: Grant
Filed: September 27, 2006
Date of Patent: May 29, 2012
Assignee: ComDys Holding B.V.
Inventor: Eugen Oetringer
-
Patent number: 8121817
Abstract: Process control system for detecting abnormal events in a process having one or more independent variables and one or more dependent variables. The system includes a device for measuring values of the one or more independent and dependent variables, a process controller having a predictive model for calculating predicted values of the one or more dependent variables from the measured values of the one or more independent variables, a calculator for calculating residual values for the one or more dependent variables from the difference between the predicted and measured values of the one or more dependent variables, and an analyzer for performing a principal component analysis on the residual values. The process controller is a multivariable predictive control means, and the principal component analysis results in the output of one or more score values, T2 values and Q values.
Type: Grant
Filed: October 16, 2007
Date of Patent: February 21, 2012
Assignee: BP Oil International Limited
Inventors: Keith Landells, Zaid Rawi
-
Patent number: 8103606
Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Grant
Filed: December 10, 2007
Date of Patent: January 24, 2012
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Patent number: 8065022
Abstract: Embodiments of the invention can include methods and systems for controlling clearances in a turbine. In one embodiment, a method can include applying at least one operating parameter as an input to at least one neural network model, modeling via the neural network model a thermal expansion of at least one turbine component, and taking a control action based at least in part on the modeled thermal expansion of the one or more turbine components. An example system can include a controller operable to determine and apply the operating parameters as inputs to the neural network model, model thermal expansion via the neural network model, and generate a control action based at least in part on the modeled thermal expansion.
Type: Grant
Filed: January 8, 2008
Date of Patent: November 22, 2011
Assignee: General Electric Company
Inventors: Karl Dean Minto, Jianbo Zhang, Erhan Karaca
-
Patent number: 8015130
Abstract: In a hierarchical neural network having a module structure, learning necessary for detection of a new feature class is executed by a processing module which has not finished learning yet and includes a plurality of neurons which should learn an unlearned feature class and have an undetermined receptor field structure, by presenting a predetermined pattern to a data input layer. Thus, a feature class necessary for subject recognition can be learned automatically and efficiently.
Type: Grant
Filed: January 29, 2010
Date of Patent: September 6, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Masakazu Matsugu, Katsuhiko Mori, Mie Ishii, Yusuke Mitarai
-
Patent number: 7979370
Abstract: A system for information searching includes a first layer and a second layer. The first layer includes a first plurality of neurons, each associated with a word and with a first set of dynamic connections to at least some of the first plurality of neurons. The second layer includes a second plurality of neurons, each associated with a document and with a second set of dynamic connections to at least some of the first plurality of neurons. The first set of dynamic connections and the second set of dynamic connections can be configured such that a query of at least one neuron of the first plurality of neurons excites at least one neuron of the second plurality of neurons. The excited at least one neuron of the second plurality of neurons can be contextually related to the queried at least one neuron of the first plurality of neurons.
Type: Grant
Filed: January 29, 2009
Date of Patent: July 12, 2011
Assignee: Dranias Development LLC
Inventor: Alexander V. Ershov
-
Patent number: 7831416
Abstract: A method is provided for designing a product. The method may include obtaining data records relating to one or more input variables and one or more output parameters associated with the product; and pre-processing the data records based on characteristics of the input variables. The method may also include selecting one or more input parameters from the one or more input variables; and generating a computational model indicative of interrelationships between the one or more input parameters and the one or more output parameters based on the data records. Further, the method may include providing a set of constraints to the computational model representative of a compliance state for the product; and using the computational model and the provided set of constraints to generate statistical distributions for the one or more input parameters and the one or more output parameters, wherein the one or more input parameters and the one or more output parameters represent a design for the product.
Type: Grant
Filed: July 17, 2007
Date of Patent: November 9, 2010
Assignee: Caterpillar Inc.
Inventors: Anthony J. Grichnik, Michael Seskin, Amit Jayachandran
-
Patent number: 7788196
Abstract: An artificial neural network comprises at least one input layer with a predetermined number of input nodes and at least one output layer with a predetermined number of output nodes, or also at least one intermediate hidden layer with a predetermined number of nodes between the input and output layers. At least the nodes of the output layer and/or of the hidden layer and/or also of the input layer carry out a non-linear transformation of a first non-linear transformation of the input data to compute an output value, which is fed as an input value to a following layer, or which forms the output data if the output layer is considered.
Type: Grant
Filed: August 24, 2004
Date of Patent: August 31, 2010
Assignee: Semeion
Inventor: Paolo Massimo Buscema
-
Patent number: 7743004
Abstract: A pulse signal processing circuit, a parallel processing circuit, and a pattern recognition system including a plurality of arithmetic elements for outputting pulse signals and at least one modulation circuit, synaptic connection element(s), or synaptic connection means for modulating the pulse signals, the modulated pulse signals then being separately or exclusively output to corresponding signal lines.
Type: Grant
Filed: June 30, 2008
Date of Patent: June 22, 2010
Assignee: Canon Kabushiki Kaisha
Inventor: Masakazu Matsugu
-
Publication number: 20100088263
Abstract: There is described a method for computer-aided learning of a neural network with a plurality of neurons, in which the neurons of the neural network are divided into at least two layers: a first layer and a second layer crosslinked with the first layer. In the first layer, input information is represented by one or more characteristic values from one or several characteristics, wherein every characteristic value comprises one or more neurons of the first layer. A plurality of categories is stored in the second layer, wherein every category comprises one or more neurons of the second layer. For one or several pieces of input information, at least one category in the second layer is assigned to the characteristic values of the input information in the first layer.
Type: Application
Filed: September 20, 2006
Publication date: April 8, 2010
Inventors: Gustavo Deco, Martin Stetter, Miruna Szabo
-
Patent number: 7496548
Abstract: A system, method and computer program product for information searching includes (a) a first layer with a first plurality of neurons, each of the first plurality of neurons being associated with a word and with a set of connections to at least some neurons of the first layer; (b) a second layer with a second plurality of neurons, each of the second plurality of neurons being associated with an object and with a set of connections to at least some neurons of the second layer, and with a set of connections to some neurons of the first layer; (c) a third layer with a third plurality of neurons, each of the third plurality of neurons being associated with a sentence and with a set of connections to at least some neurons of the third layer, and with a set of connections to at least some neurons of the first layer and to at least some neurons of the second layer; and (d) a fourth layer with a fourth plurality of neurons, each of the fourth plurality of neurons being associated with a document and with a set of conn…
Type: Grant
Filed: August 29, 2006
Date of Patent: February 24, 2009
Assignee: Quintura, Inc.
Inventor: Alexander V. Ershov
-
Publication number: 20080319933
Abstract: An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Application
Filed: December 10, 2007
Publication date: December 25, 2008
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Publication number: 20080319934
Abstract: A neural network (100) comprising a plurality of neurons (101 to 106) and a plurality of wires (109) adapted for connecting the plurality of neurons (101 to 106), wherein at least a part of the plurality of wires (109) comprises a plurality of input connections and exactly one output connection.
Type: Application
Filed: September 27, 2006
Publication date: December 25, 2008
Inventor: Eugen Oetringer
-
Patent number: 7409374
Abstract: A method for discriminating between explosive events having their origins in High Explosive or Chemical/Biological detonation, employing multiresolution analysis provided by a discrete wavelet transform. Original signatures of explosive events are broken down into subband components, thereby removing higher-frequency noise features and creating two sets of coefficients at varying levels of decomposition. These coefficients are obtained each time the signal is passed through a lowpass and highpass filter bank whose impulse response is derived from the Daubechies db5 wavelet. Distinct features are obtained through the process of isolating the details of the highly oscillatory components of the signature. The ratio of energy contained within the details at varying levels of decomposition is sufficient to discriminate between explosive events such as High Explosive and Chemical/Biological.
Type: Grant
Filed: August 22, 2005
Date of Patent: August 5, 2008
Assignee: The United States of America as represented by the Secretary of the Army
Inventors: Myron Hohil, Sashi V. Desai
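The detail-energy-ratio idea can be sketched in a few lines. This is only an illustration of the principle, not the patented method: it uses a single-level Haar filter pair rather than the db5 wavelet and multilevel decomposition the abstract specifies.

```python
# Single-level Haar split into lowpass (approximation) and highpass (detail).
def haar_dwt(signal):
    """Split a signal into approximation and detail subbands."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def detail_energy_ratio(signal):
    """Fraction of signal energy carried by the high-frequency details."""
    approx, detail = haar_dwt(signal)
    e_detail = sum(d * d for d in detail)
    e_total = sum(a * a for a in approx) + e_detail
    return e_detail / e_total if e_total else 0.0

# A fast-oscillating signature puts most of its energy in the details.
smooth = [1, 1, 2, 2, 3, 3, 4, 4]
spiky = [1, -1, 1, -1, 1, -1, 1, -1]
print(detail_energy_ratio(smooth) < detail_energy_ratio(spiky))  # True
```

The patent classifies events by comparing such energy ratios across several decomposition levels; the sketch shows why an oscillatory signature separates cleanly from a smooth one even at one level.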
-
Patent number: 7395248
Abstract: The invention concerns a method for determining competing risks for objects following an initial event, based on previously measured or otherwise objectifiable training data patterns, in which several signals obtained from a learning-capable system are combined in an objective function in such a way that the learning-capable system is rendered capable of detecting or forecasting the underlying probabilities of each of the competing risks.
Type: Grant
Filed: December 7, 2001
Date of Patent: July 1, 2008
Inventors: Ronald E. Kates, Nadia Harbeck
-
Patent number: 7392231
Abstract: A user's preference structure in respect of alternative "objects" with which the user is presented is captured in a multi-attribute utility function. The user ranks these competing objects in order of the user's relative preference for such objects. A utility function that defines the user's preference structure is provided as output on the basis of this relative ranking. This technique can be used to assist a buyer in selecting between multi-attribute quotes or bids submitted by prospective suppliers to the buyer.
Type: Grant
Filed: December 3, 2002
Date of Patent: June 24, 2008
Assignee: International Business Machines Corporation
Inventors: Jayanta Basak, Manish Gupta
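A rough sketch of the abstract's idea, fitting an additive multi-attribute utility to a user's ranking. The patent's actual elicitation method is not disclosed here; the perceptron-style update, the `fit_weights` helper, and the example attribute vectors are all assumptions for illustration.

```python
def utility(weights, attrs):
    """Additive multi-attribute utility: weighted sum of attribute values."""
    return sum(w * a for w, a in zip(weights, attrs))

def fit_weights(ranked_pairs, n_attrs, lr=0.1, epochs=100):
    """Nudge weights until the preferred object in each ranked pair scores higher.

    ranked_pairs: list of (preferred, other) attribute tuples."""
    w = [0.0] * n_attrs
    for _ in range(epochs):
        for better, worse in ranked_pairs:
            if utility(w, better) <= utility(w, worse):
                for i in range(n_attrs):
                    w[i] += lr * (better[i] - worse[i])
    return w

# Quotes described by (quality, 1/price); the buyer ranked the first higher.
pairs = [((0.9, 0.5), (0.4, 0.8)), ((0.8, 0.6), (0.3, 0.9))]
w = fit_weights(pairs, n_attrs=2)
print(utility(w, (0.9, 0.5)) > utility(w, (0.4, 0.8)))  # True
```

The output utility function can then score any new quote, which is the buyer-assistance use the abstract describes.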
-
Patent number: 7293002
Abstract: A method for organizing processors to perform artificial neural network tasks is provided. The method provides a computer-executable methodology for organizing processors in self-organizing, data-driven, learning hardware with local interconnections. Training data are processed substantially in parallel by the locally interconnected processors. The local processors determine local interconnections between the processors based on the training data. The local processors then determine, substantially in parallel, transformation functions and/or entropy-based thresholds for the processors based on the training data.
Type: Grant
Filed: June 18, 2002
Date of Patent: November 6, 2007
Assignee: Ohio University
Inventor: Janusz A. Starzyk
-
Patent number: 7143072
Abstract: A neural network having layers of neurons divided into sublayers of neurons. The values of target neurons in one layer are calculated from sublayers of source neurons in a second, underlying layer. It is therefore always possible to use for this calculation the same group of weights to be multiplied by the respective related source neurons situated in the underlying layer of the neural network.
Type: Grant
Filed: September 26, 2002
Date of Patent: November 28, 2006
Assignee: CSEM Centre Suisse d'Electronique et de Microtechnique SA
Inventors: Jean-Marc Masgonty, Philippe Vuilleumier, Peter Masa, Christian Piguet
-
Patent number: 7092922
Abstract: An adaptive learning method for automated maintenance of a neural net model is provided. The neural net model is trained with an initial set of training data. Partial products of the trained model are stored. When new training data are available, the trained model is updated by using the stored partial products and the new training data to compute weights for the updated model.
Type: Grant
Filed: May 21, 2004
Date of Patent: August 15, 2006
Assignee: Computer Associates Think, Inc.
Inventors: Zhuo Meng, Baofu Duan, Yoh-Han Pao
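The stored-partial-products idea can be shown with a one-weight linear neuron: keep the sufficient statistics of the normal equations so that new training data updates the weight without revisiting the old data. This is an illustrative reduction, not the patented method, which covers full neural net models; the class and field names are hypothetical.

```python
class IncrementalLinearModel:
    """One-weight linear model maintained from stored partial products."""

    def __init__(self):
        self.sxx = 0.0  # stored partial product: sum of x * x
        self.sxy = 0.0  # stored partial product: sum of x * y

    def train(self, xs, ys):
        """Fold a batch of training data into the stored partial products."""
        for x, y in zip(xs, ys):
            self.sxx += x * x
            self.sxy += x * y

    @property
    def weight(self):
        """Least-squares weight recomputed from the partial products alone."""
        return self.sxy / self.sxx if self.sxx else 0.0

model = IncrementalLinearModel()
model.train([1, 2, 3], [2, 4, 6])   # initial training set: y = 2x
print(model.weight)                 # 2.0
model.train([4, 5], [8.4, 9.5])     # new data arrives later
print(round(model.weight, 3))       # weight adjusts; old data never re-read
```

The key property, matching the abstract, is that the update step touches only `sxx`, `sxy`, and the new samples.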
-
Patent number: 7080055
Abstract: Methods and apparatuses for backlash compensation. A dynamics inversion compensation scheme is designed for control of nonlinear discrete-time systems with input backlash. The techniques of this disclosure extend the dynamic inversion technique to discrete-time systems by using a filtered prediction, and show how to use a neural network (NN) for inverting the backlash nonlinearity in the feedforward path. The techniques provide a general procedure for using an NN to determine the dynamics preinverse of an invertible discrete-time dynamical system. A discrete-time tuning algorithm is given for the NN weights so that the backlash compensation scheme guarantees bounded tracking and backlash errors, as well as bounded parameter estimates. A rigorous proof of stability and performance is given, and a simulation example verifies performance. Unlike standard discrete-time adaptive control techniques, no certainty equivalence (CE) or linear-in-the-parameters (LIP) assumptions are needed.
Type: Grant
Filed: October 2, 2001
Date of Patent: July 18, 2006
Inventors: Javier Campos, Frank L. Lewis
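A toy sketch of backlash and its pre-inverse may make the feedforward-path idea concrete. In the patent an NN learns the inverse online; here the deadband width is simply assumed known, and both function names are hypothetical.

```python
def backlash(u, prev_out, width):
    """Backlash nonlinearity: output follows input only outside the dead zone."""
    if u > prev_out + width:
        return u - width
    if u < prev_out - width:
        return u + width
    return prev_out  # input still inside the dead zone: output holds

def pre_inverse(u_desired, prev_u, width):
    """Feedforward preinverse: shift the command past the dead zone."""
    if u_desired > prev_u:
        return u_desired + width
    if u_desired < prev_u:
        return u_desired - width
    return u_desired

width = 0.5
cmd = pre_inverse(1.0, 0.0, width)   # command pushed past the dead zone
out = backlash(cmd, 0.0, width)
print(out)                           # tracks the desired 1.0
cmd = pre_inverse(0.2, 1.0, width)   # direction reverses
out = backlash(cmd, out, width)
print(round(out, 6))                 # tracks the desired 0.2
```

Composing the preinverse with the backlash cancels the deadband whenever the input is moving, which is exactly what the NN in the patent is trained to achieve without knowing `width` in advance.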
-
Patent number: 7054850
Abstract: A pattern detecting apparatus has a plurality of hierarchized neuron elements to detect a predetermined pattern included in input patterns. Pulse signals output from the plurality of neuron elements are given specific delays by synapse circuits associated with the individual elements. This makes it possible to transmit the pulse signals to the neuron elements of the succeeding layer through a common bus line so that they can be identified on a time base. The neuron elements of the succeeding layer output pulse signals at output levels based on an arrival-time pattern of the plurality of pulse signals received from the plurality of neuron elements of the preceding layer within a predetermined time window. Thus, the reliability of pattern detection can be improved, and the number of wires interconnecting the elements can be reduced by the use of the common bus line, leading to a smaller circuit scale and reduced power consumption.
Type: Grant
Filed: June 12, 2001
Date of Patent: May 30, 2006
Assignee: Canon Kabushiki Kaisha
Inventor: Masakazu Matsugu
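The time-multiplexing scheme can be sketched in software (an illustration only; the neuron names, delay values, and window rule are assumptions, and the real apparatus is analog circuitry): each synapse adds a known delay, so pulses from different neurons remain identifiable by arrival time on the shared bus, and a succeeding-layer neuron fires when the arrival pattern fits a time window.

```python
# Hypothetical per-synapse delays (ms) for three source neurons.
delays = {"n1": 1.0, "n2": 2.5, "n3": 4.0}

def bus_arrivals(spike_times):
    """Merge delayed pulses from all source neurons onto one common bus."""
    return sorted((t + delays[n], n) for n, t in spike_times.items())

def detect(arrivals, window):
    """Fire if every pulse lands inside the predetermined time window."""
    times = [t for t, _ in arrivals]
    return max(times) - min(times) <= window

# Spikes timed so that all delayed pulses coincide at t = 4.0 ms.
arrivals = bus_arrivals({"n1": 3.0, "n2": 1.5, "n3": 0.0})
print(detect(arrivals, window=0.5))  # True
```

The delays do double duty, just as in the abstract: they label each pulse's origin on the shared wire and define the coincidence pattern the detector looks for.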
-
Patent number: 6876989
Abstract: A neural network system includes a feedforward network comprising at least one neuron circuit for producing an activation function and a first derivative of the activation function, and a weight updating circuit for producing updated weights to the feedforward network. The system also includes an error back-propagation network for receiving the first derivative of the activation function and for providing weight change data information to the weight updating circuit.
Type: Grant
Filed: February 13, 2002
Date of Patent: April 5, 2005
Assignee: Winbond Electronics Corporation
Inventors: Bingxue Shi, Chun Lu, Lu Chen
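The roles of the three circuits can be mirrored in a software sketch (assumed function names, a sigmoid activation, and a single neuron; the patent describes hardware, not this code): the neuron produces both its activation and the activation's first derivative, and the back-propagation step turns that derivative into a weight change for the updating circuit.

```python
import math

def neuron(w, x):
    """Neuron circuit: return the activation and its first derivative (sigmoid)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    a = 1.0 / (1.0 + math.exp(-s))
    return a, a * (1.0 - a)  # sigmoid'(s) = a * (1 - a)

def weight_update(w, x, target, lr=0.5):
    """Back-propagation + weight-updating circuits: delta rule on one neuron."""
    a, da = neuron(w, x)
    delta = (target - a) * da  # error scaled by the first derivative
    return [wi + lr * delta * xi for wi, xi in zip(w, x)]

w = [0.1, -0.2]
for _ in range(200):
    w = weight_update(w, [1.0, 0.5], target=0.9)
a, _ = neuron(w, [1.0, 0.5])
print(round(a, 2))  # climbs toward the 0.9 target
```

Producing the derivative alongside the activation is the point of the patented neuron circuit: the back-propagation network consumes it directly instead of recomputing it.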
-
Patent number: 6856983
Abstract: A method and system are described that adaptively adjust an eService management system by using feedback control. Behavior experts are distributed at different levels of the hierarchy of the eService management system. Within the hierarchy, feed-forward reasoning is performed from lower-level behavior experts to higher-level behavior experts. A method for identifying bottlenecks is described and utilized. The performance of these behavior experts is compared with various objective functions, and the discrepancies are used to adjust the system.
Type: Grant
Filed: October 26, 2001
Date of Patent: February 15, 2005
Assignee: Panacya, Inc.
Inventors: Earl D. Cox, Xindong Wang, Shi-Yue Qiu
-
Patent number: 6826550
Abstract: Provided is a compiler to map application program code to object code capable of being executed on an operating system platform. A first neural network module is trained to generate characteristic output based on input information describing attributes of the application program. A second neural network module is trained to receive as input the application program code and the characteristic output and, in response, generate object code. The first and second neural network modules are used to convert the application program code to object code.
Type: Grant
Filed: December 15, 2000
Date of Patent: November 30, 2004
Assignee: International Business Machines Corporation
Inventors: Michael Wayne Brown, Chung Tien Nguyen