Search Patents
-
Patent number: 7406451
Abstract: The invention relates to a system and a method for training a number of neural networks, by determining a first training data record, wherein the training data have a particular accuracy, generating a number of second training data records by perturbing the first training data record with a random variable, and training each of the neural networks with one of the training data records. A prognosis and an estimation of the prognosis error can be carried out by means of such a system.
Type: Grant
Filed: March 23, 2004
Date of Patent: July 29, 2008
Assignee: Bayer Technology Services GmbH
Inventors: Thomas Mrziglod, Georg Mogk
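The ensemble scheme described above lends itself to a short sketch. In the sketch below a closed-form linear fit stands in for neural-network training, and the data, noise level (the "particular accuracy"), and ensemble size are invented for illustration; the prognosis is the ensemble mean and the prognosis-error estimate is the ensemble spread.

```python
import random

random.seed(0)
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]   # "first training data record"
noise = 0.1                        # assumed measurement accuracy

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b (stands in for NN training)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Generate perturbed second training records and train one model per record.
models = []
for _ in range(20):
    ys_pert = [y + random.gauss(0.0, noise) for y in ys]
    models.append(fit_linear(xs, ys_pert))

# Prognosis: ensemble mean; prognosis-error estimate: ensemble spread.
x_new = 4.0
preds = [a * x_new + b for a, b in models]
mean_pred = sum(preds) / len(preds)
spread = (sum((p - mean_pred) ** 2 for p in preds) / len(preds)) ** 0.5
```

The spread across the perturbed-data models gives a direct, if rough, estimate of how sensitive the prognosis is to the stated data accuracy.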
-
Patent number: 5852816
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks which do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
Type: Grant
Filed: May 15, 1998
Date of Patent: December 22, 1998
Inventor: Stephen L. Thaler
-
Patent number: 11741355
Abstract: A student neural network may be trained by a computer-implemented method, including: inputting common input data to each teacher neural network among a plurality of teacher neural networks to obtain a soft label output among a plurality of soft label outputs from each teacher neural network among the plurality of teacher neural networks, and training a student neural network with the input data and the plurality of soft label outputs.
Type: Grant
Filed: July 27, 2018
Date of Patent: August 29, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Takashi Fukuda, Masayuki Suzuki, Osamu Ichikawa, Gakuto Kurata, Samuel Thomas, Bhuvana Ramabhadran
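A minimal sketch of the multi-teacher soft-label idea above, assuming three hypothetical teacher output distributions and a plain softmax "student" trained by gradient descent on cross-entropy against their average (the patent does not prescribe this particular loss or combination rule):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical soft-label outputs from three teacher networks for one common input.
teacher_outputs = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
]
# Combine the teachers by averaging their soft-label distributions.
target = [sum(t[k] for t in teacher_outputs) / len(teacher_outputs) for k in range(3)]

# Student: logits for this input, trained by gradient descent on cross-entropy
# against the averaged soft labels (grad of CE w.r.t. logits is p - target).
logits = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(200):
    p = softmax(logits)
    logits = [l - lr * (pi - ti) for l, pi, ti in zip(logits, p, target)]

student = softmax(logits)   # converges toward the averaged teacher distribution
```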
-
Publication number: 20110270788
Abstract: Disclosed are systems, apparatuses, and methods for clustering data. Such a method includes providing input data to each of a plurality of cluster microcircuits of a neural network, wherein each cluster microcircuit includes a mean neural group and a variance neural group. The method also includes determining a response of each cluster microcircuit with respect to the input data. The method further includes modulating the mean neural group and the variance neural group of each cluster microcircuit responsive to a value system.
Type: Application
Filed: August 4, 2010
Publication date: November 3, 2011
Inventor: Douglas A. Moore
-
Patent number: 6014653
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks which do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
Type: Grant
Filed: May 15, 1998
Date of Patent: January 11, 2000
Inventor: Stephen L. Thaler
-
Patent number: 5852815
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks which do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks which train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
Type: Grant
Filed: May 15, 1998
Date of Patent: December 22, 1998
Inventor: Stephen L. Thaler
-
Patent number: 10891544
Abstract: The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network.
Type: Grant
Filed: September 29, 2016
Date of Patent: January 12, 2021
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 11468323
Abstract: A method, system and computer-program product for identifying neural network inputs for a neural network that may have been incorrectly processed by the neural network. A set of activation values (of a subset of neurons of a single layer) associated with a neural network input is obtained. A neural network output associated with the neural network input is also obtained. A determination is made as to whether a first and second neural network input share similar sets of activation values, but dissimilar neural network outputs, or vice versa. In this way a prediction can be made as to whether one of the first and second neural network inputs has been incorrectly processed by the neural network.
Type: Grant
Filed: October 16, 2018
Date of Patent: October 11, 2022
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vlado Menkovski, Asif Rahman, Caroline Denise Francoise Raynaud, Bryan Conroy, Dimitrios Mavroeidis, Erik Bresch, Teun van den Heuvel
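A toy version of the consistency check above can be written directly from the abstract: compare activation-vector similarity against output agreement and flag pairs where the two disagree. The activation vectors, labels, and the cosine threshold below are all invented for illustration.

```python
def cosine(u, v):
    """Cosine similarity between two activation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Hypothetical (activation vector, predicted class) pairs for three inputs.
records = [
    ([0.90, 0.10, 0.40], "cat"),   # input A
    ([0.88, 0.12, 0.41], "dog"),   # input B: near-identical activations, different output
    ([0.10, 0.90, 0.20], "dog"),   # input C
]

def suspect_pairs(records, sim_threshold=0.99):
    """Flag pairs with similar activations but dissimilar outputs, or vice versa."""
    flagged = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            acts_i, out_i = records[i]
            acts_j, out_j = records[j]
            similar_acts = cosine(acts_i, acts_j) > sim_threshold
            same_output = out_i == out_j
            if similar_acts != same_output:
                flagged.append((i, j))
    return flagged

flags = suspect_pairs(records)   # (A, B) is flagged: similar activations, different class
```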
-
Publication number: 20040172375
Abstract: A method for checking whether an input data record is in the permitted working range of a neural network, in which the convex envelope formed by the training input records of the neural network, together with its surroundings, is defined as the permitted working range, and it is checked whether the input data record is in the convex envelope.
Type: Application
Filed: January 15, 2004
Publication date: September 2, 2004
Applicant: Bayer Aktiengesellschaft
Inventors: Georg Mogk, Thomas Mrziglod, Peter Hubl
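In two dimensions the check described above reduces to a point-in-convex-polygon test. The sketch below builds the convex envelope of some hypothetical training inputs with Andrew's monotone chain and tests whether a new input record falls inside it (the publication covers the general n-dimensional case; the data here are invented):

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_working_range(hull, q):
    """Input record q is permitted iff it lies inside the convex envelope."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], q) >= 0 for i in range(n))

# Hypothetical 2-D training inputs (two process variables per record).
training_inputs = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0), (2.0, 1.0)]
hull = convex_hull(training_inputs)
inside = in_working_range(hull, (2.0, 1.5))   # interpolation: permitted
outside = in_working_range(hull, (6.0, 1.0))  # extrapolation: not permitted
```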
-
Patent number: 10546238
Abstract: A technique for training a neural network including an input layer, one or more hidden layers and an output layer, in which the trained neural network can be used to perform a task such as speech recognition. In the technique, a base of the neural network having at least a pre-trained hidden layer is prepared. A parameter set associated with one pre-trained hidden layer in the neural network is decomposed into a plurality of new parameter sets. The number of hidden layers in the neural network is increased by using the plurality of the new parameter sets. Pre-training for the neural network is performed.
Type: Grant
Filed: April 9, 2019
Date of Patent: January 28, 2020
Assignee: International Business Machines Corporation
Inventors: Takashi Fukuda, Osamu Ichikawa
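One simple, function-preserving way to "decompose a parameter set into a plurality of new parameter sets" is to split a layer W into an identity layer followed by W, assuming linear activations for the sketch; the patent itself does not fix a particular decomposition, so treat this as an illustrative warm-start construction:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Pre-trained hidden layer (3 inputs -> 2 units); linear activation assumed
# for the sketch so the decomposition is exactly function-preserving.
W = [[0.5, -1.0, 2.0],
     [1.5, 0.25, -0.5]]

# Decompose the single parameter set W into two new sets: an identity layer
# followed by W itself. The network is one layer deeper but computes the
# same function, giving a warm start for further pre-training.
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
W1, W2 = identity, W

x = [[0.2], [0.4], [-0.6]]            # a column input vector
shallow = matmul(W, x)                # original one-layer computation
deep = matmul(W2, matmul(W1, x))      # decomposed two-layer computation
```

After this split, both new layers would be trained further, so the identity initialization only has to be a good starting point, not a final answer.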
-
Patent number: 11875269
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a generator neural network and an encoder neural network. The generator neural network generates, based on a set of latent values, data items which are samples of a distribution. The encoder neural network generates a set of latent values for a respective data item. The training method comprises jointly training the generator neural network, the encoder neural network and a discriminator neural network configured to distinguish between samples generated by the generator network and samples of the distribution which are not generated by the generator network. The discriminator neural network is configured to distinguish by processing, by the discriminator neural network, an input pair comprising a sample part and a latent part.
Type: Grant
Filed: May 22, 2020
Date of Patent: January 16, 2024
Assignee: DeepMind Technologies Limited
Inventors: Jeffrey Donahue, Karen Simonyan
-
Patent number: 8521671
Abstract: Disclosed are systems, apparatuses, and methods for clustering data. Such a method includes providing input data to each of a plurality of cluster microcircuits of a neural network, wherein each cluster microcircuit includes a mean neural group and a variance neural group. The method also includes determining a response of each cluster microcircuit with respect to the input data. The method further includes modulating the mean neural group and the variance neural group of each cluster microcircuit responsive to a value system.
Type: Grant
Filed: August 4, 2010
Date of Patent: August 27, 2013
Assignee: The Intellisis Corporation
Inventor: Douglas A. Moore
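A 1-D caricature of the mean/variance cluster microcircuits described above: each circuit keeps a mean and a variance, responds according to its distance from the input scaled by its variance, and the winning circuit's groups are modulated toward the input. The response form, learning rate, and data are assumptions for illustration, not the patented mechanism:

```python
# Each "cluster microcircuit" keeps a mean group and a variance group (1-D here).
clusters = [
    {"mean": 0.0, "var": 1.0},
    {"mean": 10.0, "var": 1.0},
]

def response(c, x):
    """Closeness of x to the circuit, scaled by its variance (assumed form)."""
    return -((x - c["mean"]) ** 2) / (2.0 * c["var"])

def update(clusters, x, lr=0.2):
    """Modulate the winning circuit's mean and variance groups toward x."""
    best = max(clusters, key=lambda c: response(c, x))
    best["mean"] += lr * (x - best["mean"])
    best["var"] += lr * ((x - best["mean"]) ** 2 - best["var"])

# Two well-separated hypothetical clusters of inputs.
data = [0.5, 9.5, 1.0, 10.5, -0.5, 9.0]
for x in data:
    update(clusters, x)
```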
-
Patent number: 11631000
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a student neural network. In one aspect, there is provided a method comprising: processing a training input using the student neural network to generate a student neural network output comprising a respective score for each of a plurality of classes; processing the training input using a brain emulation neural network to generate a brain emulation neural network output comprising a respective score for each of the plurality of classes; and adjusting current values of the student neural network parameters using gradients of an objective function that characterizes a similarity between: (i) the student neural network output for the training input, and (ii) the brain emulation neural network output for the training input.
Type: Grant
Filed: December 31, 2019
Date of Patent: April 18, 2023
Assignee: X Development LLC
Inventors: Sarah Ann Laszlo, Philip Edwin Watson
-
Patent number: 8214309
Abstract: A self-adapting cognitive-neural method and system for image analysis is disclosed. The method comprises detecting a set of potential items of interest in an image using a cognitive algorithm, refining the set of potential items of interest using a neural analysis, and adapting the cognitive algorithm based on information gained from the results of the neural analysis. The method may further comprise fusing the results of the cognitive and neural algorithms with a fusion classifier. The neural analysis can include eye-tracked EEG, RSVP, or presentation to an operator for visual inspection.
Type: Grant
Filed: December 16, 2008
Date of Patent: July 3, 2012
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, Michael J. Daily
-
Publication number: 20100100514
Abstract: A sensor unit comprising a sensor, a neural processor and a communication device, wherein the sensor unit is adapted to perform pattern recognition by means of the neural processor and to transfer the result of the pattern recognition via the communication device.
Type: Application
Filed: October 29, 2008
Publication date: April 22, 2010
Applicant: Deutsch-Französisches Forschungsinstitut Saint-Louis
Inventors: Pierre Raymond, Guy Paillet, Anne Menendez
-
Publication number: 20150106311
Abstract: A method and apparatus for constructing a neuroscience-inspired artificial neural network (NIDA) or a dynamic adaptive neural network array (DANNA), or combinations of substructures thereof, comprises one of constructing a substructure of an artificial neural network for performing a subtask of the task of the artificial neural network, or extracting a useful substructure based on one of activity, causality path, behavior, and inputs and outputs. The method includes identifying useful substructures in artificial neural networks that may be either successful at performing a subtask or unsuccessful at performing a subtask. Successful substructures may be implanted in an artificial neural network and unsuccessful substructures may be extracted from the artificial neural network for performing the task.
Type: Application
Filed: October 14, 2014
Publication date: April 16, 2015
Inventors: J. Douglas Birdwell, Mark E. Dean, Catherine Schuman
-
Patent number: 9767410
Abstract: This specification describes, among other things, a computer-implemented method. The method can include training a baseline neural network using a first set of training data. For each node in a subset of interconnected nodes in the baseline neural network, a rank-k approximation of a filter for the node can be computed. A subset of nodes in a rank-constrained neural network can then be initialized with the rank-k approximations of the filters from the baseline neural network. The subset of nodes in the rank-constrained neural network can correspond to the subset of nodes in the baseline neural network. After initializing, the rank-constrained neural network can be trained using a second set of training data while maintaining a rank-k filter topology for the subset of nodes in the rank-constrained neural network.
Type: Grant
Filed: June 15, 2015
Date of Patent: September 19, 2017
Assignee: Google Inc.
Inventors: Raziel Alvarez Guevara, Preetum Nakkiran
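For rank k = 1 the filter approximation above can be sketched with power iteration in place of a full truncated SVD. The 2×2 "filter" below is invented and chosen to be exactly rank one, so the rank-1 reconstruction used to initialize the rank-constrained node recovers it exactly:

```python
# Rank-1 approximation of a node's filter via power iteration (a stand-in
# for the truncated SVD one would use for a general rank-k approximation).
W = [[2.0, 4.0],
     [1.0, 2.0]]   # exactly rank 1: outer product of [2, 1] and [1, 2]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

# Power iteration on W^T W converges to the top right-singular vector.
v = [1.0, 0.0]
for _ in range(50):
    v = normalize(matvec(transpose(W), matvec(W, v)))

u_scaled = matvec(W, v)                            # equals sigma * u
W1 = [[ui * vj for vj in v] for ui in u_scaled]    # rank-1 reconstruction of W
```

For a real filter of higher rank, W1 would be the best rank-1 approximation in the least-squares sense, and the rank-constrained network keeps this factored topology during further training.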
-
Publication number: 20150039543
Abstract: A system for detecting a network intrusion includes a first neural network for determining a first plurality of weight values corresponding to a plurality of vectors of an input data, a second neural network for updating the first plurality of weight values received from the first neural network to a second plurality of weight values based on the plurality of vectors of the input data, a third neural network for updating the second plurality of weight values received from the second neural network to a third plurality of weight values based on the plurality of vectors of the input data, and a classification module for classifying the plurality of vectors under at least one of a plurality of intrusions based on the third plurality of weight values received from the third neural network.
Type: Application
Filed: July 30, 2014
Publication date: February 5, 2015
Inventors: Balakrishnan Athmanathan, Supriya Kamthania
-
Patent number: 11574193
Abstract: A method and system for training a neural network are described. The method includes providing at least one continuously differentiable model of the neural network. The at least one continuously differentiable model is specific to hardware of the neural network. The method also includes iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network. Each iteration uses at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model.
Type: Grant
Filed: September 5, 2018
Date of Patent: February 7, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Borna J. Obradovic, Titash Rakshit, Jorge A. Kittl, Ryan M. Hatcher
-
Patent number: 9111224
Abstract: Certain aspects of the present disclosure support a technique for neural learning of natural multi-spike trains in spiking neural networks. A synaptic weight can be adapted depending on a resource associated with the synapse, which can be depleted by weight change and can recover over time. In one aspect of the present disclosure, the weight adaptation may depend on a time since the last significant weight change.
Type: Grant
Filed: October 19, 2011
Date of Patent: August 18, 2015
Assignee: QUALCOMM Incorporated
Inventor: Jason Frank Hunzinger