Abstract: A neural processing engine may perform processing within a neural processing system and/or artificial neural network. The neural processing engine may be configured to effectively and efficiently perform the type of processing required in implementing a neural processing system and/or an artificial neural network. This configuration may facilitate such processing with neural processing engines having an enhanced computational density and/or processor density with respect to conventional processing units.
Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
Type:
Application
Filed:
June 20, 2003
Publication date:
December 23, 2004
Inventors:
Carl Staelin, Darryl Greig, Mani Fischer, Ron Maurer
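A minimal sketch of the idea in this abstract, not HP's actual method: a one-weight "network" rescales the input values, and each weight update is driven by a neighborhood average of the errors rather than by individual per-sample errors. The 2x scaling target, the 3-sample neighborhood, and the learning rate are all illustrative assumptions.

```python
# Illustrative sketch (assumed details): a one-weight "network" rescales
# the input, and updates use neighborhood-averaged errors.
x = [0.1 * i for i in range(10)]
target = [2.0 * v for v in x]            # desired rescaled values

w = 0.2
for _ in range(100):
    out = [w * v for v in x]             # rescale the input with the network
    err = [o - t for o, t in zip(out, target)]
    # neighborhoods of the errors: 3-sample moving average
    nbhd = [sum(err[max(0, i - 1):i + 2]) / len(err[max(0, i - 1):i + 2])
            for i in range(len(err))]
    grad = sum(e * v for e, v in zip(nbhd, x)) / len(x)
    w -= 0.5 * grad                      # adjust the connection weight
```

Averaging errors over neighborhoods smooths out isolated outliers before they reach the weight update, which is the property the abstract highlights.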
Abstract: Embodiments of the invention relate to mapping neural dynamics of a neural model on to a lookup table. One embodiment comprises defining a phase plane for a neural model. The phase plane represents neural dynamics of the neural model. The phase plane is coarsely sampled to obtain state transition information for multiple neuronal states. The state transition information is mapped on to a lookup table.
Type:
Application
Filed:
December 21, 2012
Publication date:
June 26, 2014
Applicant:
International Business Machines Corporation
Inventors:
Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Paul A. Merolla, Dharmendra S. Modha
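The mapping described above can be sketched as follows. This is an illustrative toy, not IBM's implementation: an Izhikevich-style neuron model (an assumption; the abstract does not name the model) has its (v, u) phase plane coarsely sampled, with one Euler step per grid cell stored in a lookup table, so that simulation replaces arithmetic with table lookups.

```python
# Illustrative sketch (assumed neuron model and grid): coarsely sample the
# (v, u) phase plane and store one state transition per grid cell.
DV, DU = 2.0, 0.5                # coarse grid resolution
V_MIN, U_MIN = -80.0, -20.0

def step(v, u):
    # one 1 ms Euler step of the neural dynamics (10 pA input current)
    dv = 0.04 * v * v + 5.0 * v + 140.0 - u + 10.0
    du = 0.02 * (0.2 * v - u)
    return v + dv, u + du

table = {}
for i in range(80):              # coarse sampling of the phase plane
    for j in range(80):
        table[(i, j)] = step(V_MIN + i * DV, U_MIN + j * DU)

def lut_step(v, u):
    # quantize the state to its grid cell and look up the transition
    i = min(max(int((v - V_MIN) / DV), 0), 79)
    j = min(max(int((u - U_MIN) / DU), 0), 79)
    return table[(i, j)]
```

The coarser the grid, the smaller the table and the larger the quantization error in each transition, which is the trade-off coarse sampling buys.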
Abstract: Constructing and simulating artificial neural networks and components thereof within a spreadsheet environment results in user-friendly neural networks that do not require algorithm-based software in order to train or operate. Such neural networks can be easily cascaded to form complex neural networks and neural network systems, including neural networks capable of self-organizing so as to self-train within a spreadsheet, neural networks that train simultaneously within a spreadsheet, and neural networks capable of autonomously moving, monitoring, analyzing, and altering data within a spreadsheet. Neural networks can also be cascaded together in self-training neural network form to achieve a device prototyping system.
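A minimal sketch of a spreadsheet-hosted neuron, not the patented system: a dict stands in for the cell grid, and one neuron is built from two cell formulas, a weighted sum (as a spreadsheet SUMPRODUCT would compute) and a sigmoid activation. The cell addresses and values are arbitrary.

```python
import math

# Illustrative sketch (assumed cells and values): a neuron as two
# spreadsheet formulas over input and weight cells.
sheet = {
    "A1": 0.5, "A2": 0.8,        # input cells
    "B1": 0.4, "B2": -0.6,       # weight cells
}
# C1: =SUMPRODUCT(A1:A2, B1:B2), the neuron's weighted input
sheet["C1"] = sheet["A1"] * sheet["B1"] + sheet["A2"] * sheet["B2"]
# D1: =1/(1+EXP(-C1)), the sigmoid activation
sheet["D1"] = 1.0 / (1.0 + math.exp(-sheet["C1"]))
```

Because every intermediate value lives in a cell, the network's state is directly visible and editable, which is the user-friendliness the abstract emphasizes.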
Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
Type:
Grant
Filed:
June 20, 2003
Date of Patent:
August 5, 2008
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Carl Staelin, Darryl Greig, Mani Fischer, Ron Maurer
Abstract: Provided are a method of generating a trained third neural network that recognizes the speaker of a noisy speech signal by combining a trained first neural network, a skip-connection-based neural network for removing noise from the noisy speech signal, with a trained second neural network for recognizing the speaker of a speech signal, and a neural network device for operating the neural networks.
Type:
Grant
Filed:
August 19, 2019
Date of Patent:
March 1, 2022
Assignees:
SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors:
Sungchan Kang, Namsoo Kim, Cheheung Kim, Hyungyong Kim
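The combination described above can be sketched schematically. This is not Samsung's architecture: denoiser, recognizer, and combined are hypothetical stand-ins for the first, second, and third networks, and the skip connection is reduced to its defining form, output = f(x) + x.

```python
import math

# Illustrative sketch (all functions are hypothetical stand-ins): a
# skip-connection denoising stage chained into a recognition stage.
def denoiser(x):
    transformed = [math.tanh(v) - 0.5 * v for v in x]
    return [t + v for t, v in zip(transformed, x)]   # skip connection: f(x) + x

def recognizer(x):
    # stand-in speaker classifier on the denoised features
    return 0 if sum(x) / len(x) >= 0 else 1

def combined(x):
    # the "third" network: denoiser and recognizer composed
    return recognizer(denoiser(x))
```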
Abstract: An artificial neural network is utilized as a filter in an optical heterodyne balanced receiver system. Such an artificial neural network can be adapted or trained to correct for non-ideal behavior and imperfections in the optical heterodyne system.
Abstract: A method for generating an artificial neural network ensemble for determining stimulation design parameters. A population of artificial neural networks is trained to produce one or more output values in response to a plurality of input values. The population of artificial neural networks is optimized to create an optimized population of artificial neural networks. A plurality of ensembles of artificial neural networks is selected from the optimized population of artificial neural networks and optimized using a genetic algorithm having a multi-objective fitness function. The ensemble with the desired prediction accuracy based on the multi-objective fitness function is then selected.
Type:
Application
Filed:
January 14, 2008
Publication date:
July 16, 2009
Inventors:
Dwight David Fulton, Stanley V. Stephenson
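The ensemble-selection step above can be sketched with a toy genetic algorithm. This is not Halliburton's method: the "predictors" are biased toy estimators standing in for trained neural networks, and a single scalarized fitness (prediction error plus a size penalty) stands in for the patent's multi-objective fitness function.

```python
import random

# Illustrative sketch (assumed predictors and fitness): evolve a bitmask
# selecting which predictors join the ensemble.
random.seed(1)
truth = lambda x: 2.0 * x + 1.0
predictors = [lambda x, b=random.uniform(-1.0, 1.0): 2.0 * x + 1.0 + b
              for _ in range(12)]            # toy stand-ins for trained nets
xs = [i / 10.0 for i in range(20)]

def fitness(mask):
    members = [p for p, m in zip(predictors, mask) if m]
    if not members:
        return float("-inf")
    err = sum((sum(p(x) for p in members) / len(members) - truth(x)) ** 2
              for x in xs)
    return -err - 0.01 * sum(mask)           # blend accuracy and ensemble size

pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(30)]
f0 = max(fitness(m) for m in pop)            # best fitness before evolution
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                     # elitism: keep the best ensembles
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(12)           # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:            # occasional bit-flip mutation
            child[random.randrange(12)] ^= 1
        children.append(child)
    pop = survivors + children
best = max(pop, key=fitness)
```

Averaging several predictors lets individual biases cancel, so the genetic search tends toward small subsets whose errors offset each other.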
Abstract: A method, system, and machine-readable storage medium for monitoring an engine using a cascaded neural network that includes a plurality of neural networks is disclosed. In operation, the method, system, and machine-readable storage medium store data corresponding to the cascaded neural network. Signals generated by a plurality of engine sensors are then input into the cascaded neural network. Next, a second neural network is updated at a first rate with an output of a first neural network, where the output is based on the input signals. In response, the second neural network outputs, at a second rate, at least one engine control signal, where the second rate is faster than the first rate.
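The two-rate cascade can be sketched as follows. This is not the patented monitor: slow_net and fast_net are stand-ins for the first and second networks, and the point is only the timing structure, with the fast stage consuming the slow stage's latest output every tick.

```python
import math

# Illustrative sketch (stand-in networks, simulated sensors): a slow first
# stage updates every 10 ticks; a fast second stage runs every tick.
def slow_net(history):
    return sum(history) / len(history)       # stand-in for neural network #1

def fast_net(sensor, context):
    return math.tanh(sensor - context)       # stand-in for neural network #2

sensors = [math.sin(t / 5.0) for t in range(100)]   # simulated engine sensor
context = 0.0
controls = []
for t, s in enumerate(sensors):
    if t % 10 == 0:                          # first (slower) rate
        context = slow_net(sensors[max(0, t - 9):t + 1])
    controls.append(fast_net(s, context))    # second (faster) rate: every tick
```

Splitting the work this way lets an expensive summarizing network run slowly while a cheap control network keeps pace with the sensors.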
Abstract: This invention is in the field of machine learning and neural associative memory. In particular the invention discloses a neural associative memory structure for storing and maintaining associations between memory address patterns and memory content patterns using a neural network, as well as methods for storing and retrieving such associations. Bayesian learning is applied to achieve non-linear learning.
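A minimal sketch of a neural associative memory, with a Willshaw-style clipped-Hebbian store standing in for the patent's Bayesian learning rule (an explicit substitution; the abstract's non-linear Bayesian rule is not reproduced here). It keeps the same structure: address pattern to content pattern associations held in a weight matrix.

```python
import random

# Illustrative sketch (Willshaw-style rule stands in for Bayesian
# learning): store and retrieve sparse pattern associations.
random.seed(2)
N = 32

def sparse_pattern(k=4):
    p = [0] * N
    for i in random.sample(range(N), k):
        p[i] = 1
    return p

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(5)]
W = [[0] * N for _ in range(N)]
for addr, content in pairs:                  # store: weights clipped at 1
    for i in range(N):
        for j in range(N):
            if addr[i] and content[j]:
                W[i][j] = 1

def retrieve(addr):
    score = [sum(W[i][j] for i in range(N) if addr[i]) for j in range(N)]
    thresh = sum(addr)                       # fire only fully-supported units
    return [1 if s >= thresh else 0 for s in score]
```

With this threshold, every stored content bit is always recovered; errors can only appear as spurious extra bits when the memory is overloaded.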
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining update rules for training neural networks. One of the methods includes generating, using a controller neural network, a batch of output sequences, each output sequence in the batch defining a respective update rule; for each output sequence in the batch: training a respective instance of a child neural network using the update rule defined by the output sequence; evaluating a performance of the trained instance of the child neural network on the particular neural network task to determine a performance metric for the trained instance of the child neural network on the particular neural network task; and using the performance metrics for the trained instances of the child neural network to adjust the current values of the controller parameters of the controller neural network.
Type:
Grant
Filed:
October 24, 2019
Date of Patent:
February 16, 2021
Assignee:
Google LLC
Inventors:
Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc V. Le
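The search loop described above can be sketched in drastically reduced form. This is not Google's system: a table of preference scores stands in for the controller RNN, three fixed candidate update rules stand in for generated output sequences, and a one-parameter quadratic stands in for the child network; the performance metric of each trained child adjusts the controller's preferences.

```python
import math
import random

# Illustrative sketch (assumed rules, toy child task): sample an update
# rule, train a child with it, reinforce rules that reach low loss.
random.seed(3)

def train_child(update):
    w = 5.0                                  # child task: minimize (w - 2)^2
    for _ in range(30):
        grad = 2.0 * (w - 2.0)
        w = update(w, grad)
    return (w - 2.0) ** 2                    # performance metric (final loss)

rules = {
    "sgd":      lambda w, g: w - 0.1 * g,
    "sign_sgd": lambda w, g: w - 0.1 * (1 if g > 0 else -1 if g < 0 else 0),
    "ascent":   lambda w, g: w + 0.1 * g,    # deliberately bad rule
}
prefs = {name: 0.0 for name in rules}
for _ in range(40):
    total = sum(math.exp(p) for p in prefs.values())
    r = random.random() * total              # sample a rule ~ softmax(prefs)
    for name, p in prefs.items():
        r -= math.exp(p)
        if r <= 0:
            break
    loss = train_child(rules[name])
    prefs[name] += 0.5 * -math.log1p(loss)   # reinforce low-loss rules
```

Rules that drive the child's loss down keep their sampling probability, while the loss-increasing rule is quickly suppressed.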
Abstract: Techniques for reconstructing a signal encoded with a time encoding machine (TEM) using a recurrent neural network including receiving a TEM-encoded signal, processing the TEM-encoded signal, and reconstructing the TEM-encoded signal with a recurrent neural network.
Type:
Application
Filed:
July 23, 2013
Publication date:
November 21, 2013
Applicant:
The Trustees of Columbia University in the City of New York
Abstract: Certain aspects of the present disclosure support a technique for neural learning of natural multi-spike trains in spiking neural networks. A synaptic weight can be adapted depending on a resource associated with the synapse, which can be depleted by weight change and can recover over time. In one aspect of the present disclosure, the weight adaptation may depend on a time since the last significant weight change.
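The resource mechanism can be sketched as follows. This is not Qualcomm's rule: each synapse carries a resource in [0, 1] that is consumed in proportion to the weight change and recovers over time; the periodic raw change of 0.2 is a hypothetical STDP-driven update, and both rates are assumed values.

```python
# Illustrative sketch (assumed rates): a depletable synaptic resource
# gates how much of each STDP-driven weight change is applied.
w, resource = 0.5, 1.0
history = []
for t in range(50):
    raw_dw = 0.2 if t % 5 == 0 else 0.0      # hypothetical STDP event
    dw = raw_dw * resource                   # depleted resource throttles change
    w += dw
    resource = max(0.0, resource - abs(dw))  # depletion proportional to change
    resource = min(1.0, resource + 0.02)     # slow recovery each time step
    history.append(dw)
```

Because recovery is slower than depletion, closely spaced weight changes shrink, matching the abstract's point that adaptation depends on the time since the last significant change.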
Abstract: A method and apparatus for constructing a neuroscience-inspired artificial neural network (NIDA) or a dynamic adaptive neural network array (DANNA), or combinations of substructures thereof, comprises either constructing a substructure of an artificial neural network for performing a subtask of the network's task or extracting a useful substructure based on one of activity, causality path, behavior, and inputs and outputs. The method includes identifying useful substructures in artificial neural networks that may be either successful or unsuccessful at performing a subtask. Successful substructures may be implanted into an artificial neural network, and unsuccessful substructures may be extracted from it.
Type:
Application
Filed:
October 14, 2014
Publication date:
April 16, 2015
Inventors:
J. Douglas Birdwell, Mark E. Dean, Catherine Schuman
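One of the extraction criteria, activity, can be sketched on a toy graph. This is not the patented procedure: nodes carry hypothetical firing counts, and the substructure is extracted by thresholding activity and keeping the induced edges.

```python
# Illustrative sketch (hypothetical graph and activity counts): extract
# the substructure of nodes whose activity clears a threshold.
edges = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": []}
activity = {"in": 10, "h1": 8, "h2": 0, "out": 7}   # hypothetical firing counts

def extract_substructure(edges, activity, threshold=1):
    keep = {n for n, a in activity.items() if a >= threshold}
    return {n: [m for m in nbrs if m in keep]
            for n, nbrs in edges.items() if n in keep}
```

Inactive node h2 and its edges drop out, leaving the active path as the candidate substructure for reuse or removal.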
Abstract: Techniques for reconstructing a signal encoded with a time encoding machine (TEM) using a recurrent neural network including receiving a TEM-encoded signal, processing the TEM-encoded signal, and reconstructing the TEM-encoded signal with a recurrent neural network.
Type:
Grant
Filed:
July 23, 2013
Date of Patent:
October 28, 2014
Assignee:
The Trustees of Columbia University in the City of New York
Abstract: A system is described herein which uses a neural network having an input layer that accepts an input vector and a feature vector. The input vector represents at least part of input information, such as, but not limited to, a word or phrase in a sequence of input words. The feature vector provides supplemental information pertaining to the input information. The neural network produces an output vector based on the input vector and the feature vector. In one implementation, the neural network is a recurrent neural network. Also described herein are various applications of the system, including a machine translation application.
Type:
Application
Filed:
February 10, 2013
Publication date:
August 14, 2014
Applicant:
MICROSOFT CORPORATION
Inventors:
Geoffrey G. Zweig, Tomas Mikolov, Alejandro Acero
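The input-layer arrangement described above can be sketched with a minimal recurrent cell. This is not Microsoft's model: the cell, the layer sizes, and the feature vector's meaning are all illustrative; the point is only that the input layer receives the concatenation of the input vector and a supplemental feature vector.

```python
import math
import random

# Illustrative sketch (assumed sizes and weights): a recurrent cell whose
# input is the concatenation of an input vector and a feature vector.
random.seed(4)
IN, FEAT, HID = 3, 2, 4
Wx = [[random.uniform(-0.5, 0.5) for _ in range(IN + FEAT)] for _ in range(HID)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(HID)] for _ in range(HID)]

def rnn_step(x, f, h):
    xin = x + f                              # concatenate input and feature vectors
    return [math.tanh(sum(Wx[i][j] * xin[j] for j in range(IN + FEAT)) +
                      sum(Wh[i][k] * h[k] for k in range(HID)))
            for i in range(HID)]

feature = [1.0, 0.0]                         # supplemental context indicator
h = [0.0] * HID
for x in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]):
    h = rnn_step(x, feature, h)              # output vector per input word
```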
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a deep neural network. One of the methods includes training a deep neural network with a first training set by adjusting values for each of a plurality of weights included in the neural network, and training the deep neural network to determine a probability that data received by the deep neural network has features similar to key features of one or more keywords or key phrases, the training comprising providing the deep neural network with a second training set and adjusting the values for a first subset of the plurality of weights, wherein the second training set includes data representing the key features of the one or more keywords or key phrases.
Type:
Application
Filed:
March 31, 2014
Publication date:
May 7, 2015
Applicant:
GOOGLE INC.
Inventors:
Maria Carolina Parada San Martin, Guoguo Chen, Georg Heigold
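The two-phase training with a restricted weight subset can be sketched on a toy model. This is not Google's recipe: a two-weight model is trained on a first set with every weight adjustable, then trained on a second set while only a subset of the weights (here w2) is adjusted, with the tasks and learning rate as assumed values.

```python
# Illustrative sketch (toy model and tasks): phase 1 adjusts all weights,
# phase 2 adjusts only a subset while the rest stay frozen.
def forward(w1, w2, x):
    return w2 * (w1 * x)

def train(w1, w2, data, lr, adjust_w1):
    for x, y in data:
        err = forward(w1, w2, x) - y
        if adjust_w1:                        # phase 1: every weight moves
            w1 -= lr * err * w2 * x
        w2 -= lr * err * w1 * x              # this subset is always adjusted
    return w1, w2

phase1 = [(x / 10.0, 0.6 * x / 10.0) for x in range(1, 11)]   # task: y = 0.6x
phase2 = [(x / 10.0, 0.9 * x / 10.0) for x in range(1, 11)]   # new task: y = 0.9x

w1, w2 = 0.5, 0.5
for _ in range(200):
    w1, w2 = train(w1, w2, phase1, 0.3, adjust_w1=True)
w1_frozen = w1                               # w1 must not move in phase 2
for _ in range(200):
    w1, w2 = train(w1, w2, phase2, 0.3, adjust_w1=False)
```

Freezing most weights preserves the general features learned in phase 1 while the free subset adapts to the keyword-specific second training set.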
Abstract: A system receives runtime information from a plurality of software objects. The software objects include an executable, a modularization unit, and a data dictionary. The system executes a training phase in a software neural network using the runtime information. The software neural network generates a pattern among the executables, modularization units, and data dictionaries using the software neural network such that a particular executable is pattern-matched with one or more modularization units and one or more data dictionaries.