Abstract: A method may include training a thermodynamic neural network having a plurality of nodes interconnected by a plurality of edges. The training of the thermodynamic neural network may include determining an optimal organization of the thermodynamic neural network in which one or more charges are transferred through the thermodynamic neural network with a minimum quantity of residual charge remaining at each of the plurality of nodes. The trained thermodynamic neural network may be deployed to perform a cognitive task. The cognitive task may include the trained thermodynamic neural network receiving a first set of charges corresponding to an input sample and outputting a second set of charges corresponding to a decision associated with the input sample. Related systems and articles of manufacture, including computer program products, are also provided.
Abstract: A neural network learning device 20 is equipped with: a determination module 22 that determines, for each layer and on the basis of the structure of a neural network 21 containing multiple layers, the size of a local region in learning information 200 which is to be learned by the neural network 21; and a control module 25 that, on the basis of the size of the local region determined by the determination module 22, extracts the local region from the learning information 200 and performs control such that the neural network 21 repeatedly learns the learning information represented by the extracted local region while the size of the extracted local region is changed. A reduction in the generalization performance of the neural network can thus be avoided even when there is little learning data.
Abstract: Convolutional neural networks can be visualized. For example, a graphical user interface (GUI) can include a matrix of symbols indicating feature-map values that represent a likelihood of a particular feature being present or absent in an input to a convolutional neural network. The GUI can also include a node-link diagram representing a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network, a second row of symbols representing a hidden layer of the feed forward neural network, and a third row of symbols representing an output layer of the feed forward neural network. Lines between the rows of symbols can represent connections between nodes in the input layer, the hidden layer, and the output layer of the feed forward neural network.
Type:
Application
Filed:
October 4, 2017
Publication date:
April 5, 2018
Applicants:
SAS Institute Inc., North Carolina State University
Inventors:
Samuel Paul Leeman-Munk, Saratendu Sethi, Christopher Graham Healey, Shaoliang Nie, Kalpesh Padia, Ravinder Devarajan, David James Caira, Jordan Riley Benson, James Allen Cox, Lawrence E. Lewis, Mustafa Onur Kabul
Abstract: A speaker identification/verification system comprises at least one feature extractor for extracting a plurality of audio features from speaker voice data, a plurality of speaker-specific subsystems, and a decision module. Each of the speaker-specific subsystems comprises: a neural network configured to generate an estimate of the plurality of extracted audio features based on the plurality of extracted audio features, and an error module. Each of the plurality of neural networks is associated with a different one of a plurality of speakers. The error module is configured to estimate an error based on the plurality of extracted audio features and the estimate generated by the associated neural network. The neural networks are speaker-specific auto-encoders, each trained for one user and therefore calibrated on that particular user's speech.
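The per-speaker auto-encoder idea above can be sketched as follows. The linear "auto-encoders", feature dimension, and speaker names are illustrative assumptions for the sketch, not details from the patent; in the described system each auto-encoder would be a neural network trained on one speaker's features.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_autoencoder(n_features, n_hidden, seed):
    # Hypothetical stand-in for a trained speaker-specific auto-encoder:
    # a fixed random linear encode/decode pair.
    r = np.random.default_rng(seed)
    W = r.normal(size=(n_features, n_hidden))
    return lambda x: (x @ W) @ W.T / n_hidden  # encode, then decode

def reconstruction_error(autoencoder, features):
    # Error module: distance between the extracted features and their estimate.
    return float(np.mean((features - autoencoder(features)) ** 2))

def identify(features, autoencoders):
    # Decision module: the speaker whose auto-encoder reconstructs the
    # extracted features with the lowest error is the best match.
    errors = {spk: reconstruction_error(ae, features)
              for spk, ae in autoencoders.items()}
    return min(errors, key=errors.get), errors
```

Because each auto-encoder is calibrated on one speaker's speech, features from that speaker should reconstruct with lower error than features from anyone else.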
Abstract: Provided are a device and method for training a neural network. The method includes generating a candidate solution set by modifying a candidate solution which represents a basic neural network model in a variable-length string form, acquiring first candidate solutions by performing architecture variation-based unsupervised learning with a plurality of candidate solutions selected from the candidate solution set, selecting a neural network model represented by a first candidate solution which satisfies targeted effective performance as a first neural network model, acquiring second candidate solutions by performing selective error propagation-based supervised learning with the first neural network model, and selecting a neural network model represented by a second candidate solution which satisfies the targeted effective performance as a final neural network model.
Type:
Application
Filed:
November 26, 2019
Publication date:
May 28, 2020
Applicant:
Electronics and Telecommunications Research Institute
Inventors:
Yong Hyuk MOON, Jun Yong PARK, Yong Ju LEE
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for three-dimensionally stacked neural network accelerators. In one aspect, a method includes obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile. The three-dimensionally stacked neural network accelerator includes a plurality of neural network dies, each neural network die including a respective plurality of tiles, each tile having input and output connections. The three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration. The method further includes modifying the dataflow configuration to route an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.
Abstract: Embodiments of the present disclosure include systems and methods for training neural networks. In one embodiment, a neural network may receive input data and produce output results in response to the input data and weights of the neural network. An error is determined at an output of the neural network based on the output results. The error is propagated in a reverse direction through the neural network, from the output through one or more intermediate outputs, to adjust the weights.
Type:
Application
Filed:
May 8, 2020
Publication date:
November 11, 2021
Inventors:
Andy WAGNER, Tiyasa MITRA, Marc TREMBLAY
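The reverse error propagation described in the abstract above can be sketched as follows. The two-layer architecture, tanh activation, batch size, and learning rate are illustrative assumptions for the sketch, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs, 8 hidden units, 1 output.
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def train_step(x, y, lr=0.1):
    """One step: forward pass, error at the output, then the error is
    propagated in a reverse direction through the intermediate output
    to adjust both weight matrices."""
    global W1, W2
    h = np.tanh(x @ W1)                      # intermediate output
    y_hat = h @ W2                           # output results
    err = (y_hat - y) / len(x)               # error at the output
    grad_W2 = h.T @ err                      # gradient for the second layer
    grad_h = err @ W2.T                      # error carried backwards
    grad_W1 = x.T @ (grad_h * (1 - h ** 2))  # through the tanh nonlinearity
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
    return float(np.mean((y_hat - y) ** 2))
```

Repeated calls to `train_step` on the same batch drive the output error down, which is the observable effect of the weight adjustment the abstract describes.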
Abstract: The present disclosure provides compositions, kits, and methods of promoting neural growth and/or neural survival using IL-17c. The compositions, kits, and methods can be used to promote neural growth and/or neural survival in a variety of conditions where such growth and survival is beneficial.
Type:
Grant
Filed:
July 7, 2016
Date of Patent:
October 5, 2021
Assignees:
Fred Hutchinson Cancer Research Center, University of Washington
Abstract: A system embodiment comprises at least one respiration sensor, a neural stimulation therapy delivery module, and a controller. The respiration sensor is adapted for use in monitoring respiration of the patient. The neural stimulation therapy delivery module is adapted to generate a neural stimulation signal for use in stimulating the autonomic neural target of the patient for the chronic neural stimulation therapy. The controller is adapted to receive a respiration signal from the at least one respiration sensor indicative of the patient's respiration, and adapted to control the neural stimulation therapy delivery module using a respiratory variability measurement derived from the respiration signal.
Type:
Application
Filed:
June 25, 2007
Publication date:
December 25, 2008
Applicant:
Cardiac Pacemakers, Inc.
Inventors:
Yachuan Pu, Anthony V. Caparso, Gerrard M. Carlson, Joseph M. Pastore
Abstract: Neural representations may be used for multi-view reconstruction of scenes. A plurality of color images representing a scene from a plurality of camera poses may be received. For each point of a plurality of points along a ray, a signed distance and a color value may be determined as a function of a feature volume, a first neural network, and a second neural network. A predicted output color may be determined as a function of a density derived from the signed distance. At least one of the first neural network, the second neural network, the feature volume, or a transformation parameter may be adjusted based on the predicted output color and a corresponding target color obtained from one of the color images. A three-dimensional representation of the scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter.
Type:
Application
Filed:
March 17, 2023
Publication date:
October 19, 2023
Applicant:
Meta Platforms Technologies, LLC
Inventors:
Lei XIAO, Derek NOWROUZEZAHRAI, Joey LITALIEN, Feng LIU
Abstract: A system including a main neural network for performing one or more machine learning tasks on a network input to generate one or more network outputs. The main neural network includes a Mixture of Experts (MoE) subnetwork that includes a plurality of expert neural networks and a gating subsystem. The gating subsystem is configured to: apply a softmax function to a set of gating parameters having learned values to generate a respective softmax score for each of one or more of the plurality of expert neural networks; determine a respective weight for each of the one or more of the plurality of expert neural networks; select a proper subset of the plurality of expert neural networks; and combine the respective expert outputs generated by the one or more expert neural networks in the proper subset to generate one or more MoE outputs.
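The gating subsystem described in the abstract above can be sketched as follows: a softmax over learned gating parameters, selection of a proper subset of experts, and a weighted combination of the selected experts' outputs. The top-k selection rule, the renormalization of the selected scores into weights, and the toy experts are illustrative assumptions, not details from the patent.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of gating parameters.
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, gating_params, k=2):
    """MoE subnetwork sketch: a respective softmax score per expert,
    a proper subset (top-k) of the experts selected, and their outputs
    combined with respective weights."""
    scores = softmax(gating_params)                       # softmax score per expert
    selected = np.argsort(scores)[-k:]                    # proper subset of experts
    weights = scores[selected] / scores[selected].sum()   # respective weights
    return sum(w * experts[i](x) for i, w in zip(selected, weights))
```

With three toy experts and gating parameters `[0.1, 2.0, 1.0]`, the lowest-scoring expert is excluded and the remaining two are blended by their renormalized scores, which is the sparse-combination behavior the abstract describes.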
Abstract: The present disclosure provides a neural network processor and a neural network computation method that deploy a memory and a cache to perform a neural network computation. The memory may be configured to store data and instructions of the neural network computation, and the cache may be connected to the memory via a memory bus. Thereby, the actual compute ability of the hardware may be fully utilized, the cost and power consumption overhead may be reduced, the parallelism of the network may be fully exploited, and the efficiency of the neural network computation may be improved.
Type:
Grant
Filed:
July 23, 2019
Date of Patent:
January 10, 2023
Assignee:
SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors:
Tianshi Chen, Xiaobin Chen, Tian Zhi, Zidong Du
Abstract: A neural probe system is described having a single guide tube that is inserted into neural tissue and from which a number of neural probes can be deployed. Each probe is deployable into tissue along a desired trajectory. This is done by supporting the electrode array on a spring tape-type carrier that maintains axial stiffness once the neural probe has deployed out of a channel in the guide tube. In that way, a target neural tissue is bounded by an increased number of neural probes while trauma to surrounding body tissue is minimized.
Type:
Application
Filed:
October 21, 2014
Publication date:
April 23, 2015
Inventors:
David S. Pellinen, Bencharong Suwarato, Rio J. Vetter, Jamille Farraye Hetke, Daryl R. Kipke
Abstract: A neural network-based rating system includes a data set, said data set further comprising at least two records and at least one field associated with said records, and a data rating application, which includes: means for user input of ratings for at least a first of said records of said data set; at least one artificial neural network; means for automatically dimensioning said artificial neural network as a function of said fields within said data set; means for initiating training of said artificial neural network, said trained artificial neural network operative to generate ratings for at least a second of said records of said data set; means for initiating rating of at least said second record of said data set by said trained artificial neural network; and means for sorting said data set based on said user ratings and said artificial neural network-generated ratings.
Abstract: The present disclosure relates to neural network training, including a training method, a training device, and a system including the neural network.
Type:
Application
Filed:
September 25, 2020
Publication date:
August 26, 2021
Inventors:
BYEOUNGSU KIM, Kyoungyoung Kim, Jaegon Kim, Changgwun Lee, Sanghyuck Ha
Abstract: Embodiments of the present disclosure disclose a method and a system for training a neural network to improve adversarial robustness. The method includes collecting a plurality of data samples comprising clean data samples and adversarial data samples. The training of the neural network includes training a probabilistic encoder to encode the plurality of data samples into a probabilistic distribution over a latent space representation. In addition, the training of the neural network comprises training a classifier to classify an instance of the latent space representation to produce a classification result. In addition, the method includes training shared parameters of a first instance of the neural network using the clean data samples and of a second instance of the neural network using the adversarial data samples. Further, the method includes outputting the shared parameters of the first instance of the neural network and the second instance of the neural network.
Type:
Application
Filed:
March 18, 2022
Publication date:
September 21, 2023
Inventors:
Ye Wang, Xi Yu, Niklas Smedemark-Margulies, Shuchin Aeron, Toshiaki Koike-Akino, Pierre Moulin, Matthew Brand, Kieran Parsons
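The shared-parameter scheme in the abstract above can be sketched with a linear model: the same parameter vector serves both "instances", one trained on clean samples and one on adversarial samples, and both gradients update the single shared vector. The linear model, the squared-error loss, and the FGSM-style perturbation are illustrative assumptions; the patent does not specify how the adversarial samples are generated.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(W, x, y):
    # Squared-error loss of a linear model and its gradient in W.
    err = x @ W - y
    return float(np.mean(err ** 2)), 2 * x.T @ err / len(x)

def make_adversarial(W, x, y, eps=0.1):
    # Adversarial data samples via a gradient-sign input perturbation
    # (an FGSM-style choice made here for the sketch).
    grad_x = 2 * (x @ W - y)[:, None] * W[None, :]
    return x + eps * np.sign(grad_x)

def train_step(W, x, y, lr=0.05):
    # The same shared parameters W serve both "instances": one sees the
    # clean samples, the other the adversarial samples, and both
    # gradients update the single shared parameter vector.
    x_adv = make_adversarial(W, x, y)
    _, g_clean = loss_and_grad(W, x, y)
    _, g_adv = loss_and_grad(W, x_adv, y)
    return W - lr * (g_clean + g_adv)
```

Training this way drives the clean-data loss down while the parameters are also pulled toward values that tolerate the perturbed inputs.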
Abstract: In a neural network which includes one input layer, one or more intermediate layers and one output layer, neural elements in the input layer and neural elements in the intermediate layer are divided into groups. Arithmetic operations representing the coupling between the neural elements of the input layer and the neural elements of the intermediate layer are put into table form.
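The table-form coupling in the abstract above can be sketched as follows. Binary inputs, groups of two input elements, a single intermediate-layer element, and the specific weights are illustrative assumptions, not details from the patent; the point is that the per-group weighted sums are precomputed into tables so the forward pass becomes lookups plus additions.

```python
import numpy as np
from itertools import product

weights = np.array([0.5, -1.0, 2.0, 0.25])   # input -> intermediate coupling
group_size = 2
groups = [weights[i:i + group_size] for i in range(0, len(weights), group_size)]

# One table per input group: every possible bit pattern -> its partial sum,
# replacing the multiply-accumulate arithmetic at run time.
tables = [{bits: float(np.dot(bits, w))
           for bits in product((0, 1), repeat=group_size)}
          for w in groups]

def intermediate_input(x_bits):
    # Sum of per-group table lookups instead of a full weighted sum.
    return sum(t[tuple(x_bits[i:i + group_size])]
               for i, t in zip(range(0, len(x_bits), group_size), tables))
```

The trade-off is classic: each table has 2^group_size entries, so larger groups remove more arithmetic but cost exponentially more table storage.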
Abstract: This disclosure relates to a method and system for improving classifications performed by an artificial neural network (ANN) model. The method may include identifying, for a classification performed by the ANN model for an input, activated neurons in each neural layer of the ANN model, and analyzing the activated neurons in each neural layer with respect to Characteristic Feature Directives (CFDs) for the corresponding neural layer and for a correct class of the input. The CFDs for each neural layer may be generated after a training phase of the ANN model, based on neurons in the corresponding neural layer that are activated for a training input of the correct class. The method may further include determining, based on the analysis, differentiating neurons in each neural layer that are not activated as per the CFDs for the correct class of the input, and providing missing features based on the differentiating neurons.
Abstract: A method is provided for determining MTPA, flux-weakening, and MTPV operating points over the full speed range of an IPM motor, for the most efficient torque control of the motor using a neural network. The neural network is trained using a cloud-based neural network training algorithm. A special technique is developed to generate neural network training data that is particularly suitable and favorable for developing a high-performance neural network-based IPM torque control system, and the impact of variable motor parameters is embedded into the development and training of the neural network system. The provided method can achieve fast and accurate current reference generation with a simple neural network structure for optimal torque control of an IPM motor, and can handle MTPA, MTPV, and flux-weakening operation while considering physical motor constraints.
Abstract: Methods and systems are disclosed herein in which a physical neural network can be configured utilizing nanotechnology. Such a physical neural network can comprise a plurality of molecular conductors (e.g., nanoconductors) which form neural connections between pre-synaptic and post-synaptic components of the physical neural network. Additionally, a learning mechanism can be applied for implementing Hebbian learning via the physical neural network. Such a learning mechanism can utilize a voltage gradient or voltage gradient dependencies to implement Hebbian and/or anti-Hebbian plasticity within the physical neural network. The learning mechanism can also utilize pre-synaptic and post-synaptic frequencies to provide Hebbian and/or anti-Hebbian learning within the physical neural network.
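The Hebbian and anti-Hebbian plasticity in the abstract above can be sketched numerically. In the patent the connection strengths are physical nanoconnections shaped by a voltage gradient, not stored numbers as they are here; the learning rate and activity vectors are illustrative assumptions.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, anti=False):
    """Hebbian plasticity sketch: a connection strengthens when its
    pre-synaptic and post-synaptic activities are correlated; the
    anti-Hebbian variant flips the sign of the change."""
    dw = lr * np.outer(post, pre)   # delta_w[i, j] = lr * post[i] * pre[j]
    return w - dw if anti else w + dw
```

Only connections whose pre- and post-synaptic elements are simultaneously active change strength, which mirrors the correlated-activity behavior the learning mechanism is meant to induce in the molecular conductors.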