Abstract: In one aspect, the present disclosure relates to a method for automatically scaling a neural network including: receiving a neural network model; allocating a plurality of processing nodes for the neural network model, the number of allocated processing nodes determined based on an analysis of the neural network model; distributing training of the neural network model across the allocated processing nodes; receiving load information from the allocated processing nodes, the load information associated with the training of the neural network model; and adjusting the number of allocated processing nodes based on the load information.
Type: Application
Filed: January 29, 2019
Publication date: August 15, 2019
Applicant: Capital One Services, LLC
Inventors: Austin Walters, Jeremy Goodsitt, Fardin Abdi Taghi Abad
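The scaling loop in the abstract above can be pictured with a short Python sketch (not part of the patent); the node-count heuristic, the load thresholds, and the helper names are illustrative assumptions.

```python
# Minimal sketch of the adjust-by-load loop described in the abstract.
# The thresholds and helper names are illustrative assumptions, not the patented method.

def estimate_initial_nodes(num_parameters: int, params_per_node: int = 10_000_000) -> int:
    """Pick a starting node count from a simple analysis of model size."""
    return max(1, -(-num_parameters // params_per_node))  # ceiling division

def adjust_nodes(current_nodes: int, avg_load: float,
                 scale_up_at: float = 0.85, scale_down_at: float = 0.40) -> int:
    """Grow or shrink the allocation based on reported training load (0..1)."""
    if avg_load > scale_up_at:
        return current_nodes + 1
    if avg_load < scale_down_at and current_nodes > 1:
        return current_nodes - 1
    return current_nodes

nodes = estimate_initial_nodes(num_parameters=120_000_000)
for reported_loads in ([0.9, 0.95, 0.88], [0.3, 0.35, 0.2]):  # load info from the nodes
    avg = sum(reported_loads) / len(reported_loads)
    nodes = adjust_nodes(nodes, avg)
    print(nodes, avg)
```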
Abstract: A neural network processor is provided comprising a plurality of mutually succeeding neural network processor layers. Each neural network processor layer therein comprises a plurality of neural network processor elements (1) having a respective state register (2) for storing a state value (X) indicative of their state, as well as an additional state register (4) for storing a value (Q) of a state value change indicator that is indicative of the direction of a previous state change exceeding a threshold value. Neural network processor elements in a neural network processor layer are configured to selectively transmit differential event messages indicative of a change of their state, dependent both on the change of their state value and on the value of their state value change indicator.
Type: Application
Filed: December 17, 2020
Publication date: February 2, 2023
Inventors: Amirreza YOUSEFZADEH, Louis ROUILLARD-ODERA, Gokturk CINSERIN, Orlando Miguel PIRES DOS REIS MOREIRA
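The element behaviour described above can be approximated in software. The abstract does not spell out the exact gating policy, so the suppression rule below is only one illustrative reading of how the state change and the change-direction indicator (Q) might jointly decide whether an event is sent.

```python
# Sketch of one processor element: it keeps a state value X, a change-direction
# indicator Q, and only emits a differential event when both warrant it.
# The specific rule (suppress repeated same-direction events) is an assumption.

class Element:
    def __init__(self, threshold: float = 1.0):
        self.x = 0.0          # state register (X)
        self.q = 0            # change-direction indicator (Q): +1, -1, or 0
        self.threshold = threshold

    def update(self, delta: float):
        """Apply an input change; return a differential event message or None."""
        old_x = self.x
        self.x += delta
        change = self.x - old_x
        if abs(change) <= self.threshold:
            return None                      # change too small: stay silent
        direction = 1 if change > 0 else -1
        if direction == self.q:
            return None                      # same direction as last event: suppress
        self.q = direction                   # remember direction of this change
        return {"delta": change, "direction": direction}

e = Element()
print(e.update(1.5), e.update(1.5), e.update(-2.0))
```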
Abstract: Technologies are described for performing adaptive high-resolution digital image processing using neural networks. For example, a number of different regions can be defined representing portions of a digital image. One of the regions covers the entire digital image at a reduced resolution. The other regions cover less than the entire digital image at resolutions higher than the region covering the entire digital image. Neural networks are then used to process each of the regions. The neural networks share information using prolongation and restriction operations. Prolongation operations propagate activations from a neural network operating on a lower resolution region to context zones of a neural network operating on a higher resolution region. Restriction operations propagate activations from the neural network operating on the higher resolution region back to the neural network operating on the lower resolution region.
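The prolongation and restriction operations above can be pictured with plain arrays standing in for network activations; the nearest-neighbour upsampling and average pooling below are illustrative assumptions, not the patented operators.

```python
# Sketch of prolongation (low-res -> high-res context zone) and restriction
# (high-res -> low-res feedback) with arrays standing in for activations.
import numpy as np

full_lowres = np.random.rand(16, 16)        # whole image, reduced resolution
crop_highres = np.random.rand(16, 16)       # a small region at higher resolution

def prolongation(lowres_act: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upsample low-resolution activations into a higher-resolution context zone."""
    return np.repeat(np.repeat(lowres_act, scale, axis=0), scale, axis=1)

def restriction(highres_act: np.ndarray, scale: int = 2) -> np.ndarray:
    """Average-pool high-resolution activations back to the low-resolution grid."""
    h, w = highres_act.shape
    return highres_act.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

context = prolongation(full_lowres[4:12, 4:12])   # 8x8 patch -> 16x16 context for the crop
feedback = restriction(crop_highres)              # 16x16 crop -> 8x8 fed back to low-res net
print(context.shape, feedback.shape)
```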
Abstract: A machine trains a first neural network using a first set of images. Training the first neural network comprises computing a first set of weights for a first set of neurons. The machine, for each of one or more alpha values in order from smallest to largest, trains an additional neural network using an additional set of images. The additional set of images comprises a homographic transformation of the first set of images. The homographic transformation is computed based on the alpha value. Training the additional neural network comprises computing an additional set of weights for an additional set of neurons. The additional set of weights is initialized based on a previously computed set of weights. The machine generates a trained ensemble neural network comprising the first neural network and one or more additional neural networks corresponding to the one or more alpha values.
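The training order and warm-starting described above can be sketched as follows; the interpolation between the identity and a fixed homography, and the stand-in linear "network", are illustrative assumptions, not the patented construction.

```python
# Control-flow sketch of the warm-started ensemble: each alpha gets a homography,
# a transformed data set, and a model initialised from the previously computed weights.
import numpy as np

H = np.array([[1.0, 0.1, 2.0],
              [0.0, 1.0, 1.0],
              [0.001, 0.0, 1.0]])            # example target homography (assumed)

def homography_for(alpha: float) -> np.ndarray:
    return (1 - alpha) * np.eye(3) + alpha * H   # assumed alpha-parameterisation

def warp_points(points: np.ndarray, hom: np.ndarray) -> np.ndarray:
    ones = np.ones((points.shape[0], 1))
    p = np.hstack([points, ones]) @ hom.T
    return p[:, :2] / p[:, 2:3]

def train(points: np.ndarray, labels: np.ndarray, init_w: np.ndarray) -> np.ndarray:
    """Stand-in 'network': one linear layer fitted by a few gradient steps."""
    w = init_w.copy()
    for _ in range(100):
        grad = points.T @ (points @ w - labels) / len(points)
        w -= 0.01 * grad
    return w

rng = np.random.default_rng(0)
pts, labels = rng.normal(size=(200, 2)), rng.normal(size=(200, 1))
w = train(pts, labels, init_w=np.zeros((2, 1)))        # first network, first image set
ensemble = [(0.0, w)]
for alpha in sorted([0.25, 0.5, 1.0]):                 # alphas from smallest to largest
    w = train(warp_points(pts, homography_for(alpha)), labels, init_w=w)  # warm start
    ensemble.append((alpha, w))
print(len(ensemble))
```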
Abstract: Some embodiments provide a method for executing a neural network that includes multiple nodes. The method receives an input for a particular execution of the neural network. The method receives state data that includes data generated from at least two previous executions of the neural network. The method executes the neural network to generate a set of output data for the received input. A set of the nodes performs computations using (i) data output from other nodes of the particular execution of the neural network and (ii) the received state data generated from at least two previous executions of the neural network.
Type: Grant
Filed: September 26, 2019
Date of Patent: April 4, 2023
Assignee: PERCEIVE CORPORATION
Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
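The use of state from at least two previous executions can be sketched as a small recurrent-style loop; the weight shapes, the tanh nonlinearity, and the fixed history length of two are illustrative assumptions.

```python
# Sketch: the network keeps data generated by its two most recent executions and a
# set of nodes consumes that state together with the current execution's data.
from collections import deque
import numpy as np

class StatefulNet:
    def __init__(self, dim: int = 4, history: int = 2):
        rng = np.random.default_rng(1)
        self.w_in = rng.normal(size=(dim, dim))
        self.w_state = rng.normal(size=(history * dim, dim))
        self.state = deque([np.zeros(dim)] * history, maxlen=history)

    def execute(self, x: np.ndarray) -> np.ndarray:
        state_vec = np.concatenate(list(self.state))    # data from two prior executions
        hidden = np.tanh(x @ self.w_in + state_vec @ self.w_state)
        self.state.append(hidden)                       # becomes state for later executions
        return hidden

net = StatefulNet()
for _ in range(3):
    out = net.execute(np.ones(4))
print(out.shape)
```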
Abstract: A technique for training a neural network including an input layer, one or more hidden layers and an output layer, in which the trained neural network can be used to perform a task such as speech recognition. In the technique, a base of the neural network having at least a pre-trained hidden layer is prepared. A parameter set associated with one pre-trained hidden layer in the neural network is decomposed into a plurality of new parameter sets. The number of hidden layers in the neural network is increased by using the plurality of the new parameter sets. Pre-training for the neural network is performed.
Type: Grant
Filed: July 5, 2017
Date of Patent: August 31, 2021
Assignee: International Business Machines Corporation
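One common way to realize such a decomposition is a low-rank factorization of the pre-trained layer's weight matrix into two stacked layers; the SVD used below is an assumption for illustration, not necessarily the patented procedure.

```python
# Sketch: split one pre-trained layer W into two stacked layers W1, W2 via SVD so the
# deeper network starts out computing the same function as the original layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                  # stand-in pre-trained hidden-layer weights

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = W.shape[0]                              # full rank: exact; a smaller rank compresses
W1 = np.diag(np.sqrt(s[:rank])) @ Vt[:rank]    # new first parameter set
W2 = U[:, :rank] @ np.diag(np.sqrt(s[:rank]))  # new second parameter set

x = rng.normal(size=64)
print(np.allclose(W2 @ (W1 @ x), W @ x))       # the two stacked layers reproduce W
```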
Abstract: A data processing system operable to process a neural network, and comprising a plurality of processors. The data processing system is operable to determine whether to perform neural network processing using a single processor or using plural processors. When it is determined that plural processors should be used, a distribution of the neural network processing among two or more of the processors is determined and the two or more processors are each assigned a portion of the neural network processing to perform. A neural network processing output is provided as a result of the processors performing their assigned portions of the neural network processing.
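The single-versus-plural decision and the assignment of portions can be sketched with a simple cost model; the MAC counts, the budget, and the greedy balancing below are illustrative assumptions, not the patented heuristic.

```python
# Sketch of deciding between one processor and plural processors, then assigning
# portions of the neural network processing; all numbers are illustrative assumptions.

layers = [{"name": "conv1", "macs": 8e8}, {"name": "conv2", "macs": 6e8},
          {"name": "fc", "macs": 1e8}]

def plan(layers, processors, single_processor_budget=1e9):
    total = sum(l["macs"] for l in layers)
    if total <= single_processor_budget or len(processors) == 1:
        return {processors[0]: [l["name"] for l in layers]}       # single processor suffices
    assignment = {p: [] for p in processors}
    loads = {p: 0.0 for p in processors}
    for layer in sorted(layers, key=lambda l: -l["macs"]):        # greedy load balancing
        p = min(loads, key=loads.get)
        assignment[p].append(layer["name"])
        loads[p] += layer["macs"]
    return assignment

print(plan(layers, ["NPU", "GPU"]))
```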
Abstract: An apparatus and a method for neural network computation are provided. The apparatus for neural network computation includes a first neuron circuit and a second neuron circuit. The first neuron circuit is configured to execute a neural network computation of at least one computing layer with a fixed feature pattern in a neural network algorithm. The second neuron circuit is configured to execute the neural network computation of at least one computing layer with an unfixed feature pattern in the neural network algorithm. The performance of the first neuron circuit is greater than that of the second neuron circuit.
Type: Application
Filed: December 23, 2020
Publication date: June 24, 2021
Applicant: Industrial Technology Research Institute
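The split between the two neuron circuits can be pictured as a routing step over the layers of the algorithm; the layer descriptions and circuit labels below are illustrative assumptions.

```python
# Sketch of routing computing layers to the two circuits by whether their feature
# pattern is fixed; the example layers are illustrative assumptions.

layers = [
    {"name": "conv1", "fixed_pattern": True},    # fixed feature pattern
    {"name": "attn",  "fixed_pattern": False},   # pattern depends on the input
    {"name": "conv2", "fixed_pattern": True},
]

def dispatch(layers):
    first_circuit = [l["name"] for l in layers if l["fixed_pattern"]]       # higher performance
    second_circuit = [l["name"] for l in layers if not l["fixed_pattern"]]  # more flexible
    return {"first_neuron_circuit": first_circuit, "second_neuron_circuit": second_circuit}

print(dispatch(layers))
```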
Abstract: Methods and apparatus for training a neural network to recover a codeword and for decoding a received signal using a neural network are disclosed. According to examples of the disclosed methods, a syndrome check is introduced at even layers of the neural network during the training, testing and online phases. During training, optimisation of trainable parameters of the neural network is ceased after optimisation at the layer at which the syndrome check is satisfied. Examples of the method for training a neural network may be implemented via a proposed loss function. During testing and online phases, propagation through the neural network is ceased at the layer at which the syndrome check is satisfied.
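The early-stopping syndrome check can be sketched with a toy parity-check matrix; the soft-value update standing in for a trained network layer is an illustrative assumption.

```python
# Sketch of the syndrome check used to cease propagation early; H is a toy
# parity-check matrix and the per-layer update is a stand-in for a trained layer.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome_ok(soft: np.ndarray) -> bool:
    hard = (soft < 0).astype(int)        # hard decision from soft values
    return not np.any((H @ hard) % 2)    # all parity checks satisfied

def decode(llr: np.ndarray, num_layers: int = 10) -> np.ndarray:
    soft = llr.copy()
    for layer in range(num_layers):
        soft = soft + 0.1 * llr          # stand-in for one trained network layer
        if layer % 2 == 1 and syndrome_ok(soft):   # check at even (second, fourth, ...) layers
            break                        # cease propagation once the syndrome is satisfied
    return (soft < 0).astype(int)

print(decode(np.array([-2.3, -1.1, -0.7, 1.9, 0.4, 1.2])))
```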
Abstract: An interface for selective excitation or sensing of neural cells in a biological neural network is provided. The interface includes a membrane with a number of channels passing through the membrane. Each channel has at least one electrode within it. Neural cells in the biological neural network grow or migrate into the channels, thereby coming into close proximity to the electrodes.
Type: Application
Filed: December 19, 2003
Publication date: November 18, 2004
Inventors: Philip Huie, Daniel V. Palanker, Harvey A. Fishman, Alexander Vankov
Abstract: A neural stimulation system controls the delivery of neural stimulation using a respiratory signal as a therapy feedback input. The respiratory signal is used to increase the effectiveness of the neural stimulation, such as vagal nerve stimulation, while decreasing potentially adverse side effects in respiratory functions. In one embodiment, the neural stimulation system synchronizes the delivery of the neural stimulation pulses to the respiratory cycles using a respiratory fiducial point in the respiratory signal and a delay interval. In another embodiment, the neural stimulation system detects a respiratory disorder and, in response, adjusts the delivery of the neural stimulation pulses and/or delivers a respiratory therapy treating the detected respiratory disorder.
Type: Grant
Filed: August 30, 2006
Date of Patent: February 21, 2012
Assignee: Cardiac Pacemakers, Inc.
Inventors: Paul A. Haefner, Kristofer J. James, Kent Lee, Imad Libbus, Anthony V. Caparso, Jonathan Kwok, Yachuan Pu
Abstract: The present invention aims to provide a method for efficiently producing, from pluripotent stem cells, a cell mass containing a neural cell or neural tissue, and nonneural epithelial tissue. A method for producing a cell mass containing 1) neural cells or neural tissue and 2) nonneural epithelial tissue, including the following steps (1) and (2): (1) a first step of suspension-culturing pluripotent stem cells to form a cell aggregate in the presence of a Wnt signal transduction pathway inhibiting substance, (2) a second step of suspension-culturing the aggregate obtained in the first step in the presence of a BMP signal transduction pathway activating substance, thereby obtaining a cell mass comprising 1) neural cells or neural tissue and 2) nonneural epithelial tissue.
Abstract: The present invention relates to a human neural stem cell, and to a pharmaceutical composition for the treatment of central or peripheral nervous system disorders and injuries using same. More particularly, the present invention relates to a human telencephalon-derived human neural stem cell effective in the treatment of nervous system disorders and injuries, to a pharmaceutical composition for the treatment of nervous system disorders and injuries using same, to the use of the human neural stem cell for preparing therapeutic agents for the treatment of nervous system disorders and injuries, and to a method for treating nervous system disorders and injuries comprising administering an effective amount of the human neural stem cells to individuals in need thereof.
Type: Application
Filed: August 12, 2009
Publication date: March 31, 2011
Applicant: Industry-Academic Cooperation Foundation, Yonsei University
Abstract: Waveguide neural interface devices and methods for fabricating such devices are provided herein. An exemplary interface device includes a neural device comprising an exterior neural device sidewall extending to a distal end portion of the neural device, an array of electrode sites supported by a first face of the neural device sidewall. The array includes a recording electrode site. The exemplary interface device further includes a waveguide extending along the neural device, the waveguide having a distal end to emit light to illuminate targeted tissue adjacent to the recording electrode site, and a light redirecting element disposed at the distal end of the waveguide. The light redirecting element redirects light traveling through the waveguide in a manner that avoids direct illumination of the recording electrode site on the first face of the neural device sidewall.
Type: Application
Filed: May 8, 2017
Publication date: November 16, 2017
Applicant: NeuroNexus Technologies, Inc.
Inventors: John P. Seymour, Mayurachat Ning Gulari, Daryl R. Kipke, KC Kong
Abstract: Various system embodiments comprise a neural stimulation delivery system adapted to deliver a neural stimulation signal for use in delivering a neural stimulation therapy, a side effect detector, and a controller. The controller is adapted to control the neural stimulation delivery system, receive a signal indicative of a detected side effect, determine whether the detected side effect is attributable to the delivered neural stimulation therapy, and automatically titrate the neural stimulation therapy to abate the side effect. In various embodiments, the side effect detector includes a cough detector. In various embodiments, the controller is adapted to independently adjust at least one stimulation parameter for at least one phase in the biphasic waveform as part of a process to titrate the neural stimulation therapy. Other aspects and embodiments are provided herein.
Abstract: A computing device for training an artificial neural network model includes: a model analyzer configured to receive a first artificial neural network model and split the first artificial neural network model into a plurality of layers; a training logic configured to calculate first sensitivity data varying as the first artificial neural network model is pruned, calculate a target sensitivity corresponding to a target pruning rate based on the first sensitivity data, calculate second sensitivity data varying as each of the plurality of layers is pruned, and output, based on the second sensitivity data, an optimal pruning rate of each of the plurality of layers, the optimal pruning rate corresponding to the target pruning rate; and a model updater configured to prune the first artificial neural network model based on the optimal pruning rate to obtain a second artificial neural network model, and output the second artificial neural network model.
Type: Grant
Filed: February 10, 2020
Date of Patent: January 17, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Byeoungsu Kim, Sangsoo Ko, Kyoungyoung Kim, Jaegon Kim, Sanghyuck Ha
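The sensitivity-matching step can be sketched as follows; the sensitivity curves and the interpolation are made-up illustrations of the two kinds of sensitivity data named in the abstract, not the patented calculation.

```python
# Sketch: read the target sensitivity off the whole-model curve at the target pruning
# rate, then give each layer the largest pruning rate whose own curve stays under it.
import numpy as np

rates = np.linspace(0.0, 0.9, 10)                    # candidate pruning rates
model_sensitivity = rates ** 2                       # first sensitivity data (whole model)
layer_sensitivity = {"conv1": 0.5 * rates ** 2,      # second sensitivity data (per layer)
                     "conv2": 2.0 * rates ** 2,
                     "fc":    0.2 * rates ** 2}

target_rate = 0.6
target_sens = np.interp(target_rate, rates, model_sensitivity)   # target sensitivity

optimal = {}
for name, sens in layer_sensitivity.items():
    feasible = rates[sens <= target_sens]            # rates this layer tolerates
    optimal[name] = float(feasible.max()) if feasible.size else 0.0

print(target_sens, optimal)                          # per-layer "optimal" pruning rates
```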
Abstract: A classification method for classifying neural signals, the method comprising the following steps: a) using a plurality of electrodes over a determined period of time to acquire a plurality of neural signal samples; b) estimating a covariance matrix of said neural signals; and c) classifying said neural signals, the classification being performed: either in the Riemann manifold of symmetric positive definite matrices of dimensions equal to the dimensions of said covariance matrix; or else in a tangent space of said Riemann manifold. A method of selecting neural signal acquisition electrodes based on the Riemann geometry of covariance matrices of said signals. An application of said classification method and of said method of selecting electrodes to direct neural control.
Type: Grant
Filed: July 12, 2011
Date of Patent: November 4, 2014
Assignee: Commissariat a l'Energie Atomique et aux Energies Alternatives
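The tangent-space variant can be sketched with a simplified log-Euclidean mapping (the full Riemannian tangent-space mapping whitens at a reference point before taking the matrix logarithm); the synthetic signals and nearest-mean classifier below are illustrative assumptions.

```python
# Sketch: estimate a covariance matrix per trial, map it to a tangent space via a
# matrix logarithm, and classify the flattened result with a nearest-mean rule.
import numpy as np

def logm_spd(C: np.ndarray) -> np.ndarray:
    """Matrix logarithm of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def tangent_vector(signals: np.ndarray) -> np.ndarray:
    """signals: (channels, samples) -> flattened log-covariance feature."""
    return logm_spd(np.cov(signals)).ravel()

rng = np.random.default_rng(0)
class_a = [tangent_vector(rng.normal(scale=1.0, size=(4, 256))) for _ in range(20)]
class_b = [tangent_vector(rng.normal(scale=2.0, size=(4, 256))) for _ in range(20)]
mean_a, mean_b = np.mean(class_a, axis=0), np.mean(class_b, axis=0)

test = tangent_vector(rng.normal(scale=2.0, size=(4, 256)))
label = "A" if np.linalg.norm(test - mean_a) < np.linalg.norm(test - mean_b) else "B"
print(label)
```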
Abstract: Methods and systems, including computer programs encoded on a computer storage medium. In one aspect, a method includes obtaining data specifying one or more neural networks to be deployed on a neural network hardware accelerator, each of the one or more neural networks having a respective set of parameters, and the neural network hardware accelerator having one or more memories having a memory capacity; determining a maximum amount of the memory capacity that will be in use at any one time during a processing of any of the one or more neural networks by the neural network hardware accelerator; identifying a subset of the parameters of the one or more neural networks that consumes an amount of memory that is less than a difference between the memory capacity and the determined maximum amount of the memory capacity; and storing the identified subset of the parameters.
Type: Application
Filed: December 18, 2019
Publication date: December 29, 2022
Inventors: Jack Liu, Dong Hyuk Woo, Jason Jong Kyu Park, Raksit Ashok
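The capacity arithmetic in the abstract can be sketched as follows; the memory capacity, the per-layer sizes, and the greedy selection are illustrative assumptions, not the patented planner.

```python
# Sketch: find the maximum memory in use at any one time during processing, then pick
# a subset of parameters that fits in what is left of the memory capacity.

MEMORY_CAPACITY = 8 * 1024 * 1024          # assumed on-accelerator memory (bytes)

# (parameter bytes, peak working-memory bytes while this layer runs) per layer
layers = {"conv1": (200_000, 3_000_000), "conv2": (900_000, 2_500_000),
          "conv3": (1_500_000, 1_200_000), "fc": (4_000_000, 400_000)}

peak_in_use = max(work for _, work in layers.values())   # max in use at any one time
budget = MEMORY_CAPACITY - peak_in_use                   # room left for stored parameters

resident, used = [], 0
for name, (params, _) in sorted(layers.items(), key=lambda kv: kv[1][0]):
    if used + params <= budget:                          # keep these parameters resident
        resident.append(name)
        used += params
print(peak_in_use, budget, resident, used)
```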