Search Patents
  • Publication number: 20210350226
    Abstract: Embodiments of the present disclosure include systems and methods for training neural networks. In one embodiment, a neural network may receive input data and produce output results in response to the input data and weights of the neural network. An error is determined at an output of the neural network based on the output results. The error is propagated in a reverse direction through the neural network from the output and one or more intermediate outputs to adjust the weights.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 11, 2021
    Inventors: Andy WAGNER, Tiyasa MITRA, Marc TREMBLAY
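A minimal sketch of the back-propagation flow this abstract describes, assuming a two-layer fully connected network in NumPy; the layer sizes, activation, and learning rate are illustrative and not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))           # input data (batch of 4)
t = rng.normal(size=(4, 2))           # target outputs
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 2))

# Forward pass: produce output results from the input data and weights.
h = np.tanh(x @ W1)                   # intermediate output
y = h @ W2                            # network output

# Error determined at the output of the network.
err = y - t

# Propagate the error in the reverse direction through the output and the
# intermediate output, adjusting the weights along the way.
grad_W2 = h.T @ err
grad_h = err @ W2.T * (1 - h**2)      # back through the tanh nonlinearity
grad_W1 = x.T @ grad_h
lr = 0.01
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```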
  • Patent number: 11049011
    Abstract: Approaches for classifying training samples with minimal error in a neural network using a low-complexity neural network classifier are described. In one example, for the neural network, an upper bound on the Vapnik-Chervonenkis (VC) dimension is determined. Thereafter, an empirical error function corresponding to the neural network is determined. A modified error function based on the upper bound on the VC dimension and the empirical error function is generated, and used for training the neural network.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 29, 2021
    Assignee: Indian Institute of Technology Delhi
    Inventor: Jayadeva
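A hedged sketch of training with a modified error function of the kind described above: empirical error plus a capacity term. The quadratic penalty stands in for the patent's upper bound on the VC dimension, which is not reproduced here, and `lam` is an illustrative trade-off weight:

```python
import numpy as np

def modified_error(w, X, y, lam=0.1):
    empirical = np.mean((X @ w - y) ** 2)  # empirical error function
    capacity = np.sum(w ** 2)              # stand-in for the VC-dimension bound
    return empirical + lam * capacity

rng = np.random.default_rng(1)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w, lam, lr = np.zeros(5), 0.1, 0.05

# Train on the modified error: gradient of the empirical term plus the penalty.
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
    w -= lr * grad
print(modified_error(w, X, y))
```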
  • Publication number: 20230315639
    Abstract: A neural processing device is provided. The neural processing device comprises: a processing unit configured to perform calculations, an L0 memory configured to receive data from the processing unit and provide data to the processing unit, and an LSU (Load/Store Unit) configured to perform load and store operations of the data, wherein the LSU comprises: a neural core load unit configured to issue a load instruction of the data, a neural core store unit configured to issue a store instruction for transmitting and storing the data, and a sync ID logic configured to provide a sync ID to the neural core load unit and the neural core store unit to thereby cause a synchronization signal to be generated for each sync ID.
    Type: Application
    Filed: November 18, 2022
    Publication date: October 5, 2023
    Inventors: Jinseok Kim, Jinwook Oh, Donghan Kim
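A toy software model of the sync-ID idea above, under heavy simplification: a load/store unit tags issued operations with a sync ID, and a per-ID synchronization signal is generated on completion. The class and method names are invented for illustration:

```python
class LoadStoreUnit:
    """Toy model: issued loads/stores carry a sync ID; a synchronization
    signal is generated per sync ID once its operations complete."""

    def __init__(self):
        self.pending = {}                      # sync ID -> outstanding ops

    def issue(self, op, sync_id):
        self.pending.setdefault(sync_id, []).append(op)

    def complete(self, sync_id):
        ops = self.pending.pop(sync_id, [])
        return f"sync({sync_id}) after {len(ops)} op(s)"

lsu = LoadStoreUnit()
lsu.issue("load A", sync_id=0)                 # neural core load unit
lsu.issue("store B", sync_id=0)                # neural core store unit
lsu.issue("load C", sync_id=1)
print(lsu.complete(0))                         # sync(0) after 2 op(s)
print(lsu.complete(1))                         # sync(1) after 1 op(s)
```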
  • Patent number: 7447549
    Abstract: Methods of denoising a neural recording signal include correlating a neural recording signal with a number of basis functions to obtain a number of weights and then multiplying the weights with the basis functions to obtain a denoised neural recording signal. The basis functions are derived using principal component analysis. Systems for denoising a neural recording signal include one or more devices configured to correlate a neural recording signal with the basis functions to obtain the weights and then multiply the weights with the basis functions to obtain the denoised neural recording signal.
    Type: Grant
    Filed: June 1, 2005
    Date of Patent: November 4, 2008
    Assignee: Advanced Bionics, LLC
    Inventors: Leonid M. Litvak, Abhijit Kulkarni
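A minimal sketch of this denoising scheme: derive basis functions by principal component analysis, correlate a recording with them to obtain weights, and multiply the weights with the basis to reconstruct a denoised signal. The synthetic recordings and the number of components kept are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
examples = clean + 0.1 * rng.normal(size=(50, 100))   # example recordings

# Derive basis functions using principal component analysis.
mean = examples.mean(axis=0)
_, _, Vt = np.linalg.svd(examples - mean)
basis = Vt[:3]                         # a number of basis functions

# Denoise a new recording.
recording = clean + 0.3 * rng.normal(size=100)
weights = basis @ (recording - mean)   # correlate recording with the basis
denoised = mean + weights @ basis      # multiply weights with the basis
```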
  • Patent number: 10546238
    Abstract: A technique for training a neural network including an input layer, one or more hidden layers and an output layer, in which the trained neural network can be used to perform a task such as speech recognition. In the technique, a base of the neural network having at least a pre-trained hidden layer is prepared. A parameter set associated with one pre-trained hidden layer in the neural network is decomposed into a plurality of new parameter sets. The number of hidden layers in the neural network is increased by using the plurality of the new parameter sets. Pre-training for the neural network is performed.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: January 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Takashi Fukuda, Osamu Ichikawa
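A hedged sketch of decomposing one pre-trained hidden layer's parameter set into two new parameter sets, thereby increasing the number of hidden layers. Truncated SVD is used here as one plausible decomposition; the patent does not necessarily mandate it:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 64))           # one pre-trained hidden layer's weights

# Factor W into two thinner layers, W ~= W_a @ W_b, increasing network depth.
U, s, Vt = np.linalg.svd(W)
k = 32                                  # rank kept by the decomposition
W_a = U[:, :k] * np.sqrt(s[:k])         # first new parameter set  (64 x 32)
W_b = np.sqrt(s[:k])[:, None] * Vt[:k]  # second new parameter set (32 x 64)

x = rng.normal(size=64)
print(np.linalg.norm(x @ W - x @ W_a @ W_b))  # error shrinks as k -> 64
```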
  • Publication number: 20050255589
    Abstract: The present invention relates to the generation of neural cells from undifferentiated human embryonic stem cells. In particular it relates to directing the differentiation of human ES cells into neural progenitors and neural cells and the production of functioning neural cells and/or neural cells of a specific type. The invention also includes the use of these cells for the treatment of neurological conditions such as Parkinson's disease.
    Type: Application
    Filed: December 3, 2004
    Publication date: November 17, 2005
    Applicant: ES Cell International Pte Ltd.
    Inventor: Benjamin Reubinoff
  • Publication number: 20120122211
    Abstract: The present invention relates to the generation of neural cells from undifferentiated human embryonic stem cells. In particular it relates to directing the differentiation of human ES cells into neural progenitors and neural cells and the production of functioning neural cells and/or neural cells of a specific type. The invention also includes the use of these cells for the treatment of neurological conditions such as Parkinson's disease.
    Type: Application
    Filed: January 30, 2012
    Publication date: May 17, 2012
    Applicant: ES Cell International PTE Ltd.
    Inventor: Benjamin Eithan REUBINOFF
  • Patent number: 6466924
    Abstract: A verification method and a verification apparatus for guaranteeing the operation of a neural network for any input signals that might be presented to it.
    Type: Grant
    Filed: June 1, 1999
    Date of Patent: October 15, 2002
    Assignee: Denso Corporation
    Inventors: Masahiko Tateishi, Shinichi Tamura
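The abstract does not disclose the verification procedure itself, so the sketch below shows one standard way to guarantee a network's behavior for every input in a region: interval bound propagation through affine layers and ReLUs. It illustrates the goal, not the patent's method:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer exactly."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return lo @ W_pos + hi @ W_neg + b, hi @ W_pos + lo @ W_neg + b

rng = np.random.default_rng(4)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lo, hi = -np.ones(3), np.ones(3)                # every input the net may see
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
out_lo, out_hi = affine_bounds(lo, hi, W2, b2)
print(out_lo, out_hi)   # guaranteed output range over the whole input box
```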
  • Publication number: 20220249009
    Abstract: Automated assessment of neural response recordings involves storing a set of basis functions comprising at least one compound action potential basis function and at least one artefact basis function. Neural recordings of electrical activity in neural tissue are obtained by application of stimuli, using a single configuration of stimulation and recording. Each neural recording is decomposed by determining at least one parameter that estimates at least one of a compound action potential and an artefact. The parameter or parameters are determined for each of the plurality of neural recordings to yield a plurality of values. A spread of the plurality of values is determined. An indication that the neural response recordings are of higher quality is output if the spread is small. An indication that the neural response recordings are of lower quality is output if the spread is large.
    Type: Application
    Filed: July 13, 2020
    Publication date: August 11, 2022
    Applicant: Saluda Medical Pty Ltd
    Inventors: Daniel Parker, Milan Obradovic, Dean Karantonis, Ivan Guelton, Stephanie Ascone, Michael Narayanan
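A minimal sketch of the assessment above: fit each recording, by least squares, against a stored basis holding one compound-action-potential column and one artefact column, then judge quality from the spread of the fitted parameters. The waveform shapes and the spread threshold are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
cap = np.exp(-((t - 0.3) ** 2) / 0.002)   # compound action potential basis
artefact = np.exp(-t / 0.05)              # artefact basis
B = np.column_stack([cap, artefact])

# Recordings obtained with a single stimulation/recording configuration.
recordings = [2.0 * cap + 0.5 * artefact + 0.05 * rng.normal(size=200)
              for _ in range(20)]
params = np.array([np.linalg.lstsq(B, r, rcond=None)[0] for r in recordings])

# Small spread of the fitted CAP amplitudes -> higher-quality recordings.
spread = params[:, 0].std()
print("higher quality" if spread < 0.1 else "lower quality")
```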
  • Publication number: 20170326382
    Abstract: Waveguide neural interface devices and methods for fabricating such devices are provided herein. An exemplary interface device includes a neural device comprising an exterior neural device sidewall extending to a distal end portion of the neural device, and an array of electrode sites supported by a first face of the neural device sidewall. The array includes a recording electrode site. The exemplary interface device further includes a waveguide extending along the neural device, the waveguide having a distal end to emit light to illuminate targeted tissue adjacent to the recording electrode site, and a light redirecting element disposed at the distal end of the waveguide. The light redirecting element redirects light traveling through the waveguide in a manner that avoids direct illumination of the recording electrode site on the first face of the neural device sidewall.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 16, 2017
    Applicant: NeuroNexus Technologies, Inc.
    Inventors: John P. Seymour, Mayurachat Ning Gulari, Daryl R. Kipke, KC Kong
  • Publication number: 20230289291
    Abstract: A neural processor may include a system memory access circuit coupled to a system memory. The system memory access circuit is configured to fetch, from the system memory, first input data of a first task associated with a neural network. The neural processor may also include neural engines coupled to the system memory access circuit. The neural engines are configured to perform convolution operations on the first input data in a first set of operating cycles. The neural processor may further include a cache access circuit coupled to a cache. The cache access circuit is configured to instruct the cache to prefetch from the system memory, during the first set of operating cycles corresponding to the first task, second input data of a second task of the neural network. The second task is scheduled for processing in a second set of operating cycles after the first set of operating cycles.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Inventors: Seungjin Lee, Jaewon Shin, Christopher L Mills
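A software analogue of the prefetch scheme above: while the compute for one task runs, the next task's input is fetched in the background. A thread stands in for the cache access circuit, and all names are illustrative:

```python
import threading
import numpy as np

def fetch(task_id):
    """Stand-in for fetching a task's input data from system memory."""
    return np.full((256, 256), task_id, dtype=np.float32)

def compute(data):
    """Stand-in for the neural engines' convolution operating cycles."""
    return data.sum()

current = fetch(0)
for task_id in range(4):
    result = {}
    prefetcher = threading.Thread(
        target=lambda: result.update(nxt=fetch(task_id + 1)))
    prefetcher.start()            # overlap the next fetch with this compute
    compute(current)              # first set of operating cycles
    prefetcher.join()
    current = result["nxt"]       # second task's data is already resident
```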
  • Publication number: 20190294973
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training conversational turn analysis neural networks. One of the methods includes obtaining unsupervised training data comprising a plurality of dialogue transcripts; training a turn prediction neural network to perform a turn prediction task on the unsupervised training data using unsupervised learning, wherein: the turn prediction neural network comprises (i) a turn encoder neural network and (ii) a turn decoder neural network; obtaining supervised training data; and training a supervised prediction neural network to perform a supervised prediction task on the supervised training data using supervised learning.
    Type: Application
    Filed: March 25, 2019
    Publication date: September 26, 2019
    Inventors: Anjuli Patricia Kannan, Kai Chen, Alvin Rishi Rajkomar
  • Publication number: 20170076195
    Abstract: Techniques related to implementing distributed neural networks for data analytics are discussed. Such techniques may include generating sensor data at a device including a sensor, implementing one or more lower level convolutional neural network layers at the device, optionally implementing one or more additional lower level convolutional neural network layers at another device such as a gateway, and generating a neural network output label at a computing resource such as a cloud computing resource based on optionally implementing one or more additional lower level convolutional neural network layers and at least implementing a fully connected portion of the neural network.
    Type: Application
    Filed: September 10, 2015
    Publication date: March 16, 2017
    Inventors: Shao-Wen Yang, Jianguo Li, Yen-Kuang Chen, Yurong Chen
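A minimal sketch of splitting inference across tiers as described: lower convolutional layers run "on the device", an optional layer runs "at the gateway", and the fully connected portion produces the label "in the cloud". The toy 1-D layers are stand-ins for real convolutional layers:

```python
import numpy as np

rng = np.random.default_rng(6)

def conv_layer(x, k):
    # 1-D valid convolution + ReLU as a toy convolutional layer
    return np.maximum(np.convolve(x, k, mode="valid"), 0)

def fully_connected(x, W):
    return x @ W

k1, k2 = rng.normal(size=3), rng.normal(size=3)
W = rng.normal(size=(96, 10))

sensor_data = rng.normal(size=100)                   # generated at the device
edge_features = conv_layer(sensor_data, k1)          # lower layer, on device
gateway_features = conv_layer(edge_features, k2)     # optional gateway layer
label_scores = fully_connected(gateway_features, W)  # cloud: FC portion
print(label_scores.argmax())                         # network output label
```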
  • Publication number: 20130035767
    Abstract: A neural graft includes a biological substrate, a carbon nanotube structure and a neural network. The carbon nanotube structure is located on the biological substrate. The carbon nanotube structure includes a number of carbon nanotube wires crossed with each other to define a number of pores. The neural network includes a number of neural cell bodies and a number of neurites branched from the neural cell bodies. An effective diameter of each pore is larger than or equal to a diameter of a neural cell body, and the neurites extend substantially along the carbon nanotube wires such that the neurites are patterned.
    Type: Application
    Filed: August 1, 2012
    Publication date: February 7, 2013
    Applicants: HON HAI PRECISION INDUSTRY CO., LTD., TSINGHUA UNIVERSITY
    Inventors: LI FAN, CHEN FENG, WEN-MEI ZHAO
  • Publication number: 20210103813
    Abstract: Apparatuses, methods, and computer programs for compressing a neural network are disclosed. An apparatus includes at least one processor; and at least one non-transitory memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to: receive information from a second device, where the information comprises at least one parameter configured to be used for compression of a neural network, where the at least one parameter is in regard to at least one first aspect or task of the neural network; and compress the neural network, where the neural network is compressed based, at least partially, upon the at least one parameter received from the second device. The apparatus may also receive a compressed neural network from the second device, and further compress the compressed neural network based on the information.
    Type: Application
    Filed: October 1, 2020
    Publication date: April 8, 2021
    Inventors: Goutham RANGU, Hamed Rezazadegan Tavakoli, Francesco Cricri, Miska Matias Hannuksela, Emre Aksu
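A hedged sketch of compressing a network based on a parameter received from a second device. Magnitude pruning at a received sparsity level stands in for whatever compression the patent's parameter actually configures:

```python
import numpy as np

def compress(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of each tensor."""
    pruned = []
    for W in weights:
        thresh = np.quantile(np.abs(W), sparsity)
        pruned.append(np.where(np.abs(W) >= thresh, W, 0.0))
    return pruned

received_param = 0.8          # e.g. a sparsity level sent by the second device
rng = np.random.default_rng(7)
net = [rng.normal(size=(32, 32)), rng.normal(size=(32, 10))]
compressed = compress(net, received_param)
print(sum((W == 0).sum() for W in compressed))   # count of pruned weights
```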
  • Publication number: 20090004736
    Abstract: The present invention relates to the generation of neural cells from undifferentiated human embryonic stem cells. In particular it relates to directing the differentiation of human ES cells into neural progenitors and neural cells and the production of functioning neural cells and/or neural cells of a specific type. The invention also includes the use of these cells for the treatment of neurological conditions such as Parkinson's disease.
    Type: Application
    Filed: June 6, 2008
    Publication date: January 1, 2009
    Applicant: ES CELL INTERNATIONAL PTE LTD.
    Inventor: Benjamin Eithan Reubinoff
  • Publication number: 20220327363
    Abstract: A neural network training method in the artificial intelligence field includes: inputting training data into a neural network; determining a first input space of a second target layer in the neural network based on a first output space of a first target layer in the neural network; and inputting a feature vector in the first input space into the second target layer, where the network's capability of fitting random noise when a feature vector in the first input space is input into the second target layer is lower than its capability of fitting that noise when a feature vector in the first output space is input directly into the second target layer. This helps avoid the overfitting that can occur when the neural network processes images, text, or speech.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 13, 2022
    Inventors: Yixing Xu, Yehui Tang, Li Qian, Yunhe Wang, Chunjing Xu
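A hedged sketch of the core move above: derive a restricted input space for the second layer from the first layer's output space, so the second layer sees lower-capacity features than the raw outputs. PCA projection is used purely as an illustrative way to restrict the space; the patent's construction may differ:

```python
import numpy as np

rng = np.random.default_rng(9)
first_output = rng.normal(size=(128, 64))   # first target layer's output space

# Keep only the top-k principal directions as the second layer's input space.
centered = first_output - first_output.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
P = Vt[:16].T @ Vt[:16]          # projector onto the restricted input space

second_input = first_output @ P  # feature vectors fed to the second layer
```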
  • Publication number: 20180349788
    Abstract: An introspection network is a machine-learned neural network that accelerates training of other neural networks. The introspection network receives a weight history for each of a plurality of weights from a current training step for a target neural network. A weight history includes at least four values for the weight that are obtained during training of the target neural network up to the current step. The introspection network then provides, for each of the plurality of weights, a respective predicted value, based on the weight history. The predicted value for a weight represents a value for the weight in a future training step for the target neural network. Thus, the predicted value represents a jump in the training steps of the target neural network, which reduces the training time of the target neural network. The introspection network then sets each of the plurality of weights to its respective predicted value.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Mausoom Sarkar, Balaji Krishnamurthy, Abhishek Sinha, Aahitagni Mukherjee
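A minimal sketch of the idea above: from a short history of a weight's values, predict a value at a future training step and jump the weight there. Least-squares linear extrapolation stands in for the learned introspection network itself:

```python
import numpy as np

def predict_future(history, future_step):
    """Fit a line to >= 4 historical weight values and extrapolate."""
    steps = np.arange(len(history))
    slope, intercept = np.polyfit(steps, history, deg=1)
    return slope * future_step + intercept

# Four values of one weight captured at training steps 0..3.
weight_history = np.array([0.10, 0.16, 0.21, 0.27])
jumped = predict_future(weight_history, future_step=10)
print(jumped)   # the weight is set to this predicted future value
```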
  • Patent number: 9336483
    Abstract: Dynamically updating neural network systems may be implemented to generate, train, evaluate and update artificial neural network data structures used by content distribution networks. Such systems and methods described herein may include generating and training neural networks, using neural networks to perform predictive analysis and other decision-making processes within content distribution networks, evaluating the performance of neural networks, and generating and training pluralities of replacement candidate neural networks within cloud computing architectures and/or other computing environments.
    Type: Grant
    Filed: April 3, 2015
    Date of Patent: May 10, 2016
    Assignee: PEARSON EDUCATION, INC.
    Inventors: Thilani Abeysooriya, Bhanuka Withana, Achila Liyanarachchi, Thimira Dilina Kalindu Amaratunga
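A sketch of the generate-train-evaluate-replace loop described above: train several candidate replacement networks, score each, and swap one in only if it beats the deployed model. The models here are toy linear predictors:

```python
import numpy as np

rng = np.random.default_rng(8)
X, y = rng.normal(size=(200, 4)), rng.normal(size=200)

def train():      # stand-in for generating and training one candidate network
    return np.linalg.lstsq(X + 0.1 * rng.normal(size=X.shape), y, rcond=None)[0]

def evaluate(w):  # performance evaluation; lower is better
    return np.mean((X @ w - y) ** 2)

deployed = train()
for _ in range(3):                      # plurality of replacement candidates
    candidate = train()
    if evaluate(candidate) < evaluate(deployed):
        deployed = candidate            # dynamic update
```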
  • Publication number: 20220058410
    Abstract: There is provided a neural signal detection circuit capable of outputting time difference data or neural data, and including a first temporal circuit and a second temporal circuit. The first temporal circuit is used to store detected voltage energy of a first interval and a second interval as the time difference data. The second temporal circuit is used to store detected voltage energy of the second interval as the neural data. The neural signal detection circuit is used to output the time difference data or the neural data in different operating modes.
    Type: Application
    Filed: November 2, 2021
    Publication date: February 24, 2022
    Inventors: SEN-HUANG HUANG, REN-CHIEH LIU, YI-HSIEN KO, HAN-CHI LIU, YI-CHENG CHIU