Search Patents
-
Patent number: 6247001
Abstract: A state vector (SVt) is determined with elements that characterize a financial market (101). Taking into account predetermined evaluation variables, an evaluation (Vt) is determined (102) for the state vector (SVt). In addition, a chronologically following state vector (SVt+1) is determined (103) and evaluated (Vt+1). On the basis of the two evaluations (Vt, Vt+1), weights (wi) of the neural network (NN) are adapted (104) using a reinforcement learning method (Δwi).
Type: Grant
Filed: September 3, 1998
Date of Patent: June 12, 2001
Assignee: Siemens Aktiengesellschaft
Inventors: Volker Tresp, Ralph Neuneier
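The weight adaptation described above can be illustrated with a minimal temporal-difference-style sketch. This is not the patented method; the function name, the gradient argument, and the toy values are all illustrative assumptions. The weight change Δwi is driven by the discrepancy between the two successive evaluations Vt and Vt+1:

```python
import numpy as np

def adapt_weights(w, grad_v, v_t, v_t_next, alpha=0.01, gamma=0.95):
    # TD-style error between the two successive state evaluations
    td_error = gamma * v_t_next - v_t
    # Delta w_i: scale the evaluation gradient by that error
    return w + alpha * td_error * grad_v

w = np.zeros(3)
grad_v = np.array([1.0, -0.5, 2.0])   # illustrative gradient of V w.r.t. weights
w_new = adapt_weights(w, grad_v, v_t=1.0, v_t_next=2.0)
```

With these toy numbers the TD error is 0.95 · 2.0 − 1.0 = 0.9, so each weight moves by 0.01 · 0.9 times its gradient component.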
-
Patent number: 5832466
Abstract: In the design and implementation of neural networks, training is determined by a series of architectural and parametric decisions. A method is disclosed that, using genetic algorithms, improves the training characteristics of a neural network. The method begins with a population and iteratively modifies one or more parameters in each generation based on the network with the best training response in the previous generation.
Type: Grant
Filed: August 12, 1996
Date of Patent: November 3, 1998
Assignee: International Neural Machines Inc.
Inventor: Oleg Feldgajer
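The generational loop described above can be sketched in a few lines. This is a generic genetic-algorithm skeleton under stated assumptions, not the patented procedure: the "training response" is stood in for by a toy fitness function, and `mutate` perturbs one parameter of the previous generation's best network:

```python
import random

def mutate(params, rng):
    # Perturb exactly one architectural/parametric choice
    q = dict(params)
    key = rng.choice(sorted(q))
    q[key] *= rng.uniform(0.8, 1.25)
    return q

def evolve(population, fitness, generations=10, seed=0):
    # Each generation restarts from the member with the best
    # training response (lower fitness = better), kept as an elite.
    rng = random.Random(seed)
    for _ in range(generations):
        best = min(population, key=fitness)
        population = [mutate(best, rng) for _ in population[1:]] + [best]
    return min(population, key=fitness)

fitness = lambda p: abs(p["lr"] - 0.1)     # toy stand-in for training response
pop = [{"lr": 0.5}, {"lr": 0.02}, {"lr": 0.9}]
best = evolve(pop, fitness)
```

Because the best member is carried over unchanged, the fitness of the returned network never regresses below the best of the initial population.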
-
Publication number: 20040260662
Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
Type: Application
Filed: June 20, 2003
Publication date: December 23, 2004
Inventors: Carl Staelin, Darryl Greig, Manl Flacher, Ron Maurer
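One plausible reading of "neighborhoods of the errors" is a local averaging of the per-sample error map before it drives the weight adjustment. The sketch below is an assumption about that step, not the disclosed method; the radius and edge-padding choices are illustrative:

```python
import numpy as np

def neighborhood_error(err, radius=1):
    # Replace each error value with the mean over its neighborhood,
    # so weight adjustments follow local error structure rather
    # than isolated outliers.
    padded = np.pad(err, radius, mode="edge")
    out = np.empty_like(err, dtype=float)
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            out[i, j] = padded[i:i + 2 * radius + 1,
                               j:j + 2 * radius + 1].mean()
    return out

spike = np.zeros((3, 3))
spike[1, 1] = 9.0
smoothed = neighborhood_error(spike)
```

A lone error spike of 9.0 is spread over its 3×3 neighborhood, so the value that reaches the weight update at the center is 1.0.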
-
Patent number: 11379713
Abstract: A data processing system is operable to process a neural network and comprises a plurality of processors. The data processing system determines whether to perform neural network processing using a single processor or using plural processors. When it is determined that plural processors should be used, a distribution of the neural network processing among two or more of the processors is determined, and the two or more processors are each assigned a portion of the neural network processing to perform. A neural network processing output is provided as a result of the processors performing their assigned portions of the neural network processing.
Type: Grant
Filed: December 8, 2018
Date of Patent: July 5, 2022
Assignees: Apical Limited, Arm Limited
Inventors: Daren Croxford, Ashley Miles Stevens
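The single-versus-plural decision and the portion assignment can be sketched as a simple partition of layers across processors. This is a minimal illustration, not the claimed distribution logic; the `min_layers_per_proc` cutoff is an assumed heuristic:

```python
def assign_layers(num_layers, num_processors):
    # Contiguous, near-equal split of layer indices across processors
    base, extra = divmod(num_layers, num_processors)
    plan, start = [], 0
    for p in range(num_processors):
        size = base + (1 if p < extra else 0)
        plan.append(list(range(start, start + size)))
        start += size
    return plan

def choose_distribution(num_layers, num_processors, min_layers_per_proc=2):
    # Fall back to a single processor when the work is too small to split
    usable = max(1, min(num_processors, num_layers // min_layers_per_proc))
    return assign_layers(num_layers, usable)

plan = choose_distribution(7, 3)     # enough work: use all 3 processors
solo = choose_distribution(3, 4)     # too little work: one processor
```

Each sublist is the portion of the network one processor executes; concatenating the portions' outputs yields the overall processing output.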
-
Patent number: 7409372
Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
Type: Grant
Filed: June 20, 2003
Date of Patent: August 5, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Carl Staelin, Darryl Greig, Manl Flacher, Ron Maurer
-
Patent number: 11657264
Abstract: Media content is received for streaming to a user device. A neural network is trained based on a first portion of the media content. Weights of the neural network are updated to overfit the first portion of the media content, providing a first overfitted neural network. The neural network or the first overfitted neural network is trained based on a second portion of the media content. Weights of the neural network or the first overfitted neural network are updated to overfit the second portion of the media content, providing a second overfitted neural network. The first portion and the second portion of the media content are sent to the user device with associations to the first and second overfitted neural networks.
Type: Grant
Filed: April 9, 2018
Date of Patent: May 23, 2023
Assignee: Nokia Technologies Oy
Inventors: Francesco Cricri, Caglar Aytekin, Emre Baris Aksu, Miika Sakari Tupala, Xingyang Ni
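The per-portion overfitting loop can be illustrated with a toy sketch: weights are carried forward from one segment to the next, deliberately overfitted on each, and shipped alongside the segment they were fitted to. The scalar "weights" and the convex-update "training step" are stand-in assumptions, not Nokia's method:

```python
def overfit_per_segment(segments, base_w, lr=0.1, epochs=100):
    # Carry weights forward, overfitting each segment in turn;
    # associate every segment with its own overfitted weights.
    models, w = [], base_w
    for seg in segments:
        for _ in range(epochs):
            w = w + lr * (seg - w)   # toy training step toward the segment
        models.append((seg, w))      # (portion, its overfitted network)
    return models

models = overfit_per_segment([1.0, 5.0], base_w=0.0)
```

After enough epochs each stored weight essentially memorizes its segment (here, converging to 1.0 and 5.0 respectively), which is the point of overfitting per portion.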
-
Patent number: 11989640
Abstract: Embodiments relate to a neural processor circuit with scalable architecture for instantiating one or more neural networks. The neural processor circuit includes a data buffer coupled to a memory external to the neural processor circuit, and a plurality of neural engine circuits. To execute tasks that instantiate the neural networks, each neural engine circuit generates output data using input data and kernel coefficients. A neural processor circuit may include multiple neural engine circuits that are selectively activated or deactivated according to configuration data of the tasks. Furthermore, an electronic device may include multiple neural processor circuits that are selectively activated or deactivated to execute the tasks.
Type: Grant
Filed: November 21, 2022
Date of Patent: May 21, 2024
Assignee: Apple Inc.
Inventors: Erik Norden, Liran Fishel, Sung Hee Park, Jaewon Shin, Christopher L. Mills, Seungjin Lee, Fernando A. Mujica
-
Publication number: 20130117211
Abstract: Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer.
Type: Application
Filed: November 9, 2011
Publication date: May 9, 2013
Applicant: QUALCOMM Incorporated
Inventors: Jason Frank Hunzinger, Victor Hokkiu Chan
-
Patent number: 6338052
Abstract: A method for optimizing a matching network between an output impedance and an input impedance in a semiconductor process apparatus is disclosed. The method includes the steps of: providing a neural network capable of being trained through repeated learning; training the neural network on previously performed process conditions; setting up an initial value; comparing the initial value with a theoretically calculated value to obtain the error between the values; and repeating the training, setting, and comparing steps until the error becomes zero.
Type: Grant
Filed: June 25, 1998
Date of Patent: January 8, 2002
Assignee: Hyundai Electronics Industries Co., Ltd.
Inventor: Koon Ho Bae
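The train/compare/repeat cycle above is a generic fixed-point iteration and can be sketched as follows. The "network" and its correction rule are toy stand-ins (a real implementation would only drive the error toward a tolerance, not exactly zero, which is why a `tol` parameter is assumed here):

```python
def train_until_matched(predict, update, w, target, tol=1e-9, max_iter=10_000):
    # Repeat the train/compare cycle until the error between the
    # output and the theoretically calculated value vanishes (within tol).
    err = predict(w) - target
    for _ in range(max_iter):
        if abs(err) <= tol:
            break
        w = update(w, err)
        err = predict(w) - target
    return w, err

w, err = train_until_matched(
    predict=lambda w: 2.0 * w,           # toy "network output"
    update=lambda w, e: w - 0.25 * e,    # gradient-like correction
    w=0.0, target=6.0)
```

With these toy choices the error halves each iteration, so `w` converges to 3.0 and the loop stops once `|err|` is within tolerance.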
-
Patent number: 7398259
Abstract: Physical neural network systems and methods are disclosed. A physical neural network can be configured utilizing molecular technology, wherein said physical neural network comprises a plurality of molecular conductors, which form neural network connections thereof. A training mechanism can be provided for training said physical neural network to accomplish a particular neural network task based on a neural network training rule. The neural network connections are formed between pre-synaptic and post-synaptic components of said physical neural network. The neural network generally includes dynamic and modifiable connections for adaptive signal processing. The neural network training mechanism can be based, for example, on the Anti-Hebbian and Hebbian (AHAH) rule and/or other plasticity rules.
Type: Grant
Filed: October 21, 2004
Date of Patent: July 8, 2008
Assignee: KnowmTech, LLC
Inventor: Alex Nugent
-
Patent number: 9558442
Abstract: A method for generating an event includes monitoring a first neural network with a second neural network. The method also includes generating an event based on the monitoring. The event is generated at the second neural network. The event may be generated based on a spike received at the second neural network during the monitoring.
Type: Grant
Filed: January 23, 2014
Date of Patent: January 31, 2017
Assignee: QUALCOMM INCORPORATED
Inventors: Michael-David Nakayoshi Canoy, Paul Bender
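Stripped of the spiking-hardware context, the monitoor-and-emit pattern reduces to watching a spike stream and emitting an event when a received spike warrants it. The sketch below is a plain-Python illustration of that pattern, with an assumed threshold criterion, not the claimed mechanism:

```python
def monitor(spike_stream, threshold=1.0):
    # The monitoring (second) network emits an event for each spike
    # it receives from the monitored (first) network at or above threshold.
    return [{"time": t, "spike": s}
            for t, s in enumerate(spike_stream)
            if s >= threshold]

events = monitor([0.2, 1.5, 0.9, 2.0])
```

Two of the four observed spikes cross the threshold, so two events are generated, each tagged with the time step at which the spike arrived.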
-
Patent number: 11934939
Abstract: According to a method and apparatus for neural network quantization, a quantized neural network is generated by performing learning of a first neural network, obtaining weight differences between an initial weight and an updated weight determined by the learning of each cycle for each of the layers in the first neural network, analyzing a statistic of the weight differences for each of the layers, determining one or more layers, from among the layers, to be quantized with a lower-bit precision based on the analyzed statistic, and generating a second neural network by quantizing the determined one or more layers with the lower-bit precision.
Type: Grant
Filed: March 2, 2023
Date of Patent: March 19, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Wonjo Lee, Seungwon Lee, Junhaeng Lee
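The layer-selection step can be sketched as: compute a per-layer statistic of the weight differences accumulated over training cycles, then mark layers whose weights barely moved as candidates for lower-bit precision. The mean-absolute-difference statistic and the fixed threshold below are illustrative assumptions; the patent does not commit to this particular statistic:

```python
def layer_statistics(weight_diffs):
    # Mean absolute weight change per layer across training cycles
    return {layer: sum(abs(d) for d in diffs) / len(diffs)
            for layer, diffs in weight_diffs.items()}

def pick_low_bit_layers(weight_diffs, threshold):
    # Layers whose weights barely moved during training are the
    # candidates for lower-bit quantization.
    stats = layer_statistics(weight_diffs)
    return sorted(layer for layer, s in stats.items() if s < threshold)

low_bit = pick_low_bit_layers(
    {"conv1": [0.001, 0.002], "conv2": [0.003, 0.001], "fc": [0.5, 0.4]},
    threshold=0.01)
```

Here `conv1` and `conv2` changed little per cycle and are selected, while `fc` is left at full precision.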
-
Patent number: 11625577
Abstract: According to a method and apparatus for neural network quantization, a quantized neural network is generated by performing learning of a first neural network, obtaining weight differences between an initial weight and an updated weight determined by the learning of each cycle for each of the layers in the first neural network, analyzing a statistic of the weight differences for each of the layers, determining one or more layers, from among the layers, to be quantized with a lower-bit precision based on the analyzed statistic, and generating a second neural network by quantizing the determined one or more layers with the lower-bit precision.
Type: Grant
Filed: January 9, 2020
Date of Patent: April 11, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Wonjo Lee, Seungwon Lee, Junhaeng Lee
-
Patent number: 10417555
Abstract: Executing a neural network includes generating an output tile of a first layer of the neural network by processing an input tile to the first layer and storing the output tile of the first layer in an internal memory of a processor. An output tile of a second layer of the neural network can be generated using the processor by processing the output tile of the first layer stored in the internal memory.
Type: Grant
Filed: May 6, 2016
Date of Patent: September 17, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: John W. Brothers, Joohoon Lee
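The dataflow above (layer 2 consuming layer 1's tile straight from internal memory, with no round trip to external memory) can be sketched with plain functions standing in for layers. This is a schematic of the tiling pattern, not the claimed hardware mechanism:

```python
def run_tiled(layers, input_tiles):
    # Each intermediate tile stays in the (simulated) internal buffer:
    # the next layer consumes it directly, avoiding an external
    # memory round trip between layers.
    outputs = []
    for tile in input_tiles:
        buffered = tile                 # tile held in internal memory
        for layer in layers:
            buffered = layer(buffered)  # next layer reads the buffered tile
        outputs.append(buffered)        # only final tiles leave the chip
    return outputs

outs = run_tiled([lambda x: x + 1, lambda x: x * 2], [1, 2])
```

For input tiles 1 and 2, layer one produces 2 and 3 in the internal buffer, and layer two doubles them to 4 and 6 without the intermediates ever being written out.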
-
Patent number: 5943660
Abstract: A method for linearization of feedback in neural networks, and a neural network incorporating the feedback linearization method, are presented. Control action is used to achieve tracking performance for a state-feedback linearizable, but unknown, nonlinear control system. The control signal comprises a feedback linearization portion provided by neural networks, plus a robustifying portion that keeps the control magnitude bounded. Proofs are provided to show that all of the signals in the closed-loop system are semi-globally uniformly ultimately bounded. This eliminates an off-line learning phase and simplifies the initialization of neural network weights.
Type: Grant
Filed: October 15, 1997
Date of Patent: August 24, 1999
Assignee: Board of Regents, The University of Texas System
Inventors: A. Yesildirek, F. L. Lewis
-
Patent number: 7627540
Abstract: A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) to model a large number of neural elements. The FPGAs or similar programmable device can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model.
Type: Grant
Filed: June 27, 2006
Date of Patent: December 1, 2009
Assignee: Neurosciences Research Foundation, Inc.
Inventors: James A. Snook, Richard W. Schermerhorn
-
Patent number: 11599779
Abstract: Disclosed is neural network circuitry having a first plurality of logic cells that is interconnected to form neural network computation units that are configured to perform approximate computations. The neural network circuitry further includes a second plurality of logic cells that is interconnected to form a controller hierarchy that is interfaced with the neural network computation units to control pipelining of the approximate computations performed by the neural network computation units. In some embodiments the neural network computation units include approximate multipliers that are configured to perform approximate multiplications that comprise the approximate computations. The approximate multipliers include preprocessing units that reduce latency while maintaining accuracy.
Type: Grant
Filed: November 13, 2019
Date of Patent: March 7, 2023
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Elham Azari, Sarma Vrudhula
-
Patent number: 11475300
Abstract: A neural network training method includes inputting neuron input values of a neural network to a resistive random-access memory (RRAM) and performing calculation on the neuron input values based on filters in the RRAM to obtain neuron output values of the neural network; performing calculation based on kernel values of the RRAM, the neuron input values, the neuron output values, and backpropagation error values of the neural network to obtain backpropagation update values of the neural network; comparing the backpropagation update values with a preset threshold; and, when the backpropagation update values are greater than the preset threshold, updating the filters in the RRAM based on the backpropagation update values.
Type: Grant
Filed: December 13, 2019
Date of Patent: October 18, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Jun Yao, Wulong Liu, Yu Wang, Lixue Xia
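The thresholded-update step can be sketched in isolation: an update is written into an RRAM filter only when its magnitude exceeds the preset threshold, which (among other things) spares the device unnecessary write cycles. The list-of-scalars representation below is an illustrative simplification of the filter arrays:

```python
def apply_updates(filters, updates, threshold):
    # Only backpropagation updates that exceed the preset threshold
    # are written back to the RRAM filters; sub-threshold updates
    # are dropped, sparing device writes.
    return [f + u if abs(u) > threshold else f
            for f, u in zip(filters, updates)]

new_filters = apply_updates([1.0, 2.0, -1.0],
                            [0.5, 0.01, -0.3],
                            threshold=0.1)
```

The 0.01 update falls below the 0.1 threshold and is skipped, so only the first and third filters change.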
-
Publication number: 20090030860
Abstract: A system for routing business-to-business ("B2B") messages includes a cyclical neural network. The cyclical neural network contains neurons for determining a needed destination of a message based on content type of the message, for example. Neurons are monitored to establish a "state of understanding" of the network during processing, and tags may be applied to messages upon a determination of the needed destination.
Type: Application
Filed: July 27, 2007
Publication date: January 29, 2009
Inventor: Gregory Robert Leitheiser
-
Patent number: 8676728
Abstract: The location of a sound within a given spatial volume may be used in applications such as augmented reality environments. An artificial neural network processes time-difference-of-arrival (TDOA) data from a known microphone array to determine a spatial location of the sound. The neural network may be located locally or available as a cloud service. The artificial neural network is trained with perturbed and non-perturbed TDOA data.
Type: Grant
Filed: March 30, 2011
Date of Patent: March 18, 2014
Assignee: Rawles LLC
Inventors: Kavitha Velusamy, Edward Dietz Crump
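Training on both perturbed and non-perturbed TDOA data amounts to a data-augmentation step: each clean TDOA vector is kept, and jittered copies are added so the network tolerates microphone timing noise. The sketch below illustrates that step only; the noise model (Gaussian jitter) and parameters are assumptions, not the patented procedure:

```python
import random

def augment_tdoa(samples, sigma=1e-6, copies=3, seed=0):
    # Keep every non-perturbed TDOA vector and add `copies` jittered
    # versions of it, all labeled with the same source location.
    rng = random.Random(seed)
    out = []
    for tdoa, location in samples:
        out.append((tdoa, location))                       # non-perturbed
        for _ in range(copies):
            noisy = [t + rng.gauss(0.0, sigma) for t in tdoa]
            out.append((noisy, location))                  # perturbed copy
    return out

data = augment_tdoa([([1e-4, 2e-4], (0.5, 1.0, 0.3))])
```

Each input sample yields one clean and three perturbed training pairs, all mapped to the same location label.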