Search Patents
  • Patent number: 6247001
    Abstract: A state vector (SVt) is determined with elements that characterize a financial market (101). Taking into account predetermined evaluation variables, an evaluation (Vt) is determined (102) for the state vector (SVt). In addition, a chronologically following state vector (SVt+1) is determined (103) and evaluated (Vt+1). On the basis of the two evaluations (Vt, Vt+1), weights (wi) of the neural network (NN) are adapted (104) using a reinforcement learning method (Δwi).
    Type: Grant
    Filed: September 3, 1998
    Date of Patent: June 12, 2001
    Assignee: Siemens Aktiengesellschaft
    Inventors: Volker Tresp, Ralph Neuneier
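The weight adaptation described above is, in spirit, a temporal-difference update driven by the evaluations of two successive market states. The following is a minimal sketch of that kind of update, not the patented method: the linear value function, the reward signal, the discount factor, and the learning rate are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the patented method): a linear "value network" whose weights w_i
# are adapted by a temporal-difference error between the evaluation V_t of a market
# state vector SV_t and the evaluation V_{t+1} of the chronologically following state.
rng = np.random.default_rng(0)
w = rng.normal(size=4)             # weights w_i of the value network
gamma, lr = 0.95, 0.01             # discount factor and learning rate (assumed)

def evaluate(state, w):
    """Evaluation V of a state vector, here just a linear score."""
    return float(w @ state)

sv_t = rng.normal(size=4)          # state vector SV_t characterizing the market
sv_t1 = rng.normal(size=4)         # chronologically following state vector SV_{t+1}
reward = 0.1                       # illustrative evaluation signal

v_t, v_t1 = evaluate(sv_t, w), evaluate(sv_t1, w)
td_error = reward + gamma * v_t1 - v_t       # combines the two evaluations V_t, V_{t+1}
w += lr * td_error * sv_t                    # weight adaptation Δw_i
print(round(td_error, 4), np.round(w, 3))
```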
  • Patent number: 5832466
    Abstract: In the design and implementation of neural networks, training is determined by a series of architectural and parametric decisions. A method is disclosed that, using genetic algorithms, improves the training characteristics of a neural network. The method begins with a population and iteratively modifies one or more parameters in each generation based on the network with the best training response in the previous generation.
    Type: Grant
    Filed: August 12, 1996
    Date of Patent: November 3, 1998
    Assignee: International Neural Machines Inc.
    Inventor: Oleg Feldgajer
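As a rough illustration of the generate-evaluate-select loop sketched in the abstract, the snippet below evolves two training hyperparameters around the individual with the best "training response" in each generation. The stand-in fitness function, population size, and mutation scale are invented for the example.

```python
import random

# Illustrative sketch only: evolve two training hyperparameters (learning rate and
# hidden-unit count), keeping the individual with the best "training response" from
# the previous generation and mutating new candidates around it.
random.seed(0)

def training_response(params):
    lr, hidden = params
    # Stand-in fitness: best response is assumed to be near lr=0.01 and 32 hidden units.
    return -((lr - 0.01) ** 2 * 1e4 + (hidden - 32) ** 2 / 100)

def mutate(params):
    lr, hidden = params
    return (abs(lr + random.gauss(0, 0.005)), max(1, round(hidden + random.gauss(0, 4))))

population = [(random.uniform(0.001, 0.1), random.randint(4, 128)) for _ in range(20)]
for _ in range(30):
    best = max(population, key=training_response)             # best training response
    population = [best] + [mutate(best) for _ in range(19)]   # modify parameters around it
print("best parameters found:", max(population, key=training_response))
```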
  • Publication number: 20040260662
    Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
    Type: Application
    Filed: June 20, 2003
    Publication date: December 23, 2004
    Inventors: Carl Staelin, Darryl Greig, Mani Fischer, Ron Maurer
  • Patent number: 11379713
    Abstract: A data processing system is operable to process a neural network and comprises a plurality of processors. The data processing system is operable to determine whether to perform neural network processing using a single processor or using plural processors. When it is determined that plural processors should be used, a distribution of the neural network processing among two or more of the processors is determined and the two or more processors are each assigned a portion of the neural network processing to perform. A neural network processing output is provided as a result of the processors performing their assigned portions of the neural network processing.
    Type: Grant
    Filed: December 8, 2018
    Date of Patent: July 5, 2022
    Assignees: Apical Limited, Arm Limited
    Inventors: Daren Croxford, Ashley Miles Stevens
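A hypothetical sketch of the distribution decision described above: below a threshold the whole network stays on one processor, otherwise contiguous portions of the layers are assigned to two or more processors. The threshold and the layer-slicing policy are assumptions, not the patented scheme.

```python
# Hypothetical sketch: decide whether neural network processing runs on a single
# processor or is distributed, then assign each processor a contiguous portion
# of the layers. The threshold and the toy layer list are illustrative only.
def plan(layers, num_processors, distribute_threshold=4):
    if len(layers) <= distribute_threshold or num_processors == 1:
        return {0: list(range(len(layers)))}              # single-processor plan
    chunk = -(-len(layers) // num_processors)             # ceiling division
    return {p: list(range(p * chunk, min((p + 1) * chunk, len(layers))))
            for p in range(num_processors) if p * chunk < len(layers)}

layers = [f"layer{i}" for i in range(8)]
print(plan(layers, num_processors=1))   # everything on processor 0
print(plan(layers, num_processors=3))   # portions assigned to processors 0..2
```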
  • Patent number: 7409372
    Abstract: A neural network is trained with input data. The neural network is used to rescale the input data. Errors for the rescaled values are determined, and neighborhoods of the errors are used to adjust connection weights of the neural network.
    Type: Grant
    Filed: June 20, 2003
    Date of Patent: August 5, 2008
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Carl Staelin, Darryl Greig, Mani Fischer, Ron Maurer
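For this patent and the related published application above, a loose, hedged illustration of using error neighborhoods during training: a tiny linear model produces "rescaled" values, each sample's error is emphasized by the average error magnitude in its neighborhood, and the weights are adjusted from that emphasized error. The model, neighborhood size, and update rule are assumptions, not the patented training rule.

```python
import numpy as np

# Loose illustration: a tiny linear model maps a 3-sample window of the input to a
# "rescaled" value; each sample's error is weighted by the mean error magnitude in
# its neighborhood before the connection weights are adjusted.
rng = np.random.default_rng(1)
w = rng.normal(size=3)                                # connection weights
x = rng.normal(size=256)                              # input data
X = np.stack([np.roll(x, 1), x, np.roll(x, -1)], 1)   # 3-sample windows
target = X @ np.array([0.25, 0.5, 0.25])              # assumed reference rescaling

for _ in range(2000):
    err = target - X @ w                                         # per-sample errors
    emphasis = np.convolve(np.abs(err), np.ones(5) / 5, "same")  # error neighborhoods
    w += 0.05 * (X.T @ (emphasis * err)) / len(x)                # adjust connection weights
print(np.round(w, 2))                                 # approaches the reference filter
```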
  • Patent number: 11657264
    Abstract: Media content is received for streaming to a user device. A neural network is trained based on a first portion of the media content. Weights of the neural network are updated to overfit the first portion of the media content to provide a first overfitted neural network. The neural network or the first overfitted neural network is trained based on a second portion of the media content. Weights of the neural network or the first overfitted neural network are updated to overfit the second portion of the media content to provide a second overfitted neural network. The first portion and the second portion of the media content are sent to the user device with associations to the first overfitted neural network and the second overfitted neural network.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: May 23, 2023
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Caglar Aytekin, Emre Baris Aksu, Miika Sakari Tupala, Xingyang Ni
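The per-portion overfitting can be pictured with the hedged sketch below: a base model's weights are further trained ("overfit") on each successive content portion, and each portion is stored with a reference to the weights that were overfit to it. The one-parameter model and the toy content portions are purely illustrative.

```python
import numpy as np

# Hypothetical sketch of per-portion overfitting: the model's weights are further
# updated ("overfit") on each successive portion of the content, and each portion is
# stored together with the weights that were overfit to it.
rng = np.random.default_rng(2)

def overfit(w, portion, steps=500, lr=0.1):
    """Fit a one-parameter model y = w * x to a single content portion."""
    x, y = portion
    for _ in range(steps):
        w -= lr * np.mean((w * x - y) * x)      # plain least-squares descent
    return float(w)

x0, x1 = rng.normal(size=100), rng.normal(size=100)
portions = [(x0, 1.5 * x0), (x1, 0.7 * x1)]     # two toy content portions

w = 1.0                                          # weights of the base network
stream = []
for i, portion in enumerate(portions):
    w = overfit(w, portion)                      # overfit on this portion
    stream.append({"portion": i, "overfit_weights": w})   # association sent with the portion
print(stream)                                    # portion 0 -> ~1.5, portion 1 -> ~0.7
```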
  • Patent number: 11989640
    Abstract: Embodiments relate to a neural processor circuit with scalable architecture for instantiating one or more neural networks. The neural processor circuit includes a data buffer coupled to a memory external to the neural processor circuit, and a plurality of neural engine circuits. To execute tasks that instantiate the neural networks, each neural engine circuit generates output data using input data and kernel coefficients. A neural processor circuit may include multiple neural engine circuits that are selectively activated or deactivated according to configuration data of the tasks. Furthermore, an electronic device may include multiple neural processor circuits that are selectively activated or deactivated to execute the tasks.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: May 21, 2024
    Assignee: Apple Inc.
    Inventors: Erik Norden, Liran Fishel, Sung Hee Park, Jaewon Shin, Christopher L. Mills, Seungjin Lee, Fernando A. Mujica
  • Publication number: 20130117211
    Abstract: Certain aspects of the present disclosure support techniques for unsupervised neural replay, learning refinement, association and memory transfer.
    Type: Application
    Filed: November 9, 2011
    Publication date: May 9, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Jason Frank Hunzinger, Victor Hokkiu Chan
  • Patent number: 6338052
    Abstract: A method for optimizing a matching network between an output impedance and an input impedance in a semiconductor process apparatus is disclosed. The method includes the steps of: providing a neural network capable of being trained through repeated learning; training the neural network from previously performed process conditions; setting up an initial value; comparing the initial value with a theoretically calculated value, to obtain the error between the values; and repeating the training, setting, and comparing steps until the error becomes zero.
    Type: Grant
    Filed: June 25, 1998
    Date of Patent: January 8, 2002
    Assignee: Hyundai Electronics Industries Co., Ltd.
    Inventor: Koon Ho Bae
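A hedged, one-dimensional stand-in for the train/compare/repeat loop in the abstract: a single weight is fitted to previously performed process conditions and updated until its prediction matches a theoretically calculated value to within a tolerance (standing in for "the error becomes zero"). The analytic model, learning rate, and tolerance are assumptions.

```python
import numpy as np

# Hedged sketch of the train/compare/repeat loop, using a one-weight stand-in for the
# neural network and an assumed analytic model for the theoretically calculated value.
rng = np.random.default_rng(3)

def theoretical(condition):
    return 3.0 * condition                      # assumed analytic matching value

conditions = rng.uniform(0.1, 1.0, size=16)     # previously performed process conditions
targets = theoretical(conditions)

w = 0.0                                          # initial value
iterations = 0
while True:
    predicted = w * conditions                   # network output for the conditions
    error = float(np.mean(np.abs(predicted - targets)))   # compare with theory
    if error < 1e-6:                             # repeat until the error becomes ~zero
        break
    w += 0.1 * np.mean((targets - predicted) * conditions)   # training step
    iterations += 1
print(f"w={w:.4f} after {iterations} iterations, error={error:.2e}")
```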
  • Patent number: 7398259
    Abstract: Physical neural network systems and methods are disclosed. A physical neural network can be configured utilizing molecular technology, wherein said physical neural network comprises a plurality of molecular conductors, which form neural network connections thereof. A training mechanism can be provided for training said physical neural network to accomplish a particular neural network task based on a neural network training rule. The neural network connections are formed between pre-synaptic and post-synaptic components of said physical neural network. The neural network generally includes dynamic and modifiable connections for adaptive signal processing. The neural network training mechanism can be based, for example, on the Anti-Hebbian and Hebbian (AHAH) rule and/or other plasticity rules.
    Type: Grant
    Filed: October 21, 2004
    Date of Patent: July 8, 2008
    Assignee: KnowmTech, LLC
    Inventor: Alex Nugent
  • Patent number: 9558442
    Abstract: A method for generating an event includes monitoring a first neural network with a second neural network. The method also includes generating an event based on the monitoring. The event is generated at the second neural network. The event may be generated based on a spike received at the second network during the monitoring.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: January 31, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Michael-David Nakayoshi Canoy, Paul Bender
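A minimal sketch of the monitoring arrangement, not the claimed implementation: a toy spiking unit stands in for the first neural network, and a monitoring function stands in for the second, generating an event whenever it observes a spike.

```python
import numpy as np

# Minimal sketch: the "second network" monitors spikes emitted by the "first network"
# and generates an event for each spike it receives during the monitoring.
rng = np.random.default_rng(4)

def first_network(inputs, threshold=1.0):
    """Toy threshold neuron: emits a spike (True) when its input exceeds a threshold."""
    return inputs > threshold

def second_network_monitor(spikes):
    """Generates events based on spikes received while monitoring the first network."""
    return [{"event": "spike_observed", "t": t} for t, s in enumerate(spikes) if s]

inputs = rng.normal(0.8, 0.5, size=20)          # drive for the first network
for event in second_network_monitor(first_network(inputs)):
    print(event)
```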
  • Patent number: 11934939
    Abstract: According to a method and apparatus for neural network quantization, a quantized neural network is generated by performing learning of a first neural network, obtaining weight differences between an initial weight and an updated weight determined by the learning of each cycle for each of the layers in the first neural network, analyzing a statistic of the weight differences for each of the layers, determining one or more layers, from among the layers, to be quantized with a lower-bit precision based on the analyzed statistic, and generating a second neural network by quantizing the determined one or more layers with the lower-bit precision.
    Type: Grant
    Filed: March 2, 2023
    Date of Patent: March 19, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wonjo Lee, Seungwon Lee, Junhaeng Lee
  • Patent number: 11625577
    Abstract: According to a method and apparatus for neural network quantization, a quantized neural network is generated by performing learning of a first neural network, obtaining weight differences between an initial weight and an updated weight determined by the learning of each cycle for each of the layers in the first neural network, analyzing a statistic of the weight differences for each of the layers, determining one or more layers, from among the layers, to be quantized with a lower-bit precision based on the analyzed statistic, and generating a second neural network by quantizing the determined one or more layers with the lower-bit precision.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: April 11, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wonjo Lee, Seungwon Lee, Junhaeng Lee
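For this patent and the related one above, a rough sketch of the layer-selection step under assumed details: the "statistic" is taken to be the mean absolute weight change per learning cycle, and the layer whose weights moved least is quantized to a lower-bit precision. The stand-in weight updates and the 4-bit uniform quantizer are illustrative.

```python
import numpy as np

# Rough sketch: track per-layer weight differences across learning cycles, analyze a
# statistic of them, and quantize the least-changing layer(s) with lower-bit precision.
rng = np.random.default_rng(5)
layers = {name: rng.normal(size=(16, 16)) for name in ("conv1", "conv2", "fc")}
diff_stats = {name: [] for name in layers}

for cycle in range(10):                                   # learning cycles
    for name, w in layers.items():
        initial = w.copy()
        w += rng.normal(scale=(0.001 if name == "conv1" else 0.01), size=w.shape)  # stand-in update
        diff_stats[name].append(np.mean(np.abs(w - initial)))   # weight differences

stats = {name: float(np.mean(d)) for name, d in diff_stats.items()}   # analyzed statistic
to_quantize = sorted(stats, key=stats.get)[:1]            # layers to get lower-bit precision

def quantize(w, bits=4):
    """Uniform quantization to the given bit width (illustrative)."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

second_network = {name: (quantize(w) if name in to_quantize else w)
                  for name, w in layers.items()}
print("quantized with lower-bit precision:", to_quantize)
```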
  • Patent number: 10417555
    Abstract: Executing a neural network includes generating an output tile of a first layer of the neural network by processing an input tile to the first layer and storing the output tile of the first layer in an internal memory of a processor. An output tile of a second layer of the neural network can be generated using the processor by processing the output tile of the first layer stored in the internal memory.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: September 17, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: John W. Brothers, Joohoon Lee
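An illustrative software sketch of the tiling idea: an input tile is pushed through the first layer, the resulting output tile is held in a stand-in for the processor's internal memory, and the second layer consumes it from there rather than from external memory. The layer shapes and the dictionary standing in for on-chip SRAM are assumptions.

```python
import numpy as np

# Illustrative only: the output tile of layer 1 stays in a stand-in for the processor's
# internal memory, and layer 2 reads it directly from there.
rng = np.random.default_rng(6)
layer1_w, layer2_w = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

internal_memory = {}                                  # stand-in for on-chip SRAM

def layer(x, w):
    return np.maximum(w @ x, 0.0)                     # toy layer: matmul + ReLU

def run_tile(input_tile):
    internal_memory["l1_tile"] = layer(input_tile, layer1_w)     # output tile of layer 1
    return layer(internal_memory["l1_tile"], layer2_w)           # layer 2 reads it back

image = rng.normal(size=(8, 32))
tiles = [image[:, i:i + 8] for i in range(0, 32, 8)]  # split the input into tiles
outputs = [run_tile(t) for t in tiles]
print(len(outputs), outputs[0].shape)
```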
  • Patent number: 5943660
    Abstract: A method for linearization of feedback in neural networks, and a neural network incorporating the feedback linearization method, are presented. Control action is used to achieve tracking performance for a state-feedback linearizable, but unknown, nonlinear control system. The control signal comprises a feedback linearization portion provided by neural networks, plus a robustifying portion that keeps the control magnitude bounded. Proofs are provided to show that all of the signals in the closed-loop system are semi-globally uniformly ultimately bounded. This eliminates an off-line learning phase and simplifies the initialization of neural network weights.
    Type: Grant
    Filed: October 15, 1997
    Date of Patent: August 24, 1999
    Assignee: Board of Regents, The University of Texas System
    Inventors: A. Yesildirek, F. L. Lewis
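In generic notation (not the patent's own symbols or claimed control law), the structure of the control signal described above can be written as below, with the robustifying term bounded so that the control magnitude stays bounded.

```latex
% Generic form only: a neural-network feedback-linearizing estimate, a tracking-error
% feedback term, and a bounded robustifying term.
u(t) = \hat{f}_{\mathrm{NN}}\bigl(x(t)\bigr) + K\,e(t) + v_{r}(t),
\qquad \lVert v_{r}(t) \rVert \le v_{\max}
```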
  • Patent number: 7627540
    Abstract: A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) to model a large number of neural elements. The FPGAs or similar programmable device can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model.
    Type: Grant
    Filed: June 27, 2006
    Date of Patent: December 1, 2009
    Assignee: Neurosciences Research Foundation, Inc.
    Inventors: James A. Snook, Richard W. Schermerhorn
  • Patent number: 11599779
    Abstract: Disclosed is neural network circuitry having a first plurality of logic cells that is interconnected to form neural network computation units that are configured to perform approximate computations. The neural network circuitry further includes a second plurality of logic cells that is interconnected to form a controller hierarchy that is interfaced with the neural network computation units to control pipelining of the approximate computations performed by the neural network computation units. In some embodiments, the neural network computation units include approximate multipliers that are configured to perform approximate multiplications that comprise the approximate computations. The approximate multipliers include preprocessing units that reduce latency while maintaining accuracy.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 7, 2023
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Elham Azari, Sarma Vrudhula
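As a software-level illustration of approximate multiplication (not the circuit in the claims), the sketch below drops low-order bits of the operands before multiplying, trading a small bounded error for a cheaper multiply; the number of dropped bits is an assumption.

```python
# Illustrative approximate multiplier: truncate low-order operand bits before multiplying.
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

exact = 1234 * 5678
approx = approx_mul(1234, 5678)
print(exact, approx, f"relative error {abs(exact - approx) / exact:.2%}")
```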
  • Patent number: 11475300
    Abstract: A neural network training method includes inputting neuron input values of a neural network to a resistive random-access memory (RRAM), and performing calculation for the neuron input values based on filters in the RRAM, to obtain neuron output values of the neural network, performing calculation based on kernel values of the RRAM, the neuron input values, the neuron output values, and backpropagation error values of the neural network, to obtain backpropagation update values of the neural network, comparing the backpropagation update values with a preset threshold, and when the backpropagation update values are greater than the preset threshold, updating the filters in the RRAM based on the backpropagation update values.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: October 18, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Yao, Wulong Liu, Yu Wang, Lixue Xia
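A simplified software stand-in for the flow above (the RRAM crossbar itself is just a matrix here): a forward pass uses the stored filters, a backpropagation-style update is computed, and cells are rewritten only where the update exceeds a preset threshold. The threshold, learning rate, and the stand-in error values are assumptions.

```python
import numpy as np

# Simplified sketch: forward pass through stored "filters", compute update values,
# and rewrite the filters only where the update exceeds the preset threshold.
rng = np.random.default_rng(7)
filters = rng.normal(size=(4, 8))                 # values stored in the RRAM array
threshold, lr = 0.01, 0.1                         # preset threshold and rate (assumed)

x = rng.normal(size=8)                            # neuron input values
y = filters @ x                                   # neuron output values (crossbar multiply)
error = rng.normal(size=4) * 0.2                  # stand-in backpropagation error values

update = lr * np.outer(error, x)                  # backpropagation update values
mask = np.abs(update) > threshold                 # compare with the preset threshold
filters[mask] += update[mask]                     # update the RRAM filters only where needed
print(f"{mask.mean():.0%} of cells rewritten")
```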
  • Publication number: 20090030860
    Abstract: A system for routing business-to-business (“B2B”) messages includes a cyclical neural network. The cyclical neural network contains neurons for determining a needed destination of a message based on content type of the message, for example. Neurons are monitored to establish a “state of understanding” of the network during processing, and tags may be applied to messages upon a determination of the needed destination.
    Type: Application
    Filed: July 27, 2007
    Publication date: January 29, 2009
    Inventor: Gregory Robert Leitheiser
  • Patent number: 8676728
    Abstract: The location of a sound within a given spatial volume may be used in applications such as augmented reality environments. An artificial neural network processes time-difference-of-arrival (TDOA) data from a known microphone array to determine a spatial location of the sound. The neural network may be located locally or be available as a cloud service. The artificial neural network is trained with perturbed and non-perturbed TDOA data.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: March 18, 2014
    Assignee: Rawles LLC
    Inventors: Kavitha Velusamy, Edward Dietz Crump
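A hedged one-dimensional sketch of the idea: a simple regressor (standing in for the artificial neural network) maps the time difference of arrival at a two-microphone array to a source position, and the training set mixes non-perturbed and noise-perturbed TDOA values. The geometry, noise scale, and polynomial model are assumptions.

```python
import numpy as np

# Toy geometry: two microphones 0.5 m apart, sound sources on a parallel line 1 m away.
rng = np.random.default_rng(8)
c, half_span = 343.0, 0.25                       # speed of sound (m/s), half mic spacing (m)

positions = rng.uniform(-1.0, 1.0, size=200)     # source x-positions (m)
d1 = np.sqrt((positions + half_span) ** 2 + 1.0)
d2 = np.sqrt((positions - half_span) ** 2 + 1.0)
tdoa = (d1 - d2) / c                             # time difference of arrival (s)

perturbed = tdoa + rng.normal(scale=2e-6, size=tdoa.shape)   # perturbed TDOA data
X = np.concatenate([tdoa, perturbed])            # non-perturbed and perturbed training data
y = np.concatenate([positions, positions])

coeffs = np.polyfit(X, y, deg=3)                 # stand-in for the trained network
pred = np.polyval(coeffs, tdoa)
print(f"mean |position error|: {np.mean(np.abs(pred - positions)):.3f} m")
```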