Patents by Inventor Laurence F. Wood

Laurence F. Wood has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11893488
    Abstract: Provided are continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems, designated “human affect computer modeling” (HACM) or “affective neuron” (AN), and, more particularly, AI methods, systems and devices that can recognize, interpret, process and simulate human reactions and affects, such as emotional responses to internal and external sensory stimuli, and that provide real-time reinforcement learning modeling reproducing human affects and/or reactions, wherein the human affect computer modeling (HACM) can be used singularly or collectively for modeling and predicting complex human reactions and affects.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: February 6, 2024
    Assignee: LARSX
    Inventors: Laurence F. Wood, Lisa S. Wood
  • Publication number: 20220101136
    Abstract: Provided are continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems, designated “human affect computer modeling” (HACM) or “affective neuron” (AN), and, more particularly, AI methods, systems and devices that can recognize, interpret, process and simulate human reactions and affects, such as emotional responses to internal and external sensory stimuli, and that provide real-time reinforcement learning modeling reproducing human affects and/or reactions, wherein the human affect computer modeling (HACM) can be used singularly or collectively for modeling and predicting complex human reactions and affects.
    Type: Application
    Filed: October 4, 2021
    Publication date: March 31, 2022
    Applicant: LARSX
    Inventors: Laurence F. Wood, Lisa S. Wood
  • Patent number: 11138503
    Abstract: Continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems, designated human affect computer modeling (HACM) or affective neuron (AN), and, more particularly, AI methods, systems and devices that can recognize, interpret, process and simulate human reactions and affects, such as emotional responses to internal and external sensory stimuli, and that provide real-time reinforcement learning modeling reproducing human affects and/or reactions, wherein the human affect computer modeling (HACM) can be used singularly or collectively to model and predict complex human reactions and affects.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: October 5, 2021
    Assignee: LARSX
    Inventors: Laurence F. Wood, Lisa S. Wood
  • Publication number: 20210034959
    Abstract: Continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems, designated human affect computer modeling (HACM) or affective neuron (AN), and, more particularly, AI methods, systems and devices that can recognize, interpret, process and simulate human reactions and affects, such as emotional responses to internal and external sensory stimuli, and that provide real-time reinforcement learning modeling reproducing human affects and/or reactions, wherein the human affect computer modeling (HACM) can be used singularly or collectively to model and predict complex human reactions and affects.
    Type: Application
    Filed: March 22, 2018
    Publication date: February 4, 2021
    Applicant: LARSX
    Inventors: Laurence F. Wood, Lisa S. Wood
  • Patent number: 4914603
    Abstract: A method of training an artificial neural network uses a computer configured as a plurality of interconnected neural units arranged in a layered network including an input layer having a network input, and an output layer having a network output. A neural unit has a first subunit and a second subunit. The first subunit has one or more first inputs and a corresponding first set of variables for operating upon the first inputs to provide a first output. The first set of variables can change in response to feedback representing differences between desired network outputs for selected network inputs and actual network outputs. The second subunit has a plurality of second inputs and a corresponding second set of variables for operating upon said second inputs to provide a second output. The second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. (A rough code sketch of this two-subunit unit appears after the listing.)
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: April 3, 1990
    Assignee: GTE Laboratories Incorporated
    Inventor: Laurence F. Wood
  • Patent number: 4912653
    Abstract: A trainable artificial neural network includes a computer configured as a plurality of interconnected neural units arranged in a layered network. An input layer has a network input and an output layer has a network output. A neural unit has a first subunit and a second subunit, with the first subunit having one or more first inputs and a corresponding first set of variables for operating upon said first inputs to provide a first output. The first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. The second subunit has a plurality of second inputs and a corresponding second set of variables for operating upon said second inputs to provide a second output. The second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs.
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Laboratories Incorporated
    Inventor: Laurence F. Wood
  • Patent number: 4912649
    Abstract: A method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. Each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. A plurality of examples are serially provided to the network input and the network output is observed. The computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output. The examples are iterated while those values which change are identified. The examples are then reiterated and the algorithm is applied only to those values which changed in a previous iteration. (A code sketch of this selective-update idea appears after the listing.)
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Government Systems Corporation
    Inventor: Laurence F. Wood
  • Patent number: 4912652
    Abstract: A method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. Each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between binary 1 and binary 0. A plurality of training examples is serially provided to the network input and the network output is observed. The computer is programmed with a back propagation algorithm for changing each set of variables in response to feedback representing differences between the network output for each example and the desired output. The examples are iterated while the output of a unit is observed.
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Laboratories Incorporated
    Inventor: Laurence F. Wood
  • Patent number: 4912651
    Abstract: A method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. Each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. A plurality of examples are serially provided to the network input and the network output is observed. The computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output. The examples are iterated until the signs of the outputs of the units of the output layer converge. Then each set of variables is multiplied by a multiplier. The examples are reiterated until the magnitudes of the outputs of the units of the output layer converge. (A code sketch of this two-phase scheme appears after the listing.)
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Laboratories Incorporated
    Inventors: Laurence F. Wood, Michael J. Grimaldi, Eric D. Peterson
  • Patent number: 4912647
    Abstract: A method of training an artificial neural network uses a first computer configured as a plurality of interconnected neural units arranged in a network. A neural unit has a first subunit and a second subunit. The first subunit has first inputs and a corresponding first set of variables for operating upon the first inputs to provide a first output during a forward pass. The first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. The second subunit has a plurality of second inputs and a corresponding second set of variables for operating upon the second inputs to provide a second output. The second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. The computer provides an activating variable representing the difference between the current second output and previous second outputs.
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Laboratories Incorporated
    Inventor: Laurence F. Wood
  • Patent number: 4912655
    Abstract: A method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. Each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. The computer is programmed with a back propagation algorithm. A plurality of examples are serially provided to the network input and the network output is observed. The examples are iterated and proposed changes to each set of variables are calculated in response to feedback representing differences between the network output for each example and the desired output. The proposed changes are accumulated for a predetermined number of iterations, whereupon the accumulated proposed changes are added to the set of variables. (A code sketch of this change-accumulation scheme appears after the listing.)
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Laboratories Incorporated
    Inventor: Laurence F. Wood
  • Patent number: 4912654
    Abstract: A method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. Each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between positive 1 and negative 1. A plurality of examples are serially provided to the network input and the network output is observed. The computer is programmed with a back propagation algorithm for calculating changes to the sets of variables in response to feedback representing differences between the network output for each example and the desired output. The absolute magnitude of the product of an input and the corresponding output of a unit is calculated.
    Type: Grant
    Filed: December 14, 1988
    Date of Patent: March 27, 1990
    Assignee: GTE Government Systems Corporation
    Inventor: Laurence F. Wood
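
The abstracts of the 1988-filed patents above describe concrete training mechanisms, and a few of them are easy to picture in code. The short Python sketches below are rough interpretations only, not the patented implementations; every function, class, and parameter name in them is invented for illustration. The first sketch corresponds to the two-subunit neural unit of patents 4,914,603 and 4,912,653: each subunit carries its own set of trainable variables and both sets are adjusted from output-error feedback. Treating the two subunits identically, with a weighted tanh sum per subunit, is an assumption made here for brevity.

```python
import numpy as np

class TwoSubunitNeuron:
    """Illustrative neural unit with two subunits, each holding its own
    set of trainable variables (weights), both adjusted from the same
    output-error feedback. A sketch only, not the patented design."""

    def __init__(self, n_first_inputs, n_second_inputs, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=n_first_inputs)   # first set of variables
        self.w2 = rng.normal(size=n_second_inputs)  # second set of variables
        self.lr = lr

    def forward(self, x1, x2):
        # Each subunit operates on its own inputs to provide an output.
        self.x1, self.x2 = np.asarray(x1), np.asarray(x2)
        self.out1 = np.tanh(self.w1 @ self.x1)
        self.out2 = np.tanh(self.w2 @ self.x2)
        return self.out1, self.out2

    def feedback(self, desired, actual):
        # Both variable sets change in response to the difference between
        # a desired network output and the actual network output.
        err = desired - actual
        self.w1 += self.lr * err * self.x1
        self.w2 += self.lr * err * self.x2


# Tiny usage example
unit = TwoSubunitNeuron(n_first_inputs=3, n_second_inputs=2)
o1, o2 = unit.forward([0.5, -0.2, 0.1], [1.0, 0.3])
unit.feedback(desired=1.0, actual=o1)
```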
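Patent 4,912,649 restricts later training passes to only those values that changed in a previous iteration. The sketch below captures that idea under simplifying assumptions: a single-layer delta rule stands in for full back propagation, and the change threshold `change_tol` is an invented parameter.

```python
import numpy as np

def train_with_selective_updates(X, y, epochs=20, lr=0.1, change_tol=1e-4):
    """Sketch: after each full pass, keep adjusting only the weights
    that actually changed (beyond change_tol) on that pass."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    active = np.ones_like(w, dtype=bool)        # which weights we still update
    for _ in range(epochs):
        w_before = w.copy()
        for xi, yi in zip(X, y):
            out = np.tanh(w @ xi)
            delta = lr * (yi - out) * (1 - out**2) * xi
            w[active] += delta[active]          # apply updates only to active weights
        # Identify which values changed during this iteration.
        active = np.abs(w - w_before) > change_tol
        if not active.any():                    # nothing changed: training has settled
            break
    return w

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])                  # OR-like targets
w = train_with_selective_updates(X, y)
```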
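Patent 4,912,651 iterates the examples until the signs of the output-layer outputs converge, multiplies each set of variables by a multiplier, and then reiterates until the magnitudes converge. A rough sketch of that two-phase schedule, again with a single tanh unit and a delta rule in place of back propagation; the multiplier value and tolerances are assumptions.

```python
import numpy as np

def train_sign_then_magnitude(X, y, lr=0.05, multiplier=3.0,
                              max_epochs=500, mag_tol=1e-3):
    """Sketch: phase 1 runs until output signs converge, then the weights
    are scaled by `multiplier`; phase 2 runs until output magnitudes settle."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])

    def epoch(w):
        for xi, yi in zip(X, y):
            out = np.tanh(w @ xi)
            w = w + lr * (yi - out) * (1 - out**2) * xi
        return w, np.tanh(X @ w)

    # Phase 1: iterate until the signs of the outputs converge.
    prev_signs = None
    for _ in range(max_epochs):
        w, outs = epoch(w)
        signs = np.sign(outs)
        if prev_signs is not None and np.array_equal(signs, prev_signs):
            break
        prev_signs = signs

    # Multiply the set of variables by a multiplier (sharpens the outputs).
    w = w * multiplier

    # Phase 2: reiterate until the magnitudes of the outputs converge.
    prev_outs = np.tanh(X @ w)
    for _ in range(max_epochs):
        w, outs = epoch(w)
        if np.max(np.abs(outs - prev_outs)) < mag_tol:
            break
        prev_outs = outs
    return w

X = np.array([[1., 0.], [0., 1.], [1., 1.], [-1., -1.]])
y = np.array([1., 1., 1., -1.])
w = train_sign_then_magnitude(X, y)
```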
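Patent 4,912,655 calculates proposed changes on every iteration but accumulates them for a predetermined number of iterations before adding them to the variables, which resembles what is now commonly called gradient accumulation. A minimal sketch, with a delta rule standing in for back propagation and `accumulate_every` as an assumed name for the predetermined count.

```python
import numpy as np

def train_with_accumulated_changes(X, y, epochs=30, lr=0.1, accumulate_every=5):
    """Sketch: proposed weight changes are calculated each iteration but
    only added to the weights after `accumulate_every` iterations."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    pending = np.zeros_like(w)                  # accumulated proposed changes
    for it in range(1, epochs + 1):
        for xi, yi in zip(X, y):
            out = np.tanh(w @ xi)
            # Proposed change from the error feedback, not yet applied.
            pending += lr * (yi - out) * (1 - out**2) * xi
        if it % accumulate_every == 0:
            w += pending                        # add the accumulated changes
            pending[:] = 0.0
    return w

X = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.]])
y = np.array([1., 1., 1., -1.])
w = train_with_accumulated_changes(X, y)
```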