Patents by Inventor John E. Mixter

John E. Mixter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11640534
    Abstract: Backpropagation of an artificial neural network can be triggered, or not, based on input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. The classifier layer perceptrons that have the largest output values after the input data have been forward propagated are identified. The output difference between those perceptrons is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: May 2, 2023
    Assignee: Raytheon Company
    Inventor: John E. Mixter
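
As a rough illustration of the trigger described in the abstract above, the decision reduces to comparing the gap between the two largest classifier outputs against a threshold. The following Python sketch is illustrative only; the function name and threshold value are not taken from the patent:

```python
def should_backpropagate(outputs, threshold):
    """Trigger backpropagation only when the gap between the two largest
    classifier-layer output values fails to exceed the threshold,
    i.e. when the prediction is ambiguous."""
    second_largest, largest = sorted(outputs)[-2:]
    return (largest - second_largest) <= threshold
```

Confident predictions (a large gap between the top two outputs) skip the weight update; near-ties trigger it, concentrating training effort on inputs the network finds hard.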
  • Publication number: 20230054360
    Abstract: A machine learning system identifies functions that have similar weighted values, and the system determines a representative weighted value for these functions. The system then calculates a summation of the input values for the functions, and multiplies the summation by the representative weighted value, which generates an output for the functions.
    Type: Application
    Filed: August 17, 2021
    Publication date: February 23, 2023
    Inventor: John E. Mixter
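
The optimization in the abstract above trades several multiplications for one: instead of multiplying each input by its own (nearly identical) weight, sum the inputs and multiply once by a representative weight. A minimal sketch, assuming the mean of the grouped weights serves as the representative value (the patent does not specify how the representative is chosen):

```python
def representative_output(inputs, weights):
    """Replace per-input multiplications with a single multiply: take a
    representative value for a group of similar weights (here, their mean)
    and multiply it by the sum of the inputs."""
    representative = sum(weights) / len(weights)
    return representative * sum(inputs)
```

For weights clustered around 0.5, this yields nearly the same output as the full dot product while performing one multiplication instead of N.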
  • Publication number: 20230054467
    Abstract: A system and method are provided for classifying an unknown object with a trained neural network. The method includes receiving unknown input data that is not classified. For each class of a set of candidate classes for which the neural network has been trained, the method further includes retraining the neural network from its trained state until a prediction can be made for classifying the input data to the class by an inference procedure, and determining an amount of effort exerted in retraining for the class. The method further includes selecting the class of the set of candidate classes for which the least amount of effort was exerted and outputting the selected class as the predicted class to which the input data is predicted to be classified.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Applicant: Raytheon Company
    Inventor: John E. Mixter
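
The abstract above uses retraining effort as a classification signal: the class the trained network is "closest" to accepting requires the fewest retraining steps. A toy Python sketch, assuming effort is counted in training steps and that each candidate retraining restarts from a copy of the trained state (the function names, the step-count effort measure, and the toy network interface are all illustrative):

```python
import copy

def classify_by_retraining_effort(network, x, classes, predicts, train_step, max_steps=100):
    """For each candidate class, retrain a copy of the trained network toward
    that class and count the steps until the inference procedure predicts it;
    return the class needing the least effort, plus the per-class efforts."""
    efforts = {}
    for c in classes:
        net = copy.deepcopy(network)   # restart from the trained state
        steps = 0
        while not predicts(net, x, c) and steps < max_steps:
            train_step(net, x, c)
            steps += 1
        efforts[c] = steps
    return min(efforts, key=efforts.get), efforts
```

With a toy "network" that is just a dict of class scores, a class the network already predicts costs zero steps, while a competing class costs however many updates it takes to overtake the leader.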
  • Patent number: 11537848
    Abstract: Classes are identified in a dataset, and an independent artificial neural network is created for each class in the dataset. Thereafter, all classes in the dataset are provided to each independent artificial neural network. Each independent artificial neural network is separately trained to respond to a single particular class in the dataset and to reject all other classes in the dataset. Output from each independent artificial neural network is provided to a combining classifier, and the combining classifier is trained to identify all classes in the dataset based on the output of all the independent artificial neural networks.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: December 27, 2022
    Assignee: Raytheon Company
    Inventor: John E. Mixter
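
The architecture in the abstract above is a one-versus-rest ensemble: each independent network scores only its own class, and a combining classifier maps the score vector to a final label. A minimal sketch (the argmax combiner stands in for the trained combining classifier, which the patent describes as learned):

```python
def ensemble_predict(per_class_models, combiner, x):
    """Score the input with every independent one-class model, then let the
    combining classifier map the vector of scores to a class label."""
    scores = [model(x) for model in per_class_models]
    return combiner(scores)

def argmax_combiner(scores):
    """Illustrative combiner: pick the class whose dedicated model responded
    most strongly (a real combiner would itself be trained on these scores)."""
    return max(range(len(scores)), key=scores.__getitem__)
```

Here each list element is a model trained to accept one class and reject the rest, so the stacked outputs form the combining classifier's input.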
  • Patent number: 11475311
    Abstract: An artificial neural network is implemented via an instruction stream. A header of the instruction stream and a format for instructions in the instruction stream are defined. The format includes an opcode, an address, and data. The instruction stream is created using the header, the opcode, the address, and the data. The artificial neural network is implemented by providing the instruction stream to a computer processor for execution of the instruction stream.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: October 18, 2022
    Assignee: Raytheon Company
    Inventors: John E. Mixter, David R. Mucha
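
The abstract above describes serializing a network as a header followed by fixed-format (opcode, address, data) instructions. A Python sketch of such an encoding; the magic tag, field widths, and byte layout are assumptions for illustration, not the patent's actual format:

```python
import struct

HEADER_FMT = "<4sI"   # assumed: 4-byte magic tag + instruction count
INSTR_FMT = "<BIf"    # assumed: opcode (u8), address (u32), data (f32)

def build_stream(instructions):
    """Pack a header followed by (opcode, address, data) instructions
    into one byte stream a processor could consume sequentially."""
    stream = struct.pack(HEADER_FMT, b"ANN1", len(instructions))
    for opcode, address, data in instructions:
        stream += struct.pack(INSTR_FMT, opcode, address, data)
    return stream

def parse_stream(stream):
    """Read the header, then decode each fixed-size instruction in order."""
    magic, count = struct.unpack_from(HEADER_FMT, stream, 0)
    offset = struct.calcsize(HEADER_FMT)
    size = struct.calcsize(INSTR_FMT)
    return magic, [struct.unpack_from(INSTR_FMT, stream, offset + i * size)
                   for i in range(count)]
```

Encoding the network as a flat instruction stream lets a conventional processor (or DMA engine) execute it by walking the stream, with no pointer chasing at run time.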
  • Patent number: 11468332
    Abstract: Processing circuitry for a deep neural network can include input/output ports, and a plurality of neural network layers coupled in order from a first layer to a last layer, each of the plurality of neural network layers including a plurality of weighted computational units having circuitry to interleave forward propagation of computational unit input values from the first layer to the last layer and backward propagation of output error values from the last layer to the first layer.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 11, 2022
    Assignee: Raytheon Company
    Inventors: John R. Goulding, John E. Mixter, David R. Mucha, Troy A. Gangwer, Ryan D. Silva
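
The circuitry in the abstract above overlaps the forward pass (first layer to last) with the backward pass (last layer to first). A highly simplified schedule sketch, assuming one forward micro-step and one backward micro-step alternate per layer (the real interleaving is a hardware pipeline, not software):

```python
def interleave(num_layers):
    """Alternate a forward micro-step (layers first to last) with a backward
    micro-step (layers last to first), sketching how the two passes can be
    overlapped instead of run strictly one after the other."""
    schedule = []
    for i in range(num_layers):
        schedule.append(("forward", i))
        schedule.append(("backward", num_layers - 1 - i))
    return schedule
```

In a strictly sequential design the backward pass would wait for the entire forward pass to finish; interleaving keeps every layer's computational units busy in both directions.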
  • Patent number: 11468330
    Abstract: A method to grow an artificial neural network is disclosed. A seed neural network is trained on all classes in a dataset. All classes in the dataset are applied to the seed network, and average output values of the seed network are calculated. Class members that are nearest to and furthest from the average output values are selected, the class members are applied to the seed network, and a standard deviation is calculated. Perceptrons are added to the seed network, and inputs of the added perceptrons are connected to the seed layer based on the calculated standard deviation. A classifier is then added to the outputs of the added perceptrons, and the seed network and the added perceptrons are trained using all members in the dataset.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: October 11, 2022
    Assignee: Raytheon Company
    Inventor: John E. Mixter
  • Patent number: 11308393
    Abstract: A hardware-based artificial neural network receives data patterns from a source. The hardware-based artificial neural network is trained using the data patterns such that it learns normal data patterns. A new data pattern is identified when the data pattern deviates from the normal data patterns. The hardware-based artificial neural network is then trained using the new data pattern such that the hardware-based artificial neural network learns the new data pattern by altering one or more synaptic weights associated with the new data pattern. The rate at which the hardware-based artificial neural network alters the one or more synaptic weights is monitored, wherein a training rate that is greater than a threshold indicates that the new data pattern is malicious.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: April 19, 2022
    Assignee: Raytheon Company
    Inventor: John E. Mixter
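
The detection rule in the abstract above is that a pattern forcing unusually large weight changes is suspect. A minimal sketch, assuming the "training rate" is measured as the mean absolute synaptic-weight change per update (the patent does not commit to a specific metric):

```python
def is_malicious(weights_before, weights_after, threshold):
    """Measure the training rate as the mean absolute synaptic-weight change
    caused by learning a new pattern; a rate above the threshold flags the
    pattern that caused it as potentially malicious."""
    changes = [abs(a - b) for a, b in zip(weights_after, weights_before)]
    training_rate = sum(changes) / len(changes)
    return training_rate > threshold
```

A pattern close to the learned normal patterns nudges the weights only slightly, while an out-of-distribution (potentially hostile) pattern forces large corrections and trips the threshold.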
  • Publication number: 20210150364
    Abstract: Backpropagation of an artificial neural network can be triggered, or not, based on input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. The classifier layer perceptrons that have the largest output values after the input data have been forward propagated are identified. The output difference between those perceptrons is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Inventor: John E. Mixter
  • Publication number: 20210142151
    Abstract: An artificial neural network receives data for the inputs of a perceptron in the artificial neural network. The network determines an average of the data for each of the inputs of the perceptron, determines a standard deviation of the data about the average for each of the inputs, and determines an average of those standard deviations for the perceptron. The network then sets a learning rate for the perceptron equal to the average of the standard deviations, and trains the artificial neural network using that learning rate for the perceptron.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventor: John E. Mixter
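
The rule in the abstract above sets each perceptron's learning rate from the variability of its own inputs: compute a standard deviation per input, then average those across the perceptron. A minimal sketch (population standard deviation is assumed; the abstract does not say sample versus population):

```python
import statistics

def perceptron_learning_rate(input_samples):
    """input_samples[i] holds the observed data values for input i of the
    perceptron; the learning rate is the mean of the per-input standard
    deviations of those values."""
    stds = [statistics.pstdev(samples) for samples in input_samples]
    return sum(stds) / len(stds)
```

A perceptron fed highly variable inputs thus gets a larger learning rate than one whose inputs barely change.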
  • Publication number: 20210133579
    Abstract: An artificial neural network is implemented via an instruction stream. A header of the instruction stream and a format for instructions in the instruction stream are defined. The format includes an opcode, an address, and data. The instruction stream is created using the header, the opcode, the address, and the data. The artificial neural network is implemented by providing the instruction stream to a computer processor for execution of the instruction stream.
    Type: Application
    Filed: October 30, 2019
    Publication date: May 6, 2021
    Inventors: John E. Mixter, David R. Mucha
  • Patent number: 10872290
    Abstract: A dynamically adaptive neural network processing system includes memory to store instructions representing a neural network in contiguous blocks, hardware acceleration (HA) circuitry to execute the neural network, direct memory access (DMA) circuitry to transfer the instructions from the contiguous blocks of the memory to the HA circuitry, and a central processing unit (CPU) to dynamically modify a linked list representing the neural network during execution of the neural network by the HA circuitry to perform machine learning, and to generate the instructions in the contiguous blocks of the memory based on the linked list.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: December 22, 2020
    Assignee: Raytheon Company
    Inventors: John R. Goulding, John E. Mixter, David R. Mucha
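
In the system described above, the CPU edits a linked-list view of the network and then emits it as contiguous instruction blocks for DMA transfer to the accelerator. A toy sketch of that flattening step (the node fields and instruction strings are illustrative, not the patent's data layout):

```python
class Node:
    """One operation of the network in the CPU-side linked list, which the
    CPU can splice and modify between executions."""
    def __init__(self, instruction, nxt=None):
        self.instruction = instruction
        self.next = nxt

def flatten_to_contiguous(head):
    """Walk the linked list and emit its instructions as one contiguous
    block, ready for a DMA engine to stream to the accelerator."""
    block = []
    node = head
    while node is not None:
        block.append(node.instruction)
        node = node.next
    return block
```

The linked list gives the CPU cheap structural edits (grow, prune, rewire), while the contiguous copy gives the DMA engine and accelerator a layout they can stream without pointer chasing.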
  • Publication number: 20200042878
    Abstract: A method to grow an artificial neural network is disclosed. A seed neural network is trained on all classes in a dataset. All classes in the dataset are applied to the seed network, and average output values of the seed network are calculated. Class members that are nearest to and furthest from the average output values are selected, the class members are applied to the seed network, and a standard deviation is calculated. Perceptrons are added to the seed network, and inputs of the added perceptrons are connected to the seed layer based on the calculated standard deviation. A classifier is then added to the outputs of the added perceptrons, and the seed network and the added perceptrons are trained using all members in the dataset.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 6, 2020
    Inventor: John E. Mixter
  • Publication number: 20200034700
    Abstract: A hardware-based artificial neural network receives data patterns from a source. The hardware-based artificial neural network is trained using the data patterns such that it learns normal data patterns. A new data pattern is identified when the data pattern deviates from the normal data patterns. The hardware-based artificial neural network is then trained using the new data pattern such that the hardware-based artificial neural network learns the new data pattern by altering one or more synaptic weights associated with the new data pattern. The rate at which the hardware-based artificial neural network alters the one or more synaptic weights is monitored, wherein a training rate that is greater than a threshold indicates that the new data pattern is malicious.
    Type: Application
    Filed: July 26, 2018
    Publication date: January 30, 2020
    Inventor: John E. Mixter
  • Publication number: 20200034691
    Abstract: Classes are identified in a dataset, and an independent artificial neural network is created for each class in the dataset. Thereafter, all classes in the dataset are provided to each independent artificial neural network. Each independent artificial neural network is separately trained to respond to a single particular class in the dataset and to reject all other classes in the dataset. Output from each independent artificial neural network is provided to a combining classifier, and the combining classifier is trained to identify all classes in the dataset based on the output of all the independent artificial neural networks.
    Type: Application
    Filed: July 26, 2018
    Publication date: January 30, 2020
    Inventor: John E. Mixter
  • Publication number: 20190147342
    Abstract: Processing circuitry for a deep neural network can include input/output ports, and a plurality of neural network layers coupled in order from a first layer to a last layer, each of the plurality of neural network layers including a plurality of weighted computational units having circuitry to interleave forward propagation of computational unit input values from the first layer to the last layer and backward propagation of output error values from the last layer to the first layer.
    Type: Application
    Filed: November 13, 2017
    Publication date: May 16, 2019
    Inventors: John R. Goulding, John E. Mixter, David R. Mucha, Troy A. Gangwer, Ryan D. Silva
  • Publication number: 20190087708
    Abstract: A dynamically adaptive neural network processing system includes memory to store instructions representing a neural network in contiguous blocks, hardware acceleration (HA) circuitry to execute the neural network, direct memory access (DMA) circuitry to transfer the instructions from the contiguous blocks of the memory to the HA circuitry, and a central processing unit (CPU) to dynamically modify a linked list representing the neural network during execution of the neural network by the HA circuitry to perform machine learning, and to generate the instructions in the contiguous blocks of the memory based on the linked list.
    Type: Application
    Filed: September 21, 2017
    Publication date: March 21, 2019
    Inventors: John R. Goulding, John E. Mixter, David R. Mucha