Patents by Inventor John E. Mixter
John E. Mixter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12271806
  Abstract: An artificial neural network receives data for the inputs of a perceptron in the artificial neural network. The network determines an average of the data for each of the inputs of the perceptron, determines a standard deviation of the average for each of the inputs of the perceptron, and determines an average of the standard deviations for the perceptron. The network then sets a learning rate for the perceptron equal to the average of the standard deviations, and trains the artificial neural network using the learning rate for the perceptron.
  Type: Grant
  Filed: November 8, 2019
  Date of Patent: April 8, 2025
  Assignee: Raytheon Company
  Inventor: John E. Mixter
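The abstract above describes a simple statistical rule for choosing a per-perceptron learning rate. A minimal sketch, assuming the input data arrive as a (samples x inputs) array; the function name is mine, not the patent's:

```python
import numpy as np

def perceptron_learning_rate(input_batch):
    """For one perceptron: average each input over the batch, take the
    standard deviation of the data around that average for each input,
    then average the per-input standard deviations; that average becomes
    the perceptron's learning rate."""
    per_input_std = input_batch.std(axis=0)  # one std dev per input
    return float(per_input_std.mean())       # learning rate for this perceptron
```

Inputs with widely varying data would thus get a larger learning rate than inputs that are nearly constant.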
- Patent number: 11640534
  Abstract: Backpropagation of an artificial neural network can be triggered based on input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. Classifier layer perceptrons that have the largest output values after the input data have been forward propagated through the artificial neural network are identified. The output difference between the classifier layer perceptrons that have the largest output values is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
  Type: Grant
  Filed: November 15, 2019
  Date of Patent: May 2, 2023
  Assignee: Raytheon Company
  Inventor: John E. Mixter
- Publication number: 20230054467
  Abstract: A system and method are provided for classifying an unknown object by a trained neural network. The method includes receiving unknown input data that is not classified. For each class of a set of candidate classes for which the neural network has been trained, the method further includes retraining the neural network from its trained state until a prediction can be made for classifying the input data to the class by an inference procedure and determining an amount of effort exerted for retraining the class. The method further includes selecting the class of the set of candidate classes for which the least amount of effort was exerted and outputting the selected class as the predicted class to which the input data is predicted to be classified.
  Type: Application
  Filed: August 23, 2021
  Publication date: February 23, 2023
  Applicant: Raytheon Company
  Inventor: John E. Mixter
- Publication number: 20230054360
  Abstract: A machine learning system identifies functions that have similar weighted values, and the system determines a representative weighted value for these functions. The system then calculates a summation of the input values for the functions, and multiplies the summation by the representative weighted value, which generates an output for the functions.
  Type: Application
  Filed: August 17, 2021
  Publication date: February 23, 2023
  Inventor: John E. Mixter
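The weight-sharing idea in the abstract above (sum the inputs of functions whose weights are similar, then multiply once by a representative weight) can be sketched as below. Grouping by rounding against a tolerance is an illustrative choice of mine; the publication does not specify how similarity is decided:

```python
def shared_weight_output(inputs, weights, tolerance=1e-2):
    """Multiply-accumulate terms whose weights fall in the same
    tolerance bucket are grouped: each group's inputs are summed once
    and multiplied by a single representative weight, trading many
    multiplications for additions."""
    groups = {}
    for x, w in zip(inputs, weights):
        key = round(w / tolerance)       # bucket similar weights together
        total, rep = groups.get(key, (0.0, w))
        groups[key] = (total + x, rep)   # sum inputs, keep one representative weight
    return sum(total * rep for total, rep in groups.values())
```

For n terms sharing one weight this replaces n multiplications with n-1 additions and a single multiplication.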
- Patent number: 11537848
  Abstract: Classes are identified in a dataset, and an independent artificial neural network is created for each class in the dataset. Thereafter, all classes in the dataset are provided to each independent artificial neural network. Each independent artificial neural network is separately trained to respond to a single particular class in the dataset and to reject all other classes in the dataset. Output from each independent artificial neural network is provided to a combining classifier, and the combining classifier is trained to identify all classes in the dataset based on the output of all the independent artificial neural networks.
  Type: Grant
  Filed: July 26, 2018
  Date of Patent: December 27, 2022
  Assignee: Raytheon Company
  Inventor: John E. Mixter
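Structurally, the abstract above describes a one-vs-rest ensemble whose per-class outputs feed a trained combining classifier. A minimal sketch, assuming `make_net` and `make_combiner` are caller-supplied factories returning objects with `fit`/`predict` (these interfaces are my assumptions):

```python
import numpy as np

class OneVsRestEnsemble:
    """One independent network per class, each trained to respond to its
    own class and reject the rest; a combining classifier maps the
    stacked per-class outputs to a final label."""
    def __init__(self, classes, make_net, make_combiner):
        self.nets = {c: make_net() for c in classes}
        self.combiner = make_combiner()

    def fit(self, X, y):
        for c, net in self.nets.items():
            net.fit(X, (y == c).astype(int))  # this class vs. all others
        stacked = np.column_stack([n.predict(X) for n in self.nets.values()])
        self.combiner.fit(stacked, y)         # combiner learns from all net outputs

    def predict(self, X):
        stacked = np.column_stack([n.predict(X) for n in self.nets.values()])
        return self.combiner.predict(stacked)
```

Each independent network solves an easier binary problem, and the combiner arbitrates their possibly conflicting votes.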
- Patent number: 11475311
  Abstract: An artificial neural network is implemented via an instruction stream. A header of the instruction stream and a format for instructions in the instruction stream are defined. The format includes an opcode, an address, and data. The instruction stream is created using the header, the opcode, the address, and the data. The artificial neural network is implemented by providing the instruction stream to a computer processor for execution of the instruction stream.
  Type: Grant
  Filed: October 30, 2019
  Date of Patent: October 18, 2022
  Assignee: Raytheon Company
  Inventors: John E. Mixter, David R. Mucha
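The abstract above defines a stream as a header followed by instructions, each carrying an opcode, an address, and data. One illustrative serialization (the magic value, field widths, and byte order here are my assumptions, not the patent's):

```python
import struct

HEADER = struct.Struct("<4sI")  # magic bytes, instruction count
INSTR  = struct.Struct("<BIf")  # opcode (1 byte), address (4 bytes), data (float)

def build_stream(instructions):
    """Serialize (opcode, address, data) triples into one instruction
    stream that a processor could walk sequentially after the header."""
    blob = HEADER.pack(b"ANN1", len(instructions))
    for op, addr, data in instructions:
        blob += INSTR.pack(op, addr, data)
    return blob
```

A fixed-width format like this lets the executing processor index any instruction directly from the header's count.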
- Patent number: 11468330
  Abstract: A method to grow an artificial neural network is disclosed. A seed neural network is trained on all classes in a dataset. All classes in the dataset are applied to the seed network, and average output values of the seed network are calculated. Class members that are nearest to and furthest from the average output values are selected, the class members are applied to the seed network, and a standard deviation is calculated. Perceptrons are added to the seed network, and inputs of the added perceptrons are connected to the seed layer based on the calculated standard deviation. A classifier is then added to the outputs of the added perceptrons, and the seed network and the added perceptrons are trained using all members in the dataset.
  Type: Grant
  Filed: August 3, 2018
  Date of Patent: October 11, 2022
  Assignee: Raytheon Company
  Inventor: John E. Mixter
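The selection step in the abstract above (find class members nearest to and furthest from the average output, then compute a standard deviation to guide new connections) can be sketched as below. The distance metric and the exact quantity whose deviation is taken are my assumptions:

```python
import numpy as np

def select_extremes_and_std(outputs):
    """Given seed-network outputs for one class (rows = class members),
    return the indices of the members nearest to and furthest from the
    class's average output, plus the standard deviation of those two
    members' outputs, which guides where added perceptrons connect."""
    avg = outputs.mean(axis=0)
    dists = np.linalg.norm(outputs - avg, axis=1)
    nearest, furthest = int(dists.argmin()), int(dists.argmax())
    std = outputs[[nearest, furthest]].std()
    return nearest, furthest, std
```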
- Patent number: 11468332
  Abstract: Processing circuitry for a deep neural network can include input/output ports, and a plurality of neural network layers coupled in order from a first layer to a last layer, each of the plurality of neural network layers including a plurality of weighted computational units having circuitry to interleave forward propagation of computational unit input values from the first layer to the last layer and backward propagation of output error values from the last layer to the first layer.
  Type: Grant
  Filed: November 13, 2017
  Date of Patent: October 11, 2022
  Assignee: Raytheon Company
  Inventors: John R. Goulding, John E. Mixter, David R. Mucha, Troy A. Gangwer, Ryan D. Silva
- Patent number: 11308393
  Abstract: A hardware-based artificial neural network receives data patterns from a source. The hardware-based artificial neural network is trained using the data patterns such that it learns normal data patterns. A new data pattern is identified when the data pattern deviates from the normal data patterns. The hardware-based artificial neural network is then trained using the new data pattern such that the hardware-based artificial neural network learns the new data pattern by altering one or more synaptic weights associated with the new data pattern. The rate at which the hardware-based artificial neural network alters the one or more synaptic weights is monitored, wherein a training rate that is greater than a threshold indicates that the new data pattern is malicious.
  Type: Grant
  Filed: July 26, 2018
  Date of Patent: April 19, 2022
  Assignee: Raytheon Company
  Inventor: John E. Mixter
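The detection rule in the abstract above rests on one measurement: how hard the network has to adapt to absorb a new pattern. A minimal sketch, taking mean absolute synaptic-weight change as the training rate (that specific measure is my assumption):

```python
import numpy as np

def pattern_is_malicious(weights_before, weights_after, threshold):
    """The network trains online on each new pattern. A pattern far from
    anything seen before forces large synaptic-weight changes; when the
    training rate (mean absolute weight change here) exceeds the
    threshold, the pattern is flagged as malicious."""
    rate = np.abs(np.asarray(weights_after) -
                  np.asarray(weights_before)).mean()
    return rate > threshold
```

Benign patterns resemble the learned "normal" and barely move the weights; anomalous ones stand out by the learning effort they demand.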
- Publication number: 20210150364
  Abstract: Backpropagation of an artificial neural network can be triggered based on input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. Classifier layer perceptrons that have the largest output values after the input data have been forward propagated through the artificial neural network are identified. The output difference between the classifier layer perceptrons that have the largest output values is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
  Type: Application
  Filed: November 15, 2019
  Publication date: May 20, 2021
  Inventor: John E. Mixter
- Publication number: 20210142151
  Abstract: An artificial neural network receives data for the inputs of a perceptron in the artificial neural network. The network determines an average of the data for each of the inputs of the perceptron, determines a standard deviation of the average for each of the inputs of the perceptron, and determines an average of the standard deviations for the perceptron. The network then sets a learning rate for the perceptron equal to the average of the standard deviations, and trains the artificial neural network using the learning rate for the perceptron.
  Type: Application
  Filed: November 8, 2019
  Publication date: May 13, 2021
  Inventor: John E. Mixter
- Publication number: 20210133579
  Abstract: An artificial neural network is implemented via an instruction stream. A header of the instruction stream and a format for instructions in the instruction stream are defined. The format includes an opcode, an address, and data. The instruction stream is created using the header, the opcode, the address, and the data. The artificial neural network is implemented by providing the instruction stream to a computer processor for execution of the instruction stream.
  Type: Application
  Filed: October 30, 2019
  Publication date: May 6, 2021
  Inventors: John E. Mixter, David R. Mucha
- Patent number: 10872290
  Abstract: A dynamically adaptive neural network processing system includes memory to store instructions representing a neural network in contiguous blocks, hardware acceleration (HA) circuitry to execute the neural network, direct memory access (DMA) circuitry to transfer the instructions from the contiguous blocks of the memory to the HA circuitry, and a central processing unit (CPU) to dynamically modify a linked list representing the neural network during execution of the neural network by the HA circuitry to perform machine learning, and to generate the instructions in the contiguous blocks of the memory based on the linked list.
  Type: Grant
  Filed: September 21, 2017
  Date of Patent: December 22, 2020
  Assignee: Raytheon Company
  Inventors: John R. Goulding, John E. Mixter, David R. Mucha
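The central data-flow idea in the abstract above is that the CPU edits a flexible linked-list description of the network, then flattens it into contiguous memory for the DMA engine. A minimal sketch of that flattening step (the node layout is an illustrative assumption):

```python
class Node:
    """One operation in the linked-list description of the network,
    which the CPU can splice and modify dynamically."""
    def __init__(self, instr, nxt=None):
        self.instr, self.nxt = instr, nxt

def flatten_to_contiguous(head):
    """Walk the linked list and emit its instructions into one
    contiguous block, in list order, ready for DMA transfer to the
    hardware accelerator."""
    block = []
    node = head
    while node is not None:
        block.append(node.instr)
        node = node.nxt
    return block
```

The linked list gives the CPU cheap structural edits; the contiguous copy gives the DMA engine a simple streaming transfer.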
- Publication number: 20200042878
  Abstract: A method to grow an artificial neural network is disclosed. A seed neural network is trained on all classes in a dataset. All classes in the dataset are applied to the seed network, and average output values of the seed network are calculated. Class members that are nearest to and furthest from the average output values are selected, the class members are applied to the seed network, and a standard deviation is calculated. Perceptrons are added to the seed network, and inputs of the added perceptrons are connected to the seed layer based on the calculated standard deviation. A classifier is then added to the outputs of the added perceptrons, and the seed network and the added perceptrons are trained using all members in the dataset.
  Type: Application
  Filed: August 3, 2018
  Publication date: February 6, 2020
  Inventor: John E. Mixter
- Publication number: 20200034691
  Abstract: Classes are identified in a dataset, and an independent artificial neural network is created for each class in the dataset. Thereafter, all classes in the dataset are provided to each independent artificial neural network. Each independent artificial neural network is separately trained to respond to a single particular class in the dataset and to reject all other classes in the dataset. Output from each independent artificial neural network is provided to a combining classifier, and the combining classifier is trained to identify all classes in the dataset based on the output of all the independent artificial neural networks.
  Type: Application
  Filed: July 26, 2018
  Publication date: January 30, 2020
  Inventor: John E. Mixter
- Publication number: 20200034700
  Abstract: A hardware-based artificial neural network receives data patterns from a source. The hardware-based artificial neural network is trained using the data patterns such that it learns normal data patterns. A new data pattern is identified when the data pattern deviates from the normal data patterns. The hardware-based artificial neural network is then trained using the new data pattern such that the hardware-based artificial neural network learns the new data pattern by altering one or more synaptic weights associated with the new data pattern. The rate at which the hardware-based artificial neural network alters the one or more synaptic weights is monitored, wherein a training rate that is greater than a threshold indicates that the new data pattern is malicious.
  Type: Application
  Filed: July 26, 2018
  Publication date: January 30, 2020
  Inventor: John E. Mixter
- Publication number: 20190147342
  Abstract: Processing circuitry for a deep neural network can include input/output ports, and a plurality of neural network layers coupled in order from a first layer to a last layer, each of the plurality of neural network layers including a plurality of weighted computational units having circuitry to interleave forward propagation of computational unit input values from the first layer to the last layer and backward propagation of output error values from the last layer to the first layer.
  Type: Application
  Filed: November 13, 2017
  Publication date: May 16, 2019
  Inventors: John R. Goulding, John E. Mixter, David R. Mucha, Troy A. Gangwer, Ryan D. Silva
- Publication number: 20190087708
  Abstract: A dynamically adaptive neural network processing system includes memory to store instructions representing a neural network in contiguous blocks, hardware acceleration (HA) circuitry to execute the neural network, direct memory access (DMA) circuitry to transfer the instructions from the contiguous blocks of the memory to the HA circuitry, and a central processing unit (CPU) to dynamically modify a linked list representing the neural network during execution of the neural network by the HA circuitry to perform machine learning, and to generate the instructions in the contiguous blocks of the memory based on the linked list.
  Type: Application
  Filed: September 21, 2017
  Publication date: March 21, 2019
  Inventors: John R. Goulding, John E. Mixter, David R. Mucha