Patents by Inventor James K. Baker

James K. Baker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11037059
    Abstract: A computer-implemented method for analyzing a first neural network via a second neural network according to a differentiable function. The method includes adding a derivative node to the first neural network that receives derivatives associated with a node of the first neural network. The derivative node is connected to the second neural network such that the second neural network can receive the derivatives from the derivative node. The method further includes feeding forward activations in the first neural network for a data item, back propagating a selected differentiable function, providing the derivatives from the derivative node to the second neural network as data, feeding forward the derivatives from the derivative node through the second neural network, and then back propagating a secondary objective through both neural networks. In various aspects, the learned parameters of one or both of the neural networks can be updated according to the back propagation calculations.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: June 15, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
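The abstract above can be pictured with a toy sketch. Everything below (the one-hidden-node network, the weights, and the function names) is invented for illustration; it only shows the core move of capturing a derivative during back-propagation and feeding it to a second network as ordinary input data:

```python
import math

def first_network_forward(x, w1, w2):
    # Tiny one-hidden-node network: h = tanh(w1*x), y = w2*h.
    h = math.tanh(w1 * x)
    return h, w2 * h

def derivative_node(x, w1, w2, target):
    # Back-propagate the objective L = (y - target)^2 / 2 and record
    # dL/dh at the hidden node -- the "derivative node" value.
    h, y = first_network_forward(x, w1, w2)
    dL_dy = y - target
    return dL_dy * w2

def second_network(d, v):
    # The analysis network treats the derivative as an ordinary input.
    return math.tanh(v * d)

d = derivative_node(x=0.5, w1=1.0, w2=2.0, target=0.0)
score = second_network(d, v=0.3)
```

In the patent's fuller scheme a secondary objective would then be back-propagated through both networks; the sketch stops at the hand-off of the derivative as data.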
  • Publication number: 20210174265
    Abstract: A computer-implemented method of training an ensemble machine learning system comprising a plurality of ensemble members. The method includes selecting a shared objective and an objective for each of the ensemble members. The method further includes training each of the ensemble members according to each objective on a training data set, connecting an output of each of the ensemble members to a joint optimization machine learning system to form a consolidated machine learning system, and training the consolidated machine learning system according to the shared objective and the objective for each of the ensemble members on the training data set. The ensemble members can be the same or different types of machine learning systems. Further, the joint optimization machine learning system can be the same or a different type of machine learning system than the ensemble members.
    Type: Application
    Filed: August 12, 2019
    Publication date: June 10, 2021
    Applicant: D5AI LLC
    Inventor: James K. Baker
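A minimal sketch of the training recipe in this abstract, with the data set, the member objectives, and the closed-form "training" all invented for illustration: each member is fit on its own objective, the members' outputs feed a joint stage, and the consolidated system is then trained on the shared objective:

```python
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # (input, target) pairs

def train_member(scale):
    # Each member's individual objective: fit y ~ w*x by one closed-form
    # least-squares step on targets scaled differently per member.
    num = sum(x * scale * y for x, y in data)
    den = sum(x * x for x, y in data)
    return num / den

members = [train_member(1.0), train_member(0.5)]  # two members, two objectives

def consolidated(x, mix):
    # Joint-optimization stage: a learned convex mix of member outputs.
    outs = [w * x for w in members]
    return mix * outs[0] + (1 - mix) * outs[1]

# Train the mixing weight on the shared squared-error objective (grid search
# stands in for gradient descent here).
best_mix = min((m / 100 for m in range(101)),
               key=lambda m: sum((consolidated(x, m) - y) ** 2 for x, y in data))
```

Because the first member's objective matches the shared one exactly, the joint stage learns to weight it fully; with realistic data the mix would typically blend the members.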
  • Patent number: 11010670
    Abstract: A deep neural network architecture comprises a stack of strata in which each stratum has its individual input and an individual objective, in addition to being activated from the system input through lower strata in the stack and receiving back propagation training from the system objective back propagated through higher strata in the stack of strata. The individual objective for a stratum may comprise an individualized target objective designed to achieve diversity among the strata. Each stratum may have a stratum support subnetwork with various specialized subnetworks. These specialized subnetworks may comprise a linear subnetwork to facilitate communication across strata and various specialized subnetworks that help encode features in a more compact way, not only to facilitate communication across strata but also to increase interpretability for human users and to facilitate communication with other machine learning systems.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: May 18, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
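The stratum-level objectives described above can be viewed as extra loss terms added to the system objective. A minimal sketch, with weights and loss values invented for illustration:

```python
def total_loss(system_loss, stratum_losses, stratum_weights):
    # System objective plus each stratum's individual objective, weighted
    # so that diversity-encouraging targets can be emphasized per stratum.
    return system_loss + sum(w * l for w, l in zip(stratum_weights, stratum_losses))

loss = total_loss(0.5, [0.2, 0.1, 0.4], [1.0, 0.5, 0.25])
```

During back-propagation each stratum would then receive gradients both from the system objective (through higher strata) and from its own term.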
  • Patent number: 11010671
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: May 18, 2021
    Assignee: D5AI LLC
    Inventors: James K. Baker, Bradley J. Baker
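One simple way to realize the estimate described in this abstract, sketched on an invented three-node DAG: score a connection by how much the objective worsens when it is removed, then change the structure accordingly. The patent covers subtler estimates; this is only the brute-force version:

```python
def forward(x, edges):
    # A tiny weighted DAG: x -> h -> y, plus a direct x -> y connection.
    h = edges.get(("x", "h"), 0.0) * x
    return edges.get(("x", "y"), 0.0) * x + edges.get(("h", "y"), 0.0) * h

def objective(edges, data):
    return sum((forward(x, edges) - t) ** 2 for x, t in data)

def edge_effect(edges, edge, data):
    # Estimated effect of a direct connection: how much the objective
    # worsens when that connection is removed (weight set to zero).
    pruned = dict(edges)
    pruned[edge] = 0.0
    return objective(pruned, data) - objective(edges, data)

edges = {("x", "h"): 1.0, ("h", "y"): 2.0, ("x", "y"): 0.0}
data = [(1.0, 2.0), (2.0, 4.0)]

effect = edge_effect(edges, ("h", "y"), data)
# Change the structure: keep only connections whose removal hurts.
pruned_edges = {e: w for e, w in edges.items() if edge_effect(edges, e, data) > 0.0}
```

Here the unused direct connection is pruned while both connections on the useful path survive.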
  • Patent number: 11003982
    Abstract: Computer-based systems and methods guide the learning of features in the middle layers of a deep neural network. The guidance is provided by aligning sets of nodes, or entire layers, in the network being trained with sets of nodes in a reference system. This guidance enables the network being trained to learn the features learned by the reference system more efficiently, using fewer parameters and less training time. The guidance also enables training of a new system with a deeper network, i.e., more layers, which tends to perform better than a shallow network. In addition, with fewer parameters, the new network is less prone to overfitting the training data.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: May 11, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
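The guidance mechanism in this abstract can be sketched as an alignment penalty that pulls a middle-layer activation of the network being trained toward the corresponding activation of a frozen reference network. All networks, weights, and the grid-search "training" below are invented for illustration:

```python
import math

def reference_hidden(x):
    return math.tanh(2.0 * x)  # frozen reference feature

def student_hidden(x, w):
    return math.tanh(w * x)    # corresponding node in the network being trained

def loss(x, target, w, align_weight):
    h = student_hidden(x, w)
    task = (h - target) ** 2                       # primary objective
    align = (h - reference_hidden(x)) ** 2         # guidance term
    return task + align_weight * align

# With a strong alignment weight, the best w is pulled toward the
# reference's weight (2.0); grid search stands in for gradient descent.
candidates = [i / 10 for i in range(0, 41)]
best_w = min(candidates, key=lambda w: loss(1.0, 0.0, w, align_weight=100.0))
```

The alignment weight trades off matching the reference against the task objective; annealing it toward zero late in training would let the student specialize.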
  • Patent number: 10956818
    Abstract: Systems and methods improve the performance of a network that has converged such that the gradient of the network and all of its partial derivatives are zero (or close to zero). The training data are split such that, on each subset of the split, some nodes or arcs (i.e., connections between a node and previous or subsequent layers of the network) have individual partial derivative values that differ from zero, even though their partial derivatives averaged over the whole set of training data are close to zero. The present system and method can create a new network by splitting the candidate nodes or arcs that diverge from zero and then training the resulting network with each selected node trained on the corresponding cluster of the data. Because the direction of the gradient is different for each of the nodes or arcs that are split, the nodes and their arcs in the new network will train to be different. Therefore, the new network is not at a stationary point.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: March 23, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
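The escape-from-a-stationary-point idea above can be reduced to a toy example. The data set and the per-example derivative below are invented so that the averaged gradient is exactly zero while per-example derivatives are not, which is the condition the patent exploits:

```python
data = [-2.0, -1.0, 1.0, 2.0]

def per_example_grad(x):
    # Toy per-example partial derivative w.r.t. one weight: it is just x,
    # so it averages to zero over this symmetric data set.
    return x

avg = sum(per_example_grad(x) for x in data) / len(data)  # 0: stationary point

pos = [x for x in data if per_example_grad(x) > 0]
neg = [x for x in data if per_example_grad(x) <= 0]

# Split the node: each copy takes a gradient step on its own cluster only.
w0, lr = 1.0, 0.1
w_pos = w0 - lr * sum(per_example_grad(x) for x in pos) / len(pos)
w_neg = w0 - lr * sum(per_example_grad(x) for x in neg) / len(neg)
```

The two copies move in opposite directions, so the enlarged network is no longer at a stationary point and training can resume.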
  • Publication number: 20210081761
    Abstract: Computer-based systems and methods guide the learning of features in the middle layers of a deep neural network. The guidance is provided by aligning sets of nodes, or entire layers, in the network being trained with sets of nodes in a reference system. This guidance enables the network being trained to learn the features learned by the reference system more efficiently, using fewer parameters and less training time. The guidance also enables training of a new system with a deeper network, i.e., more layers, which tends to perform better than a shallow network. In addition, with fewer parameters, the new network is less prone to overfitting the training data.
    Type: Application
    Filed: June 15, 2018
    Publication date: March 18, 2021
    Inventor: James K. Baker
  • Publication number: 20210056381
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Application
    Filed: July 8, 2020
    Publication date: February 25, 2021
    Inventors: James K. Baker, Bradley J. Baker
  • Publication number: 20210056380
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Application
    Filed: July 7, 2020
    Publication date: February 25, 2021
    Inventors: James K. Baker, Bradley J. Baker
  • Patent number: 10929757
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: February 23, 2021
    Assignee: D5AI LLC
    Inventors: James K. Baker, Bradley J. Baker
  • Publication number: 20210049470
    Abstract: A computer system uses a pool of predefined functions and pre-trained networks to accelerate the process of building a large neural network or building a combination of (i) an ensemble of other machine learning systems with (ii) a deep neural network. Copies of a predefined function node or network may be placed in multiple locations in a network being built. In building a neural network using a pool of predefined networks, the computer system only needs to decide the relative location of each copy of a predefined network or function. The location may be determined by (i) the connections to a predefined network from source nodes and (ii) the connections from a predefined network to nodes in an upper network. The computer system may perform an iterative process of selecting trial locations for connecting arcs and evaluating the connections to choose the best ones.
    Type: Application
    Filed: August 12, 2019
    Publication date: February 18, 2021
    Inventor: James K. Baker
  • Patent number: 10922587
    Abstract: Systems and methods analyze and correct the vulnerability of individual nodes in a neural network to changes in the input data. The analysis comprises first changing the activation function of one or more nodes to make them more vulnerable. The vulnerability is then measured based on a norm on the vector of partial derivatives of the network objective evaluated on each training data item. The system is made less vulnerable by splitting the data based on the sign of the partial derivative of the network objective with respect to a vulnerable node and training new ensemble members on selected subsets from the data split.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: February 16, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
  • Publication number: 20210027163
    Abstract: Computer-implemented systems and methods soft-tie learned parameters of one or more neural networks. The soft-tying comprises applying a common label to first and second learned parameters and, as part of training, in response to the first and second learned parameters having the common label, applying a regularization penalty to a loss function for the first learned parameter upon a determination that the first learned parameter differs from the second learned parameter. The learned parameters can be connection weights, node biases, and/or parametric model statistics. The application of the regularization penalty can be influenced by a soft-tying hyperparameter.
    Type: Application
    Filed: October 12, 2020
    Publication date: January 28, 2021
    Applicant: D5AI LLC
    Inventors: James K. Baker, Bradley J. Baker
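A compact sketch of the soft-tying penalty described in this abstract; the parameter names, labels, and the tie-strength hyperparameter name are invented for illustration:

```python
def soft_tie_penalty(params, labels, tie_strength):
    # params: name -> value; labels: name -> label.  Parameters sharing a
    # common label are pulled together by a squared-difference penalty
    # scaled by the soft-tying hyperparameter.
    penalty = 0.0
    names = list(params)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if labels[a] == labels[b]:
                penalty += tie_strength * (params[a] - params[b]) ** 2
    return penalty

params = {"w1": 1.0, "w2": 1.5, "w3": -0.2}
labels = {"w1": "shared", "w2": "shared", "w3": "other"}
penalty = soft_tie_penalty(params, labels, tie_strength=0.5)
```

The penalty would be added to the training loss, so tied parameters converge toward each other without being forced exactly equal, unlike hard weight sharing.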
  • Publication number: 20210027147
    Abstract: Computer systems and methods optimize a secondary objective function in the training of a multi-layer feed-forward neural network in which the secondary objective is a function of the partial derivatives of the primary objective function. Optimizing this secondary objective function comprises computing derivatives of functions of the partial derivatives computed during the back-propagation computation in a third stage of computation before the parameter update. This third stage of computation proceeds in the reverse direction from the direction of the back propagation computation. That is, the third stage of computation proceeds forwards through the network, computing derivatives of the secondary objective function based on the chain rule of calculus. The secondary objective may be used to make the neural network more robust against deviations in the input values from their normal values.
    Type: Application
    Filed: June 28, 2019
    Publication date: January 28, 2021
    Inventor: James K. Baker
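The three-stage computation in this abstract can be illustrated on a one-weight model. Everything here is invented, and the third stage (the patent's forward-direction chain-rule pass) is approximated by central differences rather than implemented as described; the sketch only shows that the secondary objective is a function of a partial derivative produced by back-propagation:

```python
def primary_grad_wrt_input(w, x, t):
    y = w * x        # stage 1: feed-forward; primary objective L = (y - t)^2 / 2
    dL_dy = y - t    # stage 2: back-propagation
    return dL_dy * w # dL/dx, the partial derivative the secondary objective uses

def secondary_objective(w, x, t):
    # J = (dL/dx)^2 penalizes sensitivity of the objective to input deviations.
    g = primary_grad_wrt_input(w, x, t)
    return g * g

def secondary_grad_wrt_w(w, x, t, eps=1e-6):
    # Stage 3, sketched numerically: dJ/dw for the parameter update.
    return (secondary_objective(w + eps, x, t)
            - secondary_objective(w - eps, x, t)) / (2 * eps)

g = secondary_grad_wrt_w(2.0, 1.0, 1.0)
```

Analytically J(w) = (w^2 - w)^2 at x = 1, t = 1, so dJ/dw = 2(w^2 - w)(2w - 1) = 12 at w = 2, which the numerical stage reproduces.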
  • Publication number: 20210004688
    Abstract: A computer-implemented method for analyzing a first neural network via a second neural network according to a differentiable function. The method includes adding a derivative node to the first neural network that receives derivatives associated with a node of the first neural network. The derivative node is connected to the second neural network such that the second neural network can receive the derivatives from the derivative node. The method further includes feeding forward activations in the first neural network for a data item, back propagating a selected differentiable function, providing the derivatives from the derivative node to the second neural network as data, feeding forward the derivatives from the derivative node through the second neural network, and then back propagating a secondary objective through both neural networks. In various aspects, the learned parameters of one or both of the neural networks can be updated according to the back propagation calculations.
    Type: Application
    Filed: August 23, 2019
    Publication date: January 7, 2021
    Inventor: James K. Baker
  • Patent number: 10885470
    Abstract: Computer-based systems and methods add extra terms to the objective function of machine learning systems (e.g., neural networks) in an ensemble for selected items of training data. This selective training is designed to penalize and decrease any tendency for two or more members of the ensemble to make the same mistake on any item of training data, which should result in improved performance of the ensemble in operation.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: January 5, 2021
    Assignee: D5AI LLC
    Inventor: James K. Baker
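A toy version of the extra objective term described above, for a two-member classification ensemble; predictions, targets, and the penalty weight are invented for illustration:

```python
def joint_error_penalty(pred_a, pred_b, target, weight):
    # Extra objective term: fires only when both ensemble members make
    # the same mistake on this training item.
    if pred_a != target and pred_b != target and pred_a == pred_b:
        return weight
    return 0.0

def ensemble_loss(preds_a, preds_b, targets, weight=1.0):
    # Base objective: simple error counts for each member ...
    base = sum(int(a != t) + int(b != t)
               for a, b, t in zip(preds_a, preds_b, targets))
    # ... plus the selective same-mistake penalty per item.
    extra = sum(joint_error_penalty(a, b, t, weight)
                for a, b, t in zip(preds_a, preds_b, targets))
    return base + extra

loss = ensemble_loss([0, 1, 1], [0, 1, 0], [1, 1, 0])
```

Only the first item, where both members are wrong in the same way, draws the extra penalty; uncorrelated errors are cheaper, which pushes the members toward diversity.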
  • Publication number: 20200410295
    Abstract: Systems and methods analyze and correct the vulnerability of individual nodes in a neural network to changes in the input data. The analysis comprises first changing the activation function of one or more nodes to make them more vulnerable. The vulnerability is then measured based on a norm on the vector of partial derivatives of the network objective evaluated on each training data item. The system is made less vulnerable by splitting the data based on the sign of the partial derivative of the network objective with respect to a vulnerable node and training new ensemble members on selected subsets from the data split.
    Type: Application
    Filed: June 27, 2019
    Publication date: December 31, 2020
    Inventor: James K. Baker
  • Publication number: 20200410090
    Abstract: Computer-implemented systems and methods build and train an ensemble of machine learning systems to be robust against adversarial attacks by employing a probabilistic mixed strategy with the property that, even if the adversary knows the architecture and parameters of the machine learning system, any adversarial attack has an arbitrarily low probability of success.
    Type: Application
    Filed: July 16, 2019
    Publication date: December 31, 2020
    Inventor: James K. Baker
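The probabilistic mixed strategy in this abstract can be simulated with a caricature ensemble. The members, the "trigger" attack model, and the success criterion are all invented; the sketch only shows why random member selection caps the attacker's success probability at roughly 1/n even with full knowledge of the ensemble:

```python
import random

def make_members(n):
    # Caricature: each member is fooled only by its own adversarial trigger.
    return [lambda x, i=i: "wrong" if x == i else "right" for i in range(n)]

def mixed_strategy_predict(members, x, rng):
    # Defender samples one member at random per query.
    return rng.choice(members)(x)

members = make_members(10)
rng = random.Random(0)
trials = 1000
attack_against_member_3 = 3  # input crafted to fool member 3 specifically
successes = sum(mixed_strategy_predict(members, attack_against_member_3, rng)
                == "wrong" for _ in range(trials))
rate = successes / trials
```

Growing the ensemble drives the success rate arbitrarily low, which is the property the patent claims for its mixed strategy.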
  • Publication number: 20200401869
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Application
    Filed: January 28, 2019
    Publication date: December 24, 2020
    Inventors: James K. Baker, Bradley J. Baker
  • Publication number: 20200394521
    Abstract: Computer-implemented, machine-learning systems and methods relate to a neural network having at least two subnetworks, i.e., a first subnetwork and a second subnetwork. The systems and methods estimate the partial derivative(s) of an objective with respect to (i) an output activation of a node in the first subnetwork, (ii) the input to the node, and/or (iii) the connection weights to the node. The estimated partial derivative(s) are stored in a data store and provided as input to the second subnetwork. Because the estimated partial derivative(s) are persisted in a data store, the second subnetwork has access to them even after it has gone through subsequent training iterations. Using this information, the second subnetwork can compute classifications and regression functions that can help, for example, in the training of the first subnetwork.
    Type: Application
    Filed: June 4, 2019
    Publication date: December 17, 2020
    Inventor: James K. Baker