Patents by Inventor James K. Baker
James K. Baker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11948063
Abstract: Computer systems and computer-implemented methods improve a base neural network. During an initial training, preliminary activation values computed for the base network nodes on data in the training data set are stored in memory. After the initial training, a new node set is merged into the base neural network to form an expanded neural network, including directly connecting each of the nodes of the new node set to one or more base network nodes. The expanded neural network is then trained on the training data set using a network error loss function for the expanded neural network.
Type: Grant
Filed: June 1, 2023
Date of Patent: April 2, 2024
Assignee: D5AI LLC
Inventors: James K. Baker, Bradley J. Baker
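The expansion step this abstract describes can be illustrated with a minimal NumPy sketch. Everything below (layer sizes, the tanh/MSE choices, the number of gradient steps) is an illustrative assumption, not taken from the patent; it only shows the shape of the idea: cache base-node activations from an initial training, concatenate new directly-connected nodes, then train the expanded network with the same loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base network (illustrative sizes): one hidden layer of 4 nodes.
X = rng.normal(size=(32, 3))          # training data
y = rng.normal(size=(32, 1))          # regression targets
W1 = rng.normal(size=(3, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1

def forward(X, W1, W2):
    h = np.tanh(X @ W1)               # base-node activations
    return h, h @ W2

# Initial training pass: cache the base-node activation values.
h_cached, _ = forward(X, W1, W2)

# Merge a new set of 2 nodes into the hidden layer: each new node is
# directly connected to the inputs and to the output, alongside the
# base nodes, whose trained weights are kept.
W1_new = rng.normal(size=(3, 2)) * 0.1
W2_new = rng.normal(size=(2, 1)) * 0.1
W1_exp = np.concatenate([W1, W1_new], axis=1)   # shape (3, 6)
W2_exp = np.concatenate([W2, W2_new], axis=0)   # shape (6, 1)

# Train the expanded network on the same error loss (plain GD).
lr = 0.05
for _ in range(100):
    h = np.tanh(X @ W1_exp)
    pred = h @ W2_exp
    err = pred - y                    # dLoss/dpred for 0.5 * MSE
    dh = (err @ W2_exp.T) * (1 - h ** 2)
    W2_exp -= lr * h.T @ err / len(X)
    W1_exp -= lr * X.T @ dh / len(X)

loss_before = float(np.mean((forward(X, W1, W2)[1] - y) ** 2))
loss_after = float(np.mean((np.tanh(X @ W1_exp) @ W2_exp - y) ** 2))
```

The base weights survive the merge unchanged, so the expanded network starts from the base network's performance and can only be refined from there.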
-
Patent number: 11915152
Abstract: A machine learning (ML) system includes a student ML system, a learning coach ML system, and a reference system that generates training data for the student ML system. The learning coach ML system learns to make an enhancement to the student ML system or to its learning process, such as an updated hyperparameter or a network structural change, based on training of the student ML system with the training data generated by the reference system. The system may also comprise a learning experimentation system that communicates with the reference system to conduct experiments on the learning of the student learning system. The learning experimentation system can also determine a cost function for the learning coach ML system.
Type: Grant
Filed: March 5, 2018
Date of Patent: February 27, 2024
Assignee: D5AI LLC
Inventor: James K. Baker
-
Publication number: 20240037396
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Application
Filed: October 9, 2023
Publication date: February 1, 2024
Applicant: D5AI LLC
Inventor: James K. Baker
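The selective gradient blocking in this abstract can be sketched with a hand-written NumPy backward pass. The two-node fragment, the sigmoid/cross-entropy choices, and the fixed combining weights below are illustrative assumptions, not the patented construction; the point is only that the combining node's forward activation uses the subject node while the backward pass deliberately skips it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fragment of a network: a "subject" node and a companion node
# both feed a "combining" node.
x = rng.normal(size=(16, 2))
y = (x[:, :1] > 0).astype(float)        # toy binary targets
w_s = rng.normal(size=(2, 1)) * 0.5     # weights into the subject node
w_o = rng.normal(size=(2, 1)) * 0.5     # weights into the companion node
v = np.array([[0.5], [0.5]])            # fixed combining-node weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_s_before = w_s.copy()
w_o_before = w_o.copy()
lr = 0.1
for _ in range(200):
    a_s = sigmoid(x @ w_s)              # subject-node activation
    a_o = sigmoid(x @ w_o)              # companion activation
    c = sigmoid(a_s * v[0, 0] + a_o * v[1, 0])   # combining node
    err = c - y                         # dLoss/d(net input of c)
    # The combining node's activation depends on a_s, yet the
    # estimated partial derivatives are selectively NOT propagated to
    # the subject node: only the companion path gets weight updates.
    d_ao = err * v[1, 0] * a_o * (1 - a_o)
    w_o -= lr * x.T @ d_ao / len(x)
    # (no gradient step on w_s: back-propagation to it is blocked)
```

In an autodiff framework the same effect would come from treating the subject node's activation as a constant during the backward pass.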
-
Publication number: 20240005161
Abstract: Computer systems and methods train a deep neural network through machine learning. In response to detection of a training condition, the computer system replaces a target node of the network with a split detector compound node, where, prior to replacement, the target node detected a pattern that activated the target node beyond a specified threshold. The split detector compound node comprises first and second nodes, such that: the first node is activated when significant evidence exists in favor of detection of the pattern in inputs to the first node; and the second node is activated when significant evidence exists against detection of the pattern in inputs to the second node, such that activations of the first and second nodes are computed independently. After replacing the target node with the split detector compound node, training of the network through machine learning is resumed.
Type: Application
Filed: September 15, 2023
Publication date: January 4, 2024
Applicant: D5AI LLC
Inventor: James K. Baker
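One way to picture the split detector compound node is with two thresholded activations computed independently from the same evidence. The ReLU-with-margin form and the threshold value below are illustrative assumptions, not the patent's definition; they just show the key property that "evidence for" and "evidence against" are separate nodes that can both stay silent on weak evidence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Net input summarizing the evidence for a toy "pattern" detector.
z = rng.normal(size=(8, 1))

def relu(z):
    return np.maximum(0.0, z)

# Split detector: the "for" node fires only on significant positive
# evidence, the "against" node only on significant negative evidence.
# The two activations are computed independently -- they are not
# forced to be complementary or to sum to 1.
threshold = 0.5
a_for = relu(z - threshold)            # evidence in favor of the pattern
a_against = relu(-z - threshold)       # evidence against the pattern

# On weak evidence (|z| <= threshold) both nodes stay at zero, unlike
# a single sigmoid node, which is forced to commit either way.
```

A single node must encode "pattern absent" and "no evidence either way" with the same low activation; the split detector separates those cases.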
-
Patent number: 11847566
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Grant
Filed: June 13, 2023
Date of Patent: December 19, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
-
Patent number: 11836600
Abstract: Computer systems and computer-implemented methods train a neural network, by: (a) computing, for each datum in a set of training data, activation values for nodes in the neural network and estimates of partial derivatives of an objective function for the neural network for the nodes in the neural network; (b) selecting a target node of the neural network and/or a target datum in the set of training data; (c) selecting a target-specific improvement model for the neural network, wherein the target-specific improvement model, when added to the neural network, improves performance of the neural network for the target node and/or the target datum, as the case may be; (d) training the target-specific improvement model; (e) merging the target-specific improvement model with the neural network to form an expanded neural network; and (f) training the expanded neural network.
Type: Grant
Filed: July 28, 2021
Date of Patent: December 5, 2023
Assignee: D5AI LLC
Inventors: James K. Baker, Bradley J. Baker
-
Patent number: 11836624
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Grant
Filed: July 28, 2020
Date of Patent: December 5, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
-
Publication number: 20230385608
Abstract: Computer systems and computer-implemented methods train a neural network, by: (a) computing, for each datum in a set of training data, activation values for nodes in the neural network and estimates of partial derivatives of an objective function for the neural network for the nodes in the neural network; (b) selecting a target node of the neural network and/or a target datum in the set of training data; (c) selecting a target-specific improvement model for the neural network, wherein the target-specific improvement model, when added to the neural network, improves performance of the neural network for the target node and/or the target datum, as the case may be; (d) training the target-specific improvement model; (e) merging the target-specific improvement model with the neural network to form an expanded neural network; and (f) training the expanded neural network.
Type: Application
Filed: June 1, 2023
Publication date: November 30, 2023
Applicant: D5AI LLC
Inventors: James K. BAKER, Bradley J. BAKER
-
Publication number: 20230368029
Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
Type: Application
Filed: July 13, 2023
Publication date: November 16, 2023
Applicant: D5AI LLCInventors
Inventors: James K. BAKER, Bradley J. BAKER
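A standard way to estimate the effect of a not-yet-existing connection, sketched below under assumptions the abstract does not spell out: the gradient of the objective with respect to a candidate weight at weight zero is the batch average of the source node's activation times the partial derivative at the destination node's net input. The threshold rule is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Quantities cached from a forward/backward pass over a batch:
a_i = rng.normal(size=100)       # activations of candidate source node i
delta_j = rng.normal(size=100)   # dObjective/d(net input of node j)

# The effect of adding a direct connection i -> j with a small weight
# w is estimated by the gradient of the objective with respect to that
# weight at w = 0, i.e. the batch average of a_i * delta_j.
effect = float(np.mean(a_i * delta_j))

# Illustrative structural-change rule: add the connection only if the
# estimated effect clears a threshold; otherwise leave the structure
# of the nodal network alone.
ADD_THRESHOLD = 0.05
add_connection = abs(effect) > ADD_THRESHOLD
```

The symmetric case, estimating the effect of deleting an existing connection, can use the same cached activations and partial derivatives evaluated at the current weight.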
-
Publication number: 20230359860
Abstract: Data-dependent node-to-node knowledge sharing, which increases the interpretability of the activation pattern of one or more nodes in a neural network, is implemented by a set of knowledge sharing links. Each link may comprise a knowledge providing node or other source P and a knowledge receiving node R. A knowledge sharing link can impose a node-specific regularization on the knowledge receiving node R to help guide the knowledge receiving node R to have an activation pattern that is more easily interpreted. The specification and training of the knowledge sharing links may be controlled by a cooperative human-AI learning supervisor system in which a human and an artificial intelligence system work cooperatively to improve the interpretability and performance of the client system.
Type: Application
Filed: July 17, 2023
Publication date: November 9, 2023
Applicant: D5AI LLC
Inventors: James K. BAKER, Bradley J. BAKER
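The node-specific regularization a knowledge sharing link imposes can be sketched as an extra penalty term in the loss. The quadratic form, the per-datum mask, and the strength constant below are illustrative assumptions: on data where the link is active, the receiving node R is pulled toward the activation of the providing source P.

```python
import numpy as np

# Activations of a provider P and receiver R over a small batch.
a_P = np.array([0.9, 0.1, 0.8, 0.2])   # provider activations (fixed)
a_R = np.array([0.6, 0.4, 0.5, 0.3])   # receiver activations
mask = np.array([1.0, 1.0, 0.0, 1.0])  # data-dependent: link active per datum

LAMBDA = 0.5                            # strength of the sharing link

def sharing_penalty(a_R, a_P, mask, lam=LAMBDA):
    # Node-specific quadratic regularization, applied only on the data
    # for which the knowledge sharing link is active.
    return lam * float(np.sum(mask * (a_R - a_P) ** 2))

penalty = sharing_penalty(a_R, a_P, mask)
```

During training this penalty would be added to the ordinary task loss, so gradients nudge R's activation pattern toward P's on the selected data while leaving the other data untouched.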
-
Patent number: 11797852
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Grant
Filed: March 10, 2023
Date of Patent: October 24, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
-
Patent number: 11790235
Abstract: Computer systems and methods modify a base deep neural network (DNN). The method comprises replacing a target node of the base DNN with a compound node to thereby create a modified base DNN. The compound node comprises at least first and second nodes. The first node is trained to detect target node patterns in inputs to the first node and the second node is trained to detect an absence of the target node patterns in inputs to the second node, and the first and second nodes are trained to be non-complementary.
Type: Grant
Filed: December 28, 2022
Date of Patent: October 17, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
-
Publication number: 20230325668
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Applicant: D5AI LLC
Inventor: James K. Baker
-
Publication number: 20230289611
Abstract: Systems and methods improve performance of a classifier, which comprises a neural network and is trained through machine learning. First and second scores are computed, by the classifier, for each of multiple data examples from a generator. The first score is indicative of whether the data example belongs to a first data cluster and the second score is indicative of whether the data example belongs to a second data cluster. The generator is trained with an objective such that, for each data example generated by the generator, the first and second scores computed by the classifier are equal. Partial derivatives from the classifier are back-propagated for multiple data examples generated by the generator, to obtain a vector, for each data example, that is orthogonal to a decision surface for the classifier. A problem with the classifier is detected based on changes in directions of the vectors.
Type: Application
Filed: May 12, 2023
Publication date: September 14, 2023
Inventor: James K. BAKER
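The orthogonal-vector idea can be made concrete with a linear toy classifier, which is an illustrative simplification rather than the patented system: back-propagating the score difference S1 − S2 to the input yields a vector normal to the decision surface S1(x) = S2(x), and the direction of that vector can be monitored across generated examples.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear classifier with two class scores: S(x) = x @ W, so that
# S1(x) - S2(x) = x @ (W[:, 0] - W[:, 1]).
W = rng.normal(size=(5, 2))

def normal_vector(x, W):
    # Back-propagating the score difference S1 - S2 to the input gives
    # a vector orthogonal to the decision surface S1(x) = S2(x). For
    # this linear model the gradient is the same at every x; a deep
    # network would give an x-dependent direction.
    return W[:, 0] - W[:, 1]

v1 = normal_vector(rng.normal(size=5), W)
v2 = normal_vector(rng.normal(size=5), W)

# Monitor changes in direction of the orthogonal vectors across
# generated examples: an abrupt flip (negative cosine) between nearby
# examples would flag a problem with the decision surface.
cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
problem_detected = cos < 0.0
```

For the linear model the direction never changes, so no problem is flagged; the diagnostic only becomes informative for nonlinear classifiers, where erratic direction changes along the surface indicate trouble.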
-
Publication number: 20230289434
Abstract: A diverse set of neural networks is trained to be individually robust against adversarial attacks and diverse in a manner that decreases the ability of an adversarial example to fool the full diverse set. The systems/methods use a diversity criterion that is specialized for measuring diversity in response to adversarial attacks rather than diversity in the classification results. Also, one or more networks can be trained that are less robust to adversarial attacks, to use as a diagnostic to detect the presence of an adversarial attack. Also, node-to-node relation regularization links can be used to train diverse networks that are randomly selected from a family of diverse networks with exponentially many members.
Type: Application
Filed: November 16, 2021
Publication date: September 14, 2023
Applicant: D5AI LLC
Inventor: James K. BAKER
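A minimal sketch of what a diversity criterion "in response to adversarial attacks" could measure, under illustrative assumptions (two linear classifiers, an FGSM-style perturbation, and a transfer-based score none of which are taken from the application): craft an attack against one network and ask how much of its damage transfers to the other.

```python
import numpy as np

# Two toy linear binary classifiers, represented by weight vectors.
# They agree on clean classifications, but their gradients differ, so
# an attack on one need not transfer to the other.
wA = np.array([1.0, 0.0])
wB = np.array([0.0, 1.0])           # orthogonal: maximally diverse here
x = np.array([2.0, 2.0])            # clean example; both score it positive
eps = 0.5

# FGSM-style attack on net A: step against the sign of A's gradient.
x_adv = x - eps * np.sign(wA)

drop_A = float(wA @ x - wA @ x_adv)   # how much the attack hurt net A
drop_B = float(wB @ x - wB @ x_adv)   # how much of that transferred to B

# Adversarial diversity as (one minus) the transfer ratio: 1.0 means
# the attack on A did not move B's score at all.
adv_diversity = 1.0 - drop_B / drop_A
```

Measuring diversity this way is distinct from measuring disagreement on clean classification results: two networks can classify identically yet still be adversarially diverse.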
-
Patent number: 11755912
Abstract: A machine learning system includes a coach machine learning system that uses machine learning to help a student machine learning system learn its system. By monitoring the student learning system, the coach machine learning system can learn (through machine learning techniques) “hyperparameters” for the student learning system that control the machine learning process for the student learning system. The machine learning coach could also determine structural modifications for the student learning system architecture. The learning coach can also control data flow to the student learning system.
Type: Grant
Filed: February 22, 2023
Date of Patent: September 12, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
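The coach/student relationship can be sketched as a policy that watches the student's training signal and adjusts a hyperparameter. The decision rule below is a deliberately simple hand-written stand-in for the coach's learned policy; the thresholds and multipliers are illustrative assumptions, not anything from the patent.

```python
import numpy as np

# A learning coach observes the student's recent losses and updates
# the student's learning-rate hyperparameter. In the patent the
# coach's policy is itself learned; here it is a fixed toy rule.
def coach_update_lr(lr, recent_losses):
    recent = np.asarray(recent_losses, dtype=float)
    if recent[-1] > recent[0]:
        return lr * 0.5             # training is diverging: shrink lr
    if (recent[0] - recent[-1]) / recent[0] < 0.01:
        return lr * 1.5             # progress has stalled: try a bigger lr
    return lr                       # healthy progress: leave lr alone

lr_diverging = coach_update_lr(0.1, [1.0, 1.3, 1.7])
lr_stalled = coach_update_lr(0.1, [1.0, 0.999, 0.999])
lr_healthy = coach_update_lr(0.1, [1.0, 0.8, 0.6])
```

The same interface extends naturally to the patent's other coach actions, such as proposing structural modifications or gating which data flows to the student.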
-
Patent number: 11748624
Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
Type: Grant
Filed: July 15, 2020
Date of Patent: September 5, 2023
Assignee: D5AI LLC
Inventors: James K. Baker, Bradley J. Baker
-
Patent number: 11741340
Abstract: Data-dependent node-to-node knowledge sharing, which increases the interpretability of the activation pattern of one or more nodes in a neural network, is implemented by a set of knowledge sharing links. Each link may comprise a knowledge providing node or other source P and a knowledge receiving node R. A knowledge sharing link can impose a node-specific regularization on the knowledge receiving node R to help guide the knowledge receiving node R to have an activation pattern that is more easily interpreted. The specification and training of the knowledge sharing links may be controlled by a cooperative human-AI learning supervisor system in which a human and an artificial intelligence system work cooperatively to improve the interpretability and performance of the client system.
Type: Grant
Filed: April 13, 2020
Date of Patent: August 29, 2023
Assignee: D5AI LLC
Inventors: James K. Baker, Bradley J. Baker
-
Publication number: 20230214655
Abstract: Computer systems and computer-implemented methods modify a machine learning network, such as a deep neural network, to introduce judgment to the network. A “combining” node is added to the network, to thereby generate a modified network, where activation of the combining node is based, at least in part, on output from a subject node of the network. The computer system then trains the modified network by, for each training data item in a set of training data, performing forward and back propagation computations through the modified network, where the backward propagation computation through the modified network comprises computing estimated partial derivatives of an error function of an objective for the network, except that the combining node selectively blocks back-propagation of estimated partial derivatives to the subject node, even though activation of the combining node is based on the activation of the subject node.
Type: Application
Filed: March 10, 2023
Publication date: July 6, 2023
Inventor: James K. Baker
-
Patent number: 11687788
Abstract: Computer systems and methods generate data examples by training, through machine learning, a data generator with a training objective to produce a data example for a specific value of R, where R is a value related to S1(x) and S2(x), where, for a data example, x, generated by the data generator, S1(x) is a likelihood that the data example x is in a first class of a first selected data example and S2(x) is a likelihood that the data example x is in a second class of a second selected data example. S1(x) and S2(x) are determined by a discriminator that is trained through machine learning to discriminate between the first and second classes. After training the data generator, the data generator generates a synthetic data example for each of multiple specific values of R.
Type: Grant
Filed: July 28, 2022
Date of Patent: June 27, 2023
Assignee: D5AI LLC
Inventor: James K. Baker
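The idea of generating an example at a requested value of R can be sketched with a fixed toy discriminator and a "generator" reduced to a single trainable point. The discriminator weights, the particular choice of R = S1/(S1 + S2), and the finite-difference training loop are all illustrative assumptions, not the patented system; they only show the objective of steering a generated example to a specific R.

```python
import numpy as np

# Fixed toy discriminator: two class scores via softmax, giving the
# likelihoods S1(x) and S2(x) for the two selected classes.
W = np.array([[1.0, -1.0],
              [0.5,  0.0],
              [-0.5, 0.5]])

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def R_of(x):
    s1, s2 = softmax(x @ W)
    return s1 / (s1 + s2)            # one possible value related to S1, S2

# "Generator" reduced to a single trainable point: gradient descent on
# (R(x) - R_target)**2 using finite-difference gradients, so that the
# generated example lands at the requested value of R between the
# two classes.
def generate_for_R(R_target, steps=300, lr=0.2, eps=1e-4):
    x = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for k in range(len(x)):
            d = np.zeros_like(x)
            d[k] = eps
            grad[k] = (R_of(x + d) - R_of(x - d)) / (2 * eps)
        x = x - lr * 2 * (R_of(x) - R_target) * grad
    return x

# Sweep several specific values of R, as in the abstract.
targets = (0.3, 0.5, 0.7)
xs = [generate_for_R(t) for t in targets]
```

Varying R from 0 to 1 then traces synthetic examples that interpolate between the two selected classes as judged by the discriminator.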