Patents by Inventor Jesse Hostetler
Jesse Hostetler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240202538
Abstract: A method, apparatus and system for lifelong reinforcement learning include receiving features of a task, communicating the task features to a learning system, where the learning system learns or performs a task related to the features based on learning or performing similar previous tasks, determining from the features if the task has changed and if so, communicating the features of the changed task to the learning system, where the learning system learns or performs the changed task based on learning or performing similar previous tasks, automatically annotating feature characteristics of received features including differences between the features of the original task and the features of the changed task to enable the learning system to more efficiently learn or perform at least the changed task, and if the task has not changed, processing the task features of a current task by the learning system to learn or perform the current task.
Type: Application
Filed: December 11, 2023
Publication date: June 20, 2024
Inventors: Aswin NADAMUNI RAGHAVAN, Indranil SUR, Zachary DANIELS, Jesse HOSTETLER, Abrar RAHMAN, Ajay DIVAKARAN, Michael R. PIACENTINO
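The abstract describes a feature-routing loop: compare incoming task features against the current task, and forward annotated feature differences to the learning system when the task has changed. The sketch below illustrates that loop under simplifying assumptions; the class names, the distance-based change test, and the per-dimension annotation scheme are illustrative choices, not the patented implementation.

```python
# Minimal sketch of the task-change-detection loop described in the abstract.
# All class and method names here are illustrative assumptions.
import numpy as np


class LifelongLearner:
    """Toy stand-in for the learning system that consumes task features."""

    def learn(self, features, annotations=None):
        # A real system would update a policy/model; here we just record.
        print(f"learning task with {len(features)} features, "
              f"annotations={annotations}")


class TaskFeatureRouter:
    """Routes task features to the learner and flags task changes."""

    def __init__(self, learner, change_threshold=0.5):
        self.learner = learner
        self.change_threshold = change_threshold
        self.current_features = None

    def annotate_differences(self, old, new):
        # Annotate which feature dimensions changed and by how much.
        delta = new - old
        changed = np.flatnonzero(np.abs(delta) > 1e-6)
        return {int(i): float(delta[i]) for i in changed}

    def process(self, features):
        features = np.asarray(features, dtype=float)
        if self.current_features is None:
            self.current_features = features
            self.learner.learn(features)
            return
        # Simple distance-based test for "has the task changed?"
        if np.linalg.norm(features - self.current_features) > self.change_threshold:
            annotations = self.annotate_differences(self.current_features, features)
            self.current_features = features
            self.learner.learn(features, annotations=annotations)
        else:
            # Same task: keep learning/performing the current task.
            self.learner.learn(features)


router = TaskFeatureRouter(LifelongLearner())
router.process([0.0, 1.0, 0.0])   # original task
router.process([0.0, 1.0, 0.05])  # unchanged (below threshold)
router.process([1.0, 0.0, 0.0])   # changed task, differences annotated
```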
-
Patent number: 11934793
Abstract: A method, apparatus and system for training an embedding space for content comprehension and response includes, for each layer of a hierarchical taxonomy having at least two layers including respective words resulting in layers of varying complexity, determining a set of words associated with a layer of the hierarchical taxonomy, determining a question answer pair based on a question generated using at least one word of the set of words and at least one content domain, determining a vector representation for the generated question and for content related to the at least one content domain of the question answer pair, and embedding the question vector representation and the content vector representations into a common embedding space where vector representations that are related are closer in the embedding space than unrelated embedded vector representations. Requests for content can then be fulfilled using the trained, common embedding space.
Type: Grant
Filed: November 1, 2021
Date of Patent: March 19, 2024
Assignee: SRI International
Inventors: Ajay Divakaran, Karan Sikka, Yi Yao, Yunye Gong, Stephanie Nunn, Pritish Sahu, Michael A. Cogswell, Jesse Hostetler, Sara Rutherford-Quach
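As a rough illustration of the common embedding space described above, the following sketch trains a question encoder and a content encoder with a triplet objective so that related question/content pairs land closer together than unrelated ones. The encoder architectures, the triplet loss, and the toy data are assumptions made for illustration; they are not taken from the patent.

```python
# Minimal sketch of a shared question/content embedding space trained so that
# related pairs end up closer than unrelated ones. Sizes and data are assumed.
import torch
import torch.nn as nn

EMBED_DIM = 32
FEAT_DIM = 100  # assumed feature size for both questions and content

question_encoder = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(), nn.Linear(64, EMBED_DIM))
content_encoder = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(), nn.Linear(64, EMBED_DIM))

loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(
    list(question_encoder.parameters()) + list(content_encoder.parameters()), lr=1e-3
)

# Toy batch: each question is paired with related content (positive) and
# unrelated content (negative).
questions = torch.randn(16, FEAT_DIM)
related_content = questions + 0.1 * torch.randn(16, FEAT_DIM)
unrelated_content = torch.randn(16, FEAT_DIM)

for step in range(100):
    q = question_encoder(questions)
    pos = content_encoder(related_content)
    neg = content_encoder(unrelated_content)
    loss = loss_fn(q, pos, neg)  # pull related pairs together, push unrelated apart
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Fulfilling a content request: embed the query and rank content by distance
# in the shared space.
with torch.no_grad():
    query_vec = question_encoder(questions[:1])
    content_vecs = content_encoder(torch.cat([related_content, unrelated_content]))
    ranked = torch.cdist(query_vec, content_vecs).argsort(dim=1)
print("closest content indices:", ranked[0, :3].tolist())
```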
-
Patent number: 11494626
Abstract: In general, the disclosure describes techniques for creating runtime-throttleable neural networks (TNNs) that can adaptively balance performance and resource use in response to a control signal. For example, runtime-TNNs may be trained to be throttled via a gating scheme in which a set of disjoint components of the neural network can be individually “turned off” at runtime without significantly affecting the accuracy of NN inferences. A separate gating neural network may be trained to determine which trained components of the NN to turn off to obtain operable performance for a given level of resource use of computational, power, or other resources by the neural network. This level can then be specified by the control signal at runtime to adapt the NN to operate at the specified level and in this way balance performance and resource use for different operating conditions.
Type: Grant
Filed: October 11, 2019
Date of Patent: November 8, 2022
Assignee: SRI INTERNATIONAL
Inventors: Jesse Hostetler, Sek Meng Chai
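The gating scheme described in the abstract can be pictured as a layer built from disjoint components that a gate vector switches on or off at runtime. The sketch below hard-codes a simple mapping from a control signal to gates in place of the separately trained gating network; the component count, layer widths, and averaging of active components are assumptions for illustration only.

```python
# Minimal sketch of runtime throttling: a layer made of disjoint components
# that a gate vector can switch off individually. Sizes and gating are assumed.
import torch
import torch.nn as nn


class ThrottleableBlock(nn.Module):
    """A layer of independent components that can be gated off at runtime."""

    def __init__(self, in_dim=64, out_dim=64, n_components=4):
        super().__init__()
        self.components = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
            for _ in range(n_components)
        )

    def forward(self, x, gates):
        # gates: 0/1 per component; skipped components cost no compute.
        outputs = [g * comp(x) for comp, g in zip(self.components, gates) if g > 0]
        if not outputs:
            return torch.zeros(x.shape[0], self.components[0][0].out_features)
        # Average active components so the output scale stays stable as gates change.
        return torch.stack(outputs).mean(dim=0)


def gates_for_utilization(level, n_components=4):
    """Map a control signal in [0, 1] to a set of active components.

    A stand-in for the trained gating network described in the abstract."""
    n_active = max(1, round(level * n_components))
    return torch.tensor([1.0] * n_active + [0.0] * (n_components - n_active))


block = ThrottleableBlock()
x = torch.randn(8, 64)
full = block(x, gates_for_utilization(1.0))       # all components active
throttled = block(x, gates_for_utilization(0.5))  # half the compute budget
print(full.shape, throttled.shape)
```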
-
Patent number: 11494597
Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space, and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply a skill assigned to an archetype task to training and auxiliary inputs assigned to the archetype task to obtain output labels corresponding to the training and auxiliary labels associated with the training and auxiliary inputs assigned to the archetype task, to enable scalable learning to obtain labels for new tasks for which the machine learning model has not previously been trained.
Type: Grant
Filed: March 20, 2020
Date of Patent: November 8, 2022
Assignee: SRI INTERNATIONAL
Inventors: Aswin Nadamuni Raghavan, Jesse Hostetler, Indranil Sur, Abrar Abdullah Rahman, Sek Meng Chai
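A minimal sketch of the generative-memory idea follows: cluster training inputs into archetype tasks, fit a simple per-archetype generative model, and sample auxiliary input/label pairs to train alongside the real data. The Gaussian memory, nearest-centroid assignment, and majority-label heuristic used here are simplifying assumptions, not the disclosed system.

```python
# Minimal sketch of a generative memory over archetype tasks. The clustering,
# Gaussian sampling, and label heuristic are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)


class GenerativeMemory:
    def __init__(self, n_archetypes=3):
        self.n_archetypes = n_archetypes
        self.stats = {}  # archetype id -> (mean, std, majority label)

    def fit(self, inputs, labels):
        # Assign each training input to an archetype task (nearest of k random
        # centroids here; a real system would learn the task clustering).
        centroids = inputs[rng.choice(len(inputs), self.n_archetypes, replace=False)]
        assignment = np.argmin(
            np.linalg.norm(inputs[:, None] - centroids[None], axis=-1), axis=1
        )
        for a in range(self.n_archetypes):
            members = assignment == a
            if members.any():
                majority = np.bincount(labels[members]).argmax()
                self.stats[a] = (
                    inputs[members].mean(0),
                    inputs[members].std(0) + 1e-3,
                    majority,
                )

    def generate(self, n_per_archetype=20):
        # Produce auxiliary (input, label) pairs from each archetype's model.
        aux_x, aux_y = [], []
        for mean, std, label in self.stats.values():
            aux_x.append(rng.normal(mean, std, size=(n_per_archetype, mean.size)))
            aux_y.append(np.full(n_per_archetype, label))
        return np.concatenate(aux_x), np.concatenate(aux_y)


# Toy training data: two Gaussian blobs with labels 0 and 1.
x = np.concatenate([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])

memory = GenerativeMemory()
memory.fit(x, y)
aux_x, aux_y = memory.generate()

# The learner would then be trained on real + auxiliary data together, which is
# what lets skills transfer to new tasks drawn from the same archetypes.
train_x = np.concatenate([x, aux_x])
train_y = np.concatenate([y, aux_y])
print(train_x.shape, train_y.shape)
```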
-
Publication number: 20220138433
Abstract: A method, apparatus and system for training an embedding space for content comprehension and response includes, for each layer of a hierarchical taxonomy having at least two layers including respective words resulting in layers of varying complexity, determining a set of words associated with a layer of the hierarchical taxonomy, determining a question answer pair based on a question generated using at least one word of the set of words and at least one content domain, determining a vector representation for the generated question and for content related to the at least one content domain of the question answer pair, and embedding the question vector representation and the content vector representations into a common embedding space where vector representations that are related are closer in the embedding space than unrelated embedded vector representations. Requests for content can then be fulfilled using the trained, common embedding space.
Type: Application
Filed: November 1, 2021
Publication date: May 5, 2022
Inventors: Ajay DIVAKARAN, Karan SIKKA, Yi YAO, Yunye GONG, Stephanie NUNN, Pritish SAHU, Michael A. COGSWELL, Jesse HOSTETLER, Sara RUTHERFORD-QUACH
-
Publication number: 20200302339
Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space, and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply a skill assigned to an archetype task to training and auxiliary inputs assigned to the archetype task to obtain output labels corresponding to the training and auxiliary labels associated with the training and auxiliary inputs assigned to the archetype task, to enable scalable learning to obtain labels for new tasks for which the machine learning model has not previously been trained.
Type: Application
Filed: March 20, 2020
Publication date: September 24, 2020
Inventors: Aswin Nadamuni Raghavan, Jesse Hostetler, Indranil Sur, Abrar Abdullah Rahman, Sek Meng Chai
-
Publication number: 20200193279
Abstract: In general, the disclosure describes techniques for creating runtime-throttleable neural networks (TNNs) that can adaptively balance performance and resource use in response to a control signal. For example, runtime-TNNs may be trained to be throttled via a gating scheme in which a set of disjoint components of the neural network can be individually “turned off” at runtime without significantly affecting the accuracy of NN inferences. A separate gating neural network may be trained to determine which trained components of the NN to turn off to obtain operable performance for a given level of resource use of computational, power, or other resources by the neural network. This level can then be specified by the control signal at runtime to adapt the NN to operate at the specified level and in this way balance performance and resource use for different operating conditions.
Type: Application
Filed: October 11, 2019
Publication date: June 18, 2020
Inventors: Jesse Hostetler, Sek Meng Chai