Patents by Inventor Jesse Hostetler

Jesse Hostetler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11934793
Abstract: A method, apparatus and system for training an embedding space for content comprehension and response includes, for each layer of a hierarchical taxonomy having at least two layers including respective words resulting in layers of varying complexity, determining a set of words associated with a layer of the hierarchical taxonomy, determining a question answer pair based on a question generated using at least one word of the set of words and at least one content domain, determining a vector representation for the generated question and for content related to the at least one content domain of the question answer pair, and embedding the question vector representation and the content vector representation into a common embedding space where vector representations that are related are closer in the embedding space than unrelated embedded vector representations. Requests for content can then be fulfilled using the trained, common embedding space.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: March 19, 2024
    Assignee: SRI International
    Inventors: Ajay Divakaran, Karan Sikka, Yi Yao, Yunye Gong, Stephanie Nunn, Pritish Sahu, Michael A. Cogswell, Jesse Hostetler, Sara Rutherford-Quach
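The abstract above hinges on one property: related question and content embeddings end up closer in the shared space than unrelated ones. Below is a minimal sketch of that idea using linear encoders and a triplet-style margin loss. The encoder shapes, the margin value, and the names `embed` and `triplet_loss` are illustrative assumptions, not the patented method:

```python
import numpy as np

def embed(x, W):
    """Project an input into the shared embedding space and unit-normalize,
    a toy stand-in for the question/content encoders in the abstract."""
    v = W @ x
    return v / np.linalg.norm(v)

def triplet_loss(q, pos, neg, margin=0.2):
    """Zero when the related content (pos) is already closer to the
    question embedding than unrelated content (neg) by the margin;
    otherwise penalize, pushing related pairs together during training."""
    d_pos = np.sum((q - pos) ** 2)
    d_neg = np.sum((q - neg) ** 2)
    return max(0.0, margin + d_pos - d_neg)

# Toy encoders for a question and a piece of domain content.
rng = np.random.default_rng(0)
W_q = rng.normal(size=(8, 16))  # hypothetical question-encoder weights
W_c = rng.normal(size=(8, 16))  # hypothetical content-encoder weights
q = embed(rng.normal(size=16), W_q)
c = embed(rng.normal(size=16), W_c)
loss = triplet_loss(q, c, -c)   # treat c as related content, -c as unrelated
```

Minimizing such a loss over many question/content triples is one standard way to obtain the "related vectors are closer" geometry the abstract describes; retrieval then reduces to nearest-neighbor search in the trained space.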
  • Patent number: 11494626
    Abstract: In general, the disclosure describes techniques for creating runtime-throttleable neural networks (TNNs) that can adaptively balance performance and resource use in response to a control signal. For example, runtime-TNNs may be trained to be throttled via a gating scheme in which a set of disjoint components of the neural network can be individually “turned off” at runtime without significantly affecting the accuracy of NN inferences. A separate gating neural network may be trained to determine which trained components of the NN to turn off to obtain operable performance for a given level of resource use of computational, power, or other resources by the neural network. This level can then be specified by the control signal at runtime to adapt the NN to operate at the specified level and in this way balance performance and resource use for different operating conditions.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: November 8, 2022
Assignee: SRI International
    Inventors: Jesse Hostetler, Sek Meng Chai
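The throttleable-network abstract describes disjoint components that can be individually turned off at runtime by a control signal, with a gating network choosing which to disable for a given resource budget. The following is a minimal sketch of that gating scheme; the component structure, the averaging, and the budget-to-gate rule are illustrative assumptions standing in for the trained gating network:

```python
import numpy as np

def component(x, W):
    """One disjoint network component (a single ReLU block here)."""
    return np.maximum(0.0, W @ x)

def throttleable_forward(x, weights, gate):
    """Combine only the components whose gate bit is on; disabled
    components are skipped entirely, saving their compute at runtime."""
    out = np.zeros(weights[0].shape[0])
    active = 0
    for W, g in zip(weights, gate):
        if g:
            out += component(x, W)
            active += 1
    # Average over active components so output scale stays stable
    # as components are throttled on and off.
    return out / max(active, 1)

def gate_for_budget(n_components, utilization):
    """Stand-in for the learned gating network: map a runtime control
    signal (fraction of resources to use) to a binary gate vector."""
    k = max(1, int(round(utilization * n_components)))
    return [1] * k + [0] * (n_components - k)
```

At inference time one would call `throttleable_forward(x, weights, gate_for_budget(len(weights), 0.5))` to run at roughly half the compute; the patent's gating network instead learns which specific components to disable so accuracy degrades least.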
  • Patent number: 11494597
    Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply a skill assigned to an archetype task to training and auxiliary inputs assigned to the archetype task to obtain output labels corresponding to the training and auxiliary labels associated with the training and auxiliary inputs assigned to the archetype task to enable scalable learning to obtain labels for new tasks for which the machine learning model has not previously been trained.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 8, 2022
Assignee: SRI International
    Inventors: Aswin Nadamuni Raghavan, Jesse Hostetler, Indranil Sur, Abrar Abdullah Rahman, Sek Meng Chai
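The generative-memory abstract involves two moving parts: assigning training inputs to archetype tasks (clusters of related tasks), and generating auxiliary input/label pairs from each archetype for training. A minimal sketch of both steps, where nearest-centroid assignment and Gaussian sampling are simplifying assumptions rather than the patented generative memory:

```python
import numpy as np

def assign_archetype(x, centroids):
    """Assign an input to the nearest archetype task, where each
    archetype represents a cluster of related tasks in task space."""
    distances = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(distances))

def generate_auxiliary(centroid, skill_label, n, rng, scale=0.1):
    """Generative-memory stand-in: sample auxiliary inputs around an
    archetype and pair each with the skill (label) assigned to it.
    Replaying these alongside new data supports continual learning."""
    xs = centroid + scale * rng.normal(size=(n, centroid.shape[0]))
    return [(x, skill_label) for x in xs]

# Two toy archetype tasks with one skill label each.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
rng = np.random.default_rng(0)
aux = generate_auxiliary(centroids[1], "skill_b", n=5, rng=rng)
```

A downstream model trained on the union of real and auxiliary pairs can then reuse an archetype's skill for a new task that falls in the same cluster, which is the scalability claim in the abstract.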
  • Publication number: 20220138433
Abstract: A method, apparatus and system for training an embedding space for content comprehension and response includes, for each layer of a hierarchical taxonomy having at least two layers including respective words resulting in layers of varying complexity, determining a set of words associated with a layer of the hierarchical taxonomy, determining a question answer pair based on a question generated using at least one word of the set of words and at least one content domain, determining a vector representation for the generated question and for content related to the at least one content domain of the question answer pair, and embedding the question vector representation and the content vector representation into a common embedding space where vector representations that are related are closer in the embedding space than unrelated embedded vector representations. Requests for content can then be fulfilled using the trained, common embedding space.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
Inventors: Ajay Divakaran, Karan Sikka, Yi Yao, Yunye Gong, Stephanie Nunn, Pritish Sahu, Michael A. Cogswell, Jesse Hostetler, Sara Rutherford-Quach
  • Publication number: 20200302339
    Abstract: Techniques are disclosed for training machine learning systems. An input device receives training data comprising pairs of training inputs and training labels. A generative memory assigns training inputs to each archetype task of a plurality of archetype tasks, each archetype task representative of a cluster of related tasks within a task space and assigns a skill to each archetype task. The generative memory generates, from each archetype task, auxiliary data comprising pairs of auxiliary inputs and auxiliary labels. A machine learning system trains a machine learning model to apply a skill assigned to an archetype task to training and auxiliary inputs assigned to the archetype task to obtain output labels corresponding to the training and auxiliary labels associated with the training and auxiliary inputs assigned to the archetype task to enable scalable learning to obtain labels for new tasks for which the machine learning model has not previously been trained.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 24, 2020
    Inventors: Aswin Nadamuni Raghavan, Jesse Hostetler, Indranil Sur, Abrar Abdullah Rahman, Sek Meng Chai
  • Publication number: 20200193279
    Abstract: In general, the disclosure describes techniques for creating runtime-throttleable neural networks (TNNs) that can adaptively balance performance and resource use in response to a control signal. For example, runtime-TNNs may be trained to be throttled via a gating scheme in which a set of disjoint components of the neural network can be individually “turned off” at runtime without significantly affecting the accuracy of NN inferences. A separate gating neural network may be trained to determine which trained components of the NN to turn off to obtain operable performance for a given level of resource use of computational, power, or other resources by the neural network. This level can then be specified by the control signal at runtime to adapt the NN to operate at the specified level and in this way balance performance and resource use for different operating conditions.
    Type: Application
    Filed: October 11, 2019
    Publication date: June 18, 2020
    Inventors: Jesse Hostetler, Sek Meng Chai