Patents by Inventor Shelby Heinecke

Shelby Heinecke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139411
Abstract: Embodiments described herein provide a large language model (LLM) based AI agent that adopts Monte-Carlo Tree Search (MCTS) to execute a task. The LLM is prompted with a task description, and it responds with its first attempted list of actions. Based on the success or failure of the first attempt, the LLM is prompted with an updated prompt which includes feedback from the first attempt based on a determined reward. The prompt may include a relative “score” for each action taken at each step. A numeric score may be mapped to a set of pre-defined text labels, such as “high” or “low” value, putting the score in a form better suited to an LLM prompt. In this way, the LLM is iteratively given prompts which are updated with the scores from each action taken at each previous iteration, so that it traverses different paths on the tree in each iteration.
    Type: Application
    Filed: October 31, 2023
    Publication date: May 1, 2025
    Inventors: Rithesh Murthy, Shelby Heinecke, Juan Carlos Niebles Duque, Zhiwei Liu, Le Xue, Weiran Yao, Yihao Feng, Zeyuan Chen, Akash Gokul, Devansh Arpit, Ran Xu, Lik Mui, Huan Wang, Caiming Xiong, Silvio Savarese
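The feedback loop in the abstract above can be sketched in a few lines: map each action's numeric reward to a coarse text label, then fold those labels into the next iteration's prompt. The thresholds, label set, and function names below are illustrative assumptions, not taken from the patent.

```python
def score_to_label(score, thresholds=(0.33, 0.66)):
    """Map a numeric reward in [0, 1] to a coarse text label for the prompt."""
    if score < thresholds[0]:
        return "low"
    if score < thresholds[1]:
        return "medium"
    return "high"

def build_iteration_prompt(task, action_scores):
    """Compose the next-iteration prompt from the task and prior action feedback.

    action_scores is a list of (action, numeric_reward) pairs from the
    previous tree-search iteration.
    """
    lines = [f"Task: {task}", "Feedback on previous actions:"]
    for action, score in action_scores:
        lines.append(f"- {action}: {score_to_label(score)} value")
    lines.append("Propose a new list of actions, avoiding low-value steps.")
    return "\n".join(lines)
```

In a full MCTS loop this prompt would be re-issued each iteration with the accumulated scores, steering the LLM away from branches already labeled low value.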
  • Publication number: 20250124233
Abstract: Systems and methods for editing a large language model are provided. The large language model generates a sequence of tokens, a first probability of a pre-edit output based on the sequence of tokens, and a second probability of a target output based on the sequence of tokens. A loss function is provided based on the first probability and the second probability. A plurality of gradients of the large language model with respect to the loss function is computed. An edit location of the large language model is determined based on the plurality of gradients. The large language model is edited by editing weights at the edit location of the large language model, such that the updated large language model generates the target output for an input including the sequence of tokens.
    Type: Application
    Filed: January 31, 2024
    Publication date: April 17, 2025
    Inventors: Itai Izhak Feigenbaum, Devansh Arpit, Shelby Heinecke, Juan Carlos Niebles Duque, Weiran Yao, Huan Wang, Caiming Xiong, Silvio Savarese
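The locate-then-edit idea in the abstract above can be sketched with toy per-layer gradients: pick the layer whose loss gradient is largest, then update only that layer's weights. The largest-norm criterion, learning rate, and layer names are assumptions for illustration; the patent's actual localization rule may differ.

```python
import numpy as np

def locate_edit_layer(grads):
    """Pick the layer whose loss gradient has the largest norm.

    grads maps layer name -> gradient array of the loss w.r.t. that
    layer's weights.
    """
    return max(grads, key=lambda name: np.linalg.norm(grads[name]))

def apply_edit(weights, grads, layer, lr=0.5):
    """Edit only the selected layer by a gradient step; leave the rest frozen."""
    edited = {name: w.copy() for name, w in weights.items()}
    edited[layer] = edited[layer] - lr * grads[layer]
    return edited
```

Confining the update to one location is what keeps the edit targeted: every other layer's weights are byte-for-byte unchanged.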
  • Publication number: 20250053787
Abstract: Embodiments described herein provide a method for training a recommendation neural network model using multiple data sources. The method may include: receiving, via a data interface, time series data indicating a user-item interaction history; transforming the time series data into a user-item graph; encoding, by a neural network encoder, the user-item graph into user embeddings and item embeddings; generating a plurality of losses according to a plurality of training tasks performed based on the user embeddings and item embeddings; training the recommendation neural network model by updating the user embeddings and the item embeddings via backpropagation based on a weighted sum of gradients of the plurality of losses; and generating, by a neural network decoder, one or more recommended items for a given user based on the updated user embeddings and the updated item embeddings.
    Type: Application
    Filed: January 31, 2024
    Publication date: February 13, 2025
    Inventors: Liangwei Yang, Shelby Heinecke, Jianguo Zhang, Rithesh Murthy, Huan Wang, Caiming Xiong, Zhiwei Liu
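Two steps from the abstract above are easy to sketch: turning timestamped interactions into a user-item graph, and combining per-task gradients as a weighted sum for the multi-task update. The adjacency-matrix representation and the event format `(user, item, timestamp)` are assumptions for illustration.

```python
import numpy as np

def interactions_to_graph(events, n_users, n_items):
    """Turn (user, item, timestamp) events into a user-item adjacency matrix."""
    adj = np.zeros((n_users, n_items))
    for user, item, _t in sorted(events, key=lambda e: e[2]):
        adj[user, item] = 1.0
    return adj

def combined_gradient(task_grads, task_weights):
    """Weighted sum of per-task gradients, as used in the multi-task update."""
    return sum(w * g for w, g in zip(task_weights, task_grads))
```

The weighted sum lets the training loop balance the plurality of tasks (e.g. reconstruction vs. contrastive objectives) in a single backpropagation step.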
  • Publication number: 20250053793
Abstract: Embodiments described herein provide a method of predicting an action by a plurality of language model augmented agents (LAAs). In at least one embodiment, a controller receives a task instruction to be performed using an environment. The controller receives an observation of a first state from the environment. The controller selects an LAA from the plurality of LAAs based on the task instruction and the observation. The controller obtains an output from the selected LAA generated using an input combining the task instruction, the observation, and an LAA-specific prompt template. The controller determines the action based on the output. The controller causes the action to be performed on the environment, thereby causing the first state of the environment to change to a second state.
    Type: Application
    Filed: October 25, 2023
    Publication date: February 13, 2025
    Inventors: Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles Duque, Devansh Arpit, Ran Xu, Lik Mui, Huan Wang, Caiming Xiong, Silvio Savarese
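The controller's selection-and-prompting step from the abstract above can be sketched as follows. The keyword-matching scorer and the agent dictionary fields (`keywords`, `template`) are toy assumptions standing in for whatever routing the patent actually claims.

```python
def select_agent(agents, task, observation):
    """Score each LAA against (task, observation) and return the best match.

    Toy scorer: count how often an agent's keywords appear in the
    combined task + observation text.
    """
    text = (task + " " + observation).lower()
    def score(agent):
        return sum(text.count(kw) for kw in agent["keywords"])
    return max(agents, key=score)

def build_input(agent, task, observation):
    """Fill the agent-specific prompt template with the task and observation."""
    return agent["template"].format(task=task, observation=observation)
```

The selected agent's output would then be parsed into an action and applied to the environment, producing the next observation for the following controller step.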
  • Publication number: 20250045567
Abstract: Embodiments described herein provide for optimizing a language model (LM) agent. In at least one embodiment, an LM agent comprises an “actor” LM and a “retrospective” LM which provides reflections on attempts by the actor LM. The reflections are used to update subsequent prompts to the actor LM. Optimizing the LM agent comprises fine-tuning parameters of the retrospective LM while keeping parameters of the actor LM frozen. A gradient may be determined by a change in reward from the environment based on actions taken by the actor LM with and without a reflection of the retrospective LM. Using this gradient, parameters of the retrospective LM may be updated via backpropagation.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 6, 2025
    Inventors: Weiran Yao, Shelby Heinecke, Juan Carlos Niebles Duque, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Lik Mui, Huan Wang, Caiming Xiong, Silvio Savarese
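The update rule in the abstract above resembles a policy gradient: the learning signal is the reward difference with and without the reflection, and only the retrospective LM's parameters move. The sketch below assumes flat parameter lists and a REINFORCE-style scaling by that advantage; both are illustrative simplifications.

```python
def reflection_advantage(reward_with, reward_without):
    """Reward change attributable to the reflection (the learning signal)."""
    return reward_with - reward_without

def update_retrospective(params, log_prob_grads, advantage, lr=0.1):
    """Policy-gradient-style step on the retrospective LM only.

    params and log_prob_grads are parallel lists: each gradient is of the
    reflection's log-probability w.r.t. the matching parameter. The actor
    LM's parameters are untouched (frozen).
    """
    return [p + lr * advantage * g for p, g in zip(params, log_prob_grads)]
```

A positive advantage pushes the retrospective LM toward reflections that raised the actor's reward; a negative one pushes away from them.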
  • Publication number: 20240062071
    Abstract: Embodiments described herein provide a cascade-guided adversarial training method for sequential recommendation models. A system may compute cascade scores for items in a user interaction sequence. The cascade scores may be based on the position in the sequence, as well as the appearance of the same item in other sequences. Based on the computed cascade score, the system may perturb item embeddings. The perturbed user interaction sequences with perturbed item embeddings may be used to train a sequential recommendation model.
    Type: Application
    Filed: December 30, 2022
    Publication date: February 22, 2024
    Inventors: Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen
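The two stages in the abstract above — cascade scoring and score-guided perturbation — can be sketched as follows. The particular scoring formula (position weight times cross-sequence frequency) and the unit-noise perturbation are assumptions chosen to illustrate the shape of the method, not the patent's exact definitions.

```python
import numpy as np

def cascade_scores(sequence, all_sequences):
    """Score each item by its position in the sequence and its frequency
    across all user sequences (later and more common -> higher score)."""
    freq = {}
    for seq in all_sequences:
        for item in seq:
            freq[item] = freq.get(item, 0) + 1
    n = len(sequence)
    return [((pos + 1) / n) * freq[item] for pos, item in enumerate(sequence)]

def perturb_embeddings(embeddings, scores, eps=0.1, seed=0):
    """Add unit-norm noise to each item embedding, scaled by its cascade score."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=embeddings.shape)
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)
    scale = eps * np.asarray(scores)[:, None]
    return embeddings + scale * noise
```

Training the sequential recommender on these perturbed sequences concentrates the adversarial pressure on high-cascade items, which is the guidance the method's name refers to.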