Patents by Inventor Swaroop Mishra

Swaroop Mishra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250173043
    Abstract: Implementations relate to a method implemented by one or more processors, the method including: receiving an input prompt for a large language model (LLM); generating first LLM output that is usable to generate a set of user interface (UI) elements, each associated with a corresponding sub-prompt of the input prompt; causing, based on the first LLM output, the set of UI elements to be rendered at a user device; receiving further user input based on user interactions with one or more of the UI elements of the set of UI elements; and in response to determining that one or more termination conditions are satisfied: generating a final response to the input prompt based on generating second LLM output that is usable to generate the final response; and causing the final response to be rendered at the user device.
    Type: Application
    Filed: January 18, 2024
    Publication date: May 29, 2025
    Inventors: Xiao Ma, Ariel Liu, Quoc Le, Jilin Chen, Heng-Tze Cheng, Swaroop Mishra, Sophie Ying Su
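    The flow described in this abstract can be sketched in a few lines. This is a minimal, hypothetical illustration only, not the patented implementation: the dict-based "UI elements," the all-inputs-received termination condition, and the string-joining stand-in for the second LLM call are all assumptions for the sketch.

```python
# Hypothetical sketch: one UI element per sub-prompt of the input prompt,
# with a final response assembled once a termination condition holds.

def build_ui_elements(sub_prompts):
    # One UI element (modeled here as a dict) per sub-prompt.
    return [{"sub_prompt": sp, "user_input": None} for sp in sub_prompts]

def record_interaction(elements, index, value):
    # Capture a user interaction with one UI element.
    elements[index]["user_input"] = value

def termination_satisfied(elements):
    # Assumed termination condition: every element has received input.
    return all(e["user_input"] is not None for e in elements)

def final_response(elements):
    # Stand-in for the second LLM call that produces the final response.
    return "; ".join(f"{e['sub_prompt']}: {e['user_input']}"
                     for e in elements)

elements = build_ui_elements(["date", "venue"])
record_interaction(elements, 0, "May 5")
record_interaction(elements, 1, "Hall A")
done = termination_satisfied(elements)
response = final_response(elements)
```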
  • Publication number: 20250086405
    Abstract: Some implementations relate to generating a training and/or evaluation dataset with LLM prompts (e.g., derived from user queries) based on a prompt complexity. An input prompt, for example derived from a user query, is received. The input prompt is decomposed into a prompt tree comprising a plurality of nodes. The plurality of nodes comprise: a plurality of leaf nodes corresponding to simple sub-prompts of the input query; a plurality of branch nodes of sub-prompts each corresponding to multiple simple sub-prompts; and a root node corresponding to the input prompt. A prompt complexity is determined based on a path length of the prompt tree. The prompt complexity is compared to a threshold complexity. If the prompt complexity is above the threshold complexity, the input prompt is included in a set of training prompts and/or a set of evaluation prompts.
    Type: Application
    Filed: October 5, 2023
    Publication date: March 13, 2025
    Inventors: Swaroop Mishra, Ragha Kotikalapudi, Obaid Sarvana, Sahitya Potluri, YaGuang Li, Taylor Bos, Steven Zheng, Hanzhao Lin, Chenkai Kuang, Heng-Tze Cheng, Ed H. Chi, Quoc Le
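    The complexity test in this abstract amounts to a depth measure on the prompt tree. The sketch below is a toy reading of that idea, not the patented method: the `PromptNode` structure, the choice of longest root-to-leaf path as "path length," and the strict threshold comparison are all assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: decompose a prompt into a tree of sub-prompts,
# measure complexity as tree depth, and filter by a threshold.

@dataclass
class PromptNode:
    text: str
    children: list = field(default_factory=list)

def path_length(node: PromptNode) -> int:
    """Longest root-to-leaf path (in edges) of the prompt tree."""
    if not node.children:
        return 0
    return 1 + max(path_length(c) for c in node.children)

def select_for_training(root: PromptNode, threshold: int) -> bool:
    """Include the prompt in the training/evaluation set only if its
    tree-depth complexity exceeds the threshold."""
    return path_length(root) > threshold

# Example tree: root -> branch -> two leaf sub-prompts (depth 2).
leaf_a = PromptNode("summarize section 1")
leaf_b = PromptNode("summarize section 2")
branch = PromptNode("summarize both sections", [leaf_a, leaf_b])
root = PromptNode("write a report", [branch])
```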
  • Publication number: 20250045534
    Abstract: Implementations relate to a method implemented by one or more processors, the method including: receiving natural language (NL) based input associated with a client device; generating, using a large language model (LLM) and based on processing the NL based input, LLM output; determining, based on the LLM output, a sequence of LLM responses, the sequence of LLM responses including at least one intermediate LLM response and a final LLM response. In some implementations, the method may further include causing the final LLM response to be rendered at the client device. In additional or alternative implementations, the method may further include storing, as an instance of training data for fine-tuning the LLM or an additional LLM, the NL based input along with the final LLM response.
    Type: Application
    Filed: October 10, 2023
    Publication date: February 6, 2025
    Inventors: Swaroop Mishra, Ragha Kotikalapudi, Sahitya Potluri, Taylor Bos, YaGuang Li, Hanzhao Lin, Steven Zheng, Yu Du, Chen Zhu, Chenkai Kuang, Xinying Song, Heng-Tze Cheng, Ed H. Chi, Quoc Le
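    The response handling described here can be illustrated with a small sketch. It is a hypothetical rendering only: the separator-delimited format for the response sequence and the dict shape of the stored training instance are assumptions, not details from the patent.

```python
# Hypothetical sketch: split LLM output into intermediate responses plus
# a final response, then store (input, final response) for fine-tuning.

def split_responses(llm_output: str, sep: str = "\n---\n"):
    """Assume separator-delimited segments form the response sequence;
    the last segment is the final LLM response."""
    parts = [p.strip() for p in llm_output.split(sep) if p.strip()]
    return parts[:-1], parts[-1]

def make_training_instance(nl_input: str, final_response: str) -> dict:
    # The stored pair can later fine-tune the LLM (or an additional LLM).
    return {"input": nl_input, "target": final_response}

output = "step 1\n---\nstep 2\n---\nanswer"
intermediates, final = split_responses(output)
instance = make_training_instance("original question", final)
```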
  • Publication number: 20240394471
    Abstract: Implementations relate to improving instruction following capabilities of large language models (LLMs) using instruction decomposition, self-evaluation, and optionally progressive refinement. Processor(s) of a system can: obtain natural language (NL) based input, generate a plurality of candidate responses and evaluate the candidate responses based on instructions included in the NL based input, using an LLM, and progressively refine the candidate responses until it is determined that one or more termination criteria are satisfied. In some implementations, the NL based input can be received from a client device. In these implementations, a given candidate response that is progressively refined can be rendered for presentation at the client device and responsive to the NL based input. In additional or alternative implementations, the NL based input can be obtained from database(s). In these implementations, a given candidate response that is progressively refined can be utilized in fine-tuning of the LLM.
    Type: Application
    Filed: August 8, 2023
    Publication date: November 28, 2024
    Inventors: Ragha Kotikalapudi, Swaroop Mishra, Sahitya Potluri, Taylor Bos, Yu Du, Chen Zhu, Steven Zheng, Hanzhao Lin, Summer Yue, Heng-Tze Cheng, Quoc Le, Ed H. Chi
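    The generate, self-evaluate, and progressively-refine loop in this abstract can be sketched as below. This is a toy outline under stated assumptions, not the patented system: `generate`, `score`, and `refine` stand in for LLM calls, and the termination criteria (a score target and a round budget) are illustrative choices.

```python
# Hypothetical sketch: generate candidate responses, self-evaluate them,
# and progressively refine the best one until termination criteria hold.

def progressive_refinement(prompt, generate, score, refine,
                           n_candidates=3, max_rounds=5, target=1.0):
    # Generate several candidate responses to the NL based input.
    candidates = [generate(prompt, i) for i in range(n_candidates)]
    # Self-evaluate: keep the highest-scoring candidate.
    best = max(candidates, key=lambda c: score(prompt, c))
    # Progressively refine until a termination criterion is satisfied
    # (score reaches the target, or the round budget is exhausted).
    for _ in range(max_rounds):
        if score(prompt, best) >= target:
            break
        best = refine(prompt, best)
    return best

# Toy deterministic stand-ins for the LLM calls:
gen = lambda p, i: i          # candidate i has "quality" i
sc = lambda p, c: c / 10      # score is quality / 10
ref = lambda p, c: c + 1      # each refinement bumps quality by 1
result = progressive_refinement("follow these instructions", gen, sc, ref)
```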
  • Patent number: 7767634
    Abstract: Lubricating grease compositions have a titanium complex grease component along with a mineral/synthetic oil-based component and a conventional soap or grease. The conventional soaps and greases may be lithium complex, calcium sulfonate, or aluminum complex-based, among others, with and without additives. The compositions are high performance greases, exhibiting improved drop point, extreme pressure, antiwear, oil separation, and shelf life properties.
    Type: Grant
    Filed: June 21, 2006
    Date of Patent: August 3, 2010
    Assignee: Indian Oil Corporation Limited
    Inventors: Gopal Swaroop Mishra, Paramsivam Senthivel, Suresh Chandra Nagar, Anoop Kumar, Kanta Prasad Naithani, Ravinder Kumar Malhotra, Ram Prakash Verma, Brij Mohan Bansal