Patents by Inventor Benjamin Zweig
Benjamin Zweig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12346664
Abstract: The present technology provides an interaction paradigm whereby a prompt source can continue to interact with the generative response engine through a conversational interface while the generative response engine is processing a task, especially a long-running task. A prompt source can provide additional prompts to modify or clarify the task. The prompt source can also provide additional tasks or subtasks. The generative response engine can also provide intermediate responses in the conversational interface. For example, the generative response engine can respond to prompts provided by the prompt source during the performance of the long-running task. The generative response engine can also determine that it should ask for additional details or clarification, and in response to such a determination, the generative response engine can provide intermediate responses in the conversation interface to encourage further input from the prompt source.
Type: Grant
Filed: June 5, 2024
Date of Patent: July 1, 2025
Assignee: OpenAI OpCo, LLC
Inventors: Noah Deutsch, Benjamin Zweig
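As a rough illustration of the interaction pattern this patent describes (not the patented implementation), the sketch below keeps a chat loop responsive while a long-running task executes in the background, surfaces intermediate responses, and asks for clarification on terse follow-ups. Every function name in it is a hypothetical stand-in.

```python
# Minimal sketch, assuming a long-running task and a user who keeps chatting.
# run_long_task, needs_clarification, and conversation are invented names,
# not anything from the patent.
import asyncio


async def run_long_task(task_prompt: str, updates: asyncio.Queue) -> str:
    """Pretend long-running task; posts intermediate responses to a queue."""
    for step in range(3):
        await asyncio.sleep(1.0)                      # stand-in for real work
        await updates.put(f"Progress on '{task_prompt}': step {step + 1}/3")
    return f"Finished: {task_prompt}"


def needs_clarification(prompt: str) -> bool:
    """Toy check for follow-ups that are too terse to act on."""
    return not prompt.strip().endswith("?") and len(prompt.split()) < 3


async def conversation(task_prompt: str, followups: list[str]) -> None:
    updates: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(run_long_task(task_prompt, updates))

    for prompt in followups:                          # user keeps interacting mid-task
        if needs_clarification(prompt):
            print(f"engine> Could you say more about '{prompt}'?")
        else:
            print(f"engine> Noted, folding '{prompt}' into the running task.")
        while not updates.empty():                    # surface intermediate responses
            print(f"engine> {await updates.get()}")
        await asyncio.sleep(1.2)

    result = await task
    while not updates.empty():
        print(f"engine> {await updates.get()}")
    print(f"engine> {result}")                        # final response


asyncio.run(conversation("summarize the quarterly reports",
                         ["focus on EMEA", "charts"]))
```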
-
Publication number: 20250200361
Abstract: The present technology provides for the learning of information relevant to a user account by a generative response engine and accessing this information when preparing personalized responses to prompts provided by the user account. A further aspect of the present technology is that the user account does not need to explicitly tell the generative response engine to remember a particular information. Instead, the present technology is configured such that the generative response engine can learn such facts, preferences, or contexts from conversational prompts provided to the chatbot without providing explicit instructions to remember the data. A further aspect of the present technology is that the user account can request that the generative response engine forget some learned facts too.
Type: Application
Filed: June 3, 2024
Publication date: June 19, 2025
Applicant: OpenAI OpCo, LLC
Inventors: Prasad Chakka, Dave Cummings, Noah Deutsch, William Fedus, Tarun Gogineni, Yuchen He, Joanne Jang, Lien Mamitsuka, Warren Ouyang, Yilei Qian, John Schulman, Javi Soto Bustos, Anton Tananaev, Jonathan Ward, Marvin Zhang, Benjamin Zweig
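A minimal sketch of the behavior this application describes, with all names invented for illustration: facts are learned from ordinary prompts without an explicit "remember" instruction, folded into later responses, and discarded on request. The regex-based extractor stands in for whatever model-based extraction the real system would use.

```python
# Minimal sketch, assuming per-account memory and a toy fact extractor.
# Memory and extract_facts are hypothetical names, not the described system.
import re
from collections import defaultdict


def extract_facts(prompt: str) -> list[str]:
    """Toy stand-in for model-based extraction: capture 'I ...' statements."""
    return re.findall(r"\bI (?:am|like|prefer|work) [^.,!?]+", prompt)


class Memory:
    def __init__(self) -> None:
        self._facts = defaultdict(set)        # user_account -> learned facts

    def learn(self, user: str, prompt: str) -> None:
        for fact in extract_facts(prompt):    # no explicit "remember" needed
            self._facts[user].add(fact)

    def forget(self, user: str, keyword: str) -> None:
        self._facts[user] = {f for f in self._facts[user]
                             if keyword.lower() not in f.lower()}

    def personalize(self, user: str, prompt: str) -> str:
        context = "; ".join(sorted(self._facts[user])) or "no stored preferences"
        return f"[context: {context}] {prompt}"


mem = Memory()
mem.learn("acct_1", "I prefer metric units. What's the weather in Lisbon?")
print(mem.personalize("acct_1", "How far is Porto from Lisbon?"))
mem.forget("acct_1", "metric")                # user asks the engine to forget
print(mem.personalize("acct_1", "How far is Porto from Lisbon?"))
```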
-
Publication number: 20250139057
Abstract: Disclosed herein are methods and systems for generating metadata from content using one or more machine learning models. In an embodiment, a method may include receiving the content through a graphical user interface associated with the large language model, generating a first file by tokenizing the content into an input format for the large language model and merging the tokenized content with a content instruction, inputting the first file into the large language model, generating, using the large language model, metadata from at least the first file, the metadata reflecting a context associated with the content, generating a second file, the second file comprising the metadata, and displaying the generated metadata on the graphical user interface.
Type: Application
Filed: October 30, 2023
Publication date: May 1, 2025
Applicant: OpenAI Opco, LLC
Inventors: Noah DEUTSCH, Benjamin ZWEIG, Valerie ZERFAS, Madeline SIMENS, Michael HEATON, Nicholas TURLEY
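Loosely following the steps recited in this abstract, the sketch below tokenizes content, merges it with a content instruction into a "first file", runs a stand-in language model over it, and writes the resulting metadata to a "second file". The tokenizer, call_language_model, and the file names are assumptions for illustration, not the patented pipeline.

```python
# Minimal sketch of the described metadata pipeline under loose assumptions.
import json


def tokenize(content: str) -> list[str]:
    """Toy whitespace tokenizer in place of a real model tokenizer."""
    return content.split()


def call_language_model(prompt: str) -> dict:
    """Hypothetical model call; here it just fakes title/keyword metadata."""
    words = prompt.split()
    return {"title": " ".join(words[:5]), "keywords": sorted(set(words))[:5]}


def generate_metadata(content: str, instruction: str) -> dict:
    # First file: tokenized content merged with the content instruction.
    first_file = {"instruction": instruction, "tokens": tokenize(content)}
    with open("first_file.json", "w") as f:
        json.dump(first_file, f)

    metadata = call_language_model(" ".join(first_file["tokens"]))

    # Second file: the generated metadata, ready to display in a UI.
    with open("second_file.json", "w") as f:
        json.dump(metadata, f)
    return metadata


print(generate_metadata("Quarterly revenue grew eight percent in Europe",
                        "Extract a title and keywords"))
```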
-
Publication number: 20250104243
Abstract: Disclosed embodiments may include a method of interacting with a multimodal machine learning model; the method may include providing a graphical user interface associated with a multimodal machine learning model. The method may further include displaying an image to a user in the graphical user interface. The method may also include receiving a textual prompt from the user and then generating input data using the image and the textual prompt. The method may further include generating an output at least in part by applying the input data to the multimodal machine learning model, the multimodal machine learning model configured using prompt engineering to identify a location in the image conditioned on the image and the textual prompt, wherein the output includes a first location indication. The method may also include displaying, in the graphical user interface, an emphasis indicator at the indicated first location in the image.
Type: Application
Filed: June 13, 2024
Publication date: March 27, 2025
Applicant: OpenAI Opco, LLC
Inventors: Noah DEUTSCH, Benjamin ZWEIG
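To make the image-plus-prompt interaction concrete (this also applies to the closely related entries 20250103859, 12051205, and 12039431 below), here is a minimal sketch under loose assumptions: a hypothetical locate_in_image call plays the role of the multimodal model and returns a bounding box as the location indication, and Pillow (assumed installed) draws the emphasis indicator. None of this is the claimed implementation.

```python
# Minimal sketch: image + textual prompt -> location indication -> emphasis indicator.
# locate_in_image is a hypothetical stand-in for the multimodal model call.
from PIL import Image, ImageDraw


def locate_in_image(image: Image.Image, text_prompt: str) -> tuple[int, int, int, int]:
    """Hypothetical multimodal call; returns a fake centered bounding box."""
    w, h = image.size
    return (w // 3, h // 3, 2 * w // 3, 2 * h // 3)


def emphasize(image: Image.Image, text_prompt: str) -> Image.Image:
    box = locate_in_image(image, text_prompt)          # the "first location indication"
    annotated = image.copy()
    ImageDraw.Draw(annotated).rectangle(box, outline="red", width=4)  # emphasis indicator
    return annotated


img = Image.new("RGB", (640, 480), "white")            # placeholder for a user-supplied image
emphasize(img, "where is the dog?").save("emphasized.png")
```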
-
Publication number: 20250103859
Abstract: The disclosed embodiments may include a method of interacting with a multimodal machine learning model; the method may include providing a graphical user interface associated with a multimodal machine learning model. The method may further include displaying an image to a user in the graphical user interface. The method may also include receiving a textual prompt from the user and then generating input data using the image and the textual prompt. The method may further include generating an output at least in part by applying the input data to the multimodal machine learning model, the multimodal machine learning model configured using prompt engineering to identify a location in the image conditioned on the image and the textual prompt, wherein the output comprises a first location indication. The method may also include displaying, in the graphical user interface, an emphasis indicator at the indicated first location in the image.
Type: Application
Filed: June 18, 2024
Publication date: March 27, 2025
Applicant: c/o OpenAI Opco, LLC
Inventors: Noah DEUTSCH, Nicholas TURLEY, Benjamin ZWEIG
-
Patent number: 12164548
Abstract: The present technology pertains to a generative response engine that can adapt a user interface provided by its front end to receive inputs in a visual format and to provide visual formats in response to prompts. In some embodiments, the generative response engine can provide a greater variety of outputs that can be rendered by the front end. Collectively, the present technology can render dynamic user interface elements in response to prompts received by the generative response engine. Generative response engines that can provide dynamic and multimodal responses that are appropriate to a task are useful for an increased range of tasks.
Type: Grant
Filed: March 15, 2024
Date of Patent: December 10, 2024
Assignee: OpenAI OpCo, LLC
Inventors: Noah Deutsch, Benjamin Zweig
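As a rough sketch of the idea (not the patented front end), the code below assumes the engine returns a structured list of UI elements and the front end renders each one dynamically; the element schema is invented for illustration.

```python
# Minimal sketch: structured model output -> dynamically rendered UI elements.
# The "text"/"image"/"button" schema is an assumption, not the patent's format.
from html import escape


def render(elements: list[dict]) -> str:
    """Turn a structured engine response into simple HTML widgets."""
    parts = []
    for el in elements:
        if el["type"] == "text":
            parts.append(f"<p>{escape(el['value'])}</p>")
        elif el["type"] == "image":
            parts.append(f'<img src="{escape(el["url"])}" alt="">')
        elif el["type"] == "button":
            parts.append(f"<button>{escape(el['label'])}</button>")
    return "\n".join(parts)


# Example structured response a generative engine might produce for a prompt.
response_elements = [
    {"type": "text", "value": "Here are the flights I found:"},
    {"type": "image", "url": "chart.png"},
    {"type": "button", "label": "Book the 9:05 departure"},
]
print(render(response_elements))
```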
-
Patent number: 12051205
Abstract: Disclosed embodiments may include a method of interacting with a multimodal machine learning model; the method may include providing a graphical user interface associated with a multimodal machine learning model. The method may further include displaying an image to a user in the graphical user interface. The method may also include receiving a textual prompt from the user and then generating input data using the image and the textual prompt. The method may further include generating an output at least in part by applying the input data to the multimodal machine learning model, the multimodal machine learning model configured using prompt engineering to identify a location in the image conditioned on the image and the textual prompt, wherein the output includes a first location indication. The method may also include displaying, in the graphical user interface, an emphasis indicator at the indicated first location in the image.
Type: Grant
Filed: September 27, 2023
Date of Patent: July 30, 2024
Assignee: OpenAI OpCo, LLC
Inventors: Noah Deutsch, Benjamin Zweig
-
Patent number: 12039431
Abstract: The disclosed embodiments may include a method of interacting with a multimodal machine learning model; the method may include providing a graphical user interface associated with a multimodal machine learning model. The method may further include displaying an image to a user in the graphical user interface. The method may also include receiving a textual prompt from the user and then generating input data using the image and the textual prompt. The method may further include generating an output at least in part by applying the input data to the multimodal machine learning model, the multimodal machine learning model configured using prompt engineering to identify a location in the image conditioned on the image and the textual prompt, wherein the output comprises a first location indication. The method may also include displaying, in the graphical user interface, an emphasis indicator at the indicated first location in the image.
Type: Grant
Filed: September 27, 2023
Date of Patent: July 16, 2024
Assignee: OpenAI OpCo, LLC
Inventors: Noah Deutsch, Nicholas Turley, Benjamin Zweig