Abstract: The disclosed embodiments may include a method of interacting with a multimodal machine learning model; the method may include providing a graphical user interface associated with a multimodal machine learning model. The method may further include displaying an image to a user in the graphical user interface. The method may also include receiving a textual prompt from the user and then generating input data using the image and the textual prompt. The method may further include generating an output at least in part by applying the input data to the multimodal machine learning model, the multimodal machine learning model configured using prompt engineering to identify a location in the image conditioned on the image and the textual prompt, wherein the output comprises a first location indication. The method may also include displaying, in the graphical user interface, an emphasis indicator at the indicated first location in the image.
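Below is a minimal Python sketch of the interaction flow the abstract describes, for illustration only: an image and a textual prompt are combined into input data, a location is obtained from a (here, stand-in) multimodal model, and an emphasis indicator is drawn at that location. The function names, the placeholder model call, and the circle-style indicator are assumptions, not the claimed implementation.

```python
from PIL import Image, ImageDraw

def identify_location(image, textual_prompt):
    """Hypothetical stand-in for the multimodal model call described in the
    abstract: given an image and a textual prompt, return an (x, y) location.
    A real system would generate input data from both modalities and apply a
    prompt-engineered multimodal model here."""
    return (320, 240)  # placeholder coordinates

def annotate_with_emphasis(image, textual_prompt, radius=20):
    """Overlay an emphasis indicator (here, a circle) at the location the
    model identifies, mirroring the display step in the abstract."""
    x, y = identify_location(image, textual_prompt)
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    draw.ellipse((x - radius, y - radius, x + radius, y + radius),
                 outline="red", width=3)
    return annotated

if __name__ == "__main__":
    # Stand-in image; in the described GUI this would be the displayed image.
    img = Image.new("RGB", (640, 480), "white")
    result = annotate_with_emphasis(img, "Where is the door handle?")
    result.save("annotated.png")
```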
Type: Application
Filed: June 18, 2024
Publication date: March 27, 2025
Applicant: c/o OpenAI Opco, LLC
Inventors: Noah DEUTSCH, Nicholas TURLEY, Benjamin ZWEIG