Patents by Inventor Michal Pryt

Michal Pryt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250060934
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Application
    Filed: November 1, 2024
    Publication date: February 20, 2025
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Patent number: 12147732
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Grant
    Filed: August 16, 2023
    Date of Patent: November 19, 2024
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Publication number: 20230393810
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Application
    Filed: August 16, 2023
    Publication date: December 7, 2023
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Patent number: 11775254
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: October 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Publication number: 20220050661
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Application
    Filed: January 31, 2020
    Publication date: February 17, 2022
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
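
The records above share a single abstract, which describes a pipeline: parse a free-form utterance into a user intent, identify a target visual cue to look for, run object recognition on a screenshot of the GUI to locate that cue, map the cue's location to a nearby interactive element, and populate that element with data drawn from the intent. The Python sketch below illustrates that control flow only; every function and name in it is an illustrative stand-in (toy keyword matching and canned detections), not the patented or Google implementation.

# Minimal sketch of the GUI-interaction pipeline described in the abstracts above.
# All names are hypothetical: a real system would back parse_intent and
# detect_visual_cues with NLU and object-recognition models, not the stubs here.

from dataclasses import dataclass


@dataclass
class Intent:
    action: str       # e.g. "fill_field"
    target_cue: str   # visual cue to locate, e.g. a magnifying-glass icon
    payload: str      # data to enter into the matched element


@dataclass
class Detection:
    label: str
    x: int
    y: int


def parse_intent(utterance: str) -> Intent:
    """Derive a user intent from free-form natural language (toy keyword rules)."""
    if "search" in utterance.lower():
        # Assume a search intent maps to the magnifier icon beside the search box.
        query = utterance.lower().split("search for", 1)[-1].strip()
        return Intent(action="fill_field", target_cue="magnifier_icon", payload=query)
    raise ValueError("unrecognized intent")


def detect_visual_cues(screenshot: bytes) -> list[Detection]:
    """Stand-in for object recognition over a GUI screenshot.

    A real pipeline would run a trained detector on the image; canned
    detections are returned here so the control flow runs end to end.
    """
    return [Detection(label="magnifier_icon", x=620, y=42),
            Detection(label="cart_icon", x=760, y=40)]


def locate_interactive_element(cue: Detection) -> dict:
    """Map a detected cue's location to the nearest interactive GUI element.

    Stubbed: assume the text box associated with an icon sits just left of it.
    """
    return {"type": "text_input", "x": cue.x - 200, "y": cue.y}


def act_on_gui(utterance: str, screenshot: bytes) -> dict:
    intent = parse_intent(utterance)
    detections = detect_visual_cues(screenshot)
    cue = next(d for d in detections if d.label == intent.target_cue)
    element = locate_interactive_element(cue)
    # Populate the identified element with data determined from the user intent.
    element["value"] = intent.payload
    return element


if __name__ == "__main__":
    print(act_on_gui("Search for red running shoes", screenshot=b""))

Running the script prints the located text input together with the value taken from the utterance, which mirrors the final step of the abstract: the interactive element is identified by the cue's position and then filled with intent-derived data.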