Patents by Inventor Daniel Dines

Daniel Dines has filed for patents to protect the following inventions. This listing includes published patent applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250104458
    Abstract: Using a cognitive artificial intelligence (AI) layer (e.g., a generative AI model and one or more other AI/ML models) to perform robotic process automation (RPA) robot repair is disclosed. Computer vision (CV) and/or optical character recognition (OCR) models may be called by RPA robots when performing activities in RPA workflows to identify graphical elements and text, respectively, in a user interface (UI). However, the CV or OCR model may not find a graphical element or text targeted by the activity of the RPA workflow, or the RPA robot may encounter an exception and otherwise fail when performing the activity. In such cases, a cognitive AI layer including a generative AI model may be used to find the target graphical element or text, or to address the robot failure.
    Type: Application
    Filed: September 26, 2023
    Publication date: March 27, 2025
    Applicant: UiPath, Inc.
    Inventors: Daniel DINES, Graham SHELDON, Mircea GRIGORE, Gerd WEISHAAR, Justin GRINDAL
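To make the fallback flow in the abstract above concrete, here is a minimal Python sketch of the idea only: an RPA activity first calls CV/OCR detection and, if no match is found or an exception occurs, hands the target over to a cognitive AI layer. The detection functions, the Match structure, and the click callback are hypothetical stand-ins, not UiPath APIs.

```python
# Minimal sketch (not UiPath's implementation): try CV/OCR detection first and,
# on a miss or exception, fall back to a cognitive AI layer (e.g., a generative
# model). All functions here are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Match:
    label: str
    x: int
    y: int


def cv_ocr_detect(screenshot: bytes, target: str) -> Optional[Match]:
    """Placeholder for a CV/OCR model call; returns None when nothing is found."""
    return None  # simulate a miss so the fallback path runs


def cognitive_ai_locate(screenshot: bytes, target: str) -> Optional[Match]:
    """Placeholder for the cognitive AI layer prompted with the screen contents
    and a description of the target element."""
    return Match(label=target, x=120, y=340)


def run_activity(screenshot: bytes, target: str,
                 click: Callable[[int, int], None]) -> None:
    try:
        match = cv_ocr_detect(screenshot, target)
    except RuntimeError:
        match = None  # treat a model exception like a miss
    if match is None:
        # Fallback: ask the cognitive AI layer to find the element instead.
        match = cognitive_ai_locate(screenshot, target)
    if match is None:
        raise RuntimeError(f"Could not locate '{target}' with any technique")
    click(match.x, match.y)


if __name__ == "__main__":
    run_activity(b"...", "Submit button",
                 click=lambda x, y: print(f"click at ({x}, {y})"))
```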
  • Patent number: 12229645
    Abstract: Target-based schema identification and semantic mapping for robotic process automation (RPA) are disclosed. When looking at a source, such as a document, a web form, a user interface of a software application, a data file, etc., it is often difficult for software to determine which fields are labels and which are values associated with those labels. Since values have not yet been entered for various labels (e.g., first name, company, customer number, etc.), these labels are easier to detect than when the target also includes various values associated with the labels. A selection of an empty target may be received and target-based schema identification may be performed on the empty target, determining labels and a type of the target. Semantic matching may then be performed between a source and the target. These features may be performed at design time or runtime.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: February 18, 2025
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
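As a rough illustration of the abstract above (not the claimed method), the sketch below identifies labels in an empty target and then semantically matches source fields to them; the field names and the string-similarity stand-in for semantic matching are assumptions.

```python
# Minimal sketch: identify labels in an empty target form, then map source
# fields to those labels. Token similarity stands in for a semantic model.

from difflib import SequenceMatcher


def identify_target_schema(empty_target: dict) -> list[str]:
    """An empty target exposes labels without values, so every empty key is a label."""
    return [label for label, value in empty_target.items() if value in (None, "")]


def semantic_match(source: dict, target_labels: list[str]) -> dict:
    """Map each target label to the most similar source field."""
    mapping = {}
    for label in target_labels:
        best = max(source, key=lambda field:
                   SequenceMatcher(None, field.lower(), label.lower()).ratio())
        mapping[label] = source[best]
    return mapping


if __name__ == "__main__":
    empty_target = {"First Name": "", "Company": "", "Customer Number": ""}
    source = {"given_name": "Ada", "company_name": "UiPath, Inc.", "customer_number": "4711"}
    labels = identify_target_schema(empty_target)
    print(semantic_match(source, labels))
```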
  • Publication number: 20250054327
    Abstract: Using generative AI to supplement automated information extraction is disclosed. Computer vision (CV) and/or optical character recognition (OCR) models and a generative artificial intelligence (AI) model are used together to extract information (e.g., names, dates, invoice numbers, etc.) from a source. Acceptance threshold(s) may be used to accept predictions for extracted data elements from the models, and the prediction from the generative AI model may be preferred or a human may be tasked with reviewing the element. If no model meets its respective acceptance threshold (whether common or specific to that model), these element(s) may be marked for subsequent human review, or a human can be looped in to correct these element(s). The models may then be retrained using these labeled elements.
    Type: Application
    Filed: August 9, 2023
    Publication date: February 13, 2025
    Applicant: UiPath, Inc.
    Inventor: Daniel Dines
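A minimal sketch of the acceptance-threshold logic described in the abstract above, assuming each model returns a (value, confidence) pair per data element; the threshold values, model names, and routing rules are illustrative, not the patented pipeline.

```python
# Sketch: accept a prediction only if its model meets its acceptance threshold,
# prefer the generative AI model when it qualifies, and route everything else
# to human review (whose corrections can later be used for retraining).

THRESHOLDS = {"ocr": 0.90, "genai": 0.80}  # per-model thresholds (illustrative)


def resolve(element: str, ocr_pred: tuple[str, float], genai_pred: tuple[str, float]) -> dict:
    ocr_ok = ocr_pred[1] >= THRESHOLDS["ocr"]
    genai_ok = genai_pred[1] >= THRESHOLDS["genai"]
    if genai_ok:
        # Prefer the generative AI prediction when it meets its threshold.
        return {"element": element, "value": genai_pred[0], "source": "genai"}
    if ocr_ok:
        return {"element": element, "value": ocr_pred[0], "source": "ocr"}
    # No model met its threshold: mark the element for human review.
    return {"element": element, "value": None, "source": "human_review"}


if __name__ == "__main__":
    print(resolve("invoice_number", ocr_pred=("INV-0042", 0.95), genai_pred=("INV-0042", 0.88)))
    print(resolve("due_date", ocr_pred=("2O23-08-09", 0.55), genai_pred=("2023-08-09", 0.61)))
```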
  • Patent number: 12147881
    Abstract: Target-based schema identification and semantic mapping for robotic process automation (RPA) are disclosed. When looking at a source, such as a document, a web form, a user interface of a software application, a data file, etc., it is often difficult for software to determine which fields are labels and which are values associated with those labels. Since values have not yet been entered for various labels (e.g., first name, company, customer number, etc.), these labels are easier to detect than when the target also includes various values associated with the labels. A selection of an empty target may be received and target-based schema identification may be performed on the empty target, determining labels and a type of the target. Semantic matching may then be performed between a source and the target. These features may be performed at design time or runtime.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: November 19, 2024
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
  • Patent number: 12099704
    Abstract: Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: September 24, 2024
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
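The series-then-delayed-parallel idea in the abstract above can be sketched roughly as follows (this is not UiPath's unified target implementation): the primary technique runs alone for a grace period, after which the remaining techniques run in parallel and the first match wins. The three detection functions and their timings are hypothetical.

```python
# Sketch: run the primary detection technique alone for a grace period, then
# launch the fallback techniques in parallel and return the first match.

import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait


def selector_match(target: str):
    time.sleep(2.0)          # simulate a slow attribute/selector-based search
    return None              # ...that fails to find the element


def image_match(target: str):
    time.sleep(0.3)
    return {"technique": "image", "target": target, "x": 100, "y": 200}


def ocr_match(target: str):
    time.sleep(0.5)
    return {"technique": "ocr", "target": target, "x": 98, "y": 203}


def unified_target_find(target: str, grace_period: float = 1.0):
    with ThreadPoolExecutor() as pool:
        primary = pool.submit(selector_match, target)
        done, _ = wait([primary], timeout=grace_period)
        if done and primary.result() is not None:
            return primary.result()          # primary won within the grace period
        # Delayed parallel phase: start fallback techniques alongside the primary.
        pending = {primary, pool.submit(image_match, target), pool.submit(ocr_match, target)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                if future.result() is not None:
                    return future.result()   # first technique with a match wins
    return None


if __name__ == "__main__":
    print(unified_target_find("OK button"))
```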
  • Publication number: 20240220581
    Abstract: Artificial intelligence (AI)-driven, semantic, automatic data transfer between a source and a target using task mining is disclosed. Existing task mining technologies gather all the data pertaining to a screen and/or form, along with the fields and elements present therein. However, semantic meaning may be derived and correlations between fields/elements on the screen may be provided automatically. This may include automatic transformation and validation, and a robotic process automation (RPA) workflow/automation can automatically be generated for a user that performs data copy-and-paste functionality between a source and a target without the user indicating those steps. Also, this functionality may be provided despite the fact that the labels in the source and the target are not exactly the same.
    Type: Application
    Filed: January 4, 2023
    Publication date: July 4, 2024
    Applicant: UiPath, Inc.
    Inventors: Daniel Dines, Cosmin Voicu, Michael Aristo Leonard, II
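As a rough sketch of the data-transfer idea above, the code below derives a field mapping from task-mining-style captures of a source screen and a target form, then emits copy-and-paste steps the user never demonstrated; the normalization-based matching is a stand-in for a real semantic model, and the field names are assumptions.

```python
# Sketch: match source fields to target fields despite differing label spellings
# and generate copy/paste steps automatically.

import re


def normalize(name: str) -> str:
    """Reduce label variants like 'First Name', 'first_name', 'FirstName:' to one key."""
    return re.sub(r"[^a-z0-9]", "", name.lower())


def generate_copy_paste_workflow(source_fields: dict, target_fields: list[str]) -> list[dict]:
    by_key = {normalize(label): (label, value) for label, value in source_fields.items()}
    steps = []
    for target in target_fields:
        hit = by_key.get(normalize(target))
        if hit is None:
            continue  # no semantically matching source field; leave for the user
        src_label, value = hit
        steps.append({"activity": "copy", "from": src_label, "value": value})
        steps.append({"activity": "paste", "into": target, "value": value})
    return steps


if __name__ == "__main__":
    source = {"First Name": "Ada", "Customer No.": "4711", "Notes": "VIP"}
    target = ["first_name", "customer_no"]
    for step in generate_copy_paste_workflow(source, target):
        print(step)
```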
  • Publication number: 20230415338
    Abstract: Target-based schema identification and semantic mapping for robotic process automation (RPA) are disclosed. When looking at a source, such as a document, a web form, a user interface of a software application, a data file, etc., it is often difficult for software to determine which fields are labels and which are values associated with those labels. Since values have not yet been entered for various labels (e.g., first name, company, customer number, etc.), these labels are easier to detect than when the target also includes various values associated with the labels. A selection of an empty target may be received and target-based schema identification may be performed on the empty target, determining labels and a type of the target. Semantic matching may then be performed between a source and the target. These features may be performed at design time or runtime.
    Type: Application
    Filed: May 17, 2022
    Publication date: December 28, 2023
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Publication number: 20230419161
    Abstract: Target-based schema identification and semantic mapping for robotic process automation (RPA) are disclosed. When looking at a source, such as a document, a web form, a user interface of a software application, a data file, etc., it is often difficult for software to determine which fields are labels and which are values associated with those labels. Since values have not yet been entered for various labels (e.g., first name, company, customer number, etc.), these labels are easier to detect than when the target also includes various values associated with the labels. A selection of an empty target may be received and target-based schema identification may be performed on the empty target, determining labels and a type of the target. Semantic matching may then be performed between a source and the target. These features may be performed at design time or runtime.
    Type: Application
    Filed: May 18, 2022
    Publication date: December 28, 2023
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Patent number: 11740990
    Abstract: Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: August 29, 2023
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
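The session-separation idea in the abstract above can be approximated very loosely with the sketch below, where a worker thread stands in for an attended robot running in a second session and the user session stays responsive while waiting for the robot's result; real RPA robots run in separate user sessions, not threads.

```python
# Loose analogy only: the "user session" keeps working while a worker standing
# in for the attended robot fetches data and hands the result back to the form.

import queue
import threading
import time


def robot_session(results: queue.Queue) -> None:
    """Stand-in for an attended robot running in another session."""
    time.sleep(1.0)                       # simulate retrieving data from an application
    results.put({"customer_no": "4711", "status": "Gold"})


def user_session() -> None:
    results: queue.Queue = queue.Queue()
    threading.Thread(target=robot_session, args=(results,), daemon=True).start()
    form = {}
    while not form:
        # The user session stays responsive (here it just polls) instead of
        # blocking on the robot's work.
        try:
            form = results.get(timeout=0.2)
        except queue.Empty:
            print("user keeps working in other applications...")
    print("form populated by robot:", form)


if __name__ == "__main__":
    user_session()
```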
  • Patent number: 11734104
    Abstract: Screen response validation of robot execution for robotic process automation (RPA) is disclosed. Whether text, screen changes, images, and/or other expected visual actions occur in an application executing on a computing system that an RPA robot is interacting with may be recognized. Where the robot has been typing may be determined and the physical position on the screen based on the current resolution of where one or more characters, images, windows, etc. appeared may be provided. The physical position of these elements, or the lack thereof, may allow determination of which field(s) the robot is typing in and what the associated application is for the purpose of validation that the application and computing system are responding as intended. When the expected screen changes do not occur, the robot can stop and throw an exception, go back and attempt the intended interaction again, restart the workflow, or take another suitable action.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: August 22, 2023
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
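A minimal sketch of the screen-response validation described above, assuming a stubbed OCR pass returns recognized text with screen coordinates; the retry-then-raise policy and bounding-box check are illustrative choices, not UiPath's implementation.

```python
# Sketch: after typing, re-read the screen and require the typed text to appear
# inside the target field's bounding box; otherwise retry once, then raise.

from dataclasses import dataclass


@dataclass
class ScreenText:
    text: str
    x: int
    y: int


def read_screen() -> list[ScreenText]:
    """Stand-in for an OCR pass over the current screen at the current resolution."""
    return [ScreenText("John", 410, 223), ScreenText("Invoice", 50, 40)]


def validate_typed_text(expected: str, field_box: tuple[int, int, int, int],
                        retries: int = 1) -> None:
    x0, y0, x1, y1 = field_box
    for attempt in range(retries + 1):
        found = any(t.text == expected and x0 <= t.x <= x1 and y0 <= t.y <= y1
                    for t in read_screen())
        if found:
            return                         # the application responded as intended
        if attempt < retries:
            print("expected text not on screen yet; retrying the interaction")
            # here the robot would re-perform the typing activity
    raise RuntimeError(f"Typed text '{expected}' never appeared in field {field_box}")


if __name__ == "__main__":
    validate_typed_text("John", field_box=(400, 210, 600, 240))
```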
  • Publication number: 20230032195
    Abstract: Screen response validation of robot execution for robotic process automation (RPA) is disclosed. Whether text, screen changes, images, and/or other expected visual actions occur in an application executing on a computing system that an RPA robot is interacting with may be recognized. Where the robot has been typing may be determined and the physical position on the screen based on the current resolution of where one or more characters, images, windows, etc. appeared may be provided. The physical position of these elements, or the lack thereof, may allow determination of which field(s) the robot is typing in and what the associated application is for the purpose of validation that the application and computing system are responding as intended. When the expected screen changes do not occur, the robot can stop and throw an exception, go back and attempt the intended interaction again, restart the workflow, or take another suitable action.
    Type: Application
    Filed: October 3, 2022
    Publication date: February 2, 2023
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Patent number: 11507259
    Abstract: Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: November 22, 2022
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
  • Publication number: 20220350722
    Abstract: Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
    Type: Application
    Filed: July 18, 2022
    Publication date: November 3, 2022
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Publication number: 20220350723
    Abstract: Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
    Type: Application
    Filed: July 18, 2022
    Publication date: November 3, 2022
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Patent number: 11461164
    Abstract: Screen response validation of robot execution for robotic process automation (RPA) is disclosed. Whether text, screen changes, images, and/or other expected visual actions occur in an application executing on a computing system that an RPA robot is interacting with may be recognized. Where the robot has been typing may be determined and the physical position on the screen based on the current resolution of where one or more characters, images, windows, etc. appeared may be provided. The physical position of these elements, or the lack thereof, may allow determination of which field(s) the robot is typing in and what the associated application is for the purpose of validation that the application and computing system are responding as intended. When the expected screen changes do not occur, the robot can stop and throw an exception, go back and attempt the intended interaction again, restart the workflow, or take another suitable action.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: October 4, 2022
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
  • Publication number: 20220276767
    Abstract: Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
    Type: Application
    Filed: May 17, 2022
    Publication date: September 1, 2022
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Patent number: 11392477
    Abstract: Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: July 19, 2022
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines
  • Publication number: 20220197677
    Abstract: Graphical element detection using a combination of user interface (UI) descriptor attributes from two or more graphical element detection techniques is disclosed. UI descriptors may be used to compare attributes for a given UI descriptor with attributes of UI elements found at runtime in the UI. At runtime, the attributes for the UI elements found in the UI can be searched for matches with attributes for a respective RPA workflow activity, and if an exact match or a match within a matching threshold is found, the UI element may be identified and interacted with accordingly.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 23, 2022
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
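A small sketch of the attribute-combination idea in the abstract above: a descriptor mixes attributes that different detection techniques would supply, and a runtime element is accepted on an exact match or when the fraction of matching attributes meets a threshold. The attribute names, descriptor format, and threshold are assumptions, not UiPath's.

```python
# Sketch: score runtime UI elements by how many descriptor attributes they
# reproduce and accept the best one if it meets the matching threshold.

def match_score(descriptor: dict, element: dict) -> float:
    """Fraction of descriptor attributes that the runtime element reproduces."""
    hits = sum(1 for key, value in descriptor.items() if element.get(key) == value)
    return hits / len(descriptor)


def find_element(descriptor: dict, runtime_elements: list[dict], threshold: float = 0.75):
    best = max(runtime_elements, key=lambda e: match_score(descriptor, e))
    return best if match_score(descriptor, best) >= threshold else None


if __name__ == "__main__":
    descriptor = {"tag": "button", "automation_id": "submitBtn",
                  "ocr_text": "Submit", "cls": "primary"}
    runtime_elements = [
        {"tag": "button", "automation_id": "cancelBtn", "ocr_text": "Cancel", "cls": "secondary"},
        {"tag": "button", "automation_id": "submitBtn", "ocr_text": "Submit ", "cls": "primary"},
    ]
    print(find_element(descriptor, runtime_elements))  # 3 of 4 attributes match -> accepted
```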
  • Publication number: 20220197676
    Abstract: Graphical element detection using a combination of user interface (UI) descriptor attributes from two or more graphical element detection techniques is disclosed. UI descriptors may be used to compare attributes for a given UI descriptor with attributes of UI elements found at runtime in the UI. At runtime, the attributes for the UI elements found in the UI can be searched for matches with attributes for a respective RPA workflow activity, and if an exact match or a match within a matching threshold is found, the UI element may be identified and interacted with accordingly.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 23, 2022
    Applicant: UiPath, Inc.
    Inventor: Daniel DINES
  • Patent number: 11301268
    Abstract: Graphical element detection using a combination of user interface (UI) descriptor attributes from two or more graphical element detection techniques is disclosed. UI descriptors may be used to compare attributes for a given UI descriptor with attributes of UI elements found at runtime in the UI. At runtime, the attributes for the UI elements found in the UI can be searched for matches with attributes for a respective RPA workflow activity, and if an exact match or a match within a matching threshold is found, the UI element may be identified and interacted with accordingly.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: April 12, 2022
    Assignee: UiPath, Inc.
    Inventor: Daniel Dines