Patents by Inventor Angelo Dalli

Angelo Dalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12367407
    Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary settings, and therefore may provide additional safety mechanisms to be a part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: July 22, 2025
    Assignee: UMNAI LIMITED
    Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
  • Publication number: 20250086458
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Application
    Filed: November 25, 2024
    Publication date: March 13, 2025
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
  • Patent number: 12182708
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Grant
    Filed: April 5, 2023
    Date of Patent: December 31, 2024
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Kristian D'Amato, Mauro Pirrone
  • Publication number: 20240249143
    Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as other scenarios where an autoencoder is an input to another machine learning system or when an autoencoder is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
    Type: Application
    Filed: March 6, 2024
    Publication date: July 25, 2024
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
  • Patent number: 11948083
    Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as other scenarios where an autoencoder is an input to another machine learning system or when an autoencoder is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: April 2, 2024
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Mauro Pirrone, Matthew Grech
  • Patent number: 11900236
    Abstract: An exemplary embodiment may provide an interpretable neural network with hierarchical conditions and partitions. A local function f(x) may model the feature attribution within a specific partition. The combination of all the local functions creates a globally interpretable model. Further, INNs may utilize an external process to identify suitable partitions during their initialization and may support training using back-propagation and related techniques.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: February 13, 2024
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Mauro Pirrone
  • Patent number: 11886986
    Abstract: Explainable neural networks may be designed to be implemented efficiently in hardware, leading to substantial speed and space improvements. An exemplary embodiment extends upon possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, amongst many other applications. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: January 30, 2024
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Mauro Pirrone
  • Patent number: 11816587
    Abstract: An exemplary embodiment may describe a convolutional explainable neural network. A CNN-XNN may receive input, such as 2D or multi-dimensional data, a patient history, or any other relevant information. The input data is segmented into various objects, and a knowledge encoding layer may identify and extract various features from the segmented objects. The features may be weighted. An output layer may provide predictions and explanations based on the previous layers. The explanation may be determined using a reverse indexing mechanism (Backmap). The explanation may be processed using a Kernel Labeler method, which labels the progressive refinement of patterns, symbols and concepts from any data format for which a pattern recognition kernel can be defined, enabling the integration of neurosymbolic processing within CNN-XNNs. The optional addition of meta-data and causal logic allows for the integration of connectionist models with symbolic logic processing.
    Type: Grant
    Filed: October 6, 2021
    Date of Patent: November 14, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Mauro Pirrone, Matthew Grech
  • Patent number: 11797835
    Abstract: An explainable transducer transformer (XTT) may be a finite state transducer, together with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.
    Type: Grant
    Filed: January 18, 2023
    Date of Patent: October 24, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
  • Publication number: 20230325666
    Abstract: An exemplary embodiment may present a behavior modeling architecture that is intended to assist in handling, modelling, predicting and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt such architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated in the execution sequence of behavioral modeling in order to monitor the likelihood of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to maintain the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
    Type: Application
    Filed: June 8, 2023
    Publication date: October 12, 2023
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
  • Publication number: 20230259771
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Application
    Filed: April 5, 2023
    Publication date: August 17, 2023
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
  • Patent number: 11715007
    Abstract: An exemplary embodiment may present a behavior modeling architecture that is intended to assist in handling, modelling, predicting and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt such architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated in the execution sequence of behavioral modeling in order to monitor the likelihood of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to maintain the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: August 1, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
  • Publication number: 20230153599
    Abstract: An explainable transducer transformer (XTT) may be a finite state transducer, together with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.
    Type: Application
    Filed: January 18, 2023
    Publication date: May 18, 2023
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
  • Patent number: 11651216
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: May 16, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Kristian D'Amato, Mauro Pirrone
  • Patent number: 11593631
    Abstract: An explainable transducer transformer (XTT) may be a finite state transducer, together with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: February 28, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
  • Publication number: 20230004846
    Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary settings, and therefore may provide additional safety mechanisms to be a part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 5, 2023
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
  • Publication number: 20220414440
    Abstract: Explainable neural networks may be designed to be implemented efficiently in hardware, leading to substantial speed and space improvements. An exemplary embodiment extends upon possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, amongst many other applications. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Mauro PIRRONE
  • Publication number: 20220398460
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Application
    Filed: June 9, 2022
    Publication date: December 15, 2022
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Kristian D'AMATO, Mauro PIRRONE
  • Publication number: 20220391670
    Abstract: An exemplary embodiment provides an explanation and interpretation generation system for creating explanations in different human- and machine-readable formats from an explainable and/or interpretable machine learning model. An extensible explanation architecture may allow for seamless third-party integration. Explanation scaffolding may be implemented for generating domain-specific explanations, while interpretation scaffolding may facilitate the generation of domain- and scenario-specific interpretations. An exemplary explanation filter interpretation model may provide the explanation and interpretation generation system with optional explanation filtering, interpretation filtering, and briefing capabilities. An embodiment may cluster explanations into concepts to incorporate information such as taxonomies, ontologies, causal models, statistical hypotheses, data quality controls, and domain-specific knowledge, and may allow for collaborative human knowledge injection.
    Type: Application
    Filed: August 16, 2022
    Publication date: December 8, 2022
    Applicant: UMNAI Limited
    Inventors: Angelo DALLI, Olga Maximovna FINKEL, Matthew GRECH, Mauro PIRRONE
  • Patent number: 11468308
    Abstract: Explainable neural networks may be designed to be implemented efficiently in hardware, leading to substantial speed and space improvements. An exemplary embodiment extends upon possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, amongst many other applications. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 11, 2022
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Mauro Pirrone
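
The abstracts above describe several related explainable-AI mechanisms. The short Python sketches that follow illustrate a few of them in drastically simplified form; they are informal illustrations under stated assumptions, not the patented implementations, and every function name, parameter and threshold in them is invented for illustration only.

Patent 12367407 (and application 20230004846) describes making the anticipation and prediction of consequences part of the control loop of automated machinery so that operation stays within safe boundaries. A minimal sketch of such a control loop, assuming a trivial one-dimensional forward model and a hand-picked safe envelope:

# Minimal sketch of a control loop that anticipates consequences before acting:
# a stand-in forward model predicts the next state for each candidate action,
# and actions whose predicted state would leave the safe envelope are rejected
# in favour of a fallback. The dynamics, the safe envelope and the fallback
# action are illustrative assumptions.

SAFE_MIN, SAFE_MAX = 0.0, 10.0

def predict_next_state(state, action):
    # stand-in forward model of the machinery being controlled
    return state + action

def is_safe(state):
    return SAFE_MIN <= state <= SAFE_MAX

def safe_step(state, proposed_action, fallback_action=0.0):
    if is_safe(predict_next_state(state, proposed_action)):
        return predict_next_state(state, proposed_action)
    # predicted consequence violates the safety boundary: use the fallback
    return predict_next_state(state, fallback_action)

state = 9.0
for action in [0.5, 2.0, -3.0]:          # the second action would overshoot
    state = safe_step(state, action)
    print(f"state = {state:.1f}")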
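
Patents 12182708 and 11651216 (and the related applications) describe a model search in which an external optimizer samples seed candidates, scores them against objectives and constraints, and iterates until an exit condition is met. A minimal sketch of such a loop, assuming a toy candidate encoding of two hyperparameters, an invented fitness function with a complexity penalty standing in for feature costs, and a fixed iteration budget as the exit condition:

import random

# Sample seed candidates, score each against an objective and a constraint
# penalty, and iterate until the exit condition (an iteration budget) is met.
# The candidate encoding, penalty scheme and mutation step are illustrative.

def sample_candidate():
    # a "candidate" stands in for an explainable model configuration
    return {"depth": random.randint(1, 8), "partitions": random.randint(2, 32)}

def fitness(candidate, feature_cost=0.1):
    # objective: prefer more partitions (proxy for accuracy in this toy)
    objective = candidate["partitions"]
    # constraint: penalise configurations that exceed a complexity budget
    complexity = candidate["depth"] * candidate["partitions"] * feature_cost
    penalty = max(0.0, complexity - 10.0) * 5.0
    return objective - penalty

def mutate(candidate):
    new = dict(candidate)
    key = random.choice(list(new))
    new[key] = max(1, new[key] + random.choice([-1, 1]))
    return new

def external_optimizer(iterations=200, population=8):
    candidates = [sample_candidate() for _ in range(population)]
    best = max(candidates, key=fitness)
    for _ in range(iterations):              # exit condition: iteration budget
        candidates = [mutate(c) for c in candidates]
        contender = max(candidates, key=fitness)
        if fitness(contender) > fitness(best):
            best = contender
    return best

print(external_optimizer())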
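
Patent 11948083 (and application 20240249143) describes an autoencoder that can explain how much each input feature contributed to its compressed representation and that can be used for tasks such as anomaly detection. A minimal sketch with a purely linear encoder and decoder, where the attribution of a feature to a latent unit is simply weight times feature value; the random weights and the use of reconstruction error as an anomaly score are illustrative assumptions:

import numpy as np

# Linear autoencoder sketch: the contribution of each input feature to each
# latent unit is the corresponding encoder weight times the feature value, and
# the reconstruction error serves as a toy anomaly score.

rng = np.random.default_rng(0)
n_features, n_latent = 6, 2
W_enc = rng.normal(size=(n_latent, n_features))
W_dec = rng.normal(size=(n_features, n_latent))

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def explain_encoding(x):
    # contribution[k, j] = share of latent unit k attributable to feature j
    return W_enc * x                      # broadcasts to (n_latent, n_features)

def anomaly_score(x):
    # reconstruction error, usable for anomaly detection as in the abstract
    return float(np.sum((x - decode(encode(x))) ** 2))

x = rng.normal(size=n_features)
print("latent code:", encode(x))
print("attribution matrix:\n", explain_encoding(x))
print("anomaly score:", anomaly_score(x))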
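
Patent 11900236 describes an interpretable neural network in which hierarchical partitions each carry a local function f(x) modelling feature attribution, and the combination of the local functions forms a globally interpretable model. A minimal sketch, assuming hand-written partition conditions and linear local functions (the patented INN identifies suitable partitions with an external process and supports training by back-propagation, which this sketch does not attempt):

import numpy as np

# Each partition has a condition and a local linear function f(x) = w.x + b;
# the prediction is the local output of the matching partition, and the
# feature attributions are the per-feature terms of that local function.

class Partition:
    def __init__(self, name, condition, weights, bias):
        self.name = name            # human-readable partition label
        self.condition = condition  # predicate selecting this partition
        self.weights = np.asarray(weights, dtype=float)
        self.bias = float(bias)

    def local_output(self, x):
        return float(self.weights @ x + self.bias)

    def attributions(self, x):
        # contribution of each feature to the local output
        return self.weights * x


class InterpretableModel:
    def __init__(self, partitions):
        self.partitions = partitions

    def predict_and_explain(self, x):
        x = np.asarray(x, dtype=float)
        for p in self.partitions:
            if p.condition(x):
                return p.local_output(x), p.name, p.attributions(x)
        raise ValueError("no partition matched the input")


# Illustrative two-partition model over features [x0, x1].
model = InterpretableModel([
    Partition("x0 <= 5", lambda x: x[0] <= 5, weights=[0.5, 1.0], bias=0.0),
    Partition("x0 > 5",  lambda x: x[0] > 5,  weights=[2.0, -0.3], bias=1.0),
])

y, partition, attribution = model.predict_and_explain([7.0, 2.0])
print(partition, y, attribution)    # "x0 > 5" 14.4 [14.0 -0.6]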
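
Patent 11816587 describes a convolutional explainable network whose explanations are derived with a reverse indexing mechanism (Backmap). The sketch below shows only the general idea of reverse indexing for a single 1-D convolution, projecting an output-level attribution back onto the input positions of each receptive field; the kernel and attribution values are invented, and this is not the patented mechanism:

import numpy as np

# Reverse-indexing sketch for one 1-D convolution (valid padding, stride 1):
# the attribution assigned to each output position is distributed back over
# the input positions in its receptive field, weighted by the kernel.

def backmap_conv1d(output_attribution, kernel, input_length):
    k = len(kernel)
    input_attribution = np.zeros(input_length)
    for i, a in enumerate(output_attribution):
        for j in range(k):
            input_attribution[i + j] += a * kernel[j]
    return input_attribution

kernel = np.array([0.2, 0.6, 0.2])
output_attribution = np.array([0.0, 1.0, 0.0, 0.5])   # attribution per output cell
print(backmap_conv1d(output_attribution, kernel, input_length=6))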
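
Patent 11715007 (and application 20230325666) describes conditions integrated into a behavioral model so that a system's outputs are monitored in real time and kept within defined boundaries. A minimal sketch, assuming conditions expressed as predicates over a scalar model output with simple clamping handlers:

from dataclasses import dataclass
from typing import Callable

# Each condition is a predicate over the model output; a triggered condition
# logs the event and applies a corrective handler so the output stays within
# the defined boundaries. The boundaries and handlers are illustrative.

@dataclass
class Condition:
    name: str
    predicate: Callable[[float], bool]   # fires when the output violates it
    handler: Callable[[float], float]    # corrective action to apply

def monitored_predict(model, x, conditions):
    y = model(x)
    for cond in conditions:
        if cond.predicate(y):
            print(f"condition triggered: {cond.name} (output={y:.2f})")
            y = cond.handler(y)
    return y

# toy model and boundaries that keep the output within [0, 100]
model = lambda x: 3.0 * x
conditions = [
    Condition("upper boundary", lambda y: y > 100.0, lambda y: 100.0),
    Condition("lower boundary", lambda y: y < 0.0,   lambda y: 0.0),
]

print(monitored_predict(model, 50.0, conditions))    # triggers the upper boundary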
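
Application 20220391670 describes an explanation and interpretation generation system that filters explanations and renders domain- and scenario-specific interpretations. A minimal sketch of a filter-then-interpret pipeline, assuming a magnitude threshold for filtering and an invented phrasing template for the interpretation:

# Filter raw feature attributions by magnitude, then render the survivors as a
# short scenario-specific interpretation. Threshold, scenario label and
# phrasing are illustrative assumptions.

def filter_explanation(attributions, threshold=0.1):
    return {f: v for f, v in attributions.items() if abs(v) >= threshold}

def interpret(filtered, scenario="loan decision"):
    parts = [
        f"{feature} {'raised' if value > 0 else 'lowered'} the score by {abs(value):.2f}"
        for feature, value in sorted(filtered.items(), key=lambda kv: -abs(kv[1]))
    ]
    return f"In this {scenario}: " + "; ".join(parts) + "."

attributions = {"income": 0.42, "age": -0.03, "existing_debt": -0.27}
print(interpret(filter_explanation(attributions)))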