Patents by Inventor Angelo Dalli
Angelo Dalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12367407
Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary setting, and therefore may provide additional safety mechanisms to be part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
Type: Grant
Filed: September 13, 2022
Date of Patent: July 22, 2025
Assignee: UMNAI LIMITED
Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
-
Publication number: 20250086458
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: November 25, 2024
Publication date: March 13, 2025
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
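The sample-score-optimize loop this abstract describes can be pictured with a small sketch. Everything here is a hypothetical illustration under invented assumptions (the candidate representation, the fitness function, and the mutation scheme are not taken from the patent): candidates are feature subsets, a cost budget acts as the constraint, and an external loop refines the population until a round budget (the exit condition) is exhausted.

```python
# Hypothetical sketch of a candidate-search loop: sample seed candidates,
# score their fitness against objectives and constraints, and iteratively
# refine them until an exit condition (here, a fixed round budget) is met.
import random

def sample_candidates(n, n_features):
    # Each candidate is a feature subset sampled from the search space.
    return [{f for f in range(n_features) if random.random() < 0.5}
            for _ in range(n)]

def fitness(candidate, feature_costs, max_cost):
    # Constraint: candidates over the cost budget get zero fitness.
    cost = sum(feature_costs[f] for f in candidate)
    if cost > max_cost:
        return 0.0
    # Objective: prefer broad feature coverage at low total cost.
    return len(candidate) / (1.0 + cost)

def model_search(feature_costs, max_cost, rounds=50, pop=20):
    n = len(feature_costs)
    candidates = sample_candidates(pop, n)
    for _ in range(rounds):  # exit condition: round budget exhausted
        scored = sorted(candidates,
                        key=lambda c: fitness(c, feature_costs, max_cost),
                        reverse=True)
        survivors = scored[: pop // 2]
        # Mutate each survivor (flip one feature) to form the next generation.
        children = []
        for c in survivors:
            child = set(c)
            child.symmetric_difference_update({random.randrange(n)})
            children.append(child)
        candidates = survivors + children
    return max(candidates, key=lambda c: fitness(c, feature_costs, max_cost))
```

In a real system the surviving candidates would be trained into explainable models and re-scored, rather than ranked by a closed-form fitness function.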
-
Patent number: 12182708
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Grant
Filed: April 5, 2023
Date of Patent: December 31, 2024
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Kristian D'Amato, Mauro Pirrone
-
Publication number: 20240249143
Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as in other scenarios where an autoencoder feeds another machine learning system or is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation, and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
Type: Application
Filed: March 6, 2024
Publication date: July 25, 2024
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
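The per-feature attribution idea, and its use for anomaly detection, can be sketched with a deliberately tiny stand-in autoencoder. This is an illustration only, not the patented architecture: the "encoder" compresses the input to a one-dimensional code (the feature mean), and the share of reconstruction error carried by each feature serves as its attribution.

```python
# Toy "autoencoder": encode compresses the input to a 1-dimensional code
# (the mean of the features); decode reconstructs every feature from it.
def encode(x):
    return sum(x) / len(x)

def decode(z, n):
    return [z] * n

def feature_attribution(x):
    # Share of the reconstruction error carried by each input feature:
    # features the compressed representation cannot reproduce (e.g.
    # anomalous values) receive the largest attribution.
    x_hat = decode(encode(x), len(x))
    err = [(a - b) ** 2 for a, b in zip(x, x_hat)]
    total = sum(err) or 1.0  # avoid division by zero on perfect inputs
    return [e / total for e in err]

# The anomalous fourth feature dominates the attribution:
# errors are [4, 4, 4, 36], so attr == [1/12, 1/12, 1/12, 0.75].
attr = feature_attribution([1.0, 1.0, 1.0, 9.0])
```

A real explainable autoencoder would learn its encoder and decoder, but the attribution principle (explaining the output in terms of per-feature contributions) is the same.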
-
Patent number: 11948083
Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as in other scenarios where an autoencoder feeds another machine learning system or is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation, and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
Type: Grant
Filed: November 16, 2021
Date of Patent: April 2, 2024
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone, Matthew Grech
-
Patent number: 11900236
Abstract: An exemplary embodiment may provide an interpretable neural network (INN) with hierarchical conditions and partitions. A local function f(x) may model the feature attribution within a specific partition. The combination of all the local functions creates a globally interpretable model. Further, INNs may utilize an external process to identify suitable partitions during their initialization and may support training using back-propagation and related techniques.
Type: Grant
Filed: November 4, 2021
Date of Patent: February 13, 2024
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone
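The idea of conditions routing an input to a partition, where a local function f(x) both predicts and attributes, can be pictured with a minimal sketch. The partition boundary at x[0] = 0.5 and all coefficients are invented for illustration; they are not from the patent.

```python
# Hypothetical sketch of a partitioned interpretable model: a condition
# selects the partition, a local linear function f(x) produces the
# prediction, and its per-feature terms double as attributions.

PARTITIONS = [
    # (condition on x, local linear coefficients, intercept)
    (lambda x: x[0] <= 0.5, [2.0, 0.5], 0.1),
    (lambda x: x[0] > 0.5,  [0.3, 1.5], -0.2),
]

def predict_with_attribution(x):
    for condition, coeffs, intercept in PARTITIONS:
        if condition(x):
            # Each term c * v is the attribution of one input feature.
            attributions = [c * v for c, v in zip(coeffs, x)]
            prediction = intercept + sum(attributions)
            return prediction, attributions
    raise ValueError("no partition matched")

# x = [0.2, 1.0] falls in the first partition:
# prediction = 0.1 + 2.0*0.2 + 0.5*1.0, attributions [0.4, 0.5].
pred, attr = predict_with_attribution([0.2, 1.0])
```

Because every partition's local model is transparent, the collection of local functions stays globally interpretable, which is the property the abstract emphasizes.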
-
Patent number: 11886986
Abstract: Explainable neural networks may be designed for efficient hardware implementation, leading to substantial speed and space improvements. An exemplary embodiment extends possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, among many others. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
Type: Grant
Filed: August 31, 2022
Date of Patent: January 30, 2024
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone
-
Patent number: 11816587
Abstract: An exemplary embodiment may describe a convolutional explainable neural network. A CNN-XNN may receive input such as 2D or multi-dimensional data, a patient history, or any other relevant information. The input data is segmented into various objects, and a knowledge encoding layer may identify and extract various features from the segmented objects. The features may be weighted. An output layer may provide predictions and explanations based on the previous layers. The explanation may be determined using a reverse indexing mechanism (Backmap). The explanation may be processed using a Kernel Labeler method, which allows the progressive refinement of patterns, symbols, and concepts to be labelled for any data format in which a pattern-recognition kernel can be defined, enabling the integration of neurosymbolic processing within CNN-XNNs. The optional addition of meta-data and causal logic allows for the integration of connectionist models with symbolic logic processing.
Type: Grant
Filed: October 6, 2021
Date of Patent: November 14, 2023
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone, Matthew Grech
-
Patent number: 11797835
Abstract: An explainable transducer transformer (XTT) may combine a finite state transducer with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, the explanation, and its justification.
Type: Grant
Filed: January 18, 2023
Date of Patent: October 24, 2023
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
-
Publication number: 20230325666
Abstract: An exemplary embodiment may present a behavior modeling architecture intended to assist in handling, modelling, predicting, and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt the architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated into the execution sequence of behavioral modeling in order to monitor the likelihoods of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to keep the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
Type: Application
Filed: June 8, 2023
Publication date: October 12, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
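The condition-monitoring idea can be sketched as a wrapper that checks a behavioural boundary on every prediction. The names and the corrective action (clamping the output back inside the boundary) are illustrative assumptions, not the patented mechanism:

```python
# Hypothetical sketch: evaluate a model, check a behavioural condition in
# real time, and react to a breach by reporting it and keeping the output
# within the defined boundary.
def monitored_predict(model, x, lower, upper, on_breach):
    y = model(x)
    if not (lower <= y <= upper):     # condition on the model's behaviour
        on_breach(x, y)               # e.g. log, alert, or inject knowledge
        y = min(max(y, lower), upper) # clamp back inside the boundary
    return y
```

A breach handler might log the offending input and output for later analysis, mirroring the abstract's point that conditions can trigger corrective interventions during both training and prediction.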
-
Publication number: 20230259771
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: April 5, 2023
Publication date: August 17, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
-
Patent number: 11715007
Abstract: An exemplary embodiment may present a behavior modeling architecture intended to assist in handling, modelling, predicting, and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt the architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated into the execution sequence of behavioral modeling in order to monitor the likelihoods of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to keep the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
Type: Grant
Filed: August 27, 2021
Date of Patent: August 1, 2023
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
-
Publication number: 20230153599
Abstract: An explainable transducer transformer (XTT) may combine a finite state transducer with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, the explanation, and its justification.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
-
Patent number: 11651216
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Grant
Filed: June 9, 2022
Date of Patent: May 16, 2023
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Kristian D'Amato, Mauro Pirrone
-
Patent number: 11593631
Abstract: An explainable transducer transformer (XTT) may combine a finite state transducer with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, the explanation, and its justification.
Type: Grant
Filed: December 17, 2021
Date of Patent: February 28, 2023
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Matthew Grech, Mauro Pirrone
-
Publication number: 20230004846
Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary setting, and therefore may provide additional safety mechanisms to be part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
Type: Application
Filed: September 13, 2022
Publication date: January 5, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
-
Publication number: 20220414440
Abstract: Explainable neural networks may be designed for efficient hardware implementation, leading to substantial speed and space improvements. An exemplary embodiment extends possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, among many others. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
Type: Application
Filed: August 31, 2022
Publication date: December 29, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
-
Publication number: 20220398460
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: June 9, 2022
Publication date: December 15, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D'AMATO, Mauro PIRRONE
-
Publication number: 20220391670
Abstract: An exemplary embodiment provides an explanation and interpretation generation system for creating explanations in different human- and machine-readable formats from an explainable and/or interpretable machine learning model. An extensible explanation architecture may allow for seamless third-party integration. Explanation scaffolding may be implemented for generating domain-specific explanations, while interpretation scaffolding may facilitate the generation of domain- and scenario-specific interpretations. An exemplary explanation filter interpretation model may provide the explanation and interpretation generation system with optional explanation filtering, interpretation filtering, and briefing capabilities. An embodiment may cluster explanations into concepts to incorporate information such as taxonomies, ontologies, causal models, statistical hypotheses, data quality controls, and domain-specific knowledge, and to allow for collaborative human knowledge injection.
Type: Application
Filed: August 16, 2022
Publication date: December 8, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Olga Maximovna FINKEL, Matthew GRECH, Mauro PIRRONE
-
Patent number: 11468308
Abstract: Explainable neural networks may be designed for efficient hardware implementation, leading to substantial speed and space improvements. An exemplary embodiment extends possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, among many others. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
Type: Grant
Filed: April 30, 2021
Date of Patent: October 11, 2022
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone