Patents by Inventor Mauro PIRRONE
Mauro PIRRONE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250086458
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: November 25, 2024
Publication date: March 13, 2025
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
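For illustration, a minimal sketch of the iterative candidate-optimization loop this abstract describes follows; the feature set, feature costs, fitness function, and exit condition are hypothetical stand-ins rather than the patent's actual method.

```python
import random

# Illustrative sketch of the candidate-optimization loop described in the
# abstract. Feature names, costs, and the fitness function are hypothetical.

FEATURES = ["age", "income", "tenure", "region"]
FEATURE_COSTS = {"age": 1.0, "income": 3.0, "tenure": 2.0, "region": 0.5}
MAX_BUDGET = 4.0          # constraint: total feature cost must stay in budget
MAX_ITERATIONS = 100      # exit condition

def fitness(candidate):
    """Score a candidate feature subset against objectives and constraints."""
    cost = sum(FEATURE_COSTS[f] for f in candidate)
    coverage = len(candidate) / len(FEATURES)      # stand-in objective
    penalty = max(0.0, cost - MAX_BUDGET)          # constraint violation
    return coverage - penalty

def mutate(candidate):
    """Toggle one feature in or out of the candidate subset."""
    f = random.choice(FEATURES)
    return candidate ^ {f}

# Sample the search space to generate initial seed candidates.
population = [set(random.sample(FEATURES, k=random.randint(1, 3)))
              for _ in range(8)]

# Iteratively optimize until the exit condition is met.
for _ in range(MAX_ITERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]
    population = survivors + [mutate(c) for c in survivors]

best = max(population, key=fitness)
print("best candidate:", sorted(best), "fitness:", round(fitness(best), 3))
```

The same loop shape holds if the candidates are trained models rather than feature subsets: only the fitness function changes.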
-
Publication number: 20240249143
Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as other scenarios where an autoencoder's output is input to another machine learning system or where an autoencoder is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation, and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
Type: Application
Filed: March 6, 2024
Publication date: July 25, 2024
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
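A toy sketch of the core idea, assuming a plain linear autoencoder: because the model is white-box, each feature's contribution to the compressed representation is a readable product of input value and weight. The data, dimensions, and anomaly threshold are illustrative.

```python
import numpy as np

# Hypothetical sketch: a linear autoencoder whose weights directly expose how
# much each input feature contributes to the compressed representation.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # 4 input features
W_enc = rng.normal(scale=0.1, size=(4, 2))    # encoder: 4 features -> 2 latents
W_dec = rng.normal(scale=0.1, size=(2, 4))    # decoder: 2 latents -> 4 features

for _ in range(500):                          # plain gradient-descent training
    Z = X @ W_enc                             # compressed representation
    X_hat = Z @ W_dec
    err = X_hat - X
    W_dec -= 0.01 * (Z.T @ err) / len(X)
    W_enc -= 0.01 * (X.T @ (err @ W_dec.T)) / len(X)

# Attribution: the contribution of feature j to latent k for one sample is
# x[j] * W_enc[j, k], so the encoding is explainable term by term.
x = X[0]
attribution = x[:, None] * W_enc              # shape (4 features, 2 latents)
print("latent code:", x @ W_enc)
print("per-feature attribution to each latent:\n", attribution)

# Anomaly detection: flag samples whose reconstruction error is unusually high.
errors = ((X @ W_enc @ W_dec - X) ** 2).sum(axis=1)
threshold = errors.mean() + 3 * errors.std()
print("anomalies:", int((errors > threshold).sum()))
```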
-
Publication number: 20230325666
Abstract: An exemplary embodiment may present a behavior modeling architecture intended to assist in handling, modeling, predicting, and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt the architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated into the execution sequence of behavioral modeling in order to monitor the likelihood of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to maintain the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
Type: Application
Filed: June 8, 2023
Publication date: October 12, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
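A minimal sketch of one way such a condition could be attached to an execution sequence, assuming a simple empirical path-frequency monitor; the path names, bounds, and violation handler are invented for illustration.

```python
from collections import Counter

# Illustrative sketch: conditions attached to a behavioural model's execution
# sequence monitor how often certain paths occur, and fire a handler when a
# path's empirical likelihood leaves its defined boundary.

class BehaviouralMonitor:
    def __init__(self, bounds):
        self.bounds = bounds          # path -> (min_prob, max_prob)
        self.counts = Counter()
        self.total = 0

    def observe(self, path):
        """Record one execution path and check all conditions."""
        self.counts[path] += 1
        self.total += 1
        for p, (lo, hi) in self.bounds.items():
            prob = self.counts[p] / self.total
            if not lo <= prob <= hi:
                self.on_violation(p, prob)

    def on_violation(self, path, prob):
        # A real system might trigger system-knowledge injection here to pull
        # the model back inside its defined behavioural boundaries.
        print(f"condition fired: path {path!r} at p={prob:.2f} out of bounds")

monitor = BehaviouralMonitor({"fallback": (0.0, 0.2)})
for path in ["normal"] * 8 + ["fallback"] * 3:
    monitor.observe(path)
```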
-
Publication number: 20230259771
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: April 5, 2023
Publication date: August 17, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D’AMATO, Mauro PIRRONE
-
Publication number: 20230153599
Abstract: An explainable transducer transformer (XTT) may combine a finite state transducer with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
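As a rough illustration of the white-box replacement idea, the sketch below implements a single attention sub-layer that returns its attention weights alongside its output, so an explanation can be read off directly; the finite state transducer side of the XTT is not modelled, and all shapes and names are assumptions.

```python
import numpy as np

# Minimal sketch of an attention sub-layer that exposes its weights as the
# raw material for an explanation, rather than hiding them.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def explainable_attention(X, Wq, Wk, Wv):
    """Single-head attention that also returns its token-to-token weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = ["the", "model", "is", "explainable"]
X = rng.normal(size=(4, 8))                       # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(scale=0.5, size=(8, 8)) for _ in range(3))

out, weights = explainable_attention(X, Wq, Wk, Wv)
# The weights row for a token says which inputs drove its representation:
for tok, row in zip(tokens, weights):
    top = tokens[int(row.argmax())]
    print(f"{tok!r} attends most to {top!r} (w={row.max():.2f})")
```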
-
Publication number: 20230004846
Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary setting, and therefore may provide additional safety mechanisms to be part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
Type: Application
Filed: September 13, 2022
Publication date: January 5, 2023
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
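A heavily simplified sketch of consequence anticipation inside a control loop, assuming one-step linear dynamics and a single speed boundary; none of these specifics come from the patent.

```python
# Hypothetical sketch: before each command is executed, a one-step prediction
# of its consequence is checked against a defined boundary, and unsafe
# commands are clipped to the largest still-safe value.

SPEED_LIMIT = 2.0      # boundary: predicted speed must stay at or below this
DT = 0.1               # control-loop time step

def predict_speed(speed, accel):
    """One-step anticipation of the consequence of a candidate command."""
    return speed + accel * DT

def safe_command(speed, requested_accel):
    """Clip the requested acceleration so the predicted state stays in bounds."""
    if predict_speed(speed, requested_accel) > SPEED_LIMIT:
        return (SPEED_LIMIT - speed) / DT     # largest acceleration still safe
    return requested_accel

speed = 1.9
for step in range(3):
    requested = 2.5                           # controller asks for hard acceleration
    applied = safe_command(speed, requested)
    speed = predict_speed(speed, applied)
    print(f"step {step}: requested={requested}, applied={applied:.2f}, speed={speed:.2f}")
```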
-
Publication number: 20220414440
Abstract: Explainable neural networks (XNNs) may be designed to be implemented efficiently in hardware, leading to substantial speed and space improvements. An exemplary embodiment extends possible hardware embodiments of XNNs, making them suitable for low-power applications, smartphones, mobile computing devices, autonomous machines, server accelerators, Internet of Things (IoT) and edge computing applications, amongst many others. The capability of XNNs to be transformed from one form to another while preserving their logical equivalence is exploited to create efficient, secure hardware implementations that are optimized for the desired application domain and predictable in their behavior.
Type: Application
Filed: August 31, 2022
Publication date: December 29, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
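One way to picture an equivalence-preserving transformation is a fixed-point quantization of a white-box model's coefficients, verified against the original; this is an assumed stand-in for a hardware implementation, not the patent's method.

```python
import numpy as np

# Illustrative sketch: a small linear rule-based model is quantized to
# fixed-point integers (a stand-in for a low-power hardware form) and its
# behaviour is verified against the original on sample data.

rng = np.random.default_rng(0)
coeffs = rng.normal(size=5)                   # white-box model coefficients
SCALE = 2 ** 6                                # fixed-point scaling factor

# Transform: quantize coefficients to integers a low-power device could use.
coeffs_fp = np.round(coeffs * SCALE).astype(np.int32)

X = rng.normal(size=(1000, 5))
y_float = X @ coeffs                          # reference model output
y_fixed = (X @ coeffs_fp) / SCALE             # hardware-form output, rescaled

# Verify the transformed model matches within a stated tolerance, so its
# behaviour stays predictable after the transformation.
max_err = np.abs(y_float - y_fixed).max()
assert max_err < 0.1, "transformation changed model behaviour too much"
print(f"max deviation after fixed-point transform: {max_err:.4f}")
```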
-
Publication number: 20220398460
Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
Type: Application
Filed: June 9, 2022
Publication date: December 15, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Kristian D'AMATO, Mauro PIRRONE
-
Publication number: 20220391670
Abstract: An exemplary embodiment provides an explanation and interpretation generation system for creating explanations in different human- and machine-readable formats from an explainable and/or interpretable machine learning model. An extensible explanation architecture may allow for seamless third-party integration. Explanation scaffolding may be implemented for generating domain-specific explanations, while interpretation scaffolding may facilitate the generation of domain- and scenario-specific interpretations. An exemplary explanation filter interpretation model may provide an explanation and interpretation generation system with optional explanation filtering, interpretation filtering, and briefing capabilities. An embodiment may cluster explanations into concepts to incorporate information such as taxonomies, ontologies, causal models, statistical hypotheses, data quality controls, and domain-specific knowledge, and to allow for collaborative human knowledge injection.
Type: Application
Filed: August 16, 2022
Publication date: December 8, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Olga Maximovna FINKEL, Matthew GRECH, Mauro PIRRONE
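A small sketch of how explanation filtering and scaffolding might compose, assuming attributions arrive as a feature-to-weight map; the threshold filter and loan-domain templates are invented examples.

```python
# Hypothetical sketch of an explanation pipeline: raw model attributions pass
# through an explanation filter, then are rendered against a domain-specific
# scaffold to produce an interpretation.

attributions = {"income": 0.42, "age": -0.31, "tenure": 0.05, "region": 0.02}

def explanation_filter(attribs, threshold=0.1):
    """Keep only features whose influence clears the threshold."""
    return {f: w for f, w in attribs.items() if abs(w) >= threshold}

def interpret(attribs, scaffold):
    """Render filtered attributions as a domain-specific interpretation."""
    parts = [scaffold[f].format(w=w) for f, w in
             sorted(attribs.items(), key=lambda kv: -abs(kv[1]))]
    return " ".join(parts)

# Domain scaffold: how each feature should be talked about in this domain.
loan_scaffold = {
    "income": "Income raised the score ({w:+.2f}).",
    "age": "Age lowered the score ({w:+.2f}).",
}

filtered = explanation_filter(attributions)
print(interpret(filtered, loan_scaffold))
# -> Income raised the score (+0.42). Age lowered the score (-0.31).
```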
-
Publication number: 20220198254
Abstract: An explainable transducer transformer (XTT) may combine a finite state transducer with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS) to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.
Type: Application
Filed: December 17, 2021
Publication date: June 23, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
-
Publication number: 20220172050
Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as other scenarios where an autoencoder's output is input to another machine learning system or where an autoencoder is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation, and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
Type: Application
Filed: November 16, 2021
Publication date: June 2, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
-
Publication number: 20220156614
Abstract: Typical autonomous systems implement black-box models for tasks such as motion detection and triaging failure events, and as a result are unable to provide an explanation in terms of their input features. An explainable framework may utilize one or more explainable white-box architectures. Explainable models allow for a new set of capabilities in industrial, commercial, and non-commercial applications, such as behavioral prediction and boundary setting, and therefore may provide additional safety mechanisms to be part of the control loop of automated machinery, apparatus, and systems. An embodiment may provide a practical solution for the safe operation of automated machinery and systems based on the anticipation and prediction of consequences. The ability to guarantee a safe mode of operation in an autonomous system, which may include machinery and robots that interact with human beings, is a major unresolved problem which may be solved by an exemplary explainable framework.
Type: Application
Filed: November 12, 2021
Publication date: May 19, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
-
Publication number: 20220147876
Abstract: An exemplary embodiment may provide an explainable reinforcement learning system. Explanations may be incorporated into an exemplary reinforcement learning agent/model or a corresponding environmental model. The explanations may be incorporated into an agent's state and/or action space. An explainable Bellman equation may implement an explainable state and explainable action as part of an explainable reward function. An explainable reinforcement learning induction method may use a dataset to produce a white-box model which mimics a black-box reinforcement learning system. An explainable generative adversarial imitation learning model may implement an explainable generative adversarial network to train the occupancy measure of a policy and may generate multiple levels of explanations. Explainable reinforcement learning may be implemented on a quantum computing system using an embodiment of an explainable Bellman equation.
Type: Application
Filed: November 12, 2021
Publication date: May 12, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
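To make the "explainable Bellman" idea concrete, here is a toy tabular Q-learning loop that stores a justification next to each value it backs up; the chain environment and the explanation content are illustrative assumptions, not the patent's formulation.

```python
import random

# Illustrative sketch: alongside each Q-value the agent stores which successor
# state justified it, giving a minimal "explanation" for the learned value.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
WHY = {}                                    # (state, action) -> explanation

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(500):
    s = random.randrange(N_STATES - 1)
    a = random.choice((-1, 1))
    s2, r = step(s, a)
    best_next = max(Q[(s2, a2)] for a2 in (-1, 1))
    # Bellman-style update that also records its justification.
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    WHY[(s, a)] = f"value backed up from state {s2} (r={r}, next={best_next:.2f})"

s, a = 3, 1
print(f"Q{(s, a)} = {Q[(s, a)]:.2f}; explanation: {WHY[(s, a)]}")
```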
-
Publication number: 20220138532
Abstract: An exemplary embodiment may provide an interpretable neural network (INN) with hierarchical conditions and partitions. A local function f(x) may model the feature attribution within a specific partition. The combination of all the local functions creates a globally interpretable model. Further, INNs may utilize an external process to identify suitable partitions during their initialization and may support training using back-propagation and related techniques.
Type: Application
Filed: November 4, 2021
Publication date: May 5, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
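The partitioned form lends itself to a very short sketch: each partition pairs a condition with a local linear f(x) whose terms are the feature attributions. The partitions and coefficients below are made up for illustration.

```python
# Minimal sketch of the partitioned form: conditions select a partition, and
# the local linear function inside it yields per-feature attributions.

PARTITIONS = [
    # (name, condition on x, local coefficients (bias, w1, w2))
    ("low x1", lambda x: x[0] < 0.5, (0.1, 2.0, -1.0)),
    ("high x1", lambda x: x[0] >= 0.5, (0.8, 0.5, 1.5)),
]

def predict_with_explanation(x):
    for name, cond, (b, w1, w2) in PARTITIONS:
        if cond(x):                          # hierarchical conditions in order
            attributions = {"x1": w1 * x[0], "x2": w2 * x[1]}
            return b + sum(attributions.values()), name, attributions
    raise ValueError("no partition matched")

y, partition, attribs = predict_with_explanation((0.3, 1.0))
print(f"prediction={y:.2f} via partition {partition!r}, attributions={attribs}")
```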
-
Publication number: 20220114417
Abstract: An exemplary embodiment provides an explanation and interpretation generation system for creating explanations in different human- and machine-readable formats from an explainable and/or interpretable machine learning model. An extensible explanation architecture may allow for seamless third-party integration. Explanation scaffolding may be implemented for generating domain-specific explanations, while interpretation scaffolding may facilitate the generation of domain- and scenario-specific interpretations. An exemplary explanation filter interpretation model may provide an explanation and interpretation generation system with optional explanation filtering, interpretation filtering, and briefing capabilities. An embodiment may cluster explanations into concepts to incorporate information such as taxonomies, ontologies, causal models, statistical hypotheses, data quality controls, and domain-specific knowledge, and to allow for collaborative human knowledge injection.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Olga Maximovna FINKEL, Matthew GRECH, Mauro PIRRONE
-
Publication number: 20220108179
Abstract: Human knowledge may be injected into an explainable AI system in order to reduce the model's generalization error, improve model accuracy and interpretability, and avoid or eliminate bias, while providing a path towards the integration of connectionist systems with symbolic and causal logic in a combined AI system. Human knowledge injection may be implemented by harnessing the white-box nature of explainable/interpretable models. In one exemplary embodiment, a user applies intuition to model-specific cases or exceptions. In another embodiment, an explainable model may be embedded in workflow systems which enable users to apply pre-hoc and post-hoc operations. A third exemplary embodiment implements human-assisted focusing. An exemplary embodiment also presents a method to train and refine explainable or interpretable models without losing the injected knowledge defined by humans when applying gradient descent techniques.
Type: Application
Filed: December 15, 2021
Publication date: April 7, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
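One plausible reading of the final sentence is gradient masking: weights a human has injected are excluded from the gradient-descent update. The sketch below assumes that mechanism and a linear model; both are illustrative choices, not the patent's stated method.

```python
import numpy as np

# Hypothetical sketch: coefficients a human has fixed are masked out of the
# gradient update, so training cannot overwrite the injected knowledge.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
true_w = np.array([1.5, -2.0, 0.7])
y = X @ true_w + rng.normal(scale=0.1, size=256)

w = np.zeros(3)
w[1] = -2.0                        # human-injected coefficient for feature 1
frozen = np.array([0.0, 1.0, 0.0]) # 1 marks an injected, non-trainable weight

for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.05 * grad * (1 - frozen)   # gradient descent skips frozen weights

print("learned weights:", np.round(w, 3), "(feature 1 stayed at -2.0)")
```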
-
Publication number: 20220067520
Abstract: An exemplary embodiment may present a behavior modeling architecture intended to assist in handling, modeling, predicting, and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt the architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated into the execution sequence of behavioral modeling in order to monitor the likelihood of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to maintain the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
Type: Application
Filed: August 27, 2021
Publication date: March 3, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Matthew GRECH, Mauro PIRRONE
-
Publication number: 20220027737
Abstract: An exemplary embodiment may describe a convolutional explainable neural network (CNN-XNN). A CNN-XNN may receive input, such as 2D or multi-dimensional data, a patient history, or any other relevant information. The input data is segmented into various objects, and a knowledge encoding layer may identify and extract various features from the segmented objects. The features may be weighted. An output layer may provide predictions and explanations based on the previous layers. The explanation may be determined using a reverse indexing mechanism (Backmap). The explanation may be processed using a Kernel Labeler method that allows the labelling of the progressive refinement of patterns, symbols, and concepts from any data format that allows a pattern recognition kernel to be defined, enabling the integration of neurosymbolic processing within CNN-XNNs. The optional addition of meta-data and causal logic allows for the integration of connectionist models with symbolic logic processing.
Type: Application
Filed: October 6, 2021
Publication date: January 27, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE, Matthew GRECH
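A one-dimensional toy version of reverse indexing: after a convolution and max-pool, the pooled activation is traced back to the input window and per-input contributions that produced it. The signal, kernel, and sizes are hypothetical stand-ins for the patent's multi-dimensional case.

```python
import numpy as np

# Illustrative sketch of a Backmap-style reverse index on a 1-D signal.

signal = np.array([0.1, 0.2, 0.9, 0.8, 0.1, 0.0, 0.3, 0.2])
kernel = np.array([0.5, 1.0, 0.5])            # simple bump detector

# Forward pass: valid convolution, then global max-pool.
conv = np.array([signal[i:i + 3] @ kernel for i in range(len(signal) - 2)])
pooled_idx = int(conv.argmax())
pooled_val = conv[pooled_idx]

# Reverse index: the pooled output came from this input window, and each
# input's contribution is its value times the kernel weight.
window = slice(pooled_idx, pooled_idx + 3)
contributions = signal[window] * kernel
print(f"max activation {pooled_val:.2f} maps back to inputs "
      f"{window.start}..{window.stop - 1}")
print("per-input contribution:", contributions)
```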
-
Publication number: 20220012591
Abstract: Bias may be detected globally and locally by harnessing the white-box nature of eXplainable Artificial Intelligence, eXplainable Neural Net, Interpretable Neural Net, eXplainable Transducer Transformer, eXplainable Spiking Net, eXplainable Memory Net, and eXplainable Reinforcement Learning models. Methods for detecting bias, strength, and weakness of data sets and the resulting models may be described. A method may implement global bias detection which utilizes the coefficients of the model to identify, minimize, and/or correct potential bias within a desired error tolerance. Another method makes use of local feature importance extracted from the rule-based model coefficients to locally identify bias. A third method aggregates the feature importance over the results/explanations of multiple samples. Bias may also be detected in multi-dimensional data such as images. A backmap reverse indexing mechanism may be implemented.
Type: Application
Filed: July 8, 2021
Publication date: January 13, 2022
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
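The first two methods reduce to short checks when the model is a linear white-box, as in this assumed sketch: a global scan of the coefficient on a protected feature, and a local per-sample importance x_i * w_i. The feature names, model, and tolerance are invented.

```python
import numpy as np

# Hypothetical sketch of global and local bias checks on a white-box model.

features = ["income", "tenure", "gender"]
coeffs = np.array([0.9, 0.4, 0.25])        # white-box (linear) model weights
PROTECTED, TOLERANCE = "gender", 0.1

# Global bias detection: does a protected feature carry weight beyond tolerance?
w = coeffs[features.index(PROTECTED)]
if abs(w) > TOLERANCE:
    print(f"global bias: {PROTECTED!r} coefficient {w:+.2f} exceeds {TOLERANCE}")

# Local bias detection: feature importance for one sample is x_i * w_i.
x = np.array([1.2, 0.5, 1.0])              # one individual's (scaled) features
importance = x * coeffs
ranked = sorted(zip(features, importance), key=lambda kv: -abs(kv[1]))
print("local importance:", [(f, round(float(v), 2)) for f, v in ranked])
```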
-
Publication number: 20210350211
Abstract: A method and system for a distributed artificial intelligence architecture may be shown and described. An embodiment may present an exemplary distributed explainable neural network (XNN) architecture, whereby multiple XNNs may be processed in parallel in order to increase performance. The distributed architecture may include a parallel execution step which may combine parallel XNNs into an aggregate model by calculating the average (or weighted average) from the parallel models. A distributed hybrid XNN/XAI architecture may include multiple independent models which can work independently without relying on the full distributed architecture. An exemplary architecture may be useful for large datasets where the training data cannot fit in the CPU/GPU memory of a single machine. The component XNNs can be standard plain XNNs or any XNN/XAI variants such as convolutional XNNs (CNN-XNNs), predictive XNNs (PR-XNNs), and the like, together with the white-box portions of grey-box models like INNs.
Type: Application
Filed: May 7, 2021
Publication date: November 11, 2021
Applicant: UMNAI Limited
Inventors: Angelo DALLI, Mauro PIRRONE
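The aggregation step has a simple shape, sketched below with linear models standing in for the component XNNs: fit one model per data shard in parallel, then take a weighted average of their parameters, weighting by shard size. All specifics are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the aggregation step: several models fitted on
# separate shards are combined by a weighted average of their parameters.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def fit_shard(n):
    """Fit one 'parallel' model on a shard of the (too-large) dataset."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n

shards = [fit_shard(n) for n in (100, 400, 250)]    # three parallel workers

# Weighted average of coefficients, weighting each model by its shard size.
weights = np.array([n for _, n in shards], dtype=float)
params = np.stack([w for w, _ in shards])
aggregate = (params * weights[:, None]).sum(axis=0) / weights.sum()
print("aggregate model coefficients:", np.round(aggregate, 3))
```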