Patents by Inventor Saeid Allahdadian

Saeid Allahdadian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230362180
    Abstract: Techniques for implementing a semi-supervised framework for purpose-oriented anomaly detection are provided. In one technique, a data item is inputted into an unsupervised anomaly detection model, which generates first output. Based on the first output, it is determined whether the data item represents an anomaly. In response to determining that the data item represents an anomaly, the data item is inputted into a supervised classification model, which generates second output that indicates whether the data item is unknown. In response to determining that the data item is unknown, a training instance is generated based on the data item. The supervised classification model is updated based on the training instance.
    Type: Application
    Filed: May 9, 2022
    Publication date: November 9, 2023
    Inventors: Milos Vasic, Saeid Allahdadian, Matteo Casserini, Felix Schmidt, Andrew Brownsword
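The two-stage routing described in this abstract can be sketched briefly. The snippet below is a minimal illustration under stated assumptions, not the patented implementation: it assumes scikit-learn's IsolationForest as the unsupervised detector and an incrementally updatable SGDClassifier as the supervised stage, with label 1 standing for "known anomaly" and 0 otherwise.

```python
# Minimal sketch of the two-stage semi-supervised flow, using scikit-learn
# estimators as stand-ins for the models named in the abstract.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))           # nominal data
known_anomalies = rng.normal(6.0, 0.5, size=(40, 4))    # previously labeled anomalies

# Stage 1: unsupervised detector decides whether an item is anomalous at all.
detector = IsolationForest(random_state=0).fit(normal)

# Stage 2: supervised classifier decides whether a flagged item is a known
# anomaly (label 1) or unknown (label 0). Here it is seeded with both classes.
classifier = SGDClassifier(random_state=0)
seed_X = np.vstack([known_anomalies, normal[:40]])
seed_y = np.array([1] * 40 + [0] * 40)
classifier.partial_fit(seed_X, seed_y, classes=np.array([0, 1]))

def process(item: np.ndarray) -> str:
    """Route one data item through the semi-supervised pipeline."""
    if detector.predict(item.reshape(1, -1))[0] != -1:
        return "normal"
    if classifier.predict(item.reshape(1, -1))[0] == 1:
        return "known anomaly"
    # Unknown anomaly: generate a training instance from the item and update
    # the supervised model so similar items are recognized in the future.
    classifier.partial_fit(item.reshape(1, -1), np.array([1]))
    return "unknown anomaly (classifier updated)"

print(process(rng.normal(6.0, 0.5, size=4)))   # likely a known anomaly
print(process(rng.normal(-8.0, 0.5, size=4)))  # likely an unknown anomaly
```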
  • Patent number: 11704386
    Abstract: Herein are feature extraction mechanisms that receive parsed log messages as inputs and transform them into numerical feature vectors for machine learning models (MLMs). In an embodiment, a computer extracts fields from a log message. Each field specifies a name, a text value, and a type. For each field, a field transformer for the field is dynamically selected based on the field's name and/or the field's type. The field transformer converts the field's text value into a value of the field's type. A feature encoder for the value of the field's type is dynamically selected based on the field's type and/or a range of the field's values that occur in a training corpus of an MLM. From the feature encoder, an encoding of the value of the field's type is stored into a feature vector. Based on the MLM and the feature vector, the log message is detected as anomalous.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: July 18, 2023
    Assignee: Oracle International Corporation
    Inventors: Amin Suzani, Saeid Allahdadian, Milos Vasic, Matteo Casserini, Hamed Ahmadi, Felix Schmidt, Andrew Brownsword, Nipun Agarwal
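A minimal sketch of the per-field transform-then-encode flow follows. The field names, type tags, transformer table, and encoders here are illustrative assumptions, not the mechanism claimed in the patent.

```python
# Hypothetical sketch: select a transformer per field (by name and type),
# convert text to a typed value, then select an encoder per type and build
# a numerical feature vector for a downstream MLM.
from datetime import datetime

# Dynamically selected field transformers: text value -> typed value.
TRANSFORMERS = {
    ("timestamp", "datetime"): lambda s: datetime.fromisoformat(s),
    ("status", "int"): int,
    ("latency_ms", "float"): float,
    ("level", "categorical"): str.upper,
}

# Dynamically selected feature encoders: typed value -> list of numbers.
ENCODERS = {
    "datetime": lambda v: [v.hour / 23.0, v.weekday() / 6.0],
    "int": lambda v: [float(v)],
    "float": lambda v: [v],
    "categorical": lambda v, vocab=("DEBUG", "INFO", "WARN", "ERROR"):
        [1.0 if v == w else 0.0 for w in vocab],
}

def featurize(parsed_log: dict) -> list:
    """Turn a parsed log message {name: (text_value, type)} into a feature vector."""
    vector = []
    for name, (text, ftype) in parsed_log.items():
        transform = TRANSFORMERS[(name, ftype)]   # chosen by field name and type
        encode = ENCODERS[ftype]                  # chosen by field type
        vector.extend(encode(transform(text)))
    return vector

message = {
    "timestamp": ("2023-07-18T14:05:00", "datetime"),
    "level": ("error", "categorical"),
    "status": ("500", "int"),
    "latency_ms": ("873.4", "float"),
}
print(featurize(message))  # numerical feature vector for the MLM
```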
  • Publication number: 20230043993
    Abstract: Herein are machine learning techniques that adjust reconstruction loss of a reconstructive model, such as a principal component analysis (PCA), based on importances of features. In an embodiment having a reconstructive model that more or less accurately reconstructs its input, a computer measures, for each feature, a respective importance that is based on the reconstructive model. For example, importance may be based on grading samples that the reconstructive model correctly or incorrectly inferenced. For each feature during production inferencing, a respective original loss from the reconstructive model measures a difference between a value of the feature in an input and a reconstructed value of the feature generated by the reconstructive model. For each feature, the respective importance of the feature is applied to the respective original loss to generate a respective weighted loss, which compensates for concept drift.
    Type: Application
    Filed: August 4, 2021
    Publication date: February 9, 2023
    Inventors: Saeid Allahdadian, Yuting Sun, Navaneeth Jamadagni, Felix Schmidt, Maria Vlachopoulou
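The importance-weighted loss can be sketched with a plain NumPy PCA reconstructor. The importance vector below is a placeholder assumption; the abstract derives importances from grading samples that the reconstructive model inferenced correctly or incorrectly.

```python
# Sketch of importance-weighted reconstruction loss with a rank-2 PCA model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))

# Fit a rank-2 PCA reconstructive model.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:2]                      # top-2 principal directions

def reconstruct(x: np.ndarray) -> np.ndarray:
    return mean + (x - mean) @ components.T @ components

# Hypothetical per-feature importances (e.g. derived from graded inferences).
importance = np.array([1.0, 0.2, 1.5, 0.1, 1.0])

def weighted_loss(x: np.ndarray) -> float:
    original_loss = (x - reconstruct(x)) ** 2          # per-feature reconstruction error
    return float(np.sum(importance * original_loss))   # importance-weighted total

print(weighted_loss(X[0]))
print(weighted_loss(X[0] + np.array([0, 5, 0, 0, 0])))  # drift on a low-importance feature
```

Because the drifted feature carries low importance, its weighted loss grows slowly, which is how the weighting compensates for concept drift on features that matter less.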
  • Publication number: 20230024884
    Abstract: Herein are machine learning techniques that adjust reconstruction loss of a reconstructive model such as an autoencoder based on importances of values of features. In an embodiment, before, during, or after training of the reconstructive model that more or less accurately reconstructs its input, a computer measures, for each distinct value of each feature, a respective importance that is not based on the reconstructive model. For example, importance may be based solely on a training corpus. For each feature during or after training, a respective original loss from the reconstructive model measures a difference between a value of the feature in an input and a reconstructed value of the feature generated by the reconstructive model. For each feature, the respective importance of the input value of the feature is applied to the respective original loss to generate a respective weighted loss. The weighted losses of the features of the input are collectively detected as anomalous or non-anomalous.
    Type: Application
    Filed: July 20, 2021
    Publication date: January 26, 2023
    Inventors: Matteo Casserini, Saeid Allahdadian, Felix Schmidt, Andrew Brownsword
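A hypothetical sketch of value-level weighting follows, with importance computed purely from a training corpus (here, inverse value frequency) and applied to a stand-in reconstruction loss; the autoencoder is replaced by a trivial reconstructor to keep the example self-contained.

```python
# Sketch: per-value importance derived solely from corpus statistics,
# applied to a per-feature reconstruction loss.
from collections import Counter

# Categorical training corpus for status-like features.
corpus = ["200"] * 950 + ["404"] * 40 + ["500"] * 10
counts = Counter(corpus)
total = sum(counts.values())

# Importance of a value = inverse of its relative frequency in the corpus.
importance = {value: total / count for value, count in counts.items()}

def weighted_losses(values, reconstructed):
    """Apply the importance of each input value to its original loss."""
    out = []
    for v, r in zip(values, reconstructed):
        original_loss = 0.0 if v == r else 1.0      # stand-in reconstruction error
        out.append(importance[v] * original_loss)    # per-value weighted loss
    return out

row = ["500", "200"]            # input values of two status-like features
recon = ["200", "200"]          # imperfect reconstruction
losses = weighted_losses(row, recon)
print(losses, "-> anomalous" if sum(losses) > 10 else "-> non-anomalous")
```

Rare values such as "500" carry high importance, so misreconstructing them dominates the collective decision, while misreconstructing common values barely moves it.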
  • Publication number: 20220318684
    Abstract: Techniques are provided for sparse ensembling of unsupervised machine learning models. In an embodiment, the proposed architecture is composed of multiple unsupervised machine learning models that each produce a score as output and a gating network that analyzes the inputs and outputs of the unsupervised machine learning models to select an optimal ensemble of unsupervised machine learning models. The gating network is trained to choose a minimal number of the multiple unsupervised machine learning models whose scores are combined to create a final score that matches or closely resembles a final score that is computed using all the scores of the multiple unsupervised machine learning models.
    Type: Application
    Filed: April 2, 2021
    Publication date: October 6, 2022
    Inventors: Saeid Allahdadian, Amin Suzani, Milos Vasic, Matteo Casserini, Andrew Brownsword, Felix Schmidt, Nipun Agarwal
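The sparse-ensembling idea can be sketched as a set of scorers plus a gate. The gate below is a toy heuristic standing in for the trained gating network described in the abstract, and the distance-based scorers are assumptions for illustration only.

```python
# Sketch: several unsupervised scorers, plus a gate that keeps a minimal
# subset whose combined score approximates the full ensemble's score.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))

# Three unsupervised anomaly scorers (distance-based stand-ins).
def scorer_mean(x):   return float(np.linalg.norm(x - X.mean(axis=0)))
def scorer_median(x): return float(np.linalg.norm(x - np.median(X, axis=0)))
def scorer_max(x):    return float(np.max(np.abs(x)))

SCORERS = [scorer_mean, scorer_median, scorer_max]

def gate(scores: np.ndarray, keep: int = 2) -> np.ndarray:
    """Toy gate: keep the `keep` scorers closest to the ensemble mean,
    so the sparse combination tracks the full-ensemble score."""
    full = scores.mean()
    chosen = np.argsort(np.abs(scores - full))[:keep]
    weights = np.zeros_like(scores)
    weights[chosen] = 1.0 / keep
    return weights

x = rng.normal(size=3) * 4                 # a suspicious point
scores = np.array([s(x) for s in SCORERS])
weights = gate(scores)
print("full ensemble score:", scores.mean())
print("sparse gated score :", float(weights @ scores),
      "using", int((weights > 0).sum()), "models")
```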
  • Publication number: 20220292304
    Abstract: Herein are feature extraction mechanisms that receive parsed log messages as inputs and transform them into numerical feature vectors for machine learning models (MLMs). In an embodiment, a computer extracts fields from a log message. Each field specifies a name, a text value, and a type. For each field, a field transformer for the field is dynamically selected based on the field's name and/or the field's type. The field transformer converts the field's text value into a value of the field's type. A feature encoder for the value of the field's type is dynamically selected based on the field's type and/or a range of the field's values that occur in a training corpus of an MLM. From the feature encoder, an encoding of the value of the field's type is stored into a feature vector. Based on the MLM and the feature vector, the log message is detected as anomalous or not.
    Type: Application
    Filed: March 12, 2021
    Publication date: September 15, 2022
    Inventors: Amin Suzani, Saeid Allahdadian, Milos Vasic, Matteo Casserini, Hamed Ahmadi, Felix Schmidt, Andrew Brownsword, Nipun Agarwal
  • Publication number: 20220188694
    Abstract: Approaches herein relate to model decay of an anomaly detector due to concept drift. Herein are machine learning techniques for dynamically self-tuning an anomaly score threshold. In an embodiment in a production environment, a computer receives an item in a stream of items. A machine learning (ML) model hosted by the computer calculates an anomaly score for the item. Whether the item is anomalous is decided based on the anomaly score and an adaptive anomaly threshold that fluctuates dynamically. A moving standard deviation of anomaly scores is adjusted based on a moving average of anomaly scores. The moving average of anomaly scores is then adjusted based on the anomaly score. The adaptive anomaly threshold is then adjusted based on the moving average of anomaly scores and the moving standard deviation of anomaly scores.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Amin Suzani, Matteo Casserini, Milos Vasic, Saeid Allahdadian, Andrew Brownsword, Hamed Ahmadi, Felix Schmidt, Nipun Agarwal
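The threshold adaptation can be sketched with exponential moving statistics, following the update order given in the abstract. The smoothing factor ALPHA, the multiplier K, and the simulated score stream below are assumptions, not values from the application.

```python
# Sketch of a self-tuning anomaly threshold: adjust the moving variance from
# the current moving average, then the moving average from the new score,
# then the adaptive threshold from both.
import random

ALPHA = 0.05   # smoothing factor for the moving statistics (assumed)
K = 3.0        # threshold = moving average + K * moving standard deviation (assumed)

mean, var = 0.0, 1.0
threshold = mean + K * var ** 0.5

random.seed(0)
for step in range(1, 501):
    score = random.gauss(0.0, 1.0)
    if step == 250:
        score += 8.0                                  # inject one anomaly
    is_anomaly = score > threshold
    old_threshold = threshold
    # Adjust the moving standard deviation (via variance) using the current moving average...
    var = (1 - ALPHA) * var + ALPHA * (score - mean) ** 2
    # ...then the moving average using the new score...
    mean = (1 - ALPHA) * mean + ALPHA * score
    # ...then the adaptive threshold from both statistics.
    threshold = mean + K * var ** 0.5
    if is_anomaly:
        print(f"step {step}: score {score:.2f} exceeded threshold {old_threshold:.2f}")
```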
  • Publication number: 20220188410
    Abstract: Approaches herein relate to reconstructive models such as an autoencoder for anomaly detection. Herein are machine learning techniques that detect and suppress any feature that causes model decay by concept drift. In an embodiment in a production environment, a computer initializes an unsuppressed subset of features with a plurality of features that an already-trained reconstructive model can process. A respective reconstruction error of each feature of the unsuppressed subset of features is calculated. The computer detects that a respective moving average based on the reconstruction error of a particular feature of the unsuppressed subset of features exceeds a respective feature suppression threshold of the particular feature, which causes removal of the particular feature from the unsuppressed subset of features.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Saeid Allahdadian, Andrew Brownsword, Milos Vasic, Matteo Casserini, Amin Suzani, Hamed Ahmadi, Felix Schmidt, Nipun Agarwal
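A compact sketch of the suppression loop follows; the zero baseline stands in for the trained reconstructive model, and the suppression thresholds and smoothing factor are arbitrary assumptions.

```python
# Sketch: track a moving average of each feature's reconstruction error and
# remove the feature from the unsuppressed subset once the moving average
# exceeds that feature's suppression threshold.
import numpy as np

rng = np.random.default_rng(3)
FEATURES = ["cpu", "mem", "io", "net"]
unsuppressed = set(FEATURES)

baseline = np.zeros(len(FEATURES))            # stand-in reconstruction
suppression_threshold = {f: 2.0 for f in FEATURES}
moving_error = {f: 0.0 for f in FEATURES}
ALPHA = 0.1

for step in range(200):
    x = rng.normal(0.0, 1.0, size=len(FEATURES))
    if step > 100:
        x[3] += 5.0                            # concept drift on the "net" feature
    error = np.abs(x - baseline)               # per-feature reconstruction error
    for i, f in enumerate(FEATURES):
        if f not in unsuppressed:
            continue
        moving_error[f] = (1 - ALPHA) * moving_error[f] + ALPHA * error[i]
        if moving_error[f] > suppression_threshold[f]:
            unsuppressed.remove(f)
            print(f"step {step}: suppressed feature '{f}'")

print("unsuppressed subset:", sorted(unsuppressed))
```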
  • Publication number: 20220156578
    Abstract: Approaches herein relate to reconstructive models such as an autoencoder for anomaly detection. Herein are machine learning techniques that measure inference confidence based on reconstruction error trends. In an embodiment, a computer hosts a reconstructive model that encodes and decodes features. Based on that decoding, the following are automatically calculated: a respective reconstruction error of each feature, a respective moving average of reconstruction errors of each feature, an average of the moving averages of the reconstruction errors of all features, a standard deviation of the moving averages of the reconstruction errors of all features, and a confidence of decoding the features that is based on a ratio of the average of the moving averages of the reconstruction errors to the standard deviation of the moving averages of the reconstruction errors. The computer detects and indicates that a threshold exceeds the confidence of decoding, which may trigger automatic reactions.
    Type: Application
    Filed: November 16, 2020
    Publication date: May 19, 2022
    Inventors: Saeid Allahdadian, Matteo Casserini, Andrew Brownsword, Amin Suzani, Milos Vasic, Felix Schmidt, Nipun Agarwal
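The confidence signal can be sketched directly from the recipe in the abstract. The zero-output reconstructor, the injected drift, and the confidence threshold below are assumptions for illustration only.

```python
# Sketch: per-feature moving averages of reconstruction error, the mean and
# standard deviation of those moving averages, and confidence as their ratio.
import numpy as np

rng = np.random.default_rng(4)
N_FEATURES = 6
ALPHA = 0.1
CONFIDENCE_THRESHOLD = 2.0

moving_err = np.full(N_FEATURES, 0.5)         # per-feature moving averages of error

for step in range(300):
    x = rng.normal(size=N_FEATURES)
    if step > 200:
        x[:3] += 4.0                           # drift: half the features reconstruct poorly
    reconstruction = np.zeros(N_FEATURES)      # stand-in for the model's decode output
    err = np.abs(x - reconstruction)           # per-feature reconstruction error
    moving_err = (1 - ALPHA) * moving_err + ALPHA * err

    avg = moving_err.mean()                    # average of the moving averages
    std = moving_err.std()                     # standard deviation of the moving averages
    confidence = avg / std if std > 0 else float("inf")

    if CONFIDENCE_THRESHOLD > confidence:      # the threshold exceeds the decoding confidence
        print(f"step {step}: decoding confidence dropped to {confidence:.2f}")
        break
```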
  • Publication number: 20220108181
    Abstract: A multilayer perceptron herein contains an already-trained combined sequence of residual blocks that contains a semantic sequence of residual blocks and a contextual sequence of residual blocks. The semantic sequence of residual blocks contains a semantic sequence of layers of an autoencoder. The contextual sequence of residual blocks contains a contextual sequence of layers of a recurrent neural network. Each residual block of the combined sequence of residual blocks is used based on a respective survival probability. By the autoencoder and based on the using each residual block of the semantic sequence, a previous entry of a log is semantically encoded. By the recurrent neural network and based on the using each residual block of the contextual sequence, a next entry of the log is predicted. In an embodiment during training, survival probabilities are hyperparameters that are learned and used to probabilistically skip residual blocks such that the multilayer perceptron has stochastic depth.
    Type: Application
    Filed: October 7, 2020
    Publication date: April 7, 2022
    Inventors: Hamed Ahmadi, Saeid Allahdadian, Matteo Casserini, Milos Vasic, Amin Suzani, Felix Schmidt, Andrew Brownsword, Nipun Agarwal
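Only the stochastic-depth mechanism from this abstract is sketched below. The random linear blocks stand in for the autoencoder and recurrent layers, and the survival schedule is an assumed hyperparameter rather than one learned during training.

```python
# Sketch of stochastic depth: each residual block runs with its survival
# probability during training and is always applied (scaled) at inference.
import numpy as np

rng = np.random.default_rng(5)
DIM = 8
N_BLOCKS = 6
weights = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(N_BLOCKS)]
survival = np.linspace(1.0, 0.5, N_BLOCKS)    # per-block survival probabilities (assumed)

def block(x: np.ndarray, i: int) -> np.ndarray:
    return np.tanh(x @ weights[i])             # the block's residual branch

def forward(x: np.ndarray, training: bool) -> np.ndarray:
    for i in range(N_BLOCKS):
        if training:
            # Probabilistically skip the block: identity with probability 1 - p_i.
            if rng.random() < survival[i]:
                x = x + block(x, i)
        else:
            # At inference, keep every block but scale its output by p_i.
            x = x + survival[i] * block(x, i)
    return x

x = rng.normal(size=DIM)
print("training pass (stochastic depth):", forward(x, training=True)[:3])
print("inference pass (deterministic)  :", forward(x, training=False)[:3])
```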