Patents by Inventor Ahmed Ataallah Ataallah Abobakr

Ahmed Ataallah Ataallah Abobakr has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240095584
    Abstract: Techniques are disclosed herein for objective function optimization in target based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain, calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. The method further includes, for each trial of a hyperparameter tuning process: training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain, and, once the machine learning model has reached convergence, outputting at least one machine learning model.
    Type: Application
    Filed: May 15, 2023
    Publication date: March 21, 2024
    Applicant: Oracle International Corporation
    Inventors: Ying Xu, Vladislav Blinov, Ahmed Ataallah Ataallah Abobakr, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Poorya Zaremoodi, Umanga Bista
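The per-domain scoring in the abstract above could be sketched roughly as follows. This is a minimal illustration, not the patented method: the domain names, the weighted aggregation, and the simple accuracy-based score are all assumptions introduced for the example.

```python
# Hypothetical sketch of a domain-aware hyperparameter objective: each
# domain contributes a score based on how many evaluation instances the
# model predicts correctly within that domain.

def domain_score(predictions, labels):
    """Fraction of instances predicted correctly within one domain."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def objective(per_domain_results, domain_weights):
    """Weight-normalized sum of per-domain scores (assumed aggregation)."""
    total = 0.0
    for domain, (preds, labels) in per_domain_results.items():
        total += domain_weights[domain] * domain_score(preds, labels)
    return total / sum(domain_weights.values())

results = {
    "finance": ([1, 0, 1, 1], [1, 0, 0, 1]),  # 3 of 4 correct
    "retail":  ([0, 0, 1],    [0, 1, 1]),     # 2 of 3 correct
}
weights = {"finance": 2.0, "retail": 1.0}
score = objective(results, weights)
```

A tuner would then select the next set of hyperparameter values to maximize this objective across trials.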
  • Publication number: 20240086767
    Abstract: Techniques are disclosed herein for continuous hyperparameter tuning with automatic domain weight adjustment based on periodic performance checkpoints. In one aspect, a method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that is defined at least in part on a plurality of domains of a search space that is associated with the machine learning algorithm. The method further includes, for each trial of a hyperparameter tuning process: running the machine learning algorithm in different domains using the set of hyperparameter values, periodically checking a performance of the machine learning algorithm in the different domains based on the hyperparameter objective function, and continuing hyperparameter tuning with a new set of hyperparameter values after automatically adjusting the domain weights according to a regression status of the different domains.
    Type: Application
    Filed: April 3, 2023
    Publication date: March 14, 2024
    Applicant: Oracle International Corporation
    Inventors: Ying Xu, Vladislav Blinov, Ahmed Ataallah Ataallah Abobakr, Mark Edward Johnson, Thanh Long Duong, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Xin Xu, Elias Luqman Jalaluddin, Umanga Bista
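The weight-adjustment step this abstract describes could be sketched as follows. This is an illustrative guess at the mechanism, assuming a simple "boost regressed domains" rule; the boost factor and the checkpoint scores are invented for the example.

```python
# Illustrative sketch of automatic domain weight adjustment: a domain
# whose checkpoint score regressed relative to the previous checkpoint
# gets its weight boosted, so later trials prioritize recovering it.

def adjust_weights(weights, prev_scores, curr_scores, boost=1.5):
    """Return new weights, boosting any domain whose score regressed."""
    new_weights = {}
    for domain, w in weights.items():
        regressed = curr_scores[domain] < prev_scores[domain]
        new_weights[domain] = w * boost if regressed else w
    return new_weights

weights = {"finance": 1.0, "retail": 1.0}
prev = {"finance": 0.80, "retail": 0.70}
curr = {"finance": 0.78, "retail": 0.75}   # finance regressed
new_weights = adjust_weights(weights, prev, curr)
```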
  • Publication number: 20240028963
    Abstract: An augmentation and feature caching subsystem is described for training AI/ML models. In one particular aspect, a method is provided that includes receiving data comprising training examples, one or more augmentation configuration hyperparameters and one or more feature extraction configuration hyperparameters; generating a first key based on one of the training examples and the one or more augmentation configuration hyperparameters; searching a first key-value storage based on the first key; obtaining one or more augmentations based on the search of the first key-value storage; applying the obtained one or more augmentations to the training examples to result in augmented training examples; generating a second key based on one of the augmented training examples and the one or more feature extraction configuration hyperparameters; searching a second key-value storage based on the second key; and obtaining one or more features based on the search of the second key-value storage.
    Type: Application
    Filed: July 11, 2023
    Publication date: January 25, 2024
    Applicant: Oracle International Corporation
    Inventors: Vladislav Blinov, Vishal Vishnoi, Thanh Long Duong, Mark Edward Johnson, Xin Xu, Elias Luqman Jalaluddin, Ying Xu, Ahmed Ataallah Ataallah Abobakr, Umanga Bista, Thanh Tien Vu
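The two-stage caching flow in this abstract maps naturally onto two key-value stores. The sketch below is an assumption-laden illustration: the hash-based key scheme and the toy augment/extract functions stand in for whatever the actual subsystem uses.

```python
import hashlib
import json

# Sketch of the two-stage cache: the first store caches augmentations
# keyed by (training example, augmentation config); the second caches
# extracted features keyed by (augmented example, feature config).

def make_key(example, config):
    """Deterministic key from an example and a config dict (assumed scheme)."""
    payload = json.dumps([example, config], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

augmentation_cache = {}   # first key-value storage
feature_cache = {}        # second key-value storage

def get_augmented(example, aug_config):
    key = make_key(example, aug_config)
    if key not in augmentation_cache:
        # toy "augmentation": append a configured suffix
        augmentation_cache[key] = example + aug_config["suffix"]
    return augmentation_cache[key]

def get_features(augmented, feat_config):
    key = make_key(augmented, feat_config)
    if key not in feature_cache:
        # toy "feature extraction": token lengths, truncated
        feature_cache[key] = [len(t) for t in augmented.split()][: feat_config["max_tokens"]]
    return feature_cache[key]

aug = get_augmented("book a flight", {"suffix": " please"})
feats = get_features(aug, {"max_tokens": 4})
```

Repeated trials with the same configs then hit both caches instead of recomputing augmentations and features.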
  • Publication number: 20230419127
    Abstract: Novel techniques are described for negative entity-aware augmentation using a two-stage augmentation to improve the stability of the model to entity value changes for intent prediction. In some embodiments, a method comprises accessing a first set of training data for an intent prediction model, the first set of training data comprising utterances and intent labels; applying one or more negative entity-aware data augmentation techniques to the first set of training data, depending on the tuning requirements for hyper-parameters, to result in a second set of training data, where the one or more negative entity-aware data augmentation techniques comprise Keyword Augmentation Technique (“KAT”) plus entity without context technique and KAT plus entity in random context as OOD technique; combining the first set of training data and the second set of training data to generate expanded training data; and training the intent prediction model using the expanded training data.
    Type: Application
    Filed: February 1, 2023
    Publication date: December 28, 2023
    Applicant: Oracle International Corporation
    Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
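The two negative augmentation ideas named in the abstract, an entity value without any context and an entity value dropped into a random unrelated context, both labeled out-of-domain ("OOD"), could look roughly like this. The utterances, entity values, and label name are invented for the sketch.

```python
import random

# Rough illustration of negative entity-aware augmentation: bare entity
# values, and entity values in random contexts, are added as OOD
# examples so the intent model stops keying on entity values alone.

def entity_without_context(entities):
    """Each bare entity value becomes an OOD-labeled example."""
    return [(value, "OOD") for value in entities]

def entity_in_random_context(entities, contexts, rng):
    """Each entity value is slotted into a random unrelated context."""
    examples = []
    for value in entities:
        ctx = rng.choice(contexts)
        examples.append((ctx.format(entity=value), "OOD"))
    return examples

entities = ["Sydney", "Cairo"]
contexts = ["the weather near {entity} today", "a story about {entity}"]
rng = random.Random(0)  # seeded for reproducibility
negatives = entity_without_context(entities) + entity_in_random_context(entities, contexts, rng)
```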
  • Publication number: 20230419040
    Abstract: Novel techniques are described for data augmentation using a two-stage entity-aware augmentation to improve model robustness to entity value changes for intent prediction.
    Type: Application
    Filed: February 1, 2023
    Publication date: December 28, 2023
    Applicant: Oracle International Corporation
    Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
  • Publication number: 20230419052
    Abstract: Novel techniques are described for positive entity-aware augmentation using a two-stage augmentation to improve the stability of the model to entity value changes for intent prediction. In one particular aspect, a method is provided that includes accessing a first set of training data for an intent prediction model, the first set of training data comprising utterances and intent labels; applying one or more positive data augmentation techniques to the first set of training data, depending on the tuning requirements for hyper-parameters, to result in a second set of training data, where the positive data augmentation techniques comprise Entity-Aware (“EA”) technique and a two-stage augmentation technique; combining the first set of training data and the second set of training data to generate expanded training data; and training the intent prediction model using the expanded training data.
    Type: Application
    Filed: February 1, 2023
    Publication date: December 28, 2023
    Applicant: Oracle International Corporation
    Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
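A minimal sketch of the entity-aware positive augmentation idea: swap the entity value in a labeled utterance for other values of the same entity type, keeping the intent label unchanged. The utterance, entity values, and intent label below are assumptions for illustration, not details from the patent.

```python
# Entity-aware positive augmentation (illustrative): replacing an
# entity value with same-type alternatives yields new training
# examples that preserve the original intent label.

def entity_swap(utterance, label, entity_value, alternatives):
    """Generate label-preserving variants by swapping the entity value."""
    augmented = []
    for alt in alternatives:
        if alt != entity_value:
            augmented.append((utterance.replace(entity_value, alt), label))
    return augmented

city_values = ["Sydney", "Cairo", "Melbourne"]
expanded = entity_swap("book a flight to Sydney", "book_flight", "Sydney", city_values)
```

The expanded examples would then be combined with the original training data, as the abstract describes.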
  • Publication number: 20230376700
    Abstract: Techniques are provided for generating training data to facilitate fine-tuning embedding models. Training data including anchor utterances is obtained. Positive utterances and negative utterances are generated from the anchor utterances. Tuples including the anchor utterances, the positive utterances, and the negative utterances are formed. Embeddings for the tuples are generated and a pre-trained embedding model is fine-tuned based on the embeddings. The fine-tuned model can be deployed to a system.
    Type: Application
    Filed: May 9, 2023
    Publication date: November 23, 2023
    Applicant: Oracle International Corporation
    Inventors: Umanga Bista, Vladislav Blinov, Mark Edward Johnson, Ahmed Ataallah Ataallah Abobakr, Thanh Long Duong, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Elias Luqman Jalaluddin, Xin Xu, Shivashankar Subramanian
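The tuple-forming step above pairs each anchor utterance with a generated positive and negative; fine-tuning then typically optimizes a triplet-style objective over their embeddings. The sketch below uses a standard triplet margin loss with toy 2-d embeddings; the loss choice and margin value are assumptions, not details from the patent.

```python
import math

# Form (anchor, positive, negative) tuples and score them with a
# standard triplet margin loss over their embeddings.

def form_tuples(anchors, positives, negatives):
    return list(zip(anchors, positives, negatives))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero when the negative is at least `margin` farther than the positive."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

tuples = form_tuples(
    anchors=[(0.0, 0.0)],
    positives=[(0.1, 0.0)],
    negatives=[(2.0, 0.0)],
)
loss = sum(triplet_loss(a, p, n) for a, p, n in tuples)
```

In practice the embeddings would come from the pre-trained model being fine-tuned, and the loss would drive its gradient updates.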
  • Publication number: 20230153687
    Abstract: Techniques for named entity bias detection and mitigation for sentence sentiment analysis. In one particular aspect, a method is provided that includes obtaining a training set of labeled examples for training a machine learning model to classify sentiment, preparing a list of named entities using one or more data sources, for each example in the training set of labeled examples with a named entity, replacing the named entity with a corresponding entity type tag to generate a labeled template data set, executing a sampling process for each entity type t within the labeled template data set to generate an augmented invariance data set comprising one or more invariance groups having labeled examples for each entity type t, and training the machine learning model using labeled examples from the augmented invariance data set.
    Type: Application
    Filed: November 10, 2022
    Publication date: May 18, 2023
    Applicant: Oracle International Corporation
    Inventors: Duy Vu, Varsha Kuppur Rajendra, Shivashankar Subramanian, Ahmed Ataallah Ataallah Abobakr, Thanh Long Duong, Mark Edward Johnson
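The templating and sampling steps in this abstract could be sketched as follows: each named entity is replaced by its type tag, and the template is later re-filled with sampled entity values to form an invariance group that shares one sentiment label. The example sentence, tag format, and entity values are invented for the sketch.

```python
# Illustrative sketch of entity-bias mitigation: templates with type
# tags, re-filled with different entity values, form an "invariance
# group" — the label must not change with the entity value.

def make_template(text, entity, type_tag):
    """Replace a named entity with its entity type tag."""
    return text.replace(entity, type_tag)

def invariance_group(template, type_tag, values, label):
    """Fill the template with each sampled value, keeping one label."""
    return [(template.replace(type_tag, v), label) for v in values]

template = make_template("I love Acme products", "Acme", "[ORG]")
group = invariance_group(template, "[ORG]", ["Acme", "Globex"], "positive")
```

Training on such groups penalizes a model whose sentiment prediction flips merely because the entity changed.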
  • Publication number: 20230100508
    Abstract: Techniques disclosed herein relate generally to text classification and include techniques for fusing word embeddings with word scores for text classification. In one particular aspect, a method for text classification is provided that includes obtaining an embedding vector for a textual unit, based on a plurality of word embedding vectors and a plurality of word scores. The plurality of word embedding vectors includes a corresponding word embedding vector for each of a plurality of words of the textual unit, and the plurality of word scores includes a corresponding word score for each of the plurality of words of the textual unit. The method also includes passing the embedding vector for the textual unit through at least one feed-forward layer to obtain a final layer output, and performing a classification on the final layer output.
    Type: Application
    Filed: September 29, 2022
    Publication date: March 30, 2023
    Applicant: Oracle International Corporation
    Inventors: Ahmed Ataallah Ataallah Abobakr, Mark Edward Johnson, Thanh Long Duong, Vladislav Blinov, Yu-Heng Hong, Cong Duy Vu Hoang, Duy Vu
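One plausible reading of "fusing word embeddings with word scores" is a score-weighted average of the word vectors, yielding the textual unit's embedding, which is then passed through a feed-forward layer. The toy vectors, scores, and layer weights below are assumptions; the single linear+ReLU layer stands in for the feed-forward stack.

```python
# Sketch of score-weighted fusion of word embeddings followed by a
# feed-forward layer, as one possible instance of the abstract's idea.

def fuse(embeddings, scores):
    """Score-weighted average of word vectors -> textual-unit vector."""
    total = sum(scores)
    dim = len(embeddings[0])
    return [sum(s * e[i] for s, e in zip(scores, embeddings)) / total
            for i in range(dim)]

def feed_forward(vec, weights, bias):
    """Single linear layer with ReLU (stand-in for the feed-forward stack)."""
    out = [sum(w * x for w, x in zip(row, vec)) + b
           for row, b in zip(weights, bias)]
    return [max(0.0, o) for o in out]

word_vecs = [[1.0, 0.0], [0.0, 1.0]]
word_scores = [3.0, 1.0]   # e.g. TF-IDF-like importance scores
unit_vec = fuse(word_vecs, word_scores)
logits = feed_forward(unit_vec, weights=[[1.0, -1.0]], bias=[0.0])
```

A classifier head would then operate on the final layer output, as the abstract states.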
  • Publication number: 20230095673
    Abstract: Techniques for extracting key information from a document using machine-learning models in a chatbot system are disclosed herein. In one particular aspect, a method is provided that includes receiving a set of data, which includes key fields, within a document at a data processing system that includes a table detection module, a key information extraction module, and a table extraction module. Text information and corresponding location data are extracted via optical character recognition. The table detection module detects whether one or more tables are present in the document and, if applicable, a location of each of the tables. The key information extraction module extracts text from the key fields. The table extraction module extracts each of the tables based on input from the optical character recognition and the table detection module. Extraction results, which include the text from the key fields and each of the tables, can be output.
    Type: Application
    Filed: August 15, 2022
    Publication date: March 30, 2023
    Applicant: Oracle International Corporation
    Inventors: Yakupitiyage Don Thanuja Samodhye Dharmasiri, Xu Zhong, Ahmed Ataallah Ataallah Abobakr, Hongtao Yang, Budhaditya Saha, Shaoke Xu, Shashi Prasad Suravarapu, Mark Edward Johnson, Thanh Long Duong
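The module wiring this abstract describes, OCR feeding both a table detector and the key-field and table extractors, can be sketched at a high level. Every module body below is a stand-in stub invented for illustration; the patented models are not shown.

```python
# High-level sketch of the extraction pipeline: OCR produces text plus
# location data, the table detector flags table regions, and the
# key-field and table extractors consume those outputs.

def run_ocr(document):
    """Stub OCR: text tokens with stand-in location data."""
    return [{"text": t, "box": i} for i, t in enumerate(document["tokens"])]

def detect_tables(document):
    """Stub table detection: return any known table regions."""
    return document.get("table_regions", [])

def extract_key_fields(ocr_output, key_fields):
    """Stub key-information extraction: match 'Field:value' tokens."""
    found = {}
    for item in ocr_output:
        for field in key_fields:
            if item["text"].startswith(field + ":"):
                found[field] = item["text"].split(":", 1)[1]
    return found

def extract_tables(ocr_output, table_regions):
    """Stub table extraction: one record per detected region."""
    return [{"region": r, "cells": len(ocr_output)} for r in table_regions]

doc = {"tokens": ["Invoice#:123", "Total:50"], "table_regions": [(0, 0, 10, 10)]}
ocr = run_ocr(doc)
results = {
    "key_fields": extract_key_fields(ocr, ["Invoice#", "Total"]),
    "tables": extract_tables(ocr, detect_tables(doc)),
}
```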