Patents by Inventor Umanga Bista
Umanga Bista has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240095584
Abstract: Techniques are disclosed herein for objective function optimization in target-based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain, calculated based on the number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. Each trial of the hyperparameter tuning process includes training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain, and, once the machine learning model has reached convergence, outputting at least one machine learning model.
Type: Application
Filed: May 15, 2023
Publication date: March 21, 2024
Applicant: Oracle International Corporation
Inventors: Ying Xu, Vladislav Blinov, Ahmed Ataallah Ataallah Abobakr, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Poorya Zaremoodi, Umanga Bista
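The abstract does not give a concrete formula for the objective. A minimal sketch, assuming the domain score is the fraction of correctly predicted evaluation instances and the objective is a weighted sum over domains (the weights, function names, and toy data below are illustrative assumptions, not from the patent):

```python
def domain_score(predictions, labels):
    """Fraction of evaluation instances the model predicts correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def objective(per_domain_results, weights):
    """Weighted sum of per-domain scores; a tuner would maximize this per trial."""
    return sum(weights[d] * domain_score(preds, labels)
               for d, (preds, labels) in per_domain_results.items())

results = {
    "banking": ([1, 0, 1, 1], [1, 0, 0, 1]),  # 3/4 correct
    "retail":  ([0, 1], [0, 1]),              # 2/2 correct
}
score = objective(results, {"banking": 0.5, "retail": 0.5})
```

A tuner (e.g. random or Bayesian search) would pick the hyperparameter set whose trial maximizes this value.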
-
Publication number: 20240086767
Abstract: Techniques are disclosed herein for continuous hyperparameter tuning with automatic domain weight adjustment based on periodic performance checkpoints. In one aspect, a method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that is defined at least in part on a plurality of domains of a search space associated with the machine learning algorithm. Each trial of the hyperparameter tuning process includes running the machine learning algorithm in different domains using the set of hyperparameter values; periodically checking the performance of the machine learning algorithm in the different domains based on the hyperparameter objective function; and continuing hyperparameter tuning with a new set of hyperparameter values after automatically adjusting the domain weights according to the regression status of the different domains.
Type: Application
Filed: April 3, 2023
Publication date: March 14, 2024
Applicant: Oracle International Corporation
Inventors: Ying Xu, Vladislav Blinov, Ahmed Ataallah Ataallah Abobakr, Mark Edward Johnson, Thanh Long Duong, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Xin Xu, Elias Luqman Jalaluddin, Umanga Bista
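The abstract does not specify the adjustment rule. One plausible sketch, assuming "regression status" means a domain's score dropped since the last checkpoint and that regressed domains get their weight boosted and the weights renormalized (the boost factor and all names are illustrative assumptions):

```python
def adjust_weights(weights, current, checkpoint, boost=1.5):
    """Increase the weight of any domain whose score regressed since the
    last performance checkpoint, then renormalize so weights sum to 1."""
    raw = {d: w * (boost if current[d] < checkpoint[d] else 1.0)
           for d, w in weights.items()}
    total = sum(raw.values())
    return {d: w / total for d, w in raw.items()}

w = adjust_weights({"a": 0.5, "b": 0.5},
                   current={"a": 0.70, "b": 0.90},
                   checkpoint={"a": 0.80, "b": 0.85})
# domain "a" regressed, so its relative weight grows
```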
-
Publication number: 20240028963
Abstract: An augmentation and feature caching subsystem is described for training AI/ML models. In one particular aspect, a method is provided that includes receiving data comprising training examples, one or more augmentation configuration hyperparameters, and one or more feature extraction configuration hyperparameters; generating a first key based on one of the training examples and the one or more augmentation configuration hyperparameters; searching a first key-value storage based on the first key; obtaining one or more augmentations based on the search of the first key-value storage; applying the obtained one or more augmentations to the training examples to result in augmented training examples; generating a second key based on one of the augmented training examples and the one or more feature extraction configuration hyperparameters; searching a second key-value storage based on the second key; obtaining one or more features based on the search of the second key-value storage.
Type: Application
Filed: July 11, 2023
Publication date: January 25, 2024
Applicant: Oracle International Corporation
Inventors: Vladislav Blinov, Vishal Vishnoi, Thanh Long Duong, Mark Edward Johnson, Xin Xu, Elias Luqman Jalaluddin, Ying Xu, Ahmed Ataallah Ataallah Abobakr, Umanga Bista, Thanh Tien Vu
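The two-level lookup can be sketched as follows. This is a minimal illustration, assuming keys are hashes of (example, config hyperparameters) and using in-memory dicts as the two key-value storages; the stand-in augmentation (uppercasing) and feature (string length) are placeholders, not the patent's actual transforms:

```python
import hashlib
import json

def make_key(example, config):
    """Deterministic cache key from a training example plus its
    configuration hyperparameters (hashed to a fixed-length string)."""
    payload = json.dumps({"example": example, "config": config}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

augmentation_cache = {}  # first key-value storage
feature_cache = {}       # second key-value storage

def augment(example, aug_config):
    key = make_key(example, aug_config)
    if key not in augmentation_cache:                   # cache miss: compute
        augmentation_cache[key] = example.upper()       # stand-in augmentation
    return augmentation_cache[key]

def extract_features(aug_example, feat_config):
    key = make_key(aug_example, feat_config)
    if key not in feature_cache:                        # cache miss: compute
        feature_cache[key] = len(aug_example)           # stand-in feature
    return feature_cache[key]

aug = augment("book a flight", {"synonym_prob": 0.3})
feats = extract_features(aug, {"max_len": 128})
```

Re-running the same example with the same hyperparameters hits both caches, which is the point of keying on the configuration: changing a hyperparameter changes the key and forces recomputation.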
-
Publication number: 20230419052
Abstract: Novel techniques are described for positive entity-aware augmentation using a two-stage augmentation to improve the stability of the model to entity value changes for intent prediction. In one particular aspect, a method is provided that includes accessing a first set of training data for an intent prediction model, the first set of training data comprising utterances and intent labels; applying one or more positive data augmentation techniques to the first set of training data, depending on the tuning requirements for hyperparameters, to result in a second set of training data, where the positive data augmentation techniques comprise an Entity-Aware ("EA") technique and a two-stage augmentation technique; combining the first set of training data and the second set of training data to generate expanded training data; and training the intent prediction model using the expanded training data.
Type: Application
Filed: February 1, 2023
Publication date: December 28, 2023
Applicant: Oracle International Corporation
Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
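The core idea of entity-aware augmentation — keeping the intent label stable while the entity value varies — can be sketched as a substitution over a known entity span. The function name, the span convention, and the sample values are illustrative assumptions; the patent's actual two-stage pipeline is more involved:

```python
import random

def entity_aware_augment(utterance, entity_span, alternatives, rng=None):
    """Swap the entity value inside an utterance for another value of the
    same entity type, producing a new positive example with the same intent
    label. entity_span is a (start, end) character span."""
    rng = rng or random.Random(0)
    start, end = entity_span
    value = rng.choice(alternatives)
    return utterance[:start] + value + utterance[end:]

utterance = "book a flight to Paris"
span = (17, 22)  # character span of "Paris"
augmented = entity_aware_augment(utterance, span, ["Tokyo", "Berlin"])
```

Training on such variants discourages the model from tying the "book flight" intent to any particular city name.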
-
Publication number: 20230419040
Abstract: Novel techniques are described for data augmentation using a two-stage entity-aware augmentation to improve model robustness to entity value changes for intent prediction.
Type: Application
Filed: February 1, 2023
Publication date: December 28, 2023
Applicant: Oracle International Corporation
Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
-
Publication number: 20230419127
Abstract: Novel techniques are described for negative entity-aware augmentation using a two-stage augmentation to improve the stability of the model to entity value changes for intent prediction. In some embodiments, a method comprises accessing a first set of training data for an intent prediction model, the first set of training data comprising utterances and intent labels; applying one or more negative entity-aware data augmentation techniques to the first set of training data, depending on the tuning requirements for hyperparameters, to result in a second set of training data, where the one or more negative entity-aware data augmentation techniques comprise a Keyword Augmentation Technique ("KAT") plus entity-without-context technique and a KAT plus entity-in-random-context-as-OOD technique; combining the first set of training data and the second set of training data to generate expanded training data; and training the intent prediction model using the expanded training data.
Type: Application
Filed: February 1, 2023
Publication date: December 28, 2023
Applicant: Oracle International Corporation
Inventors: Ahmed Ataallah Ataallah Abobakr, Shivashankar Subramanian, Ying Xu, Vladislav Blinov, Umanga Bista, Tuyen Quang Pham, Thanh Long Duong, Mark Edward Johnson, Elias Luqman Jalaluddin, Vanshika Sridharan, Xin Xu, Srinivasa Phani Kumar Gadde, Vishal Vishnoi
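The two negative techniques named in the abstract can be sketched as generators of OOD-labeled examples. Everything below (function names, the template convention, the exact "OOD" label string) is an illustrative assumption about how such generators might look:

```python
import random

def kat_entity_without_context(entity_value):
    """Entity-without-context: the bare entity value labeled OOD, so the
    model does not map an entity value alone to an in-domain intent."""
    return {"text": entity_value, "label": "OOD"}

def kat_entity_in_random_context(entity_value, contexts, rng=None):
    """Entity-in-random-context-as-OOD: drop the entity value into an
    unrelated context template and label the result OOD."""
    rng = rng or random.Random(0)
    template = rng.choice(contexts)
    return {"text": template.format(entity=entity_value), "label": "OOD"}

neg1 = kat_entity_without_context("Paris")
neg2 = kat_entity_in_random_context("Paris", ["{entity} is trending today",
                                              "my cousin mentioned {entity}"])
```

These negatives complement the positive augmentation above: positives teach that the intent survives entity swaps, while negatives teach that the entity by itself does not imply the intent.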
-
Publication number: 20230376700
Abstract: Techniques are provided for generating training data to facilitate fine-tuning embedding models. Training data including anchor utterances is obtained. Positive utterances and negative utterances are generated from the anchor utterances. Tuples including the anchor utterances, the positive utterances, and the negative utterances are formed. Embeddings for the tuples are generated and a pre-trained embedding model is fine-tuned based on the embeddings. The fine-tuned model can be deployed to a system.
Type: Application
Filed: May 9, 2023
Publication date: November 23, 2023
Applicant: Oracle International Corporation
Inventors: Umanga Bista, Vladislav Blinov, Mark Edward Johnson, Ahmed Ataallah Ataallah Abobakr, Thanh Long Duong, Srinivasa Phani Kumar Gadde, Vishal Vishnoi, Elias Luqman Jalaluddin, Xin Xu, Shivashankar Subramanian
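Fine-tuning on (anchor, positive, negative) tuples is commonly done with a triplet loss; the abstract does not name the loss, so treat this as a sketch under that assumption, with toy 2-d embeddings standing in for real model outputs:

```python
def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss that pulls the anchor embedding toward the positive
    utterance's embedding and pushes it away from the negative's."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# one (anchor, positive, negative) tuple of toy embeddings
tuples = [
    ([0.0, 1.0], [0.1, 0.9], [1.0, 0.0]),
]
loss = sum(triplet_loss(a, p, n) for a, p, n in tuples)
```

When the positive is already closer than the negative by at least the margin, the loss is zero and that tuple no longer moves the model.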
-
Publication number: 20220067273
Abstract: A computer-implemented method and data display system are disclosed for identifying and visualizing differences between a first set comprising one or more text items and a second set comprising one or more text items. The method includes extracting a collection of named entities from the first and second sets of text items, generating a composite graph structure from the collection of named entities, the composite graph structure configured to display differences between the first and second sets of text items, and then displaying the composite graph structure spatially.
Type: Application
Filed: December 19, 2019
Publication date: March 3, 2022
Inventors: Minjeong Shin, Dongwoo Kim, Jae Hee Lee, Umanga Bista, Lexing Xie
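A minimal sketch of the composite structure, assuming each node is a named entity annotated with its frequency in each set so a renderer could highlight shared versus set-specific entities (node schema and function name are illustrative; co-occurrence edges are omitted):

```python
from collections import Counter

def composite_entity_graph(entities_a, entities_b):
    """Build nodes for every named entity seen in either set, recording how
    often it appears in each set and whether it is shared by both."""
    counts_a, counts_b = Counter(entities_a), Counter(entities_b)
    return {e: {"set_a": counts_a[e],
                "set_b": counts_b[e],
                "shared": e in counts_a and e in counts_b}
            for e in set(counts_a) | set(counts_b)}

g = composite_entity_graph(["Oracle", "Paris"], ["Paris", "Tokyo"])
# "Paris" is shared; "Oracle" and "Tokyo" each appear in only one set
```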