Patents by Inventor Shahid Reza

Shahid Reza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12175204
    Abstract: Techniques for dynamically developing a contextual set of prompts based on relevant aspects extracted from a set of training data. One technique includes obtaining training data comprising text examples and associated labels, extracting aspects from the training data, generating prompting templates based on the training data and the extracted aspects, concatenating each of the text examples with the respective generated prompting template to create prompting functions, training a machine learning language model on the prompting functions to predict a solution for a task, where the training is formulated as a masked language modeling problem with blanks of the prompting templates being set as text labels and expected output for the task being set as specified solution labels, and the training learns or updates model parameters of the machine learning language model for performing the task. The machine learning language model is provided with the learned or updated model parameters.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: December 24, 2024
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Shahid Reza, Nagaraj N. Bhat
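The grant above describes prompt-based training framed as masked language modeling. Below is a minimal illustrative sketch of that general idea, assuming a Hugging Face BERT checkpoint, an invented aspect, template, and label words; none of these specifics come from the patent.

```python
# Hypothetical sketch of prompt-based classification via masked language modeling.
# Model name, template, and label words are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Training data: text examples with associated labels (sentiment, here).
examples = [("the battery life is excellent", "positive"),
            ("the screen cracked in a week", "negative")]

# An aspect assumed to be extracted from the data drives the prompting template;
# the blank of the template is the tokenizer's mask token.
aspect = "product quality"
template = f" Regarding {aspect}, it was {tokenizer.mask_token}."

# Label words that the mask position should resolve to.
label_words = {"positive": "good", "negative": "bad"}

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for text, label in examples:
    # Concatenate the example with the prompting template -> prompting function.
    prompt = text + template
    enc = tokenizer(prompt, return_tensors="pt")
    # Only the mask position contributes to the loss; other positions are ignored (-100).
    labels = torch.full_like(enc["input_ids"], -100)
    mask_pos = (enc["input_ids"] == tokenizer.mask_token_id)
    labels[mask_pos] = tokenizer.convert_tokens_to_ids(label_words[label])
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```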
  • Publication number: 20240143934
    Abstract: A method includes accessing a document including sentences, the document being associated with a configuration flag indicating whether ABSA (aspect-based sentiment analysis), SLSA (sentence-level sentiment analysis), or both are to be performed; inputting the document into a language model that generates chunks of token embeddings for the document; and, based on the configuration flag, performing at least one from among the ABSA and the SLSA by inputting the chunks of token embeddings into a multi-task model. When performing the SLSA, a part of the token embeddings in each of the chunks is masked, and the masked token embeddings do not belong to a particular sentence on which the SLSA is performed.
    Type: Application
    Filed: October 12, 2023
    Publication date: May 2, 2024
    Applicant: Oracle International Corporation
    Inventors: Poorya Zaremoodi, Duy Vu, Nagaraj N. Bhat, Srijon Sarkar, Varsha Kuppur Rajendra, Thanh Long Duong, Mark Edward Johnson, Pramir Sarkar, Shahid Reza
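The application above routes chunked token embeddings to aspect-level (ABSA) and/or sentence-level (SLSA) heads based on a configuration flag, masking tokens outside the target sentence for SLSA. The sketch below shows one way such routing and masking could look; the head structure, hidden size, and mean pooling are assumptions, not the patented design.

```python
# Illustrative sketch (assumptions throughout): a configuration flag routes chunked
# token embeddings to ABSA and/or SLSA heads; for SLSA, embeddings outside the
# target sentence are masked out before pooling.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, hidden=768, num_labels=3):
        super().__init__()
        self.absa_head = nn.Linear(hidden, num_labels)   # per-token aspect sentiment
        self.slsa_head = nn.Linear(hidden, num_labels)   # per-sentence sentiment

    def forward(self, chunk_embeddings, config_flag, sentence_mask=None):
        outputs = {}
        if config_flag in ("absa", "both"):
            outputs["absa"] = self.absa_head(chunk_embeddings)
        if config_flag in ("slsa", "both"):
            # Zero out token embeddings that do not belong to the target sentence,
            # then mean-pool the remaining tokens before the sentence-level head.
            masked = chunk_embeddings * sentence_mask.unsqueeze(-1)
            pooled = masked.sum(dim=1) / sentence_mask.sum(dim=1, keepdim=True)
            outputs["slsa"] = self.slsa_head(pooled)
        return outputs

# Example usage with one chunk of 128 token embeddings from a language model.
chunk = torch.randn(1, 128, 768)
sent_mask = torch.zeros(1, 128)
sent_mask[0, 10:25] = 1.0          # tokens of the sentence being analyzed
head = MultiTaskHead()
out = head(chunk, config_flag="both", sentence_mask=sent_mask)
```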
  • Publication number: 20240135116
    Abstract: A computer-implemented method includes: accessing a plurality of datasets, where each dataset of the plurality of datasets includes training examples; selecting datasets that include the training examples in a source language and a target language; sampling, based on a sampling weight that is determined for each of the selected datasets, the training examples from the selected datasets to generate training batches; training an ML model for performing at least a first task using the training examples of the training batches, by inputting the training batches to the ML model in an interleaved manner; and outputting the trained ML model configured to perform at least the first task on input utterances provided in at least one among the source language and the target language. The sampling weight is determined for each of the selected datasets based on one or more attributes common to the training examples of the selected dataset.
    Type: Application
    Filed: October 12, 2023
    Publication date: April 25, 2024
    Applicant: Oracle International Corporation
    Inventors: Duy Vu, Poorya Zaremoodi, Nagaraj N. Bhat, Srijon Sarkar, Varsha Kuppur Rajendra, Thanh Long Duong, Mark Edward Johnson, Pramir Sarkar, Shahid Reza
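The application above samples training examples from multiple source/target-language datasets using per-dataset weights and feeds the resulting batches to the model in an interleaved fashion. A rough sketch follows; the dataset names, the size-proportional weighting, and the batch size are illustrative assumptions only.

```python
# Rough sketch (assumed weighting scheme): sample training batches from several
# source/target-language datasets according to per-dataset weights, then interleave
# the batches when feeding the model.
import random

datasets = {
    "en_fr_examples": [("book a flight", "réserver un vol")] * 100,
    "en_es_examples": [("play music", "reproducir música")] * 40,
}

# Example attribute-based weight: proportional to dataset size (an assumption;
# the abstract only says the weight depends on attributes common to the examples).
total = sum(len(v) for v in datasets.values())
weights = {name: len(v) / total for name, v in datasets.items()}

def sample_batches(num_batches, batch_size=8):
    names = list(datasets)
    probs = [weights[n] for n in names]
    batches = []
    for _ in range(num_batches):
        name = random.choices(names, weights=probs, k=1)[0]
        batches.append((name, random.sample(datasets[name], batch_size)))
    return batches

# Interleaved input: consecutive batches may come from different datasets/languages.
for name, batch in sample_batches(6):
    pass  # a model training step on `batch` would go here
```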
  • Publication number: 20230237277
    Abstract: Techniques for dynamically developing a contextual set of prompts based on relevant aspects extracted from a set of training data. One technique includes obtaining training data comprising text examples and associated labels, extracting aspects from the training data, generating prompting templates based on the training data and the extracted aspects, concatenating each of the text examples with the respective generated prompting template to create prompting functions, training a machine learning language model on the prompting functions to predict a solution for a task, where the training is formulated as a masked language modeling problem with blanks of the prompting templates being set as text labels and expected output for the task being set as specified solution labels, and the training learns or updates model parameters of the machine learning language model for performing the task. The machine learning language model is provided with the learned or updated model parameters.
    Type: Application
    Filed: January 25, 2022
    Publication date: July 27, 2023
    Applicant: Oracle International Corporation
    Inventors: Shahid Reza, Nagaraj N. Bhat
  • Publication number: 20230131834
    Abstract: A system is disclosed that is configured to perform various bias checks on a machine learning (ML) model in order to identify one or more biases, if any, that may be inherent to the ML model. Bias evaluation results generated from performing the checks are then reported to a user, such as a consumer of the ML model, a data scientist responsible for modeling and training the ML model, and others. The bias evaluation system performs one or more bias checks by generating synthetic datasets using attributes present in the ML model or a training dataset used to train the ML model. Prediction data is then generated by inputting the synthetically generated input data points of the synthetic datasets into the ML model. The prediction data is then processed and evaluated for biases. Results of the evaluation may be compiled into a bias evaluation report.
    Type: Application
    Filed: October 22, 2021
    Publication date: April 27, 2023
    Applicant: Oracle International Corporation
    Inventors: Hari Bhaskar Sankaranarayanan, Shahid Reza, Arpit Katiyar
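The application above evaluates an ML model for bias by generating synthetic datasets from the model's attributes, collecting predictions, and compiling a report. The sketch below illustrates one such check (a demographic-parity gap); the attributes, stand-in model, and metric are assumptions rather than the patented procedure.

```python
# Hypothetical sketch of the bias-check idea: generate synthetic data points over
# the model's attributes, collect model predictions, and compare positive-prediction
# rates across groups. Attribute names and the stand-in model are assumptions.
import random

def generate_synthetic_dataset(n=1000):
    # Synthetic data points built from attributes present in the training data.
    return [{"age": random.randint(18, 70),
             "income": random.gauss(50_000, 15_000),
             "gender": random.choice(["A", "B"])} for _ in range(n)]

def model_predict(x):
    # Stand-in for the ML model under evaluation (assumption for the example).
    return 1 if x["income"] > 55_000 else 0

def demographic_parity_gap(dataset, attribute="gender"):
    # Positive-prediction rate per group; a large gap suggests potential bias.
    rates = {}
    for group in {d[attribute] for d in dataset}:
        group_points = [d for d in dataset if d[attribute] == group]
        rates[group] = sum(model_predict(d) for d in group_points) / len(group_points)
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(generate_synthetic_dataset())
report = {"check": "demographic_parity", "gap": round(gap, 3), "rates": rates}
```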
  • Publication number: 20230100303
    Abstract: Systems and methods for fractional inference on GPU and CPU for large-scale deployment of customized transformer-based language models are disclosed herein. The method can include receiving data for use in generation of a machine learning model output, ingesting the data with a first machine learning model on a Graphics Processing Unit, receiving at least one intermediate output from the first machine learning model at a temporary store, receiving the at least one intermediate output from the temporary store at a Central Processing Unit, ingesting the at least one intermediate output with a second machine learning model on the Central Processing Unit, and outputting a prediction with the second machine learning model.
    Type: Application
    Filed: September 28, 2021
    Publication date: March 30, 2023
    Applicant: Oracle International Corporation
    Inventors: Siddhant Jain, Saransh Mehta, Shahid Reza
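The application above splits inference between a GPU stage and a CPU stage, with a temporary store holding the intermediate output in between. A minimal sketch of that hand-off is shown below, assuming PyTorch and an in-memory dictionary as the temporary store; a deployed system would presumably use a proper queue or cache.

```python
# Sketch only (models, shapes, and store are assumptions): run the first model on the
# GPU, stash its intermediate output in a temporary store, and finish the prediction
# with a second model on the CPU.
import torch
import torch.nn as nn

gpu = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cpu = torch.device("cpu")

first_model = nn.Sequential(nn.Linear(512, 256), nn.ReLU()).to(gpu)           # GPU stage
second_model = nn.Sequential(nn.Linear(256, 2), nn.Softmax(dim=-1)).to(cpu)   # CPU stage

temporary_store = {}  # stand-in for a queue/cache between the two stages

def ingest_on_gpu(request_id, features):
    with torch.no_grad():
        intermediate = first_model(features.to(gpu))
    temporary_store[request_id] = intermediate.to(cpu)   # hand off via the store

def predict_on_cpu(request_id):
    intermediate = temporary_store.pop(request_id)
    with torch.no_grad():
        return second_model(intermediate)

ingest_on_gpu("req-1", torch.randn(1, 512))
prediction = predict_on_cpu("req-1")
```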
  • Publication number: 20230048920
    Abstract: Systems and methods for implementing a federated learning engine for integration of vertical and horizontal AI are disclosed herein. A method can include receiving a global model from a central aggregator communicatively connected with a plurality of user environments, the global model including a plurality of layers. The method can include training a mini model on top of the global model with data gathered within the user environment, uploading at least a portion of the mini model to the central aggregator, receiving a plurality of mini models, and creating a fusion model based on the received plurality of mini models.
    Type: Application
    Filed: August 11, 2021
    Publication date: February 16, 2023
    Applicant: Oracle International Corporation
    Inventors: Rajarshi Bhose, Shahid Reza, Siddhant Jain
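The application above has each user environment train a small "mini model" on top of a shared global model, upload it to a central aggregator, and fuse the collected mini models. Below is a minimal sketch under assumed details: the global and mini model shapes are invented, and fusion is shown as simple parameter averaging, which the abstract does not specify.

```python
# Minimal sketch (invented shapes and fusion rule): each user environment trains a
# mini model on top of a frozen global model; the aggregator fuses the collected
# mini models by averaging their parameters.
import copy
import torch
import torch.nn as nn

global_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # shared layers from the aggregator

def train_mini_model(local_data):
    mini = nn.Linear(32, 2)                                   # mini model on top of the global layers
    opt = torch.optim.SGD(mini.parameters(), lr=0.1)
    for x, y in local_data:                                   # data gathered in the user environment
        with torch.no_grad():
            features = global_model(x)                        # global model stays frozen
        loss = nn.functional.cross_entropy(mini(features), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mini

def fuse(mini_models):
    # Fusion by parameter averaging (an assumption; the abstract only says "fusion model").
    fused = copy.deepcopy(mini_models[0])
    with torch.no_grad():
        for p_fused, *p_others in zip(fused.parameters(),
                                      *[m.parameters() for m in mini_models[1:]]):
            p_fused.copy_(torch.stack([p_fused, *p_others]).mean(dim=0))
    return fused

local = [(torch.randn(8, 64), torch.randint(0, 2, (8,)))]
fusion_model = fuse([train_mini_model(local) for _ in range(3)])
```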