Patents by Inventor Ravindran Subbiah

Ravindran Subbiah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12093695
    Abstract: This disclosure relates generally to a method and system for processing asynchronous and distributed training tasks. Training a large-scale deep neural network (DNN) model with large-scale training data is time-consuming. The method creates a work queue (Q) with a predefined number of tasks comprising training data. Central processing unit (CPU) and graphics processing unit (GPU) information is fetched from the current environment to initiate a parallel, asynchronous process on the work queue (Q) that trains a set of deep learning models with optimized resources: a data pre-processing technique computes transformed training data, and an asynchronous model training technique trains the set of deep learning models on each GPU asynchronously with the transformed training data, based on a set of asynchronous model parameters. (A minimal sketch of this work-queue pattern appears after this listing.)
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: September 17, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Amit Kalele, Ravindran Subbiah, Anubhav Jain
  • Publication number: 20240184996
    Abstract: Human-understandable explanations of Artificial Intelligence (AI) based models are crucial to building transparency and trust in AI based solutions. More importantly, these explanations need to be contextual, applicable to the domain the model is used in, and relevant to the concerned stakeholder. Conventional approaches, however, do not communicate these explanations to the various stakeholders in a language they can understand and relate to. The present disclosure equips conversational agents (chat bots) with the intelligence and actions needed to communicate the right information to the right stakeholder in the right way. In the present disclosure, contextual explanations for user queries are generated based on the output of AI models. The impacting features are obtained from the explainer model associated with the prediction model, and the contextual information is generated. Further, the contextual information is converted into a contextual explanation for the user. (A sketch of this explanation step appears after this listing.)
    Type: Application
    Filed: November 29, 2023
    Publication date: June 6, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayashree Arunkumar, Ravindran Subbiah, Jyoti Bhat, Amit Kalele
  • Publication number: 20230297388
    Abstract: This disclosure relates generally to a method and system for processing asynchronous and distributed training tasks. Training a large-scale deep neural network (DNN) model with large-scale training data is time-consuming. The method creates a work queue (Q) with a predefined number of tasks comprising training data. Central processing unit (CPU) and graphics processing unit (GPU) information is fetched from the current environment to initiate a parallel, asynchronous process on the work queue (Q) that trains a set of deep learning models with optimized resources: a data pre-processing technique computes transformed training data, and an asynchronous model training technique trains the set of deep learning models on each GPU asynchronously with the transformed training data, based on a set of asynchronous model parameters. (The work-queue sketch after this listing applies to this published application as well.)
    Type: Application
    Filed: February 22, 2023
    Publication date: September 21, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Amit Kalele, Ravindran Subbiah, Anubhav Jain
  • Publication number: 20230289613
    Abstract: State-of-the-art approaches independently use Pruning-weight Clustering-Quantization (PCQ) or Knowledge Distillation (KD) for model optimization and require critical manual intervention. Embodiments of the present disclosure provide a method and system for a two-step hierarchical model optimization approach that generates an optimized deep learning (DL) model. The method comprises an AutoPCQ technique followed by conditional application of an automated KD (AKD) technique. The AutoPCQ technique formulates configuration selection for the DL model as an optimization problem and solves it by iteratively applying Bayesian optimization and Reinforcement Learning. Further, the AKD technique formulates the automated search for a student model as an optimization problem, with the DL model acting as the teacher model. The search space for the student model is defined by a restricted Neural Network Architecture Search that constrains the candidate architectures. (A sketch of this two-step flow appears after this listing.)
    Type: Application
    Filed: December 1, 2022
    Publication date: September 14, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Amit Kalele, Ravindran Subbiah, Anubhav Jain, Ishank Goel
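
The following is a minimal Python sketch of the work-queue pattern described in the abstracts of patent 12093695 and publication 20230297388: CPU and GPU information is read from the environment, a work queue of training tasks is created, and one worker per GPU drains the queue, pre-processes each task and trains on the result. The shard names, the NUM_GPUS environment variable, and the preprocess()/training bodies are illustrative placeholders, not the patented implementation.

```python
# Sketch (under stated assumptions) of a work queue drained asynchronously by one worker per GPU.
import multiprocessing as mp
import os


def preprocess(shard):
    # Placeholder for the data pre-processing step that computes the transformed training data.
    return f"transformed({shard})"


def train_on_gpu(gpu_id, work_queue):
    # Each worker drains tasks from the shared queue and trains asynchronously on its GPU;
    # the actual model update is elided and replaced by a print statement.
    while True:
        shard = work_queue.get()      # blocking get; None is the stop sentinel
        if shard is None:
            break
        data = preprocess(shard)
        print(f"GPU {gpu_id}: training on {data}")


if __name__ == "__main__":
    num_cpus = os.cpu_count()                        # CPU information from the environment
    num_gpus = int(os.environ.get("NUM_GPUS", "2"))  # GPU count; assumed environment variable
    print(f"{num_cpus} CPUs available for pre-processing, {num_gpus} GPU workers")

    work_queue = mp.Queue()
    for i in range(8):                               # predefined number of tasks
        work_queue.put(f"shard_{i}")
    for _ in range(num_gpus):
        work_queue.put(None)                         # one sentinel per worker

    workers = [mp.Process(target=train_on_gpu, args=(g, work_queue))
               for g in range(num_gpus)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```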
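
The next sketch illustrates the conversion step from publication 20240184996: feature attributions obtained from an explainer model are turned into a stakeholder-specific explanation. The attribution values, feature names, role templates, and the contextual_explanation() helper are invented for illustration; the actual explainer and prediction models are not shown.

```python
# Sketch: convert explainer output (feature importances) into a role-specific explanation string.
ROLE_TEMPLATES = {
    "customer": "Your application was {decision} mainly because of {features}.",
    "analyst": "Prediction = {decision}; top contributing features: {features}.",
}


def contextual_explanation(attributions, decision, role="customer", top_k=2):
    # attributions: dict of feature -> importance produced by an explainer model (e.g. SHAP-style values)
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    features = ", ".join(name for name, _ in top)
    return ROLE_TEMPLATES[role].format(decision=decision, features=features)


if __name__ == "__main__":
    attrs = {"income": 0.42, "credit_history": -0.31, "age": 0.05}   # illustrative values
    print(contextual_explanation(attrs, decision="declined", role="customer"))
    print(contextual_explanation(attrs, decision="declined", role="analyst"))
```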
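
Finally, a sketch of the two-step flow in publication 20230289613: a search over pruning/clustering/quantization (PCQ) configurations runs first, and a student-model search (the KD step) is triggered only when the compressed model misses its accuracy target. A plain random search stands in for the Bayesian-optimization/RL search of the disclosure, and evaluate_pcq()/evaluate_student() are hypothetical stubs for real compression-and-evaluation runs.

```python
# Sketch: AutoPCQ-style configuration search with a conditional fallback to a restricted student search.
import random

PCQ_SPACE = {
    "prune_ratio": [0.3, 0.5, 0.7],
    "weight_clusters": [16, 32, 64],
    "quant_bits": [4, 8],
}
STUDENT_SPACE = {"depth": [2, 4, 6], "width": [64, 128, 256]}   # restricted NAS space


def evaluate_pcq(cfg):
    # Stub: in practice this would prune, cluster and quantize the teacher model
    # with this configuration and return its validation accuracy.
    return 0.9 - 0.2 * cfg["prune_ratio"] + 0.005 * cfg["quant_bits"]


def evaluate_student(arch):
    # Stub: in practice this would distil the teacher into the candidate student architecture.
    return 0.85 + 0.0001 * arch["depth"] * arch["width"]


def search(space, evaluate, trials=10):
    # Random search as a stand-in for the Bayesian-optimization / RL loop in the disclosure.
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {key: random.choice(values) for key, values in space.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score


if __name__ == "__main__":
    cfg, acc = search(PCQ_SPACE, evaluate_pcq)
    print("AutoPCQ step:", cfg, acc)
    if acc < 0.88:                       # accuracy target triggers the conditional AKD step
        arch, acc = search(STUDENT_SPACE, evaluate_student)
        print("AKD step:", arch, acc)
```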