Patents by Inventor Silpa VADAKKEEVEETIL SREELATHA

Silpa VADAKKEEVEETIL SREELATHA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11699275
    Abstract: This disclosure relates generally to visio-linguistic understanding. Conventional methods use a contextual visio-linguistic reasoner for visio-linguistic understanding, which requires more compute power and a large amount of pre-training data. Embodiments of the present disclosure provide a method for visio-linguistic understanding using a contextual language model reasoner. The method converts the visual information of an input image into a format that the contextual language model reasoner understands and accepts for a downstream task. The method utilizes image captions and the confidence scores associated with those captions, along with a knowledge graph, to obtain a combined input in a format compatible with the contextual language model reasoner. Contextual embeddings corresponding to the downstream task are obtained using the combined input. The disclosed method can be used to solve several downstream tasks such as scene understanding, visual question answering, and visual common-sense reasoning. (A minimal sketch of this pipeline appears after the listing.)
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: July 11, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Sai Sree Bhargav Kurma, Kanika Kalra, Silpa Vadakkeeveetil Sreelatha, Manasi Patwardhan, Shirish Subhash Karande
  • Publication number: 20220019734
    Abstract: This disclosure relates generally to visio-linguistic understanding. Conventional methods use a contextual visio-linguistic reasoner for visio-linguistic understanding, which requires more compute power and a large amount of pre-training data. Embodiments of the present disclosure provide a method for visio-linguistic understanding using a contextual language model reasoner. The method converts the visual information of an input image into a format that the contextual language model reasoner understands and accepts for a downstream task. The method utilizes image captions and the confidence scores associated with those captions, along with a knowledge graph, to obtain a combined input in a format compatible with the contextual language model reasoner. Contextual embeddings corresponding to the downstream task are obtained using the combined input. The disclosed method can be used to solve several downstream tasks such as scene understanding, visual question answering, and visual common-sense reasoning.
    Type: Application
    Filed: June 16, 2021
    Publication date: January 20, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Sai Sree Bhargav KURMA, Kanika KALRA, Silpa VADAKKEEVEETIL SREELATHA, Manasi PATWARDHAN, Shirish Subhash KARANDE
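The abstracts above describe a pipeline that serializes image captions, their confidence scores, and knowledge-graph facts into a single text input for a contextual language model, whose contextual embeddings then feed a downstream task. The following is a minimal sketch under stated assumptions, not the patented implementation: the captions, confidence scores, and knowledge-graph facts are placeholder values, bert-base-uncased stands in for the contextual language model reasoner, and the build_combined_input helper and its [SEP]-joined format are illustrative serialization choices rather than the claimed input format.

```python
# Sketch only: combine captions, confidence scores, and KG facts into one
# text sequence and obtain contextual embeddings from a language model.
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical outputs of an image-captioning model: (caption, confidence).
captions = [
    ("a man riding a bicycle down a street", 0.92),
    ("a person on a bike near parked cars", 0.71),
]

# Hypothetical knowledge-graph facts retrieved for entities in the captions.
kg_facts = [
    "bicycle is a kind of vehicle",
    "street is an outdoor location",
]

# Downstream-task text, e.g. a visual-question-answering query.
question = "What is the man doing?"


def build_combined_input(captions, kg_facts, task_text):
    """Serialize captions (ordered by confidence), knowledge-graph facts,
    and the task text into one sequence the language model can consume."""
    ordered = [cap for cap, _ in sorted(captions, key=lambda x: -x[1])]
    context = " [SEP] ".join(ordered + kg_facts)
    return f"{task_text} [SEP] {context}"


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

combined = build_combined_input(captions, kg_facts, question)
inputs = tokenizer(combined, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for the combined input; a task-specific head
# (e.g. an answer classifier for VQA) would be trained on top of these.
contextual_embeddings = outputs.last_hidden_state
print(contextual_embeddings.shape)  # (1, sequence_length, 768)
```

Ordering captions by confidence before serialization is one simple way to let the confidence scores influence the combined input; how the patented method actually incorporates the scores is not specified in the abstract.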