Patents by Inventor Arindam CHOWDHURY

Arindam CHOWDHURY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220222956
    Abstract: This disclosure relates generally to intelligent visual reasoning over graphical illustrations using a MAC unit. Prior art uses visual attention to map particular words in a question to specific areas in an image to memorize the corresponding answers, thereby resulting in a limited capability to answer questions of a specific type. The present disclosure incorporates the MAC unit to enable reasoning capabilities and accordingly attend to an area in the image to find the answer. The present disclosure therefore allows generalizing over a possible set of questions with varying complexities, so that an unseen question can also be answered correctly based on the reasoning methods the system has learned. The system and method of the present disclosure can be used for understanding visual information when processing documents containing charts, such as business reports, research papers, and consensus reports, and reduce the time spent in manual analysis.
    Type: Application
    Filed: May 28, 2020
    Publication date: July 14, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: MONIKA SHARMA, ARINDAM CHOWDHURY, LOVEKESH VIG, SHIKHA GUPTA
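The MAC-unit reasoning the abstract describes can be sketched roughly as follows: a control step attends over the question words to pick the current reasoning operation, a read step attends over image-region features conditioned on that control, and a write step folds the retrieved information into memory. This is a minimal NumPy illustration of that control/read/write loop; the weight names (`Wc`, `Wr`, `Ww`) and the exact attention formulas are simplifying assumptions, not the patented design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mac_step(memory, question_words, kb_regions, Wc, Wr, Ww):
    """One illustrative MAC reasoning step.

    memory:         (d,)   current memory vector
    question_words: (L, d) contextual word embeddings of the question
    kb_regions:     (R, d) features of image regions (the knowledge base)
    Wc, Wr: (d, d) and Ww: (d, 2d) are stand-in learned weights.
    """
    # Control: attend over question words to pick the reasoning operation.
    ctrl_scores = question_words @ (Wc @ memory)        # (L,)
    control = softmax(ctrl_scores) @ question_words     # (d,)
    # Read: attend over image regions, conditioned on control and memory.
    interact = kb_regions * (Wr @ (control * memory))   # (R, d)
    read = softmax(interact.sum(axis=1)) @ kb_regions   # (d,)
    # Write: integrate the retrieved information into memory.
    memory = np.tanh(Ww @ np.concatenate([memory, read]))
    return memory, control

rng = np.random.default_rng(0)
d, L, R = 8, 5, 4
mem2, ctrl = mac_step(
    rng.standard_normal(d),
    rng.standard_normal((L, d)),
    rng.standard_normal((R, d)),
    rng.standard_normal((d, d)),
    rng.standard_normal((d, d)),
    rng.standard_normal((d, 2 * d)),
)
```

Chaining several such steps, each re-attending to a different question word, is what lets the model answer unseen question types by composing learned reasoning operations rather than memorizing question-to-answer mappings.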
  • Patent number: 10970531
    Abstract: This disclosure relates to digitization of industrial inspection sheets. Digital scanning of paper-based inspection sheets is a common process in factory settings. These scans contain data pertaining to millions of faults detected over several decades of inspection. The technical challenges range from image preprocessing and layout analysis to word and graphic item recognition. This disclosure provides a visual pipeline that works in the presence of both static and dynamic backgrounds in the scans, variability in machine template diagrams, unstructured shapes of the graphical objects to be identified, and variability in the strokes of handwritten text. The pipeline incorporates a capsule and spatial transformer network based classifier for accurate text reading and a customized Connectionist Text Proposal Network (CTPN) for text detection, in addition to hybrid techniques for arrow detection and dialogue cloud removal.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: April 6, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Rohit Rahul, Arindam Chowdhury, Lovekesh Vig, . Animesh, Samarth Mittal
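The staged pipeline this abstract names (dialogue cloud removal, CTPN-style text detection, capsule/spatial-transformer text reading) can be pictured as a sequence of transforms over one scan. The sketch below is purely an orchestration skeleton: the stage names, the `SheetScan` container, and the trivial stand-in bodies are illustrative assumptions, not the patented models.

```python
from dataclasses import dataclass, field

@dataclass
class SheetScan:
    """Hypothetical container for one scanned inspection sheet."""
    image: list                                   # pixel data (stand-in)
    text_boxes: list = field(default_factory=list)
    readings: list = field(default_factory=list)

def remove_dialogue_clouds(scan):
    # Stand-in for the hybrid dialogue-cloud-removal step.
    return scan

def detect_text_ctpn(scan):
    # Stand-in for CTPN-style text localization: emit bounding boxes.
    scan.text_boxes.append((0, 0, 10, 10))
    return scan

def read_text_capsule_stn(scan):
    # Stand-in for the capsule + spatial transformer classifier that
    # reads handwritten text inside each detected box.
    scan.readings.append("fault-03")
    return scan

def run_pipeline(scan):
    for stage in (remove_dialogue_clouds, detect_text_ctpn, read_text_capsule_stn):
        scan = stage(scan)
    return scan

result = run_pipeline(SheetScan(image=[]))
```

Structuring the stages as independent transforms is what lets each one be swapped (e.g. a different text detector) without touching the rest of the pipeline.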
  • Patent number: 10936897
    Abstract: Various methods use SQL-based data extraction to extract relevant information from images. These are rule-based methods of generating an SQL query from natural language (NL); if new English sentences are to be handled, manual intervention is required, which makes them difficult for a non-technical user. A system and method for extracting relevant information from images using a conversational interface and database querying are provided. The system eliminates noisy effects, identifies the type of document, and detects various entities in diagrams. Further, a schema is designed that provides an easy-to-understand abstraction of the entities detected by the deep vision models and the relationships between them. Relevant information and fields can then be extracted from the document by writing SQL queries on top of the relationship tables. A natural language based interface is added so that a non-technical user, specifying the queries in natural language, can fetch the information effortlessly.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: March 2, 2021
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Lovekesh Vig, Gautam Shroff, Arindam Chowdhury, Rohit Rahul, Gunjan Sehgal, Vishwanath Doreswamy, Monika Sharma, Ashwin Srinivasan
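The querying layer this abstract describes — vision models emit detected entities and their relationships into relational tables, which plain SQL (generated from a natural-language question) then queries — can be sketched with `sqlite3`. The schema, the entity kinds, and the sample question below are illustrative assumptions, not the patent's actual tables.

```python
import sqlite3

# Hypothetical schema: detected entities and the relations between them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT, label TEXT);
    CREATE TABLE relation (src INTEGER, dst INTEGER, kind TEXT);
""")
conn.executemany("INSERT INTO entity VALUES (?, ?, ?)", [
    (1, "valve", "V-101"),
    (2, "pipe",  "P-7"),
    (3, "valve", "V-102"),
])
conn.executemany("INSERT INTO relation VALUES (?, ?, ?)", [
    (1, 2, "connected_to"),
    (3, 2, "connected_to"),
])

# "Which valves are connected to pipe P-7?" — the NL front end would
# translate the user's question into a query like this one.
rows = conn.execute("""
    SELECT e.label FROM entity e
    JOIN relation r ON r.src = e.id
    JOIN entity p  ON p.id  = r.dst
    WHERE e.kind = 'valve' AND p.label = 'P-7'
    ORDER BY e.label
""").fetchall()
labels = [label for (label,) in rows]
print(labels)  # prints ['V-101', 'V-102']
```

Keeping the vision output in ordinary tables is the key design point: new questions need only a new SQL query, not new extraction code.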
  • Patent number: 10839246
    Abstract: The present disclosure provides systems and methods for end-to-end handwritten text recognition using neural networks. Most existing hybrid architectures involve high memory consumption and a large number of computations to convert offline handwritten text into machine-readable text, with corresponding variations in conversion accuracy. The method combines a deep Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN) based encoder unit and decoder unit to map a handwritten text image to the sequence of characters corresponding to the text present in the scanned input image. The deep CNN extracts features from the handwritten text image, whereas the RNN-based encoder and decoder units generate the converted text as a set of characters. The disclosed method requires less memory and fewer computations, with better conversion accuracy than the existing hybrid architectures.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: November 17, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Arindam Chowdhury, Lovekesh Vig
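The CNN-to-RNN mapping in this abstract — convolutional features sliced into a left-to-right sequence, an encoder folding that sequence into a context vector, and a decoder emitting characters one at a time — can be sketched in NumPy. Everything here is a toy stand-in with random weights (the strip-wise "CNN", the plain-tanh RNN cells, the tiny character set), shown only to make the data flow concrete; it is not the patented architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(42)
CHARSET = list("abc") + ["<eos>"]
d = 16

def cnn_features(image, strips=6):
    # Stand-in for the deep CNN: slice the image into vertical strips and
    # project each strip to a feature vector (a real model learns this).
    cols = np.array_split(image, strips, axis=1)
    W = rng.standard_normal((d, image.shape[0]))
    return [np.tanh(W @ c.mean(axis=1)) for c in cols]

def encode(feats, Wh, Wx):
    # Simple RNN encoder: fold the feature sequence into a context vector.
    h = np.zeros(d)
    for x in feats:
        h = np.tanh(Wh @ h + Wx @ x)
    return h

def decode(h, Wh, Wo, max_len=10):
    # Greedy decoder: emit one character per step until <eos> or the cap.
    out = []
    for _ in range(max_len):
        h = np.tanh(Wh @ h)
        ch = CHARSET[int(softmax(Wo @ h).argmax())]
        if ch == "<eos>":
            break
        out.append(ch)
    return "".join(out)

feats = cnn_features(rng.standard_normal((32, 96)))
h = encode(feats, rng.standard_normal((d, d)), rng.standard_normal((d, d)))
text = decode(h, rng.standard_normal((d, d)), rng.standard_normal((len(CHARSET), d)))
```

With untrained random weights the decoded string is arbitrary; the point is the shape of the computation — one feature vector per image strip in, one character per decoding step out — which is what lets a single network replace the segmentation-plus-classification stages of older hybrid systems.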
  • Publication number: 20200175304
    Abstract: Various methods use SQL-based data extraction to extract relevant information from images. These are rule-based methods of generating an SQL query from natural language (NL); if new English sentences are to be handled, manual intervention is required, which makes them difficult for a non-technical user. A system and method for extracting relevant information from images using a conversational interface and database querying are provided. The system eliminates noisy effects, identifies the type of document, and detects various entities in diagrams. Further, a schema is designed that provides an easy-to-understand abstraction of the entities detected by the deep vision models and the relationships between them. Relevant information and fields can then be extracted from the document by writing SQL queries on top of the relationship tables. A natural language based interface is added so that a non-technical user, specifying the queries in natural language, can fetch the information effortlessly.
    Type: Application
    Filed: March 14, 2019
    Publication date: June 4, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Lovekesh VIG, Gautam SHROFF, Arindam CHOWDHURY, Rohit RAHUL, Gunjan SEHGAL, Vishwanath DORESWAMY, Monika SHARMA, Ashwin SRINIVASAN
  • Publication number: 20200167557
    Abstract: This disclosure relates to digitization of industrial inspection sheets. Digital scanning of paper-based inspection sheets is a common process in factory settings. These scans contain data pertaining to millions of faults detected over several decades of inspection. The technical challenges range from image preprocessing and layout analysis to word and graphic item recognition. This disclosure provides a visual pipeline that works in the presence of both static and dynamic backgrounds in the scans, variability in machine template diagrams, unstructured shapes of the graphical objects to be identified, and variability in the strokes of handwritten text. The pipeline incorporates a capsule and spatial transformer network based classifier for accurate text reading and a customized Connectionist Text Proposal Network (CTPN) for text detection, in addition to hybrid techniques for arrow detection and dialogue cloud removal.
    Type: Application
    Filed: February 25, 2019
    Publication date: May 28, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Rohit RAHUL, Arindam CHOWDHURY, Lovekesh VIG, . ANIMESH, Samarth MITTAL
  • Publication number: 20200026951
    Abstract: The present disclosure provides systems and methods for end-to-end handwritten text recognition using neural networks. Most existing hybrid architectures involve high memory consumption and a large number of computations to convert offline handwritten text into machine-readable text, with corresponding variations in conversion accuracy. The method combines a deep Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN) based encoder unit and decoder unit to map a handwritten text image to the sequence of characters corresponding to the text present in the scanned input image. The deep CNN extracts features from the handwritten text image, whereas the RNN-based encoder and decoder units generate the converted text as a set of characters. The disclosed method requires less memory and fewer computations, with better conversion accuracy than the existing hybrid architectures.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 23, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Arindam CHOWDHURY, Lovekesh VIG