Patents by Inventor Vivek Bangalore Sampathkumar

Vivek Bangalore Sampathkumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104377
    Abstract: This disclosure relates generally to the field of Electroencephalogram (EEG) classification and, more particularly, to a method and system for EEG motor imagery classification. Existing deep learning works employ the sensor space for EEG graph representations, wherein the channels of the EEG are considered as nodes and connections between the nodes are either predefined or based on certain heuristics. However, these representations are ineffective and fail to accurately capture the brain's underlying functional networks. Embodiments of the present disclosure provide a method of training a weighted adjacency matrix and a Graph Neural Network (GNN) to accurately represent the EEG signals. The method also trains a graph, a node, and an edge classifier to perform graph classification (i.e., motor imagery classification) as well as node and edge classification. Thus, the representations generated by the GNN can additionally be used for node and edge classification, unlike state-of-the-art methods.
    Type: Application
    Filed: June 14, 2023
    Publication date: March 28, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Adarsh Anand, Kartik Muralidharan, Arpan Pal, Vivek Bangalore Sampathkumar, Ramesh Kumar Ramakrishnan
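    Illustrative sketch: a minimal PyTorch sketch of the idea described in the abstract, in which a weighted adjacency matrix is trained jointly with a GNN and with graph-, node-, and edge-level heads. The channel count, feature sizes, and head definitions are illustrative assumptions, not the patented implementation.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class LearnableAdjGNN(nn.Module):
          def __init__(self, n_channels=22, in_feats=128, hidden=64, n_classes=4):
              super().__init__()
              # Trainable weighted adjacency over EEG channels (nodes).
              self.adj_logits = nn.Parameter(torch.randn(n_channels, n_channels))
              self.w1 = nn.Linear(in_feats, hidden)
              self.w2 = nn.Linear(hidden, hidden)
              self.graph_head = nn.Linear(hidden, n_classes)  # motor-imagery classes
              self.node_head = nn.Linear(hidden, 2)           # per-node labels (assumed binary)
              self.edge_head = nn.Linear(2 * hidden, 2)       # per-edge labels (assumed binary)

          def forward(self, x):
              # x: (batch, n_channels, in_feats) per-channel EEG features
              adj = torch.softmax(self.adj_logits, dim=-1)    # learned functional connectivity
              h = F.relu(adj @ self.w1(x))                    # message passing with learned adjacency
              h = F.relu(adj @ self.w2(h))
              graph_logits = self.graph_head(h.mean(dim=1))   # graph-level (motor imagery) class
              node_logits = self.node_head(h)                 # node classification
              # Edge features: concatenate endpoint embeddings for every channel pair.
              hi = h.unsqueeze(2).expand(-1, -1, h.size(1), -1)
              hj = h.unsqueeze(1).expand(-1, h.size(1), -1, -1)
              edge_logits = self.edge_head(torch.cat([hi, hj], dim=-1))
              return graph_logits, node_logits, edge_logits

      model = LearnableAdjGNN()
      g, n, e = model(torch.randn(8, 22, 128))  # 8 trials, 22 channels, 128 features per channel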
  • Publication number: 20240020962
    Abstract: The disclosure generally relates to scene graph generation. A scene graph captures the rich semantic information of an image by representing objects and their relationships as nodes and edges of a graph, and has several applications including image retrieval, action recognition, visual question answering, autonomous driving, and robotics. However, leveraging scene graphs requires computationally efficient scene graph generation methods, which are challenging to achieve because of the quadratic number of potential edges and the computationally intensive, non-scalable techniques traditionally used to detect the relationship between each object pair. The disclosure proposes a combination of an edge proposal neural network and a graph neural network with spatial message passing (GNN-SMP), along with several techniques, including a feature extraction technique, an object detection technique, an un-labelled graph generation technique, and a scene graph generation technique, to generate scene graphs.
    Type: Application
    Filed: June 29, 2023
    Publication date: January 18, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Vivek Bangalore SAMPATHKUMAR, Rajan Mindigal Alasingara BHATTACHAR, Balamuralidhar PURUSHOTHAMAN, Arpan PAL
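    Illustrative sketch: a minimal PyTorch sketch of the two-step idea in the abstract: an edge-proposal network prunes the quadratic set of object pairs, and a GNN with spatial message passing classifies relationships on the surviving edges. The feature sizes, geometry features, and threshold are illustrative assumptions.
      import torch
      import torch.nn as nn

      def box_pair_geometry(boxes):
          # boxes: (N, 4) as (x1, y1, x2, y2); returns simple pairwise spatial features.
          ctr = (boxes[:, :2] + boxes[:, 2:]) / 2
          wh = (boxes[:, 2:] - boxes[:, :2]).clamp(min=1e-6)
          dxy = ctr.unsqueeze(1) - ctr.unsqueeze(0)           # (N, N, 2) center offsets
          scale = wh.unsqueeze(1) / wh.unsqueeze(0)           # (N, N, 2) relative sizes
          return torch.cat([dxy, scale], dim=-1)              # (N, N, 4)

      class EdgeProposal(nn.Module):
          def __init__(self, feat_dim=256):
              super().__init__()
              self.score = nn.Sequential(nn.Linear(2 * feat_dim + 4, 128),
                                         nn.ReLU(), nn.Linear(128, 1))

          def forward(self, obj_feats, spatial):
              n = obj_feats.size(0)
              fi = obj_feats.unsqueeze(1).expand(n, n, -1)
              fj = obj_feats.unsqueeze(0).expand(n, n, -1)
              pair = torch.cat([fi, fj, spatial], dim=-1)
              return torch.sigmoid(self.score(pair)).squeeze(-1)  # (N, N) keep-probability per pair

      class SpatialMessagePassing(nn.Module):
          def __init__(self, feat_dim=256, n_predicates=50):
              super().__init__()
              self.msg = nn.Linear(feat_dim + 4, feat_dim)
              self.upd = nn.GRUCell(feat_dim, feat_dim)
              self.rel = nn.Linear(2 * feat_dim + 4, n_predicates)

          def forward(self, obj_feats, spatial, edge_mask):
              n = obj_feats.size(0)
              # Messages carry neighbour features plus pairwise geometry, gated by the proposals.
              fj = obj_feats.unsqueeze(0).expand(n, n, -1)
              m = self.msg(torch.cat([fj, spatial], dim=-1)) * edge_mask.unsqueeze(-1)
              h = self.upd(m.sum(dim=1), obj_feats)            # update each node
              fi = h.unsqueeze(1).expand(n, n, -1)
              fj = h.unsqueeze(0).expand(n, n, -1)
              return self.rel(torch.cat([fi, fj, spatial], dim=-1))  # (N, N, n_predicates)

      boxes, feats = torch.rand(6, 4) * 100, torch.randn(6, 256)  # 6 detected objects
      spatial = box_pair_geometry(boxes)
      keep = EdgeProposal()(feats, spatial) > 0.5                 # prune unlikely object pairs
      rel_logits = SpatialMessagePassing()(feats, spatial, keep.float())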
  • Publication number: 20240013522
    Abstract: This disclosure relates generally to identification and mitigation of bias while training deep learning models. Conventional methods do not provide effective means of bias identification, and they require pre-defined concepts and rules for bias mitigation. The embodiments of the present disclosure train an auto-encoder to produce a generalized representation of an input image by decomposing it into a set of latent embeddings. The latent embeddings are used to learn the shape and color concepts of the input image. Feature specialization is achieved by training the auto-encoder to reconstruct the input image using the shape embedding modulated by the color embedding. To identify bias, a permutation-invariant neural network is trained on the classification task and attribution scores corresponding to each concept embedding are computed. The method also de-biases the classifier by training it with a set of counterfactual images generated by modifying the latent embeddings learned by the auto-encoder.
    Type: Application
    Filed: June 13, 2023
    Publication date: January 11, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Vartika SENGAR, Vivek Bangalore SAMPATHKUMAR, Gaurab BHATTACHARYA, Balamuralidhar PURUSHOTHAMAN, Arpan PAL
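    Illustrative sketch: a minimal PyTorch sketch of the concept-decomposition idea in the abstract: an auto-encoder splits an image into shape and color embeddings, the decoder reconstructs from the shape embedding modulated by the color embedding, a permutation-invariant classifier operates on the set of concept embeddings, and counterfactuals are produced by swapping color embeddings between images. The layer sizes and the modulation scheme are illustrative assumptions.
      import torch
      import torch.nn as nn

      class ConceptAutoEncoder(nn.Module):
          def __init__(self, dim=64):
              super().__init__()
              self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                       nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
              self.shape_head = nn.Linear(64, dim)    # shape concept embedding
              self.color_head = nn.Linear(64, dim)    # color concept embedding
              self.film = nn.Linear(dim, 2 * dim)     # color modulates shape (scale, shift)
              self.dec = nn.Sequential(nn.Linear(dim, 8 * 8 * 3), nn.Sigmoid())

          def encode(self, x):
              h = self.enc(x)
              return self.shape_head(h), self.color_head(h)

          def decode(self, shape, color):
              scale, shift = self.film(color).chunk(2, dim=-1)
              return self.dec(shape * scale + shift).view(-1, 3, 8, 8)

          def forward(self, x):
              shape, color = self.encode(x)
              return self.decode(shape, color), shape, color

      class PermutationInvariantClassifier(nn.Module):
          # Treats the set of concept embeddings symmetrically (sum pooling), so per-concept
          # attribution scores can be read off by ablating one concept embedding at a time.
          def __init__(self, dim=64, n_classes=10):
              super().__init__()
              self.phi = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
              self.rho = nn.Linear(64, n_classes)

          def forward(self, concepts):                # concepts: (batch, n_concepts, dim)
              return self.rho(self.phi(concepts).sum(dim=1))

      ae, clf = ConceptAutoEncoder(), PermutationInvariantClassifier()
      x = torch.rand(4, 3, 8, 8)
      recon, shape, color = ae(x)
      logits = clf(torch.stack([shape, color], dim=1))
      # Counterfactual images for de-biasing: keep the shape, borrow color from another sample.
      counterfactual = ae.decode(shape, color.roll(1, dims=0))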
  • Publication number: 20230047937
    Abstract: The disclosure herein relates to methods and systems for generating an end-to-end de-smoking model for removing smoke present in a video. Conventional data-driven de-smoking approaches are limited mainly by the lack of suitable training data. Further, these approaches are not end-to-end for removing the smoke present in the video. The de-smoking model of the present disclosure is trained end-to-end with synthesized smoky video frames obtained by a source-aware smoke synthesis approach. The end-to-end de-smoking model localizes and removes the smoke present in the video using the dynamic properties of the smoke. Hence, the model simultaneously identifies the regions affected by the smoke and performs the de-smoking with minimal artifacts.
    Type: Application
    Filed: December 16, 2021
    Publication date: February 16, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Vartika Sengar, Vivek Bangalore Sampathkumar, Aparna Kanakatte Gurumurthy, Murali Poduval, Balamuralidhar Purushothaman, Karthik Seemakurthy, Avik Ghose, Srinivasan Jayaraman
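    Illustrative sketch: a minimal PyTorch sketch of the training setup in the abstract: smoky frames are synthesised by compositing smoke onto clean frames (a crude stand-in for the source-aware smoke synthesis), and an encoder-decoder is trained end-to-end to predict both a smoke mask (localization) and the clean frame. The synthesis model and network layout are illustrative assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      def synthesize_smoke(clean, density=0.5):
          # Very simple smoke compositing: blend a blurred random gray layer over the frame.
          smoke = torch.rand_like(clean[:, :1]).repeat(1, 3, 1, 1)
          smoke = F.avg_pool2d(smoke, 7, stride=1, padding=3)   # soften into smoke-like blobs
          alpha = density * smoke[:, :1]                        # per-pixel opacity (the "mask")
          return (1 - alpha) * clean + alpha, alpha

      class DeSmokeNet(nn.Module):
          def __init__(self):
              super().__init__()
              self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
              self.mask_head = nn.Conv2d(32, 1, 3, padding=1)   # where the smoke is
              self.rgb_head = nn.Conv2d(32, 3, 3, padding=1)    # de-smoked frame

          def forward(self, x):
              h = self.backbone(x)
              return torch.sigmoid(self.mask_head(h)), torch.sigmoid(self.rgb_head(h))

      net = DeSmokeNet()
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      clean = torch.rand(2, 3, 64, 64)                          # stand-in clean video frames
      smoky, mask_gt = synthesize_smoke(clean)
      mask_pred, desmoked = net(smoky)
      loss = F.l1_loss(desmoked, clean) + F.binary_cross_entropy(mask_pred, mask_gt)
      loss.backward(); opt.step()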
  • Publication number: 20220366618
    Abstract: The disclosure herein relates to methods and systems for localized smoke removal and color restoration of a real-time video. Conventional techniques apply the de-smoking process only to a single image, finding the smoky regions based on manual air-light estimation. In addition, regaining the original colors of a de-smoked image is quite challenging. The present disclosure solves these technical problems in three stages. In the first stage, smoky and smoke-free video frames are identified from the video received in real time. In the second stage, the air-light is estimated automatically using a combined feature map, and an intermediate de-smoked video frame is generated for each smoky video frame based on the air-light using a de-smoking algorithm. In the third and final stage, a smoke-free reference video frame is used to compensate for the color distortions introduced by the de-smoking algorithm in the second stage.
    Type: Application
    Filed: December 20, 2021
    Publication date: November 17, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Karthik Seemakurthy, Vartika Sengar, Aparna Kanakatte Gurumurthy, Avik Ghose, Balamuralidhar Purushothaman, Murali Poduval, Jayeeta Saha, Srinivasan Jayaraman, Vivek Bangalore Sampathkumar
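    Illustrative sketch: a minimal NumPy sketch of the three-stage pipeline in the abstract: (1) flag smoky frames, (2) estimate the air-light from a combined feature map and de-smoke via a simple atmospheric scattering model, and (3) restore color against a smoke-free reference frame. The feature choices and thresholds are illustrative assumptions, not the patented algorithm.
      import numpy as np

      def is_smoky(frame, sat_thresh=0.25):
          # Smoke lowers saturation; a simple proxy for the smoky / smoke-free split.
          sat = frame.max(axis=-1) - frame.min(axis=-1)
          return sat.mean() < sat_thresh

      def estimate_airlight(frame):
          # "Combined feature map": here simply brightness x (1 - saturation);
          # the air-light is averaged over the pixels where that map is highest.
          brightness = frame.mean(axis=-1)
          sat = frame.max(axis=-1) - frame.min(axis=-1)
          score = brightness * (1 - sat)
          idx = np.unravel_index(np.argsort(score, axis=None)[-100:], score.shape)
          return frame[idx].mean(axis=0)                        # (3,) estimated air-light

      def desmoke(frame, airlight, omega=0.9, t_min=0.1):
          # Atmospheric scattering model: I = J*t + A*(1 - t), solved for J.
          dark = (frame / airlight).min(axis=-1)
          t = np.clip(1 - omega * dark, t_min, 1.0)[..., None]
          return np.clip((frame - airlight) / t + airlight, 0, 1)

      def restore_color(desmoked, reference):
          # Match per-channel mean/std to a smoke-free reference frame.
          out = desmoked.copy()
          for c in range(3):
              d, r = desmoked[..., c], reference[..., c]
              out[..., c] = np.clip((d - d.mean()) / (d.std() + 1e-6) * r.std() + r.mean(), 0, 1)
          return out

      video = np.random.rand(10, 120, 160, 3)                   # stand-in real-time frames in [0, 1]
      video[0] = 0.3 * video[0] + 0.6                           # simulate one hazy (smoky) frame
      reference = next(f for f in video if not is_smoky(f))     # assumes a smoke-free frame exists
      cleaned = [restore_color(desmoke(f, estimate_airlight(f)), reference) if is_smoky(f) else f
                 for f in video]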