Patents by Inventor Anima MAJUMDER

Anima MAJUMDER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230153607
    Abstract: The present disclosure provides an adaptive meta-learning technique for determining robotic action. Conventional methods focus on task-relevant aspects of the input observations and fail to provide adaptive learning. Initially, a plurality of images pertaining to a visual demonstration for a robot are received by the system. Further, a plurality of vector embeddings are computed based on the plurality of images using an attentive embedding network. The attentive embedding network includes a first Convolutional Neural Network (CNN), a fully connected layer and a plurality of spatial attention modules. Finally, a control action is computed based on the plurality of vector embeddings, an image from the plurality of images, a robot joint state vector and a robot joint velocity vector using a control network. The control network comprises a second CNN and a plurality of fully connected layers. The control network is connected to the attentive embedding network using multiplicative spatial skip connections (see the illustrative sketch after this listing).
    Type: Application
    Filed: October 21, 2022
    Publication date: May 18, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Vishal Kumar BHUTANI, Anima Majumder DUTTA, Rajesh SINHA, Samrat DUTTA
  • Publication number: 20220130062
    Abstract: Depth estimation from images using deep learning methods has a wide range of applications in Augmented Reality, 3D graphics and robotics. Conventional methods are supervised, requiring explicit ground-truth depth information for training, and conventional unsupervised methods fail to provide a generalized solution. The present disclosure estimates accurate depth information and a confidence map of a given monocular image in an unsupervised manner. A depth Neural Network (NN) receives a monocular image and predicts a per-pixel depth map and a confidence map. The depth NN utilizes a negative exponential of the photometric loss as ground-truth information. The predicted confidence map is further used to estimate a per-pixel uncertainty map. A pose NN predicts a plurality of pose vectors between consecutive monocular images. Finally, a Bayesian inference module computes the fused depth information and the fused uncertainty map (see the illustrative sketch after this listing).
    Type: Application
    Filed: October 21, 2021
    Publication date: April 28, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Vishal Kumar BHUTANI, Madhu Babu VANKADARI, Anima Majumder DUTTA, Omprakash Manojkumar JHA, Samrat DUTTA
  • Publication number: 20220044065
    Abstract: This disclosure relates generally to a system and method for parameter compression of capsule networks using deep features. Conventional capsule networks have the distinct capability of retaining spatial correlations between extracted features, but that comes at the cost of intensive computation, memory usage and bandwidth requirements. The embodiments herein disclose a system and method for employing a lightweight deep-feature-based capsule network that is capable of compressing the parameters. In an embodiment, the system includes a deep-feature-based capsule network in which the capsule layer is preceded by feature blocks. Said feature blocks comprise a convolutional operation with kernel size 3, followed by a convolutional operation with kernel size 1 and a Batch Normalization layer, and hence are able to extract deep features (see the illustrative sketch after this listing).
    Type: Application
    Filed: July 16, 2021
    Publication date: February 10, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Chandan Kumar SINGH, Vivek Kumar GANGWAR, Anima MAJUMDER, Prakash Chanderlal AMBWANI, Rajesh SINHA
  • Patent number: 10936905
    Abstract: Object annotation in images is a tedious, time-consuming task when a large volume of data needs to be annotated. Existing methods are limited to semi-automatic approaches for annotation. The embodiments herein provide a method and system for a deep-network-based architecture for automatic object annotation. The deep network utilized is a two-stage network, with the first stage being an annotation model comprising a Faster Region-based Fully Convolutional Network (F-RCNN) and a Region-based Fully Convolutional Network (RFCN) providing two-class classification to generate annotated images from a set of single-object test images. Further, the newly annotated test object images are used to synthetically generate cluttered images and their corresponding annotations, which are used to train the second stage of the deep network, comprising the multi-class object detection/classification model designed using the F-RCNN and the RFCN as base networks, to automatically annotate input test images in real time (see the illustrative sketch after this listing).
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: March 2, 2021
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Chandan Kumar Singh, Anima Majumder, Swagat Kumar, Laxmidhar Behera
  • Publication number: 20200193222
    Abstract: Object annotation in images is a tedious, time-consuming task when a large volume of data needs to be annotated. Existing methods are limited to semi-automatic approaches for annotation. The embodiments herein provide a method and system for a deep-network-based architecture for automatic object annotation. The deep network utilized is a two-stage network, with the first stage being an annotation model comprising a Faster Region-based Fully Convolutional Network (F-RCNN) and a Region-based Fully Convolutional Network (RFCN) providing two-class classification to generate annotated images from a set of single-object test images. Further, the newly annotated test object images are used to synthetically generate cluttered images and their corresponding annotations, which are used to train the second stage of the deep network, comprising the multi-class object detection/classification model designed using the F-RCNN and the RFCN as base networks, to automatically annotate input test images in real time.
    Type: Application
    Filed: July 5, 2019
    Publication date: June 18, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Chandan Kumar SINGH, Anima MAJUMDER, Swagat KUMAR, Laxmidhar BEHERA
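The abstracts above describe architectures without publishing any code; the sketches below are illustrative only. The first is a minimal PyTorch sketch of the two-network layout in publication 20230153607: an attentive embedding network (first CNN, spatial attention, fully connected layer) feeding a control network through a multiplicative spatial skip connection. All module and function names, layer sizes, the single attention module standing in for the "plurality" of modules, and the exact form of the skip connection are assumptions, not the patented implementation.

```python
# Illustrative sketch of the two-network layout in publication 20230153607.
# All names, layer sizes, and the form of the attention/skip connection are
# assumptions; the patent does not publish this code.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Assumed spatial attention: a 1x1 conv producing a per-pixel gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat):
        return feat * self.gate(feat)           # per-pixel reweighting


class AttentiveEmbeddingNet(nn.Module):
    """First CNN + spatial attention + fully connected layer."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.attn = SpatialAttention(64)
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, demo_img):
        feat = self.attn(self.cnn(demo_img))    # attended spatial features
        embed = self.fc(feat.mean(dim=(2, 3)))  # pooled vector embedding
        return embed, feat


class ControlNet(nn.Module):
    """Second CNN + fully connected layers; consumes the current image, joint
    state/velocity, and the demonstration embedding; the attended feature map
    enters through a multiplicative spatial skip connection."""
    def __init__(self, joint_dim=7, embed_dim=64, action_dim=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(64 + embed_dim + 2 * joint_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim))

    def forward(self, img, joint_state, joint_vel, embed, skip_feat):
        feat = self.cnn(img) * skip_feat        # multiplicative spatial skip
        pooled = feat.mean(dim=(2, 3))
        return self.head(torch.cat([pooled, embed, joint_state, joint_vel], dim=1))


if __name__ == "__main__":
    demo, current = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    q, dq = torch.randn(1, 7), torch.randn(1, 7)
    embed_net, ctrl_net = AttentiveEmbeddingNet(), ControlNet()
    embedding, attended = embed_net(demo)
    action = ctrl_net(current, q, dq, embedding, attended)
    print(action.shape)                          # torch.Size([1, 7])
```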
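Next, a minimal sketch of the unsupervised depth pipeline in publication 20220130062: a depth network with depth and confidence heads, a confidence target taken as the negative exponential of the photometric loss, and an inverse-variance weighted fusion standing in for the Bayesian inference module. Layer sizes, the confidence-to-uncertainty mapping, the fusion rule, and all names are assumptions for illustration; the pose network is omitted.

```python
# Illustrative sketch for publication 20220130062: depth + confidence heads,
# confidence supervised by exp(-photometric loss), and a Bayesian-style
# inverse-variance fusion of per-pixel depths. All details are assumptions.
import torch
import torch.nn as nn


class DepthConfidenceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.depth_head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())
        self.conf_head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, img):
        feat = self.encoder(img)
        return self.depth_head(feat), self.conf_head(feat)


def confidence_target(photometric_loss):
    # Negative exponential of the per-pixel photometric loss serves as the
    # "ground truth" for the confidence map, as stated in the abstract.
    return torch.exp(-photometric_loss)


def bayesian_fuse(depths, uncertainties, eps=1e-6):
    # Inverse-variance weighted fusion of several per-pixel depth estimates:
    # one plausible reading of the "Bayesian inference module".
    weights = 1.0 / (uncertainties + eps)
    fused_depth = (weights * depths).sum(dim=0) / weights.sum(dim=0)
    fused_uncertainty = 1.0 / weights.sum(dim=0)
    return fused_depth, fused_uncertainty


if __name__ == "__main__":
    net = DepthConfidenceNet()
    depth, conf = net(torch.randn(1, 3, 64, 64))
    unc = 1.0 - conf                     # assumed confidence-to-uncertainty map
    fused_d, fused_u = bayesian_fuse(torch.stack([depth, depth * 1.05]),
                                     torch.stack([unc, unc * 2.0]))
    print(fused_d.shape, fused_u.shape)
```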
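The feature block in publication 20220044065 is specified as a kernel-size-3 convolution followed by a kernel-size-1 convolution and Batch Normalization. The sketch below assembles such blocks in front of a placeholder primary-capsule layer; the channel counts, number of blocks, activation placement, and the capsule layer itself are assumptions, and no routing is shown.

```python
# Illustrative sketch of the deep-feature block in publication 20220044065:
# Conv(k=3) -> Conv(k=1) -> BatchNorm, stacked before a placeholder capsule
# layer. Channel counts, block count, and the capsule layer are assumptions.
import torch
import torch.nn as nn


def feature_block(in_ch, out_ch):
    """Conv(k=3) -> Conv(k=1) -> BatchNorm, as described in the abstract."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU())


class DeepFeatureCapsuleFrontEnd(nn.Module):
    """Feature blocks preceding a (placeholder) primary-capsule layer."""
    def __init__(self, capsule_dim=8, num_capsule_maps=16):
        super().__init__()
        self.blocks = nn.Sequential(
            feature_block(1, 32),
            feature_block(32, 64))
        # Placeholder primary capsules: a conv whose channels are reshaped
        # into capsule vectors; routing layers would follow in a full model.
        self.primary = nn.Conv2d(64, capsule_dim * num_capsule_maps,
                                 kernel_size=3, stride=2, padding=1)
        self.capsule_dim = capsule_dim

    def forward(self, x):
        feat = self.blocks(x)
        caps = self.primary(feat)
        b, c, h, w = caps.shape
        return caps.view(b, c // self.capsule_dim, self.capsule_dim, h, w)


if __name__ == "__main__":
    model = DeepFeatureCapsuleFrontEnd()
    out = model(torch.randn(2, 1, 28, 28))
    print(out.shape)  # torch.Size([2, 16, 8, 14, 14])
```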
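Finally, for patent 10936905 / publication 20200193222, the sketch below illustrates only the clutter-synthesis step: boxes produced by the stage-1 two-class annotator on single-object images are used to crop objects and paste them onto a background, yielding cluttered images with known annotations for training the stage-2 multi-class detector. The placement policy, data layout, and function names are assumptions; the F-RCNN/RFCN detectors themselves are not sketched.

```python
# Illustrative sketch of the clutter-synthesis step in the two-stage pipeline
# of patent 10936905 / publication 20200193222. Placement policy, data layout,
# and all function names are assumptions for illustration only.
import random
import numpy as np


def synthesize_cluttered_image(background, annotated_singles, rng=random):
    """background: HxWx3 uint8 array.
    annotated_singles: list of (image, (x1, y1, x2, y2), class_id) tuples
    produced by the stage-1 annotator on single-object images.
    Returns the composited image and its synthetic annotations."""
    canvas = background.copy()
    bg_h, bg_w = canvas.shape[:2]
    annotations = []
    for img, (x1, y1, x2, y2), class_id in annotated_singles:
        crop = img[y1:y2, x1:x2]
        h, w = crop.shape[:2]
        if h >= bg_h or w >= bg_w:
            continue                       # skip crops larger than the canvas
        ox = rng.randint(0, bg_w - w)      # random placement in the clutter
        oy = rng.randint(0, bg_h - h)
        canvas[oy:oy + h, ox:ox + w] = crop
        annotations.append({"bbox": (ox, oy, ox + w, oy + h), "class": class_id})
    return canvas, annotations


if __name__ == "__main__":
    bg = np.zeros((480, 640, 3), dtype=np.uint8)
    single = np.full((480, 640, 3), 255, dtype=np.uint8)
    cluttered, anns = synthesize_cluttered_image(
        bg, [(single, (100, 100, 200, 180), 0), (single, (50, 60, 150, 160), 1)])
    print(cluttered.shape, anns)
```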