Patents by Inventor Jeet Dutta

Jeet Dutta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967133
    Abstract: Embodiments of the present disclosure provide a method and system for co-operative and cascaded inference on an edge device using an integrated Deep Learning (DL) model for object detection and localization. The model comprises a strong classifier trained on widely available datasets and a weak localizer trained on scarcely available datasets, which work in coordination to first detect the object of interest (fire) in every input frame using the classifier and then trigger the localizer only for the frames classified as fire frames. The classifier and the localizer of the integrated DL model are jointly trained using a Multitask Learning approach. Works in the literature hardly address the technical challenge of embedding such an integrated DL model for deployment on edge devices. The method provides an optimal hardware-software partitioning approach for the components or segments of the integrated DL model that achieves a trade-off between latency and accuracy in object classification and localization. A simplified sketch of the cascaded inference loop follows this entry.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: April 23, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swarnava Dey, Jayeeta Mondal, Jeet Dutta, Arpan Pal, Arijit Mukherjee, Balamuralidhar Purushothaman
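    A minimal sketch of the cascaded inference loop described above, assuming PyTorch and user-supplied classifier and localizer modules; the class name, single-logit classifier output, and threshold are illustrative assumptions, not the patented hardware-software partitioning.

      # Illustrative sketch only: a classifier gates the more expensive localizer,
      # which runs solely on frames classified as containing fire.
      import torch

      class CascadedFireDetector(torch.nn.Module):
          def __init__(self, classifier, localizer, fire_threshold=0.5):
              super().__init__()
              self.classifier = classifier      # strong classifier, trained on abundant data
              self.localizer = localizer        # weak localizer, trained on scarce data
              self.fire_threshold = fire_threshold

          @torch.no_grad()
          def forward(self, frame):
              # Stage 1: classify every incoming frame (assumes a single fire/no-fire logit).
              fire_prob = torch.sigmoid(self.classifier(frame)).item()
              if fire_prob < self.fire_threshold:
                  return fire_prob, None        # non-fire frame: skip the localizer
              # Stage 2: localize only the frames classified as fire.
              return fire_prob, self.localizer(frame)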
  • Publication number: 20240046099
    Abstract: This disclosure relates generally to a method and system for joint pruning and hardware acceleration of pre-trained deep learning models. The present disclosure enables pruning the layers of a plurality of DNN models using an optimal pruning ratio. The method processes a pruning request to transform the plurality of DNN models and the plurality of hardware accelerators into a plurality of pruned, hardware-accelerated DNN models based on at least one user option. The first pruning search option executes a hardware pruning search technique that searches over each DNN model and each processor based on at least one of a performance indicator and an optimal pruning ratio. The second pruning search option executes an optimal pruning search technique that searches over each layer with its corresponding pruning ratio. A simplified sketch of a per-layer pruning-ratio search follows this entry.
    Type: Application
    Filed: July 18, 2023
    Publication date: February 8, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Jeet Dutta, Arpan Pal, Arijit Mukherjee, Swarnava Dey
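    A toy sketch of a per-layer pruning-ratio search in the spirit of the second search option above; the exhaustive enumeration, accuracy/latency budgets, and the evaluate callback (a user-supplied benchmark on the target hardware) are assumptions for illustration, not the claimed technique.

      # Illustrative only: pick the per-layer pruning ratios that maximise sparsity
      # while keeping measured accuracy and latency within user budgets.
      from itertools import product

      def search_pruning_ratios(layers, candidate_ratios, evaluate,
                                min_accuracy=0.90, max_latency_ms=50.0):
          best = None
          for ratios in product(candidate_ratios, repeat=len(layers)):
              plan = dict(zip(layers, ratios))
              accuracy, latency_ms = evaluate(plan)     # user-supplied benchmark
              if accuracy < min_accuracy or latency_ms > max_latency_ms:
                  continue
              sparsity = sum(ratios) / len(ratios)      # proxy for model-size reduction
              if best is None or sparsity > best[0]:
                  best = (sparsity, plan)
          return best[1] if best else None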
  • Publication number: 20230334330
    Abstract: State-of-the-art techniques handle multiple objectives such as accuracy and latency; however, their reward functions are static and not tunable at the user end. Further, for neural network search under hardware constraints, existing approaches combine techniques such as Reinforcement Learning and Evolutionary Algorithms (EA), yet hardly any work combines different NAS approaches in unison so that one reduces the search space of the other. Embodiments of the present disclosure provide a method and system for automated creation of tiny Deep Learning (DL) models to be deployed on a platform having a set of hardware constraints. The method performs a coarse-grained search using a Fast EA NAS model and then utilizes a fine-grained search to identify a customized and optimized tiny model. The coarse-grained and fine-grained searches are performed by agents based on a weighted multi-objective reward function whose weights are tunable at the user end. A simplified sketch of such a reward function follows this entry.
    Type: Application
    Filed: March 2, 2023
    Publication date: October 19, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Shalinia Mukhopadhyay, Rajib Lochan C Jana, Avik Ghose, Swarnava Dey, Jeet Dutta
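    A small sketch of a user-tunable weighted multi-objective reward of the kind the abstract mentions; the particular metrics, default weights, and budget-based penalties are assumptions, not the patented reward function.

      # Illustrative only: reward accuracy, penalise exceeding latency and size budgets,
      # with weights the user can tune to steer both the coarse and fine NAS stages.
      def reward(accuracy, latency_ms, model_size_kb,
                 weights=(1.0, 0.5, 0.5),
                 latency_budget_ms=100.0, size_budget_kb=256.0):
          w_acc, w_lat, w_size = weights                # tunable at the user end
          latency_penalty = max(0.0, latency_ms / latency_budget_ms - 1.0)
          size_penalty = max(0.0, model_size_kb / size_budget_kb - 1.0)
          return w_acc * accuracy - w_lat * latency_penalty - w_size * size_penalty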
  • Patent number: 11735166
    Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices, such as edge devices in the Internet of Things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition have a large size and are not suitable for deployment on such devices. The present disclosure provides an architecture of a size-constrained neural network and a method of training it. The architecture provides a way of increasing or decreasing the number of feature blocks to achieve an accuracy versus model-size trade-off. The method of training the size-constrained neural network comprises creating a training dataset with short utterances and training the network on this dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment in resource-constrained devices. A simplified sketch of the block-count trade-off follows this entry.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: August 22, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swarnava Dey, Jeet Dutta
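    A minimal sketch of a network whose size is controlled by the number of feature blocks, in the spirit of the abstract above; the convolutional block design, channel widths, and keyword count are assumptions, not the patented architecture.

      # Illustrative only: more feature blocks -> higher accuracy but a larger model.
      import torch.nn as nn

      def make_model(num_feature_blocks=3, in_channels=40,
                     hidden_channels=32, num_keywords=10):
          blocks, channels = [], in_channels
          for _ in range(num_feature_blocks):
              blocks += [nn.Conv1d(channels, hidden_channels, kernel_size=3, padding=1),
                         nn.BatchNorm1d(hidden_channels),
                         nn.ReLU()]
              channels = hidden_channels
          return nn.Sequential(*blocks,
                               nn.AdaptiveAvgPool1d(1),   # pool over time (short utterances)
                               nn.Flatten(),
                               nn.Linear(hidden_channels, num_keywords))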
  • Publication number: 20220375199
    Abstract: Embodiments of the present disclosure provide a method and system for co-operative and cascaded inference on an edge device using an integrated Deep Learning (DL) model for object detection and localization. The model comprises a strong classifier trained on widely available datasets and a weak localizer trained on scarcely available datasets, which work in coordination to first detect the object of interest (fire) in every input frame using the classifier and then trigger the localizer only for the frames classified as fire frames. The classifier and the localizer of the integrated DL model are jointly trained using a Multitask Learning approach. Works in the literature hardly address the technical challenge of embedding such an integrated DL model for deployment on edge devices. The method provides an optimal hardware-software partitioning approach for the components or segments of the integrated DL model that achieves a trade-off between latency and accuracy in object classification and localization.
    Type: Application
    Filed: October 12, 2021
    Publication date: November 24, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Jayeeta Mondal, Jeet Dutta, Arpan Pal, Arijit Mukherjee, Balamuralidhar Purushothaman
  • Publication number: 20220157297
    Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices, such as edge devices in the Internet of Things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition have a large size and are not suitable for deployment on such devices. The present disclosure provides an architecture of a size-constrained neural network and a method of training it. The architecture provides a way of increasing or decreasing the number of feature blocks to achieve an accuracy versus model-size trade-off. The method of training the size-constrained neural network comprises creating a training dataset with short utterances and training the network on this dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment in resource-constrained devices.
    Type: Application
    Filed: June 29, 2021
    Publication date: May 19, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Jeet Dutta