Patents by Inventor Swarnava Dey

Swarnava Dey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967133
    Abstract: Embodiments of the present disclosure provide a method and system for co-operative and cascaded inference on edge devices using an integrated Deep Learning (DL) model for object detection and localization, which comprises a strong classifier trained on widely available datasets and a weak localizer trained on scarcely available datasets, which work in coordination to first detect the object (fire) in every input frame using the classifier and then trigger the localizer only for the frames classified as fire frames. The classifier and the localizer of the integrated DL model are jointly trained using a multitask learning approach. Works in the literature hardly address the technical challenge of embedding such an integrated DL model for deployment on edge devices. The method provides an optimal hardware-software partitioning approach for components or segments of the integrated DL model, which achieves a trade-off between latency and accuracy in object classification and localization.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: April 23, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swarnava Dey, Jayeeta Mondal, Jeet Dutta, Arpan Pal, Arijit Mukherjee, Balamuralidhar Purushothaman
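    Illustrative sketch: the cascade above runs a cheap classifier on every frame and triggers the localizer only for frames classified as fire. The classifier, localizer and threshold below are hypothetical stand-ins, not the patented model; this is a minimal sketch of the control flow only.

      import numpy as np

      def classifier(frame: np.ndarray) -> float:
          # Stub for the strong classifier: returns a fire-probability score.
          return float(frame.mean() > 0.7)  # placeholder logic

      def localizer(frame: np.ndarray) -> tuple:
          # Stub for the weak localizer: returns a bounding box (x, y, w, h).
          return (10, 10, 32, 32)  # placeholder box

      def cascaded_inference(frames, threshold: float = 0.5):
          # Run the classifier on every frame; invoke the localizer only
          # for frames whose score crosses the fire threshold.
          results = []
          for frame in frames:
              score = classifier(frame)
              box = localizer(frame) if score >= threshold else None
              results.append({"fire_score": score, "bbox": box})
          return results

      if __name__ == "__main__":
          frames = [np.random.rand(64, 64) for _ in range(4)]
          for result in cascaded_inference(frames):
              print(result)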
  • Publication number: 20240046099
    Abstract: This disclosure relates generally to a method and system for joint pruning and hardware acceleration of pre-trained deep learning models. The present disclosure enables pruning the layers of a plurality of DNN models using an optimal pruning ratio. The method processes a pruning request to transform the plurality of DNN models and the plurality of hardware accelerators into a plurality of pruned, hardware-accelerated DNN models based on at least one user option. The first pruning search option executes a hardware pruning search technique to perform a search on each DNN model and each processor based on at least one of a performance indicator and an optimal pruning ratio. The second pruning search option executes an optimal pruning search technique to perform a search on each layer with a corresponding pruning ratio.
    Type: Application
    Filed: July 18, 2023
    Publication date: February 8, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: JEET DUTTA, Arpan PAL, ARIJIT MUKHERJEE, SWARNAVA DEY
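    Illustrative sketch: a brute-force search over per-layer pruning ratios that keeps the most accurate configuration satisfying a latency budget on the target accelerator. The accuracy and latency estimators are made-up stand-ins for profiling on real hardware; this is a sketch of the search loop, not the disclosed technique.

      import itertools

      CANDIDATE_RATIOS = [0.0, 0.25, 0.5, 0.75]

      def estimate_accuracy(ratios):
          # Stub: accuracy degrades as more weights are pruned.
          return 0.95 - 0.1 * sum(ratios) / len(ratios)

      def estimate_latency_ms(ratios):
          # Stub: latency on the target accelerator shrinks with pruning.
          return 20.0 * (1.0 - 0.6 * sum(ratios) / len(ratios))

      def search_pruning_ratios(num_layers, latency_budget_ms):
          best, best_acc = None, -1.0
          for ratios in itertools.product(CANDIDATE_RATIOS, repeat=num_layers):
              if estimate_latency_ms(ratios) > latency_budget_ms:
                  continue  # violates the performance indicator
              acc = estimate_accuracy(ratios)
              if acc > best_acc:
                  best, best_acc = ratios, acc
          return best, best_acc

      print(search_pruning_ratios(num_layers=3, latency_budget_ms=15.0))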
  • Publication number: 20230334330
    Abstract: State-of-the-art techniques refer to handling multiple objectives such as accuracy and latency. However, the reward functions are static and not tunable at the user end. Further, for NN search with hardware constraints, approaches combine various techniques such as reinforcement learning and Evolutionary Algorithms (EA); however, hardly any work discloses combining different NAS approaches in unison so that one reduces the search space of the other. Embodiments of the present disclosure provide a method and system for automated creation of tiny Deep Learning (DL) models to be deployed on a platform having a set of hardware constraints. The method performs a coarse-grained search using a Fast EA NAS model and then utilizes a fine-grained search to identify a customized and optimized tiny model. The coarse-grained search and the fine-grained search are performed by agents based on a weighted multi-objective reward function, which is tunable at the user end.
    Type: Application
    Filed: March 2, 2023
    Publication date: October 19, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SHALINIA MUKHOPADHYAY, RAJIB LOCHAN C JANA, AVIK GHOSE, SWARNAVA DEY, JEET DUTTA
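    Illustrative sketch: a weighted multi-objective reward of the kind that could guide both the coarse-grained EA search and the fine-grained search, with weights exposed to the user. The weight names, normalisation and budgets are assumptions for illustration.

      def reward(accuracy, latency_ms, model_size_kb,
                 w_acc=1.0, w_lat=0.5, w_size=0.5,
                 latency_budget_ms=50.0, size_budget_kb=256.0):
          # Higher is better: reward accuracy, penalise latency and model size
          # relative to the target platform's hardware constraints.
          lat_term = max(0.0, 1.0 - latency_ms / latency_budget_ms)
          size_term = max(0.0, 1.0 - model_size_kb / size_budget_kb)
          return w_acc * accuracy + w_lat * lat_term + w_size * size_term

      # The user can re-weight objectives without changing the search agents.
      print(reward(accuracy=0.91, latency_ms=32.0, model_size_kb=180.0))
      print(reward(accuracy=0.91, latency_ms=32.0, model_size_kb=180.0, w_size=2.0))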
  • Patent number: 11735166
    Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices such as edge devices in the Internet of Things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition have a large size and are not suitable for deployment on such devices. The present disclosure provides an architecture of a size-constrained neural network and a method of training the size-constrained neural network. The architecture of the size-constrained neural network provides a way of increasing or decreasing the number of feature blocks to achieve an accuracy-model size trade-off. The method of training the size-constrained neural network comprises creating a training dataset with short utterances and training the size-constrained neural network on that dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment on resource-constrained devices.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: August 22, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swarnava Dey, Jeet Dutta
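    Illustrative sketch: a small keyword-spotting style network whose depth is set by a num_feature_blocks knob, showing how adding or removing feature blocks trades accuracy against model size. The layer choices and sizes are assumptions, not the patented architecture.

      import torch.nn as nn

      class TinyKWS(nn.Module):
          def __init__(self, num_feature_blocks: int, n_mels: int = 40, n_classes: int = 12):
              super().__init__()
              blocks, channels = [], n_mels
              for _ in range(num_feature_blocks):
                  blocks += [nn.Conv1d(channels, 32, kernel_size=3, padding=1),
                             nn.ReLU(),
                             nn.MaxPool1d(2)]
                  channels = 32
              self.features = nn.Sequential(*blocks)
              self.head = nn.Linear(32, n_classes)

          def forward(self, x):                  # x: (batch, n_mels, time)
              h = self.features(x).mean(dim=-1)  # global average pool over time
              return self.head(h)

      for k in (2, 4):
          n_params = sum(p.numel() for p in TinyKWS(num_feature_blocks=k).parameters())
          print(f"{k} feature blocks -> {n_params} parameters")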
  • Publication number: 20220375199
    Abstract: Embodiments of the present disclosure provide a method and system for co-operative and cascaded inference on edge devices using an integrated Deep Learning (DL) model for object detection and localization, which comprises a strong classifier trained on widely available datasets and a weak localizer trained on scarcely available datasets, which work in coordination to first detect the object (fire) in every input frame using the classifier and then trigger the localizer only for the frames classified as fire frames. The classifier and the localizer of the integrated DL model are jointly trained using a multitask learning approach. Works in the literature hardly address the technical challenge of embedding such an integrated DL model for deployment on edge devices. The method provides an optimal hardware-software partitioning approach for components or segments of the integrated DL model, which achieves a trade-off between latency and accuracy in object classification and localization.
    Type: Application
    Filed: October 12, 2021
    Publication date: November 24, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Swarnava DEY, JAYEETA MONDAL, JEET DUTTA, ARPAN PAL, ARIJIT MUKHERJEE, BALAMURALIDHAR PURUSHOTHAMAN
  • Patent number: 11488026
    Abstract: A growing need exists for inferencing to be run on fog devices in order to reduce upstream network traffic. However, being computationally constrained in nature, such devices have proved difficult targets for executing complex deep inferencing models. A system and method for partitioning a deep convolutional neural network for execution on computationally constrained devices at the network edge is provided. The system is configured to use depth-wise input partitioning of convolutional operations in a deep convolutional neural network (DCNN). The convolution operation is analyzed based on the input filter depth and the number of filters to determine the appropriate partitioning parameters according to an inference speedup method. The system uses a master-slave network for partitioning the input. Depth-wise partitioning of the input speeds up inference of the convolution operations by reducing pixel overlaps.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 1, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Arijit Mukherjee, Arpan Pal, Balamuralidhar Purushothaman
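    Illustrative sketch: depth-wise input partitioning of a single convolution, splitting the input channels among workers, convolving each slice with the matching filter slices, and summing the partial outputs at the master. NumPy stands in for the master-slave communication layer, which is omitted.

      import numpy as np

      def conv2d_valid(x, w):
          # Naive 'valid' convolution: x (C, H, W), w (F, C, k, k) -> (F, H-k+1, W-k+1).
          F, C, k, _ = w.shape
          H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
          out = np.zeros((F, H, W))
          for f in range(F):
              for i in range(H):
                  for j in range(W):
                      out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[f])
          return out

      def partitioned_conv2d(x, w, num_workers=2):
          # Split input channels across workers and sum the partial results.
          splits = np.array_split(np.arange(x.shape[0]), num_workers)
          partials = [conv2d_valid(x[idx], w[:, idx]) for idx in splits]  # "slave" work
          return np.sum(partials, axis=0)                                 # "master" merge

      x = np.random.rand(8, 16, 16)   # 8 input channels
      w = np.random.rand(4, 8, 3, 3)  # 4 filters
      assert np.allclose(partitioned_conv2d(x, w), conv2d_valid(x, w))
      print("partitioned result matches the single-node convolution")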
  • Publication number: 20220284293
    Abstract: Small and compact Deep Learning models are required for embedded AI in several domains. In many industrial use cases, there are requirements to transform already-trained models for ensemble embedded systems or to re-train them for a given deployment scenario, with limited data for transfer learning. Moreover, the hardware platforms used in embedded applications include FPGAs, AI hardware accelerators, Systems-on-Chip and on-premises computing elements (Fog/Network Edge). These are interconnected through heterogeneous buses/networks with different capacities. The method of the present disclosure finds how to automatically partition a given DNN across ensemble devices, considering the accuracy-latency-power trade-off due to intermediate compression and the effect of quantization from conversion to AI accelerator SDKs.
    Type: Application
    Filed: September 14, 2021
    Publication date: September 8, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Swarnava DEY, Arpan PAL, Gitesh KULKARNI, Chirabrata BHAUMIK, Arijit UKIL, Jayeeta MONDAL, Ishan SAHU, Aakash TYAGI, Amit SWAIN, Arijit MUKHERJEE
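    Illustrative sketch: scoring candidate split points of a DNN between an AI accelerator and a host device, combining pipeline latency, the cost of transferring intermediate activations, and an accuracy penalty per layer converted to the accelerator SDK. All numbers and the cost model are illustrative assumptions, not the disclosed partitioning method.

      LAYER_MACS   = [5e6, 20e6, 40e6, 10e6, 2e6]  # per-layer compute (MACs)
      LAYER_OUT_KB = [200, 150, 80, 20, 4]         # activation size sent if we cut after layer i

      def partition_cost(cut, acc_macs_per_ms=8e6, cpu_macs_per_ms=2e6,
                         link_kb_per_ms=20.0, w_acc=1000.0, drop_per_layer=0.002):
          # Layers [0, cut) run on the accelerator, the rest on the host CPU.
          lat_acc = sum(LAYER_MACS[:cut]) / acc_macs_per_ms
          lat_cpu = sum(LAYER_MACS[cut:]) / cpu_macs_per_ms
          lat_tx = LAYER_OUT_KB[cut - 1] / link_kb_per_ms if cut else 0.0
          acc_drop = drop_per_layer * cut  # quantization cost of SDK conversion
          return lat_acc + lat_cpu + lat_tx + w_acc * acc_drop

      best_cut = min(range(len(LAYER_MACS) + 1), key=partition_cost)
      print("place layers 0 ..", best_cut - 1, "on the accelerator")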
  • Publication number: 20220233896
    Abstract: While facing polluted environments, one does not need extreme breath-filtering protection all the time. Methods in the art hardly focus on the breathability aspect of smart masks. Embodiments of the present disclosure provide a system and method for a context-aware networked mask with dynamic air impedance for optimum breathability. The system provides a smart mask that regulates the air inlet (by modulating the position/pore size of the filtering membrane or by using resizable gel) as per the prevalent risk at that instant. The context-aware mask with a mesh network, disclosed herein, comprises sensors and a processing unit, which use contextual information and Artificial Intelligence to develop models for generating alerts, along with a mechanism for dynamically regulating the pore size to ensure breathability by adjusting air impedance.
    Type: Application
    Filed: December 7, 2021
    Publication date: July 28, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: SANJAY MADHUKAR KIMBAHUNE, AVIK GHOSE, SUJIT RAGHUNATH SHINDE, ASHISH INDANI, DEVRAJ GOULIKAR, SWARNAVA DEY
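    Illustrative sketch: a simplified version of the regulation loop, fusing sensed context into a risk score and mapping it to a membrane pore-size setpoint so that filtration increases with risk and breathability is favoured otherwise. The sensor inputs, weights and pore-size range are all made-up assumptions.

      def risk_level(pm25_ugm3, crowd_density, breathing_rate_bpm):
          # Combine context signals into a 0-1 risk score (illustrative weighting).
          return min(1.0, 0.5 * pm25_ugm3 / 150.0
                          + 0.3 * crowd_density
                          + 0.2 * breathing_rate_bpm / 40.0)

      def target_pore_size_um(risk, open_um=5.0, closed_um=0.3):
          # High risk -> small pores (maximum filtering); low risk -> large pores.
          return closed_um + (open_um - closed_um) * (1.0 - risk)

      for pm25, crowd, breath in [(20, 0.1, 16), (180, 0.8, 30)]:
          risk = risk_level(pm25, crowd, breath)
          print(f"risk {risk:.2f} -> pore size {target_pore_size_um(risk):.2f} um")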
  • Publication number: 20220157297
    Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices such as edge devices in the Internet of Things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition have a large size and are not suitable for deployment on such devices. The present disclosure provides an architecture of a size-constrained neural network and a method of training the size-constrained neural network. The architecture of the size-constrained neural network provides a way of increasing or decreasing the number of feature blocks to achieve an accuracy-model size trade-off. The method of training the size-constrained neural network comprises creating a training dataset with short utterances and training the size-constrained neural network on that dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment on resource-constrained devices.
    Type: Application
    Filed: June 29, 2021
    Publication date: May 19, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Jeet Dutta
  • Patent number: 11249488
    Abstract: A system and method for offloading scalable robotic tasks in a mobile robotics framework. The system comprises a cluster of mobile robots connected to a back-end cluster infrastructure. It receives scalable robotic tasks at a mobile robot of the cluster. The scalable robotic tasks include building a map of an unknown environment using the mobile robot, navigating the environment using the map, and localizing the mobile robot on the map. The system therefore estimates the map of an unknown environment and at the same time localizes the mobile robot on the map. Further, the system analyzes the scalable robotic tasks based on the computation, communication load and energy usage of each task. Finally, the system prioritizes the scalable robotic tasks to minimize their execution time and partitions the SLAM with computation offloading in an edge network and mobile cloud server setup.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: February 15, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Arijit Mukherjee
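    Illustrative sketch: scoring each scalable robotic task by how much is gained from offloading it, combining on-robot computation time, back-end computation time, communication load and on-robot energy, then prioritising tasks by that gain. The task profiles and weighting are assumptions for illustration.

      from dataclasses import dataclass

      @dataclass
      class RoboticTask:
          name: str
          compute_ms_local: float   # on-robot execution time
          compute_ms_remote: float  # back-end cluster execution time
          transfer_ms: float        # time to ship inputs/outputs over the network
          energy_mj_local: float    # energy spent if run on the robot

      def offload_gain(task, w_energy=0.01):
          # Positive gain -> offloading this task is worthwhile.
          return (task.compute_ms_local
                  - (task.compute_ms_remote + task.transfer_ms)
                  + w_energy * task.energy_mj_local)

      tasks = [
          RoboticTask("map_building",  120.0, 15.0, 40.0, 900.0),
          RoboticTask("localization",   30.0, 10.0, 35.0, 200.0),
          RoboticTask("navigation",     60.0, 12.0, 20.0, 400.0),
      ]
      for task in sorted(tasks, key=offload_gain, reverse=True):
          decision = "offload" if offload_gain(task) > 0 else "run locally"
          print(f"{task.name:>14}: gain {offload_gain(task):7.1f} -> {decision}")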
  • Patent number: 11141858
    Abstract: A data-driven approach for fault detection in robotic actuation is disclosed. Here, a set of robotic tasks is received and analyzed by Deep Learning (DL) analytics. The DL analytics includes a stateful Long Short-Term Memory (LSTM) network. Initially, the stateful LSTM is trained to match a set of activities associated with the robots based on a set of tasks gathered from the robots in a multi-robot environment. Here, the stateful LSTM utilizes a master-slave-framework-based load distribution technique and a probabilistic trellis approach to predict the next activity associated with the robot with minimum latency and increased accuracy. Further, the predicted next activity is compared with the actual activity of the robot to identify any faults associated with robotic actuation.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: October 12, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Avik Ghose, Swarnava Dey, Arijit Mukherjee
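    Illustrative sketch: the fault-detection loop compares a predicted next activity with the observed one and flags mismatches as possible actuation faults. The activity vocabulary and the simple cyclic predictor below are hypothetical placeholders for the trained stateful LSTM.

      ACTIVITIES = ["pick", "move", "place", "return"]

      def predict_next_activity(history):
          # Stub for the stateful LSTM: assume the nominal cycle simply repeats.
          return ACTIVITIES[(ACTIVITIES.index(history[-1]) + 1) % len(ACTIVITIES)]

      def detect_faults(observed_sequence):
          faults = []
          for i in range(1, len(observed_sequence)):
              predicted = predict_next_activity(observed_sequence[:i])
              actual = observed_sequence[i]
              if predicted != actual:
                  faults.append((i, predicted, actual))
          return faults

      trace = ["pick", "move", "place", "return", "pick", "place"]  # "move" was skipped
      for step, expected, observed in detect_faults(trace):
          print(f"step {step}: expected '{expected}', observed '{observed}' -> possible actuation fault")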
  • Patent number: 11062047
    Abstract: This disclosure relates generally to the use of distributed systems for computation, and more particularly to a method and system for optimizing computation and communication resources while preserving security in the distributed devices used for computation. In one embodiment, a system and method of utilizing a plurality of constrained edge devices for distributed computation is disclosed. The system enables integration of edge devices such as residential gateways and smartphones into a grid for distributed computation. The edge devices, with constrained bandwidth, energy, computation capabilities, or a combination thereof, are optimized dynamically based on the condition of the communication network. The system further enables scheduling and segregation of the data to be analyzed between the edge devices. The system may further be configured to preserve the privacy associated with the data while sharing the data between the plurality of devices during computation.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: July 13, 2021
    Assignee: Tata Consultancy Services Ltd.
    Inventors: Arijit Mukherjee, Soma Bandyopadhyay, Arijit Ukil, Abhijan Bhattacharyya, Swarnava Dey, Arpan Pal, Himadri Sekhar Paul
  • Patent number: 10911543
    Abstract: Cloud robotics infrastructures generally support heterogeneous services that are offered by heterogeneous resources whose reliability or availability varies widely with varying lifetime. For such systems, defining a static redundancy configuration for all services is difficult and often biased, and it is not feasible to define a redundancy configuration separately for each unique service. Therefore, the present disclosure ensures a trade-off between the two by providing an At-most M-Modular Flexible Redundancy Model, wherein an exact degree of redundancy is defined and given to each service in a heterogeneous service environment, and each task and subtask status is monitored to ensure that each subtask gets accomplished. This enables tuning of the trade-off between redundancy and cost, and the efficiency of the system is determined by estimating the number of resources utilized to complete a specific subtask and comparing that resource utilization with the exact degree of redundancy defined.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 2, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Swagata Biswas, Swarnava Dey, Arijit Mukherjee, Arpan Pal
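    Illustrative sketch: assigning each service an exact degree of redundancy, capped at M, and measuring efficiency as the ratio of the defined degree to the resources actually consumed for the subtask. The reliability-to-degree mapping and the example numbers are assumptions.

      M_MAX = 3  # upper bound on replicas per service ("at-most M-modular")

      def degree_of_redundancy(service_reliability, criticality):
          # Less reliable or more critical services get more replicas, capped at M.
          needed = round(criticality * (1.0 - service_reliability) * 2 * M_MAX)
          return max(1, min(M_MAX, needed))

      def efficiency(defined_degree, resources_used):
          return defined_degree / resources_used if resources_used else 0.0

      services = {"grasp_planning": (0.70, 1.0), "telemetry": (0.99, 0.3)}
      for name, (reliability, criticality) in services.items():
          degree = degree_of_redundancy(reliability, criticality)
          used = degree + 1  # e.g. one replica failed and had to be replaced
          print(f"{name}: redundancy degree {degree}, efficiency {efficiency(degree, used):.2f}")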
  • Patent number: 10776621
    Abstract: Signal analysis is applied in various industries and in the medical field. In signal analysis, wavelet analysis plays an important role. Wavelet analysis needs to identify a mother wavelet associated with an input signal; however, identifying the mother wavelet associated with the input signal automatically is challenging. Systems and methods of the present disclosure provide signal analysis with automatic selection of wavelets associated with the input signal. The method receives the input signal and a set of parameters associated with the signal. Further, the input signal is converted into waveforms, and the waveforms are analyzed to produce image units. The image units are then processed by a plurality of deep architectures. The deep architectures provide a set of comparison scores, and a matching wavelet family is determined by utilizing the set of comparison scores.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: September 15, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Snehasis Banerjee, Swarnava Dey, Arijit Mukherjee, Swagata Biswas
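    Illustrative sketch: aggregating the comparison scores emitted by several deep architectures for each candidate mother-wavelet family and choosing the best-scoring family. The family names and the random scoring stubs are placeholders for the actual deep architectures and image units.

      import random

      WAVELET_FAMILIES = ["haar", "db4", "sym5", "coif3", "morlet"]

      def architecture_scores(image_units, seed):
          # Stub for one deep architecture: returns a comparison score per family.
          rng = random.Random(seed)
          return {family: rng.uniform(0.0, 1.0) for family in WAVELET_FAMILIES}

      def select_wavelet(image_units, num_architectures=3):
          totals = {family: 0.0 for family in WAVELET_FAMILIES}
          for seed in range(num_architectures):
              for family, score in architecture_scores(image_units, seed).items():
                  totals[family] += score
          return max(totals, key=totals.get)

      print("matching wavelet family:", select_wavelet(image_units=None))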
  • Patent number: 10751881
    Abstract: In current distributed simultaneous localization and mapping (SLAM) implementations on multiple robots in a robotic cluster, failure of the leader robot terminates the map-building process between the robots. Therefore, a technique for fault-tolerant SLAM in robotic clusters is disclosed. In this technique, robotic localization and mapping (SLAM) is executed in a resource-constrained robotic cluster such that distributed SLAM runs reliably and self-heals in case of failure of the leader robot. To ensure fault tolerance, the robots are enabled, by time series analysis, to find their individual failure probabilities and use them to enhance cluster reliability in a distributed manner.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: August 25, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Swarnava Dey, Swagata Biswas, Arijit Mukherjee
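    Illustrative sketch: each robot estimates its own failure probability from a recent health time series, and the cluster elects (and, on failure, re-elects) the robot with the lowest estimated probability as the SLAM leader. The trend-based estimator is an assumption, standing in for the disclosure's time-series analysis.

      import statistics

      def failure_probability(health_series):
          # Map a declining health trend (e.g. battery voltage) to a rough probability.
          trend = health_series[-1] - statistics.mean(health_series)
          return min(1.0, max(0.0, 0.5 - trend))  # falling health -> higher risk

      def elect_leader(robots):
          # robots: dict of robot_id -> health time series.
          return min(robots, key=lambda robot_id: failure_probability(robots[robot_id]))

      cluster = {
          "robot_a": [0.9, 0.85, 0.6, 0.4],   # degrading quickly
          "robot_b": [0.8, 0.8, 0.79, 0.78],  # stable
          "robot_c": [0.7, 0.71, 0.7, 0.69],
      }
      leader = elect_leader(cluster)
      print("elected SLAM leader:", leader)

      cluster.pop(leader)  # simulate failure of the current leader
      print("re-elected after failure:", elect_leader(cluster))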
  • Publication number: 20200143254
    Abstract: A growing need exists for inferencing to be run on fog devices in order to reduce upstream network traffic. However, being computationally constrained in nature, such devices have proved difficult targets for executing complex deep inferencing models. A system and method for partitioning a deep convolutional neural network for execution on computationally constrained devices at the network edge is provided. The system is configured to use depth-wise input partitioning of convolutional operations in a deep convolutional neural network (DCNN). The convolution operation is analyzed based on the input filter depth and the number of filters to determine the appropriate partitioning parameters according to an inference speedup method. The system uses a master-slave network for partitioning the input. Depth-wise partitioning of the input speeds up inference of the convolution operations by reducing pixel overlaps.
    Type: Application
    Filed: August 8, 2019
    Publication date: May 7, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: SWARNAVA DEY, Arijit Mukherjee, Arpan Pal, Balamuralidhar Purushothaman
  • Publication number: 20200007624
    Abstract: Cloud robotics infrastructures generally support heterogeneous services that are offered by heterogeneous resources whose reliability or availability varies widely with varying lifetime. For such systems, defining a static redundancy configuration for all services is difficult and often biased, and it is not feasible to define a redundancy configuration separately for each unique service. Therefore, the present disclosure ensures a trade-off between the two by providing an At-most M-Modular Flexible Redundancy Model, wherein an exact degree of redundancy is defined and given to each service in a heterogeneous service environment, and each task and subtask status is monitored to ensure that each subtask gets accomplished. This enables tuning of the trade-off between redundancy and cost, and the efficiency of the system is determined by estimating the number of resources utilized to complete a specific subtask and comparing that resource utilization with the exact degree of redundancy defined.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 2, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Swagata BISWAS, Swarnava DEY, Arijit MUKHERJEE, Arpan PAL
  • Patent number: 10516726
    Abstract: A method for data partitioning in an Internet-of-Things (IoT) network is described. The method includes determining the number of computing nodes in the IoT network capable of contributing to processing of a data set. At least one capacity parameter associated with each computing node in the IoT network and with each communication link between a computing node and a data analytics system can be ascertained. The capacity parameter can indicate a computational capacity for each computing node and a communication capacity for each communication link. An availability status, indicating temporal availability, of each computing node and each communication link is determined. The data set is partitioned into subsets, based on the number of computing nodes, the capacity parameter and the availability status, for parallel processing of the subsets.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: December 24, 2019
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Himadri Sekhar Paul, Arijit Mukherjee, Swarnava Dey, Arpan Pal, Ansuman Banerjee
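    Illustrative sketch: splitting a data set into subsets sized in proportion to each node's effective capacity, taken here as compute capacity times link capacity times availability. The node figures are made-up examples, not values from the disclosure.

      def partition_sizes(total_records, nodes):
          # nodes: list of (compute_units, link_mbps, availability in [0, 1]).
          weights = [compute * link * availability for compute, link, availability in nodes]
          total_weight = sum(weights)
          sizes = [int(total_records * weight / total_weight) for weight in weights]
          sizes[0] += total_records - sum(sizes)  # give any rounding remainder to node 0
          return sizes

      nodes = [
          (4.0, 10.0, 0.9),  # gateway: strong CPU, good link, mostly available
          (1.0, 2.0, 0.5),   # battery-powered sensor hub
          (2.0, 5.0, 0.8),   # smartphone
      ]
      print(partition_sizes(total_records=10_000, nodes=nodes))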
  • Publication number: 20190283254
    Abstract: A data-driven approach for fault detection in robotic actuation is disclosed. Here, a set of robotic tasks is received and analyzed by Deep Learning (DL) analytics. The DL analytics includes a stateful Long Short-Term Memory (LSTM) network. Initially, the stateful LSTM is trained to match a set of activities associated with the robots based on a set of tasks gathered from the robots in a multi-robot environment. Here, the stateful LSTM utilizes a master-slave-framework-based load distribution technique and a probabilistic trellis approach to predict the next activity associated with the robot with minimum latency and increased accuracy. Further, the predicted next activity is compared with the actual activity of the robot to identify any faults associated with robotic actuation.
    Type: Application
    Filed: December 5, 2018
    Publication date: September 19, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Avik GHOSE, Swarnava DEY, Arijit MUKHERJEE
  • Patent number: 10320704
    Abstract: Methods and devices for controlling execution of a data analytics application on a computing device are described. The devices include an alert app that prompts a user about system load and recommends proactively controlling the execution of a set of processes to reclaim the computational resources required for execution of the data analytics application on the devices.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: June 11, 2019
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swarnava Dey, Arijit Mukherjee, Pubali Datta, Himadri Sekhar Paul
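    Illustrative sketch: checking system load and, when it crosses a threshold, listing the busiest processes a user might stop to reclaim resources for the analytics application. The psutil calls are standard, but the threshold and selection policy are illustrative assumptions.

      import psutil

      LOAD_THRESHOLD = 0.8  # fraction of CPU considered too busy for analytics

      def reclaim_candidates(top_n=3):
          # Rank running processes by recent CPU usage (descending).
          procs = [(p.info["name"], p.info["cpu_percent"])
                   for p in psutil.process_iter(attrs=["name", "cpu_percent"])]
          return sorted(procs, key=lambda item: item[1] or 0.0, reverse=True)[:top_n]

      if psutil.cpu_percent(interval=0.5) / 100.0 > LOAD_THRESHOLD:
          print("System busy; consider stopping:", reclaim_candidates())
      else:
          print("Enough headroom to run the data analytics application.")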