Patents by Inventor Brojeshwar Bhowmick

Brojeshwar Bhowmick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240103607
    Abstract: This disclosure relates generally to a system and method for ambient intelligence based user interaction. Prior methods for touchless user interaction are sensitive to ambient temperature in a lab environment, susceptible to noise from metallic surfaces and ambient radio waves, and dependent on ambient lighting. Embodiments of the present disclosure provide a multi-modal sensor fusion method which captures touchless gestures from a user or a group of users, with their physical context information fused and tagged to these gestures for user interaction. Further, pose graphs are generated for user interaction systems using a data association technique and a Gaussian mixture model technique. The disclosed method provides a hands-free interface to operate instruments in a smart space, using principles of ambient intelligence. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 21, 2023
    Publication date: March 28, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: AMIT SWAIN, CHIRABRATA BHAUMIK, BROJESHWAR BHOWMICK, AVIK GHOSE
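    Illustrative sketch (editorial, not part of the patent text): a minimal reading of the data association / Gaussian mixture model step above, using scikit-learn's GaussianMixture to group fused gesture detections per user and to associate a new detection with the most likely user. The sensor values, user count, and variable names are assumptions for illustration only.
```python
# Hypothetical GMM-based data association for fused gesture detections.
# Numbers and thresholds are illustrative, not taken from the publication.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fused (x, y) gesture detections from several touchless sensors, two users.
detections = np.vstack([
    rng.normal([0.5, 1.0], 0.05, size=(40, 2)),   # user A
    rng.normal([2.0, 1.2], 0.05, size=(40, 2)),   # user B
])

# Fit one Gaussian component per expected user.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(detections)

# Associate a new gesture detection with the most likely user component.
new_gesture = np.array([[0.55, 0.95]])
user_id = int(gmm.predict(new_gesture)[0])
confidence = gmm.predict_proba(new_gesture)[0, user_id]
print(f"gesture associated with user component {user_id} (p={confidence:.2f})")
```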
  • Patent number: 11941760
    Abstract: Traditional machine learning (ML) based systems used for scene recognition and object recognition have the disadvantage that they require a huge quantity of labeled data to generate data models for the purpose of aiding the scene and object recognition. The disclosure herein generally relates to image processing, and, more particularly, to a method and system for 3D mesh generation using planar and non-planar data. The system extracts a planar point cloud and a non-planar point cloud from each RGBD image in a sequence of RGBD images fetched as input, and then generates a planar mesh and a non-planar mesh for planar and non-planar objects in the image. A mesh representation is generated by merging the planar mesh and the non-planar mesh. Further, an incremental merging of the mesh representation is performed on the sequence of RGBD images, based on estimated camera pose information, to generate a representation of the scene. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: March 26, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swapna Agarwal, Soumyadip Maity, Hrishav Bakul Barua, Brojeshwar Bhowmick
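    Illustrative sketch (editorial, not part of the patent text): a rough Open3D-based analogue of splitting a point cloud into planar and non-planar parts and meshing each before merging. The input file name, RANSAC parameters, and choice of convex hull / Poisson reconstruction are assumptions, not the patented pipeline.
```python
# Split a point cloud into planar / non-planar parts and mesh each part.
# Assumes Open3D and an existing point cloud file "scene.ply" (placeholder).
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")

# Planar part: RANSAC plane segmentation, meshed here via its convex hull.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.01,
                                            ransac_n=3, num_iterations=1000)
planar_pcd = pcd.select_by_index(inlier_idx)
planar_mesh, _ = planar_pcd.compute_convex_hull()

# Non-planar remainder: Poisson surface reconstruction (needs normals).
nonplanar_pcd = pcd.select_by_index(inlier_idx, invert=True)
nonplanar_pcd.estimate_normals()
nonplanar_mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    nonplanar_pcd, depth=8)

# Merge the two partial meshes into one scene representation.
scene_mesh = planar_mesh + nonplanar_mesh
o3d.io.write_triangle_mesh("scene_mesh.ply", scene_mesh)
```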
  • Publication number: 20240078356
    Abstract: Garments in their natural form are represented by meshes, where vertices (entities) are connected (related) to each other through mesh edges. Earlier methods largely ignored this relational nature of garment data while modeling garments and networks. The present disclosure provides a particle-based garment system and method that learns to simulate template garments on target arbitrary body poses by representing the physical state of garment vertices as particles, expressed as nodes in a graph, while dynamics (velocities of garment vertices) are computed through learned message-passing. The system and method exploit this relational nature of garment data, and the network is implemented to enforce a strong relational inductive bias in garment dynamics, thereby accurately simulating garments on the target body pose conditioned on body motion and fabric type at any resolution without modification, even for loose garments, unlike existing state-of-the-art (SOTA) methods. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: June 13, 2023
    Publication date: March 7, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: LOKENDER TIWARI, BROJESHWAR BHOWMICK
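    Illustrative sketch (editorial, not part of the patent text): a toy single message-passing step over garment mesh edges in PyTorch, showing the general "vertices as graph nodes, dynamics via learned messages" pattern. The feature layout, network sizes, and edge list are placeholders, not the patented model.
```python
# One message-passing step over garment mesh edges (toy sizes throughout).
import torch
import torch.nn as nn

num_nodes, feat_dim = 5, 6            # per-vertex [position, velocity] features
x = torch.randn(num_nodes, feat_dim)  # node states
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])  # mesh edges

edge_mlp = nn.Sequential(nn.Linear(2 * feat_dim, 32), nn.ReLU(), nn.Linear(32, 16))
node_mlp = nn.Sequential(nn.Linear(feat_dim + 16, 32), nn.ReLU(), nn.Linear(32, 3))

# 1) Messages along each edge from sender to receiver.
senders, receivers = edges[:, 0], edges[:, 1]
messages = edge_mlp(torch.cat([x[senders], x[receivers]], dim=-1))

# 2) Aggregate incoming messages per node (sum).
agg = torch.zeros(num_nodes, 16).index_add_(0, receivers, messages)

# 3) Update: predict per-vertex velocity from current state plus messages.
predicted_velocity = node_mlp(torch.cat([x, agg], dim=-1))
print(predicted_velocity.shape)  # torch.Size([5, 3])
```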
  • Patent number: 11893751
    Abstract: This disclosure relates generally to a system and method for forecasting the location of a target in monocular first person view. Conventional systems for location forecasting utilize complex neural networks and hence are computationally intensive and require high compute power. The disclosed system includes an efficient and light-weight RNN based network model for predicting the motion of targets in first person monocular videos. The network model includes an auto-encoder in the encoding phase and a regularizing layer at the end that helps achieve better accuracy. The disclosed method relies entirely on detection bounding boxes for prediction as well as training of the network model and is still capable of transferring zero-shot to a different dataset. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: February 6, 2024
    Assignee: Tata Consultancy Services Limited
    Inventors: Junaid Ahmed Ansari, Brojeshwar Bhowmick
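    Illustrative sketch (editorial, not part of the patent text): a lightweight LSTM encoder-decoder that forecasts future bounding boxes from past ones, in the spirit of the detection-box-only prediction described above. Layer sizes, the residual decoding scheme, and the absence of the regularizing layer are simplifications assumed for brevity.
```python
# Forecast future bounding boxes from observed ones with an LSTM seq2seq model.
import torch
import torch.nn as nn

class BoxForecaster(nn.Module):
    def __init__(self, hidden=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)       # predicts (cx, cy, w, h) deltas

    def forward(self, past_boxes):
        # past_boxes: (batch, T_obs, 4)
        _, state = self.encoder(past_boxes)
        box = past_boxes[:, -1:, :]             # last observed box
        preds = []
        for _ in range(self.horizon):           # autoregressive rollout
            out, state = self.decoder(box, state)
            box = box + self.head(out)          # residual box update
            preds.append(box)
        return torch.cat(preds, dim=1)          # (batch, horizon, 4)

model = BoxForecaster()
future = model(torch.randn(2, 8, 4))            # 8 observed frames in, 10 out
print(future.shape)                             # torch.Size([2, 10, 4])
```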
  • Patent number: 11887238
    Abstract: A method and system for generating 2D animated lip images synchronized to an audio signal for an unseen subject. The system receives an audio signal and a target lip image of an unseen target subject as inputs from a user and processes these inputs to extract a plurality of high dimensional audio-image features. The lip generator system is meta-trained with a training dataset which covers a large variety of subject ethnicities and vocabulary. The meta-trained model generates realistic animation for a previously unseen face and unseen audio when fine-tuned with only a few samples for a predefined interval of time. Additionally, the method preserves the intrinsic features of the unseen target subject.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: January 30, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swapna Agarwal, Dipanjan Das, Brojeshwar Bhowmick
  • Publication number: 20240013538
    Abstract: This disclosure addresses the unresolved problem of the object disambiguation task for an embodied agent. The embodiments of the present disclosure provide a method and system for disambiguation of referred objects for embodied agents. With a phrase-to-graph network disclosed in the system of the present disclosure, any natural language object description indicating the object disambiguation task can be converted into a semantic graph representation. This not only provides a formal representation of the referred object and object instances but also helps to find ambiguity in disambiguating the referred object using a real-time multi-view aggregation algorithm. The real-time multi-view aggregation algorithm processes multiple observations from an environment and finds the unique instances of the referred object. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: June 9, 2023
    Publication date: January 11, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Chayan SARKAR, Pradip PRAMANICK, Brojeshwar BHOWMICK, Ruddra Dev ROYCHOUDHURY, Sayan PAUL
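    Illustrative sketch (editorial, not part of the patent text): one plausible reading of the multi-view aggregation step, where detections of the referred class from several views are clustered into unique world-frame instances and more than one cluster signals ambiguity. DBSCAN and its threshold are stand-ins for whatever aggregation the application actually uses.
```python
# Merge per-view detections of the referred class into unique instances.
import numpy as np
from sklearn.cluster import DBSCAN

# Estimated world-frame positions of "cup" detections from three viewpoints.
detections_xyz = np.array([
    [1.02, 0.48, 0.75],   # view 1
    [0.98, 0.51, 0.74],   # view 2, same physical cup
    [3.10, 1.95, 0.76],   # view 3, a second cup
])

labels = DBSCAN(eps=0.3, min_samples=1).fit_predict(detections_xyz)
num_instances = len(set(labels))

if num_instances > 1:
    print(f"Ambiguous reference: {num_instances} candidate instances found.")
else:
    print("Referred object resolved to a unique instance.")
```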
  • Publication number: 20230351662
    Abstract: This disclosure relates generally to methods and systems for emotion-controllable generalized talking face generation from an arbitrary face image. Most conventional techniques for realistic talking face generation cannot efficiently control the emotion expressed on the face and have limited scope of generalization to an arbitrary unknown target face. The present disclosure proposes a graph convolutional network that uses a speech content feature along with an independent emotion input to generate emotion- and speech-induced motion on a facial geometry-aware landmark representation. The facial geometry-aware landmark representation is further used by an optical flow-guided texture generation network for producing the texture. A two-branch optical flow-guided texture generation network with motion and texture branches is designed to consider the motion and texture content independently. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: February 2, 2023
    Publication date: November 2, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SANJANA SINHA, SANDIKA BISWAS, BROJESHWAR BHOWMICK
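    Illustrative sketch (editorial, not part of the patent text): a toy graph-convolution step over a facial-landmark graph conditioned on a speech/emotion code, showing the flavor of geometry-aware landmark motion generation. The adjacency, embedding size, and single-layer network are simplifications, not the disclosed architecture.
```python
# One graph-convolution step over facial landmarks, conditioned on an
# assumed speech + emotion embedding (all sizes and edges are placeholders).
import torch
import torch.nn as nn

L = 68                                   # facial landmarks
A = torch.eye(L)                         # adjacency with self-loops
A[0, 1] = A[1, 0] = 1.0                  # a couple of illustrative edges
A[1, 2] = A[2, 1] = 1.0
deg_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]   # sym. normalization

landmarks = torch.randn(L, 2)                    # neutral landmark positions
condition = torch.randn(1, 32).expand(L, 32)     # speech + emotion embedding

gcn = nn.Linear(2 + 32, 2)                       # single propagation layer
motion = A_hat @ gcn(torch.cat([landmarks, condition], dim=-1))
animated = landmarks + motion                    # emotion/speech-induced motion
print(animated.shape)                            # torch.Size([68, 2])
```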
  • Patent number: 11794347
    Abstract: This disclosure relates generally to navigation of a tele-robot in a dynamic environment using in-situ intelligence. Tele-robotics is the area of robotics concerned with the control of robots (tele-robots) in a remote environment from a distance. In reality, the remote environment where the tele-robot navigates may be dynamic in nature with unpredictable movements, making navigation extremely challenging. The disclosure proposes in-situ intelligent navigation of a tele-robot in a dynamic environment. The disclosed in-situ intelligence enables the tele-robot to understand the dynamic environment by identifying objects and estimating their future locations based on generating/training a motion model. Further, the disclosed techniques also enable communication between a master and the tele-robot (whenever necessary) based on an application layer communication semantic. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: October 24, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Abhijan Bhattacharyya, Ruddra dev Roychoudhury, Sanjana Sinha, Sandika Biswas, Ashis Sau, Madhurima Ganguly, Sayan Paul, Brojeshwar Bhowmick
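    Illustrative sketch (editorial, not part of the patent text): a constant-velocity stand-in for the motion model that estimates the future location of moving objects. The actual disclosure trains a motion model; the simple extrapolation below only illustrates how predicted future positions could feed the navigation stack. All values are made up.
```python
# Constant-velocity extrapolation of a tracked obstacle's future positions.
import numpy as np

def predict_future(track_xy, dt, horizon_steps):
    """Extrapolate a tracked object's (x, y) path assuming constant velocity."""
    velocity = (track_xy[-1] - track_xy[-2]) / dt       # last-step velocity
    steps = np.arange(1, horizon_steps + 1)[:, None]
    return track_xy[-1] + steps * velocity * dt          # (horizon, 2)

# Observed positions of a person crossing the tele-robot's path (0.5 s apart).
track = np.array([[0.0, 2.0], [0.3, 1.8], [0.6, 1.6]])
future = predict_future(track, dt=0.5, horizon_steps=4)
print(future)   # the planner can treat these cells as occupied in the future
```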
  • Patent number: 11778162
    Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping, and draping 3D garments over the 3D human body has immense applications in virtual try-on and animations, where accurate fitment of the 3D garment is of utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct 3D skinned garments for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep neural network based method that predicts fast and accurate 3D garment images. The method couples geometric and multi-view perceptual constraints that efficiently learn the garment deformation's high-frequency geometry.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: October 3, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
  • Publication number: 20230266766
    Abstract: The present disclosure provides a model for semantic navigation for service robots to find out-of-view objects in an indoor environment. Initially, the system receives a target object to be reached by the mobile robot in the indoor environment. Further, the current location of the mobile robot is identified by a localization technique. An embedding corresponding to each of a plurality of visible regions is computed using a pretrained Graph Neural Network (GNN). The GNN is pretrained using trajectory data and a spatial relationship graph associated with the indoor environment. Further, a similarity score is computed for each of the plurality of visible regions based on the corresponding embedding using a scoring technique. An optimal visible region is identified by comparing the similarity scores. Finally, a next action to be taken by the mobile robot is selected from a plurality of actions based on the optimal visible region. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: February 22, 2023
    Publication date: August 24, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: KRITIKA ANAND, SNEHASIS BANERJEE, BROJESHWAR BHOWMICK, MADHAVA KRISHNA KRISHNAN, GULSHAN KUMAR, SAI SHANKAR NARASIMHAN
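    Illustrative sketch (editorial, not part of the patent text): the "score each visible region against the target and pick the best" step, using cosine similarity over embeddings. In the disclosure the embeddings come from the pretrained GNN; here they are random placeholders and the scoring function is an assumption.
```python
# Score visible regions against the target-object embedding and pick the best.
import torch
import torch.nn.functional as F

target_embedding = torch.randn(64)        # e.g. embedding for "microwave"
region_embeddings = torch.randn(5, 64)    # one per currently visible region

scores = F.cosine_similarity(region_embeddings,
                             target_embedding.unsqueeze(0), dim=1)
best_region = int(torch.argmax(scores))
print(f"navigate toward region {best_region} "
      f"(score={scores[best_region].item():.3f})")
```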
  • Publication number: 20230236606
    Abstract: This disclosure relates generally to systems and methods for object detection and geometric-semantic-map-based robot navigation, using an architecture that empowers a robot to navigate an indoor environment with logical decision making at each intermediate stage. The decision making is further enhanced by knowledge of the actuation capability of the robots and of scenes, objects and their relations, maintained in an ontological form. The robot navigates based on a Geometric Semantic map, which is a relational combination of a geometric and a semantic map. In comparison to traditional approaches, the robot's primary task here is not to map the environment but to reach a target object. Thus, the goal given to the robot is to find an object in an unknown environment with no navigational map and only egocentric RGB camera perception.
    Type: Application
    Filed: October 26, 2022
    Publication date: July 27, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SNEHASIS BANERJEE, BROJESHWAR BHOWMICK, RUDDRA DEV ROYCHOUDHURY
  • Publication number: 20230213941
    Abstract: The embodiments of the present disclosure herein address the unresolved problem of cognitive navigation strategies for a telepresence robotic system. This includes giving instructions remotely over a network to go to a point in an indoor space, to go to an area, or to go to an object. Also, human-robot interaction for giving and understanding instructions is not integrated into a common telepresence framework. The embodiments herein provide a telepresence robotic system empowered with smart navigation based on in-situ intelligent visual semantic mapping of the live scene captured by a robot. It further presents an edge-centric software architecture of a teledrive comprising a speech recognition based HRI, a navigation module and a real-time WebRTC based communication framework that holds the entire telepresence robotic system together. Additionally, the disclosure provides robot-independent API calls via a ROS device driver, making the offering hardware independent and capable of running on any robot.
    Type: Application
    Filed: July 22, 2022
    Publication date: July 6, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SNEHASIS BANERJEE, PRADIP PRAMANICK, CHAYAN SARKAR, ABHIJAN BHATTACHARYYA, ASHIS SAU, KRITIKA ANAND, RUDDRA DEV ROYCHOUDHURY, BROJESHWAR BHOWMICK
  • Patent number: 11670047
    Abstract: The embodiments herein provide a system and method for integrating objects in monocular simultaneous localization and mapping (SLAM). State-of-the-art object SLAM approaches follow two popular threads. In the first, instance-specific models are assumed to be known a priori. In the second, a general model for an object, such as an ellipsoid or cuboid, is used. However, these generic models just give the label of the object category and do not give much information about the object pose in the map. The method and system disclosed provide a SLAM framework on a real monocular sequence wherein joint optimization is performed on object localization and edges using category-level shape priors and bundle adjustment. The method provides better visualization by incorporating object representations in the scene along with the 3D structure of the base SLAM system, which makes it useful for augmented reality (AR) applications. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: June 6, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Dipanjan Das, Brojeshwar Bhowmick, Aniket Pokale, Krishnan Madhava Krishna, Aditya Aggarwal
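    Illustrative sketch (editorial, not part of the patent text): a compact example of jointly optimizing an object's pose and category-level shape coefficients against 2D observations with scipy, echoing the joint optimization described above. The mean shape, deformation basis, camera intrinsics, and prior weight are synthetic assumptions, and full bundle adjustment over camera poses is omitted.
```python
# Fit object pose + category-level shape coefficients to 2D keypoints.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # intrinsics
mean_shape = np.random.randn(8, 3) * 0.2                      # category keypoints
basis = np.random.randn(2, 8, 3) * 0.05                       # deformation modes

def project(params):
    rvec, t, coeffs = params[:3], params[3:6], params[6:]
    shape = mean_shape + np.tensordot(coeffs, basis, axes=1)   # deformed shape
    cam_pts = shape @ Rotation.from_rotvec(rvec).as_matrix().T + t
    img = cam_pts @ K.T
    return img[:, :2] / img[:, 2:3]

# Synthetic "observed" keypoints from a ground-truth pose, plus noise.
gt = np.concatenate([[0.1, -0.2, 0.05], [0.0, 0.0, 3.0], [0.5, -0.3]])
observed = project(gt) + np.random.randn(8, 2) * 0.5

def residuals(params):
    reproj = (project(params) - observed).ravel()       # reprojection error
    shape_prior = 0.1 * params[6:]                       # keep deformation small
    return np.concatenate([reproj, shape_prior])

x0 = np.concatenate([np.zeros(3), [0, 0, 2.5], np.zeros(2)])
result = least_squares(residuals, x0)
print("estimated translation:", result.x[3:6])
```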
  • Patent number: 11654573
    Abstract: The disclosure generally relates to methods and systems for enabling human-robot interaction by cognition sharing, which includes gesture and audio. Conventional techniques that use gestures and speech require extra hardware setup and are limited to navigation in structured outdoor driving environments. The present disclosure provides methods and systems that solve the technical problem of enabling human-robot interaction with a two-step approach by transferring the cognitive load from the human to the robot. An accurate shared perspective associated with the task is determined in the first step by computing relative frame transformations based on an understanding of the navigational gestures of the subject. Then, the shared perspective is transformed into the field of view of the robot. The transformed shared perspective is then given to a language grounding technique in the second step, to accurately determine the final goal associated with the task. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: May 23, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Soumyadip Maity, Gourav Kumar, Ruddra Dev Roy Choudhury, Brojeshwar Bhowmick
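    Illustrative sketch (editorial, not part of the patent text): a minimal homogeneous-transform example of re-expressing a gestured goal from the human's frame in the robot's frame, i.e. the "relative frame transformations" step. The 2D poses and the goal point are made-up values.
```python
# Re-express a goal given in the human's frame in the robot's frame (2D case).
import numpy as np

def make_pose(yaw, x, y):
    """2D pose (x, y, yaw) as a 3x3 homogeneous transform in the world frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

T_world_human = make_pose(np.deg2rad(90), 2.0, 1.0)    # human pose in world
T_world_robot = make_pose(np.deg2rad(0), 0.0, 0.0)     # robot pose in world

# Goal indicated by the human's pointing gesture, in the human's own frame.
goal_in_human = np.array([1.5, 0.0, 1.0])              # homogeneous (x, y, 1)

# Chain the transforms: T_robot_world @ T_world_human @ p.
goal_in_robot = np.linalg.inv(T_world_robot) @ T_world_human @ goal_in_human
print("goal in robot frame:", goal_in_robot[:2])
```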
  • Publication number: 20230080342
    Abstract: In state-of-the-art methods for object goal navigation, scene understanding is implicit in their goal-oriented exploration policies. Implicit scene understanding coupled with navigation is shown to be specific to the tasks for which training is done and not generalizable to new tasks. Thus, embodiments of the present disclosure propose a method of goal-conditioned exploration wherein scene understanding is decoupled from the exploration policies. Here, the scene understanding required for navigation is provided by a region classification network that is trained using semantic graphs representing the scene, and the agent can be navigated towards the goal either by using any state-of-the-art pure exploration policy or by traversing through potential sub-goals identified based on a Co-occurrence Likelihood score calculated using predictions from the region classification network. Hence, the method of the present disclosure can be easily generalized to new tasks and new environments. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: July 18, 2022
    Publication date: March 16, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: RUDDRA DEV ROYCHOUDHURY, BROJESHWAR BHOWMICK, MADHAVA KRISHNA KRISHNAN, GULSHAN KUMAR, SAI SHANKAR NARASIMHAN, HIMANSU DIDWANIA
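    Illustrative sketch (editorial, not part of the patent text): ranking candidate exploration sub-goals by a co-occurrence likelihood score, combining a region classifier's predicted region distribution with object-region co-occurrence statistics. Region types, probabilities, and the dot-product scoring are illustrative assumptions.
```python
# Score candidate sub-goal frontiers by object-region co-occurrence likelihood.
import numpy as np

region_types = ["kitchen", "bedroom", "bathroom"]
# P(target object = "mug" | region type), e.g. from training statistics.
cooccurrence = np.array([0.7, 0.2, 0.1])

# Region-classifier output P(region type) at three candidate frontiers.
frontier_region_probs = np.array([
    [0.6, 0.3, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

scores = frontier_region_probs @ cooccurrence       # likelihood per frontier
best = int(np.argmax(scores))
print(f"explore frontier {best} (score={scores[best]:.2f})")
```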
  • Patent number: 11597080
    Abstract: Conventional tele-presence robots have their own limitations with respect to task execution, information processing and management. Embodiments of the present disclosure provide a tele-presence robot (TPR) that communicates with a master device associated with a user via an edge device for task execution, wherein a control command from the master device is parsed to determine the instruction set and task type for execution. Based on this determination, the TPR queries for information across storage devices until a response sufficient to execute the task is obtained. The task upon execution is validated with the master device and user. Knowledge acquired during querying, task execution and validation of the executed task is dynamically partitioned by the TPR across storage devices, namely the on-board memory of the tele-presence robot, an edge device, a cloud and a web interface, respectively, depending upon the task type, the operating environment of the tele-presence robot, and other performance affecting parameters.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: March 7, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Chayan Sarkar, Snehasis Banerjee, Pradip Pramanick, Hrishav Bakul Barua, Soumyadip Maity, Dipanjan Das, Brojeshwar Bhowmick, Ashis Sau, Abhijan Bhattacharyya, Arpan Pal, Balamuralidhar Purushothaman, Ruddra Roy Chowdhury
  • Publication number: 20230063722
    Abstract: Traditional machine learning (ML) based systems used for scene recognition and object recognition have the disadvantage that they require a huge quantity of labeled data to generate data models for the purpose of aiding the scene and object recognition. The disclosure herein generally relates to image processing, and, more particularly, to a method and system for 3D mesh generation using planar and non-planar data. The system extracts a planar point cloud and a non-planar point cloud from each RGBD image in a sequence of RGBD images fetched as input, and then generates a planar mesh and a non-planar mesh for planar and non-planar objects in the image. A mesh representation is generated by merging the planar mesh and the non-planar mesh. Further, an incremental merging of the mesh representation is performed on the sequence of RGBD images, based on estimated camera pose information, to generate a representation of the scene.
    Type: Application
    Filed: June 16, 2022
    Publication date: March 2, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SWAPNA AGARWAL, SOUMYADIP MAITY, HRISHAV BAKUL BARUA, BROJESHWAR BHOWMICK
  • Patent number: 11573563
    Abstract: Robotic platforms for tele-presence applications, such as remote meetings, group discussions, and the like, have gained paramount importance and attracted much attention. Some robotic platforms exist for such tele-presence applications, but they lack efficacy in communication and interaction between the remote person and the avatar robot deployed in another geographic location, thus adding network overhead. Embodiments of the present disclosure provide an edge-centric communication protocol for remotely maneuvering a tele-presence robot in a geographically distributed environment.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: February 7, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Abhijan Bhattacharyya, Ashis Sau, Ruddra Dev Roychoudhury, Hrishav Bakul Barua, Chayan Sarkar, Sayan Paul, Brojeshwar Bhowmick, Arpan Pal, Balamuralidhar Purushothaman
  • Patent number: 11551394
    Abstract: Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio on unknown faces and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics are quite different from the distribution of facial characteristics the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate audio-speech driven animated talking faces using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta learning to adapt to an unknown subject's traits and face orientation during inference.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: January 10, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Sandika Biswas, Dipanjan Das, Sanjana Sinha, Brojeshwar Bhowmick
  • Patent number: 11526174
    Abstract: The disclosure herein generally relates to the field of autonomous navigation, and, more particularly, to diverse trajectory proposal for autonomous navigation. The embodiments disclose a hierarchical network based diverse trajectory proposal for autonomous navigation. The hierarchical 2-stage neural network architecture maps the perceived surroundings to diverse trajectories, in the form of trajectory waypoints, that an autonomous navigation system can choose to navigate/traverse. The first stage of the disclosed hierarchical 2-stage neural network architecture is a Trajectory Proposal Network, which generates a set of diverse traversable regions in an environment which can be occupied by the autonomous navigation system in the future. The second stage is a Trajectory Sampling Network, which predicts fine-grained trajectory waypoints over the diverse traversable regions proposed by the Trajectory Proposal Network. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: December 13, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Brojeshwar Bhowmick, Krishnam Madhava Krishna, Sriram Nochur Narayanan, Gourav Kumar, Abhay Singh, Siva Karthik Mustikovela, Saket Saurav
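    Illustrative sketch (editorial, not part of the patent text): a toy two-stage pipeline in which a proposal network suggests several coarse traversable regions and a sampling network refines a region into fine-grained waypoints. The perception encoding, layer sizes, and output parameterization are placeholders, not the disclosed networks.
```python
# Two-stage trajectory proposal: coarse regions first, fine waypoints second.
import torch
import torch.nn as nn

K, waypoints = 4, 8                          # proposals and waypoints per path
perception = torch.randn(1, 128)             # encoded surroundings (placeholder)

proposal_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, K * 2))
sampling_net = nn.Sequential(nn.Linear(128 + 2, 64), nn.ReLU(),
                             nn.Linear(64, waypoints * 2))

# Stage 1: K diverse region proposals as (x, y) centroids.
regions = proposal_net(perception).view(K, 2)

# Stage 2: fine waypoints for each proposed region, conditioned on perception.
trajectories = []
for region in regions:
    inp = torch.cat([perception, region.unsqueeze(0)], dim=-1)
    trajectories.append(sampling_net(inp).view(waypoints, 2))
trajectories = torch.stack(trajectories)     # (K, waypoints, 2)
print(trajectories.shape)
```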