Patents by Inventor Hrishav Bakul Barua

Hrishav Bakul Barua has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240338834
    Abstract: Estimating temporally consistent 3D human body shape, pose, and motion from a monocular video is a challenging task due to occlusions, poor lighting conditions, complex articulated body poses, depth ambiguity, and limited availability of annotated data. Embodiments of the present disclosure provide a method for temporally consistent motion estimation from monocular video. A monocular video of person(s) is captured by a weak perspective camera, and spatial features of the body of the person(s) are extracted from each frame of the video. Then, initial estimates of the body shape, body pose, and features of the weak perspective camera are obtained. The spatial features and initial estimates are then aggregated into spatio-temporal features by combining self-similarity matrices between the spatial, pose, and camera features with self-attention maps of the camera features and the spatial features. The spatio-temporal aggregated features are then used to predict the shape and pose parameters of the person(s).
    Type: Application
    Filed: December 20, 2023
    Publication date: October 10, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: LOKENDER TIWARI, SUSHOVAN CHANDA, HRISHAV BAKUL BARUA, BROJESHWAR BHOWMICK, AVINASH SHARMA, AMOGH TIWARI
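
The aggregation step described in this abstract combines self-similarity matrices of the spatial, pose, and camera streams with a self-attention map. The following is a minimal numpy sketch of that idea, not the patented implementation: all shapes, the single-head attention, and the concatenation-based fusion are assumptions for illustration.

```python
# Minimal sketch (not the patented implementation) of spatio-temporal feature
# aggregation: per-frame self-similarity matrices for the spatial, pose, and
# camera streams are combined with a self-attention map of the camera features
# over the spatial features. Shapes and the concatenation fusion are assumed.
import numpy as np

def self_similarity(seq):
    """seq: (T, D) per-frame features -> (T, T) cosine self-similarity matrix."""
    normed = seq / (np.linalg.norm(seq, axis=1, keepdims=True) + 1e-8)
    return normed @ normed.T

def self_attention(query, key, value):
    """Single-head scaled dot-product attention over the time axis."""
    scores = query @ key.T / np.sqrt(query.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ value

T, D = 16, 64                       # number of frames, feature dimension (assumed)
rng = np.random.default_rng(0)
spatial = rng.normal(size=(T, D))   # spatial body features, one row per frame
pose = rng.normal(size=(T, D))      # embedded initial pose estimates
camera = rng.normal(size=(T, D))    # weak-perspective camera features

# Self-similarity matrices between the spatial, pose, and camera streams.
sims = [self_similarity(x) for x in (spatial, pose, camera)]   # three (T, T) maps

# Self-attention map of the camera features over the spatial features.
attended = self_attention(camera, spatial, spatial)            # (T, D)

# One plausible aggregation: concatenate similarity rows with attended features;
# the result would be fed to a regressor that predicts shape and pose parameters.
spatio_temporal = np.concatenate(sims + [attended], axis=1)    # (T, 3T + D)
print(spatio_temporal.shape)
```
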
  • Patent number: 11941760
    Abstract: Traditional machine learning (ML) based systems used for scene recognition and object recognition have the disadvantage that they require a huge quantity of labeled data to generate data models that aid the scene and object recognition. The disclosure herein generally relates to image processing, and, more particularly, to a method and system for 3D mesh generation using planar and non-planar data. The system extracts a planar point cloud and a non-planar point cloud from each RGBD image in a sequence of RGBD images fetched as input, and then generates a planar mesh and a non-planar mesh for the planar and non-planar objects in the image. A mesh representation is generated by merging the planar mesh and the non-planar mesh. Further, incremental merging of the mesh representations is performed over the sequence of RGBD images, based on estimated camera pose information, to generate a representation of the scene.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: March 26, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Swapna Agarwal, Soumyadip Maity, Hrishav Bakul Barua, Brojeshwar Bhowmick
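
The core separation step in this patent is splitting each RGBD frame's point cloud into planar and non-planar subsets before meshing. Below is a hedged sketch of one way such a split could look, using a tiny RANSAC plane fit on synthetic data; the thresholds and the synthetic scene are assumptions, and the meshing and pose-based incremental merging are only indicated in comments.

```python
# Illustrative planar / non-planar split for one frame's point cloud, using a
# small RANSAC plane fit. This is a sketch under assumed parameters, not the
# patented pipeline; meshing and incremental merging are noted in comments only.
import numpy as np

def ransac_plane(points, threshold=0.02, iters=200, rng=None):
    """Fit planes to random 3-point samples; return the inlier mask of the best one."""
    rng = rng if rng is not None else np.random.default_rng()
    best_mask = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 2, 500),
                         rng.uniform(0, 2, 500),
                         rng.normal(0, 0.005, 500)])          # points near the z = 0 plane
clutter = rng.uniform([0, 0, 0.1], [2, 2, 1.0], (200, 3))     # non-planar objects above it
cloud = np.vstack([floor, clutter])

planar_mask = ransac_plane(cloud, rng=rng)
planar_pts, nonplanar_pts = cloud[planar_mask], cloud[~planar_mask]
print(len(planar_pts), "planar points,", len(nonplanar_pts), "non-planar points")
# A full pipeline would mesh each subset separately (a simple polygonal mesh for the
# planar part, surface reconstruction for the non-planar part), merge the two meshes,
# and then merge the per-frame meshes incrementally using estimated camera poses.
```
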
  • Patent number: 11597080
    Abstract: Conventional tele-presence robots have their own limitations with respect to task execution, information processing, and management. Embodiments of the present disclosure provide a tele-presence robot (TPR) that communicates with a master device associated with a user via an edge device for task execution, wherein a control command from the master device is parsed to determine the instruction set and task type for execution. Based on this determination, the TPR queries for information across storage devices until a response sufficient to execute the task is obtained. The task, upon execution, is validated with the master device and the user. Knowledge acquired during querying, task execution, and validation of the executed task is dynamically partitioned by the TPR across storage devices, namely the on-board memory of the tele-presence robot, an edge device, a cloud, and a web interface, depending upon the task type, the operating environment of the tele-presence robot, and other performance-affecting parameters.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: March 7, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Chayan Sarkar, Snehasis Banerjee, Pradip Pramanick, Hrishav Bakul Barua, Soumyadip Maity, Dipanjan Das, Brojeshwar Bhowmick, Ashis Sau, Abhijan Bhattacharyya, Arpan Pal, Balamuralidhar Purushothaman, Ruddra Roy Chowdhury
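
The abstract describes a tiered query-and-partition scheme across the robot's on-board memory, an edge device, a cloud, and a web interface. The snippet below is a hypothetical illustration of that cascade, not the disclosed policy: the tier ordering, the toy knowledge stores, and the caching rule keyed on task type are all assumptions.

```python
# Hypothetical sketch of a tiered knowledge lookup for a tele-presence robot:
# query storage layers in latency order until one can answer, then decide where
# to cache the acquired knowledge based on the task type. All names, stores,
# and the routing rule are illustrative assumptions.
from typing import Optional

TIERS = ["onboard", "edge", "cloud", "web"]          # ordered by access latency

# Toy knowledge stores: tier -> {query: answer}
stores = {
    "onboard": {"where is the charger": "dock A"},
    "edge":    {"meeting room layout": "4 chairs, 1 table"},
    "cloud":   {"office floor plan": "see map v3"},
    "web":     {"weather today": "cloudy"},
}

def query(task: str) -> Optional[tuple]:
    """Return (tier, answer) from the first tier that can answer, else None."""
    for tier in TIERS:
        if task in stores[tier]:
            return tier, stores[tier][task]
    return None

def cache_tier(task_type: str) -> str:
    """Assumed partitioning rule: latency-critical knowledge stays local."""
    return {"navigation": "onboard", "interaction": "edge"}.get(task_type, "cloud")

hit = query("office floor plan")
if hit:
    tier, answer = hit
    print(f"answered from {tier}: {answer}")
    # Re-partition the acquired knowledge closer to the robot for a navigation task.
    stores[cache_tier("navigation")]["office floor plan"] = answer
```
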
  • Publication number: 20230063722
    Abstract: Traditional machine learning (ML) based systems used for scene recognition and object recognition have the disadvantage that they require a huge quantity of labeled data to generate data models that aid the scene and object recognition. The disclosure herein generally relates to image processing, and, more particularly, to a method and system for 3D mesh generation using planar and non-planar data. The system extracts a planar point cloud and a non-planar point cloud from each RGBD image in a sequence of RGBD images fetched as input, and then generates a planar mesh and a non-planar mesh for the planar and non-planar objects in the image. A mesh representation is generated by merging the planar mesh and the non-planar mesh. Further, incremental merging of the mesh representations is performed over the sequence of RGBD images, based on estimated camera pose information, to generate a representation of the scene.
    Type: Application
    Filed: June 16, 2022
    Publication date: March 2, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SWAPNA AGARWAL, SOUMYADIP MAITY, HRISHAV BAKUL BARUA, BROJESHWAR BHOWMICK
  • Patent number: 11573563
    Abstract: Robotic platforms for tele-presence applications, such as remote meetings, group discussions, and the like, have gained paramount importance and attracted much attention. While some robotic platforms exist for such tele-presence applications, they lack efficacy in communication and interaction between the remote person and the avatar robot deployed in another geographic location, thus adding network overhead. Embodiments of the present disclosure provide an edge-centric communication protocol for remotely maneuvering a tele-presence robot in a geographically distributed environment.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: February 7, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Abhijan Bhattacharyya, Ashis Sau, Ruddra Dev Roychoudhury, Hrishav Bakul Barua, Chayan Sarkar, Sayan Paul, Brojeshwar Bhowmick, Arpan Pal, Balamuralidhar Purushothaman
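
The abstract is high level, so the following is only a speculative sketch of what an edge-centric relay for tele-operation might look like: the edge node coalesces redundant motion commands before forwarding them to the robot, reducing traffic on the robot's link. The message fields, the coalescing rule, and the class names are invented for illustration and are not the disclosed protocol.

```python
# Speculative sketch of an edge-side relay for tele-operation commands: the
# operator's commands pass through an edge node that coalesces redundant motion
# commands so the robot link only carries the latest intent. All fields and the
# coalescing heuristic are assumptions, not the patented protocol.
from dataclasses import dataclass
from collections import deque

@dataclass
class MoveCmd:
    seq: int        # sequence number assigned by the operator's master device
    linear: float   # forward velocity (m/s)
    angular: float  # turn rate (rad/s)

class EdgeRelay:
    def __init__(self):
        self.outbox = deque()

    def receive(self, cmd: MoveCmd) -> None:
        # Coalesce: a newer command of the same kind supersedes a queued one,
        # so stale motion commands never reach the robot-side link.
        if self.outbox and self._same_kind(self.outbox[-1], cmd):
            self.outbox.pop()
        self.outbox.append(cmd)

    @staticmethod
    def _same_kind(a: MoveCmd, b: MoveCmd) -> bool:
        return (a.linear == 0) == (b.linear == 0) and (a.angular == 0) == (b.angular == 0)

    def forward_to_robot(self):
        while self.outbox:
            yield self.outbox.popleft()

relay = EdgeRelay()
for cmd in [MoveCmd(1, 0.2, 0.0), MoveCmd(2, 0.3, 0.0), MoveCmd(3, 0.0, 0.5)]:
    relay.receive(cmd)
print([c.seq for c in relay.forward_to_robot()])   # [2, 3]: command 1 was superseded
```
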
  • Patent number: 11487577
    Abstract: This disclosure provides systems and methods for robotic task planning when a complex task instruction is provided in natural language. Conventionally, robotic task planning relies on a single task or multiple independent or serialized tasks in the task instruction. Alternatively, constraints may be imposed on the space of linguistic variations, ambiguity, and complexity of the language. In the present disclosure, dependencies between multiple tasks are first identified. The tasks are then ordered such that a dependent task is always scheduled for planning after the task it depends upon. Moreover, repeated tasks are masked. Thus, by resolving task dependencies and ordering them, a complex natural-language instruction with multiple interdependent tasks can be turned into a viable task execution plan. Systems and methods of the present disclosure find application in human-robot interactions.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 1, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Pradip Pramanick, Hrishav Bakul Barua, Chayan Sarkar
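
The ordering step described in this abstract, scheduling each dependent task after the task it depends upon and masking repeated tasks, maps naturally onto a topological sort. The sketch below shows that step with Python's standard-library graphlib; the example instruction, the parsed tasks, and the dependency sets are assumptions, since the patent's own parsing is not reproduced here.

```python
# Minimal sketch, not the disclosed system: given tasks parsed from a complex
# natural-language instruction and the dependencies identified between them,
# order them so each dependent task is planned after the tasks it depends on,
# and mask (deduplicate) repeated tasks. The instruction, parsed tasks, and
# dependencies are assumed for illustration.
from graphlib import TopologicalSorter

# "Take the cup from the kitchen and put it on the table, after wiping the table."
tasks = ["pick cup", "wipe table", "place cup on table", "wipe table"]  # parsed tasks
deps = {
    "place cup on table": {"pick cup", "wipe table"},  # needs cup in hand, table clean
    "pick cup": set(),
    "wipe table": set(),
}

unique_tasks = list(dict.fromkeys(tasks))              # mask the repeated "wipe table"
order = list(TopologicalSorter(deps).static_order())   # dependencies come first
plan = [t for t in order if t in unique_tasks]
print(plan)   # e.g. ['pick cup', 'wipe table', 'place cup on table']
```
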
  • Patent number: 11354531
    Abstract: This disclosure relates to a system and method for enabling a robot to perceive and detect socially interacting groups. Various known systems have limited accuracy because they rely on rule-driven methods, while the few data-driven learning methods lack datasets with varied conditions of lighting, occlusion, and backgrounds. The disclosed method and system detect the formation of a social group of people, or f-formation, in real time in a given scene. The system also detects outliers in the process, i.e., people who are visible but not part of the interacting group. This plays a key role in correct f-formation detection in a real-life crowded environment. Additionally, when a collocated robot plans to join the group, it has to determine a pose for itself along with detecting the formation. Thus, the system provides the approach angle for the robot, which can help it determine the final pose in a socially acceptable manner.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: June 7, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Hrishav Bakul Barua, Pradip Pramanick, Chayan Sarkar
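
As a rough geometric illustration of f-formation detection, outlier rejection, and approach-angle computation, the sketch below estimates an o-space centre from detected people's positions, flags a passer-by who does not face that centre, and aims the robot at the widest angular gap in the circle. The thresholds, the centroid-based centre estimate, and the gap heuristic are assumptions, not the patented detector.

```python
# Geometric sketch (not the patented detector): estimate the o-space centre of a
# candidate f-formation, reject people who do not face it as outliers, and pick
# an approach angle for the robot through the widest gap. Data and thresholds
# are invented for illustration.
import numpy as np

people = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.8]])      # x, y positions (m)
headings = np.array([np.deg2rad(45), np.deg2rad(135), np.deg2rad(-90)])
passerby = np.array([[4.0, 3.0]])                            # visible but unrelated
passerby_heading = np.array([0.0])

def facing_error(pos, heading, centre):
    """Angle between a person's heading and the direction toward the centre."""
    to_centre = np.arctan2(centre[1] - pos[1], centre[0] - pos[0])
    return np.abs(np.angle(np.exp(1j * (to_centre - heading))))

centre = people.mean(axis=0)                                 # crude o-space estimate
all_pos = np.vstack([people, passerby])
all_head = np.concatenate([headings, passerby_heading])
errors = np.array([facing_error(p, h, centre) for p, h in zip(all_pos, all_head)])
members = errors < np.deg2rad(60)                            # outlier threshold (assumed)
print("group members:", np.flatnonzero(members))             # the passer-by is excluded

# Approach angle: point the robot at the centre through the widest angular gap.
angles = np.sort(np.arctan2(*(all_pos[members] - centre).T[::-1]))
gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
approach = angles[np.argmax(gaps)] + gaps.max() / 2
print("approach angle (deg):", round(np.degrees(approach) % 360, 1))
```
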
  • Publication number: 20210406594
    Abstract: This disclosure relates to a system and method for enabling a robot to perceive and detect socially interacting groups. Various known systems have limited accuracy because they rely on rule-driven methods, while the few data-driven learning methods lack datasets with varied conditions of lighting, occlusion, and backgrounds. The disclosed method and system detect the formation of a social group of people, or f-formation, in real time in a given scene. The system also detects outliers in the process, i.e., people who are visible but not part of the interacting group. This plays a key role in correct f-formation detection in a real-life crowded environment. Additionally, when a collocated robot plans to join the group, it has to determine a pose for itself along with detecting the formation. Thus, the system provides the approach angle for the robot, which can help it determine the final pose in a socially acceptable manner.
    Type: Application
    Filed: December 30, 2020
    Publication date: December 30, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Hrishav Bakul BARUA, Pradip PRAMANICK, Chayan SARKAR
  • Publication number: 20210291363
    Abstract: Conventional tele-presence robots have their own limitations with respect to task execution, information processing, and management. Embodiments of the present disclosure provide a tele-presence robot (TPR) that communicates with a master device associated with a user via an edge device for task execution, wherein a control command from the master device is parsed to determine the instruction set and task type for execution. Based on this determination, the TPR queries for information across storage devices until a response sufficient to execute the task is obtained. The task, upon execution, is validated with the master device and the user. Knowledge acquired during querying, task execution, and validation of the executed task is dynamically partitioned by the TPR across storage devices, namely the on-board memory of the tele-presence robot, an edge device, a cloud, and a web interface, depending upon the task type, the operating environment of the tele-presence robot, and other performance-affecting parameters.
    Type: Application
    Filed: September 9, 2020
    Publication date: September 23, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Chayan Sarkar, Snehasis Banerjee, Pradip Pramanick, Hrishav Bakul Barua, Soumyadip Maity, Dipanjan Das, Brojeshwar Bhowmick, Ashis Sau, Abhijan Bhattacharyya, Arpan Pal, Balamuralidhar PURUSHOTHAMAN, Ruddra Roy Chowdhury
  • Patent number: 11127401
    Abstract: This disclosure relates to attention shifting of a robot in a group conversation with two or more attendees, at least one of whom is a speaker. The state of the art has dealt with several aspects of Human-Robot Interaction (HRI), including responding to one source of sound at a time, addressing a fixed viewing area, or determining who the speaker is based on eye gaze direction. However, attention shifting to make the conversation human-like remains a challenge. The present disclosure uses audio-visual perception for speaker localization. Only qualified directions of arrival (DOAs) are used for the audio perception. Further, the audio perception is complemented by visual perception employing real-time face detection and lip movement detection. The use of HRI rules, clustering of the DOAs, dynamic adjustment of the robot's rotation, and a dynamically updated knowledge repository enrich the robot with the intelligence to shift attention with minimal human intervention.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: September 21, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Chayan Sarkar, Hrishav Bakul Barua, Arpan Pal, Balamuralidhar Purushothaman, Achanna Anil Kumar
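
The audio-visual pipeline sketched in the abstract, qualifying DOAs, clustering them, and confirming the speaker with face and lip-movement cues, is illustrated below in a deliberately simplified form. The confidence threshold, the angular clustering rule, and the simulated lip-movement results are assumptions rather than the patented fusion logic.

```python
# Illustrative sketch only: qualify direction-of-arrival (DOA) estimates by
# confidence, cluster the qualified angles, and use a (simulated) lip-movement
# cue to decide where the robot should shift its attention. Thresholds,
# clustering, and fusion are assumptions, not the patented method.
import numpy as np

# (angle_deg, confidence) DOA estimates from the microphone array over a window
doas = np.array([[30, 0.9], [32, 0.85], [31, 0.88], [170, 0.3], [150, 0.92]])
qualified = doas[doas[:, 1] > 0.6, 0]           # drop low-confidence estimates

def cluster_angles(angles, gap=15.0):
    """Group sorted angles into clusters separated by more than `gap` degrees."""
    angles = np.sort(angles)
    clusters, current = [], [angles[0]]
    for a in angles[1:]:
        if a - current[-1] <= gap:
            current.append(a)
        else:
            clusters.append(current)
            current = [a]
    clusters.append(current)
    return [float(np.mean(c)) for c in clusters]

candidates = cluster_angles(qualified)           # e.g. [31.0, 150.0]

# Visual perception (face + lip movement) per candidate direction, simulated here.
lip_movement = {31.0: True, 150.0: False}
speaker_dirs = [c for c in candidates if lip_movement.get(c, False)]

current_heading = 0.0
target = speaker_dirs[0] if speaker_dirs else current_heading
print(f"rotate robot by {target - current_heading:.1f} degrees toward the speaker")
```
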
  • Publication number: 20210232121
    Abstract: This disclosure provides systems and methods for robotic task planning when a complex task instruction is provided in natural language. Conventionally, robotic task planning relies on a single task or multiple independent or serialized tasks in the task instruction. Alternatively, constraints may be imposed on the space of linguistic variations, ambiguity, and complexity of the language. In the present disclosure, dependencies between multiple tasks are first identified. The tasks are then ordered such that a dependent task is always scheduled for planning after the task it depends upon. Moreover, repeated tasks are masked. Thus, by resolving task dependencies and ordering them, a complex natural-language instruction with multiple interdependent tasks can be turned into a viable task execution plan. Systems and methods of the present disclosure find application in human-robot interactions.
    Type: Application
    Filed: August 31, 2020
    Publication date: July 29, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Pradip PRAMANICK, Hrishav Bakul BARUA, Chayan SARKAR
  • Publication number: 20210208581
    Abstract: Robotic platforms for tele-presence applications, such as remote meetings, group discussions, and the like, have gained paramount importance and attracted much attention. While some robotic platforms exist for such tele-presence applications, they lack efficacy in communication and interaction between the remote person and the avatar robot deployed in another geographic location, thus adding network overhead. Embodiments of the present disclosure provide an edge-centric communication protocol for remotely maneuvering a tele-presence robot in a geographically distributed environment.
    Type: Application
    Filed: August 7, 2020
    Publication date: July 8, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Abhijan BHATTACHARYYA, Ashis SAU, Ruddra Dev ROYCHOUDHURY, Hrishav Bakul BARUA, Chayan SARKAR, Sayan PAUL, Brojeshwar BHOWMICK, Arpan PAL, Balamuralidhar PURUSHOTHAMAN
  • Publication number: 20210097995
    Abstract: This disclosure relates to attention shifting of a robot in a group conversation with two or more attendees, at least one of whom is a speaker. The state of the art has dealt with several aspects of Human-Robot Interaction (HRI), including responding to one source of sound at a time, addressing a fixed viewing area, or determining who the speaker is based on eye gaze direction. However, attention shifting to make the conversation human-like remains a challenge. The present disclosure uses audio-visual perception for speaker localization. Only qualified directions of arrival (DOAs) are used for the audio perception. Further, the audio perception is complemented by visual perception employing real-time face detection and lip movement detection. The use of HRI rules, clustering of the DOAs, dynamic adjustment of the robot's rotation, and a dynamically updated knowledge repository enrich the robot with the intelligence to shift attention with minimal human intervention.
    Type: Application
    Filed: July 22, 2020
    Publication date: April 1, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Chayan SARKAR, Hrishav Bakul Barua, Arpan Pal, Balamuralidhar Purushothaman, Achanna Anil Kumar