Patents by Inventor Sanjana SINHA

Sanjana SINHA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240403508
    Abstract: State-of-the-art approaches to 3D garment simulation have the disadvantages that they 1) work on a fixed garment type, 2) work on fixed body shapes, and 3) assume a fixed garment topology. As a result, they do not offer a generic solution for garment simulation. The method and system disclosed herein use a combination of a body-motion-aware ARAP garment deformation and a Physics Enforcing Network (PEN) to generate garment simulations irrespective of garment type, body shape, and garment topology, thus offering a generic solution.
    Type: Application
    Filed: May 22, 2024
    Publication date: December 5, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender TIWARI, Brojeshwar BHOWMICK, Sanjana SINHA
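The as-rigid-as-possible (ARAP) deformation this abstract refers to works by finding, per vertex, the rotation that best maps rest-pose edge vectors to deformed edge vectors. As a rough, illustrative sketch (not the patented method; the function name and the plain Kabsch/Procrustes solve are assumptions), the local rotation-fitting step looks like:

```python
import numpy as np

def arap_best_rotation(rest_edges, deformed_edges, weights=None):
    """Local step of ARAP: find the rotation R minimising
    sum_j w_j * ||d_j - R r_j||^2 over a vertex's one-ring edges,
    via SVD of the weighted covariance matrix (Kabsch/Procrustes)."""
    r = np.asarray(rest_edges, dtype=float)      # (k, 3) rest-pose edge vectors
    d = np.asarray(deformed_edges, dtype=float)  # (k, 3) deformed edge vectors
    w = np.ones(len(r)) if weights is None else np.asarray(weights, dtype=float)
    S = (r * w[:, None]).T @ d                   # 3x3 weighted covariance
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R
```

In full ARAP this local step alternates with a global linear solve for vertex positions; per the abstract, the patent couples such a deformation with body-motion awareness and a physics-enforcing network.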
  • Publication number: 20230351662
    Abstract: This disclosure relates generally to methods and systems for emotion-controllable, generalized talking-face generation from an arbitrary face image. Most conventional techniques for realistic talking-face generation cannot efficiently control the emotion on the face and have limited scope for generalization to an arbitrary, unknown target face. The present disclosure proposes a graph convolutional network that uses a speech content feature along with an independent emotion input to generate emotion- and speech-induced motion on a facial-geometry-aware landmark representation. The facial-geometry-aware landmark representation is further used by an optical-flow-guided texture generation network to produce the texture. A two-branch optical-flow-guided texture generation network, with motion and texture branches, is designed to consider the motion and texture content independently.
    Type: Application
    Filed: February 2, 2023
    Publication date: November 2, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: SANJANA SINHA, SANDIKA BISWAS, BROJESHWAR BHOWMICK
  • Patent number: 11794347
    Abstract: This disclosure relates generally to navigation of a tele-robot in a dynamic environment using in-situ intelligence. Tele-robotics is the area of robotics concerned with the control of robots (tele-robots) in a remote environment from a distance. In reality, the remote environment in which the tele-robot navigates may be dynamic in nature, with unpredictable movements, making navigation extremely challenging. The disclosure proposes in-situ intelligent navigation of a tele-robot in a dynamic environment. The disclosed in-situ intelligence enables the tele-robot to understand the dynamic environment by identifying objects and estimating their future locations based on a generated/trained motion model. Further, the disclosed techniques also enable communication between a master and the tele-robot (whenever necessary) based on an application-layer communication semantic.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: October 24, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Abhijan Bhattacharyya, Ruddra dev Roychoudhury, Sanjana Sinha, Sandika Biswas, Ashis Sau, Madhurima Ganguly, Sayan Paul, Brojeshwar Bhowmick
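The in-situ intelligence described above estimates the future locations of moving objects using a motion model. As a heavily simplified, hypothetical stand-in for that learned/trained model (the function name and the constant-velocity assumption are mine, not the patent's), a per-object predictor might look like:

```python
import numpy as np

def predict_future_location(positions, steps_ahead):
    """Constant-velocity motion model: estimate an object's velocity from
    its last two observed positions and extrapolate `steps_ahead` frames
    forward. A minimal stand-in for a learned motion model."""
    p = np.asarray(positions, dtype=float)   # (T, dims) observed track
    velocity = p[-1] - p[-2]                 # displacement per frame
    return p[-1] + steps_ahead * velocity
```

A navigation stack could query such a predictor for every tracked object before planning a collision-free path through the dynamic scene.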
  • Patent number: 11551394
    Abstract: Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio for any unknown face and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics differ significantly from the distribution the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate an audio-speech-driven animated talking face using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN-based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta-learning to adapt to an unknown subject's traits and face orientation during inference.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: January 10, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Sandika Biswas, Dipanjan Das, Sanjana Sinha, Brojeshwar Bhowmick
  • Publication number: 20220219325
    Abstract: This disclosure relates generally to navigation of a tele-robot in a dynamic environment using in-situ intelligence. Tele-robotics is the area of robotics concerned with the control of robots (tele-robots) in a remote environment from a distance. In reality, the remote environment in which the tele-robot navigates may be dynamic in nature, with unpredictable movements, making navigation extremely challenging. The disclosure proposes in-situ intelligent navigation of a tele-robot in a dynamic environment. The disclosed in-situ intelligence enables the tele-robot to understand the dynamic environment by identifying objects and estimating their future locations based on a generated/trained motion model. Further, the disclosed techniques also enable communication between a master and the tele-robot (whenever necessary) based on an application-layer communication semantic.
    Type: Application
    Filed: March 11, 2021
    Publication date: July 14, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Abhijan BHATTACHARYYA, Ruddra dev ROYCHOUDHURY, Sanjana SINHA, Sandika BISWAS, Ashis SAU, Madhurima GANGULY, Sayan PAUL, Brojeshwar BHOWMICK
  • Patent number: 11295501
    Abstract: Most prior-art references that generate animations fail to determine and consider head movement data. Those that do consider head movement data rely on a sample video to generate/determine it and, as a result, fail to capture the changing head motions throughout the course of a speech given by a subject in an actual full-length video. The disclosure herein generally relates to generating facial animations, and, more particularly, to a method and system for generating facial animations from a speech signal of a subject. The system determines the head movement, lip movements, and eyeball movements of the subject by processing a speech signal collected as input, and uses these movements to generate an animation.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: April 5, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Sandika Biswas, Dipanjan Das, Sanjana Sinha, Brojeshwar Bhowmick
  • Patent number: 11256962
    Abstract: Estimating 3D human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity in recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on 3D-pose-annotated datasets. However, the lack of large-scale 3D annotated training data makes 3D pose estimation difficult in the wild. Embodiments of the present disclosure provide a method which can effectively predict 3D human poses from only the 2D pose in a weakly supervised manner, using both ground-truth 3D pose and ground-truth 2D pose with re-projection error minimization as a constraint to predict the 3D joint locations. The method may further utilize additional geometric constraints on reconstructed body parts to regularize the pose in 3D, along with minimizing re-projection error, to improve the accuracy of the estimated 3D pose.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: February 22, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Sandika Biswas, Sanjana Sinha, Kavya Gupta, Brojeshwar Bhowmick
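The re-projection constraint in the abstract above can be illustrated with a toy loss: project the predicted 3D joints onto the image plane and penalize deviation from the ground-truth 2D pose. The weak-perspective camera, function name, and parameters below are illustrative assumptions, not the patented formulation:

```python
import numpy as np

def reprojection_error(joints_3d, joints_2d, scale=1.0, trans=(0.0, 0.0)):
    """Weak-perspective re-projection loss: drop depth, apply a scale and a
    2D translation, and measure the mean squared distance between the
    projected joints and the ground-truth 2D joints."""
    proj = scale * np.asarray(joints_3d, dtype=float)[:, :2] + np.asarray(trans)
    diff = proj - np.asarray(joints_2d, dtype=float)
    return np.mean(np.sum(diff ** 2, axis=1))
```

During weakly supervised training, minimizing such a term constrains the predicted 3D joints to stay consistent with the observed 2D annotations even when no 3D ground truth is available.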
  • Publication number: 20220036617
    Abstract: Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio for any unknown face and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics differ significantly from the distribution the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate an audio-speech-driven animated talking face using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN-based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta-learning to adapt to an unknown subject's traits and face orientation during inference.
    Type: Application
    Filed: March 11, 2021
    Publication date: February 3, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Sandika BISWAS, Dipanjan DAS, Sanjana SINHA, Brojeshwar BHOWMICK
  • Publication number: 20210366173
    Abstract: Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are: (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) the presence of natural eye blinks. Existing methods mostly address audio-visual lip synchronization and the synthesis of natural facial gestures for overall video realism; however, existing approaches are not accurate. The present disclosure provides a system and method that learn the motion of facial landmarks as an intermediate step before generating texture. Person-independent facial landmarks are generated from audio for invariance to different voices, accents, etc. Eye blinks are imposed on the facial landmarks, and the person-independent landmarks are retargeted to person-specific landmarks to preserve identity-related facial structure.
    Type: Application
    Filed: September 29, 2020
    Publication date: November 25, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Sanjana SINHA, Sandika BISWAS, Brojeshwar BHOWMICK
  • Patent number: 11176724
    Abstract: Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are: (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) the presence of natural eye blinks. Existing methods mostly address audio-visual lip synchronization and the synthesis of natural facial gestures for overall video realism; however, existing approaches are not accurate. The present disclosure provides a system and method that learn the motion of facial landmarks as an intermediate step before generating texture. Person-independent facial landmarks are generated from audio for invariance to different voices, accents, etc. Eye blinks are imposed on the facial landmarks, and the person-independent landmarks are retargeted to person-specific landmarks to preserve identity-related facial structure.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 16, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Sanjana Sinha, Sandika Biswas, Brojeshwar Bhowmick
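The retargeting of person-independent landmarks to person-specific landmarks mentioned in this abstract can be sketched, under a simple uniform-scale assumption (an illustration only, not the patented retargeting), as transferring scale-normalized displacements onto the target's neutral landmarks:

```python
import numpy as np

def retarget_landmarks(src_neutral, src_animated, tgt_neutral):
    """Transfer landmark motion from a person-independent (source) face to a
    person-specific (target) face: take the per-landmark displacement on the
    source, rescale it to the target's overall size, and add it to the
    target's neutral landmarks, preserving the target's facial structure."""
    src_neutral = np.asarray(src_neutral, dtype=float)
    src_animated = np.asarray(src_animated, dtype=float)
    tgt_neutral = np.asarray(tgt_neutral, dtype=float)
    src_scale = np.linalg.norm(src_neutral - src_neutral.mean(0))
    tgt_scale = np.linalg.norm(tgt_neutral - tgt_neutral.mean(0))
    disp = (src_animated - src_neutral) * (tgt_scale / src_scale)
    return tgt_neutral + disp
```

Because the displacement, not the absolute geometry, is transferred, the target's identity-related facial structure survives the motion transfer, which is the property the abstract emphasizes.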
  • Patent number: 10980447
    Abstract: Body joint tracking is applied in various industries and in the medical field. In body joint tracking, markerless devices play an important role. However, markerless devices face challenges in providing optimal tracking due to occlusion, ambiguity, lighting conditions, dynamic objects, etc. The system and method of the present disclosure provide optimized body joint tracking. Here, motion data pertaining to a first set of motion frames are received from a motion sensor. Further, the motion data are processed to obtain a plurality of 3-dimensional cylindrical models, where every cylindrical model among the plurality represents a body segment. The coefficients associated with the plurality of 3-dimensional cylindrical models are initialized to obtain a set of initialized cylindrical models. A set of dynamic coefficients associated with the initialized cylindrical models is utilized to track joint motion trajectories over a set of subsequent frames.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: April 20, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Aniruddha Sinha, Abhijit Das
  • Publication number: 20200342270
    Abstract: Estimating 3D human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity in recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on 3D-pose-annotated datasets. However, the lack of large-scale 3D annotated training data makes 3D pose estimation difficult in the wild. Embodiments of the present disclosure provide a method which can effectively predict 3D human poses from only the 2D pose in a weakly supervised manner, using both ground-truth 3D pose and ground-truth 2D pose with re-projection error minimization as a constraint to predict the 3D joint locations. The method may further utilize additional geometric constraints on reconstructed body parts to regularize the pose in 3D, along with minimizing re-projection error, to improve the accuracy of the estimated 3D pose.
    Type: Application
    Filed: March 11, 2020
    Publication date: October 29, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Sandika BISWAS, Sanjana SINHA, Kavya GUPTA, Brojeshwar BHOWMICK
  • Patent number: 10475231
    Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in a surface based on a surface-fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point cloud defining a reference surface and a template point cloud defining a template surface at local regions or local surfaces using the surface-fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as the texture or illumination of the object or scene being tracked for change detection.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: November 12, 2019
    Assignee: Tata Consultancy Services Limited
    Inventors: Brojeshwar Bhowmick, Swapna Agarwal, Sanjana Sinha, Balamuralidhar Purushothaman, Apurbaa Mallik
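The locally weighted MLS approximation at the heart of this method fits a local surface around each query point using distance-based weights; change can then be scored by how far template points deviate from the surface fitted to the reference cloud. Below is a minimal sketch using a weighted plane fit with Gaussian weights, the simplest MLS variant (function name, the plane-only fit, and the bandwidth parameter `h` are assumptions):

```python
import numpy as np

def mls_plane_residual(ref_points, query, h=0.7):
    """Fit a plane to ref_points with Gaussian weights centred at `query`
    (a first-order locally weighted MLS approximation) and return the
    query point's distance to that plane, usable as a change score."""
    ref_points = np.asarray(ref_points, dtype=float)
    query = np.asarray(query, dtype=float)
    d2 = np.sum((ref_points - query) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))                      # Gaussian weights
    centroid = (w[:, None] * ref_points).sum(0) / w.sum()
    X = ref_points - centroid
    C = (w[:, None] * X).T @ X                     # weighted covariance
    _, vecs = np.linalg.eigh(C)
    normal = vecs[:, 0]                            # least-variance direction
    return abs((query - centroid) @ normal)
```

A large residual at a template point indicates the local geometry has changed relative to the reference surface; because only geometry enters the score, the test is independent of texture and illumination, as the abstract notes.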
  • Publication number: 20190080503
    Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in a surface based on a surface-fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point cloud defining a reference surface and a template point cloud defining a template surface at local regions or local surfaces using the surface-fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as the texture or illumination of the object or scene being tracked for change detection.
    Type: Application
    Filed: February 15, 2018
    Publication date: March 14, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Brojeshwar BHOWMICK, Swapna AGARWAL, Sanjana SINHA, Balamuralidhar PURUSHOTHAMAN, Apurbaa MALLIK
  • Publication number: 20190008421
    Abstract: Body joint tracking is applied in various industries and in the medical field. In body joint tracking, markerless devices play an important role. However, markerless devices face challenges in providing optimal tracking due to occlusion, ambiguity, lighting conditions, dynamic objects, etc. The system and method of the present disclosure provide optimized body joint tracking. Here, motion data pertaining to a first set of motion frames are received from a motion sensor. Further, the motion data are processed to obtain a plurality of 3-dimensional cylindrical models, where every cylindrical model among the plurality represents a body segment. The coefficients associated with the plurality of 3-dimensional cylindrical models are initialized to obtain a set of initialized cylindrical models. A set of dynamic coefficients associated with the initialized cylindrical models is utilized to track joint motion trajectories over a set of subsequent frames.
    Type: Application
    Filed: February 28, 2018
    Publication date: January 10, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Aniruddha Sinha, Abhijit Das
  • Patent number: 10068333
    Abstract: Systems and methods for identifying body joint locations include obtaining skeletal data, depth data, and red, green, and blue (RGB) data pertaining to a user; obtaining, using the input data, an estimate of body joint locations (BJLs) and body segment lengths (BSLs); iteratively identifying, based on the depth data and RGB data, probable correct BJLs in a bounded neighborhood around previously obtained BJLs; comparing a body segment length associated with the probable correct BJLs with a reference length; identifying candidate BJLs based on the comparison; and determining the physical orientation of each body segment by segmenting the three-dimensional (3D) coordinates of each body segment based on the depth data and performing an analysis on each segmented 3D coordinate. A corrected BJL is identified based on a minimal deviation in direction from the physical orientation of the corresponding body segment, along with a feature descriptor of the RGB and depth data.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: September 4, 2018
    Assignee: Tata Consultancy Services Limited
    Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Kingshuk Chakravarty, Aniruddha Sinha, Abhijit Das
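The final correction step in the abstract above, picking the candidate BJL whose direction deviates least from the body segment's physical orientation, reduces to a cosine-similarity selection. A minimal sketch (names are illustrative, and the patent additionally uses RGB/depth feature descriptors, omitted here):

```python
import numpy as np

def select_corrected_bjl(parent_joint, candidates, segment_direction):
    """Among candidate joint locations, pick the one whose direction from
    the parent joint deviates least in angle from the expected physical
    orientation of the body segment (maximum cosine similarity)."""
    parent_joint = np.asarray(parent_joint, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    seg = np.asarray(segment_direction, dtype=float)
    seg = seg / np.linalg.norm(seg)
    dirs = candidates - parent_joint
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return candidates[np.argmax(dirs @ seg)]   # max cosine = min deviation
```

In the patented pipeline this geometric test would be combined with the RGB/depth feature-descriptor evidence before committing to a corrected joint location.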
  • Publication number: 20180085045
    Abstract: A method and system for determining the postural balance of a person are provided. The disclosure provides a single-limb-stance (SLS) body balance analysis system that will aid medical practitioners in analyzing crucial factors for fall-risk minimization, injury prevention, fitness, and rehabilitation. Skeleton data is captured using a Kinect sensor. Two parameters, vibration-jitter and force per unit mass (FPUM), are derived for each body part to assess postural stability during SLS. Further, a first balance score is quantified from the vibration and force imposed on each joint, and a second balance score is calculated by combining a vibration index with the SLS duration; together, these scores indicate the postural balance of the person. The vibration index is computed from the vibration profiles associated with the different body segments.
    Type: Application
    Filed: March 22, 2017
    Publication date: March 29, 2018
    Applicant: Tata Consultancy Services Limited
    Inventors: Kingshuk CHAKRAVARTY, Aniruddha SINHA, Brojeshwar BHOWMICK, Sanjana SINHA, Abhijit DAS
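By Newton's second law, force per unit mass (FPUM) is simply acceleration, so both FPUM and a vibration-jitter index can be approximated from tracked joint trajectories with finite differences. A rough illustrative sketch (the function name, the `fps` parameter, and the jitter definition as mean acceleration magnitude are assumptions, not the patented formulas):

```python
import numpy as np

def joint_jitter_and_fpum(traj, fps=30.0):
    """Given a joint trajectory `traj` of shape (T, 3) from skeleton
    tracking, estimate force per unit mass (acceleration, via F = m*a)
    with second-order finite differences, and summarise vibration-jitter
    as the mean magnitude of that acceleration."""
    dt = 1.0 / fps
    accel = np.diff(np.asarray(traj, dtype=float), n=2, axis=0) / dt**2
    jitter = np.linalg.norm(accel, axis=1).mean()
    return jitter, accel
```

A perfectly steady (constant-velocity) joint yields zero jitter, while tremor during a single-limb stance inflates it, which is why such a measure can feed a balance score.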
  • Publication number: 20180047157
    Abstract: Systems and methods for identifying body joint locations include obtaining skeletal data, depth data, and red, green, and blue (RGB) data pertaining to a user; obtaining, using the input data, an estimate of body joint locations (BJLs) and body segment lengths (BSLs); iteratively identifying, based on the depth data and RGB data, probable correct BJLs in a bounded neighborhood around previously obtained BJLs; comparing a body segment length associated with the probable correct BJLs with a reference length; identifying candidate BJLs based on the comparison; and determining the physical orientation of each body segment by segmenting the three-dimensional (3D) coordinates of each body segment based on the depth data and performing an analysis on each segmented 3D coordinate. A corrected BJL is identified based on a minimal deviation in direction from the physical orientation of the corresponding body segment, along with a feature descriptor of the RGB and depth data.
    Type: Application
    Filed: March 22, 2017
    Publication date: February 15, 2018
    Applicant: Tata Consultancy Services Limited
    Inventors: Sanjana SINHA, Brojeshwar BHOWMICK, Kingshuk CHAKRAVARTY, Aniruddha SINHA, Abhijit DAS