Patents by Inventor Sanjana SINHA
Sanjana SINHA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240403508
Abstract: State-of-the-art approaches for 3D garment simulation have the disadvantages that they 1) work on fixed garment types, 2) work on fixed body shapes, and 3) assume a fixed garment topology. As a result, they do not offer a generic solution for garment simulation. The method and system disclosed herein use a combination of a body-motion-aware ARAP garment deformation and a Physics Enforcing Network (PEN) to generate garment simulations irrespective of garment type, body shape, and garment topology, thus offering a generic solution.
Type: Application
Filed: May 22, 2024
Publication date: December 5, 2024
Applicant: Tata Consultancy Services Limited
Inventors: LOKENDER TIWARI, BROJESHWAR BHOWMICK, SANJANA SINHA
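The abstract pairs an as-rigid-as-possible (ARAP) garment deformation with a learned physics network. As a rough illustration only, the sketch below computes a standard ARAP energy over a toy mesh with NumPy; the vertices, neighborhoods, and uniform weights are placeholders invented for the example and nothing here is taken from the patent itself.

```python
# Minimal sketch (not the patented method): an as-rigid-as-possible (ARAP)
# energy over a garment mesh, of the kind used to deform a garment toward a
# moving body while preserving local rigidity.
import numpy as np

def arap_energy(rest_verts, deformed_verts, neighbors):
    """Sum over vertices of ||R_i (v_j - v_i) - (v'_j - v'_i)||^2,
    where R_i is the best-fitting rotation for vertex i's one-ring."""
    energy = 0.0
    for i, nbrs in neighbors.items():
        P = rest_verts[nbrs] - rest_verts[i]          # rest-pose edge vectors
        Q = deformed_verts[nbrs] - deformed_verts[i]  # deformed edge vectors
        U, _, Vt = np.linalg.svd(P.T @ Q)             # covariance -> rotation
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:                      # guard against reflections
            U[:, -1] *= -1
            R = (U @ Vt).T
        energy += np.sum((Q - P @ R.T) ** 2)
    return energy

# Toy usage: a single triangle with one vertex pulled slightly along x.
rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
deformed = rest + np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0, 0]])
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(arap_energy(rest, deformed, nbrs))
```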
-
Publication number: 20230351662
Abstract: This disclosure relates generally to methods and systems for emotion-controllable generalized talking face generation from an arbitrary face image. Most of the conventional techniques for realistic talking face generation may not be able to control the emotion over the face and have limited scope for generalization to an arbitrary unknown target face. The present disclosure proposes a graph convolutional network that uses a speech content feature along with an independent emotion input to generate emotion- and speech-induced motion on a facial geometry-aware landmark representation. The facial geometry-aware landmark representation is further used by an optical flow-guided texture generation network for producing the texture. A two-branch optical flow-guided texture generation network with motion and texture branches is designed to consider the motion and texture content independently.
Type: Application
Filed: February 2, 2023
Publication date: November 2, 2023
Applicant: Tata Consultancy Services Limited
Inventors: SANJANA SINHA, SANDIKA BISWAS, BROJESHWAR BHOWMICK
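For readers unfamiliar with graph convolution over facial landmarks, the following sketch shows one generic GCN layer applied to landmark features concatenated with shared speech and emotion embeddings. It is a minimal illustration under assumed dimensions and a random placeholder adjacency, not the network described in the application.

```python
# Minimal sketch (not the patented network): one graph-convolution layer over a
# facial-landmark graph, where each landmark's feature concatenates its 2D
# position with shared speech-content and emotion embeddings.
import numpy as np

rng = np.random.default_rng(0)
L, d_pos, d_speech, d_emo, d_out = 68, 2, 16, 8, 32

A = (rng.random((L, L)) < 0.05).astype(float)   # placeholder landmark graph
A = np.maximum(A, A.T) + np.eye(L)              # symmetric, with self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt             # normalized adjacency

pos = rng.standard_normal((L, d_pos))           # landmark coordinates
speech = rng.standard_normal(d_speech)          # speech-content feature
emotion = rng.standard_normal(d_emo)            # independent emotion input
H = np.hstack([pos, np.tile(speech, (L, 1)), np.tile(emotion, (L, 1))])

W = rng.standard_normal((H.shape[1], d_out)) * 0.1
H_next = np.maximum(A_hat @ H @ W, 0.0)         # GCN layer: relu(A_hat H W)
print(H_next.shape)                             # (68, 32)
```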
-
Patent number: 11794347
Abstract: This disclosure relates generally to navigation of a tele-robot in a dynamic environment using in-situ intelligence. Tele-robotics is the area of robotics concerned with the control of robots (tele-robots) in a remote environment from a distance. In reality, the remote environment in which the tele-robot navigates may be dynamic in nature with unpredictable movements, making navigation extremely challenging. The disclosure proposes in-situ intelligent navigation of a tele-robot in a dynamic environment. The disclosed in-situ intelligence enables the tele-robot to understand the dynamic environment by identifying objects and estimating their future locations based on generating/training a motion model. Further, the disclosed techniques also enable communication between a master and the tele-robot (whenever necessary) based on an application-layer communication semantic.
Type: Grant
Filed: March 11, 2021
Date of Patent: October 24, 2023
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Abhijan Bhattacharyya, Ruddra dev Roychoudhury, Sanjana Sinha, Sandika Biswas, Ashis Sau, Madhurima Ganguly, Sayan Paul, Brojeshwar Bhowmick
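The "motion model" mentioned above can be illustrated, very loosely, by the simplest possible predictor: a constant-velocity fit to an object's recent track, extrapolated forward. The sketch below assumes 2D positions with timestamps; it is an illustrative stand-in, not the disclosed model.

```python
# Minimal sketch (an assumption, not the disclosed motion model): predicting a
# dynamic object's future location from its recently observed track with a
# constant-velocity least-squares fit, the simplest motion model a tele-robot
# could use to anticipate obstacles before planning a path.
import numpy as np

def predict_future_position(track_xy, track_t, t_future):
    """Fit x(t), y(t) = a + b*t by least squares and extrapolate to t_future."""
    T = np.vstack([np.ones_like(track_t), track_t]).T      # design matrix [1, t]
    coeffs, *_ = np.linalg.lstsq(T, track_xy, rcond=None)  # rows: intercept, slope
    return np.array([1.0, t_future]) @ coeffs

# Toy usage: an object moving roughly along +x at about 1 m/s.
t = np.array([0.0, 0.5, 1.0, 1.5])
xy = np.array([[0.0, 0.0], [0.52, 0.01], [0.98, -0.02], [1.51, 0.0]])
print(predict_future_position(xy, t, t_future=3.0))        # roughly [3.0, 0.0]
```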
-
Audio-speech driven animated talking face generation using a cascaded generative adversarial network
Patent number: 11551394
Abstract: Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio for arbitrary unknown faces and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics differ significantly from the distribution the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate an audio-speech driven animated talking face using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN-based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta-learning to adapt to an unknown subject's traits and face orientation during inference.
Type: Grant
Filed: March 11, 2021
Date of Patent: January 10, 2023
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Sandika Biswas, Dipanjan Das, Sanjana Sinha, Brojeshwar Bhowmick
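The last sentence mentions meta-learning for fast adaptation to an unknown subject. As a loose, non-authoritative illustration of that general idea, the sketch below runs a Reptile-style meta-training loop on a toy linear regressor and then adapts it to a new "subject" with a few gradient steps; the tasks, model, and hyperparameters are invented for the example and are not the patented CGAN.

```python
# Minimal sketch (an assumption, not the patented method): Reptile-style
# meta-learning on a toy linear regressor, illustrating how a generator could
# be meta-trained so that a few gradient steps adapt it to an unseen subject
# at inference time. Tasks here are synthetic 1-D regressions.
import numpy as np

rng = np.random.default_rng(1)

def inner_adapt(w, X, y, steps=5, lr=0.1):
    """A few gradient steps of squared-error loss on one subject/task."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_meta = np.zeros(2)                                # meta-initialization
for _ in range(200):                                # meta-training over tasks
    true_w = rng.normal(size=2)                     # a "subject-specific" task
    X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 1))])
    y = X @ true_w + 0.01 * rng.normal(size=20)
    w_task = inner_adapt(w_meta, X, y)
    w_meta += 0.1 * (w_task - w_meta)               # Reptile meta-update

# "Inference": adapt to an unseen task with only a handful of examples.
true_w = rng.normal(size=2)
X_new = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 1))])
y_new = X_new @ true_w
print(np.round(inner_adapt(w_meta, X_new, y_new, steps=20), 2), np.round(true_w, 2))
```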
-
Publication number: 20220219325
Abstract: This disclosure relates generally to navigation of a tele-robot in a dynamic environment using in-situ intelligence. Tele-robotics is the area of robotics concerned with the control of robots (tele-robots) in a remote environment from a distance. In reality, the remote environment in which the tele-robot navigates may be dynamic in nature with unpredictable movements, making navigation extremely challenging. The disclosure proposes in-situ intelligent navigation of a tele-robot in a dynamic environment. The disclosed in-situ intelligence enables the tele-robot to understand the dynamic environment by identifying objects and estimating their future locations based on generating/training a motion model. Further, the disclosed techniques also enable communication between a master and the tele-robot (whenever necessary) based on an application-layer communication semantic.
Type: Application
Filed: March 11, 2021
Publication date: July 14, 2022
Applicant: Tata Consultancy Services Limited
Inventors: Abhijan BHATTACHARYYA, Ruddra dev ROYCHOUDHURY, Sanjana SINHA, Sandika BISWAS, Ashis SAU, Madhurima GANGULY, Sayan PAUL, Brojeshwar BHOWMICK
-
Patent number: 11295501
Abstract: Most prior art references that generate animations fail to determine and consider head movement data. The prior art references that do consider head movement data for generating animations rely on a sample video to generate/determine the head movement data and, as a result, fail to capture changing head motions throughout the course of a speech given by a subject in an actual full-length video. The disclosure herein generally relates to generating facial animations, and, more particularly, to a method and system for generating facial animations from a speech signal of a subject. The system determines the head movements, lip movements, and eyeball movements of the subject by processing a speech signal collected as input, and uses the head movements, lip movements, and eyeball movements to generate an animation.
Type: Grant
Filed: March 1, 2021
Date of Patent: April 5, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Sandika Biswas, Dipanjan Das, Sanjana Sinha, Brojeshwar Bhowmick
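As a toy illustration of combining the three motion streams named in the abstract (head movement, lip movement, and eyeball/blink motion) into one animated landmark frame, consider the sketch below; the landmark indexing, the 2D rotation, and the blink model are assumptions made for the example, not the disclosed system.

```python
# Minimal sketch (an assumption, not the disclosed system): composing separately
# predicted head, lip, and eye-blink motion into a single animated landmark
# frame. Landmark layout, rotation, and blink handling are placeholders.
import numpy as np

def animate_frame(neutral_lm, head_R, head_t, lip_idx, lip_disp, eye_idx, blink):
    lm = neutral_lm.copy()
    lm[lip_idx] += lip_disp                  # non-rigid lip motion from speech
    lm[eye_idx, 1] *= (1.0 - blink)          # blink: collapse eyelid height
    return lm @ head_R.T + head_t            # rigid head motion applied last

theta = np.deg2rad(5)                        # small head rotation (2D demo)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
neutral = np.random.default_rng(2).standard_normal((68, 2))
frame = animate_frame(neutral, R, np.array([0.0, 0.1]),
                      lip_idx=np.arange(48, 68), lip_disp=0.02,
                      eye_idx=np.arange(36, 48), blink=0.7)
print(frame.shape)                           # (68, 2)
```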
-
Patent number: 11256962
Abstract: Estimating 3D human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity of recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on 3D-pose-annotated datasets. However, the lack of large-scale 3D annotated training data makes 3D pose estimation difficult in the wild. Embodiments of the present disclosure provide a method that can effectively predict 3D human poses from only 2D poses in a weakly supervised manner by using both ground-truth 3D pose and ground-truth 2D pose, with re-projection error minimization as a constraint to predict the 3D joint locations. The method may further utilize additional geometric constraints on reconstructed body parts to regularize the pose in 3D, along with minimizing the re-projection error, to improve the accuracy of the estimated 3D pose.
Type: Grant
Filed: March 11, 2020
Date of Patent: February 22, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Sandika Biswas, Sanjana Sinha, Kavya Gupta, Brojeshwar Bhowmick
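The two constraints described, a re-projection error against the ground-truth 2D pose and a geometric (bone-length) regularizer, can be written as simple loss terms. The sketch below is a minimal NumPy rendering under an assumed pinhole camera and a three-joint toy skeleton; it is not the patented training procedure.

```python
# Minimal sketch (an assumption, not the patented scheme): the two loss terms
# the abstract describes -- re-projection error between predicted 3D joints
# projected into the image and the ground-truth 2D pose, plus a bone-length
# regularizer against reference segment lengths.
import numpy as np

def reprojection_loss(joints_3d, joints_2d, f=1000.0, c=(500.0, 500.0)):
    """Pinhole projection of predicted 3D joints vs. observed 2D joints."""
    proj = f * joints_3d[:, :2] / joints_3d[:, 2:3] + np.asarray(c)
    return np.mean(np.sum((proj - joints_2d) ** 2, axis=1))

def bone_length_loss(joints_3d, bones, ref_lengths):
    """Penalize deviation of predicted bone lengths from reference lengths."""
    lengths = np.array([np.linalg.norm(joints_3d[i] - joints_3d[j]) for i, j in bones])
    return np.mean((lengths - ref_lengths) ** 2)

# Toy usage: 3 joints (hip, knee, ankle) 3 m in front of the camera.
J3 = np.array([[0.0, 0.0, 3.0], [0.0, 0.45, 3.0], [0.0, 0.9, 3.0]])
J2 = np.array([[500.0, 500.0], [500.0, 650.0], [500.0, 800.0]])
bones = [(0, 1), (1, 2)]
print(reprojection_loss(J3, J2), bone_length_loss(J3, bones, np.array([0.45, 0.45])))
```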
-
AUDIO-SPEECH DRIVEN ANIMATED TALKING FACE GENERATION USING A CASCADED GENERATIVE ADVERSARIAL NETWORK
Publication number: 20220036617
Abstract: Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio for arbitrary unknown faces and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics differ significantly from the distribution the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate an audio-speech driven animated talking face using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN-based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta-learning to adapt to an unknown subject's traits and face orientation during inference.
Type: Application
Filed: March 11, 2021
Publication date: February 3, 2022
Applicant: Tata Consultancy Services Limited
Inventors: Sandika BISWAS, Dipanjan DAS, Sanjana SINHA, Brojeshwar BHOWMICK
-
Publication number: 20210366173
Abstract: Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are: (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) the presence of natural eye blinks. Existing methods mostly address audio-visual lip synchronization and the synthesis of natural facial gestures for overall video realism. However, existing approaches are not accurate. The present disclosure provides a system and method that learn the motion of facial landmarks as an intermediate step before generating texture. Person-independent facial landmarks are generated from audio for invariance to different voices, accents, etc. Eye blinks are imposed on the facial landmarks, and the person-independent landmarks are retargeted to person-specific landmarks to preserve identity-related facial structure.
Type: Application
Filed: September 29, 2020
Publication date: November 25, 2021
Applicant: Tata Consultancy Services Limited
Inventors: Sanjana SINHA, Sandika BISWAS, Brojeshwar BHOWMICK
-
Patent number: 11176724
Abstract: Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are: (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) the presence of natural eye blinks. Existing methods mostly address audio-visual lip synchronization and the synthesis of natural facial gestures for overall video realism. However, existing approaches are not accurate. The present disclosure provides a system and method that learn the motion of facial landmarks as an intermediate step before generating texture. Person-independent facial landmarks are generated from audio for invariance to different voices, accents, etc. Eye blinks are imposed on the facial landmarks, and the person-independent landmarks are retargeted to person-specific landmarks to preserve identity-related facial structure.
Type: Grant
Filed: September 29, 2020
Date of Patent: November 16, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Sanjana Sinha, Sandika Biswas, Brojeshwar Bhowmick
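A rough illustration of the retargeting step described in the two entries above (transferring person-independent landmark motion onto a person-specific face while preserving identity) is sketched below using a similarity (Procrustes) alignment; the canonical and subject landmark sets are synthetic placeholders and the method is only an assumed stand-in for the disclosed retargeting.

```python
# Minimal sketch (an assumption, not the disclosed method): retarget
# person-independent landmark motion onto a person-specific neutral face by
# aligning the canonical neutral face to the subject's with a similarity
# transform and transferring the per-landmark displacements.
import numpy as np

def procrustes(src, dst):
    """Similarity transform (scale s, rotation R, translation t) mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    R = (U @ Vt).T
    s = sig.sum() / np.sum(S ** 2)
    t = mu_d - s * mu_s @ R.T
    return s, R, t

def retarget(canon_neutral, canon_animated, subject_neutral):
    s, R, _ = procrustes(canon_neutral, subject_neutral)
    disp = canon_animated - canon_neutral        # person-independent motion
    return subject_neutral + s * disp @ R.T      # identity-preserving result

rng = np.random.default_rng(3)
canon = rng.standard_normal((68, 2))
animated = canon + 0.05 * rng.standard_normal((68, 2))   # e.g. lip/blink motion
subject = 1.2 * canon + np.array([0.3, -0.1])            # different face shape
print(retarget(canon, animated, subject).shape)          # (68, 2)
```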
-
Patent number: 10980447
Abstract: Body joint tracking is applied in various industries and in the medical field. In body joint tracking, markerless devices play an important role. However, markerless devices face challenges in providing optimal tracking due to occlusion, ambiguity, lighting conditions, dynamic objects, etc. The system and method of the present disclosure provide optimized body joint tracking. Here, motion data pertaining to a first set of motion frames are received from a motion sensor. Further, the motion data are processed to obtain a plurality of 3-dimensional cylindrical models, where every cylindrical model among the plurality of 3-dimensional cylindrical models represents a body segment. The coefficients associated with the plurality of 3-dimensional cylindrical models are initialized to obtain a set of initialized cylindrical models. A set of dynamic coefficients associated with the initialized cylindrical models is utilized to track joint motion trajectories over a set of subsequent frames.
Type: Grant
Filed: February 28, 2018
Date of Patent: April 20, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Aniruddha Sinha, Abhijit Das
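As a loose illustration of the per-segment cylindrical models the abstract describes, the sketch below fits a cylinder axis and radius to a synthetic body-segment point cloud using PCA; the data, the fitting method, and all parameters are assumptions for exposition, not the patented tracker.

```python
# Minimal sketch (an assumption, not the patented tracker): fit a 3D cylinder
# to a body segment's point cloud -- axis from the principal component of the
# points, radius from the mean distance to that axis.
import numpy as np

def fit_cylinder(points):
    center = points.mean(0)
    _, _, Vt = np.linalg.svd(points - center)
    axis = Vt[0]                                  # dominant direction = axis
    rel = points - center
    radial = rel - np.outer(rel @ axis, axis)     # components orthogonal to axis
    radius = np.linalg.norm(radial, axis=1).mean()
    return center, axis, radius

# Toy usage: noisy points on a cylinder of radius 0.05 along z (a "forearm").
rng = np.random.default_rng(4)
t = rng.uniform(0, 0.3, 500)
ang = rng.uniform(0, 2 * np.pi, 500)
pts = np.c_[0.05 * np.cos(ang), 0.05 * np.sin(ang), t] + 0.002 * rng.standard_normal((500, 3))
center, axis, radius = fit_cylinder(pts)
print(np.round(axis, 2), round(radius, 3))        # axis close to [0, 0, +-1], radius close to 0.05
```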
-
Publication number: 20200342270
Abstract: Estimating 3D human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity of recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on 3D-pose-annotated datasets. However, the lack of large-scale 3D annotated training data makes 3D pose estimation difficult in the wild. Embodiments of the present disclosure provide a method that can effectively predict 3D human poses from only 2D poses in a weakly supervised manner by using both ground-truth 3D pose and ground-truth 2D pose, with re-projection error minimization as a constraint to predict the 3D joint locations. The method may further utilize additional geometric constraints on reconstructed body parts to regularize the pose in 3D, along with minimizing the re-projection error, to improve the accuracy of the estimated 3D pose.
Type: Application
Filed: March 11, 2020
Publication date: October 29, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Sandika BISWAS, Sanjana SINHA, Kavya GUPTA, Brojeshwar BHOWMICK
-
Patent number: 10475231
Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method detects changes in a surface based on a surface fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as the texture or illumination of the object or scene being tracked for change detection.
Type: Grant
Filed: February 15, 2018
Date of Patent: November 12, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Brojeshwar Bhowmick, Swapna Agarwal, Sanjana Sinha, Balamuralidhar Purushothaman, Apurbaa Mallik
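A hedged illustration of a locally weighted moving-least-squares change test follows: fit a weighted local plane to the reference cloud around each template point and flag points whose distance to that plane exceeds a threshold. The Gaussian kernel, bandwidth h, and threshold are assumed values, and a plane is used in place of a general MLS polynomial for brevity.

```python
# Minimal sketch (an assumption, not the patented pipeline): locally weighted
# plane fitting around each template point, with the point flagged as changed
# when its distance to the local reference surface exceeds a threshold.
import numpy as np

def local_plane_distance(p, ref_pts, h=0.1):
    d = np.linalg.norm(ref_pts - p, axis=1)
    w = np.exp(-(d / h) ** 2)                               # Gaussian MLS-style weights
    centroid = (w[:, None] * ref_pts).sum(0) / w.sum()
    C = ((ref_pts - centroid).T * w) @ (ref_pts - centroid)  # weighted covariance
    normal = np.linalg.svd(C)[2][-1]                        # smallest-variance direction
    return abs((p - centroid) @ normal)

def detect_changes(template_pts, reference_pts, thresh=0.02):
    return np.array([local_plane_distance(p, reference_pts) > thresh
                     for p in template_pts])

# Toy usage: the reference is the z = 0 plane; one template point is lifted (changed).
rng = np.random.default_rng(5)
reference = np.c_[rng.uniform(-1, 1, (400, 2)), np.zeros(400)]
template = np.array([[0.1, 0.2, 0.0], [-0.3, 0.4, 0.15]])
print(detect_changes(template, reference))                  # [False  True]
```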
-
Publication number: 20190080503
Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method detects changes in a surface based on a surface fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as the texture or illumination of the object or scene being tracked for change detection.
Type: Application
Filed: February 15, 2018
Publication date: March 14, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Brojeshwar BHOWMICK, Swapna AGARWAL, Sanjana SINHA, Balamuralidhar PURUSHOTHAMAN, Apurbaa MALLIK
-
Publication number: 20190008421
Abstract: Body joint tracking is applied in various industries and in the medical field. In body joint tracking, markerless devices play an important role. However, markerless devices face challenges in providing optimal tracking due to occlusion, ambiguity, lighting conditions, dynamic objects, etc. The system and method of the present disclosure provide optimized body joint tracking. Here, motion data pertaining to a first set of motion frames are received from a motion sensor. Further, the motion data are processed to obtain a plurality of 3-dimensional cylindrical models, where every cylindrical model among the plurality of 3-dimensional cylindrical models represents a body segment. The coefficients associated with the plurality of 3-dimensional cylindrical models are initialized to obtain a set of initialized cylindrical models. A set of dynamic coefficients associated with the initialized cylindrical models is utilized to track joint motion trajectories over a set of subsequent frames.
Type: Application
Filed: February 28, 2018
Publication date: January 10, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Aniruddha Sinha, Abhijit Das
-
Patent number: 10068333
Abstract: Systems and methods for identifying body joint locations include obtaining skeletal data, depth data, and red, green, and blue (RGB) data pertaining to a user; obtaining, using the input data, an estimate of body joint locations (BJLs) and body segment lengths (BSLs); iteratively identifying, based on the depth data and RGB data, probable correct BJLs in a bounded neighborhood around previously obtained BJLs; comparing the body segment length associated with the probable correct BJLs with a reference length; identifying candidate BJLs based on the comparison; and determining the physical orientation of each body segment by segmenting the three-dimensional (3D) coordinates of each body segment based on the depth data and performing an analysis on each segmented 3D coordinate. A corrected BJL is identified based on minimal deviation in direction from the physical orientation of the corresponding body segment, along with a feature descriptor of the RGB data and depth data.
Type: Grant
Filed: March 22, 2017
Date of Patent: September 4, 2018
Assignee: Tata Consultancy Services Limited
Inventors: Sanjana Sinha, Brojeshwar Bhowmick, Kingshuk Chakravarty, Aniruddha Sinha, Abhijit Das
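The correction step described, searching a bounded neighborhood for a joint position consistent with the reference segment length and the segment's physical orientation, is illustrated very roughly below; the random candidate sampling and the combined score are assumptions for the example, not the patented procedure.

```python
# Minimal sketch (an assumption, not the patented procedure): correct a noisy
# joint estimate by searching a bounded neighborhood of candidates and keeping
# the one whose bone length best matches the reference segment length and whose
# direction deviates least from the segment's estimated orientation.
import numpy as np

def correct_joint(parent, noisy_joint, ref_length, segment_dir, radius=0.1, n=500, seed=0):
    rng = np.random.default_rng(seed)
    cands = noisy_joint + rng.uniform(-radius, radius, (n, 3))   # bounded neighborhood
    vecs = cands - parent
    lens = np.linalg.norm(vecs, axis=1)
    dirs = vecs / lens[:, None]
    length_err = np.abs(lens - ref_length)
    angle_err = 1.0 - dirs @ (segment_dir / np.linalg.norm(segment_dir))
    return cands[np.argmin(length_err + angle_err)]              # crude combined score

# Toy usage: elbow at the origin, wrist should lie 0.25 m along +x, estimate is off.
parent = np.zeros(3)
noisy = np.array([0.31, 0.06, -0.04])
print(np.round(correct_joint(parent, noisy, ref_length=0.25, segment_dir=np.array([1.0, 0, 0])), 2))
```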
-
Publication number: 20180085045
Abstract: A method and system for determining the postural balance of a person are provided. The disclosure provides a single-limb-stance (SLS) body balance analysis system that will aid medical practitioners in analyzing crucial factors for fall-risk minimization, injury prevention, fitness, and rehabilitation. Skeleton data are captured using a Kinect sensor. Two parameters, vibration-jitter and force per unit mass (FPUM), are derived for each body part to assess postural stability during SLS. Further, a first balance score is quantified from the vibration and force imposed on each joint. A second balance score is also calculated by combining a vibration index and the SLS duration to indicate the postural balance of the person. The vibration index is computed from the vibration profiles associated with different body segments.
Type: Application
Filed: March 22, 2017
Publication date: March 29, 2018
Applicant: Tata Consultancy Services Limited
Inventors: Kingshuk CHAKRAVARTY, Aniruddha SINHA, Brojeshwar BHOWMICK, Sanjana SINHA, Abhijit DAS
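As a loose sketch of the kind of quantities the abstract names, the code below computes a vibration/jitter index from a joint's tracked positions and combines it with the SLS duration into a single balance score; the formulas, weights, and normalization are invented placeholders, not the disclosed scoring.

```python
# Minimal sketch (an assumption, not the disclosed scoring): a vibration/jitter
# index per joint from the second difference (acceleration) of its tracked
# position, and a simple balance score combining the averaged vibration index
# with the single-limb-stance (SLS) duration.
import numpy as np

def vibration_index(joint_xyz, fps=30.0):
    accel = np.diff(joint_xyz, n=2, axis=0) * fps ** 2     # discrete second difference
    return np.sqrt((accel ** 2).sum(axis=1)).mean()        # mean acceleration magnitude

def balance_score(joint_tracks, sls_duration, max_duration=30.0):
    vib = np.mean([vibration_index(j) for j in joint_tracks])
    stability = 1.0 / (1.0 + vib)                          # lower vibration -> higher
    return 0.5 * stability + 0.5 * min(sls_duration / max_duration, 1.0)

# Toy usage: two joints tracked for 5 s at 30 fps with small tremor.
rng = np.random.default_rng(6)
tracks = [np.cumsum(0.001 * rng.standard_normal((150, 3)), axis=0) for _ in range(2)]
print(round(balance_score(tracks, sls_duration=12.0), 3))
```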
-
Publication number: 20180047157
Abstract: Systems and methods for identifying body joint locations include obtaining skeletal data, depth data, and red, green, and blue (RGB) data pertaining to a user; obtaining, using the input data, an estimate of body joint locations (BJLs) and body segment lengths (BSLs); iteratively identifying, based on the depth data and RGB data, probable correct BJLs in a bounded neighborhood around previously obtained BJLs; comparing the body segment length associated with the probable correct BJLs with a reference length; identifying candidate BJLs based on the comparison; and determining the physical orientation of each body segment by segmenting the three-dimensional (3D) coordinates of each body segment based on the depth data and performing an analysis on each segmented 3D coordinate. A corrected BJL is identified based on minimal deviation in direction from the physical orientation of the corresponding body segment, along with a feature descriptor of the RGB data and depth data.
Type: Application
Filed: March 22, 2017
Publication date: February 15, 2018
Applicant: Tata Consultancy Services Limited
Inventors: Sanjana SINHA, Brojeshwar Bhowmick, Kingshuk Chakravarty, Aniruddha Sinha, Abhijit Das