Patents by Inventor Lokender Tiwari

Lokender Tiwari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240403508
    Abstract: State-of-the-art approaches to 3D garment simulation have three drawbacks: they 1) work on a fixed garment type, 2) work on fixed body shapes, and 3) assume a fixed garment topology. As a result, they do not offer a generic solution for garment simulation. The method and system disclosed herein use a combination of a body-motion-aware ARAP garment deformation and a Physics Enforcing Network (PEN) to generate garment simulations irrespective of garment type, body shape, and garment topology, thus offering a generic solution. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: May 22, 2024
    Publication date: December 5, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick, Sanjana Sinha
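
The two-stage pipeline described in publication 20240403508 can be illustrated with a minimal sketch: an ARAP-style deformation provides a geometric estimate, and a small per-vertex network (standing in for the Physics Enforcing Network) predicts a residual correction. The layer sizes, feature choices, and residual formulation are assumptions for illustration, not details taken from the publication.

```python
# Hypothetical two-stage pipeline: ARAP gives a geometric estimate,
# a per-vertex MLP (stand-in for the PEN) adds a learned correction.
import torch
import torch.nn as nn

class PhysicsEnforcingNet(nn.Module):
    def __init__(self, feat_dim: int = 6, hidden: int = 128):
        super().__init__()
        # Per-vertex MLP: input is the ARAP-deformed position plus the
        # vertex displacement; output is a residual correction toward a
        # physically plausible drape. Because it is applied per vertex,
        # the sketch is agnostic to garment type and mesh topology.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, arap_verts, rest_verts):
        # arap_verts, rest_verts: (N, 3) garment vertex positions.
        feats = torch.cat([arap_verts, arap_verts - rest_verts], dim=-1)
        return arap_verts + self.mlp(feats)

# Toy usage: a 500-vertex garment, with the ARAP result faked as noise.
rest = torch.randn(500, 3)
arap = rest + 0.05 * torch.randn(500, 3)
corrected = PhysicsEnforcingNet()(arap, rest)   # (500, 3)
```
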
  • Publication number: 20240338834
    Abstract: Estimating temporally consistent 3D human body shape, pose, and motion from a monocular video is a challenging task due to occlusions, poor lighting conditions, complex articulated body poses, depth ambiguity, and the limited availability of annotated data. Embodiments of the present disclosure provide a method for temporally consistent motion estimation from monocular video. A monocular video of one or more persons is captured by a weak perspective camera, and spatial features of each person's body are extracted from every frame of the video. Initial estimates of body shape, body pose, and weak perspective camera features are then obtained. The spatial features and initial estimates are aggregated into spatio-temporal features through a combination of self-similarity matrices between the spatial, pose, and camera features and self-attention maps over the camera and spatial features. The spatio-temporally aggregated features are then used to predict the shape and pose parameters of the person(s). (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 20, 2023
    Publication date: October 10, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Sushovan Chanda, Hrishav Bakul Barua, Brojeshwar Bhowmick, Avinash Sharma, Amogh Tiwari
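
A minimal sketch of the aggregation idea in publication 20240338834: per-frame spatial features are mixed across time using a self-similarity matrix as attention weights, encouraging temporally consistent estimates. The cosine similarity, softmax weighting, and temperature are illustrative assumptions; the publication combines several such matrices with self-attention maps.

```python
# Hedged sketch: self-similarity across frames used as soft attention.
import torch
import torch.nn.functional as F

def self_similarity(feats: torch.Tensor) -> torch.Tensor:
    # feats: (T, D) per-frame features; returns a (T, T) cosine-similarity
    # matrix between all pairs of frames.
    normed = F.normalize(feats, dim=-1)
    return normed @ normed.T

def aggregate(feats: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # Each output row is a similarity-weighted mix of all frames, so
    # similar frames reinforce each other over time.
    attn = torch.softmax(self_similarity(feats) / temperature, dim=-1)
    return attn @ feats

frames = torch.randn(30, 256)    # 30 frames of extracted spatial features
smooth = aggregate(frames)       # (30, 256) spatio-temporal features
```
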
  • Publication number: 20240078356
    Abstract: Garments in their natural form are represented by meshes, where vertices (entities) are connected (related) to each other through mesh edges. Earlier methods largely ignored this relational nature of garment data when modeling garments. The present disclosure provides a particle-based garment system and method that learn to simulate template garments on arbitrary target body poses by representing the physical state of garment vertices as particles, expressed as nodes in a graph, whose dynamics (the velocities of garment vertices) are computed through learned message passing. By exploiting the relational nature of garment data, the network enforces a strong relational inductive bias on garment dynamics, thereby accurately simulating garments on the target body pose, conditioned on body motion and fabric type, at any resolution and without modification even for loose garments, unlike existing state-of-the-art (SOTA) methods. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 13, 2023
    Publication date: March 7, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
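
The particle-and-graph formulation in publication 20240078356 can be sketched as one learned message-passing step: garment vertices are nodes, mesh edges carry messages, and the node update predicts per-vertex velocities. The MLP shapes and the six-dimensional particle state (position plus velocity) are assumptions for illustration.

```python
# Hedged sketch of one message-passing step over a garment graph.
import torch
import torch.nn as nn

class EdgeMessagePassing(nn.Module):
    def __init__(self, node_dim: int = 6, hidden: int = 64):
        super().__init__()
        # Message MLP reads both endpoint states; update MLP reads the
        # node state plus its aggregated incoming messages and predicts
        # the vertex velocity (the garment dynamics).
        self.msg = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.upd = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, x: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # x: (N, node_dim) particle states; edges: (E, 2) mesh edge indices.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))          # (E, hidden)
        agg = torch.zeros(x.size(0), m.size(1)).index_add_(0, dst, m)
        return self.upd(torch.cat([x, agg], dim=-1))               # (N, 3)

verts = torch.randn(100, 6)               # position + velocity per particle
edges = torch.randint(0, 100, (300, 2))   # stand-in mesh connectivity
vel = EdgeMessagePassing()(verts, edges)  # predicted per-vertex velocities
```
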
  • Patent number: 11778162
    Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping; draping 3D garments over a 3D human body has immense applications in virtual try-on and animation, where accurate fitment of the 3D garment is of utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct 3D skinned garments for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep-neural-network-based method that predicts fast and accurate 3D garments. The method couples geometric and multi-view perceptual constraints to efficiently learn the high-frequency geometry of the garment deformation. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: October 3, 2023
    Assignee: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
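
A hedged sketch of the coupled objective in patent 11778162: a geometric term on predicted garment vertices combined with a multi-view perceptual term on rendered views. The renderer and feature network below are toy stand-ins (random tensors and average pooling); the patent's actual losses and their weighting are not specified here.

```python
# Hedged sketch: geometric loss + multi-view perceptual loss, combined.
import torch
import torch.nn.functional as F

def geometric_loss(pred_verts, gt_verts):
    # Plain per-vertex L2; the patent describes a plurality of losses.
    return F.mse_loss(pred_verts, gt_verts)

def perceptual_loss(pred_views, gt_views, feature_net):
    # Compare deep features of rendered garment views across cameras.
    return F.l1_loss(feature_net(pred_views), feature_net(gt_views))

# Toy stand-ins: 4 camera views of 64x64 renders; "features" = avg pool.
feature_net = lambda imgs: imgs.mean(dim=(-1, -2))
pred_v, gt_v = torch.randn(2, 500, 3), torch.randn(2, 500, 3)
pred_im, gt_im = torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64)
loss = geometric_loss(pred_v, gt_v) \
     + 0.1 * perceptual_loss(pred_im, gt_im, feature_net)
```
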
  • Publication number: 20220368882
    Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping; draping 3D garments over a 3D human body has immense applications in virtual try-on and animation, where accurate fitment of the 3D garment is of utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct 3D skinned garments for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep-neural-network-based method that predicts fast and accurate 3D garments. The method couples geometric and multi-view perceptual constraints to efficiently learn the high-frequency geometry of the garment deformation.
    Type: Application
    Filed: December 29, 2021
    Publication date: November 17, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
  • Patent number: 11468585
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of the surrounding environment displayed on a visualization device. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: October 11, 2022
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari
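
The "pseudo RGB-D" idea in patent 11468585 pairs each RGB frame with a CNN-predicted depth map. The sketch below shows the back-projection of such a depth map into a camera-space point cloud, which a standard RGB-D SfM or mapping back end could then consume; the intrinsic values and depth values are dummies, and the depth network itself is out of scope here.

```python
# Hedged sketch: back-project a predicted depth map to a point cloud.
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    # depth: (H, W) predicted depth map; returns (H*W, 3) points in
    # camera coordinates via the pinhole camera model.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.abs(np.random.randn(120, 160)) + 1.0   # stand-in CNN depth map
cloud = backproject(depth, fx=200.0, fy=200.0, cx=80.0, cy=60.0)
# `cloud` plus the RGB frame forms one pseudo RGB-D input for pose
# estimation and 3D map construction.
```
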
  • Publication number: 20210065391
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of the surrounding environment displayed on a visualization device.
    Type: Application
    Filed: August 7, 2020
    Publication date: March 4, 2021
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari