Patents by Inventor Lokender Tiwari

Lokender Tiwari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078356
    Abstract: Garments in their natural form are represented by meshes, where vertices (entities) are connected (related) to each other through mesh edges. Earlier methods largely ignored this relational nature of garment data when modeling garments. The present disclosure provides a particle-based garment simulation system and method that learns to simulate template garments on arbitrary target body poses by representing the physical state of garment vertices as particles, expressed as nodes in a graph, whose dynamics (the velocities of the garment vertices) are computed through learned message-passing. The system and method exploit this relational nature of garment data, and the network is implemented to enforce a strong relational inductive bias on garment dynamics, thereby accurately simulating garments on the target body pose, conditioned on body motion and fabric type, at any resolution without modification, even for loose garments, unlike existing state-of-the-art (SOTA) methods. (An illustrative sketch of such a message-passing update follows this entry.)
    Type: Application
    Filed: June 13, 2023
    Publication date: March 7, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
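
To make the abstract's core mechanism concrete, here is a minimal, hypothetical sketch of one message-passing step over a garment mesh graph, with vertices as particle nodes and mesh edges as graph edges. The function and weight names (`message_passing_step`, `W_msg`, `W_upd`) are illustrative placeholders, not the patent's actual network, and the "learned" weights are random here for demonstration.

```python
import numpy as np

def message_passing_step(node_states, edges, W_msg, W_upd):
    """One message-passing round over a garment mesh graph.

    node_states: (N, F) per-vertex features (e.g., position, velocity, fabric type)
    edges:       (E, 2) mesh edges as (sender, receiver) vertex-index pairs
    W_msg:       (2F, F) weights mapping an edge's endpoint states to a message
    W_upd:       (2F, F) weights updating a node from its aggregated messages
    """
    # Mesh edges are undirected: pass messages in both directions.
    both = np.vstack([edges, edges[:, ::-1]])
    senders, receivers = both[:, 0], both[:, 1]
    # One message per directed edge, computed from the two endpoint states.
    edge_inputs = np.concatenate([node_states[senders], node_states[receivers]], axis=1)
    messages = np.tanh(edge_inputs @ W_msg)
    # Sum incoming messages at every receiver vertex.
    aggregated = np.zeros_like(node_states)
    np.add.at(aggregated, receivers, messages)
    # Update each node from its own state plus what its neighbors sent.
    update_inputs = np.concatenate([node_states, aggregated], axis=1)
    return np.tanh(update_inputs @ W_upd)

# Toy garment patch: 4 vertices, two triangles sharing the edge (1, 2).
rng = np.random.default_rng(0)
F = 8
states = rng.normal(size=(4, F))
edges = np.array([[0, 1], [1, 2], [2, 0], [1, 3], [3, 2]])
W_msg = rng.normal(size=(2 * F, F)) * 0.1
W_upd = rng.normal(size=(2 * F, F)) * 0.1
print(message_passing_step(states, edges, W_msg, W_upd).shape)  # (4, 8)
```

In a full simulator of this kind, several such rounds would be stacked and the final node features decoded into per-vertex velocities, conditioned on body motion and fabric type as the abstract describes.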
  • Patent number: 11778162
    Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping; draping 3D garments over a 3D human body has immense applications in virtual try-on and animation, where accurate fitment of the 3D garment is of the utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct a 3D skinned garment for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep-neural-network-based method that predicts fast and accurate 3D garments. The method couples geometric and multi-view perceptual constraints to efficiently learn the high-frequency geometry of garment deformations. (An illustrative sketch of the shared-input idea follows this entry.)
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: October 3, 2023
    Assignee: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
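
As a loose illustration of the "shared space of variations" idea (not the patent's actual architecture), the sketch below concatenates body-shape, body-pose, and garment-style parameters into one input vector and maps it through a small MLP to per-vertex displacements of a template garment. All names, vector sizes, and the random weights are assumptions for demonstration.

```python
import numpy as np

def drape(template_verts, shape, pose, style, layers):
    """Predict draped garment vertices from shared body/garment parameters.

    template_verts: (V, 3) rest-pose garment mesh vertices
    shape, pose, style: 1-D parameter vectors (e.g., SMPL-style betas/thetas)
    layers: list of (W, b) pairs defining a small MLP
    """
    x = np.concatenate([shape, pose, style])    # shared space of variations
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0.0)          # hidden layers (ReLU)
    W, b = layers[-1]
    displacements = (x @ W + b).reshape(-1, 3)  # one 3-D offset per vertex
    return template_verts + displacements

rng = np.random.default_rng(1)
V = 100
template = rng.normal(size=(V, 3))
shape, pose, style = rng.normal(size=10), rng.normal(size=72), rng.normal(size=4)
dims = [10 + 72 + 4, 64, V * 3]
layers = [(rng.normal(size=(a, b)) * 0.1, np.zeros(b)) for a, b in zip(dims, dims[1:])]
print(drape(template, shape, pose, style, layers).shape)  # (100, 3)
```

A real system would train such a network with the geometric and multi-view perceptual losses the abstract mentions, rather than using random weights.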
  • Publication number: 20220368882
    Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping; draping 3D garments over a 3D human body has immense applications in virtual try-on and animation, where accurate fitment of the 3D garment is of the utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct a 3D skinned garment for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep-neural-network-based method that predicts fast and accurate 3D garments. The method couples geometric and multi-view perceptual constraints to efficiently learn the high-frequency geometry of garment deformations.
    Type: Application
    Filed: December 29, 2021
    Publication date: November 17, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Lokender Tiwari, Brojeshwar Bhowmick
  • Patent number: 11468585
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device. (An illustrative sketch of the pseudo RGB-D back-projection step follows this entry.)
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: October 11, 2022
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari
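
The pipeline's "pseudo RGB-D" step pairs each RGB frame with a CNN-predicted depth map and lifts it into a 3D point cloud. The following minimal sketch shows that back-projection with pinhole intrinsics; the depth network itself is out of scope, so `depth` below is a constant stand-in for its output, and the intrinsics `K` are made-up example values.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-frame 3-D points (H*W, 3)."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx  # inverse pinhole projection, x-axis
    y = (v - cy) * z / fy  # inverse pinhole projection, y-axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def transform(points, R, t):
    """Map camera-frame points into the world frame with pose (R, t)."""
    return points @ R.T + t

# Toy example: a flat synthetic depth map and an identity camera pose.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)   # stand-in for the CNN's predicted depth
cloud = transform(backproject(depth, K), np.eye(3), np.zeros(3))
print(cloud.shape)  # (307200, 3)
```

A pose estimation/refinement module would then recover (R, t) per frame, for example by minimizing reprojection error across these pseudo RGB-D frames, and the transformed clouds would be fused into the 3D map described in the claims.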
  • Publication number: 20210065391
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device.
    Type: Application
    Filed: August 7, 2020
    Publication date: March 4, 2021
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari