Patents by Inventor Soumyadip Sengupta

Soumyadip Sengupta has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230343033
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Application
    Filed: June 29, 2023
    Publication date: October 26, 2023
    Inventors: Chen Cao, Menglei Chai, Linjie Luo, Soumyadip Sengupta
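One illustrative reading of the abstract above: a pinhole camera associates each pixel with a viewing ray, which can serve as the axis of the cone of light that the pixel subtends in the scene, and candidate 3D vectors can be projected onto those rays. The sketch below is a minimal, hypothetical rendering of that idea, not the patent's actual construction; the function names and the scalar-projection formulation are assumptions for illustration.

```python
import math

def pixel_rays(width, height, focal):
    """Unit view-ray direction for every pixel of a pinhole camera.

    Each per-pixel ray stands in for the axis of the 'light cone' the
    pixel subtends (an illustrative reading of the abstract; the
    patented method may construct its cones differently).
    """
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0  # principal point
    rays = []
    for y in range(height):
        row = []
        for x in range(width):
            d = ((x - cx) / focal, (y - cy) / focal, 1.0)
            n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
            row.append((d[0] / n, d[1] / n, d[2] / n))  # normalize
        rays.append(row)
    return rays

def project_onto_ray(vec, ray):
    """Scalar projection (dot product) of a 3D vector onto a unit pixel ray."""
    return sum(v * r for v, r in zip(vec, ray))
```

For a 5x5 image, the center pixel's ray points straight down the optical axis, so projecting the vector (0, 0, 1) onto it yields 1.0; off-center rays tilt outward and give smaller projections.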
  • Patent number: 11710275
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: July 25, 2023
    Assignee: Snap Inc.
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Patent number: 11295514
    Abstract: Inverse rendering estimates physical scene attributes (e.g., reflectance, geometry, and lighting) from image(s) and is used for gaming, virtual reality, augmented reality, and robotics. An inverse rendering network (IRN) receives a single input image of a 3D scene and generates the physical scene attributes for the image. The IRN is trained by using the estimated physical scene attributes generated by the IRN to reproduce the input image and updating parameters of the IRN to reduce differences between the reproduced input image and the input image. A direct renderer and a residual appearance renderer (RAR) reproduce the input image. The RAR predicts a residual image representing complex appearance effects of the real (not synthetic) image based on features extracted from the image and the reflectance and geometry properties. The residual image represents near-field illumination, cast shadows, inter-reflections, and realistic shading that are not provided by the direct renderer.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: April 5, 2022
    Assignee: NVIDIA Corporation
    Inventors: Jinwei Gu, Kihwan Kim, Jan Kautz, Guilin Liu, Soumyadip Sengupta
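The training scheme in the abstract above is self-supervised: the network's estimated scene attributes are rendered back into an image, and the parameters are updated to shrink the gap between the reproduction and the input. The toy sketch below illustrates only that loop's shape, under loud assumptions: a single scalar "albedo" stands in for the IRN's estimated reflectance, the direct renderer is simply albedo times lighting, and the RAR's residual is zeroed out. None of these names or simplifications come from the patent itself.

```python
def train_irn_step(image, albedo, lighting, lr=0.1):
    """One self-supervised update: render from the estimated attribute,
    compare against the input image, and nudge the estimate.

    Toy stand-in for the patented scheme (illustrative assumptions):
    'albedo' plays the role of the IRN's estimated reflectance, the
    direct render is albedo * lighting, and 'residual' stands in for
    the RAR's correction (held at zero here).
    """
    rendered = [albedo * l for l in lighting]        # direct renderer
    residual = [0.0 for _ in image]                  # RAR output, zeroed
    reproduced = [r + e for r, e in zip(rendered, residual)]
    # Mean-squared reconstruction loss against the *input* image:
    # self-supervision, so no ground-truth albedo is ever needed.
    errs = [p - t for p, t in zip(reproduced, image)]
    loss = sum(e * e for e in errs) / len(image)
    # Analytic gradient of the loss w.r.t. the scalar albedo.
    grad = 2.0 * sum(e * l for e, l in zip(errs, lighting)) / len(image)
    return albedo - lr * grad, loss
```

Iterating this step drives the estimated albedo toward the value that reproduces the input image, mirroring (in miniature) how reducing reconstruction error trains the full network.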
  • Publication number: 20220036647
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Application
    Filed: October 12, 2021
    Publication date: February 3, 2022
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Patent number: 11164376
    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 2, 2021
    Assignee: Snap Inc.
    Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
  • Publication number: 20200160593
    Abstract: Inverse rendering estimates physical scene attributes (e.g., reflectance, geometry, and lighting) from image(s) and is used for gaming, virtual reality, augmented reality, and robotics. An inverse rendering network (IRN) receives a single input image of a 3D scene and generates the physical scene attributes for the image. The IRN is trained by using the estimated physical scene attributes generated by the IRN to reproduce the input image and updating parameters of the IRN to reduce differences between the reproduced input image and the input image. A direct renderer and a residual appearance renderer (RAR) reproduce the input image. The RAR predicts a residual image representing complex appearance effects of the real (not synthetic) image based on features extracted from the image and the reflectance and geometry properties. The residual image represents near-field illumination, cast shadows, inter-reflections, and realistic shading that are not provided by the direct renderer.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 21, 2020
    Inventors: Jinwei Gu, Kihwan Kim, Jan Kautz, Guilin Liu, Soumyadip Sengupta