Patents by Inventor Jinwei Gu

Jinwei Gu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220335672
    Abstract: One embodiment of a method includes applying a first generator model to a semantic representation of an image to generate an affine transformation, where the affine transformation represents a bounding box associated with at least one region within the image. The method further includes applying a second generator model to the affine transformation and the semantic representation to generate a shape of an object. The method further includes inserting the object into the image based on the bounding box and the shape.
    Type: Application
    Filed: January 26, 2022
    Publication date: October 20, 2022
    Inventors: Donghoon Lee, Sifei Liu, Jinwei Gu, Ming-Yu Liu, Jan Kautz
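The compositing step described in the abstract above (publication 20220335672) can be pictured with a minimal numpy sketch: given a bounding box decoded from the predicted affine transformation and a predicted shape mask, the object is blended into the image at that location. The function and argument names are illustrative assumptions; the generator models themselves are out of scope here, and this is not the patented implementation.

```python
import numpy as np

def insert_object(image, shape_mask, object_color, bbox):
    """Composite a predicted shape mask into `image` inside `bbox`.

    image        : (H, W, 3) float array, the target scene.
    shape_mask   : (h, w) float array in [0, 1], the predicted object shape.
    object_color : length-3 array, a flat stand-in for rendered appearance.
    bbox         : (top, left, height, width) in pixels, as would be decoded
                   from the predicted affine transformation.
    """
    top, left, height, width = bbox
    # Resize the mask to the bounding box with nearest-neighbour sampling.
    rows = (np.arange(height) * shape_mask.shape[0] / height).astype(int)
    cols = (np.arange(width) * shape_mask.shape[1] / width).astype(int)
    mask = shape_mask[np.ix_(rows, cols)][..., None]          # (height, width, 1)

    out = image.copy()
    region = out[top:top + height, left:left + width]
    # Alpha-blend the object into the region covered by the mask.
    out[top:top + height, left:left + width] = (1 - mask) * region + mask * object_color
    return out
```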
  • Publication number: 20220224881
    Abstract: A method, apparatus, and device for camera calibration, and a storage medium, are provided. A camera to be calibrated for performing depth estimation on a scene is determined. A first correlation function characterizing the correlation between a sensor modulation signal of the camera to be calibrated and a first modulated light emission signal is determined. A second correlation function characterizing the actual correlation produced by the camera to be calibrated is determined. A calibrated impulse response is determined based on the first correlation function and the second correlation function. The camera to be calibrated is then calibrated based on the calibrated impulse response to obtain the calibrated camera.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Inventors: Felipe Gutierrez-Barragan, Huaijin Chen, Jinwei Gu
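One plausible reading of the calibration described above (publication 20220224881) is that the actual correlation function equals the ideal correlation function convolved with the camera's impulse response, so the calibrated impulse response can be recovered by regularized Fourier-domain deconvolution. The sketch below illustrates only that assumed model; it is not taken from the filing.

```python
import numpy as np

def estimate_impulse_response(ideal_corr, measured_corr, eps=1e-6):
    """Recover an impulse response h such that measured_corr ≈ ideal_corr ⊛ h.

    Assumes a circular-convolution model and uses a Wiener-style regularized
    inverse filter; `eps` guards against division by near-zero frequencies.
    """
    F_ideal = np.fft.fft(ideal_corr)
    F_meas = np.fft.fft(measured_corr)
    H = F_meas * np.conj(F_ideal) / (np.abs(F_ideal) ** 2 + eps)
    return np.real(np.fft.ifft(H))

# Toy usage: a known 5-tap impulse response applied to a triangular correlation.
t = np.linspace(0, 1, 256)
ideal = np.maximum(0.0, 1.0 - np.abs(t - 0.5) * 4)
true_h = np.zeros(256)
true_h[:5] = 0.2
measured = np.real(np.fft.ifft(np.fft.fft(ideal) * np.fft.fft(true_h)))
h_est = estimate_impulse_response(ideal, measured)   # ≈ true_h
```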
  • Publication number: 20220165052
    Abstract: A method and device for generating data, and a computer storage medium, are provided. In the method, an original image is obtained and first depth information of the original image is determined; point spread functions for four phases matching the first depth information, and a complete point spread function matching the first depth information, are determined; the original image is processed according to the point spread functions for the four phases to obtain input image data, and the original image is processed according to the complete point spread function to obtain labeled image data; and the input image data and the labeled image data are determined as training data for training a neural network.
    Type: Application
    Filed: January 25, 2022
    Publication date: May 26, 2022
    Inventors: Yufei Gan, Jun Jiang, Jinwei Gu
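A minimal sketch of the training-pair construction described above (publication 20220165052): the original image is blurred with each of the four per-phase point spread functions to form the network input, and with the complete point spread function to form the label. The circular FFT convolution and the array shapes are illustrative assumptions, not the filing's exact procedure.

```python
import numpy as np

def fft_convolve2d(image, psf):
    """Circular 2D convolution of a single-channel image with a PSF kernel."""
    pad = np.zeros(image.shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    # Center the kernel so the blurred output is not spatially shifted.
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))

def make_training_pair(image, phase_psfs, complete_psf):
    """Blur `image` with the four phase PSFs to form the input and with the
    complete PSF to form the label (names and shapes are illustrative)."""
    input_data = np.stack([fft_convolve2d(image, p) for p in phase_psfs])  # (4, H, W)
    labeled_data = fft_convolve2d(image, complete_psf)                     # (H, W)
    return input_data, labeled_data
```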
  • Patent number: 11328169
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (greyscale). The guidance neural network model generates an affinity matrix referred to as a global transformation matrix from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of greyscale video using a single manually colorized key-frame.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: May 10, 2022
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
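The propagation step described in the TPN abstract above (patent 11328169) amounts to multiplying the key-frame's per-pixel properties by a learned global transformation (affinity) matrix. Below is a minimal numpy sketch with illustrative names and a toy affinity matrix; the guidance network that predicts the matrix is not shown, and this is not the patented implementation.

```python
import numpy as np

def propagate_properties(global_transform, keyframe_properties):
    """Propagate per-pixel properties (e.g. color) from a key-frame to another
    frame with a learned global transformation (affinity) matrix.

    global_transform    : (N, N) matrix predicted by the guidance network,
                          where N is the number of pixels.
    keyframe_properties : (N, C) per-pixel properties of the key-frame,
                          e.g. C = 2 chrominance channels for colorization.
    Returns the (N, C) propagated properties for the target frame.
    """
    return global_transform @ keyframe_properties

# Toy usage on a 4-pixel "image": rows of G sum to 1, mostly self-affinity.
G = np.eye(4) * 0.9 + 0.025
colors = np.random.rand(4, 2)            # chrominance of the colorized key-frame
propagated = propagate_properties(G, colors)
```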
  • Patent number: 11328173
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (greyscale). The guidance neural network model generates an affinity matrix referred to as a global transformation matrix from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of greyscale video using a single manually colorized key-frame.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: May 10, 2022
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
  • Publication number: 20220108424
    Abstract: A method and device for image processing, and a computer storage medium, are disclosed. In the method, red-green-blue (RGB) images corresponding to a raw image are obtained by demosaicing the raw image acquired by an image sensor; input data is obtained by downsampling the RGB images; labeled data is generated according to the RGB images; and the input data and the labeled data are determined as training data for training a neural network.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 7, 2022
    Inventors: Yufei Gan, Jun Jiang, Jinwei Gu
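A minimal sketch of the input/label pairing described above (publication 20220108424), assuming the demosaiced RGB image is already available: the input is a downsampled copy and the label is derived from the full-resolution RGB image. The 2x average-pooling downsample and the identity label are illustrative stand-ins for whatever operations the filing actually uses.

```python
import numpy as np

def downsample_2x(rgb):
    """Average-pool an (H, W, 3) image by a factor of two in each dimension.
    H and W are assumed even; a trivial stand-in for the downsampling step."""
    h, w, c = rgb.shape
    return rgb.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def make_training_pair(rgb):
    """Pair a low-resolution input with the full-resolution RGB image as label,
    loosely following the input/label construction in the abstract."""
    return downsample_2x(rgb), rgb
```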
  • Patent number: 11295514
    Abstract: Inverse rendering estimates physical scene attributes (e.g., reflectance, geometry, and lighting) from image(s) and is used for gaming, virtual reality, augmented reality, and robotics. An inverse rendering network (IRN) receives a single input image of a 3D scene and generates the physical scene attributes for the image. The IRN is trained by using the estimated physical scene attributes generated by the IRN to reproduce the input image and updating parameters of the IRN to reduce differences between the reproduced input image and the input image. A direct renderer and a residual appearance renderer (RAR) reproduce the input image. The RAR predicts a residual image representing complex appearance effects of the real (not synthetic) image based on features extracted from the image and the reflectance and geometry properties. The residual image represents near-field illumination, cast shadows, inter-reflections, and realistic shading that are not provided by the direct renderer.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: April 5, 2022
    Assignee: NVIDIA Corporation
    Inventors: Jinwei Gu, Kihwan Kim, Jan Kautz, Guilin Liu, Soumyadip Sengupta
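The training signal described above (patent 11295514) is self-supervised: the direct renderer's output plus the residual image predicted by the RAR should reproduce the input photograph, and the IRN's parameters are updated to shrink the difference. A minimal sketch of that reconstruction loss follows; the renderers themselves are assumed to exist elsewhere and the mean-squared-error choice is an illustrative assumption.

```python
import numpy as np

def reconstruction_loss(input_image, direct_rendered, residual_image):
    """Self-supervised inverse-rendering loss: direct rendering plus the
    RAR's residual image should reproduce the input photo.

    All arrays are (H, W, 3) floats; returns a scalar mean-squared error."""
    reproduced = direct_rendered + residual_image
    return np.mean((reproduced - input_image) ** 2)
```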
  • Publication number: 20220092748
    Abstract: A method for image processing, an electronic device, and a storage medium are provided. In the method, an image to be processed and semantic category information corresponding to each of multiple regions in the image are acquired, where the semantic category information indicates at least one semantic category corresponding to the region; a category mapping parameter corresponding to each of the at least one semantic category is acquired; based on the semantic category information corresponding to each region and the category mapping parameter corresponding to each semantic category, a region mapping parameter corresponding to the region is determined; and the image to be processed is processed based on the region mapping parameters corresponding to the respective regions to obtain a processed image.
    Type: Application
    Filed: November 17, 2021
    Publication date: March 24, 2022
    Inventors: Qian Zhang, Jun Jiang, Jinwei Gu
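A minimal sketch of the region-wise processing described above (publication 20220092748): each region's mapping parameter is formed from the category mapping parameters weighted by that region's semantic category information, and the image is then processed region by region. The choice of a simple multiplicative gain as the mapping parameter is an illustrative assumption, not the claimed mapping.

```python
import numpy as np

def process_by_region(image, region_masks, region_category_probs, category_params):
    """Apply a per-region gain derived from semantic category information.

    image                 : (H, W, 3) image to be processed.
    region_masks          : (R, H, W) binary masks, one per region.
    region_category_probs : (R, K) per-region semantic category weights.
    category_params       : (K,) mapping parameter (here a gain) per category.
    """
    # Region mapping parameter: category parameters weighted by the region's
    # semantic category information.
    region_params = region_category_probs @ category_params          # (R,)
    out = image.astype(float)
    for mask, gain in zip(region_masks, region_params):
        out[mask.astype(bool)] *= gain
    return out
```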
  • Patent number: 11270161
    Abstract: When a computer image is generated from a real-world scene having a semi-reflective surface (e.g. window), the computer image will create, at the semi-reflective surface from the viewpoint of the camera, both a reflection of a scene in front of the semi-reflective surface and a transmission of a scene located behind the semi-reflective surface. Similar to a person viewing the real-world scene from different locations, angles, etc., the reflection and transmission may change, and also move relative to each other, as the viewpoint of the camera changes. Unfortunately, the dynamic nature of the reflection and transmission negatively impacts the performance of many computer applications, but performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: March 8, 2022
    Assignee: NVIDIA Corporation
    Inventors: Orazio Gallo, Jinwei Gu, Jan Kautz, Patrick Wieschollek
  • Patent number: 11037051
    Abstract: Planar regions in three-dimensional scenes offer important geometric cues in a variety of three-dimensional perception tasks such as scene understanding, scene reconstruction, and robot navigation. Image analysis to detect planar regions can be performed by a deep learning architecture that includes a number of neural networks configured to estimate parameters for the planar regions. The neural networks process an image to detect an arbitrary number of plane objects in the image. Each plane object is associated with a number of estimated parameters including bounding box parameters, plane normal parameters, and a segmentation mask. Global parameters for the image, including a depth map, can also be estimated by one of the neural networks. Then, a segmentation refinement network jointly optimizes (i.e., refines) the segmentation masks for each instance of the plane objects and combines the refined segmentation masks to generate an aggregate segmentation mask for the image.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: June 15, 2021
    Assignee: NVIDIA Corporation
    Inventors: Kihwan Kim, Jinwei Gu, Chen Liu, Jan Kautz
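The final aggregation step described above (patent 11037051) combines the refined per-plane segmentation masks into one mask for the whole image. A minimal numpy sketch follows; the thresholding rule and the background label are illustrative assumptions, and the refinement network itself is not shown.

```python
import numpy as np

def aggregate_segmentation(refined_masks, threshold=0.5):
    """Combine per-plane refined masks (P, H, W) into one aggregate label map.

    Pixels whose best plane score falls below `threshold` get label 0
    (non-planar); otherwise the label is the 1-based plane index."""
    best = refined_masks.argmax(axis=0)          # (H, W) index of best plane
    best_score = refined_masks.max(axis=0)       # (H, W) confidence of best plane
    return np.where(best_score >= threshold, best + 1, 0)
```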
  • Patent number: 10984545
    Abstract: Techniques for estimating depth for a video stream captured by a monocular image sensor are disclosed. A sequence of image frames is captured by the monocular image sensor. A first neural network is configured to process at least a portion of the sequence of image frames to generate a depth probability volume (DPV). The depth probability volume includes a plurality of probability maps corresponding to a number of discrete depth candidate locations over a range of depths defined for the scene. The depth probability volume can be updated using a second neural network that is configured to generate adaptive gain parameters to integrate the DPVs over time. A third neural network is configured to refine the updated depth probability volume from a lower resolution to a higher resolution that matches the original resolution of the sequence of image frames. A depth map can be calculated based on the depth probability volume.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: April 20, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jinwei Gu, Kihwan Kim, Chao Liu
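The last step described above (patent 10984545), computing a depth map from the depth probability volume, can be sketched as a probability-weighted average over the candidate depths (a soft argmax). The array layout and the expected-value formulation below are assumptions for illustration rather than the patented computation.

```python
import numpy as np

def depth_from_dpv(dpv, depth_candidates):
    """Expected depth per pixel from a depth probability volume.

    dpv              : (D, H, W) probabilities over D candidate depths,
                       assumed normalized along the first axis.
    depth_candidates : (D,) candidate depth values spanning the scene range.
    Returns an (H, W) depth map as the probability-weighted mean depth."""
    return np.tensordot(depth_candidates, dpv, axes=([0], [0]))
```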
  • Patent number: 10964061
    Abstract: A deep neural network (DNN) system learns a map representation for estimating a camera position and orientation (pose). The DNN is trained to learn a map representation corresponding to the environment, defining positions and attributes of structures, trees, walls, vehicles, etc. The DNN system learns a map representation that is versatile and performs well for many different environments (indoor, outdoor, natural, synthetic, etc.). The DNN system receives images of an environment captured by a camera (observations) and outputs an estimated camera pose within the environment. The estimated camera pose is used to perform camera localization, i.e., recover the three-dimensional (3D) position and orientation of a moving camera, which is a fundamental task in computer vision with a wide variety of applications in robot navigation, car localization for autonomous driving, device localization for mobile navigation, and augmented/virtual reality.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: March 30, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jinwei Gu, Samarth Manoj Brahmbhatt, Kihwan Kim, Jan Kautz
  • Publication number: 20210073575
    Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame represented by dense data (color) to another frame that is represented by coarse data (greyscale). The guidance neural network model generates an affinity matrix referred to as a global transformation matrix from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of greyscale video using a single manually colorized key-frame.
    Type: Application
    Filed: October 27, 2020
    Publication date: March 11, 2021
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Varun Jampani, Jan Kautz
  • Patent number: 10922793
    Abstract: Missing image content is generated using a neural network. In an embodiment, a high resolution image and associated high resolution semantic label map are generated from a low resolution image and associated low resolution semantic label map. The input image/map pair (low resolution image and associated low resolution semantic label map) lacks detail and is therefore missing content. Rather than simply enhancing the input image/map pair, data missing in the input image/map pair is improvised or hallucinated by a neural network, creating plausible content while maintaining spatio-temporal consistency. Missing content is hallucinated to generate a detailed zoomed in portion of an image. Missing content is hallucinated to generate different variations of an image, such as different seasons or weather conditions for a driving video.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 16, 2021
    Assignee: NVIDIA Corporation
    Inventors: Seung-Hwan Baek, Kihwan Kim, Jinwei Gu, Orazio Gallo, Alejandro Jose Troccoli, Ming-Yu Liu, Jan Kautz
  • Publication number: 20200342263
    Abstract: When a computer image is generated from a real-world scene having a semi-reflective surface (e.g. window), the computer image will create, at the semi-reflective surface from the viewpoint of the camera, both a reflection of a scene in front of the semi-reflective surface and a transmission of a scene located behind the semi-reflective surface. Similar to a person viewing the real-world scene from different locations, angles, etc., the reflection and transmission may change, and also move relative to each other, as the viewpoint of the camera changes. Unfortunately, the dynamic nature of the reflection and transmission negatively impacts the performance of many computer applications, but performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
    Type: Application
    Filed: July 8, 2020
    Publication date: October 29, 2020
    Inventors: Orazio Gallo, Jinwei Gu, Jan Kautz, Patrick Wieschollek
  • Publication number: 20200294194
    Abstract: A video stitching system combines video from different cameras to form a panoramic video that, in various embodiments, is temporally stable and tolerant to strong parallax. In an embodiment, the system provides a smooth spatial interpolation that can be used to connect the input video images. In an embodiment, the system applies an interpolation layer to slices of the overlapping video sources, and the network learns a dense flow field to smoothly align the input videos with spatial interpolation. Various embodiments are applicable to areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
    Type: Application
    Filed: March 11, 2019
    Publication date: September 17, 2020
    Inventors: Deqing Sun, Orazio Gallo, Jan Kautz, Jinwei Gu, Wei-Sheng Lai
  • Patent number: 10762425
    Abstract: A spatial linear propagation network (SLPN) system learns the affinity matrix for vision tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The SLPN system is trained for a particular computer vision task and refines an input map (i.e., affinity matrix) that indicates pixels that share a particular property (e.g., color, object, texture, shape, etc.). Inputs to the SLPN system are input data (e.g., pixel values for an image) and the input map corresponding to the input data to be propagated. The input data is processed to produce task-specific affinity values (guidance data). The task-specific affinity values are applied to values in the input map, with at least two weighted values from each column contributing to a value in the refined map data for the adjacent column.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: September 1, 2020
    Assignee: NVIDIA Corporation
    Inventors: Sifei Liu, Shalini De Mello, Jinwei Gu, Ming-Hsuan Yang, Jan Kautz
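The propagation rule described above (patent 10762425) can be pictured as a column-by-column scan in which each pixel's refined value mixes its own input value with weighted values from neighboring pixels in the previous column. The sketch below shows one left-to-right pass with three neighbors per pixel; the exact weighting scheme is an assumption, not the patented formulation.

```python
import numpy as np

def propagate_left_to_right(input_map, weights):
    """One left-to-right pass of spatial linear propagation.

    input_map : (H, W) map to refine (e.g. a coarse mask).
    weights   : (3, H, W) learned affinity weights for the three pixels in the
                previous column (row above, same row, row below). Weights are
                assumed non-negative with per-pixel sum <= 1; the remainder
                keeps the original input value.
    """
    H, W = input_map.shape
    refined = input_map.astype(float)
    for j in range(1, W):
        prev = refined[:, j - 1]
        # Neighbors in the previous column: shifted up, aligned, shifted down.
        up = np.roll(prev, 1)
        down = np.roll(prev, -1)
        w = weights[:, :, j]
        keep = 1.0 - w.sum(axis=0)
        refined[:, j] = keep * input_map[:, j] + w[0] * up + w[1] * prev + w[2] * down
    return refined
```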
  • Patent number: 10762620
    Abstract: When a computer image is generated from a real-world scene having a semi-reflective surface (e.g. window), the computer image will create, at the semi-reflective surface from the viewpoint of the camera, both a reflection of a scene in front of the semi-reflective surface and a transmission of a scene located behind the semi-reflective surface. Similar to a person viewing the real-world scene from different locations, angles, etc., the reflection and transmission may change, and also move relative to each other, as the viewpoint of the camera changes. Unfortunately, the dynamic nature of the reflection and transmission negatively impacts the performance of many computer applications, but performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: September 1, 2020
    Assignee: NVIDIA Corporation
    Inventors: Orazio Gallo, Jinwei Gu, Jan Kautz, Patrick Wieschollek
  • Publication number: 20200273207
    Abstract: A deep neural network (DNN) system learns a map representation for estimating a camera position and orientation (pose). The DNN is trained to learn a map representation corresponding to the environment, defining positions and attributes of structures, trees, walls, vehicles, etc. The DNN system learns a map representation that is versatile and performs well for many different environments (indoor, outdoor, natural, synthetic, etc.). The DNN system receives images of an environment captured by a camera (observations) and outputs an estimated camera pose within the environment. The estimated camera pose is used to perform camera localization, i.e., recover the three-dimensional (3D) position and orientation of a moving camera, which is a fundamental task in computer vision with a wide variety of applications in robot navigation, car localization for autonomous driving, device localization for mobile navigation, and augmented/virtual reality.
    Type: Application
    Filed: May 12, 2020
    Publication date: August 27, 2020
    Inventors: Jinwei Gu, Samarth Manoj Brahmbhatt, Kihwan Kim, Jan Kautz
  • Patent number: 10692244
    Abstract: A deep neural network (DNN) system learns a map representation for estimating a camera position and orientation (pose). The DNN is trained to learn a map representation corresponding to the environment, defining positions and attributes of structures, trees, walls, vehicles, etc. The DNN system learns a map representation that is versatile and performs well for many different environments (indoor, outdoor, natural, synthetic, etc.). The DNN system receives images of an environment captured by a camera (observations) and outputs an estimated camera pose within the environment. The estimated camera pose is used to perform camera localization, i.e., recover the three-dimensional (3D) position and orientation of a moving camera, which is a fundamental task in computer vision with a wide variety of applications in robot navigation, car localization for autonomous driving, device localization for mobile navigation, and augmented/virtual reality.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: June 23, 2020
    Assignee: NVIDIA Corporation
    Inventors: Jinwei Gu, Samarth Manoj Brahmbhatt, Kihwan Kim, Jan Kautz