Patents by Inventor Shiqiu Liu

Shiqiu Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220092736
    Abstract: Apparatuses, systems, and techniques to enhance video are disclosed. In at least one embodiment, one or more neural networks are used to create a higher resolution video using upsampled frames from a lower resolution video.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Shiqiu Liu, Matthieu Le, Andrew Tao
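As a rough illustration of the idea in publication 20220092736, the Python sketch below upsamples each low-resolution frame and passes it to a placeholder network for refinement. The `refine_network` function is a hypothetical stand-in; the abstract does not describe the actual model, inputs, or training.

```python
# Illustrative sketch only: upsample each low-resolution frame, then let a
# neural network refine it into a high-resolution frame. `refine_network`
# is a hypothetical placeholder for whatever model the patent contemplates.
import numpy as np

def upsample_nearest(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Naive nearest-neighbor upsample of an (H, W, C) frame."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def refine_network(upsampled: np.ndarray) -> np.ndarray:
    """Placeholder for a learned refinement model (identity here)."""
    return upsampled

def super_resolve(low_res_video, scale=2):
    high_res_video = []
    for frame in low_res_video:
        coarse = upsample_nearest(frame, scale)        # cheap upsampled guess
        high_res_video.append(refine_network(coarse))  # network restores detail
    return high_res_video
```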
  • Publication number: 20220038654
    Abstract: Apparatuses, systems, and techniques to generate interpolated video frames. In at least one embodiment, an interpolated video frame is generated based, at least in part, on a first set of pixel data sampled from a first video frame, and a second set of pixel data sampled from a second video frame based, at least in part, on a set of forward pointing motion vectors from the first video frame to the second video frame.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Fitsum Reda, Karan Sapra, Robert Thomas Pottorff, Shiqiu Liu, Andrew Tao, Bryan Christopher Catanzaro
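The sketch below illustrates, in simplified form, the sampling scheme named in publication 20220038654: pixel data is gathered from both frames using forward motion vectors scaled by the interpolation time. Querying the motion field at the target pixel, nearest-neighbor gathering, and the linear blend are illustrative assumptions, not the claimed method.

```python
# Rough sketch: build an intermediate frame at time t in (0, 1) by sampling the
# first frame along t * mv and the second frame along (1 - t) * mv, where mv
# holds forward motion vectors from frame0 to frame1. The gather-with-rounding
# and the linear blend are simplifying assumptions.
import numpy as np

def interpolate_frame(frame0, frame1, motion_vectors, t=0.5):
    h, w = frame0.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u = motion_vectors[..., 0]
    v = motion_vectors[..., 1]

    # Sample locations in each source frame (nearest-neighbor, clamped).
    x0 = np.clip(np.rint(xs - t * u), 0, w - 1).astype(int)
    y0 = np.clip(np.rint(ys - t * v), 0, h - 1).astype(int)
    x1 = np.clip(np.rint(xs + (1 - t) * u), 0, w - 1).astype(int)
    y1 = np.clip(np.rint(ys + (1 - t) * v), 0, h - 1).astype(int)

    sample0 = frame0[y0, x0]   # pixel data gathered from the first frame
    sample1 = frame1[y1, x1]   # pixel data gathered from the second frame
    return (1 - t) * sample0 + t * sample1
```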
  • Publication number: 20220038653
    Abstract: Apparatuses, systems, and techniques to generate interpolated video frames. In at least one embodiment, an interpolated video frame is generated based, at least in part, on one of a plurality of possible motions of one or more objects from a first video frame to a second video frame.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Fitsum Reda, Karan Sapra, Robert Thomas Pottorff, Shiqiu Liu, Andrew Tao, Bryan Christopher Catanzaro
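Publication 20220038653 selects among multiple candidate motions; the sketch below shows one hypothetical way such a selection could look for a single pixel, scoring each candidate by how well the two frames agree along it. The candidate set and the absolute-difference score are assumptions made purely for illustration.

```python
# Illustrative sketch: for a single pixel, score several candidate motions by
# how consistently frame0 and frame1 agree along each candidate, then use the
# best candidate to place the pixel in the interpolated frame.
import numpy as np

def pick_motion(frame0, frame1, x, y, candidates, t=0.5):
    h, w = frame0.shape[:2]
    best, best_err = None, np.inf
    for (u, v) in candidates:
        x1 = int(np.clip(x + u, 0, w - 1))
        y1 = int(np.clip(y + v, 0, h - 1))
        err = np.abs(frame0[y, x].astype(float) - frame1[y1, x1].astype(float)).sum()
        if err < best_err:
            best, best_err = (u, v), err
    u, v = best
    # Destination of this pixel at intermediate time t under the chosen motion.
    return int(round(x + t * u)), int(round(y + t * v))
```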
  • Publication number: 20210342980
    Abstract: Various approaches are disclosed to temporally and spatially filter noisy image data—generated using one or more ray-tracing effects—in a graphically rendered image. Rather than fully sampling data values using spatial filters, the data values may be sparsely sampled using filter taps within the spatial filters. To account for the sparse sampling, locations of filter taps may be jittered spatially and/or temporally. For filtering efficiency, a size of a spatial filter may be reduced when historical data values are used to temporally filter pixels. Further, data values filtered using a temporal filter may be clamped to avoid ghosting. For further filtering efficiency, a spatial filter may be applied as a separable filter in which the filtering for a filter direction may be performed over multiple iterations using reducing filter widths, decreasing the chance of visual artifacts when the spatial filter does not follow a true Gaussian distribution.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
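Two of the ideas named in publication 20210342980 are easy to sketch: evaluating a spatial filter with a sparse, jittered subset of taps, and clamping temporally reprojected history to the local neighborhood range to limit ghosting. The tap count, jitter source, and blend factor below are illustrative assumptions.

```python
# Sketch of two ideas from the abstract: (1) a spatial filter evaluated with a
# sparse, jittered subset of taps instead of every pixel in its footprint, and
# (2) clamping the temporally reprojected history value to the local color
# range so stale history cannot cause ghosting.
import numpy as np

rng = np.random.default_rng(0)

def sparse_spatial_filter(image, x, y, radius, num_taps=8):
    h, w = image.shape[:2]
    acc = np.zeros(image.shape[2:], dtype=float)
    for _ in range(num_taps):
        # Jittered tap location inside the filter footprint.
        dx, dy = rng.uniform(-radius, radius, size=2)
        tx = int(np.clip(x + dx, 0, w - 1))
        ty = int(np.clip(y + dy, 0, h - 1))
        acc += image[ty, tx]
    return acc / num_taps

def temporal_filter(current, history, neighborhood_min, neighborhood_max, alpha=0.1):
    # Clamp history to the current neighborhood range to avoid ghosting,
    # then blend it with the current (spatially filtered) value.
    clamped = np.clip(history, neighborhood_min, neighborhood_max)
    return alpha * current + (1 - alpha) * clamped
```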
  • Publication number: 20210279841
    Abstract: Apparatuses, systems, and techniques for texture synthesis from small input textures in images using convolutional neural networks. In at least one embodiment, one or more convolutional layers are used in conjunction with one or more transposed convolution operations to generate a large textured output image from a small input textured image while preserving global features and texture, according to various novel techniques described herein.
    Type: Application
    Filed: March 9, 2020
    Publication date: September 9, 2021
    Inventors: Guilin Liu, Andrew Tao, Bryan Christopher Catanzaro, Ting-Chun Wang, Zhiding Yu, Shiqiu Liu, Fitsum Reda, Karan Sapra, Brandon Rowlett
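The sketch below shows, in toy form, the overall shape described in publication 20210279841: convolutional layers followed by transposed convolutions that turn a small textured input into a larger textured output. The channel counts, kernel sizes, and fixed 4x upscale are illustrative choices, not the patented architecture, and the module is untrained.

```python
# Toy PyTorch sketch: a few convolutional layers followed by transposed
# convolutions that enlarge a small textured input into a larger output.
import torch
import torch.nn as nn

class TextureExpander(nn.Module):
    def __init__(self, channels=3, features=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.expand = nn.Sequential(
            nn.ConvTranspose2d(features, features, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(features, channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, small_texture):                    # (N, C, H, W)
        return self.expand(self.encode(small_texture))   # (N, C, 4H, 4W)

# A 64x64 input texture becomes a 256x256 output (untrained, for shape only).
out = TextureExpander()(torch.randn(1, 3, 64, 64))
```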
  • Patent number: 11113792
    Abstract: Various approaches are disclosed to temporally and spatially filter noisy image data—generated using one or more ray-tracing effects—in a graphically rendered image. Rather than fully sampling data values using spatial filters, the data values may be sparsely sampled using filter taps within the spatial filters. To account for the sparse sampling, locations of filter taps may be jittered spatially and/or temporally. For filtering efficiency, a size of a spatial filter may be reduced when historical data values are used to temporally filter pixels. Further, data values filtered using a temporal filter may be clamped to avoid ghosting. For further filtering efficiency, a spatial filter may be applied as a separable filter in which the filtering for a filter direction may be performed over multiple iterations using reducing filter widths, decreasing the chance of visual artifacts when the spatial filter does not follow a true Gaussian distribution.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: September 7, 2021
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
  • Publication number: 20210264571
    Abstract: This disclosure presents a method and computer program product to denoise a ray traced scene. An apparatus for processing a ray traced scene is also disclosed. In one example, the method includes: (1) generating filtered scene data by filtering modified scene data from original scene data utilizing a spatial filter, and (2) providing a denoised ray traced scene by adjusting the filtered scene data utilizing a temporal filter. The modified and adjusted scene data can be sent to a rendering processor or system to complete rendering to generate a final scene.
    Type: Application
    Filed: April 27, 2021
    Publication date: August 26, 2021
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
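Publication 20210264571 orders the pipeline as spatial filtering followed by a temporal adjustment. The sketch below keeps only that ordering; the box filter and the exponential blend factor are illustrative assumptions.

```python
# Compact sketch of the two-step order named in the abstract: spatially filter
# the noisy scene data, then adjust the result with a temporal filter that
# blends in the previous denoised output.
import numpy as np

def spatial_box_filter(noisy, radius=2):
    h, w = noisy.shape[:2]
    out = np.zeros_like(noisy, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = noisy[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

def temporal_adjust(filtered, previous_denoised, alpha=0.2):
    # Exponential blend toward the new spatially filtered data.
    return alpha * filtered + (1 - alpha) * previous_denoised

def denoise_frame(noisy, previous_denoised):
    return temporal_adjust(spatial_box_filter(noisy), previous_denoised)
```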
  • Patent number: 10991079
    Abstract: This disclosure presents a method to denoise a ray traced scene where the ray tracing uses a minimal number of rays. The method can use temporal reprojections to compute a weighted average to the scene data. A spatial filter can be run on the scene data, using the temporal reprojection count to reduce the size of the utilized spatial filter radius. In some aspects, additional temporal filters can be applied to the scene data. In some aspects, global illumination temporal reprojection history counts can be used to modify the spatial filter radius. In some aspects, caustic photon tracing can be conducted to compute a logarithmic cost, which can then be utilized to reduce the denoising radius used by the spatial filter. The modified and adjusted scene data can be sent to a rendering process to complete the rendering to generate a final scene.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: April 27, 2021
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
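Patent 10991079 ties the spatial filter radius to the temporal reprojection history. The sketch below shows one plausible form of that coupling: a running average weighted by the history count, and a radius that shrinks as the count grows. The exact formulas are illustrative assumptions.

```python
# Sketch of two elements from the abstract: a running temporal average whose
# weight follows the reprojection history count, and a spatial filter radius
# that shrinks as that count grows (more accumulated samples need less blur).
import numpy as np

def temporal_accumulate(current, history, history_count):
    # Weighted average: the longer the valid history, the smaller the
    # contribution of the new noisy sample.
    weight = 1.0 / (history_count + 1)
    return weight * current + (1.0 - weight) * history

def spatial_radius(base_radius, history_count, min_radius=1.0):
    # Shrink the filter radius as more temporally reprojected samples exist.
    return max(min_radius, base_radius / np.sqrt(history_count + 1))
```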
  • Publication number: 20210073944
    Abstract: Apparatuses, systems, and techniques to enhance video are disclosed. In at least one embodiment, one or more neural networks are used to create a higher resolution video using upsampled frames from a lower resolution video.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 11, 2021
    Inventors: Shiqiu Liu, Matthieu Le, Andrew Tao
  • Publication number: 20200349755
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Application
    Filed: July 22, 2020
    Publication date: November 5, 2020
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
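Publication 20200349755 derives an anisotropic reflection filter from the BRDF lobe geometry. The sketch below is a heavily simplified stand-in: it treats the lobe as a cone whose spread grows with roughness, projects it over the reflection hit distance, and stretches the footprint at grazing view angles. Every formula in it is an assumption, not the patented construction.

```python
# Heavily simplified stand-in: approximate the BRDF lobe as a cone, project it
# over the reflected-ray hit distance to get a world-space footprint, and
# elongate that footprint at grazing view angles to form an elliptical kernel.
import math

def reflection_filter_footprint(roughness, hit_distance, cos_view_angle):
    # Lobe half-angle: 0 for a mirror, wider for rough surfaces (assumed mapping).
    half_angle = roughness * (math.pi / 4.0)
    # World-space width of the lobe after traveling the reflected-ray distance.
    minor_axis = 2.0 * hit_distance * math.tan(half_angle)
    # Stretch along the view direction as the view becomes more grazing.
    major_axis = minor_axis / max(cos_view_angle, 1e-3)
    return major_axis, minor_axis

# Example: a moderately rough surface seen at roughly 60 degrees.
print(reflection_filter_footprint(roughness=0.3, hit_distance=2.0, cos_view_angle=0.5))
```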
  • Publication number: 20200334891
    Abstract: In various examples, the actual spatial properties of a virtual environment are used to produce, for a pixel, an anisotropic filter kernel for a filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment. Geometry of the virtual environment may be computed based at least in part on a projection of a light source onto a surface through an occluder, in order to determine a footprint that reflects a contribution of the light source to lighting conditions of the pixel associated with a point on the surface. The footprint may define a size, orientation, and/or shape of the anisotropic filter kernel and corresponding filter weights. The anisotropic filter kernel may be applied to the pixel to produce a graphically-rendered image of the virtual environment.
    Type: Application
    Filed: July 6, 2020
    Publication date: October 22, 2020
    Inventor: Shiqiu Liu
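Publication 20200334891 sizes the filter from the footprint a light source projects onto the surface through an occluder. The similar-triangles estimate below, in the style of percentage-closer soft shadows, illustrates that footprint idea; it is offered as an approximation, not the claimed geometry.

```python
# Simplified similar-triangles estimate of the footprint an area light projects
# onto a receiving surface through an occluder. A wider footprint would map to
# a wider (and possibly anisotropic) filter kernel.
def penumbra_footprint(light_size, blocker_distance, receiver_distance):
    """Width of the penumbra on the receiver, from similar triangles.

    light_size:        extent of the area light
    blocker_distance:  distance from the light to the occluder
    receiver_distance: distance from the light to the shaded point
    """
    return light_size * (receiver_distance - blocker_distance) / blocker_distance

print(penumbra_footprint(light_size=0.5, blocker_distance=2.0, receiver_distance=5.0))
```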
  • Patent number: 10776985
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: September 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Patent number: 10740954
    Abstract: In various examples, the actual spatial properties of a virtual environment are used to produce, for a pixel, an anisotropic filter kernel for a filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment. Geometry of the virtual environment may be computed based at least in part on a projection of a light source onto a surface through an occluder, in order to determine a footprint that reflects a contribution of the light source to lighting conditions of the pixel associated with a point on the surface. The footprint may define a size, orientation, and/or shape of the anisotropic filter kernel and corresponding filter weights. The anisotropic filter kernel may be applied to the pixel to produce a graphically-rendered image of the virtual environment.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: August 11, 2020
    Assignee: NVIDIA Corporation
    Inventor: Shiqiu Liu
  • Publication number: 20200058105
    Abstract: Various approaches are disclosed to temporally and spatially filter noisy image data—generated using one or more ray-tracing effects—in a graphically rendered image. Rather than fully sampling data values using spatial filters, the data values may be sparsely sampled using filter taps within the spatial filters. To account for the sparse sampling, locations of filter taps may be jittered spatially and/or temporally. For filtering efficiency, a size of a spatial filter may be reduced when historical data values are used to temporally filter pixels. Further, data values filtered using a temporal filter may be clamped to avoid ghosting. For further filtering efficiency, a spatial filter may be applied as a separable filter in which the filtering for a filter direction may be performed over multiple iterations using reducing filter widths, decreasing the chance of visual artifacts when the spatial filter does not follow a true Gaussian distribution.
    Type: Application
    Filed: August 14, 2019
    Publication date: February 20, 2020
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
  • Publication number: 20200058103
    Abstract: This disclosure presents a method to denoise a ray traced scene where the ray tracing uses a minimal number of rays. The method can use temporal reprojections to compute a weighted average to the scene data. A spatial filter can be run on the scene data, using the temporal reprojection count to reduce the size of the utilized spatial filter radius. In some aspects, additional temporal filters can be applied to the scene data. In some aspects, global illumination temporal reprojection history counts can be used to modify the spatial filter radius. In some aspects, caustic photon tracing can be conducted to compute a logarithmic cost, which can then be utilized to reduce the denoising radius used by the spatial filter. The modified and adjusted scene data can be sent to a rendering process to complete the rendering to generate a final scene.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 20, 2020
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
  • Publication number: 20190287294
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 19, 2019
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Publication number: 20190287291
    Abstract: In various examples, the actual spatial properties of a virtual environment are used to produce, for a pixel, an anisotropic filter kernel for a filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment. Geometry of the virtual environment may be computed based at least in part on a projection of a light source onto a surface through an occluder, in order to determine a footprint that reflects a contribution of the light source to lighting conditions of the pixel associated with a point on the surface. The footprint may define a size, orientation, and/or shape of the anisotropic filter kernel and corresponding filter weights. The anisotropic filter kernel may be applied to the pixel to produce a graphically-rendered image of the virtual environment.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 19, 2019
    Inventor: Shiqiu Liu
  • Publication number: 20160004332
    Abstract: Embodiments of the present invention disclose a method for simulating multitouch with two joysticks, comprising: receiving movement parameters input during movement of a first joystick and a second joystick; obtaining movement traces of a first mouse pointer corresponding to the first joystick and a second mouse pointer corresponding to the second joystick on the graphical interface of a terminal unit, according to the movement parameters; and generating a corresponding touch gesture signal according to the movement traces of the first mouse pointer and the second mouse pointer. With this method, display terminals without touch screens can support multitouch, thereby improving the compatibility of such terminal units.
    Type: Application
    Filed: July 2, 2014
    Publication date: January 7, 2016
    Inventors: Jian Qiu, Shiqiu Liu, Xun Tang, Zhi Deng, Shan Wang
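Publication 20160004332 maps two joystick-driven pointer traces to a touch gesture. The sketch below shows one schematic way to do that, emitting a pinch-in or pinch-out signal from the change in distance between the two pointers; the deltas and the threshold are hypothetical illustration values.

```python
# Schematic sketch: integrate the movement of two joystick-driven pointers and
# emit a pinch gesture signal based on how their separation changes.
import math

def update_pointer(pos, delta):
    return (pos[0] + delta[0], pos[1] + delta[1])

def detect_pinch(p1_trace, p2_trace, threshold=10.0):
    start = math.dist(p1_trace[0], p2_trace[0])
    end = math.dist(p1_trace[-1], p2_trace[-1])
    if end - start > threshold:
        return "pinch-out (zoom in)"
    if start - end > threshold:
        return "pinch-in (zoom out)"
    return "no gesture"

# Hypothetical traces: the pointers move apart, so this reports a pinch-out.
p1, p2 = [(100.0, 100.0)], [(140.0, 100.0)]
for d1, d2 in [((-5, 0), (5, 0))] * 4:
    p1.append(update_pointer(p1[-1], d1))
    p2.append(update_pointer(p2[-1], d2))
print(detect_pinch(p1, p2))
```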