Patents by Inventor Jon Niklas Theodor Hasselgren

Jon Niklas Theodor Hasselgren has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200126191
    Abstract: A neural network-based rendering technique increases temporal stability and image fidelity of low sample count path tracing by optimizing a distribution of samples for rendering each image in a sequence. A sample predictor neural network learns spatio-temporal sampling strategies such as placing more samples in dis-occluded regions and tracking specular highlights. Temporal feedback enables a denoiser neural network to boost the effective input sample count and to increase temporal stability. The initial uniform sampling step typically present in adaptive sampling algorithms is not needed. The sample predictor and denoiser operate at interactive rates to achieve significantly improved image quality and temporal stability compared with conventional adaptive sampling techniques. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: December 17, 2019
    Publication date: April 23, 2020
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Anjul Patney, Marco Salvi, Aaron Eliot Lefohn, Donald Lee Brittain
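    A minimal sketch of the loop this abstract describes, assuming a PyTorch-style implementation: a sample predictor maps the previous denoised frame to a per-pixel sample map, a renderer (stubbed here) traces paths according to that map, and a denoiser consumes the noisy frame plus temporal feedback. All module shapes, channel counts, and the renderer stub are illustrative assumptions, not the patented design.

      import torch
      import torch.nn as nn

      class SamplePredictor(nn.Module):
          """Predicts a non-negative per-pixel sampling density from temporal feedback."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())
          def forward(self, feedback):
              return self.net(feedback)

      class Denoiser(nn.Module):
          """Denoises the sparsely sampled frame, conditioned on temporal feedback."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1))
          def forward(self, noisy, feedback):
              return self.net(torch.cat([noisy, feedback], dim=1))

      def render(sample_map):
          # Stand-in for a path tracer that distributes rays per the sample map.
          return torch.randn(1, 3, *sample_map.shape[-2:])

      predictor, denoiser = SamplePredictor(), Denoiser()
      feedback = torch.zeros(1, 3, 64, 64)   # previous denoised frame (motion-warped in practice)
      for _ in range(3):                     # per frame in the sequence
          sample_map = predictor(feedback)   # no initial uniform sampling pass needed
          feedback = denoiser(render(sample_map), feedback)

    The property mirrored here is that the sample map is predicted directly from the temporal feedback, so the uniform pre-pass of conventional adaptive sampling is skipped.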
  • Publication number: 20200126192
    Abstract: A neural network-based rendering technique increases temporal stability and image fidelity of low sample count path tracing by optimizing a distribution of samples for rendering each image in a sequence. A sample predictor neural network learns spatio-temporal sampling strategies such as placing more samples in dis-occluded regions and tracking specular highlights. Temporal feedback enables a denoiser neural network to boost the effective input sample count and to increase temporal stability. The initial uniform sampling step typically present in adaptive sampling algorithms is not needed. The sample predictor and denoiser operate at interactive rates to achieve significantly improved image quality and temporal stability compared with conventional adaptive sampling techniques. (See the code sketch after the preceding entry, which shares this abstract.)
    Type: Application
    Filed: December 18, 2019
    Publication date: April 23, 2020
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Anjul Patney, Marco Salvi, Aaron Eliot Lefohn, Donald Lee Brittain
  • Patent number: 10565686
    Abstract: A method, computer readable medium, and system are disclosed for training a neural network. The method includes the steps of selecting an input sample from a set of training data that includes input samples and noisy target samples, where the input samples and the noisy target samples each correspond to a latent, clean target sample. The input sample is processed by a neural network model to produce an output, and a noisy target sample is selected from the set of training data, where the noisy target samples have a distribution relative to the latent, clean target sample. The method also includes adjusting parameter values of the neural network model to reduce differences between the output and the noisy target sample. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: February 18, 2020
    Assignee: NVIDIA Corporation
    Inventors: Jaakko T. Lehtinen, Timo Oskari Aila, Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
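    The procedure in this abstract pairs noisy inputs with noisy targets that are both drawn around the same latent clean image, and the loss is taken against the noisy target directly. A minimal sketch, assuming PyTorch and a toy synthetic dataset (the network, noise level, and data are illustrative assumptions):

      import torch
      import torch.nn as nn

      clean = torch.rand(100, 1, 32, 32)                 # latent clean samples; never used in the loss
      inputs  = clean + 0.1 * torch.randn_like(clean)    # noisy input samples
      targets = clean + 0.1 * torch.randn_like(clean)    # noisy targets, zero-mean around the latent image

      model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 1, 3, padding=1))
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)

      for step in range(200):
          i = torch.randint(0, 100, (8,))                # select input samples
          out = model(inputs[i])                         # process through the model
          loss = nn.functional.mse_loss(out, targets[i]) # compare against the noisy targets
          opt.zero_grad()
          loss.backward()
          opt.step()                                     # adjust parameters to reduce the difference

    Because the target noise is zero-mean, minimizing the expected L2 loss against noisy targets has the same minimizer as training against the clean targets, which is the key observation behind this style of training.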
  • Publication number: 20200051206
    Abstract: A neural network structure, namely a warped external recurrent neural network, is disclosed for reconstructing images with synthesized effects. The effects can include motion blur, depth of field reconstruction (e.g., simulating lens effects), and/or anti-aliasing (e.g., removing artifacts caused by sampling frequency). The warped external recurrent neural network is not recurrent at each layer inside the neural network. Instead, the external state output by the final layer of the neural network is warped and provided as a portion of the input to the neural network for the next image in a sequence of images. In contrast, in a conventional recurrent neural network, hidden state generated at each layer is provided as a feedback input to the generating layer. The neural network can be implemented, at least in part, on a processor. In an embodiment, the neural network is implemented on at least one parallel processing unit. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: May 24, 2019
    Publication date: February 13, 2020
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Marco Salvi
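    A minimal sketch of the recurrence structure described above, assuming PyTorch: the network itself is purely feed-forward, and only its final output (the external state) is warped by motion vectors and concatenated to the next frame's input. The identity sampling grid stands in for a real motion-vector field; shapes and channel counts are assumptions.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      net = nn.Sequential(                    # no hidden state inside any layer
          nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
          nn.Conv2d(32, 3, 3, padding=1))

      H = W = 64
      # Identity grid as a placeholder for per-pixel motion vectors.
      ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                              torch.linspace(-1, 1, W), indexing="ij")
      grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)

      state = torch.zeros(1, 3, H, W)         # external state from the previous frame
      for frame in [torch.rand(1, 3, H, W) for _ in range(3)]:
          warped = F.grid_sample(state, grid, align_corners=True)  # re-project last output
          state = net(torch.cat([frame, warped], dim=1))           # new external state

    Only the final-layer output recirculates, so the recurrence is "external" to the network, in contrast to per-layer hidden state in a conventional RNN.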
  • Publication number: 20180357753
    Abstract: A method, computer readable medium, and system are disclosed for training a neural network. The method includes the steps of selecting an input sample from a set of training data that includes input samples and noisy target samples, where the input samples and the noisy target samples each correspond to a latent, clean target sample. The input sample is processed by a neural network model to produce an output, and a noisy target sample is selected from the set of training data, where the noisy target samples have a distribution relative to the latent, clean target sample. The method also includes adjusting parameter values of the neural network model to reduce differences between the output and the noisy target sample.
    Type: Application
    Filed: November 8, 2017
    Publication date: December 13, 2018
    Inventors: Jaakko T. Lehtinen, Timo Oskari Aila, Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
  • Publication number: 20180357537
    Abstract: A method, computer readable medium, and system are disclosed for training a neural network model. The method includes the step of selecting an input vector from a set of training data that includes input vectors and sparse target vectors, where each sparse target vector includes target data corresponding to a subset of samples within an output vector of the neural network model. The method also includes the steps of processing the input vector by the neural network model to produce output data for the samples within the output vector and adjusting parameter values of the neural network model to reduce differences between the output vector and the sparse target vector for the subset of the samples. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: January 26, 2018
    Publication date: December 13, 2018
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Jaakko T. Lehtinen, Timo Oskari Aila
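    A minimal sketch of training against sparse targets as described above, assuming PyTorch: target data exists for only a subset of the output samples, and the loss (and hence the parameter update) is restricted to that subset via a mask. The mask density, network, and synthetic data are illustrative assumptions.

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 1, 3, padding=1))
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)

      for step in range(200):
          x = torch.rand(8, 1, 32, 32)                    # input vectors
          target = torch.rand(8, 1, 32, 32)               # dense stand-in for the sparse targets
          mask = (torch.rand_like(target) < 0.1).float()  # targets cover ~10% of output samples
          out = model(x)                                  # output data for all samples
          # Mean squared error over only the subset of samples with target data.
          loss = ((out - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)
          opt.zero_grad()
          loss.backward()
          opt.step()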