Patents by Inventor Timo Oskari Aila
Timo Oskari Aila has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210042503
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: October 13, 2020
Publication date: February 11, 2021
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 10866990
Abstract: An apparatus, computer readable medium, and method are disclosed for decompressing compressed geometric data stored in a lossless compression format. The compressed geometric data resides within a compression block sized according to a system cache line. An indirection technique maps a global identifier value in a linear identifier space to corresponding variable rate compressed data. The apparatus may include decompression circuitry within a graphics processing unit configured to perform ray-tracing.
Type: Grant
Filed: July 3, 2019
Date of Patent: December 15, 2020
Assignee: NVIDIA Corporation
Inventors: Jaakko Lehtinen, Timo Oskari Aila, Tero Tapani Karras, Alexander Keller, Nikolaus Binder, Carsten Alexander Waechter, Samuli Matias Laine
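The indirection scheme in the abstract above can be sketched in a few lines: a global identifier in a linear id space is resolved through a small table to the cache-line-sized block holding its variable-rate compressed record. The block size, table layout, and names below are illustrative assumptions, not the patented format.

```python
# Sketch: variable-rate records packed into fixed-size blocks, with an
# indirection table mapping each global id to (block, offset, length).
BLOCK_SIZE = 128  # bytes; stand-in for a system cache line

class CompressedStore:
    def __init__(self):
        self.blocks = []   # bytearrays, each at most BLOCK_SIZE bytes
        self.table = []    # global id -> (block index, byte offset, length)

    def append(self, payload: bytes) -> int:
        """Append one variable-rate record; returns its global id."""
        if not self.blocks or len(self.blocks[-1]) + len(payload) > BLOCK_SIZE:
            self.blocks.append(bytearray())
        block_idx = len(self.blocks) - 1
        offset = len(self.blocks[block_idx])
        self.blocks[block_idx] += payload
        self.table.append((block_idx, offset, len(payload)))
        return len(self.table) - 1

    def lookup(self, global_id: int) -> bytes:
        """Resolve a linear-space id to its compressed record."""
        block_idx, offset, length = self.table[global_id]
        return bytes(self.blocks[block_idx][offset:offset + length])

store = CompressedStore()
id_a = store.append(b"tri0")
id_b = store.append(b"tri1-longer-record")
assert store.lookup(id_a) == b"tri0"
assert store.lookup(id_b) == b"tri1-longer-record"
```

A real implementation would store compressed bitstreams and do the lookup in hardware; the point here is only the id-to-block indirection.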
-
Publication number: 20200242739
Abstract: A neural network architecture is disclosed for restoring noisy data. The neural network is a blind-spot network that can be trained according to a self-supervised framework. In an embodiment, the blind-spot network includes a plurality of network branches. Each network branch processes a version of the input data using one or more layers associated with kernels that have a receptive field that extends in a particular half-plane relative to the output value. In one embodiment, the versions of the input data are offset in a particular direction and the convolution kernels are rotated to correspond to the particular direction of the associated network branch. In another embodiment, the versions of the input data are rotated and the convolution kernel is the same for each network branch. The outputs of the network branches are composited to de-noise the image. In some embodiments, Bayesian filtering is performed to de-noise the input data.
Type: Application
Filed: December 20, 2019
Publication date: July 30, 2020
Inventors: Samuli Matias Laine, Tero Tapani Karras, Jaakko T. Lehtinen, Timo Oskari Aila
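The rotation-based variant above can be illustrated crudely in pure Python: each of four branches sees only the half-plane strictly above a pixel, the input is rotated once per branch, and the branch outputs are rotated back and composited, so the output at a pixel never depends on that pixel's own noisy value. In the patent the branches are learned convolutional layers; here each branch is just a mean over the k pixels directly above, an assumption made for brevity.

```python
def rot90(img):
    """Rotate a 2D list of lists 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def half_plane_mean(img, k=2):
    """Branch output: mean of up to k pixels strictly above each pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            above = [img[y - d][x] for d in range(1, k + 1) if y - d >= 0]
            # strictly above, so the pixel itself (the blind spot) is never
            # read; top-row pixels with nothing above fall back to 0.0
            out[y][x] = sum(above) / len(above) if above else 0.0
    return out

def blind_spot_denoise(img, k=2):
    """Composite the four rotated half-plane branches."""
    outputs, cur = [], img
    for r in range(4):
        branch = half_plane_mean(cur, k)
        for _ in range((4 - r) % 4):     # rotate back to the input frame
            branch = rot90(branch)
        outputs.append(branch)
        cur = rot90(cur)                 # next branch sees a rotated copy
    h, w = len(img), len(img[0])
    return [[sum(o[y][x] for o in outputs) / 4 for x in range(w)]
            for y in range(h)]
```

The defining property holds by construction: perturbing a pixel's value changes the output elsewhere but leaves the output at that pixel unchanged.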
-
Publication number: 20200151559
Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
Type: Application
Filed: May 21, 2019
Publication date: May 14, 2020
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
-
Patent number: 10565686
Abstract: A method, computer readable medium, and system are disclosed for training a neural network. The method includes the steps of selecting an input sample from a set of training data that includes input samples and noisy target samples, where the input samples and the noisy target samples each correspond to a latent, clean target sample. The input sample is processed by a neural network model to produce an output and a noisy target sample is selected from the set of training data, where the noisy target samples have a distribution relative to the latent, clean target sample. The method also includes adjusting parameter values of the neural network model to reduce differences between the output and the noisy target sample.
Type: Grant
Filed: November 8, 2017
Date of Patent: February 18, 2020
Assignee: NVIDIA Corporation
Inventors: Jaakko T. Lehtinen, Timo Oskari Aila, Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
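The key property claimed above is that training against noisy targets distributed around a latent clean target steers the model toward the clean target, which never appears in the training data. A minimal sketch, with the "network" reduced to a single trainable scalar (an intentional simplification):

```python
import random

random.seed(0)
clean = 3.0    # latent, clean target: never shown to the model
theta = 0.0    # model parameter (a one-number "network")
lr = 0.01

for step in range(2000):
    noisy_target = clean + random.gauss(0.0, 1.0)  # zero-mean corruption
    output = theta                                 # model prediction
    grad = 2.0 * (output - noisy_target)           # d/dtheta of (output - target)^2
    theta -= lr * grad                             # reduce output/target difference

# With an L2 loss, minimizing against noisy targets converges toward their
# mean, i.e. toward the clean value the model never saw.
assert abs(theta - clean) < 0.5
```

The same argument carries over to a full network: the expected gradient under zero-mean target noise matches the gradient against the clean target.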
-
Publication number: 20190324991
Abstract: An apparatus, computer readable medium, and method are disclosed for decompressing compressed geometric data stored in a lossless compression format. The compressed geometric data resides within a compression block sized according to a system cache line. An indirection technique maps a global identifier value in a linear identifier space to corresponding variable rate compressed data. The apparatus may include decompression circuitry within a graphics processing unit configured to perform ray-tracing.
Type: Application
Filed: July 3, 2019
Publication date: October 24, 2019
Inventors: Jaakko Lehtinen, Timo Oskari Aila, Tero Tapani Karras, Alexander Keller, Nikolaus Binder, Carsten Alexander Waechter, Samuli Matias Laine
-
Patent number: 10331632
Abstract: A system, method, and computer program product are provided for modifying a hierarchical tree data structure. An initial hierarchical tree data structure is received and treelets of node neighborhoods in the initial hierarchical tree data structure are formed. Each treelet includes n leaf nodes and n-1 internal nodes. The treelets are restructured, by a processor, to produce an optimized hierarchical tree data structure.
Type: Grant
Filed: August 19, 2013
Date of Patent: June 25, 2019
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Timo Oskari Aila
-
Publication number: 20190171936
Abstract: A neural network learns a particular task by being shown many examples. In one scenario, a neural network may be trained to label an image, such as cat, dog, bicycle, chair, etc. In another scenario, a neural network may be trained to remove noise from videos or identify specific objects within images, such as human faces, bicycles, etc. Rather than training a complex neural network having a predetermined topology of features and interconnections between the features to learn the task, the topology of the neural network is modified as the neural network is trained for the task, eventually evolving to match the predetermined topology of the complex neural network. In the beginning, the neural network learns large-scale details for the task (bicycles have two wheels) and later, as the neural network becomes more complex, learns smaller details (the wheels have spokes).
Type: Application
Filed: January 18, 2019
Publication date: June 6, 2019
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen, Janne Hellsten
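The growth schedule described above can be sketched schematically: training starts with a tiny network handling coarse structure, and layers are appended on a schedule until the topology matches the full predefined one. The block contents and schedule below are illustrative assumptions; real training would also fade new layers in smoothly rather than adding them abruptly.

```python
# Each entry is the output resolution of one layer of the full network.
FULL_TOPOLOGY = [4, 8, 16, 32, 64]

def train(total_steps=500, grow_every=100):
    layers = [FULL_TOPOLOGY[0]]   # begin with the coarsest layer only
    history = []
    for step in range(total_steps):
        if step > 0 and step % grow_every == 0 and len(layers) < len(FULL_TOPOLOGY):
            layers.append(FULL_TOPOLOGY[len(layers)])  # grow the topology
        # ... one optimization step on the current, partial network ...
        history.append(len(layers))
    return layers, history

layers, history = train()
assert layers == FULL_TOPOLOGY              # evolved to match the full topology
assert history[0] == 1 and history[-1] == 5 # started simple, ended complex
```

Early steps therefore optimize a one-layer network (large-scale detail only); later steps optimize the full five-layer topology.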
-
Publication number: 20190130278
Abstract: A generative adversarial neural network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning, the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
Type: Application
Filed: October 10, 2018
Publication date: May 2, 2019
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
-
Patent number: 10242485
Abstract: An apparatus, computer readable medium, and method are disclosed for performing an intersection query between a query beam and a target bounding volume. The target bounding volume may comprise an axis-aligned bounding box (AABB) associated with a bounding volume hierarchy (BVH) tree. An intersection query comprising beam information associated with the query beam and slab boundary information for a first dimension of a target bounding volume is received. Intersection parameter values are calculated for the first dimension based on the beam information and the slab boundary information and a slab intersection case is determined for the first dimension based on the beam information. A parametric variable range for the first dimension is assigned based on the slab intersection case and the intersection parameter values and it is determined whether the query beam intersects the target bounding volume based on at least the parametric variable range for the first dimension.
Type: Grant
Filed: December 28, 2016
Date of Patent: March 26, 2019
Assignee: NVIDIA CORPORATION
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, John Erik Lindholm
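The per-dimension parametric ranges described above follow the classic slab method. Below is a sketch for the degenerate case where the query beam is a single ray; a true beam would carry an interval of origins and directions and widen each per-axis range accordingly, which this sketch does not attempt.

```python
def slab_interval(origin, inv_dir, lo, hi):
    """Parametric range in which the ray lies between the two slab planes."""
    t0 = (lo - origin) * inv_dir
    t1 = (hi - origin) * inv_dir
    return (min(t0, t1), max(t0, t1))   # handle negative direction components

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Intersect the per-axis parametric ranges; non-empty overlap = hit."""
    t_enter, t_exit = 0.0, float("inf")
    for axis in range(3):
        if direction[axis] == 0.0:      # ray parallel to this slab
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        lo_t, hi_t = slab_interval(origin[axis], 1.0 / direction[axis],
                                   box_min[axis], box_max[axis])
        t_enter, t_exit = max(t_enter, lo_t), min(t_exit, hi_t)
    return t_enter <= t_exit

unit_box = ((-1, -1, -1), (1, 1, 1))
assert ray_hits_aabb((0, 0, -5), (0, 0, 1), *unit_box)       # hits the box
assert not ray_hits_aabb((0, 0, -5), (0, 0, -1), *unit_box)  # points away
assert not ray_hits_aabb((5, 0, -5), (0, 0, 1), *unit_box)   # misses sideways
```

The patent's per-axis "slab intersection cases" generalize the parallel-ray special case handled by the `direction[axis] == 0.0` branch here.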
-
Patent number: 10235338
Abstract: A system, computer readable medium, and method are disclosed for performing a tree traversal operation utilizing a short stack data structure. The method includes the steps of executing, via a processor, a tree traversal operation for a tree data structure utilizing a short stack data structure, determining that the short stack data structure is empty after testing a current node in the tree traversal operation, and executing, via the processor, a back-tracking operation for the current node to identify a new node in the tree data structure to continue the tree traversal operation. The processor may be a parallel processing unit that includes one or more tree traversal units, which implement the tree traversal operation in hardware, software, or a combination of hardware and software.
Type: Grant
Filed: December 8, 2014
Date of Patent: March 19, 2019
Assignee: NVIDIA CORPORATION
Inventors: Samuli Matias Laine, Timo Oskari Aila
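The short-stack scheme above can be sketched on a plain binary tree: traversal pushes far children onto a fixed-capacity stack, evicting the oldest entry when full, and when the stack is found empty mid-traversal, a back-tracking pass walks parent links upward from the current node to recover the next unvisited subtree instead of restarting from the root. The node layout and visit order below are illustrative assumptions, not the patented data structures.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.parent = None
        for child in (left, right):
            if child:
                child.parent = self

def backtrack(node):
    """Walk up parent links to the nearest unvisited right subtree."""
    while node.parent is not None:
        parent = node.parent
        if node is parent.left and parent.right is not None:
            return parent.right
        node = parent
    return None                          # traversal is complete

def traverse(root, stack_capacity=1):
    """Depth-first visit order using a bounded ("short") stack."""
    order, stack, node = [], [], root
    while node is not None:
        order.append(node.key)           # test the current node
        if node.left:
            if node.right:
                if len(stack) == stack_capacity:
                    stack.pop(0)         # short stack: evict oldest entry
                stack.append(node.right) # defer the far child
            node = node.left
        elif node.right:
            node = node.right
        elif stack:
            node = stack.pop()
        else:
            node = backtrack(node)       # empty stack: recover via parents
    return order

root = Node("A",
            Node("B", Node("D"), Node("E")),
            Node("C", Node("F"), Node("G")))
# Even with capacity 1 (the root's far child gets evicted), back-tracking
# recovers the lost subtree and the full depth-first order is produced.
assert traverse(root, stack_capacity=1) == ["A", "B", "D", "E", "C", "F", "G"]
```

Because evictions remove the oldest (shallowest) entries and pops take the newest (deepest), back-tracking from the current node always finds the deepest unvisited right subtree next.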
-
Publication number: 20180357753
Abstract: A method, computer readable medium, and system are disclosed for training a neural network. The method includes the steps of selecting an input sample from a set of training data that includes input samples and noisy target samples, where the input samples and the noisy target samples each correspond to a latent, clean target sample. The input sample is processed by a neural network model to produce an output and a noisy target sample is selected from the set of training data, where the noisy target samples have a distribution relative to the latent, clean target sample. The method also includes adjusting parameter values of the neural network model to reduce differences between the output and the noisy target sample.
Type: Application
Filed: November 8, 2017
Publication date: December 13, 2018
Inventors: Jaakko T. Lehtinen, Timo Oskari Aila, Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
-
Publication number: 20180357537
Abstract: A method, computer readable medium, and system are disclosed for training a neural network model. The method includes the step of selecting an input vector from a set of training data that includes input vectors and sparse target vectors, where each sparse target vector includes target data corresponding to a subset of samples within an output vector of the neural network model. The method also includes the steps of processing the input vector by the neural network model to produce output data for the samples within the output vector and adjusting parameter values of the neural network model to reduce differences between the output vector and the sparse target vector for the subset of the samples.
Type: Application
Filed: January 26, 2018
Publication date: December 13, 2018
Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Jaakko T. Lehtinen, Timo Oskari Aila
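The sparse-target objective above reduces to computing the loss (and hence the parameter update) over only the output samples for which target data exists. A minimal sketch, with a squared-error loss and a dict standing in for the sparse target vector (both illustrative assumptions):

```python
def sparse_loss(output, sparse_target):
    """Mean squared error over only the samples present in the target."""
    total = 0.0
    for index, value in sparse_target.items():  # subset of output samples
        total += (output[index] - value) ** 2
    return total / len(sparse_target)

output = [0.5, 2.0, -1.0, 3.0]          # full output vector of the model
sparse_target = {1: 2.5, 3: 3.0}        # targets known for samples 1 and 3 only

loss = sparse_loss(output, sparse_target)
# Samples 0 and 2 contribute nothing to the loss, however wrong they are.
assert abs(loss - ((2.0 - 2.5) ** 2 + (3.0 - 3.0) ** 2) / 2) < 1e-12
```

Gradients of this loss are zero for the unsupervised samples, so only the supervised subset drives the parameter adjustment, as in the claimed method.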
-
Publication number: 20180336464
Abstract: A method and system are disclosed for training a model that implements a machine-learning algorithm. The technique utilizes latent descriptor vectors to change a multiple-valued output problem into a single-valued output problem and includes the steps of receiving a set of training data, processing, by a model, the set of training data to generate a set of output vectors, and adjusting a set of model parameters and component values for at least one latent descriptor vector in the plurality of latent descriptor vectors based on the set of output vectors. The set of training data includes a plurality of input vectors and a plurality of desired output vectors, and each input vector in the plurality of input vectors is associated with a particular latent descriptor vector in a plurality of latent descriptor vectors. Each latent descriptor vector comprises a plurality of scalar values that are initialized prior to training the model.
Type: Application
Filed: November 29, 2017
Publication date: November 22, 2018
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
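The multiple-valued-to-single-valued conversion above can be shown on a toy problem: two training examples share the same input but have different desired outputs, which no deterministic model could fit. Giving each example its own trainable latent value, optimized jointly with the shared parameters, resolves the ambiguity. The linear model and data below are illustrative assumptions.

```python
import random

random.seed(1)
# Two examples share the input x = 1.0 but want different outputs:
data = [(1.0, 2.0), (1.0, 4.0)]
w = 0.0                                    # shared model parameter
latents = [random.uniform(-0.1, 0.1) for _ in data]  # one latent per example
lr = 0.1

for _ in range(500):
    for i, (x, y) in enumerate(data):
        pred = w * x + latents[i]          # model conditioned on the latent
        err = pred - y
        w -= lr * err * x                  # adjust the shared parameter...
        latents[i] -= lr * err             # ...and the example's latent value

for i, (x, y) in enumerate(data):
    assert abs(w * x + latents[i] - y) < 1e-3   # both outputs now fit exactly
```

After training, the two latent values differ by exactly the gap between the two desired outputs: the ambiguity has been absorbed into the per-example descriptors.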
-
Patent number: 10032289
Abstract: A system, method, and computer program product for implementing a tree traversal operation for a tree data structure is disclosed. The method includes the steps of receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes and processing, via a tree traversal operation algorithm executed by a processor, one or more nodes of the tree data structure by intersecting the one or more nodes of the tree data structure with a query data structure. A first node of the tree data structure is associated with a first local coordinate system and a second node of the tree data structure is associated with a second local coordinate system, the first node being an ancestor of the second node, and the first local coordinate system and the second local coordinate system are both specified relative to a global coordinate system.
Type: Grant
Filed: December 13, 2016
Date of Patent: July 24, 2018
Assignee: NVIDIA Corporation
Inventors: Samuli Matias Laine, Timo Oskari Aila, Tero Tapani Karras
-
Publication number: 20180204314
Abstract: A method, computer readable medium, and system are disclosed for performing spatiotemporal filtering. The method includes identifying image data to be rendered, reconstructing the image data to create reconstructed image data, utilizing a filter including a neural network having one or more skip connections and one or more recurrent layers, and returning the reconstructed image data.
Type: Application
Filed: January 16, 2018
Publication date: July 19, 2018
Inventors: Anton S. Kaplanyan, Chakravarty Reddy Alla Chaitanya, Timo Oskari Aila, Aaron Eliot Lefohn, Marco Salvi
-
Patent number: 10025879
Abstract: A system, computer readable medium, and method are disclosed for performing a tree traversal operation. The method includes the steps of executing, via a processor, a tree traversal operation for a tree data structure, receiving a transformation node that includes transformation data during the tree traversal operation, and transforming spatial data included in a query data structure based on the transformation data. Each node in the tree data structure is classified according to one of a plurality of nodesets, the plurality of nodesets corresponding to a plurality of local coordinate systems. The processor may be a parallel processing unit that includes one or more tree traversal units, which implement the tree traversal operation in hardware, software, or a combination of hardware and software.
Type: Grant
Filed: April 27, 2015
Date of Patent: July 17, 2018
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, Timo Oskari Aila
-
Publication number: 20180182158
Abstract: An apparatus, computer readable medium, and method are disclosed for performing an intersection query between a query beam and a target bounding volume. The target bounding volume may comprise an axis-aligned bounding box (AABB) associated with a bounding volume hierarchy (BVH) tree. An intersection query comprising beam information associated with the query beam and slab boundary information for a first dimension of a target bounding volume is received. Intersection parameter values are calculated for the first dimension based on the beam information and the slab boundary information and a slab intersection case is determined for the first dimension based on the beam information. A parametric variable range for the first dimension is assigned based on the slab intersection case and the intersection parameter values and it is determined whether the query beam intersects the target bounding volume based on at least the parametric variable range for the first dimension.
Type: Application
Filed: December 28, 2016
Publication date: June 28, 2018
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, John Erik Lindholm
-
Publication number: 20180114114
Abstract: A method, computer readable medium, and system are disclosed for neural network pruning. The method includes the steps of receiving first-order gradients of a cost function relative to layer parameters for a trained neural network and computing a pruning criterion for each layer parameter based on the first-order gradient corresponding to the layer parameter, where the pruning criterion indicates an importance of each neuron that is included in the trained neural network and is associated with the layer parameter. The method includes the additional steps of identifying at least one neuron having a lowest importance and removing the at least one neuron from the trained neural network to produce a pruned neural network.
Type: Application
Filed: October 17, 2017
Publication date: April 26, 2018
Inventors: Pavlo Molchanov, Stephen Walter Tyree, Tero Tapani Karras, Timo Oskari Aila, Jan Kautz
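A minimal sketch of the criterion-then-prune loop described above: a first-order Taylor expansion suggests that zeroing a parameter changes the cost by roughly |gradient x parameter|, so that product can serve as the importance score. The specific criterion and the numbers below are illustrative stand-ins, not the exact claimed formula.

```python
def taylor_importance(param, grad):
    """First-order estimate of |cost change| if the parameter is removed."""
    return abs(param * grad)

# (parameter value, first-order gradient of the cost) for each neuron:
neurons = {"n0": (0.9, 0.01), "n1": (0.05, 0.02), "n2": (-0.7, -0.3)}

scores = {name: taylor_importance(p, g) for name, (p, g) in neurons.items()}
least_important = min(scores, key=scores.get)
pruned = {name: pg for name, pg in neurons.items() if name != least_important}

assert least_important == "n1"   # 0.05 * 0.02 = 0.001 is the smallest score
assert set(pruned) == {"n0", "n2"}
```

Note the score uses only first-order gradients, which training already computes, so no second-order information (e.g. a Hessian) is needed to rank neurons.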
-
Publication number: 20180101768
Abstract: A method, computer readable medium, and system are disclosed for implementing a temporal ensembling model for training a deep neural network. The method for training the deep neural network includes the steps of receiving a set of training data for a deep neural network and training the deep neural network utilizing the set of training data by: analyzing a plurality of input vectors by the deep neural network to generate a plurality of prediction vectors, and, for each input vector, computing a loss term by combining a supervised component and an unsupervised component according to a weighting function and updating the target prediction vector associated with that input vector.
Type: Application
Filed: September 29, 2017
Publication date: April 12, 2018
Inventors: Samuli Matias Laine, Timo Oskari Aila
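The loss combination above can be sketched directly: each input keeps a target prediction vector maintained as an exponential moving average of the network's past predictions, and the loss adds a supervised term (for labeled inputs only) to an unsupervised consistency term against that target, weighted by a ramp-up function. All constants below are illustrative assumptions.

```python
def ramp_up(step, length=100):
    """Weighting function: unsupervised weight ramps from 0 to 1."""
    return min(1.0, step / length)

def loss_term(pred, label, target, step):
    """Supervised component (if labeled) + weighted unsupervised component."""
    supervised = sum((p - l) ** 2 for p, l in zip(pred, label)) if label else 0.0
    unsupervised = sum((p - t) ** 2 for p, t in zip(pred, target))
    return supervised + ramp_up(step) * unsupervised

def update_target(target, pred, alpha=0.6):
    """Exponential moving average of predictions (the temporal ensemble)."""
    return [alpha * t + (1 - alpha) * p for t, p in zip(target, pred)]

# One input's target accumulates the network's predictions over epochs:
target = [0.0, 0.0]
for pred in ([1.0, 0.0], [0.8, 0.2], [0.9, 0.1]):
    target = update_target(target, pred)

# For an unlabeled input, only the weighted consistency term contributes:
loss = loss_term([0.9, 0.1], None, target, step=50)
assert loss >= 0.0
```

Because the target is an ensemble over many past epochs, it is less noisy than any single prediction, which is what lets unlabeled inputs contribute a useful training signal.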