Patents by Inventor Tero Tapani KARRAS

Tero Tapani KARRAS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220198741
    Abstract: In examples, a list of elements may be divided into spans and each span may be allocated a respective memory range for output based on a worst-case compression ratio of a compression algorithm that will be used to compress the span. Worker threads may output compressed versions of the spans to the memory ranges. To ensure placement constraints of a data structure will be satisfied, boundaries of the spans may be adjusted prior to compression. The size allocated to a span (e.g., each span) may be increased (or decreased) to avoid padding blocks while allowing the span's compressed data to use a block allocated to an adjacent span. Further aspects of the disclosure provide for compaction of the portions of compressed data in memory in order to free up space which may have been allocated to account for the memory gaps which may result from variable compression ratios.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 23, 2022
    Inventors: Timo Tapani Viitanen, Tero Tapani Karras, Samuli Laine
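
The entry above (and the related filings 20210366177 and 11270495 further down this list) describes reserving each span's output range from the compressor's worst-case expansion so that worker threads can write in parallel without overlapping. The following Python sketch illustrates that allocation idea using zlib as a stand-in compressor; the `worst_case_size` bound, the span size, and the thread pool are illustrative assumptions, not details from the filing.

```python
# A minimal sketch of span-based output allocation, assuming zlib as a
# stand-in compressor. `worst_case_size` is a deliberately roomy bound used
# only for illustration; a real implementation would query the compressor's
# documented worst-case expansion.
import zlib
from concurrent.futures import ThreadPoolExecutor

def worst_case_size(n: int) -> int:
    # Roomier than zlib's documented bound, so a span's output can never
    # overflow the range reserved for it.
    return n + (n >> 9) + 64

def compress_spans(data: bytes, span_size: int) -> list[tuple[int, bytes]]:
    """Split `data` into spans, reserve a worst-case output range per span,
    and compress the spans in parallel worker threads."""
    spans = [data[i:i + span_size] for i in range(0, len(data), span_size)]

    # Reserve non-overlapping output ranges based on the worst-case ratio.
    offsets, cursor = [], 0
    for span in spans:
        offsets.append(cursor)
        cursor += worst_case_size(len(span))

    with ThreadPoolExecutor() as pool:
        compressed = list(pool.map(zlib.compress, spans))

    # Each span's output starts at its reserved offset; ranges left partly
    # empty by better-than-worst-case compression could later be compacted.
    return list(zip(offsets, compressed))

if __name__ == "__main__":
    payload = b"example " * 10_000
    for offset, blob in compress_spans(payload, span_size=16_384):
        print(f"offset={offset:>8}  compressed={len(blob)} bytes")
```

Because most spans compress better than the worst case, the reserved ranges leave gaps; the compaction described in the abstract would later close those gaps to free memory.
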
  • Publication number: 20220189100
    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
    Type: Application
    Filed: July 1, 2021
    Publication date: June 16, 2022
    Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
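
This abstract (shared with application 20220189011 below) names a three-stage pipeline: a neural network applied to the projection images, a fixed-function backprojection unit, and a second neural network applied to the resulting volume. Below is a minimal PyTorch sketch of that pipeline shape; the layer sizes and the unfiltered backprojection stand-in are illustrative assumptions, not the publication's actual networks or backprojection unit.

```python
# A minimal PyTorch sketch of the three-stage pipeline named in the abstract:
# a first network applied to the projections, a fixed-function backprojection,
# and a second network applied to the resulting volume. The backprojection is
# a toy stand-in (uniform smearing along a depth axis), and all layer sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

class ProjectionNet(nn.Module):
    """First model: per-projection 2D filtering (hypothetical layers)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, projections):             # (num_views, 1, H, W)
        return self.net(projections)

def backproject(projections, depth):
    """Fixed-function stand-in: smear each filtered projection uniformly
    along a depth axis and average over views, giving a coarse (D, H, W)
    volume."""
    return projections.expand(-1, depth, -1, -1).mean(dim=0)

class VolumeNet(nn.Module):
    """Second model: 3D refinement of the backprojected volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1))

    def forward(self, volume):                  # (D, H, W)
        return self.net(volume[None, None]).squeeze(0).squeeze(0)

if __name__ == "__main__":
    views = torch.rand(8, 1, 64, 64)            # eight x-ray projections
    density = VolumeNet()(backproject(ProjectionNet()(views), depth=64))
    print(density.shape)                        # torch.Size([64, 64, 64])
```
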
  • Publication number: 20220189011
    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
    Type: Application
    Filed: July 1, 2021
    Publication date: June 16, 2022
    Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
  • Patent number: 11315018
    Abstract: A method, computer readable medium, and system are disclosed for neural network pruning. The method includes the steps of receiving first-order gradients of a cost function relative to layer parameters for a trained neural network and computing a pruning criterion for each layer parameter based on the first-order gradient corresponding to the layer parameter, where the pruning criterion indicates an importance of each neuron that is included in the trained neural network and is associated with the layer parameter. The method includes the additional steps of identifying at least one neuron having a lowest importance and removing the at least one neuron from the trained neural network to produce a pruned neural network.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: April 26, 2022
    Assignee: NVIDIA Corporation
    Inventors: Pavlo Molchanov, Stephen Walter Tyree, Tero Tapani Karras, Timo Oskari Aila, Jan Kautz
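
The claimed method scores each neuron from first-order gradients of the cost function and removes the least important one. The PyTorch sketch below shows one common first-order criterion (|parameter x gradient| summed per neuron) on a toy network; the specific criterion, the network, and the zero-masking shortcut are illustrative assumptions rather than the patented procedure.

```python
# A minimal PyTorch sketch of first-order pruning as the abstract describes
# it: score each neuron from the gradient of the cost with respect to its
# parameters, then drop the lowest-scoring neuron. Layer sizes, the exact
# criterion, and the masking approach are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)

# One backward pass gives first-order gradients of the cost.
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Per-neuron criterion: |parameter * gradient| summed over incoming weights,
# a common first-order estimate of each neuron's contribution to the loss.
layer = model[0]
scores = (layer.weight * layer.weight.grad).abs().sum(dim=1)
least_important = int(scores.argmin())
print(f"pruning hidden neuron {least_important} (score={scores.min():.4f})")

# "Remove" the neuron by zeroing its weights; a production implementation
# would instead rebuild the layer with one fewer output.
with torch.no_grad():
    layer.weight[least_important].zero_()
    layer.bias[least_important].zero_()
```
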
  • Publication number: 20220121958
    Abstract: A generative adversarial neural network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
    Type: Application
    Filed: January 3, 2022
    Publication date: April 21, 2022
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
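
This application (a theme shared by granted patents 11263525 and 11250329 below) describes growing a GAN's topology during training so the network first captures coarse structure and later fine detail. The sketch below shows a generator whose output resolution doubles each time a block is appended; the block design and growth schedule are illustrative assumptions, not the claimed architecture.

```python
# A minimal PyTorch sketch of the growing-topology idea in the abstract:
# start with a small generator that produces low-resolution images and
# append upsampling blocks as training progresses. Block definitions and
# the growth schedule are illustrative assumptions.
import torch
import torch.nn as nn

class GrowingGenerator(nn.Module):
    def __init__(self, latent_dim=64, base_channels=32):
        super().__init__()
        # 4x4 starting resolution learned from the latent code.
        self.stem = nn.Sequential(
            nn.Linear(latent_dim, base_channels * 4 * 4), nn.LeakyReLU(0.2))
        self.blocks = nn.ModuleList()          # grows during training
        self.to_rgb = nn.Conv2d(base_channels, 3, 1)
        self.channels = base_channels

    def grow(self):
        """Double the output resolution by adding an upsample+conv block."""
        self.blocks.append(nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(self.channels, self.channels, 3, padding=1),
            nn.LeakyReLU(0.2)))

    def forward(self, z):
        x = self.stem(z).view(-1, self.channels, 4, 4)
        for block in self.blocks:
            x = block(x)
        return torch.tanh(self.to_rgb(x))

if __name__ == "__main__":
    g = GrowingGenerator()
    z = torch.randn(2, 64)
    print(g(z).shape)        # 4x4 images at the start of training
    g.grow(); g.grow()
    print(g(z).shape)        # 16x16 images after two growth steps
```
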
  • Patent number: 11270495
    Abstract: In examples, a list of elements may be divided into spans and each span may be allocated a respective memory range for output based on a worst-case compression ratio of a compression algorithm that will be used to compress the span. Worker threads may output compressed versions of the spans to the memory ranges. To ensure placement constraints of a data structure will be satisfied, boundaries of the spans may be adjusted prior to compression. The size allocated to a span (e.g., each span) may be increased (or decreased) to avoid padding blocks while allowing the span's compressed data to use a block allocated to an adjacent span. Further aspects of the disclosure provide for compaction of the portions of compressed data in memory in order to free up space which may have been allocated to account for the memory gaps which may result from variable compression ratios.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 8, 2022
    Assignee: NVIDIA Corporation
    Inventors: Timo Tapani Viitanen, Tero Tapani Karras, Samuli Laine
  • Patent number: 11263525
    Abstract: A neural network learns a particular task by being shown many examples. In one scenario, a neural network may be trained to label an image, such as cat, dog, bicycle, chair, etc. In another scenario, a neural network may be trained to remove noise from videos or identify specific objects within images, such as human faces, bicycles, etc. Rather than training a complex neural network having a predetermined topology of features and interconnections between the features to learn the task, the topology of the neural network is modified as the neural network is trained for the task, eventually evolving to match the predetermined topology of the complex neural network. In the beginning the neural network learns large-scale details for the task (bicycles have two wheels) and later, as the neural network becomes more complex, learns smaller details (the wheels have spokes).
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: March 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen, Janne Hellsten
  • Publication number: 20220051481
    Abstract: A three-dimensional (3D) model of an object is recovered from two-dimensional (2D) images of the object. Each image in the set of 2D images includes the object captured from a different camera position and deformations of a base mesh that defines the 3D model may be computed corresponding to each image. The 3D model may also include a texture map that represents the lighting and material properties of the 3D model. Recovery of the 3D model relies on analytic antialiasing to provide a link between pixel colors in the 2D images and geometry of the 3D model. A modular differentiable renderer design yields high performance by leveraging existing, highly optimized hardware graphics pipelines to reconstruct the 3D model. The differentiable renderer renders images of the 3D model and differences between the rendered images and reference images are propagated backwards through the rendering pipeline to iteratively adjust the 3D model.
    Type: Application
    Filed: February 15, 2021
    Publication date: February 17, 2022
    Inventors: Samuli Matias Laine, Janne Johannes Hellsten, Tero Tapani Karras, Yeongho Seol, Jaakko T. Lehtinen, Timo Oskari Aila
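
The abstract describes an analysis-by-synthesis loop: render the current 3D model with a differentiable renderer, compare the result against reference images, and backpropagate the pixel differences to update the geometry. The toy below keeps that loop shape but replaces the modular differentiable renderer with a trivial differentiable point splatter, so the "renderer", the three-vertex "mesh", and the loss are all illustrative stand-ins.

```python
# A toy PyTorch version of the analysis-by-synthesis loop the abstract
# outlines: render the current geometry, compare with a reference image, and
# backpropagate the pixel difference to update the geometry. The "renderer"
# here is a trivial differentiable point splatter, not the modular
# differentiable renderer of the publication.
import torch

torch.manual_seed(0)

def soft_render(vertices, size=32, sigma=0.2):
    """Splat the xy-projection of each vertex into a soft 2D image."""
    axis = torch.linspace(-1, 1, size)
    grid = torch.stack(torch.meshgrid(axis, axis, indexing="ij"), dim=-1)
    d2 = ((grid[None] - vertices[:, :2][:, None, None]) ** 2).sum(-1)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(0).clamp(max=1.0)

# Reference image rendered from the "true" geometry we want to recover.
target_vertices = torch.tensor([[0.4, 0.2, 0.0],
                                [-0.3, -0.1, 0.0],
                                [0.0, 0.5, 0.0]])
reference = soft_render(target_vertices).detach()

# Start from a perturbed base mesh and optimize the vertex positions.
vertices = (target_vertices + 0.2 * torch.randn_like(target_vertices)
            ).requires_grad_(True)
optimizer = torch.optim.Adam([vertices], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = ((soft_render(vertices) - reference) ** 2).mean()
    loss.backward()                   # gradients flow through the renderer
    optimizer.step()

print(f"final image loss: {loss.item():.6f}")
```
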
  • Patent number: 11250329
    Abstract: A generative adversarial neural network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: February 15, 2022
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
  • Publication number: 20220012596
    Abstract: Apparatuses, systems, and techniques used to train one or more neural networks to generate images comprising one or more features. In at least one embodiment, one or more neural networks are trained to determine one or more styles for an input image and then generate features associated with said one or more styles in an output image.
    Type: Application
    Filed: July 9, 2020
    Publication date: January 13, 2022
    Inventors: Weili Nie, Tero Tapani Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Anima Anandkumar
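
At a high level this abstract covers determining a style for an input image and then generating output features associated with that style. The sketch below pairs a small style encoder with a generator that scales and shifts its features using the predicted style vector; the layer shapes and the modulation scheme are assumptions for illustration only, not the claimed networks.

```python
# A minimal PyTorch sketch of the determine-a-style-then-generate idea in
# the abstract: an encoder predicts a style vector from an input image and a
# generator modulates its features with that style (a per-channel scale and
# shift). All layer shapes and the modulation scheme are illustrative.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    def __init__(self, style_dim=32):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                                  nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(16, style_dim)

    def forward(self, image):
        return self.fc(self.conv(image).flatten(1))       # (N, style_dim)

class StyledGenerator(nn.Module):
    def __init__(self, style_dim=32, channels=16):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.affine = nn.Linear(style_dim, 2 * channels)   # scale and shift
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, content, style):
        feats = self.conv(content)
        scale, shift = self.affine(style).chunk(2, dim=1)
        feats = feats * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.tanh(self.to_rgb(feats))

if __name__ == "__main__":
    content, style_source = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    style = StyleEncoder()(style_source)
    print(StyledGenerator()(content, style).shape)         # (1, 3, 64, 64)
```
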
  • Publication number: 20210383241
    Abstract: Embodiments of the present disclosure relate to a technique for training neural networks, such as a generative adversarial neural network (GAN), using a limited amount of data. Training GANs using too little example data typically leads to discriminator overfitting, causing training to diverge and produce poor results. An adaptive discriminator augmentation mechanism is used that significantly stabilizes training with limited data, providing the ability to train high-quality GANs. An augmentation operator is applied to the distribution of inputs to a discriminator used to train a generator, representing a transformation that is invertible to ensure there is no leakage of the augmentations into the images generated by the generator. Reducing the amount of training data that is needed to achieve convergence has the potential to considerably help many applications and may increase the use of generative models in fields such as medicine.
    Type: Application
    Filed: March 24, 2021
    Publication date: December 9, 2021
    Inventors: Tero Tapani Karras, Miika Samuli Aittala, Janne Johannes Hellsten, Samuli Matias Laine, Jaakko T. Lehtinen, Timo Oskari Aila
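
The abstract describes augmenting every image the discriminator sees and adapting the augmentation strength from an overfitting signal. The sketch below shows that control loop in miniature; the flip augmentation, the heuristic r_t, the target value, and the stand-in discriminator outputs are all illustrative placeholders rather than the disclosed mechanism.

```python
# A minimal sketch of adaptive discriminator augmentation as the abstract
# describes it: images shown to the discriminator are augmented with
# probability p, and p is nudged up or down based on an overfitting signal.
import torch

torch.manual_seed(0)

def augment(images, p):
    """Randomly flip each image horizontally with probability p (one example
    of an augmentation whose strength is controlled by p)."""
    flip = torch.rand(images.shape[0], 1, 1, 1) < p
    return torch.where(flip, images.flip(-1), images)

p, target, step = 0.0, 0.6, 0.01

for iteration in range(60):
    # Stand-in for the discriminator's outputs on (augmented) real images;
    # shifted positive to mimic a discriminator that is starting to overfit.
    real_logits = torch.randn(16) + 1.0
    # r_t: fraction of reals the discriminator calls real. High values
    # suggest overfitting, so raise p; low values lower it (clamped to [0, 1]).
    r_t = (real_logits > 0).float().mean().item()
    p = min(max(p + step if r_t > target else p - step, 0.0), 1.0)

# Both real and generated images would pass through the same augmentation
# before reaching the discriminator during training.
batch = augment(torch.rand(16, 3, 32, 32), p)
print(f"adapted p = {p:.2f}, augmented batch shape = {tuple(batch.shape)}")
```
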
  • Publication number: 20210366177
    Abstract: In examples, a list of elements may be divided into spans and each span may be allocated a respective memory range for output based on a worst-case compression ratio of a compression algorithm that will be used to compress the span. Worker threads may output compressed versions of the spans to the memory ranges. To ensure placement constraints of a data structure will be satisfied, boundaries of the spans may be adjusted prior to compression. The size allocated to a span (e.g., each span) may be increased (or decreased) to avoid padding blocks while allowing the span's compressed data to use a block allocated to an adjacent span. Further aspects of the disclosure provide for compaction of the portions of compressed data in memory in order to free up space which may have been allocated to account for the memory gaps which may result from variable compression ratios.
    Type: Application
    Filed: May 21, 2020
    Publication date: November 25, 2021
    Inventors: Timo Tapani Viitanen, Tero Tapani Karras, Samuli Laine
  • Publication number: 20210329306
    Abstract: Apparatuses, systems, and techniques to perform compression of video data using neural networks to facilitate video streaming, such as video conferencing. In at least one embodiment, a sender transmits to a receiver a key frame from video data and one or more keypoints identified by a neural network from said video data, and a receiver reconstructs video data using said key frame and one or more received keypoints.
    Type: Application
    Filed: October 13, 2020
    Publication date: October 21, 2021
    Inventors: Ming-Yu Liu, Ting-Chun Wang, Arun Mohanray Mallya, Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko Lehtinen, Miika Samuli Aittala, Timo Oskari Aila
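
The abstract outlines a codec in which the sender transmits one key frame plus per-frame keypoints and the receiver reconstructs the video from them. The sketch below shows that protocol shape with untrained placeholder networks for the keypoint detector and frame generator; it is meant only to make the bandwidth trade-off concrete, not to implement the described system.

```python
# A minimal sketch of the send-a-key-frame-plus-keypoints scheme in the
# abstract. The keypoint detector and frame generator are untrained
# placeholders with made-up layer shapes.
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Predicts K 2D keypoints from a frame (placeholder weights)."""
    def __init__(self, k=10):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(8)
        self.fc = nn.Linear(3 * 8 * 8, k * 2)

    def forward(self, frame):                        # (1, 3, H, W)
        return self.fc(self.pool(frame).flatten(1)).view(-1, 2)

class FrameGenerator(nn.Module):
    """Reconstructs a frame from the key frame plus current keypoints."""
    def __init__(self, k=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)
        self.modulate = nn.Linear(k * 2, 3)

    def forward(self, key_frame, keypoints):
        shift = self.modulate(keypoints.flatten())[None, :, None, None]
        return torch.sigmoid(self.conv(key_frame) + shift)

detector, generator = KeypointDetector(), FrameGenerator()
video = torch.rand(30, 1, 3, 256, 256)               # 30 captured frames

# Sender side: transmit the first frame once, then only keypoints per frame.
key_frame = video[0]
packets = [detector(frame) for frame in video]

# Receiver side: rebuild every frame from the key frame and its keypoints.
reconstructed = torch.cat([generator(key_frame, kp) for kp in packets])

sent = key_frame.numel() + sum(p.numel() for p in packets)
print(f"values transmitted: {sent} vs raw video: {video.numel()}")
print(f"reconstructed frames: {tuple(reconstructed.shape)}")
```
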
  • Publication number: 20210150187
    Abstract: A latent code defined in an input space is processed by the mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by the synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
    Type: Application
    Filed: January 7, 2021
    Publication date: May 20, 2021
    Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
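
This publication (and 20210150354 and 20210049468 below, which share the abstract) centers on a mapping network that turns a latent code into an intermediate latent code usable as a compact appearance vector, which a synthesis network decodes back into an image. A minimal PyTorch sketch of that flow follows; the dimensions are illustrative and far smaller than any practical model.

```python
# A minimal PyTorch sketch of the flow named in the abstract: a mapping
# network turns a latent code z into an intermediate latent code w, and w
# doubles as a compact appearance vector that a synthesis network decodes
# back into an image. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, w_dim = 128, 128

mapping = nn.Sequential(                  # z -> w (intermediate latent code)
    nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, w_dim))

synthesis = nn.Sequential(                # w -> image
    nn.Linear(w_dim, 16 * 16 * 3), nn.Tanh())

z = torch.randn(1, latent_dim)
w = mapping(z)                            # appearance vector: 128 values
image = synthesis(w).view(1, 3, 16, 16)

# Sending w across the network instead of the image already moves fewer
# values; at video-call resolutions the gap grows by orders of magnitude.
print(w.numel(), "values sent vs", image.numel(), "pixels reconstructed")
```
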
  • Publication number: 20210150354
    Abstract: A latent code defined in an input space is processed by the mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by the synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
    Type: Application
    Filed: January 7, 2021
    Publication date: May 20, 2021
    Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
  • Publication number: 20210150369
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Application
    Filed: January 28, 2021
    Publication date: May 20, 2021
    Inventors: Tero Tapani Karras, Samuli Matias Laine, Jaakko T. Lehtinen, Miika Samuli Aittala, Janne Johannes Hellsten, Timo Oskari Aila
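
This abstract (shared with 20210150357 and 20210117795 below) describes a style-based generator split into a mapping network and a synthesis network, with styles injected at each scale to give scale-specific control. The sketch below captures that split with toy dimensions; the modulation, block count, and sizes are illustrative assumptions, not the published architecture.

```python
# A minimal PyTorch sketch of the style-based generator split described in
# the abstract: a mapping network produces a style vector, and a synthesis
# network applies that style at every resolution so coarse and fine
# attributes can be controlled separately. Sizes are illustrative.
import torch
import torch.nn as nn

class StyleBlock(nn.Module):
    """Upsample, convolve, then scale/shift channels from the style vector."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.affine = nn.Linear(w_dim, 2 * channels)

    def forward(self, x, w):
        x = self.conv(nn.functional.interpolate(x, scale_factor=2))
        scale, shift = self.affine(w).chunk(2, dim=1)
        return torch.relu(x * (1 + scale[..., None, None])
                          + shift[..., None, None])

class StyleGenerator(nn.Module):
    def __init__(self, w_dim=64, channels=32, num_blocks=3):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(w_dim, w_dim),
                                     nn.LeakyReLU(0.2),
                                     nn.Linear(w_dim, w_dim))
        self.start = nn.Parameter(torch.randn(1, channels, 4, 4))
        self.blocks = nn.ModuleList(StyleBlock(channels, w_dim)
                                    for _ in range(num_blocks))
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z):
        w = self.mapping(z)                   # intermediate latent code
        x = self.start.expand(z.shape[0], -1, -1, -1)
        for block in self.blocks:             # same w injected at each scale;
            x = block(x, w)                   # mixing different w's per block
        return torch.tanh(self.to_rgb(x))     # gives scale-specific control

if __name__ == "__main__":
    print(StyleGenerator()(torch.randn(2, 64)).shape)   # (2, 3, 32, 32)
```
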
  • Publication number: 20210150357
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Application
    Filed: January 28, 2021
    Publication date: May 20, 2021
    Inventors: Tero Tapani Karras, Samuli Matias Laine, Jaakko T. Lehtinen, Miika Samuli Aittala, Janne Johannes Hellsten, Timo Oskari Aila
  • Publication number: 20210117795
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Application
    Filed: December 28, 2020
    Publication date: April 22, 2021
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
  • Patent number: 10984049
    Abstract: A method, computer readable medium, and system are disclosed for performing traversal stack compression. The method includes traversing a hierarchical data structure having more than two children per node, and during the traversing, creating at least one stack entry, utilizing a processor, where each stack entry contains a plurality of intersected nodes, and adding the at least one stack entry to a compressed traversal stack stored in a memory, utilizing the processor.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: April 20, 2021
    Assignee: NVIDIA Corporation
    Inventors: Henri Johannes Ylitie, Tero Tapani Karras, Samuli Matias Laine
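
The granted claims describe traversing a hierarchy with more than two children per node using stack entries that each hold a plurality of intersected nodes. The Python sketch below packs each group of intersected children into one (first child, bitmask) entry to show the idea; the node layout, the bitmask encoding, and the intersection test are illustrative stand-ins rather than the claimed stack format.

```python
# A minimal Python sketch of the compressed-traversal-stack idea in the
# abstract: when a node with many children is visited, a single stack entry
# records the whole group of intersected children (here as a base index plus
# a bitmask) instead of pushing each child separately.
from dataclasses import dataclass

@dataclass
class WideNode:
    first_child: int          # index of the first child in `nodes`
    child_count: int          # up to 8 children per node
    is_leaf: bool

def traverse(nodes, intersects):
    """Depth-first traversal using stack entries that each pack a group of
    intersected children: (first_child, pending_children_bitmask)."""
    stack, visit_order, current = [], [], 0
    while True:
        node = nodes[current]
        visit_order.append(current)
        if not node.is_leaf:
            # Test all children at once and pack the hits into one entry.
            mask = 0
            for i in range(node.child_count):
                if intersects(node.first_child + i):
                    mask |= 1 << i
            if mask:
                stack.append((node.first_child, mask))
        # Pop exhausted entries, then take the next pending child from the top.
        while stack and stack[-1][1] == 0:
            stack.pop()
        if not stack:
            return visit_order
        first, mask = stack[-1]
        slot = (mask & -mask).bit_length() - 1      # lowest set bit
        stack[-1] = (first, mask & (mask - 1))      # clear that bit
        current = first + slot

if __name__ == "__main__":
    # A root with four leaf children; pretend every child is intersected.
    nodes = [WideNode(1, 4, False)] + [WideNode(0, 0, True)] * 4
    print(traverse(nodes, intersects=lambda idx: True))   # [0, 1, 2, 3, 4]
```
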
  • Publication number: 20210049468
    Abstract: A latent code defined in an input space is processed by the mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by the synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
    Type: Application
    Filed: October 13, 2020
    Publication date: February 18, 2021
    Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang