Patents by Inventor Tero Tapani KARRAS

Tero Tapani KARRAS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11605217
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
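
The mapping/synthesis split described in the entry above can be illustrated with a short PyTorch sketch: a small MLP maps the latent code z to an intermediate code w, and w then modulates ("styles") the feature maps of every synthesis block. Layer counts, channel widths, and the modulation rule below are illustrative assumptions, not NVIDIA's implementation.

```python
# Minimal sketch of a style-based generator: mapping network + style-modulated
# synthesis network. Sizes and the modulation scheme are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MappingNetwork(nn.Module):
    def __init__(self, z_dim=64, w_dim=64, num_layers=4):
        super().__init__()
        layers = []
        for i in range(num_layers):
            layers += [nn.Linear(z_dim if i == 0 else w_dim, w_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)                      # intermediate latent code w

class StyledBlock(nn.Module):
    """One synthesis block: the style w rescales each feature channel."""
    def __init__(self, channels, w_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_style = nn.Linear(w_dim, channels)

    def forward(self, x, w):
        style = self.to_style(w).unsqueeze(-1).unsqueeze(-1)   # per-channel scale from w
        return torch.relu(self.conv(x) * (1 + style))

class SynthesisNetwork(nn.Module):
    def __init__(self, channels=32, w_dim=64, num_blocks=3):
        super().__init__()
        self.const = nn.Parameter(torch.randn(1, channels, 4, 4))   # learned constant input
        self.blocks = nn.ModuleList([StyledBlock(channels, w_dim) for _ in range(num_blocks)])
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, w):
        x = self.const.expand(w.shape[0], -1, -1, -1)
        for block in self.blocks:
            x = F.interpolate(x, scale_factor=2)    # each block works at a larger scale
            x = block(x, w)                         # w controls that scale's features
        return self.to_rgb(x)

z = torch.randn(2, 64)
image = SynthesisNetwork()(MappingNetwork()(z))     # shape (2, 3, 32, 32)
```

Because every block receives w, the same intermediate code controls features at every scale, which is the "scale-specific control" the abstract refers to.
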
  • Patent number: 11605001
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Samuli Matias Laine, Jaakko T. Lehtinen, Miika Samuli Aittala, Janne Johannes Hellsten, Timo Oskari Aila
  • Patent number: 11580395
    Abstract: A latent code defined in an input space is processed by the mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by the synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: February 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
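
The bandwidth argument in the entry above can be made concrete with a toy encoder/decoder pair: the sender turns each captured frame into a short appearance vector, only that vector crosses the network, and the receiver's synthesis network reconstructs the frame. The networks, the 256-dimensional vector, and the frame size below are illustrative stand-ins, not the patented models.

```python
# Illustrative sketch of the bandwidth idea: instead of sending raw frames, the
# sender compresses each frame into a short appearance vector and the receiver
# reconstructs the frame from it. Encoder and decoder here are simple stand-ins.
import torch
import torch.nn as nn

W_DIM = 256
frame = torch.rand(1, 3, 128, 128)                  # captured video frame

encoder = nn.Sequential(                            # local device: frame -> appearance vector
    nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=4), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 8 * 8, W_DIM))

decoder = nn.Sequential(                            # remote device: appearance vector -> frame
    nn.Linear(W_DIM, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
    nn.Upsample(scale_factor=4), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=4), nn.Conv2d(16, 3, 3, padding=1))

appearance = encoder(frame)                         # this vector is all that is transmitted
reconstructed = decoder(appearance)

raw_bytes = frame.numel() * 4                       # 196,608 bytes per raw frame
sent_bytes = appearance.numel() * 4                 # 1,024 bytes per appearance vector
print(f"compression factor ~{raw_bytes / sent_bytes:.0f}x")
```

For a 128x128 RGB frame stored as 32-bit floats, the transmitted vector in this sketch is roughly two orders of magnitude smaller than the raw pixels.
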
  • Publication number: 20220406048
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Application
    Filed: August 23, 2022
    Publication date: December 22, 2022
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
  • Publication number: 20220405880
    Abstract: Systems and methods are disclosed that improve output quality of any neural network, particularly an image generative neural network. In the real world, details of different scale tend to transform hierarchically. For example, moving a person's head causes the nose to move, which in turn moves the skin pores on the nose. Conventional generative neural networks do not synthesize images in a natural hierarchical manner: the coarse features seem to mainly control the presence of finer features, but not the precise positions of the finer features. Instead, much of the fine detail appears to be fixed to pixel coordinates, which is a manifestation of aliasing. Aliasing breaks the illusion of a solid and coherent object moving in space. A generative neural network with reduced aliasing provides an architecture that exhibits a more natural transformation hierarchy, where the exact sub-pixel position of each feature is inherited from underlying coarse features.
    Type: Application
    Filed: December 27, 2021
    Publication date: December 22, 2022
    Inventors: Tero Tapani Karras, Miika Samuli Aittala, Samuli Matias Laine, Erik Andreas Härkönen, Janne Johannes Hellsten, Jaakko T. Lehtinen, Timo Oskari Aila
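
The aliasing problem the entry above refers to can be demonstrated in a few lines of NumPy: applying a pointwise non-linearity directly on a coarsely sampled signal creates harmonics the grid cannot represent, and they fold back into low frequencies. Computing the non-linearity on an upsampled copy and band-limiting before resampling avoids that. This is only an illustration of the principle, not the patented architecture.

```python
# Naive path: ReLU directly on the coarse grid (harmonics fold back / alias).
# Alias-reduced path: ReLU at a higher sampling rate, ideal low-pass, resample.
import numpy as np

n, factor = 64, 8
coarse = np.sin(2 * np.pi * 12 * np.arange(n) / n)
fine = np.sin(2 * np.pi * 12 * np.arange(n * factor) / (n * factor))

# Non-linearity applied directly on the coarse grid.
naive = np.maximum(coarse, 0.0)

# Non-linearity applied at the higher rate, then zero out every frequency the
# coarse grid cannot represent before resampling back down.
spectrum = np.fft.rfft(np.maximum(fine, 0.0))
spectrum[n // 2 + 1:] = 0.0
alias_free = np.fft.irfft(spectrum)[::factor]

# The difference between the two is the content that aliases onto the coarse grid.
print("max aliasing error:", np.abs(naive - alias_free).max())
```
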
  • Publication number: 20220405980
    Abstract: Systems and methods are disclosed for fused processing of a continuous mathematical operator. Fused processing of continuous mathematical operations, such as pointwise non-linear functions without storing intermediate results to memory improves performance when the memory bus bandwidth is limited. In an embodiment, a continuous mathematical operation including at least two of convolution, upsampling, pointwise non-linear function, and downsampling is executed to process input data and generate alias-free output data. In an embodiment, the input data is spatially tiled for processing in parallel such that the intermediate results generated during processing of the input data for each tile may be stored in a shared memory within the processor. Storing the intermediate data in the shared memory improves performance compared with storing the intermediate data to the external memory and loading the intermediate data from the external memory.
    Type: Application
    Filed: December 27, 2021
    Publication date: December 22, 2022
    Inventors: Tero Tapani Karras, Miika Samuli Aittala, Samuli Matias Laine, Erik Andreas Härkönen, Janne Johannes Hellsten, Jaakko T. Lehtinen, Timo Oskari Aila
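
The tiling strategy in the entry above can be sketched in NumPy by splitting the input into tiles, giving each tile the small "halo" of neighboring samples its operations need, and running the fused chain per tile; the per-tile results concatenate to exactly the untiled result. The chain here is simplified to a convolution plus a pointwise non-linearity, and the sketch does not model GPU shared memory, which is the actual focus of the patent.

```python
# Fused, tiled processing: each tile (plus its halo) is processed end-to-end
# without writing intermediates to the full-size output. Illustrative only.
import numpy as np

kernel = np.array([0.25, 0.5, 0.25])
HALO = len(kernel) // 2                         # extra input needed on each side of a tile

def fused_op(x):
    """Convolution ('valid') immediately followed by ReLU, with no full-size
    intermediate array materialized between the two steps."""
    return np.maximum(np.convolve(x, kernel, mode="valid"), 0.0)

def tiled(x, tile_size):
    out = []
    for start in range(HALO, len(x) - HALO, tile_size):
        stop = min(start + tile_size, len(x) - HALO)
        tile = x[start - HALO:stop + HALO]      # tile plus halo stays "on chip"
        out.append(fused_op(tile))
    return np.concatenate(out)

x = np.random.rand(1000)
assert np.allclose(tiled(x, tile_size=64), fused_op(x))   # tiling changes nothing numerically
```
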
  • Publication number: 20220398005
    Abstract: User interfaces and methods are disclosed. In some embodiments, a plurality of source artifacts is displayed. A selector is operable to indicate a selected set of the source artifacts. An output artifact is displayed having an output attribute that represents a combination of source attributes from the source artifacts in the selected set. An amount of contribution to the output attribute by respective ones of the source artifacts in the selected set is based on a coordinate of the selector relative to coordinates of the source artifacts in the selected set.
    Type: Application
    Filed: July 22, 2022
    Publication date: December 15, 2022
    Inventors: Janne Hellsten, Tero Tapani Karras, Samuli Matias Laine
  • Publication number: 20220398004
    Abstract: User interfaces, methods and structures are described for intuitively and fluidly creating new artifacts from existing artifacts and for exploring latent spaces in a visual manner. In example embodiments, source artifacts are displayed along with a selector. The selector is operable to indicate a selected set of the source artifacts by establishing a selection region that includes portions of one or more of the source artifacts displayed. Source vectors are associated with the source artifacts in the selected set. One or more resultant vectors are determined based on the source vectors, and an output artifact is generated based on the one or more resultant vectors.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 15, 2022
    Inventors: Janne Hellsten, Tero Tapani Karras, Samuli Matias Laine
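
The two user-interface entries above describe blending the latent vectors of the selected source artifacts according to where the selector sits relative to them. Below is a hedged sketch of one such weighting rule; the inverse-distance weighting, the selection radius, and the final decoder are illustrative assumptions, not the claimed method.

```python
# Each displayed source artifact has a screen coordinate and a latent vector;
# the selector position determines each selected artifact's contribution, and
# the weighted combination is what would be decoded into the output artifact.
import numpy as np

source_coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # where artifacts are drawn
source_vectors = np.random.rand(3, 128)                          # their latent (source) vectors

def blend(selector_xy, coords, vectors, radius=0.8):
    """Inverse-distance weights for artifacts inside the selection radius."""
    dist = np.linalg.norm(coords - selector_xy, axis=1)
    selected = dist < radius                                     # the "selected set"
    weights = 1.0 / (dist[selected] + 1e-6)
    weights /= weights.sum()
    return weights @ vectors[selected]                           # resultant vector

resultant = blend(np.array([0.4, 0.3]), source_coords, source_vectors)
# output_artifact = synthesis_network(resultant)   # e.g., a generator as in the entries above
```
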
  • Patent number: 11494879
    Abstract: A neural network architecture is disclosed for restoring noisy data. The neural network is a blind-spot network that can be trained according to a self-supervised framework. In an embodiment, the blind-spot network includes a plurality of network branches. Each network branch processes a version of the input data using one or more layers associated with kernels that have a receptive field that extends in a particular half-plane relative to the output value. In one embodiment, the versions of the input data are offset in a particular direction and the convolution kernels are rotated to correspond to the particular direction of the associated network branch. In another embodiment, the versions of the input data are rotated and the convolution kernel is the same for each network branch. The outputs of the network branches are composited to de-noise the image. In some embodiments, Bayesian filtering is performed to de-noise the input data.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: November 8, 2022
    Assignee: NVIDIA Corporation
    Inventors: Samuli Matias Laine, Tero Tapani Karras, Jaakko T. Lehtinen, Timo Oskari Aila
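
A minimal PyTorch sketch of the rotation-based variant mentioned in the entry above: a single branch whose convolutions only look "upward" is applied to four rotations of the noisy image, its features are shifted by one pixel so no output depends on its own input pixel, and the four rotated-back feature maps are composited by a 1x1 convolution. Channel counts and depth are illustrative; the self-supervised training loss and Bayesian filtering are omitted.

```python
# Blind-spot network sketch: half-plane receptive fields + four rotations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfPlaneConv(nn.Module):
    """3x3 convolution whose receptive field only extends upward (rows <= y)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3)

    def forward(self, x):
        x = F.pad(x, (1, 1, 2, 0))            # pad left/right by 1 and top by 2, bottom by 0
        return self.conv(x)

class BlindSpotBranch(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            HalfPlaneConv(3, channels), nn.ReLU(),
            HalfPlaneConv(channels, channels), nn.ReLU())

    def forward(self, x):
        feat = self.body(x)
        # Shift down by one row so a pixel's features never depend on that pixel.
        return F.pad(feat, (0, 0, 1, 0))[:, :, :-1, :]

class BlindSpotNetwork(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.branch = BlindSpotBranch(channels)       # weights shared across rotations
        self.head = nn.Conv2d(4 * channels, 3, 1)

    def forward(self, x):
        feats = []
        for k in range(4):                            # four rotations -> four half-planes
            r = torch.rot90(x, k, dims=(2, 3))
            f = self.branch(r)
            feats.append(torch.rot90(f, -k, dims=(2, 3)))
        return self.head(torch.cat(feats, dim=1))     # composite the branch outputs

noisy = torch.rand(1, 3, 32, 32)
denoised = BlindSpotNetwork()(noisy)                  # (1, 3, 32, 32), blind to the center pixel
```
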
  • Patent number: 11455790
    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: September 27, 2022
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
  • Patent number: 11435885
    Abstract: User interfaces and methods are disclosed. In some embodiments, a plurality of source artifacts is displayed. A selector is operable to indicate a selected set of the source artifacts. The selected set corresponds to those of the source artifacts that intersect at least partially with a selection region. An output artifact is displayed having an output attribute that represents a combination of source attributes from the source artifacts in the selected set.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Janne Hellsten, Tero Tapani Karras, Samuli Matias Laine
  • Publication number: 20220198741
    Abstract: In examples, a list of elements may be divided into spans and each span may be allocated a respective memory range for output based on a worst-case compression ratio of a compression algorithm that will be used to compress the span. Worker threads may output compressed versions of the spans to the memory ranges. To ensure placement constraints of a data structure will be satisfied, boundaries of the spans may be adjusted prior to compression. The size allocated to a span (e.g., each span) may be increased (or decreased) to avoid padding blocks while allowing for the span's compressed data to use a block allocated to an adjacent span. Further aspects of the disclosure provide for compaction of the portions of compressed data in memory in order to free up space which may have been allocated to account for the memory gaps which may result from variable compression ratios.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 23, 2022
    Inventors: Timo Tapani Viitanen, Tero Tapani Karras, Samuli Laine
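
The two-phase layout described in this entry (and in the granted patent 11270495 further down) can be sketched in Python with zlib standing in for the codec: each span gets an output range sized for the worst case so writers need no coordination, and a compaction pass then removes the gaps. The span size, worst-case bound, and serial loop are illustrative; the patent targets parallel worker threads and block-placement constraints this sketch ignores.

```python
# Phase 1: compress each span into its own guaranteed-to-fit output range.
# Phase 2: compact the compressed spans to squeeze out the unused gaps.
import zlib

SPAN = 4096                                     # bytes of input per span (illustrative)
WORST_CASE = SPAN + 64                          # upper bound on a span's compressed size

data = bytes(range(256)) * 64                   # 16 KiB of example input
spans = [data[i:i + SPAN] for i in range(0, len(data), SPAN)]

output = bytearray(len(spans) * WORST_CASE)     # non-overlapping range per span
sizes = []
for i, span in enumerate(spans):                # in the patent, worker threads run in parallel
    compressed = zlib.compress(span)
    assert len(compressed) <= WORST_CASE
    output[i * WORST_CASE:i * WORST_CASE + len(compressed)] = compressed
    sizes.append(len(compressed))

write = 0                                       # compaction: move each span's bytes down
for i, size in enumerate(sizes):
    output[write:write + size] = output[i * WORST_CASE:i * WORST_CASE + size]
    write += size
compacted = bytes(output[:write])

print(f"{len(data)} bytes -> {write} bytes after compression and compaction")
```
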
  • Publication number: 20220189100
    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
    Type: Application
    Filed: July 1, 2021
    Publication date: June 16, 2022
    Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
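
A toy version of the three-stage pipeline in this entry (which also appears in the companion publication below): a 2D network cleans each projection image, a fixed-function backprojection smears the projections back into a volume, and a 3D network refines the result. The tiny networks and the axis-aligned parallel-beam backprojection are stand-ins for illustration; the patent's capture geometry and models are far more general.

```python
# Three stages: per-projection 2D network -> fixed backprojection -> 3D refinement.
import torch
import torch.nn as nn

class Refine2D(nn.Module):                       # stage 1: per-projection image network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def backproject(projections, size):              # stage 2: fixed function, not learned
    """Smear each axis-aligned projection back along the axis it integrated over."""
    volume = torch.zeros(1, 1, size, size, size)
    for axis, proj in projections.items():       # proj: (1, 1, size, size)
        volume = volume + proj.unsqueeze(axis)   # broadcast along the integration axis
    return volume / len(projections)

class Refine3D(nn.Module):                       # stage 3: volume refinement network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

size = 32
object_volume = torch.rand(1, 1, size, size, size)
projections = {2: object_volume.sum(dim=2),      # integrate along one axis
               3: object_volume.sum(dim=3)}      # and along another
refine2d, refine3d = Refine2D(), Refine3D()
cleaned = {axis: refine2d(p) for axis, p in projections.items()}
density = refine3d(backproject(cleaned, size))   # reconstructed (1, 1, 32, 32, 32) volume
```
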
  • Publication number: 20220189011
    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
    Type: Application
    Filed: July 1, 2021
    Publication date: June 16, 2022
    Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
  • Patent number: 11315018
    Abstract: A method, computer readable medium, and system are disclosed for neural network pruning. The method includes the steps of receiving first-order gradients of a cost function relative to layer parameters for a trained neural network and computing a pruning criterion for each layer parameter based on the first-order gradient corresponding to the layer parameter, where the pruning criterion indicates an importance of each neuron that is included in the trained neural network and is associated with the layer parameter. The method includes the additional steps of identifying at least one neuron having a lowest importance and removing the at least one neuron from the trained neural network to produce a pruned neural network.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: April 26, 2022
    Assignee: NVIDIA Corporation
    Inventors: Pavlo Molchanov, Stephen Walter Tyree, Tero Tapani Karras, Timo Oskari Aila, Jan Kautz
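
The criterion in the entry above can be illustrated with a first-order (Taylor-style) importance estimate: after a backward pass, each hidden neuron is scored by the accumulated |parameter x gradient| of its weights, and the lowest-scoring neuron is removed. The exact criterion, and the shortcut of zeroing rather than physically removing the neuron, are illustrative assumptions rather than the claimed method.

```python
# First-order-gradient pruning sketch: score neurons, remove the least important.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
x, target = torch.rand(32, 10), torch.rand(32, 1)

# First-order gradients of the cost function with respect to the layer parameters.
loss = nn.functional.mse_loss(model(x), target)
loss.backward()

# Importance of each hidden neuron: sum of |w * dL/dw| over its incoming weights.
layer = model[0]
importance = (layer.weight * layer.weight.grad).abs().sum(dim=1)   # one score per neuron

least = int(importance.argmin())
with torch.no_grad():                          # "remove" the neuron by zeroing it out
    layer.weight[least] = 0.0
    layer.bias[least] = 0.0
    model[2].weight[:, least] = 0.0            # and its outgoing connections

print(f"pruned hidden neuron {least} with importance {importance[least].item():.4f}")
```
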
  • Publication number: 20220121958
    Abstract: A generative adversarial neural network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
    Type: Application
    Filed: January 3, 2022
    Publication date: April 21, 2022
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
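
The "grow the topology during training" idea in this entry (and in the related patents 11263525 and 11250329 below) can be sketched as a generator that starts at 4x4 resolution and appends a new upsampling block when told to grow, fading the new block in with an alpha weight. Layer sizes, the fade-in schedule, and the absence of a discriminator are illustrative simplifications, not the patented system.

```python
# Progressive growing sketch: start simple, add blocks, fade each new block in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(F.interpolate(x, scale_factor=2)))

class GrowingGenerator(nn.Module):
    def __init__(self, latent_dim=64, channels=32):
        super().__init__()
        self.stem = nn.Linear(latent_dim, channels * 4 * 4)
        self.channels = channels
        self.blocks = nn.ModuleList()            # starts simple: no extra blocks yet
        self.to_rgb = nn.Conv2d(channels, 3, 1)
        self.alpha = 1.0                         # fade-in weight for the newest block

    def grow(self):
        self.blocks.append(Block(self.channels)) # topology becomes more complex
        self.alpha = 0.0                         # new block starts fully faded out

    def forward(self, z):
        x = self.stem(z).view(-1, self.channels, 4, 4)
        for block in self.blocks[:-1]:
            x = block(x)
        if self.blocks:                          # blend old (upsampled) and new paths
            up = F.interpolate(x, scale_factor=2)
            x = (1 - self.alpha) * up + self.alpha * self.blocks[-1](x)
        return self.to_rgb(x)

gen = GrowingGenerator()
print(gen(torch.randn(2, 64)).shape)             # 4x4 images at first (large-scale details)
gen.grow(); gen.alpha = 0.5                      # during training, alpha ramps from 0 to 1
print(gen(torch.randn(2, 64)).shape)             # now 8x8 images (finer details)
```
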
  • Patent number: 11270495
    Abstract: In examples, a list of elements may be divided into spans and each span may be allocated a respective memory range for output based on a worst-case compression ratio of a compression algorithm that will be used to compress the span. Worker threads may output compressed versions of the spans to the memory ranges. To ensure placement constraints of a data structure will be satisfied, boundaries of the spans may be adjusted prior to compression. The size allocated to a span (e.g., each span) may be increased (or decreased) to avoid padding blocks while allowing for the span's compressed data to use a block allocated to an adjacent span. Further aspects of the disclosure provide for compaction of the portions of compressed data in memory in order to free up space which may have been allocated to account for the memory gaps which may result from variable compression ratios.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 8, 2022
    Assignee: NVIDIA Corporation
    Inventors: Timo Tapani Viitanen, Tero Tapani Karras, Samuli Laine
  • Patent number: 11263525
    Abstract: A neural network learns a particular task by being shown many examples. In one scenario, a neural network may be trained to label an image, such as cat, dog, bicycle, chair, etc. In another scenario, a neural network may be trained to remove noise from videos or identify specific objects within images, such as human faces, bicycles, etc. Rather than training a complex neural network having a predetermined topology of features and interconnections between the features to learn the task, the topology of the neural network is modified as the neural network is trained for the task, eventually evolving to match the predetermined topology of the complex neural network. In the beginning the neural network learns large-scale details for the task (bicycles have two wheels) and later, as the neural network becomes more complex, learns smaller details (the wheels have spokes).
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: March 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen, Janne Hellsten
  • Publication number: 20220051481
    Abstract: A three-dimensional (3D) model of an object is recovered from two-dimensional (2D) images of the object. Each image in the set of 2D images includes the object captured from a different camera position, and deformations of a base mesh that defines the 3D model may be computed corresponding to each image. The 3D model may also include a texture map that represents the lighting and material properties of the 3D model. Recovery of the 3D model relies on analytic antialiasing to provide a link between pixel colors in the 2D images and geometry of the 3D model. A modular differentiable renderer design yields high performance by leveraging existing, highly optimized hardware graphics pipelines to reconstruct the 3D model. The differentiable renderer renders images of the 3D model and differences between the rendered images and reference images are propagated backwards through the rendering pipeline to iteratively adjust the 3D model.
    Type: Application
    Filed: February 15, 2021
    Publication date: February 17, 2022
    Inventors: Samuli Matias Laine, Janne Johannes Hellsten, Tero Tapani Karras, Yeongho Seol, Jaakko T. Lehtinen, Timo Oskari Aila
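
The analysis-by-synthesis loop in the entry above can be illustrated with a toy differentiable "renderer": render the current model, compare with the reference image, and backpropagate the pixel difference to update the model parameters. Here the model is just a soft axis-aligned box and the renderer a product of sigmoids; the patent's renderer handles triangle meshes, textures, and analytic antialiasing, none of which this sketch attempts.

```python
# Toy differentiable-rendering loop: render, compare with reference, backprop.
import torch

res = 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, res), torch.linspace(0, 1, res), indexing="ij")

def render(center, half_size, sharpness=40.0):
    """Differentiable coverage image of an axis-aligned box with soft edges."""
    inside_x = torch.sigmoid(sharpness * (half_size[0] - (xs - center[0]).abs()))
    inside_y = torch.sigmoid(sharpness * (half_size[1] - (ys - center[1]).abs()))
    return inside_x * inside_y

reference = render(torch.tensor([0.65, 0.40]), torch.tensor([0.20, 0.15]))   # the "photo"

center = torch.tensor([0.50, 0.50], requires_grad=True)    # initial model guess
half_size = torch.tensor([0.10, 0.10], requires_grad=True)
optimizer = torch.optim.Adam([center, half_size], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    loss = ((render(center, half_size) - reference) ** 2).mean()
    loss.backward()                              # gradients flow back through the renderer
    optimizer.step()

print(center.detach(), half_size.detach())       # moves toward the reference box parameters
```
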
  • Patent number: 11250329
    Abstract: A generative adversarial neural network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: February 15, 2022
    Assignee: NVIDIA Corporation
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen