Patents by Inventor Jaakko T. Lehtinen
Jaakko T. Lehtinen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11790598
Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., a human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed-function backprojection unit, and a second neural network model. Given information about the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
Type: Grant
Filed: July 1, 2021
Date of Patent: October 17, 2023
Assignee: NVIDIA Corporation
Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
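The backprojection step at the core of this pipeline can be pictured with a toy parallel-beam reconstruction in NumPy. This is a minimal sketch assuming just two orthogonal views on a tiny 2D grid; the patent's fixed-function backprojection unit and the two neural network models around it are not reproduced here.

```python
import numpy as np

def project(volume, axis):
    # Line integrals along one axis: each detector pixel sums the
    # attenuation along the beam path through the object.
    return volume.sum(axis=axis)

def backproject(projections):
    # Smear each 1D projection back across the 2D grid and accumulate.
    # With only two orthogonal views this is a crude density estimate;
    # the pipeline described above refines such estimates with
    # neural networks before and after backprojection.
    n = projections[0].shape[0]
    recon = np.zeros((n, n))
    recon += projections[0][np.newaxis, :]  # view integrated along axis 0
    recon += projections[1][:, np.newaxis]  # view integrated along axis 1
    return recon / len(projections)

phantom = np.zeros((8, 8))
phantom[2:6, 2:6] = 1.0  # a dense square inside the object
views = [project(phantom, 0), project(phantom, 1)]
recon = backproject(views)
```

The reconstructed density is highest where the two smeared views overlap, i.e., inside the square, which is exactly the cue the neural stages can sharpen.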
-
Patent number: 11775829
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: December 12, 2022
Date of Patent: October 3, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
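The mapping-network step and the bandwidth argument can be sketched as follows. The two-layer MLP, the 512-dimensional latent size, and the random weights are all illustrative assumptions standing in for the trained mapping neural network; only the shape of the computation is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, weights):
    # A toy two-layer MLP standing in for the mapping neural network:
    # it turns an input latent code z into an intermediate latent code
    # (the "appearance vector") w. Sizes and weights are illustrative.
    hidden = np.maximum(z @ weights[0], 0.0)  # ReLU hidden layer
    return hidden @ weights[1]

z = rng.standard_normal(512)
weights = [0.01 * rng.standard_normal((512, 512)),
           0.01 * rng.standard_normal((512, 512))]
w = mapping_network(z, weights)

# The bandwidth argument: transmit one 512-float appearance vector per
# frame instead of a raw 1920x1080 RGB frame of floats.
ratio = (1920 * 1080 * 3) / w.size  # → 12150.0
```

At the remote device, a synthesis network would invert this encoding, reconstructing a displayable frame from each received vector.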
-
Patent number: 11763168
Abstract: A generative adversarial network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning, the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
Type: Grant
Filed: January 3, 2022
Date of Patent: September 19, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
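The grow-as-you-train idea can be pictured as a resolution schedule. This is a minimal sketch: the stage length, starting resolution, and doubling rule are hypothetical choices for illustration, not specifics of the claimed method.

```python
def growth_schedule(start_res=4, final_res=64, steps_per_stage=1000):
    # Training starts with a small GAN at a coarse resolution; each
    # stage doubles the resolution and adds layers, so the topology
    # grows until it matches the final, complex network.
    schedule = []
    res, step = start_res, 0
    while res <= final_res:
        schedule.append((step, res))
        step += steps_per_stage
        res *= 2
    return schedule
```

`growth_schedule()` yields `[(0, 4), (1000, 8), (2000, 16), (3000, 32), (4000, 64)]`: coarse structure (two wheels) is learned in the early, small-network stages, and fine detail (spokes) only after the network has grown.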
-
Patent number: 11734890
Abstract: A three-dimensional (3D) model of an object is recovered from two-dimensional (2D) images of the object. Each image in the set of 2D images includes the object captured from a different camera position, and deformations of a base mesh that defines the 3D model may be computed corresponding to each image. The 3D model may also include a texture map that represents the lighting and material properties of the 3D model. Recovery of the 3D model relies on analytic antialiasing to provide a link between pixel colors in the 2D images and the geometry of the 3D model. A modular differentiable renderer design yields high performance by leveraging existing, highly optimized hardware graphics pipelines to reconstruct the 3D model. The differentiable renderer renders images of the 3D model, and differences between the rendered images and reference images are propagated backwards through the rendering pipeline to iteratively adjust the 3D model.
Type: Grant
Filed: February 15, 2021
Date of Patent: August 22, 2023
Assignee: NVIDIA Corporation
Inventors: Samuli Matias Laine, Janne Johannes Hellsten, Tero Tapani Karras, Yeongho Seol, Jaakko T. Lehtinen, Timo Oskari Aila
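The render-compare-backpropagate loop can be sketched with a stand-in one-parameter "renderer". A smooth 1D edge replaces the hardware rasterization pipeline, and a single scalar replaces the mesh deformation; what the sketch preserves is the key property that analytic antialiasing provides in the patent, namely that pixel colors vary continuously with the scene parameter so image differences can drive gradient steps on geometry.

```python
import numpy as np

def render(offset, n=16):
    # Stand-in "renderer": a smooth vertical edge at position `offset`.
    # Because the edge is smooth, pixel values are differentiable with
    # respect to the scene parameter, mirroring analytic antialiasing.
    x = np.arange(n)
    return 1.0 / (1.0 + np.exp(-(x - offset)))

reference = render(10.0)  # the reference image
offset = 4.0              # initial guess for the scene parameter
lr = 2.0
for _ in range(500):
    img = render(offset)
    # Backward pass: propagate the image-space L2 error through the
    # renderer to the scene parameter, then take a gradient step.
    d_img = -img * (1.0 - img)  # d render(offset) / d offset
    grad = np.sum(2.0 * (img - reference) * d_img)
    offset -= lr * grad
```

The loop drives `offset` from 4 toward the reference value of 10 purely from pixel differences, which is the same mechanism the patent applies to full meshes and texture maps.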
-
Publication number: 20230110206
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: December 12, 2022
Publication date: April 13, 2023
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11625613
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: January 7, 2021
Date of Patent: April 11, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11620521
Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
Type: Grant
Filed: January 28, 2021
Date of Patent: April 4, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, Jaakko T. Lehtinen, Miika Samuli Aittala, Janne Johannes Hellsten, Timo Oskari Aila
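The scale-specific control comes from per-layer style modulation in the synthesis network. The sketch below is an AdaIN-like toy (normalize a feature map, then apply a style-derived scale and bias); the channel count, map size, and style values are illustrative assumptions, not the claimed architecture.

```python
import numpy as np

def apply_style(features, style):
    # Per-layer style modulation: normalize each channel's feature map,
    # then apply a style-derived scale and bias. Styles fed to coarse
    # layers control coarse attributes (pose, face shape); styles fed to
    # fine layers control fine attributes (hair style, colors).
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True) + 1e-8
    normalized = (features - mean) / std
    scale, bias = style
    return normalized * scale[:, None, None] + bias[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((3, 8, 8))                 # 3 channels, 8x8 maps
style = (np.array([2.0, 1.0, 0.5]), np.zeros(3))      # illustrative style
out = apply_style(feat, style)
```

Because each layer is re-modulated, swapping the style at one scale changes only that scale's attributes, which is the mechanism behind the scale-specific control described above.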
-
Patent number: 11610435
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: October 13, 2020
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11610122
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: January 7, 2021
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11605001
Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
Type: Grant
Filed: January 28, 2021
Date of Patent: March 14, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, Jaakko T. Lehtinen, Miika Samuli Aittala, Janne Johannes Hellsten, Timo Oskari Aila
-
Patent number: 11580395
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: October 13, 2020
Date of Patent: February 14, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Publication number: 20220405582
Abstract: A method, computer readable medium, and system are disclosed for training a neural network model. The method includes the step of selecting an input vector from a set of training data that includes input vectors and sparse target vectors, where each sparse target vector includes target data corresponding to a subset of samples within an output vector of the neural network model. The method also includes the steps of processing the input vector by the neural network model to produce output data for the samples within the output vector and adjusting parameter values of the neural network model to reduce differences between the output vector and the sparse target vector for the subset of the samples.
Type: Application
Filed: February 4, 2022
Publication date: December 22, 2022
Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Jaakko T. Lehtinen, Timo Oskari Aila
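The sparse-target update can be sketched as a masked loss: differences are computed only for the subset of samples where the sparse target has data, and the rest contribute nothing to the parameter update. The mask, values, and choice of an L2 loss below are illustrative assumptions.

```python
import numpy as np

def sparse_loss_grad(output, target, mask):
    # Only masked-in samples enter the loss; the gradient is zero
    # everywhere the sparse target carries no data.
    diff = np.where(mask, output - target, 0.0)
    count = max(mask.sum(), 1)
    loss = np.sum(diff ** 2) / count
    grad = 2.0 * diff / count
    return loss, grad

output = np.array([1.0, 2.0, 3.0, 4.0])   # network output vector
target = np.array([1.5, 0.0, 2.0, 0.0])   # zeros are "no data" slots
mask = np.array([True, False, True, False])
loss, grad = sparse_loss_grad(output, target, mask)
```

Here `loss` is 0.625 and `grad` is zero at the unmasked positions, so training adjusts parameters using only the two samples the sparse target actually specifies.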
-
Publication number: 20220405980
Abstract: Systems and methods are disclosed for fused processing of a continuous mathematical operator. Fused processing of continuous mathematical operations, such as pointwise non-linear functions, without storing intermediate results to memory improves performance when the memory bus bandwidth is limited. In an embodiment, a continuous mathematical operation including at least two of convolution, upsampling, a pointwise non-linear function, and downsampling is executed to process input data and generate alias-free output data. In an embodiment, the input data is spatially tiled for processing in parallel such that the intermediate results generated during processing of the input data for each tile may be stored in a shared memory within the processor. Storing the intermediate data in the shared memory improves performance compared with storing the intermediate data to the external memory and loading the intermediate data from the external memory.
Type: Application
Filed: December 27, 2021
Publication date: December 22, 2022
Inventors: Tero Tapani Karras, Miika Samuli Aittala, Samuli Matias Laine, Erik Andreas Härkönen, Janne Johannes Hellsten, Jaakko T. Lehtinen, Timo Oskari Aila
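The upsample → pointwise non-linearity → downsample composition can be sketched in 1D. This toy uses linear interpolation and a box filter where a real implementation would use proper low-pass filters, and it does not model the fusion itself (keeping intermediates in on-chip shared memory); it only shows why the composed operation differs from applying the non-linearity directly.

```python
import numpy as np

def alias_aware_relu(x, factor=2):
    # Upsample, apply the pointwise non-linearity, then downsample.
    # Evaluating ReLU on an oversampled signal keeps more of the high
    # frequencies it creates below the original Nyquist limit.
    n = len(x)
    fine = np.interp(np.arange(n * factor) / factor, np.arange(n), x)
    fine = np.maximum(fine, 0.0)                 # pointwise non-linearity
    return fine.reshape(-1, factor).mean(axis=1)  # box-filter downsample

x = np.array([-2.0, -0.5, 1.0, 3.0])
y = alias_aware_relu(x)
# Plain ReLU would give [0, 0, 1, 3]; the oversampled version differs
# where the sign change falls between the original samples.
```

Fusing these three steps into one pass, as the publication describes, avoids writing the `factor`-times-larger intermediate signal out to external memory.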
-
Publication number: 20220405880
Abstract: Systems and methods are disclosed that improve the output quality of any neural network, particularly an image generative neural network. In the real world, details of different scales tend to transform hierarchically. For example, moving a person's head causes the nose to move, which in turn moves the skin pores on the nose. Conventional generative neural networks do not synthesize images in a natural hierarchical manner: the coarse features seem to mainly control the presence of finer features, but not the precise positions of the finer features. Instead, much of the fine detail appears to be fixed to pixel coordinates, which is a manifestation of aliasing. Aliasing breaks the illusion of a solid and coherent object moving in space. A generative neural network with reduced aliasing provides an architecture that exhibits a more natural transformation hierarchy, where the exact sub-pixel position of each feature is inherited from underlying coarse features.
Type: Application
Filed: December 27, 2021
Publication date: December 22, 2022
Inventors: Tero Tapani Karras, Miika Samuli Aittala, Samuli Matias Laine, Erik Andreas Härkönen, Janne Johannes Hellsten, Jaakko T. Lehtinen, Timo Oskari Aila
-
Patent number: 11494879
Abstract: A neural network architecture is disclosed for restoring noisy data. The neural network is a blind-spot network that can be trained according to a self-supervised framework. In an embodiment, the blind-spot network includes a plurality of network branches. Each network branch processes a version of the input data using one or more layers associated with kernels that have a receptive field that extends in a particular half-plane relative to the output value. In one embodiment, the versions of the input data are offset in a particular direction and the convolution kernels are rotated to correspond to the particular direction of the associated network branch. In another embodiment, the versions of the input data are rotated and the convolution kernel is the same for each network branch. The outputs of the network branches are composited to de-noise the image. In some embodiments, Bayesian filtering is performed to de-noise the input data.Type: Grant
Filed: December 20, 2019
Date of Patent: November 8, 2022
Assignee: NVIDIA Corporation
Inventors: Samuli Matias Laine, Tero Tapani Karras, Jaakko T. Lehtinen, Timo Oskari Aila
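The rotated-branch blind spot can be illustrated with a toy denoiser. A simple mean over the half-plane strictly above each pixel stands in for the trained half-plane network, and the input is rotated per branch and rotated back, as in the second embodiment above; the defining property preserved is that no branch's estimate of a pixel ever sees that pixel itself.

```python
import numpy as np

def half_plane_mean(img):
    # One "branch": each output pixel is the mean of the pixels strictly
    # above it, so its receptive field is an upward half-plane that
    # excludes the pixel itself (edge rows fall back to zero).
    out = np.zeros_like(img, dtype=float)
    cum = np.cumsum(img, axis=0)
    counts = np.arange(img.shape[0])[:, None]
    out[1:] = cum[:-1] / counts[1:]
    return out

def blind_spot_denoise(img):
    # Four branches share one upward-looking operator; the input is
    # rotated per branch, the outputs rotated back and composited.
    branches = []
    for k in range(4):
        rotated = np.rot90(img, k)
        branches.append(np.rot90(half_plane_mean(rotated), -k))
    return np.mean(branches, axis=0)

clean = np.full((5, 5), 5.0)
noisy = clean.copy()
noisy[2, 2] = 100.0  # corrupt the center pixel
den_clean = blind_spot_denoise(clean)
den_noisy = blind_spot_denoise(noisy)
```

Because every branch excludes the pixel being predicted, `den_noisy[2, 2]` equals `den_clean[2, 2]`: the corrupted value cannot leak into its own estimate, which is what makes self-supervised training on noisy data possible.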
-
Publication number: 20220189100
Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., a human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed-function backprojection unit, and a second neural network model. Given information about the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
Type: Application
Filed: July 1, 2021
Publication date: June 16, 2022
Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
-
Publication number: 20220189011
Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., a human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed-function backprojection unit, and a second neural network model. Given information about the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
Type: Application
Filed: July 1, 2021
Publication date: June 16, 2022
Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
-
Publication number: 20220121958
Abstract: A generative adversarial network (GAN) learns a particular task by being shown many examples. In one scenario, a GAN may be trained to generate new images including specific objects, such as human faces, bicycles, etc. Rather than training a complex GAN having a predetermined topology of features and interconnections between the features to learn the task, the topology of the GAN is modified as the GAN is trained for the task. The topology of the GAN may be simple in the beginning and become more complex as the GAN learns during the training, eventually evolving to match the predetermined topology of the complex GAN. In the beginning, the GAN learns large-scale details for the task (bicycles have two wheels) and later, as the GAN becomes more complex, learns smaller details (the wheels have spokes).
Type: Application
Filed: January 3, 2022
Publication date: April 21, 2022
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen
-
Patent number: 11263525
Abstract: A neural network learns a particular task by being shown many examples. In one scenario, a neural network may be trained to label an image, such as cat, dog, bicycle, chair, etc. In another scenario, a neural network may be trained to remove noise from videos or identify specific objects within images, such as human faces, bicycles, etc. Rather than training a complex neural network having a predetermined topology of features and interconnections between the features to learn the task, the topology of the neural network is modified as the neural network is trained for the task, eventually evolving to match the predetermined topology of the complex neural network. In the beginning, the neural network learns large-scale details for the task (bicycles have two wheels) and later, as the neural network becomes more complex, learns smaller details (the wheels have spokes).
Type: Grant
Filed: January 18, 2019
Date of Patent: March 1, 2022
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, Jaakko T. Lehtinen, Janne Hellsten
-
Publication number: 20220051481
Abstract: A three-dimensional (3D) model of an object is recovered from two-dimensional (2D) images of the object. Each image in the set of 2D images includes the object captured from a different camera position, and deformations of a base mesh that defines the 3D model may be computed corresponding to each image. The 3D model may also include a texture map that represents the lighting and material properties of the 3D model. Recovery of the 3D model relies on analytic antialiasing to provide a link between pixel colors in the 2D images and the geometry of the 3D model. A modular differentiable renderer design yields high performance by leveraging existing, highly optimized hardware graphics pipelines to reconstruct the 3D model. The differentiable renderer renders images of the 3D model, and differences between the rendered images and reference images are propagated backwards through the rendering pipeline to iteratively adjust the 3D model.
Type: Application
Filed: February 15, 2021
Publication date: February 17, 2022
Inventors: Samuli Matias Laine, Janne Johannes Hellsten, Tero Tapani Karras, Yeongho Seol, Jaakko T. Lehtinen, Timo Oskari Aila