Patents by Inventor Jan Novak
Jan Novak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240127119
Abstract: In one or more embodiments, a software service allows software providers to integrate machine learning (ML) features into the products they offer. Each ML feature may be referred to as an encapsulated ML application, which may be defined and maintained in a central repository while also being provisioned for each user of the software provider on an as-needed basis. Advantageously, embodiments allow for a central definition for an ML application that encapsulates the data science and processing capabilities and routines of the software provider. This central ML application delivers an ML deployment pipeline template that may be replicated multiple times as separate, tailored runtime pipeline instances on a per-user basis. Each runtime pipeline instance accounts for differences in the specific data of each user, resulting in user-specific ML models and predictions based on the same central ML application.
Type: Application
Filed: September 5, 2023
Publication date: April 18, 2024
Applicant: Oracle International Corporation
Inventors: Andrew Ioannou, Miroslav Novák, Petr Dousa, Martin Panacek, Hari Ganesh Natarajan, David Kalivoda, Vojtech Janota, Zdenek Pesek, Jan Pridal
-
Patent number: 11935179
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach.
Type: Grant
Filed: March 15, 2023
Date of Patent: March 19, 2024
Assignee: NVIDIA Corporation
Inventors: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
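The abstract's core argument can be made concrete with a small illustrative sketch (plain Python, not NVIDIA's CUDA implementation): per layer, a width-W fully-connected network performs roughly W² multiply-adds but moves only about 2W values in and out, so arithmetic intensity grows linearly with width, and keeping intermediate activations in fast on-chip memory is what removes the bandwidth bottleneck.

```python
import numpy as np

def mlp_forward(x, weights):
    """Evaluate a fully-connected ReLU network; `weights` is a list of (W, W) matrices."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)   # hidden layers: ReLU
    return weights[-1] @ h           # linear output layer

def arithmetic_intensity(width):
    """Multiply-adds per value of memory traffic for one width-W layer."""
    return (width * width) / (2 * width)   # W^2 MACs vs W in + W out

rng = np.random.default_rng(0)
width, layers = 64, 4
weights = [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(layers)]
y = mlp_forward(rng.standard_normal(width), weights)
```

Because `arithmetic_intensity` grows with width, narrow networks are memory-bound; the fused approach keeps every `h` above in registers/shared memory so only `x` and `y` touch global memory.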
-
Publication number: 20240020443
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuating financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is the relatively slow convergence rate that manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from the use of a first neural network that infers the integral of the control variate and using normalizing flows to model a shape of the control variate.
Type: Application
Filed: September 29, 2023
Publication date: January 18, 2024
Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
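The control-variate idea in the abstract can be sketched with a classical, non-neural example: the estimator integrates the residual f − g by Monte Carlo and adds back the known integral G of the control variate g. In the patented variant, a first neural network infers G and normalizing flows model the shape of g.

```python
import numpy as np

def cv_estimate(f, g, G, samples):
    """Unbiased estimate of the integral of f over [0,1] with control variate g,
    where G is the (known) integral of g over [0,1]."""
    return G + np.mean(f(samples) - g(samples))

rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, 10_000)
f = lambda x: np.exp(x)   # target integrand; true integral is e - 1
g = lambda x: 1.0 + x     # crude control variate; its integral is 1.5
est = cv_estimate(f, g, 1.5, xs)
```

Because the residual e^x − (1 + x) has much lower variance than e^x itself, the estimate converges faster than plain Monte Carlo at the same sample count.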
-
Patent number: 11816404
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuating financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is the relatively slow convergence rate that manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from the use of a first neural network that infers the integral of the control variate and using normalizing flows to model a shape of the control variate.
Type: Grant
Filed: October 29, 2020
Date of Patent: November 14, 2023
Assignee: NVIDIA Corporation
Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
-
Publication number: 20230230310
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach.
Type: Application
Filed: March 15, 2023
Publication date: July 20, 2023
Inventors: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
-
Patent number: 11631210
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach.
Type: Grant
Filed: June 7, 2021
Date of Patent: April 18, 2023
Assignee: NVIDIA Corporation
Inventors: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
-
Patent number: 11610360
Abstract: A real-time neural radiance caching technique for path-traced global illumination is implemented using a neural network for caching scattered radiance components of global illumination. The neural (network) radiance cache handles fully dynamic scenes, and makes no assumptions about the camera, lighting, geometry, and materials. In contrast with conventional caching, the data-driven approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. The neural radiance cache is trained via online learning during rendering. Advantages of the neural radiance cache are noise reduction and real-time performance. Importantly, the runtime overhead and memory footprint of the neural radiance cache are stable and independent of scene complexity.
Type: Grant
Filed: June 7, 2021
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
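The online-learning aspect can be sketched with a stand-in function approximator (a linear model on random Fourier-style features replaces the neural network, and the "radiance" target is a made-up smooth function): the cache is trained during rendering from freshly generated samples, so it needs no precomputation, sidesteps explicit cache points, and adapts to dynamic scenes.

```python
import numpy as np

class OnlineRadianceCache:
    """Toy cache: linear weights over random cosine features, trained by SGD."""
    def __init__(self, n_features=64, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, 3))        # random frequencies
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)   # random phases
        self.w = np.zeros(n_features)                        # trainable weights
        self.lr = lr

    def _feats(self, x):
        return np.cos(x @ self.W.T + self.b)

    def query(self, x):
        """Cheap cache lookup at 3D points x, shape (N, 3)."""
        return self._feats(x) @ self.w

    def train_step(self, x, target):
        """One SGD step on squared error against freshly traced values."""
        phi = self._feats(x)
        err = phi @ self.w - target
        self.w -= self.lr * phi.T @ err / len(target)

rng = np.random.default_rng(2)
cache = OnlineRadianceCache()
truth = lambda x: np.sin(x.sum(axis=1))   # stand-in for path-traced radiance
for _ in range(2000):                     # online updates interleaved with "rendering"
    batch = rng.uniform(-1.0, 1.0, (32, 3))
    cache.train_step(batch, truth(batch))
```

The cache's cost per query and its memory footprint are fixed by `n_features`, mirroring the abstract's point that overhead is independent of scene complexity.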
-
Publication number: 20230083929
Abstract: A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
Type: Application
Filed: November 9, 2022
Publication date: March 16, 2023
Applicants: Pixar, Disney Enterprises, Inc.
Inventors: Thijs Vogels, Fabrice Rousselle, Jan Novak, Brian McWilliams, Mark Meyer, Alex Harvill
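A minimal sketch of an asymmetric loss in the spirit of the abstract (the exact formulation in the patent may differ): residuals that land on the opposite side of the reference from the noisy input are penalized by an extra factor, which discourages the denoiser from "overshooting" past the reference and gives a knob for the variance-bias trade-off.

```python
import numpy as np

def asymmetric_l1(denoised, reference, noisy, slope=2.0):
    """L1 loss that multiplies the penalty by `slope` wherever the denoised
    value crossed to the opposite side of the reference from the noisy input."""
    err = denoised - reference
    crossed = err * (noisy - reference) < 0   # overshot past the reference?
    weight = np.where(crossed, slope, 1.0)
    return float(np.mean(weight * np.abs(err)))
```

With `slope > 1` the trained network stays closer to the unbiased (but noisy) input; `slope = 1` recovers plain L1.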
-
Patent number: 11593988
Abstract: In various examples, transmittance may be computed using a power-series expansion of an exponential integral of a density function. A term of the power-series expansion may be evaluated as a combination of values of the term for different orderings of samples in the power-series expansion. A sample may be computed from a combination of values at spaced intervals along the function and a discontinuity may be compensated for based at least on determining a version of the function that includes an alignment of a first point with a second point of the function. Rather than arbitrarily or manually selecting a pivot used to expand the power-series, the pivot may be computed as an average of values of the function. The transmittance estimation may be computed from the power-series expansion using a value used to compute the pivot (for a biased estimate) or using all different values (for an unbiased estimate).
Type: Grant
Filed: February 8, 2021
Date of Patent: February 28, 2023
Assignee: NVIDIA Corporation
Inventors: Eugene d'Eon, Jan Novak, Jacopo Pantaleoni, Niko Markus Kettunen
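The deterministic core of the expansion can be sketched directly: transmittance is T = exp(−τ), and expanding the exponential around a pivot c gives T = exp(−c) · Σₖ (c − τ)ᵏ / k!, which converges quickly when c is close to τ (hence the patent's pivot chosen as an average of function values). The sampled biased/unbiased estimators described in the abstract are elided here.

```python
import math

def transmittance_series(tau, pivot, n_terms=20):
    """Evaluate exp(-tau) via a power series expanded around `pivot`:
    exp(-tau) = exp(-pivot) * sum_k (pivot - tau)^k / k!."""
    acc, term = 0.0, 1.0
    for k in range(n_terms):
        acc += term
        term *= (pivot - tau) / (k + 1)   # next series term
    return math.exp(-pivot) * acc
```

When `pivot == tau` the series collapses to its first term and the result is exact; a poorly chosen pivot far from τ needs many more terms for the same accuracy, which is why pivot selection matters.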
-
Patent number: 11532073
Abstract: A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
Type: Grant
Filed: July 31, 2018
Date of Patent: December 20, 2022
Assignees: Pixar, Disney Enterprises, Inc.
Inventors: Thijs Vogels, Fabrice Rousselle, Jan Novak, Brian McWilliams, Mark Meyer, Alex Harvill
-
Publication number: 20220284657
Abstract: A real-time neural radiance caching technique for path-traced global illumination is implemented using a neural network for caching scattered radiance components of global illumination. The neural (network) radiance cache handles fully dynamic scenes, and makes no assumptions about the camera, lighting, geometry, and materials. In contrast with conventional caching, the data-driven approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. The neural radiance cache is trained via online learning during rendering. Advantages of the neural radiance cache are noise reduction and real-time performance. Importantly, the runtime overhead and memory footprint of the neural radiance cache are stable and independent of scene complexity.
Type: Application
Filed: June 7, 2021
Publication date: September 8, 2022
Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
-
Publication number: 20220284658
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach.
Type: Application
Filed: June 7, 2021
Publication date: September 8, 2022
Inventors: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
-
Publication number: 20220254099
Abstract: In various examples, transmittance may be computed using a power-series expansion of an exponential integral of a density function. A term of the power-series expansion may be evaluated as a combination of values of the term for different orderings of samples in the power-series expansion. A sample may be computed from a combination of values at spaced intervals along the function and a discontinuity may be compensated for based at least on determining a version of the function that includes an alignment of a first point with a second point of the function. Rather than arbitrarily or manually selecting a pivot used to expand the power-series, the pivot may be computed as an average of values of the function. The transmittance estimation may be computed from the power-series expansion using a value used to compute the pivot (for a biased estimate) or using all different values (for an unbiased estimate).
Type: Application
Filed: February 8, 2021
Publication date: August 11, 2022
Inventors: Eugene d'Eon, Jan Novak, Jacopo Pantaleoni, Niko Markus Kettunen
-
Publication number: 20220054065
Abstract: The invention relates to a method for detecting a relapse of a patient from a remission state into a depression or mania state. Motor activity data is recorded using a wearable device worn by the patient and received as input data by an evaluating unit, and/or mood data is acquired from a questionnaire completed by the patient. The questions of the questionnaire relate to the mania state and to the depression state, and the questionnaire includes at least one control question for checking the patient's awareness and/or ability to focus; the questions are designed to be answered by multiple choice, and the patient's answers are input into the evaluating unit. The evaluating unit analyzes the input data and classifies the patient's condition as remission, mania, or depression by means of machine learning; a relapse is detected if the patient is classified as mania or depression.
Type: Application
Filed: August 19, 2021
Publication date: February 24, 2022
Inventor: Jan Novak
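The questionnaire-side decision logic described in the abstract can be sketched as follows (the classifier here is a stand-in; a real system would train it on the motor-activity and questionnaire data as described, and the control-question handling is one plausible reading of the abstract):

```python
STATES = ("remission", "mania", "depression")

def detect_relapse(answers, control_ok, classify):
    """Return (state, relapse_detected), or None when the control question
    fails and the questionnaire data cannot be trusted."""
    if not control_ok:          # control question guards awareness/focus
        return None
    state = classify(answers)   # ML classifier over the input data
    assert state in STATES
    return state, state in ("mania", "depression")
```

The relapse flag follows the abstract directly: any classification other than remission counts as a detected relapse.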
-
Publication number: 20210294945
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuating financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is the relatively slow convergence rate that manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from the use of a first neural network that infers the integral of the control variate and using normalizing flows to model a shape of the control variate.
Type: Application
Filed: October 29, 2020
Publication date: September 23, 2021
Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
-
Patent number: 11037274
Abstract: Supervised machine learning using neural networks is applied to denoising images rendered by MC path tracing. Specialization of neural networks may be achieved by using a modular design that allows reusing trained components in different networks and facilitates easy debugging and incremental building of complex structures. Specialization may also be achieved by using progressive neural networks. In some embodiments, training of a neural-network based denoiser may use importance sampling, where more challenging patches or patches including areas of particular interest within a training dataset are selected with higher probabilities than others. In some other embodiments, generative adversarial networks (GANs) may be used for training a machine-learning based denoiser as an alternative to using pre-defined loss functions.
Type: Grant
Filed: February 12, 2020
Date of Patent: June 15, 2021
Assignees: Pixar, Disney Enterprises, Inc.
Inventors: Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Mark Meyer, Jan Novak
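The importance-sampling idea can be sketched as follows (the difficulty score and the 1/p loss reweighting are illustrative assumptions, not taken from the patent text): harder patches are drawn with higher probability, and dividing each drawn patch's loss by its selection probability keeps the expected training loss equal to the uniform average over the dataset.

```python
import numpy as np

def sample_patches(difficulty, n_draws, rng):
    """Draw patch indices with probability proportional to difficulty,
    returning importance weights that keep loss averages unbiased."""
    p = difficulty / difficulty.sum()            # selection probabilities
    idx = rng.choice(len(difficulty), size=n_draws, p=p)
    weights = 1.0 / (len(difficulty) * p[idx])   # importance weights
    return idx, weights

rng = np.random.default_rng(3)
difficulty = np.array([0.1, 0.1, 0.1, 5.0])   # one "hard" patch dominates
losses = np.array([0.0, 1.0, 2.0, 3.0])       # hypothetical per-patch losses
idx, w = sample_patches(difficulty, 200_000, rng)
estimate = float(np.mean(losses[idx] * w))    # matches losses.mean() in expectation
```

The hard patch is visited far more often, yet the reweighted loss estimate still converges to the plain average, so the optimization target is unchanged.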
-
Patent number: 10818080
Abstract: According to one implementation, a system includes a computing platform having a hardware processor and a system memory storing a software code including multiple artificial neural networks (ANNs). The hardware processor executes the software code to partition a multi-dimensional input vector into a first vector data and a second vector data, and to transform the second vector data using a first piecewise-polynomial transformation parameterized by one of the ANNs, based on the first vector data, to produce a transformed second vector data. The hardware processor further executes the software code to transform the first vector data using a second piecewise-polynomial transformation parameterized by another of the ANNs, based on the transformed second vector data, to produce a transformed first vector data, and to determine a multi-dimensional output vector based on an output from the plurality of ANNs.
Type: Grant
Filed: October 11, 2018
Date of Patent: October 27, 2020
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Thomas Müller, Brian McWilliams, Fabrice Pierre Armand Rousselle, Jan Novak
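The coupling structure in the abstract can be sketched with a piecewise-linear warp standing in for the piecewise-polynomial transformation, and a fixed toy parameterization standing in for the ANNs: the first half of the vector conditions an invertible, monotone warp applied to the second half (in the patent, the roles are then swapped for a second transformation).

```python
import numpy as np

def piecewise_linear_warp(u, bin_probs):
    """Map u in [0,1] through the CDF of a piecewise-constant density
    on K equal-width bins; monotone and invertible for positive probs."""
    K = len(bin_probs)
    cdf = np.concatenate([[0.0], np.cumsum(bin_probs)])
    k = min(int(u * K), K - 1)                    # containing bin
    return float(cdf[k] + (u - k / K) * K * bin_probs[k])

def toy_bin_probs(conditioner, K=4):
    """Stand-in for the patent's ANN (hypothetical fixed map): softmax of
    logits derived from the conditioning half of the vector."""
    logits = np.array([(i + 1) * float(np.sum(conditioner)) for i in range(K)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def coupling_forward(x):
    """Warp the second half of x, conditioned on the (unchanged) first half."""
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    probs = toy_bin_probs(x1)
    y2 = np.array([piecewise_linear_warp(u, probs) for u in x2])
    return np.concatenate([x1, y2])
```

Because the conditioning half passes through unchanged, the transform is trivially invertible: recompute `probs` from `y[:half]` and invert the monotone warp bin by bin.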
-
Patent number: 10796414
Abstract: Supervised machine learning using a convolutional neural network (CNN) is applied to denoising images rendered by MC path tracing. The input image data may include pixel color and its variance, as well as a set of auxiliary buffers that encode scene information (e.g., surface normal, albedo, depth, and their corresponding variances). In some embodiments, a CNN directly predicts the final denoised pixel value as a highly non-linear combination of the input features. In some other embodiments, a kernel-prediction neural network uses a CNN to estimate the local weighting kernels, which are used to compute each denoised pixel from its neighbors. In some embodiments, the input image can be decomposed into diffuse and specular components. The diffuse and specular components are then independently preprocessed, filtered, and postprocessed, before recombining them to obtain a final denoised image.
Type: Grant
Filed: September 26, 2019
Date of Patent: October 6, 2020
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Thijs Vogels, Jan Novák, Fabrice Rousselle, Brian McWilliams
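The kernel-prediction step can be sketched as follows (the CNN that predicts the kernels is replaced here by precomputed weights): each denoised pixel is a normalized weighted average of its neighborhood, with a separate small kernel per pixel.

```python
import numpy as np

def apply_kernels(image, kernels):
    """Denoise by per-pixel weighted averaging.
    image: (H, W) grayscale; kernels: (H, W, k, k) nonnegative weights."""
    H, W = image.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(image, r, mode="edge")   # replicate borders
    out = np.empty_like(image)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            w = kernels[y, x]
            out[y, x] = (w * patch).sum() / w.sum()   # normalized average
    return out
```

Normalizing the predicted weights guarantees the output is a convex combination of observed radiance values, which tends to make kernel prediction more stable than directly regressing pixel colors.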
-
Patent number: 10789686
Abstract: Supervised machine learning using neural networks is applied to denoising images rendered by MC path tracing. Specialization of neural networks may be achieved by using a modular design that allows reusing trained components in different networks and facilitates easy debugging and incremental building of complex structures. Specialization may also be achieved by using progressive neural networks. In some embodiments, training of a neural-network based denoiser may use importance sampling, where more challenging patches or patches including areas of particular interest within a training dataset are selected with higher probabilities than others. In some other embodiments, generative adversarial networks (GANs) may be used for training a machine-learning based denoiser as an alternative to using pre-defined loss functions.
Type: Grant
Filed: January 6, 2020
Date of Patent: September 29, 2020
Assignees: Pixar, Disney Enterprises, Inc.
Inventors: Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Mark Meyer, Jan Novak
-
Patent number: 10706508
Abstract: A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
Type: Grant
Filed: July 31, 2018
Date of Patent: July 7, 2020
Assignees: Disney Enterprises, Inc., Pixar
Inventors: Thijs Vogels, Fabrice Rousselle, Jan Novak, Brian McWilliams, Mark Meyer, Alex Harvill