Patents by Inventor Vladlen Koltun

Vladlen Koltun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972519
    Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
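
As a loose illustration of the abstract above (not the patented method), the sketch below shows the data flow of a shader that samples a per-point reflectance code and a per-scene illumination code. The learned models are replaced by a fixed Lambertian rule, and all names and values are hypothetical.

```python
# Toy analogue of a learned reflectance shader: the shader samples an
# illumination code and a reflectance code and produces a shaded colour.
# Here both "codes" are plain vectors and the shading rule is fixed
# (Lambertian) so only the data flow is shown.

def shade(reflectance_code, illumination_code, normal, light_dir):
    """Combine a reflectance latent (albedo stand-in) with an illumination
    latent (light-colour stand-in) under a Lambertian cosine term."""
    cos = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [min(1.0, a * c * cos)
            for a, c in zip(reflectance_code, illumination_code)]

color = shade([0.8, 0.4, 0.2], [1.0, 1.0, 0.9],
              [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
```

In the patent the codes are optimized latents and the shader is generated from a trained model; here they are fixed numbers purely for clarity.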
  • Patent number: 11928787
    Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: March 12, 2024
    Assignee: Intel Corporation
    Inventors: Gernot Riegler, Vladlen Koltun
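
A minimal sketch of the warp-and-blend idea from the abstract above, with the encoder and both networks replaced by stubs: feature maps from source views are warped toward the target viewpoint, then blended into a single map. The 1-D shift stands in for geometry-based warping; all names are illustrative.

```python
def warp(fmap, dx):
    """Shift a 1-D 'feature map' by dx pixels, zero-filling
    (stand-in for warping via the proxy 3D geometry)."""
    n = len(fmap)
    out = [0.0] * n
    for i in range(n):
        j = i - dx
        if 0 <= j < n:
            out[i] = fmap[j]
    return out

def blend(maps):
    """Average the warped maps (stand-in for the recurrent blending net)."""
    return [sum(vals) / len(maps) for vals in zip(*maps)]

a = warp([1.0, 2.0, 3.0, 4.0], 1)   # view 1, shifted toward the target
b = warp([4.0, 3.0, 2.0, 1.0], 0)   # view 2, already aligned
novel = blend([a, b])               # blended novel-view features
```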
  • Patent number: 11875252
    Abstract: Some embodiments are directed to a neural network training device for training a neural network. At least one of the neural network layers is a projection layer, which projects its layer input vector (x) to a layer output vector (y). The output vector (y) sums to a summing parameter (k).
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: January 16, 2024
    Inventors: Brandon David Amos, Vladlen Koltun, Jeremy Zieg Kolter, Frank Rüdiger Schmidt
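
One concrete way to realize a projection layer whose output sums to a parameter k is the classic sort-based Euclidean projection onto the scaled simplex, sketched below. Note the patented layer also enforces an upper bound on each output; the simpler nonnegativity-only variant is shown here as an assumption.

```python
def project_sum_k(x, k):
    """Euclidean projection of x onto {y : y_i >= 0, sum(y) = k},
    via the sort-based simplex-projection algorithm (one concrete
    instance of a projection layer; not necessarily the patented one)."""
    u = sorted(x, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - k) / i
        if ui - t > 0:        # condition holds for the active prefix
            theta = t
    return [max(xi - theta, 0.0) for xi in x]

y = project_sum_k([0.5, 0.2, -0.1], 1.0)   # output sums to k = 1
```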
  • Patent number: 11816784
    Abstract: Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: November 14, 2023
    Assignee: Intel Corporation
    Inventors: Rene Ranftl, Vladlen Koltun
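
A toy instance of the iterative scheme described above: an initial set of per-point weights produces a geometric fit, and each round the weights are regenerated from the residuals, with a hand-written reweighting rule standing in for the second neural network. Down-weighting large residuals makes the line fit robust to the outlier.

```python
def fit_line(xs, ys, w):
    """Weighted least-squares fit of y = m*x + c (the 'geometric model')."""
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    m = num / den
    return m, my - m * mx

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 10.0]   # last point is an outlier
w = [1.0] * len(xs)               # initial weights (first "network")
for _ in range(30):               # iterative refinement (second "network")
    m, c = fit_line(xs, ys, w)
    r = [y - (m * x + c) for x, y in zip(xs, ys)]
    w = [1.0 / (1.0 + ri * ri) for ri in r]
m, c = fit_line(xs, ys, w)        # final model: close to y = x
```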
  • Publication number: 20230343014
    Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 26, 2023
    Applicant: Intel Corporation
    Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
  • Publication number: 20230113271
    Abstract: Methods, apparatus, systems and articles of manufacture disclosed herein perform dense prediction of an input image using transformers at an encoder stage and at a reassembly stage of an image processing system. A disclosed apparatus includes an encoder with an embedder to convert an input image to a plurality of tokens representing features extracted from the input image. The tokens are embedded with a learnable position embedding. The encoder also includes one or more transformers configured in a sequence of stages to relate the tokens to each other. The apparatus further includes a decoder that includes one or more of reassemblers to assemble the tokens into feature representations, one or more of fusion blocks to combine the feature representations to generate a final feature representation, and an output head to generate a dense prediction based on the final feature representation and based on an output task.
    Type: Application
    Filed: June 30, 2022
    Publication date: April 13, 2023
    Inventors: Rene Ranftl, Alexey Bochkovskiy, Vladlen Koltun
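
The sketch below illustrates only the encoder's first step from the abstract above (patch tokenization plus a position embedding); the transformer stages, reassembly, and fusion blocks are out of scope. The fixed embedding values stand in for the learnable position embedding.

```python
def image_to_tokens(img, patch):
    """Split a 2-D image (list of rows) into non-overlapping patch
    tokens, each flattened to a vector."""
    h, w = len(img), len(img[0])
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tokens.append([img[r + i][c + j]
                           for i in range(patch) for j in range(patch)])
    return tokens

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
tokens = image_to_tokens(img, 2)          # 4 tokens of length 4
pos = [[0.1 * t] * 4 for t in range(4)]   # stand-in "learnable" embedding
embedded = [[v + p for v, p in zip(tok, pe)]
            for tok, pe in zip(tokens, pos)]
```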
  • Publication number: 20230102866
    Abstract: Systems and methods for operating a deep equilibrium (DEQ) model in a neural network are disclosed. DEQs solve for a fixed point of a single nonlinear layer, which enables decoupling the internal structure of the layer from how the fixed point is actually computed. This disclosure shows that such decoupling can be exploited while substantially enhancing the fixed-point computation using a custom neural solver.
    Type: Application
    Filed: September 27, 2022
    Publication date: March 30, 2023
    Inventors: Shaojie BAI, Vladlen KOLTUN, Jeremy KOLTER, Devin T. WILLMOTT, João D. SEMEDO
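
The decoupling mentioned in the abstract can be made concrete in a few lines: the solver below takes any contractive layer f and returns a z with z ≈ f(z), independent of f's internal structure. The patent replaces plain iteration with a learned neural solver; plain fixed-point iteration is used here for clarity, and the specific layer is an arbitrary example.

```python
import math

def solve_fixed_point(f, z0, tol=1e-8, max_iter=500):
    """Generic fixed-point solver: knows nothing about f's internals."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if max(abs(a - b) for a, b in zip(z, z_next)) < tol:
            return z_next
        z = z_next
    return z

# A single implicit "layer": z = tanh(0.5*z + x), elementwise.
# The 0.5 factor keeps the map contractive so iteration converges.
x = [0.3, -0.2]
layer = lambda z: [math.tanh(0.5 * zi + xi) for zi, xi in zip(z, x)]
z_star = solve_fixed_point(layer, [0.0, 0.0])   # satisfies z* == layer(z*)
```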
  • Patent number: 11610129
    Abstract: A computer-implemented method for classification and training of a neural network includes receiving input at the neural network, wherein the input includes a plurality of resolution inputs of varying resolutions, outputting a plurality of feature tensors for each corresponding resolution of the plurality of resolution inputs, fusing the plurality of feature tensors utilizing upsampling or downsampling for the varying resolutions, utilizing an equilibrium solver to identify one or more prediction vectors from the plurality of feature tensors, and outputting a loss in response to the one or more prediction vectors.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: March 21, 2023
    Inventors: Shaojie Bai, Jeremy Kolter, Vladlen Koltun, Devin T. Willmott
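
The cross-resolution fusion step from the abstract above can be sketched with two 1-D "feature tensors": the coarse one is upsampled before being added to the fine one, and the fine one is downsampled before being added to the coarse one. The nearest-neighbour/average-pooling choices are assumptions for illustration, not the patented operators.

```python
def upsample(t):
    """Nearest-neighbour upsampling by a factor of 2."""
    return [v for v in t for _ in (0, 1)]

def downsample(t):
    """Average pooling by a factor of 2."""
    return [(t[i] + t[i + 1]) / 2 for i in range(0, len(t), 2)]

hi = [1.0, 2.0, 3.0, 4.0]   # fine-resolution feature tensor
lo = [10.0, 20.0]           # coarse-resolution feature tensor

# Fuse each resolution with the other, resampled to match:
fused_hi = [a + b for a, b in zip(hi, upsample(lo))]
fused_lo = [a + b for a, b in zip(lo, downsample(hi))]
```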
  • Publication number: 20220398480
    Abstract: Regularized training of a Deep Equilibrium Model (DEQ) is provided. A regularization term is computed using a predefined quantity of random samples and the Jacobian matrix of the DEQ, the regularization term penalizing the spectral radius of the Jacobian matrix. The regularization term is included in an original loss function of the DEQ to form a regularized loss function. A gradient of the regularized loss function is computed with respect to model parameters of the DEQ. The gradient is used to update the model parameters.
    Type: Application
    Filed: June 9, 2021
    Publication date: December 15, 2022
    Inventors: Shaojie BAI, Vladlen KOLTUN, J. Zico KOLTER, Devin T. WILLMOTT, João D. SEMEDO
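
The random-sample estimator behind the regularizer above rests on the identity ||J||_F² = E_v[||Jv||²] for Rademacher vectors v, so a handful of Jacobian-vector products estimate the (squared) Frobenius norm without forming J. Here J is an explicit 2×2 matrix so the estimate can be checked against the exact value; in the DEQ setting Jv would come from automatic differentiation.

```python
import random

J = [[1.0, 2.0],
     [0.0, 3.0]]

def jvp(v):
    """Jacobian-vector product J @ v (explicit here; autodiff in practice)."""
    return [sum(Jij * vj for Jij, vj in zip(row, v)) for row in J]

random.seed(0)
n_samples = 20000
acc = 0.0
for _ in range(n_samples):
    v = [random.choice((-1.0, 1.0)) for _ in range(2)]   # Rademacher probe
    acc += sum(x * x for x in jvp(v))
estimate = acc / n_samples                 # Hutchinson-style estimate
exact = sum(x * x for row in J for x in row)   # ||J||_F^2 = 14
```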
  • Publication number: 20220343521
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for metric depth estimation using a monocular visual-inertial system. An example apparatus for metric depth estimation includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to access a globally-aligned depth prediction, the globally-aligned depth prediction generated based on a monocular depth estimator, access a dense scale map scaffolding, the dense scale map scaffolding generated based on visual-inertial odometry, regress a dense scale residual map determined using the globally-aligned depth prediction and the dense scale map scaffolding, and apply the dense scale residual map to the globally-aligned depth prediction.
    Type: Application
    Filed: June 30, 2022
    Publication date: October 27, 2022
    Inventors: Diana Wofk, Rene Ranftl, Matthias Mueller, Vladlen Koltun
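
A sketch of the data flow only: a relative depth prediction is first globally aligned to metric scale (here via a single least-squares scale against sparse metric anchors standing in for visual-inertial odometry), then corrected per pixel by a dense scale residual map (here given as fixed numbers rather than regressed by a network).

```python
rel = [2.0, 4.0, 6.0, 8.0]      # monocular (relative) depth prediction
anchors = {0: 1.0, 2: 3.0}      # sparse metric depths, e.g. from VIO

# Least-squares global scale s minimising sum_i (s*rel[i] - metric_i)^2:
num = sum(rel[i] * d for i, d in anchors.items())
den = sum(rel[i] ** 2 for i in anchors)
s = num / den

aligned = [s * d for d in rel]              # globally aligned depth
residual = [1.0, 1.05, 1.0, 0.95]           # dense scale residual map
metric = [a * r for a, r in zip(aligned, residual)]   # final metric depth
```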
  • Publication number: 20220309739
    Abstract: Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data.
    Type: Application
    Filed: June 15, 2022
    Publication date: September 29, 2022
    Applicant: Intel Corporation
    Inventors: Rene Ranftl, Vladlen Koltun
  • Patent number: 11393160
    Abstract: Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Rene Ranftl, Vladlen Koltun
  • Publication number: 20220075555
    Abstract: Systems, apparatuses and methods may provide for technology that selects elements of a multi-scale kernel according to resolutions in an adaptive grid, conducts convolutions on the adaptive grid with the selected elements of the multi-scale kernel, and generates a signed distance field based on the convolutions.
    Type: Application
    Filed: November 17, 2021
    Publication date: March 10, 2022
    Applicant: Intel Corporation
    Inventors: Benjamin Ummenhofer, Vladlen Koltun
  • Publication number: 20220028026
    Abstract: Apparatus and method for enhancing graphics rendering photorealism. For example, one embodiment of a graphics processor comprises: a graphics processing pipeline comprising a plurality of graphics processing stages to render a graphics image; a local storage to store intermediate rendering data to generate the graphics image; and machine-learning hardware logic to perform a refinement operation on the graphics image using at least a portion of the intermediate rendering data to generate a translated image.
    Type: Application
    Filed: July 27, 2020
    Publication date: January 27, 2022
    Inventors: Stephan Richter, Vladlen Koltun, Hassan Abu Alhaija
  • Publication number: 20220012848
    Abstract: Methods, apparatus, systems and articles of manufacture disclosed herein perform dense prediction of an input image using transformers at an encoder stage and at a reassembly stage of an image processing system. A disclosed apparatus includes an encoder with an embedder to convert an input image to a plurality of tokens representing features extracted from the input image. The tokens are embedded with a learnable position embedding. The encoder also includes one or more transformers configured in a sequence of stages to relate the tokens to each other. The apparatus further includes a decoder that includes one or more of reassemblers to assemble the tokens into feature representations, one or more of fusion blocks to combine the feature representations to generate a final feature representation, and an output head to generate a dense prediction based on the final feature representation and based on an output task.
    Type: Application
    Filed: September 25, 2021
    Publication date: January 13, 2022
    Inventors: Rene Ranftl, Alexey Bochkovskiy, Vladlen Koltun
  • Publication number: 20210383234
    Abstract: A computer-implemented method for classification and training of a neural network includes receiving input at the neural network, wherein the input includes a plurality of resolution inputs of varying resolutions, outputting a plurality of feature tensors for each corresponding resolution of the plurality of resolution inputs, fusing the plurality of feature tensors utilizing upsampling or downsampling for the varying resolutions, utilizing an equilibrium solver to identify one or more prediction vectors from the plurality of feature tensors, and outputting a loss in response to the one or more prediction vectors.
    Type: Application
    Filed: June 8, 2020
    Publication date: December 9, 2021
    Inventors: Shaojie BAI, Jeremy KOLTER, Vladlen KOLTUN, Devin T. WILLMOTT
  • Publication number: 20210319324
    Abstract: Systems, apparatuses and methods may provide for technology that trains a reversible graph neural network (GNN) by partitioning an input vertex feature matrix into a plurality of groups, generating, via a block of the reversible GNN, outputs for the plurality of groups based on an adjacency matrix and an edge feature matrix, wherein the outputs are generated during one or more forward propagations, conducting a reconstruction of the input feature matrix during one or more backward propagations, and excluding the adjacency matrix and the edge feature matrix from the reconstruction. The technology also trains a deep equilibrium GNN.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 14, 2021
    Applicant: Intel Corporation
    Inventors: Matthias Mueller, Vladlen Koltun, Guohao Li
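
The reversibility trick underlying the abstract above can be shown without any graph machinery: the input features are partitioned into two groups, the forward pass mixes them through residual functions F and G, and the backward pass recovers the inputs exactly from the outputs, so activations need not be stored. F and G below are trivial stand-ins for GNN layers.

```python
def F(x):
    """Stand-in residual function (a GNN layer in the patent)."""
    return [0.5 * v for v in x]

def G(x):
    """Second stand-in residual function."""
    return [v + 1.0 for v in x]

def forward(x1, x2):
    y1 = [a + b for a, b in zip(x1, F(x2))]
    y2 = [a + b for a, b in zip(x2, G(y1))]
    return y1, y2

def inverse(y1, y2):
    """Exact inversion: recomputes inputs from outputs."""
    x2 = [a - b for a, b in zip(y2, G(y1))]
    x1 = [a - b for a, b in zip(y1, F(x2))]
    return x1, x2

x1, x2 = [1.0, 2.0], [3.0, 4.0]   # two groups of the partitioned features
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)          # reconstructs (x1, x2) exactly
```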
  • Publication number: 20210319319
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to implement parallel architectures for neural network classifiers. An example non-transitory computer readable medium comprises instructions that, when executed, cause a machine to at least: process a first stream using first neural network blocks, the first stream based on an input image; process a second stream using second neural network blocks, the second stream based on the input image; fuse a result of the first neural network blocks and the second neural network blocks; perform average pooling on the fused result; process a fully connected layer based on the result of the average pooling; and classify the image based on the output of the fully connected layer.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 14, 2021
    Inventors: Ankit Goyal, Alexey Bochkovskiy, Vladlen Koltun
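
A minimal two-stream sketch of the pipeline in the abstract above: both streams see the same input, are processed by different stub blocks, fused by addition, average-pooled, and passed through a tiny fully connected layer whose argmax gives the class. The stub blocks and weights are arbitrary placeholders for the neural network blocks.

```python
inp = [1.0, 2.0, 3.0, 4.0]                # "input image", flattened

stream1 = [v * 2.0 for v in inp]          # stand-in for first block stack
stream2 = [v + 1.0 for v in inp]          # stand-in for second block stack

fused = [a + b for a, b in zip(stream1, stream2)]   # fuse the two streams
pooled = sum(fused) / len(fused)          # global average pooling

weights = [[1.0], [-1.0]]                 # fully connected layer, 2 classes
logits = [w[0] * pooled for w in weights]
label = max(range(len(logits)), key=lambda i: logits[i])   # classify
```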
  • Publication number: 20210012576
    Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
    Type: Application
    Filed: September 22, 2020
    Publication date: January 14, 2021
    Inventors: Gernot Riegler, Vladlen Koltun
  • Publication number: 20200364553
    Abstract: Some embodiments are directed to a neural network training device for training a neural network. At least one of the neural network layers is a projection layer, which projects its layer input vector (x) to a layer output vector (y). The output vector (y) sums to a summing parameter (k).
    Type: Application
    Filed: May 17, 2019
    Publication date: November 19, 2020
    Inventors: Brandon David Amos, Vladlen Koltun, Jeremy Zieg Kolter, Frank Rüdiger Schmidt