Patents by Inventor Priyank Jaini

Priyank Jaini has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961275
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes a likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: April 16, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
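A natural-gradient update of flow parameters, as described in the abstract, can be sketched in miniature. The toy below is not the patented method: it assumes a one-dimensional affine flow with parameters (mu, log_sigma), a standard-normal base density, and an empirical Fisher matrix built from per-sample scores.

```python
import numpy as np

def log_density(x, mu, log_sigma):
    # log p(x) for an affine flow z = (x - mu) / exp(log_sigma)
    # with a standard-normal base distribution
    z = (x - mu) / np.exp(log_sigma)
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi) - log_sigma

def grad_params(x, mu, log_sigma):
    # analytic gradient of log p(x) w.r.t. (mu, log_sigma)
    s = np.exp(log_sigma)
    z = (x - mu) / s
    return np.array([z / s, z ** 2 - 1.0])

def natural_gradient_step(xs, params, lr=0.1, damping=1e-3):
    # precondition the mean score with a damped empirical Fisher matrix,
    # then ascend the log-density in the natural-gradient direction
    mu, log_sigma = params
    scores = np.stack([grad_params(x, mu, log_sigma) for x in xs])
    g = scores.mean(axis=0)
    fisher = (scores[:, :, None] * scores[:, None, :]).mean(axis=0)
    fisher += damping * np.eye(2)
    return params + lr * np.linalg.solve(fisher, g)
```

Iterating `natural_gradient_step` raises the mean log-density of the training inputs, mirroring the "adapting the weights according to the natural gradient" step.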
  • Publication number: 20220388172
    Abstract: A computer-implemented method of training a machine learnable model for controlling and/or monitoring a computer-controlled system. The machine learnable model is configured to make inferences based on a probability distribution of sensor data of the computer-controlled system, and to account for one or more symmetries in the probability distribution imposed by the system and/or its environment. The training involves sampling multiple samples of the sensor data according to the probability distribution. Initial values are sampled from a source probability distribution invariant to the one or more symmetries. The samples are iteratively evolved according to a kernel function equivariant to the one or more symmetries. The evolution uses an attraction term and a repulsion term that are defined for a selected sample in terms of gradient directions of the probability distribution and of the kernel function for the multiple samples.
    Type: Application
    Filed: May 16, 2022
    Publication date: December 8, 2022
    Inventors: Priyank Jaini, Lars Holdijk, Max Welling
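The attraction/repulsion evolution described here closely resembles Stein variational gradient descent, in which a kernel pulls samples toward high-density regions while its gradient pushes them apart. The sketch below is a plain, non-equivariant SVGD step for a one-dimensional target; the equivariant kernel of the patent is not reproduced.

```python
import numpy as np

def svgd_step(xs, score, lr=0.1):
    # xs: 1-D array of particles; score(x) = d/dx log p(x)
    diff = xs[:, None] - xs[None, :]                 # pairwise differences
    d2 = diff ** 2
    h = np.median(d2) / np.log(len(xs) + 1) + 1e-8   # median bandwidth heuristic
    K = np.exp(-d2 / h)                              # RBF kernel matrix
    attraction = K @ score(xs)                       # kernel-weighted scores
    repulsion = (2.0 / h) * (diff * K).sum(axis=1)   # kernel gradients
    return xs + lr * (attraction + repulsion) / len(xs)
```

Starting particles far from a standard-normal target and iterating the step drives them toward the target, while the repulsion term keeps them spread out rather than collapsing onto the mode.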
  • Publication number: 20220101050
    Abstract: A computer-implemented method of training an image generation model. The image generation model comprises an argmax transformation configured to compute a discrete index feature indicating the index of the feature of a continuous feature vector that has an extreme value. The image generation model is trained using log-likelihood optimization. This involves obtaining a value of the index feature for a training image, sampling values of the continuous feature vector given the value of the index feature according to a stochastic inverse transformation of the argmax transformation, and determining a likelihood contribution of the argmax transformation for the log-likelihood based on a probability that the stochastic inverse transformation generates the values of the continuous feature vector given the value of the index feature.
    Type: Application
    Filed: August 25, 2021
    Publication date: March 31, 2022
    Inventors: Emiel Hoogeboom, Didrik Nielsen, Max Welling, Patrick Forre, Priyank Jaini, William Harris Beluch
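One way to realize a stochastic inverse of an argmax transformation (an illustrative thresholding construction, assumed here rather than taken from the patent text) is to draw an unconstrained vector and then push every other coordinate strictly below the chosen coordinate, so the argmax of the result equals the given index by construction:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x)); strictly positive everywhere
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def sample_given_argmax(k, dim, rng):
    # stochastic inverse of argmax: sample an unconstrained vector u,
    # then threshold all coordinates other than k strictly below u[k]
    u = rng.normal(size=dim)
    t = u[k]
    v = t - softplus(t - u)  # v[i] < t for every i != k
    v[k] = t
    return v
```

Because `softplus` is strictly positive, `np.argmax(v)` always equals `k`; the log-probability of the draw under the sampling distribution supplies the likelihood contribution mentioned in the abstract.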
  • Publication number: 20220101074
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer which includes trainable parameters. A layer input to the first layer is based on the input signal and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value which is based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; updating the trainable parameters of the first layer based on the approximation of the gradient.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 31, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Patrick Forre, Priyank Jaini
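One concrete way to approximate the gradient of the trainable parameters (an assumption for illustration, in the spirit of self-normalizing flows, not a restatement of the claims) is to take a linear flow z = W x and replace the expensive exact log-determinant term d log|det W| / dW = inv(W).T with the transpose of a learned approximate inverse R:

```python
import numpy as np

def approx_flow_grads(x, W, R):
    # approximate gradient of log p(x) for a linear flow z = W x with a
    # standard-normal base; the exact log-det term inv(W).T is replaced
    # by R.T, where R is trained to reconstruct x from z
    z = W @ x
    grad_W = -np.outer(z, x) + R.T
    recon = R @ z
    grad_R = -2.0 * np.outer(recon - x, z)  # ascent direction on -||R z - x||^2
    return grad_W, grad_R
```

When R equals inv(W), the approximation coincides with the exact gradient -z x^T + inv(W).T and the reconstruction gradient vanishes; during training R only tracks inv(W) approximately.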
  • Publication number: 20220076044
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes a likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
    Type: Application
    Filed: August 16, 2021
    Publication date: March 10, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
  • Publication number: 20220012549
    Abstract: A computer-implemented method of training an image classifier which uses any combination of labelled and/or unlabelled training images. The image classifier comprises a set of transformations between respective transformation inputs and transformation outputs. An inverse model is defined in which, for a deterministic, non-injective transformation of the image classifier, the inverse is approximated by a stochastic inverse transformation. During training, for a given training image, a likelihood contribution for this transformation is determined based on a probability of its transformation inputs being generated by the stochastic inverse transformation given its transformation outputs. This likelihood contribution is used to determine a log-likelihood for the training image (and its label, if the training image is labelled) to be maximized, based on which the model parameters are optimized.
    Type: Application
    Filed: June 11, 2021
    Publication date: January 13, 2022
    Inventors: Didrik Nielsen, Emiel Hoogeboom, Kaspar Sakmann, Max Welling, Priyank Jaini
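As a minimal illustration of a stochastic inverse for a deterministic, non-injective transformation (the absolute value is used here purely as an assumed example; the patent does not name it), the inverse samples one pre-image at random and reports the log-probability of that choice as the likelihood contribution:

```python
import numpy as np

def abs_forward(x):
    # deterministic, non-injective transformation: both x and -x map to |x|
    return np.abs(x)

def abs_stochastic_inverse(y, rng):
    # sample one pre-image of |.| by drawing a sign per coordinate
    s = rng.choice([-1.0, 1.0], size=y.shape)
    x = s * y
    log_q = y.size * np.log(0.5)  # log-prob of this particular inverse draw
    return x, log_q
```

Applying the forward transformation to any sampled pre-image recovers the original outputs exactly, and `log_q` is the term that enters the log-likelihood to be maximized.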
  • Publication number: 20210287093
    Abstract: A method for training a neural network. The neural network comprises a first layer which includes a plurality of filters to provide a first layer output comprising a plurality of feature maps. Training of the neural network includes: receiving, from a preceding layer, a first layer input of the first layer, wherein the first layer input is based on an input signal; determining the first layer output based on the first layer input and a plurality of parameters of the first layer; determining a first layer loss value based on the first layer output, wherein the first layer loss value characterizes a degree of dependency between the feature maps, the first layer loss value being obtained in an unsupervised fashion; and training the neural network. The training includes an adaptation of the parameters of the first layer, the adaptation being based on the first layer loss value.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 16, 2021
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Max Welling, Priyank Jaini
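An unsupervised loss that "characterizes a degree of dependency between the feature maps" could, for instance, penalize the off-diagonal entries of the inter-map correlation matrix. This decorrelation proxy is an assumption chosen for illustration, not the patented loss:

```python
import numpy as np

def dependency_loss(feature_maps):
    # feature_maps: array of shape (C, H, W); flatten each map, normalize,
    # and sum the squared off-diagonal entries of the C x C correlation
    # matrix as an unsupervised proxy for dependence between feature maps
    c = feature_maps.shape[0]
    f = feature_maps.reshape(c, -1)
    f = (f - f.mean(axis=1, keepdims=True)) / (f.std(axis=1, keepdims=True) + 1e-8)
    corr = f @ f.T / f.shape[1]
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).sum())
```

Duplicated (fully dependent) feature maps yield a high loss, while independent random maps yield a loss near zero, so minimizing it pushes the layer toward statistically independent feature maps.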