Patents by Inventor Emiel Hoogeboom

Emiel Hoogeboom has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961275
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes the likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on this output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient. (A simplified code sketch appears after this listing.)
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: April 16, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
  • Patent number: 11823302
    Abstract: A device for and a computer implemented method of digital signal processing. The method includes providing a first set of data, mapping the first set of data to a second set of data, and determining an output of the digital signal processing depending on the second set of data. The second set of data is determined depending on a sum of a finite series of terms. At least one term of the series is determined depending on a result of a convolution of the first set of data with a kernel, and at least one term of the series is determined depending on the first set of data and independently of the kernel. (A simplified code sketch appears after this listing.)
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: November 21, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Emiel Hoogeboom, Jakub Tomczak, Max Welling, Dan Zhang
  • Publication number: 20220277554
    Abstract: A computer-implemented method of training an image analysis model. The image analysis model comprises a coupling layer that determines an output vector of discrete values from an input vector of discrete values. First, a machine learnable submodel of the coupling layer is trained to predict a second input part of the coupling layer from a first input part of the coupling layer. Next, the image analysis model is trained. This involves applying the coupling layer by applying the machine learnable submodel to the first input part to obtain a prediction of the second input part, and determining a second output part by applying to the second input part an invertible mapping defined by the prediction. This mapping maps the predicted value of an element of the second input part to a fixed value independent of the predicted value. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: February 11, 2022
    Publication date: September 1, 2022
    Inventors: Alexandra Lindt, Emiel Hoogeboom, William Harris Beluch
  • Publication number: 20220277559
    Abstract: A computer-implemented method of training an image analysis model. A coupling layer determines an output vector of integer values from an input vector of integer values. The coupling layer is applied by dividing the input vector into non-overlapping first and second input parts; applying a machine learnable submodel of the coupling layer to the first input part to obtain a submodel output of the machine learnable submodel; sampling a transformation vector from a discrete probability distribution, wherein the discrete probability distribution is parameterized based on the submodel output; determining a second output part based on the second input part and the transformation vector; and combining the first input part and the second output part to obtain the output vector. During backpropagation, a gradient of the sampling of the transformation vector is estimated. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: February 11, 2022
    Publication date: September 1, 2022
    Inventors: Alexandra Lindt, Emiel Hoogeboom, William Harris Beluch
  • Publication number: 20220261618
    Abstract: A computer-implemented method. The method includes: receiving or otherwise obtaining an input graph that comprises nodes and associated multi-dimensional coordinates, and propagating the input graph through a trained graph neural network. The input graph is provided as input to an input section of the trained graph neural network; an output tensor of at least one hidden layer of the trained graph neural network is determined, at least partly, based on a set of node embeddings of a previous layer and on coordinate embeddings associated with those node embeddings; and an output graph is provided in an output section of the trained graph neural network. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: January 28, 2022
    Publication date: August 18, 2022
    Inventors: Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
  • Publication number: 20220101074
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer with trainable parameters. A layer input to the first layer is based on the input signal, and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; and updating the trainable parameters of the first layer based on the approximation of the gradient. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: September 20, 2021
    Publication date: March 31, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Patrick Forre, Priyank Jaini
  • Publication number: 20220101197
    Abstract: A computer-implemented method of estimating a reliability of control data for a computer-controlled system interacting with an environment. The control data is inferred from a model input by a machine learnable control model which is trained on a training dataset. The model input comprises at least one direction vector which is extracted from sensor data and which is associated with a component of the computer-controlled system or an object in the environment. The reliability is estimated using a generative model trained to generate synthetic model inputs representative of the training dataset: an inverse of the generative model is applied to the model input to determine a likelihood of the model input being generated according to the generative model. The generative model comprises a coupling layer with a circle transformation and one or more of an unconditional rotation and a conditional rotation. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: August 25, 2021
    Publication date: March 31, 2022
    Inventors: Simon Passenheim, Emiel Hoogeboom, William Harris Beluch
  • Publication number: 20220101050
    Abstract: A computer-implemented method of training an image generation model. The image generation model comprises an argmax transformation configured to compute, from a continuous feature vector, a discrete index feature indicating the index of the feature with the extreme value. The image generation model is trained using log-likelihood optimization. This involves obtaining a value of the index feature for a training image, sampling values of the continuous feature vector given the value of the index feature according to a stochastic inverse transformation of the argmax transformation, and determining a likelihood contribution of the argmax transformation for the log-likelihood based on the probability that the stochastic inverse transformation generates the values of the continuous feature vector given the value of the index feature. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: August 25, 2021
    Publication date: March 31, 2022
    Inventors: Emiel Hoogeboom, Didrik Nielsen, Max Welling, Patrick Forre, Priyank Jaini, William Harris Beluch
  • Patent number: 11276140
    Abstract: A computer implemented method for digital image data, digital video data, or digital audio data enhancement, and a computer implemented method for encoding or decoding this data, in particular for transmission or storage. An element representing a part of said digital data comprises an indication of the position of the element in an ordered input of a plurality of data elements. A plurality of elements is transformed to a representation depending on an invertible linear mapping, wherein the invertible linear mapping maps the input of the plurality of elements to the representation and comprises at least one autoregressive convolution. (A simplified code sketch appears after this listing.)
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: March 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Emiel Hoogeboom, Dan Zhang, Max Welling
  • Publication number: 20220076044
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes the likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on this output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: August 16, 2021
    Publication date: March 10, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
  • Publication number: 20220012549
    Abstract: A computer-implemented method of training an image classifier using any combination of labelled and unlabelled training images. The image classifier comprises a set of transformations between respective transformation inputs and transformation outputs. An inverse model is defined in which, for a deterministic, non-injective transformation of the image classifier, the inverse is approximated by a stochastic inverse transformation. During training, for a given training image, a likelihood contribution for this transformation is determined based on the probability of its transformation inputs being generated by the stochastic inverse transformation given its transformation outputs. This likelihood contribution is used to determine a log-likelihood, to be maximized, of the training image (and of its label, if the training image is labelled), based on which the model parameters are optimized. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: June 11, 2021
    Publication date: January 13, 2022
    Inventors: Didrik Nielsen, Emiel Hoogeboom, Kaspar Sakmann, Max Welling, Priyank Jaini
  • Publication number: 20210366160
    Abstract: A device for and a computer implemented method of digital signal processing. The method includes providing a first set of data, mapping the first set of data to a second set of data, and determining an output of the digital signal processing depending on the second set of data. The second set of data is determined depending on a sum of a finite series of terms. At least one term of the series is determined depending on a result of a convolution of the first set of data with a kernel, and at least one term of the series is determined depending on the first set of data and independently of the kernel. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: April 28, 2021
    Publication date: November 25, 2021
    Inventors: Emiel Hoogeboom, Jakub Tomczak, Max Welling, Dan Zhang
  • Publication number: 20210086753
    Abstract: A device and a method for generating a compressed network from a trained neural network are provided. The method includes: a model generating a compressing map from first training data, the compressing map representing the impact of model components on first output data in response to the first training data; generating a compressed network by compressing the trained neural network in accordance with the compressing map; the trained neural network generating trained network output data in response to second training data; the compressed network generating compressed network output data in response to the second training data; and training the model by comparing the trained network output data with the compressed network output data. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: August 3, 2020
    Publication date: March 25, 2021
    Inventors: Jorn Peters, Emiel Hoogeboom, Max Welling, Melih Kandemir, Karim Said Mahmoud Barsim
  • Publication number: 20210073660
    Abstract: A training method is described in which data augmentation is used. New data instances are derived from existing data instances by modifying the latter in a manner dependent on respective variables. A conditionally invertible function is provided to generate different prediction target labels for the new data instances based on the respective variables. The machine learnable model may thereby learn not only the class label of a data instance but also the characteristics of the modification. By being trained to learn the characteristics of such modifications, the machine learnable model may better learn the semantic features of a data instance, and thereby may learn to classify data instances more accurately. At inference time, an inverse of the conditionally invertible function may be used to determine the class label for a test data instance based on the output label of the machine learned model. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: August 13, 2020
    Publication date: March 11, 2021
    Inventors: Dan Zhang, Emiel Hoogeboom
  • Publication number: 20200184595
    Abstract: A computer implemented method for digital image data, digital video data, or digital audio data enhancement, and a computer implemented method for encoding or decoding this data, in particular for transmission or storage. An element representing a part of said digital data comprises an indication of the position of the element in an ordered input of a plurality of data elements. A plurality of elements is transformed to a representation depending on an invertible linear mapping, wherein the invertible linear mapping maps the input of the plurality of elements to the representation and comprises at least one autoregressive convolution. (A simplified code sketch appears after this listing.)
    Type: Application
    Filed: December 3, 2019
    Publication date: June 11, 2020
    Inventors: Emiel Hoogeboom, Dan Zhang, Max Welling
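
The sketches below illustrate, in simplified form, techniques of the kind the abstracts above describe. Each is a minimal, hedged approximation under stated assumptions, not the patented implementation; all names, shapes, and constants are illustrative.

For patent 11961275 / publication 20220076044: a minimal sketch of natural-gradient training of a density model, assuming a toy invertible linear layer in place of the patented convolutional flow and an empirical, damped Fisher matrix for the natural gradient.

```python
import torch

torch.manual_seed(0)
d = 4
W = torch.eye(d, requires_grad=True)                 # weights of the (toy) flow layer
data = torch.randn(256, d) * torch.tensor([2.0, 1.0, 0.5, 0.25])

def neg_log_likelihood(W, x):
    z = x @ W.T                                      # forward pass of the flow
    log_det = torch.slogdet(W)[1]                    # log|det W|, same for each sample
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * d * torch.log(torch.tensor(2 * torch.pi))
    return -(log_pz + log_det).mean()                # negative "density value"

for step in range(100):
    (g,) = torch.autograd.grad(neg_log_likelihood(W, data), W)
    # Empirical Fisher from per-sample gradients (damped so it stays invertible).
    rows = []
    for x in data[:32]:
        (gi,) = torch.autograd.grad(neg_log_likelihood(W, x[None]), W)
        rows.append(gi.flatten())
    G = torch.stack(rows)
    F = G.T @ G / G.shape[0] + 1e-3 * torch.eye(d * d)
    nat_grad = torch.linalg.solve(F, g.flatten()).view(d, d)
    with torch.no_grad():
        W -= 0.05 * nat_grad                         # adapt weights via natural gradient
```

The natural gradient preconditions the ordinary gradient by the inverse Fisher matrix, which can make the update less sensitive to how the layer's weights are parameterized.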
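
For patent 11823302 / publication 20210366160: a minimal sketch of a mapping given by a finite series whose zeroth term depends only on the input (independent of the kernel) and whose higher terms apply the kernel repeatedly, in the spirit of a truncated matrix exponential of a convolution; the kernel scale and truncation depth are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def conv_series(x, kernel, terms=6):
    """x: (B, C, H, W); kernel: (C, C, k, k). Returns sum_i (K^i x) / i!."""
    out = x.clone()            # i = 0 term: depends on x only, not on the kernel
    term = x
    for i in range(1, terms):
        term = F.conv2d(term, kernel, padding=kernel.shape[-1] // 2)
        out = out + term / math.factorial(i)         # i-th repeated-convolution term
    return out

x = torch.randn(2, 3, 8, 8)
kernel = 0.1 * torch.randn(3, 3, 3, 3)               # small scale keeps the series stable
y = conv_series(x, kernel)
```

When the kernel is small enough for the series to converge, applying the same series with the negated kernel approximately inverts the mapping, which is what makes such a construction attractive for invertible signal processing.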
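
For publication 20220277554: a minimal sketch of a discrete coupling layer in which a submodel's prediction defines an invertible modular mapping that sends the predicted value to the fixed value 0; the submodel architecture and K = 256 are assumptions.

```python
import torch
import torch.nn as nn

K = 256  # number of discrete values (e.g., 8-bit image intensities)

class DiscreteCoupling(nn.Module):
    def __init__(self, dim_half):
        super().__init__()
        self.submodel = nn.Sequential(nn.Linear(dim_half, 64), nn.ReLU(),
                                      nn.Linear(64, dim_half * K))

    def forward(self, x1, x2):
        logits = self.submodel(x1.float() / K).view(*x2.shape, K)
        pred = logits.argmax(dim=-1)                 # predicted second input part
        z2 = (x2 - pred) % K                         # the predicted value maps to 0
        return x1, z2

    def inverse(self, x1, z2):
        logits = self.submodel(x1.float() / K).view(*z2.shape, K)
        pred = logits.argmax(dim=-1)                 # same prediction from x1 alone
        return x1, (z2 + pred) % K

layer = DiscreteCoupling(dim_half=8)
x1, x2 = torch.randint(0, K, (4, 8)), torch.randint(0, K, (4, 8))
_, z2 = layer(x1, x2)
_, x2_rec = layer.inverse(x1, z2)
assert torch.equal(x2, x2_rec)                       # exactly invertible given x1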
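```

For publication 20220277559: a minimal sketch of sampling a discrete transformation vector from a categorical distribution parameterized by submodel logits, with a straight-through estimator standing in for the patent's gradient estimation during backpropagation; the choice of estimator is an assumption.

```python
import torch
import torch.nn.functional as F

def sample_straight_through(logits):
    probs = F.softmax(logits, dim=-1)
    sample = torch.multinomial(probs.view(-1, probs.shape[-1]), 1).view(probs.shape[:-1])
    one_hot = F.one_hot(sample, probs.shape[-1]).float()
    # forward: hard one-hot sample; backward: gradient of the soft probabilities
    one_hot = one_hot + probs - probs.detach()
    values = torch.arange(probs.shape[-1], dtype=torch.float)
    return (one_hot * values).sum(-1)                # discrete transformation vector

logits = torch.randn(4, 8, 16, requires_grad=True)   # e.g., 16 possible shifts per element
t = sample_straight_through(logits)
t.sum().backward()                                   # gradients reach the logits
```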
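
For publication 20220261618: a minimal sketch of a graph layer whose hidden output depends on the previous layer's node embeddings and on coordinate-derived quantities, in the spirit of E(n)-equivariant graph networks; the fully connected message passing and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
        self.coord = nn.Linear(dim, 1, bias=False)

    def forward(self, h, x):
        """h: (N, dim) node embeddings; x: (N, 3) coordinates; fully connected graph."""
        diff = x[:, None, :] - x[None, :, :]             # (N, N, 3) pairwise offsets
        dist2 = (diff ** 2).sum(-1, keepdim=True)        # rotation-invariant distances
        hi = h[:, None, :].expand(-1, h.shape[0], -1)
        hj = h[None, :, :].expand(h.shape[0], -1, -1)
        m = self.msg(torch.cat([hi, hj, dist2], dim=-1)) # messages from embeddings + geometry
        h_new = self.upd(torch.cat([h, m.sum(1)], dim=-1))
        x_new = x + (diff * self.coord(m)).mean(1)       # equivariant coordinate update
        return h_new, x_new

layer = EGNNLayer(dim=16)
h, x = torch.randn(5, 16), torch.randn(5, 3)
h2, x2 = layer(h, x)
```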
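
For publication 20220101074: a minimal sketch of the generic training loop: training outputs are scored under a predefined base distribution, and the trainable parameters are updated from a minibatch gradient, itself one simple form of gradient approximation; the affine layer is an illustrative stand-in for the patented first layer.

```python
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """One 'first layer' with trainable scale/shift; the layer input is the signal."""
    def __init__(self, d):
        super().__init__()
        self.log_s = nn.Parameter(torch.zeros(d))
        self.t = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        z = (x - self.t) * torch.exp(-self.log_s)
        log_det = -self.log_s.sum()                  # log|det dz/dx|
        return z, log_det

flow = AffineFlow(d=8)
opt = torch.optim.SGD(flow.parameters(), lr=1e-2)
base = torch.distributions.Normal(0.0, 1.0)          # predefined probability distribution

for _ in range(100):
    x = torch.randn(64, 8) * 3 + 1                   # stand-in training input signals
    z, log_det = flow(x)
    loss = -(base.log_prob(z).sum(1) + log_det).mean()  # negative log-likelihood
    opt.zero_grad()
    loss.backward()                                  # minibatch gradient approximation
    opt.step()
```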
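
For publication 20220101197: a minimal sketch of a conditional rotation applied to 2-D unit direction vectors inside a coupling layer; rotations preserve volume, so they contribute no log-determinant term to the likelihood. The network producing the angle is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalRotation(nn.Module):
    def __init__(self, cond_dim):
        super().__init__()
        self.angle_net = nn.Sequential(nn.Linear(cond_dim, 32), nn.Tanh(),
                                       nn.Linear(32, 1))

    def rotate(self, direction, theta):
        c, s = torch.cos(theta), torch.sin(theta)
        x, y = direction[:, 0], direction[:, 1]
        return torch.stack([c * x - s * y, s * x + c * y], dim=-1)

    def forward(self, direction, cond):
        """direction: (B, 2) unit vectors; cond: (B, cond_dim) conditioning part."""
        return self.rotate(direction, self.angle_net(cond).squeeze(-1))

    def inverse(self, direction, cond):
        return self.rotate(direction, -self.angle_net(cond).squeeze(-1))

rot = ConditionalRotation(cond_dim=4)
d = F.normalize(torch.randn(8, 2), dim=-1)           # direction vectors from sensor data
cond = torch.randn(8, 4)
assert torch.allclose(rot.inverse(rot(d, cond), cond), d, atol=1e-5)
```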
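
For publication 20220101050: a minimal sketch of an argmax transformation with a stochastic inverse that samples, by smooth thresholding, a continuous vector whose argmax equals the given index; the unconditional Gaussian proposal is a simplifying assumption, and log_q plays the role of the likelihood contribution.

```python
import torch
import torch.nn.functional as F

def argmax_forward(v):
    return v.argmax(dim=-1)                          # discrete index feature

def stochastic_inverse(idx, dim):
    """Sample v with argmax(v) == idx; return v and log q(v | idx)."""
    u = torch.randn(idx.shape[0], dim)               # unconstrained Gaussian proposal
    log_q = torch.distributions.Normal(0.0, 1.0).log_prob(u).sum(-1)
    top = u.gather(-1, idx[:, None])                 # proposal value at the chosen index
    v = top - F.softplus(top - u)                    # push every coordinate below top
    jac = F.logsigmoid(top - u)                      # dv/du = sigmoid(top - u)
    mask = F.one_hot(idx, dim).bool()
    log_q = log_q - jac.masked_fill(mask, 0.0).sum(-1)   # change of variables
    v = torch.where(mask, top.expand_as(v), v)       # chosen index keeps the maximum
    return v, log_q

idx = torch.randint(0, 10, (4,))
v, log_q = stochastic_inverse(idx, dim=10)
assert torch.equal(argmax_forward(v), idx)           # inverse respects the argmax
```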
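
For patent 11276140 / publication 20200184595: a minimal 1-D sketch of an invertible autoregressive convolution: each output depends only on the current and past elements of the ordered input, so the Jacobian is triangular, the log-determinant reduces to the current tap, and inversion is a sequential solve; the fixed taps are illustrative.

```python
import torch
import torch.nn.functional as F

k = torch.tensor([0.3, -0.2, 1.5])   # taps for x[t-2], x[t-1], x[t]; current tap last

def ar_conv(x):
    """Causal convolution: y[t] = 0.3*x[t-2] - 0.2*x[t-1] + 1.5*x[t]."""
    xp = F.pad(x[None, None], (2, 0))                # left-pad: no access to the future
    return F.conv1d(xp, k[None, None]).squeeze()     # conv1d is cross-correlation

def ar_conv_inverse(y):
    x = torch.zeros_like(y)
    for t in range(len(y)):                          # sequential back-substitution
        past = sum(k[i] * x[t - 2 + i] for i in range(2) if t - 2 + i >= 0)
        x[t] = (y[t] - past) / k[2]
    return x

x = torch.randn(16)
y = ar_conv(x)
log_det = len(x) * torch.log(k[2].abs())             # triangular Jacobian determinant
assert torch.allclose(ar_conv_inverse(y), x, atol=1e-4)
```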
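
For publication 20220012549: a minimal sketch of a likelihood contribution for a deterministic, non-injective transformation (absolute value as a stand-in): a stochastic inverse re-samples the discarded sign, and the contribution is the log-probability of the true input under that inverse given the output.

```python
import torch
import torch.nn as nn

class AbsSurjection(nn.Module):
    def __init__(self, d):
        super().__init__()
        # q(sign | y): a learned Bernoulli over the information the forward pass discards
        self.sign_net = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())

    def forward(self, x):
        y = x.abs()                                  # deterministic, non-injective
        p_pos = self.sign_net(y)                     # prob. that the lost sign was +
        true_pos = (x >= 0).float()
        log_contrib = (true_pos * torch.log(p_pos + 1e-9)
                       + (1 - true_pos) * torch.log(1 - p_pos + 1e-9)).sum(-1)
        return y, log_contrib                        # added to the training log-likelihood

layer = AbsSurjection(d=8)
x = torch.randn(4, 8)
y, ll = layer(x)
```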
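
For publication 20210086753: a minimal sketch, under loose assumptions, of the compress-and-compare loop: a stand-in compressing map scores hidden channels on first training data, the lowest-scoring channels are pruned, and the pruned copy's outputs are compared with the trained network's outputs on second training data; the architecture, scoring, and pruning rule are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(16, 32), nn.Linear(32, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

def compress(trained, scores, keep_ratio=0.5):
    """Zero out the hidden channels the compressing map rates as least impactful."""
    pruned = Net()
    pruned.load_state_dict(trained.state_dict())
    k = int(len(scores) * keep_ratio)
    drop = scores.argsort()[: len(scores) - k]       # lowest-impact channels
    with torch.no_grad():
        pruned.fc1.weight[drop] = 0.0
        pruned.fc1.bias[drop] = 0.0
    return pruned

trained = Net().eval()
x1 = torch.randn(64, 16)                             # "first training data"
scores = F.relu(trained.fc1(x1)).abs().mean(0)       # stand-in compressing map
compressed = compress(trained, scores)
x2 = torch.randn(64, 16)                             # "second training data"
gap = F.mse_loss(compressed(x2), trained(x2))        # compare outputs to train the map
```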
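
For publication 20210073660: a minimal sketch of a conditionally invertible label function for rotation augmentation: the new target encodes class and rotation index together, and the mapping is inverted given the rotation to recover the class; the encoding y * NUM_ROTS + rot is an illustrative choice.

```python
import torch

NUM_CLASSES, NUM_ROTS = 10, 4                        # rotations by 0/90/180/270 degrees

def label_map(y, rot):                               # invertible given `rot`
    return y * NUM_ROTS + rot

def label_map_inverse(y_aug, rot):
    assert torch.all(y_aug % NUM_ROTS == rot)        # sanity check on the condition
    return y_aug // NUM_ROTS                         # recover the original class label

def augment(images, labels):
    rot = torch.randint(0, NUM_ROTS, labels.shape)   # per-instance modification variable
    new_images = torch.stack([torch.rot90(img, int(r), dims=(-2, -1))
                              for img, r in zip(images, rot)])
    return new_images, label_map(labels, rot), rot

images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, NUM_CLASSES, (8,))
aug_images, aug_labels, rot = augment(images, labels)
assert torch.equal(label_map_inverse(aug_labels, rot), labels)
```

Training on such composite targets asks the model to recognize both the class and the applied modification, which is the mechanism the abstract credits with better semantic feature learning.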