Patents by Inventor Thomas Andy Keller

Thomas Andy Keller has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161346
    Abstract: A computer-implemented method of encoding data. The method includes providing a first set of parameters that represent at least a part of the data, determining for the parameters in the first set a positive weighted first sum that depends on those parameters, providing a first parameter that represents at least a part of the data, and determining an encoding of the data depending on a ratio between the first parameter and either the first sum or a root of predetermined order of the first sum.
    Type: Application
    Filed: July 22, 2022
    Publication date: May 16, 2024
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
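The encoding in the entry above reads like a divisive-normalization scheme. A minimal NumPy sketch, assuming squared parameters keep the weighted sum positive; all names (`encode`, `params`, `weights`, `order`) are hypothetical, not from the claims:

```python
import numpy as np

def encode(params, weights, first_param, order=2):
    """Encode first_param as a ratio to a root of a weighted sum of params.

    Squaring the parameters (an assumption, not stated in the abstract)
    guarantees the weighted first sum is positive.
    """
    weighted_sum = float(np.sum(np.asarray(weights) * np.asarray(params) ** 2))
    assert weighted_sum > 0, "the weighted first sum must be positive"
    # Ratio between the first parameter and a root of predetermined
    # `order` of the first sum (divisive normalization).
    return first_param / weighted_sum ** (1.0 / order)
```

With `params=[3, 4]`, unit weights, and `order=2`, the denominator is the Euclidean norm of the parameters.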
  • Patent number: 11961275
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes the likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: April 16, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
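To illustrate the natural-gradient update in the entry above on the smallest possible flow: a one-parameter affine flow z = s·x with a standard-normal base, preconditioning the plain gradient with an empirical diagonal Fisher. This is a toy sketch, not the patented method; `s`, `lr`, and the Fisher proxy are all illustrative choices.

```python
import numpy as np

def natural_gradient_step(s, x, lr=1.0, eps=1e-8):
    """One natural-gradient ascent step on the mean log-density of the
    toy flow z = s * x under a standard-normal base distribution."""
    # Per-sample score: d/ds [ -0.5*(s*x)^2 + log|s| ] = -s*x^2 + 1/s
    score = -s * x ** 2 + 1.0 / s
    grad = score.mean()
    # Empirical Fisher (mean squared score) as the preconditioner
    fisher = np.mean(score ** 2) + eps
    return s + lr * grad / fisher
```

Iterating this step drives s toward 1/std(x), the maximum-likelihood scale, with Newton-like steps near the optimum.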
  • Patent number: 11804034
    Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: October 31, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
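One simple stand-in for the "mutual statistical dependence between representations" that the regularization objective in the entry above rewards is per-dimension correlation between two layer representations. A sketch under that assumption; the estimator and all names are illustrative, not the patent's construction:

```python
import numpy as np

def dependence_reward(h1, h2):
    """Mean absolute per-dimension correlation between two batches of
    layer representations h1, h2 of shape (batch, features)."""
    z1 = (h1 - h1.mean(0)) / (h1.std(0) + 1e-8)
    z2 = (h2 - h2.mean(0)) / (h2.std(0) + 1e-8)
    return float(np.mean(np.abs((z1 * z2).mean(0))))
```

Adding such a reward to the training objective pushes representations of related inputs to vary together, which is the generalization mechanism the abstract describes.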
  • Publication number: 20220101074
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer which includes trainable parameters. A layer input to the first layer is based on the input signal and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value which is based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; updating the trainable parameters of the first layer based on the approximation of the gradient.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 31, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Patrick Forre, Priyank Jaini
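The entry above hinges on training with an *approximation* of the gradient of the trainable parameters. The abstract does not say which approximation is used; as one concrete possibility, a central finite difference on the negative log-likelihood of the same toy one-parameter affine flow (all names hypothetical):

```python
import numpy as np

def nll(s, x):
    # Negative mean log-density of the toy flow z = s * x under a
    # standard-normal base distribution.
    z = s * x
    return float(np.mean(0.5 * z ** 2 + 0.5 * np.log(2 * np.pi) - np.log(abs(s))))

def approx_grad(s, x, h=1e-4):
    # Central finite difference as one possible gradient approximation.
    return (nll(s + h, x) - nll(s - h, x)) / (2.0 * h)
```

For this flow the exact gradient is s·mean(x²) − 1/s, so the approximation can be checked directly.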
  • Publication number: 20220076044
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes the likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output by providing the second input image to the normalizing flow; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; and adapting the weights according to the natural gradient.
    Type: Application
    Filed: August 16, 2021
    Publication date: March 10, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
  • Publication number: 20210350182
    Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
    Type: Application
    Filed: April 16, 2021
    Publication date: November 11, 2021
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
  • Publication number: 20210287093
    Abstract: A method for training a neural network. The neural network comprises a first layer which includes a plurality of filters to provide a first layer output comprising a plurality of feature maps. Training the neural network includes: receiving, in the first layer, a first layer input from a preceding layer, wherein the first layer input is based on an input signal; determining the first layer output based on the first layer input and a plurality of parameters of the first layer; determining a first layer loss value based on the first layer output, wherein the first layer loss value characterizes a degree of dependency between the feature maps and is obtained in an unsupervised fashion; and training the neural network, including an adaptation of the parameters of the first layer based on the first layer loss value.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 16, 2021
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Max Welling, Priyank Jaini
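A simple unsupervised proxy for the "degree of dependency between the feature maps" penalized in the entry above is the sum of squared off-diagonal covariances between (pooled) feature maps. A sketch under that assumption; shapes and names are illustrative, not from the patent:

```python
import numpy as np

def feature_map_dependency_loss(fmaps):
    """Sum of squared off-diagonal covariances between feature maps.

    fmaps: (batch, channels) array, e.g. spatially pooled feature maps.
    Near zero for statistically independent maps; large when maps are
    redundant copies of one another.
    """
    z = fmaps - fmaps.mean(0)
    cov = (z.T @ z) / (len(fmaps) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))
```

Minimizing this loss alongside the task loss pushes the first layer's filters toward statistically independent responses.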
  • Publication number: 20210081784
    Abstract: Device and method for training an artificial neural network, including: providing a neural network layer for an equivariant feature mapping having a plurality of output channels; grouping the output channels into a number of distinct groups, wherein the output channels of each distinct group are organized into an individual grid that defines a spatial location for each of that group's output channels; providing, for each output channel of each distinct group, a distinct normalization function defined depending on the spatial location of the output channel in the grid in which it is organized and on tunable hyperparameters of the normalization function; determining an output of the artificial neural network depending on a result of each of the distinct normalization functions; and training the hyperparameters of the artificial neural network.
    Type: Application
    Filed: August 3, 2020
    Publication date: March 18, 2021
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
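The grouped, location-dependent normalization in the last entry can be sketched as group normalization followed by a per-channel affine transform, where each channel's (gamma, beta) stands in for its distinct, grid-position-dependent normalization function. All names here are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def grid_normalize(x, num_groups, gamma, beta, eps=1e-8):
    """Normalize grouped channels, then apply a per-channel affine map.

    x: (batch, channels) array; gamma, beta: per-channel parameters,
    one pair per channel position in its group's grid.
    """
    batch, channels = x.shape
    group_size = channels // num_groups
    out = np.empty_like(x)
    for g in range(num_groups):
        sl = slice(g * group_size, (g + 1) * group_size)
        grp = x[:, sl]
        mu, sd = grp.mean(), grp.std() + eps
        # A distinct affine (gamma, beta) per channel position in the grid
        out[:, sl] = gamma[sl] * (grp - mu) / sd + beta[sl]
    return out
```

With unit gamma and zero beta this reduces to plain group normalization: each group's output has zero mean and unit standard deviation.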