Patents by Inventor Max Welling

Max Welling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220076044
Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes the likelihood of the first input image occurring. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output, wherein the output is determined by providing the second input image to the normalizing flow and providing an output of the normalizing flow as output; determining a second density value based on the output and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; adapting the weights according to the natural gradient.
    Type: Application
    Filed: August 16, 2021
    Publication date: March 10, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Priyank Jaini
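As background for how a normalizing flow "predicts a density value", the change-of-variables rule for a one-dimensional affine flow can be sketched as follows. This illustrates density evaluation only, not the natural-gradient training the patent claims; the function names and numbers are illustrative:

```python
import math

def standard_normal_logpdf(z):
    # log-density of a unit Gaussian, the base distribution of the flow
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def affine_flow_log_density(x, mu, sigma):
    # forward pass of the flow: z = (x - mu) / sigma;
    # the log |det Jacobian| of this map is -log(sigma)
    z = (x - mu) / sigma
    return standard_normal_logpdf(z) - math.log(sigma)

# a point near the mode gets a higher density value than a point in the tail
assert affine_flow_log_density(1.0, 1.0, 2.0) > affine_flow_log_density(9.0, 1.0, 2.0)
```

In a convolutional flow the same rule applies layer by layer, with the layer weights entering both the transformed value and the Jacobian term.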
  • Publication number: 20220070822
Abstract: A method of training an artificial neural network (ANN) receives, from a base station, signal information for a radio frequency signal between the base station and a user equipment (UE). The artificial neural network is trained to determine a location of the UE and to map the environment based on the received signal information, in the absence of labeled data.
    Type: Application
    Filed: August 30, 2021
    Publication date: March 3, 2022
    Inventors: Arash BEHBOODI, Farhad GHAZVINIAN ZANJANI, Joseph Binamira SORIAGA, Lorenzo FERRARI, Rana Ali AMJAD, Max WELLING, Taesang YOO
  • Publication number: 20220012549
    Abstract: A computer-implemented method of training an image classifier which uses any combination of labelled and/or unlabelled training images. The image classifier comprises a set of transformations between respective transformation inputs and transformation outputs. An inverse model is defined in which for a deterministic, non-injective transformation of the image classifier, its inverse is approximated by a stochastic inverse transformation. During training, for a given training image, a likelihood contribution for this transformation is determined based on a probability of its transformation inputs being generated by the stochastic inverse transformation given its transformation outputs. This likelihood contribution is used to determine a log-likelihood for the training image to be maximized (and its label, if the training image is labelled), based on which the model parameters are optimized.
    Type: Application
    Filed: June 11, 2021
    Publication date: January 13, 2022
    Inventors: Didrik Nielsen, Emiel Hoogeboom, Kaspar Sakmann, Max Welling, Priyank Jaini
  • Publication number: 20210399924
    Abstract: A method performed by a communication device includes generating an initial channel estimate of a channel for a current time step with a Kalman filter based on a first signal received at the communication device. The method also includes inferring, with a neural network, a residual of the initial channel estimate of the current time step. The method further includes updating the initial channel estimate of the current time step based on the residual.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 23, 2021
    Inventors: Rana Ali AMJAD, Kumar PRATIK, Max WELLING, Arash BEHBOODI, Joseph Binamira SORIAGA
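The abstract above combines a classical Kalman update with a learned residual correction. A rough sketch of that two-step structure in a scalar toy setting (the random-walk model, noise values, and the fixed `residual_model` stand-in are illustrative assumptions, not details from the patent):

```python
def kalman_step(x_prev, p_prev, z, q=0.01, r=0.1):
    # predict step: random-walk channel model with process noise q
    x_pred, p_pred = x_prev, p_prev + q
    # update step: fold in the received observation z with measurement noise r
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # initial channel estimate
    return x_new, (1.0 - k) * p_pred

def refine(x_est, residual_model):
    # residual_model stands in for the trained neural network that
    # infers a residual (correction) for the current time step
    return x_est + residual_model(x_est)

x, p = 0.0, 1.0
x, p = kalman_step(x, p, z=1.0)
refined = refine(x, residual_model=lambda e: 0.05)  # hypothetical fixed residual
```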
  • Publication number: 20210366160
Abstract: A device for, and a computer-implemented method of, digital signal processing. The method includes providing a first set of data, mapping the first set of data to a second set of data, and determining an output of the digital signal processing depending on the second set of data. The second set of data is determined depending on a sum of a finite series of terms. At least one term of the series is determined depending on a result of a convolution of the first set of data with a kernel, and at least one term of the series is determined depending on the first set of data and independent of the kernel.
    Type: Application
    Filed: April 28, 2021
    Publication date: November 25, 2021
    Inventors: Emiel Hoogeboom, Jakub Tomczak, Max Welling, Dan Zhang
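The series structure described above, one kernel-independent term plus kernel-dependent convolution terms, can be sketched for 1-D signals. The circular convolution and the factorial weighting here are illustrative choices, not details taken from the patent:

```python
def circ_conv(x, k):
    # circular convolution of signal x with kernel k
    n = len(x)
    return [sum(k[j] * x[(i - j) % n] for j in range(len(k))) for i in range(n)]

def series_map(x, k, terms=3):
    # y = x + conv(x, k)/1! + conv(conv(x, k), k)/2! + ...
    # the first term (x itself) is independent of the kernel;
    # every later term involves a convolution with k
    y = list(x)
    t = list(x)
    fact = 1.0
    for i in range(1, terms):
        t = circ_conv(t, k)
        fact *= i
        y = [a + b / fact for a, b in zip(y, t)]
    return y

y = series_map([1.0, 0.0, 0.0, 0.0], [0.5])  # single-tap kernel scales by 0.5
```

With a single-tap kernel each convolution just rescales, so the three-term sum gives 1 + 0.5 + 0.25/2 = 1.625 in the first position.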
  • Publication number: 20210350182
    Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
    Type: Application
    Filed: April 16, 2021
    Publication date: November 11, 2021
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
  • Publication number: 20210343343
    Abstract: In one embodiment, an electronic device includes a compute-in-memory (CIM) array that includes a plurality of columns. Each column includes a plurality of CIM cells connected to a corresponding read bitline, a plurality of offset cells configured to provide a programmable offset value for the column, and an analog-to-digital converter (ADC) having the corresponding bitline as a first input and configured to receive the programmable offset value. Each CIM cell is configured to store a corresponding weight.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Edward Harrison Teague, Zhongze Wang, Max Welling
  • Publication number: 20210334623
    Abstract: A method for generating a graph convolutional network includes receiving a graph network comprising nodes connected by edges. A node neighborhood is determined for each of the nodes of the graph network and an edge neighborhood is determined for each of the edges of the graph network. The node neighborhood for each of the nodes and the edge neighborhood for each of the edges are classified based on isomorphism. A mapping of a kernel from an edge neighborhood class representative to each of the edges of the graph network is determined. The graph convolutional network is generated based on the kernel mapping.
    Type: Application
    Filed: April 24, 2021
    Publication date: October 28, 2021
    Inventors: Pim De Haan, Taco Sebastiaan Cohen, Max Welling
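A minimal sketch of the first two steps named in this abstract: computing node neighborhoods, then grouping them into classes. Grouping by neighborhood size is only a coarse stand-in for the isomorphism-based classification, and the helper names are hypothetical:

```python
def node_neighborhoods(edges, nodes):
    # the neighborhood of a node is the set of nodes sharing an edge with it
    nbh = {v: set() for v in nodes}
    for u, v in edges:
        nbh[u].add(v)
        nbh[v].add(u)
    return nbh

def classify_by_degree(nbh):
    # coarse stand-in for classifying neighborhoods by isomorphism:
    # group nodes whose neighborhoods have the same size
    classes = {}
    for v, ns in nbh.items():
        classes.setdefault(len(ns), []).append(v)
    return classes

nbh = node_neighborhoods([(0, 1), (1, 2), (2, 0), (2, 3)], [0, 1, 2, 3])
classes = classify_by_degree(nbh)  # nodes 0 and 1 share a class here
```

In the patented construction a kernel is then mapped from one class representative to every edge in the same class, which is what makes weight sharing across the graph possible.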
  • Patent number: 11150657
Abstract: A lossy data compressor for physical measurement data, comprising a parametrized mapping network that, when applied to a measurement data point x in a space X, produces a point z in a lower-dimensional manifold Z, and configured to provide a point z on manifold Z as output in response to receiving a data point x as input, wherein the manifold Z is a continuous hypersurface that only admits fully continuous paths between any two points on the hypersurface; and the parameters θ of the mapping network are trainable or trained towards an objective that comprises minimizing, on the manifold Z, a distance between a given prior distribution P_Z and a distribution P_Q induced on manifold Z by mapping a given set P_D of physical measurement data from X onto Z using the mapping network, according to a given distance measure.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: October 19, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Marcello Carioni, Giorgio Patrini, Max Welling, Patrick Forré, Tim Genewein
  • Publication number: 20210287093
Abstract: A method for training a neural network. The neural network comprises a first layer which includes a plurality of filters to provide a first layer output comprising a plurality of feature maps. Training includes: receiving, from a preceding layer, a first layer input for the first layer, wherein the first layer input is based on the input signal; determining the first layer output based on the first layer input and a plurality of parameters of the first layer; determining a first layer loss value based on the first layer output, wherein the first layer loss value characterizes a degree of dependency between the feature maps, the first layer loss value being obtained in an unsupervised fashion; and training the neural network. The training includes an adaptation of the parameters of the first layer, the adaptation being based on the first layer loss value.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 16, 2021
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Max Welling, Priyank Jaini
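The abstract only says the unsupervised first-layer loss "characterizes a degree of dependency between the feature maps". One simple stand-in for such a loss is a pairwise covariance penalty over flattened feature maps; this is an illustrative choice, not the patented loss:

```python
def dependency_loss(fmaps):
    # penalize linear dependence (covariance) between each pair of
    # flattened feature maps; zero when maps are uncorrelated
    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    loss = 0.0
    for i in range(len(fmaps)):
        for j in range(i + 1, len(fmaps)):
            loss += cov(fmaps[i], fmaps[j]) ** 2
    return loss

# identical feature maps are maximally dependent -> positive loss;
# a constant map contributes no covariance -> zero loss
high = dependency_loss([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
low = dependency_loss([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
```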
  • Publication number: 20210248504
    Abstract: Certain aspects of the present disclosure provide a method for performing machine learning, comprising: determining a plurality of vertices in a neighborhood associated with a mesh including a target vertex; determining a linear transformation configured to parallel transport signals along all edges in the mesh to the target vertex; applying the linear transformation to the plurality of vertices in the neighborhood to form a combined signal at the target vertex; determining a set of basis filters; linearly combining the basis filters using a set of learned parameters to form a gauge equivariant convolution filter, wherein the gauge equivariant convolution filter is constrained to maintain gauge equivariance; applying the gauge equivariant convolution filter to the combined signal to form an intermediate output; and applying a nonlinearity to the intermediate output to form a convolution output.
    Type: Application
    Filed: February 5, 2021
    Publication date: August 12, 2021
    Inventors: Pim DE HAAN, Maurice WEILER, Taco Sebastiaan COHEN, Max WELLING
  • Publication number: 20210248467
Abstract: Certain aspects of the present disclosure provide a method of performing machine learning, comprising: generating a neural network model; and training the neural network model for a task with a first set of input data, wherein the training uses a total loss function L_total including an equivariance loss component L_equivariance according to L_total = L_task + λ·L_equivariance, with λ > 0.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 12, 2021
    Inventors: Mirgahney Husham Awadelkareem MOHAMED, Gabriele CESA, Taco Sebastiaan COHEN, Max WELLING
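The loss combination in this abstract is a weighted sum of a task loss and an equivariance penalty. A one-line sketch (the weight value is illustrative):

```python
def total_loss(task_loss, equivariance_loss, lam=0.1):
    # lam > 0 weights the equivariance penalty against the task objective
    assert lam > 0
    return task_loss + lam * equivariance_loss

combined = total_loss(1.0, 0.5, lam=0.2)  # close to 1.1
```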
  • Publication number: 20210089955
    Abstract: Certain aspects of the present disclosure provide a method for performing quantum convolution, including: receiving input data at a neural network model, wherein the neural network model comprises at least one quantum convolutional layer; performing quantum convolution on the input data using the at least one quantum convolutional layer; generating an output wave function based on the quantum convolution using the at least one quantum convolution layer; generating a marginal probability distribution based on the output wave function; and generating an inference based on the marginal probability distribution.
    Type: Application
    Filed: September 24, 2020
    Publication date: March 25, 2021
    Inventors: Roberto BONDESAN, Max WELLING
  • Publication number: 20210086753
    Abstract: A device and a method for generating a compressed network from a trained neural network are provided. The method includes: a model generating a compressing map from first training data, the compressing map representing the impact of model components of the model to first output data in response to the first training data; generating a compressed network by compressing the trained neural network in accordance with the compressing map; the trained neural network generating trained network output data in response to second training data; the compressed network generating compressed network output data in response to the second training data; training the model by comparing the trained network output data with the compressed network output data.
    Type: Application
    Filed: August 3, 2020
    Publication date: March 25, 2021
    Inventors: Jorn Peters, Emiel Hoogeboom, Max Welling, Melih Kandemir, Karim Said Mahmoud Barsim
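A minimal sketch of using a compressing map to zero out low-impact model components, as in the abstract above. The impact scores, the keep-ratio policy, and all numbers are illustrative assumptions, not the patented procedure:

```python
def compress(weights, impact, keep_ratio=0.5):
    # the compressing map scores each model component's impact on the
    # output; keep only the highest-impact fraction, zeroing the rest
    k = max(1, int(len(weights) * keep_ratio))
    keep = set(sorted(range(len(weights)), key=lambda i: -impact[i])[:k])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

compressed = compress([0.5, -1.2, 0.1, 0.9], impact=[0.2, 0.9, 0.1, 0.8])
```

The patent then closes the loop: the compressed network's outputs are compared with the trained network's outputs, and that comparison is used to train the model producing the compressing map.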
  • Publication number: 20210081784
    Abstract: Device and method for training an artificial neural network, including providing a neural network layer for an equivariant feature mapping having a plurality of output channels, grouping channels of the output channels into a number of distinct groups, wherein the output channels of each individual distinct group are organized into an individual grid defining a spatial location of each of the output channels of the individual distinct group in the grid for the individual distinct group, providing for each of the output channels of each individual distinct group, a distinct normalization function which is defined depending on the spatial location of the output channel in the grid in that this output channel is organized and depending on tunable hyperparameters for the normalization function, determining an output of the artificial neural network depending on a result of each of the distinct normalization functions, training the hyperparameters of the artificial neural network.
    Type: Application
    Filed: August 3, 2020
    Publication date: March 18, 2021
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
  • Publication number: 20210073650
    Abstract: In one embodiment, a method of simulating an operation of an artificial neural network on a binary neural network processor includes receiving a binary input vector for a layer including a probabilistic binary weight matrix and performing vector-matrix multiplication of the input vector with the probabilistic binary weight matrix, wherein the multiplication results are modified by simulated binary-neural-processing hardware noise, to generate a binary output vector, where the simulation is performed in the forward pass of a training algorithm for a neural network model for the binary-neural-processing hardware.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 11, 2021
    Inventors: Matthias REISSER, Saurabh Kedar PITRE, Xiaochun ZHU, Edward Harris TEAGUE, Zhongze WANG, Max WELLING
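A toy sketch of simulating a binary vector-matrix multiply with injected hardware noise, as described above. Modeling the noise as random output-bit flips is an illustrative assumption; the flip probability and seed are arbitrary:

```python
import random

def noisy_binary_matvec(x, w, flip_prob=0.05, rng=None):
    # x: +/-1 input vector; w: rows of +/-1 weights.
    # each accumulated dot product is sign-binarized, and simulated
    # hardware noise randomly flips the resulting output bits
    rng = rng or random.Random(0)
    out = []
    for row in w:
        acc = sum(xi * wi for xi, wi in zip(x, row))
        bit = 1 if acc >= 0 else -1
        if rng.random() < flip_prob:
            bit = -bit  # simulated binary-neural-processing hardware noise
        out.append(bit)
    return out

y = noisy_binary_matvec([1, -1, 1], [[1, 1, 1], [-1, -1, -1]])
```

In the patent this noisy simulation runs in the forward pass of training, so the learned probabilistic binary weights become robust to the hardware's imperfections.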
  • Publication number: 20210073619
    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 11, 2021
    Inventors: Zhongze WANG, Edward TEAGUE, Max WELLING
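For ±1-valued vectors, the multiply-accumulate that the abstract replaces can be written as a count of XNOR matches followed by a shift, which is why threshold adjustment suffices in the compute-in-memory array. A sketch of that identity (not of the in-memory circuit itself):

```python
def xnor_dot(a, b):
    # a, b are +/-1 vectors; XNOR counts the positions where they agree
    n = len(a)
    matches = sum(1 for ai, bi in zip(a, b) if ai == bi)
    # the dot product equals 2*matches - n, so a popcount plus a fixed
    # shift of the column threshold reproduces the multiply-accumulate
    return 2 * matches - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
assert xnor_dot(a, b) == sum(x * y for x, y in zip(a, b))
```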
  • Publication number: 20210034928
Abstract: Certain aspects provide a method for determining a solution to a combinatorial optimization problem, including: determining a plurality of subgraphs, wherein each subgraph of the plurality of subgraphs corresponds to one of a plurality of combinatorial variables of the problem; determining a combinatorial graph based on the plurality of subgraphs; determining evaluation data comprising a set of vertices in the combinatorial graph and evaluations on the set of vertices; fitting a Gaussian process to the evaluation data; determining an acquisition function for vertices in the combinatorial graph using a predictive mean and a predictive variance from the fitted Gaussian process; optimizing the acquisition function on the combinatorial graph to determine a next vertex to evaluate; evaluating the next vertex; updating the evaluation data with a tuple of the next vertex and its evaluation; and determining a solution to the problem, wherein the solution comprises a vertex of the combinatorial graph.
    Type: Application
    Filed: July 31, 2020
    Publication date: February 4, 2021
    Inventors: Changyong OH, Efstratios GAVVES, Jakub Mikolaj TOMCZAK, Max WELLING
  • Publication number: 20210012226
    Abstract: A system for adapting a base classifier to one or more novel classes. The base classifier classifies an instance into a base class by extracting a feature representation from the instance using a feature extractor and matching it to class representations of the base classes. The base classifier is adapted using training data for the novel classes. Class representations of the novel classes are determined based on feature representations of instances of the novel classes. The class representations of the novel and base classes are then adapted, wherein at least one class representation of a novel class is adapted based on a class representation of a base class and at least one class representation of a base class is adapted based on a class representation of a novel class. The adapted class representations of the base and novel classes are associated with the base classifier.
    Type: Application
    Filed: June 16, 2020
    Publication date: January 14, 2021
    Inventors: Xiahan Shi, Martin Schiegg, Leonard Salewski, Max Welling, Zeynep Akata
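A minimal sketch of the prototype-style setup this abstract builds on: class representations as mean feature vectors, and classification by matching to the nearest representation. The mutual adaptation step between base and novel classes is omitted, and all names and numbers are illustrative:

```python
def class_prototype(features):
    # class representation: the mean of the feature representations
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def classify(feature, prototypes):
    # match the extracted feature to the nearest class representation
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: sq_dist(feature, prototypes[c]))

protos = {"base": class_prototype([[0.0, 0.0], [0.2, 0.0]]),
          "novel": class_prototype([[1.0, 1.0], [1.2, 1.0]])}
label = classify([1.1, 0.9], protos)
```

The patented method goes further by adapting each novel-class representation using the base-class representations and vice versa, so the two sets of classes remain separable after adaptation.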
  • Patent number: 10885467
    Abstract: A method for privatizing an iteratively reweighted least squares (IRLS) solution includes perturbing a first moment of a dataset by adding noise and perturbing a second moment of the dataset by adding noise. The method also includes obtaining the IRLS solution based on the perturbed first moment and the perturbed second moment. The method further includes generating a differentially private output based on the IRLS solution.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: January 5, 2021
    Assignee: Qualcomm Incorporated
    Inventors: Mijung Park, Max Welling
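A toy 1-D version of the moment-perturbation idea in this abstract, using ordinary least squares rather than IRLS and plain Gaussian noise rather than a calibrated privacy mechanism; all values and names are illustrative:

```python
import random

def private_least_squares(xs, ys, noise_scale=0.1, rng=None):
    # perturb the first moment (sum of x*y) and the second moment
    # (sum of x*x) with noise, then solve the 1-D least-squares
    # problem from the noisy moments only
    rng = rng or random.Random(0)
    m1 = sum(x * y for x, y in zip(xs, ys)) + rng.gauss(0.0, noise_scale)
    m2 = sum(x * x for x in xs) + rng.gauss(0.0, noise_scale)
    return m1 / m2  # noisy slope estimate

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # the true slope is 2
slope = private_least_squares(xs, ys, noise_scale=0.1)
```

Because only the perturbed moments enter the solver, the released estimate depends on the data solely through noisy aggregates, which is the mechanism the patent privatizes for the IRLS iterations.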