Patents by Inventor Max Welling

Max Welling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11616666
    Abstract: A method performed by a communication device includes generating an initial channel estimate of a channel for a current time step with a Kalman filter based on a first signal received at the communication device. The method also includes inferring, with a neural network, a residual of the initial channel estimate of the current time step. The method further includes updating the initial channel estimate of the current time step based on the residual.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: March 28, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Rana Ali Amjad, Kumar Pratik, Max Welling, Arash Behboodi, Joseph Binamira Soriaga
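The entry above combines classical filtering with a learned correction. The sketch below is a minimal illustration of that idea, not the patented implementation: a scalar Kalman filter produces the initial channel estimate for each time step, and a tiny untrained MLP (`residual_net`, a stand-in of my own) infers a residual that updates the estimate.

```python
# Illustrative only: toy scalar channel, untrained stand-in residual network.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.01, 0.1            # channel dynamics, process noise, measurement noise
h_true, h_est, p_est = 1.0, 0.0, 1.0

W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hypothetical two-layer MLP
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def residual_net(features):
    hidden = np.tanh(W1 @ features + b1)
    return (W2 @ hidden + b2)[0]

for t in range(5):
    h_true = a * h_true + rng.normal(scale=np.sqrt(q))
    y = h_true + rng.normal(scale=np.sqrt(r))

    # Kalman predict / update -> initial channel estimate for this time step.
    h_pred, p_pred = a * h_est, a * a * p_est + q
    k_gain = p_pred / (p_pred + r)
    h_init = h_pred + k_gain * (y - h_pred)
    p_est = (1.0 - k_gain) * p_pred

    # Neural network infers a residual, which updates the initial estimate.
    h_est = h_init + residual_net(np.array([h_init, y - h_init]))
    print(f"t={t}: true={h_true:.3f} initial={h_init:.3f} corrected={h_est:.3f}")
```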
  • Publication number: 20230050283
    Abstract: A method for configuring a neural network which is designed to map measured data to one or more output variables. The method includes: one or more transformations of the measured data are specified which, when applied to the measured data, are meant to induce invariant or equivariant behavior in the output variables supplied by the neural network; at least one equation is set up which links the condition that the desired invariance or equivariance holds with the architecture of the neural network; by solving the at least one equation, a feature is obtained that characterizes the desired architecture and/or a distribution of weights of the neural network in at least one location of this architecture; and a neural network is configured such that its architecture and/or its distribution of weights in at least one location of this architecture has all of the features ascertained in this way.
    Type: Application
    Filed: July 20, 2022
    Publication date: February 16, 2023
    Inventors: Elise van der Pol, Frans A. Oliehoek, Herke van Hoof, Max Welling, Michael Herman
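The claim above turns an invariance or equivariance requirement into an equation on the architecture and weight distribution. The sketch below works one hypothetical instance of that idea under my own assumptions: requiring a linear layer to be permutation-equivariant, writing the condition P W = W P as a linear system, and solving it numerically to recover the two-parameter weight-sharing pattern W = a·I + b·ones.

```python
# Illustrative only: solve the equivariance equation for a permutation-equivariant linear layer.
import numpy as np
from itertools import permutations

n = 4
basis = []
for k in range(n * n):
    E = np.zeros((n, n))
    E[k // n, k % n] = 1.0
    basis.append(E)

blocks = []
for perm in permutations(range(n)):
    P = np.eye(n)[list(perm)]
    # Column k holds the flattened image of basis matrix k under W -> P @ W - W @ P.
    blocks.append(np.stack([(P @ E - E @ P).ravel() for E in basis], axis=1))
A = np.vstack(blocks)

# Null space of the constraint system = admissible equivariant weight matrices.
_, sing, vt = np.linalg.svd(A)
null_basis = vt[sing < 1e-10]
print("dimension of equivariant weight space:", len(null_basis))   # 2: spans a*I + b*ones

W_eq = null_basis[0].reshape(n, n)
P = np.eye(n)[[2, 0, 3, 1]]
print("max |P@W - W@P| =", np.abs(P @ W_eq - W_eq @ P).max())       # ~0
```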
  • Publication number: 20230036702
    Abstract: Aspects described herein provide a method of processing data, including: receiving a set of global parameters for a plurality of machine learning models; processing data stored locally on a processing device with the plurality of machine learning models according to the set of global parameters to generate a machine learning model output; receiving, at the processing device, user feedback regarding machine learning model output for the plurality of machine learning models; performing an optimization of the plurality of machine learning models based on the machine learning model output and the user feedback to generate locally updated machine learning model parameters; sending the locally updated machine learning model parameters to a remote processing device; and receiving a set of globally updated machine learning model parameters for the plurality of machine learning models.
    Type: Application
    Filed: December 14, 2020
    Publication date: February 2, 2023
    Inventors: Matthias REISSER, Max WELLING, Efstratios GAVVES, Christos LOUIZOS
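The workflow in this entry is a federated round: local optimization on-device followed by aggregation of locally updated parameters. The sketch below is a generic federated-averaging loop under assumptions of my own (a linear model, synthetic local data, and a `feedback` vector standing in for user feedback), not the claimed system.

```python
# Illustrative only: synthetic devices, linear model, stand-in user feedback.
import numpy as np

rng = np.random.default_rng(1)
dim, n_devices, lr = 3, 4, 0.1
global_w = np.zeros(dim)

def local_update(w, X, y, feedback):
    w = w.copy()
    for _ in range(20):
        pred = X @ w
        # Per-example feedback (0..1) scales the squared-error gradient.
        grad = X.T @ (feedback * (pred - y)) / len(y)
        w -= lr * grad
    return w

for rnd in range(3):
    updates = []
    for _ in range(n_devices):
        X = rng.normal(size=(32, dim))
        y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=32)
        feedback = rng.uniform(0.5, 1.0, size=32)    # stand-in user feedback
        updates.append(local_update(global_w, X, y, feedback))
    global_w = np.mean(updates, axis=0)              # server-side aggregation
    print(f"round {rnd}: global weights = {np.round(global_w, 3)}")
```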
  • Patent number: 11562212
    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: January 24, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Zhongze Wang, Edward Teague, Max Welling
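The key arithmetic behind this entry is that an AND-accumulating column plus a threshold shifted by terms depending only on the weight sum and the activation sum reproduces an XNOR popcount. The sketch below checks that identity numerically; it is my own worked example, not the patented circuit.

```python
# Illustrative arithmetic check, not the claimed compute-in-memory hardware.
import numpy as np

rng = np.random.default_rng(2)
N = 64
w = rng.integers(0, 2, size=N)          # stored binary weights (0/1)
x = rng.integers(0, 2, size=N)          # binary input activations (0/1)

and_sum = np.sum(w & x)                 # what an AND-accumulating column measures
xnor_popcount = np.sum(w == x)          # what XNOR-based binary MAC needs

# Threshold-style adjustment: recover the XNOR popcount from the AND sum using
# only the column weight sum and the input activation sum.
adjusted = 2 * and_sum - np.sum(w) - np.sum(x) + N
assert adjusted == xnor_popcount

# Equivalent signed dot product over {-1, +1} values.
signed_dot = (2 * w - 1) @ (2 * x - 1)
assert signed_dot == 2 * xnor_popcount - N
print("AND sum:", and_sum, "XNOR popcount:", xnor_popcount, "signed dot:", signed_dot)
```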
  • Patent number: 11562208
    Abstract: A method for quantizing a neural network includes modeling noise of parameters of the neural network. The method also includes assigning grid values to each realization of the parameters according to a concrete distribution that depends on a local fixed-point quantization grid and the modeled noise. The method further includes computing a fixed-point value representing parameters of a hard fixed-point quantized neural network.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: January 24, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Christos Louizos, Matthias Reisser, Tijmen Pieter Frederik Blankevoort, Max Welling
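The abstract describes softly assigning noisy parameters to grid values with a concrete distribution before committing to hard fixed-point values. The sketch below is an illustrative version under my own assumptions (a Gumbel-softmax relaxation over an evenly spaced grid and a simple distance-based score), not the patented method.

```python
# Illustrative only: simplified concrete (Gumbel-softmax) assignment to a fixed-point grid.
import numpy as np

rng = np.random.default_rng(3)
grid = np.arange(-4, 4) * 0.25          # local fixed-point quantization grid
weights = rng.normal(scale=0.7, size=5) # continuous parameters
noise_std, temperature = 0.05, 0.3

# Model parameter noise, then score each grid value by its (negative) distance.
noisy = weights + rng.normal(scale=noise_std, size=weights.shape)
logits = -np.abs(noisy[:, None] - grid[None, :]) / noise_std

# Concrete / Gumbel-softmax relaxation over grid assignments.
z = (logits + rng.gumbel(size=logits.shape)) / temperature
relaxed = np.exp(z - z.max(axis=1, keepdims=True))
relaxed /= relaxed.sum(axis=1, keepdims=True)

soft_quantized = relaxed @ grid                      # differentiable surrogate
hard_quantized = grid[np.argmax(relaxed, axis=1)]    # hard fixed-point values
for w, s, h in zip(weights, soft_quantized, hard_quantized):
    print(f"w={w:+.3f}  soft={s:+.3f}  hard={h:+.2f}")
```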
  • Patent number: 11551759
    Abstract: In one embodiment, an electronic device includes a compute-in-memory (CIM) array that includes a plurality of columns. Each column includes a plurality of CIM cells connected to a corresponding read bitline, a plurality of offset cells configured to provide a programmable offset value for the column, and an analog-to-digital converter (ADC) having the corresponding bitline as a first input and configured to receive the programmable offset value. Each CIM cell is configured to store a corresponding weight.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: January 10, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Edward Harrison Teague, Zhongze Wang, Max Welling
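As a rough behavioral reading of this entry, the sketch below simulates one input vector driving a small compute-in-memory array: each column accumulates its cell contributions, offset cells add a programmable per-column value, and a uniform quantizer stands in for the per-column ADC. It is an illustration only, not the claimed circuit.

```python
# Illustrative behavioral model only; no circuit-level detail from the patent.
import numpy as np

rng = np.random.default_rng(4)
rows, cols, adc_bits = 16, 4, 4
weights = rng.integers(0, 2, size=(rows, cols))   # one binary weight per CIM cell
offsets = rng.integers(-4, 5, size=cols)          # programmable per-column offset values
inputs = rng.integers(0, 2, size=rows)            # activations driven onto the rows

def adc(value, bits, full_scale):
    # Uniform quantizer standing in for the per-column ADC.
    levels = 2 ** bits - 1
    return int(np.clip(np.round(value / full_scale * levels), 0, levels))

bitline = inputs @ weights                        # accumulation on each read bitline
combined = bitline + offsets                      # offset cells shift each column
digital = [adc(v, adc_bits, full_scale=rows) for v in combined]
print("bitline sums:", bitline, "with offsets:", combined, "ADC codes:", digital)
```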
  • Publication number: 20220388172
    Abstract: A computer-implemented method of training a machine learnable model for controlling and/or monitoring a computer-controlled system. The machine learnable model is configured to make inferences based on a probability distribution of sensor data of the computer-controlled system. The machine learnable model is configured to account for symmetries in the probability distribution imposed by the system and/or its environment. The training involves sampling multiple samples of the sensor data according to the probability distribution. Initial values are sampled from a source probability distribution invariant to the one or more symmetries. The samples are iteratively evolved according to a kernel function equivariant to the one or more symmetries. The evolution uses an attraction term and a repulsion term that are defined for a selected sample in terms of gradient directions of the probability distribution and of the kernel function for the multiple samples.
    Type: Application
    Filed: May 16, 2022
    Publication date: December 8, 2022
    Inventors: Priyank Jaini, Lars Holdijk, Max Welling
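The attraction and repulsion terms in this entry follow the pattern of Stein variational gradient descent. The sketch below implements that generic update with an RBF kernel and a standard Gaussian target; the patented training additionally uses a symmetry-invariant source distribution and an equivariant kernel, which this illustration omits.

```python
# Illustrative only: plain SVGD update (attraction + repulsion), no equivariant kernel.
import numpy as np

rng = np.random.default_rng(5)

def score(x):                                  # gradient of log p for a standard Gaussian target
    return -x

def rbf_kernel(x):
    diff = x[:, None, :] - x[None, :, :]       # diff[j, i] = x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)
    h2 = np.median(sq) / np.log(len(x) + 1.0)  # median bandwidth heuristic
    k = np.exp(-sq / (2.0 * h2))
    grad_k = -diff / h2 * k[:, :, None]        # gradient w.r.t. x_j of k(x_j, x_i)
    return k, grad_k

particles = rng.normal(loc=4.0, size=(50, 2))  # initial samples from the source distribution
step = 0.2
for _ in range(500):
    k, grad_k = rbf_kernel(particles)
    attraction = k.T @ score(particles) / len(particles)   # kernel-weighted gradient directions
    repulsion = grad_k.sum(axis=0) / len(particles)        # kernel gradient keeps samples apart
    particles += step * (attraction + repulsion)

print("mean (moves toward 0):", np.round(particles.mean(axis=0), 2))
print("std  (roughly 1):     ", np.round(particles.std(axis=0), 2))
```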
  • Publication number: 20220383114
    Abstract: Certain aspects of the present disclosure provide techniques for training and inferencing with machine learning localization models. In one aspect, a method includes training a machine learning model based on input data for performing localization of an object in a target space, including: determining parameters of a neural network configured to map samples in an input space based on the input data to samples in an intrinsic space; and determining parameters of a coupling matrix configured to transport the samples in the intrinsic space to the target space.
    Type: Application
    Filed: May 31, 2022
    Publication date: December 1, 2022
    Inventors: Farhad Ghazvinian Zanjani, Ilia Karmanov, Daniel Hendricus Franciscus Dijkman, Hanno Ackermann, Simone Merlin, Brian Michael Buesker, Ishaque Ashar Kadampot, Fatih Murat Porikli, Max Welling
  • Publication number: 20220376801
    Abstract: A processor-implemented method is presented. The method includes receiving an input sequence comprising a group of channel dynamics observations for a wireless communication channel. Each channel dynamics observation may correspond to a timing of a group of timings. The method also includes determining, via a recurrent neural network (RNN), a residual at each of the group of timings based on the group of channel dynamics observations. The method further includes updating Kalman filter (KF) parameters based on the residual and estimating, via the KF, a channel state based on the updated KF parameters.
    Type: Application
    Filed: May 2, 2022
    Publication date: November 24, 2022
    Inventors: Kumar PRATIK, Arash BEHBOODI, Joseph Binamira SORIAGA, Max WELLING
  • Patent number: 11481649
    Abstract: A system for adapting a base classifier to one or more novel classes. The base classifier classifies an instance into a base class by extracting a feature representation from the instance using a feature extractor and matching it to class representations of the base classes. The base classifier is adapted using training data for the novel classes. Class representations of the novel classes are determined based on feature representations of instances of the novel classes. The class representations of the novel and base classes are then adapted, wherein at least one class representation of a novel class is adapted based on a class representation of a base class and at least one class representation of a base class is adapted based on a class representation of a novel class. The adapted class representations of the base and novel classes are associated with the base classifier.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: October 25, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Xiahan Shi, Martin Schiegg, Leonard Salewski, Max Welling, Zeynep Akata
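One concrete way to read this claim is through class prototypes: mean feature vectors serve as class representations, novel-class prototypes are built from a few labelled instances, and base and novel prototypes are adapted toward each other before classification. The sketch below is that prototype-style reading under my own assumptions (an untrained stand-in feature extractor and a simple attention-weighted mixing step), not the patented procedure.

```python
# Illustrative only: prototype-style class representations with cross-adaptation.
import numpy as np

rng = np.random.default_rng(6)
feat_dim = 8

def feature_extractor(x):
    # Stand-in for the trained feature extractor.
    return np.tanh(x)

base_protos = rng.normal(size=(3, feat_dim))        # representations of the base classes
novel_samples = rng.normal(size=(2, 5, feat_dim))   # 2 novel classes, 5 instances each
novel_protos = feature_extractor(novel_samples).mean(axis=1)

def cross_adapt(targets, sources, alpha=0.3):
    # Each target representation is pulled toward an attention-weighted
    # combination of the source representations.
    attn = targets @ sources.T
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return (1 - alpha) * targets + alpha * attn @ sources

adapted_novel = cross_adapt(novel_protos, base_protos)   # novel adapted using base classes
adapted_base = cross_adapt(base_protos, novel_protos)    # base adapted using novel classes
class_reps = np.vstack([adapted_base, adapted_novel])

query = feature_extractor(rng.normal(size=feat_dim))
print("predicted class:", int(np.argmax(class_reps @ query)))
```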
  • Publication number: 20220309773
    Abstract: Some embodiments are directed to a computer-implemented method of interacting with a physical environment according to a policy. The policy determines multiple action probabilities of respective actions based on an observable state of the physical environment. The policy includes a neural network parameterized by a set of parameters. The neural network determines the action probabilities by determining a final layer input from an observable state and applying a final layer of the neural network to the final layer input. The final layer is applied by applying a linear combination of a set of equivariant base weight matrices to the final layer input. The base weight matrices are equivariant in the sense that, for a set of multiple predefined transformations of the final layer input, each transformation causes a corresponding predefined action permutation of the base weight matrix output for the final layer input.
    Type: Application
    Filed: September 8, 2020
    Publication date: September 29, 2022
    Inventors: Michael HERMAN, Max WELLING, Herke VAN HOOF, Elise VAN DER POL, Daniel WORRALL, Frans Adriaan OLIEHOEK
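The final layer in this entry is a learned linear combination of equivariant base weight matrices. The sketch below works one hypothetical case of my own choosing: a cyclic-shift symmetry of the final-layer input paired with the matching cyclic permutation of the actions, for which the powers of the shift matrix form a valid equivariant basis.

```python
# Illustrative only: cyclic symmetry, circulant base weight matrices.
import numpy as np

rng = np.random.default_rng(7)
n = 5
shift = np.roll(np.eye(n), 1, axis=0)                          # cyclic permutation matrix
bases = [np.linalg.matrix_power(shift, k) for k in range(n)]   # equivariant base weight matrices
coeffs = rng.normal(size=n)                                    # learned combination coefficients
W = sum(c * B for c, B in zip(coeffs, bases))                  # final-layer weight matrix

x = rng.normal(size=n)                                         # final-layer input
logits = W @ x
logits_shifted_input = W @ (shift @ x)

# Equivariance: transforming the input permutes the action logits accordingly.
print(np.allclose(logits_shifted_input, shift @ logits))       # True
```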
  • Publication number: 20220272489
    Abstract: Certain aspects of the present disclosure provide techniques for object positioning using mixture density networks, comprising: receiving radio frequency (RF) signal data collected in a physical space; generating a feature vector encoding the RF signal data by processing the RF signal data using a first neural network; processing the feature vector using a first mixture model to generate a first encoding tensor indicating a set of moving objects in the physical space, a first location tensor indicating a location of each of the moving objects in the physical space, and a first uncertainty tensor indicating uncertainty of the locations of each of the moving objects in the physical space; and outputting at least one location from the first location tensor.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Farhad GHAZVINIAN ZANJANI, Arash BEHBOODI, Daniel Hendricus Franciscus DIJKMAN, Ilia KARMANOV, Simone MERLIN, Max WELLING
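The pipeline in this entry is an encoder followed by a mixture-density head that outputs existence, location, and uncertainty tensors. The sketch below is a generic head of that shape with untrained stand-in weights and a placeholder `rf_encoder`; it illustrates the output structure, not the patented model.

```python
# Illustrative only: generic mixture-density head with stand-in encoder and weights.
import numpy as np

rng = np.random.default_rng(8)
feat_dim, n_components = 16, 3

def rf_encoder(rf_signal):
    # Stand-in for the first neural network that encodes the RF signal data.
    return np.tanh(rf_signal[:feat_dim])

# Hypothetical linear heads of the mixture model.
W_exist = rng.normal(size=(n_components, feat_dim)) * 0.1
W_loc = rng.normal(size=(n_components * 2, feat_dim)) * 0.1
W_unc = rng.normal(size=(n_components * 2, feat_dim)) * 0.1

features = rf_encoder(rng.normal(size=64))
existence = 1.0 / (1.0 + np.exp(-(W_exist @ features)))          # per-object presence ("encoding")
locations = (W_loc @ features).reshape(n_components, 2)          # location tensor (2-D positions)
uncertainty = np.exp(W_unc @ features).reshape(n_components, 2)  # uncertainty tensor (std devs)

best = int(np.argmax(existence))
print("most likely object location:", np.round(locations[best], 3),
      "+/-", np.round(uncertainty[best], 3))
```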
  • Publication number: 20220261618
    Abstract: A computer-implemented method. The method includes: receiving or otherwise obtaining an input graph that comprises nodes and associated multi-dimensional coordinates, and propagating the input graph through a trained graph neural network, the input graph being provided as input to an input section of the trained graph neural network, wherein an output tensor of at least one hidden layer of the trained graph neural network is determined, at least partly, based on a set of node embeddings of a previous layer and based on coordinate embeddings associated with the node embeddings of the previous layer, and wherein an output graph is provided in an output section of the trained graph neural network.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 18, 2022
    Inventors: Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
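The hidden-layer rule in this entry, which mixes node embeddings with coordinate embeddings, matches the pattern of an E(n)-equivariant graph network layer. The sketch below implements such a layer with small untrained MLP stand-ins and checks numerically that rotating the input coordinates rotates the output coordinates while leaving the node embeddings unchanged; it is an illustration, not the claimed network.

```python
# Illustrative only: untrained stand-in MLPs; checks the equivariance property numerically.
import numpy as np

rng = np.random.default_rng(9)
n_nodes, h_dim = 4, 6

def mlp(w, b, z):
    return np.tanh(w @ z + b)

W_e, b_e = rng.normal(size=(h_dim, 2 * h_dim + 1)) * 0.3, np.zeros(h_dim)
W_x, b_x = rng.normal(size=(1, h_dim)) * 0.3, np.zeros(1)
W_h, b_h = rng.normal(size=(h_dim, 2 * h_dim)) * 0.3, np.zeros(h_dim)

def egnn_layer(h, x):
    h_new, x_new = h.copy(), x.copy()
    for i in range(n_nodes):
        agg_m, agg_x = np.zeros(h_dim), np.zeros(3)
        for j in range(n_nodes):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)                        # invariant pairwise distance
            m = mlp(W_e, b_e, np.concatenate([h[i], h[j], [d2]]))  # message from node embeddings
            agg_m += m
            agg_x += (x[i] - x[j]) * mlp(W_x, b_x, m)[0]           # update along relative positions
        h_new[i] = mlp(W_h, b_h, np.concatenate([h[i], agg_m]))
        x_new[i] = x[i] + agg_x / (n_nodes - 1)
    return h_new, x_new

h = rng.normal(size=(n_nodes, h_dim))
x = rng.normal(size=(n_nodes, 3))
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]        # random orthogonal transformation

h1, x1 = egnn_layer(h, x)
h2, x2 = egnn_layer(h, x @ R.T)
print("invariant node embeddings:", np.allclose(h1, h2))        # True
print("equivariant coordinates:  ", np.allclose(x1 @ R.T, x2))  # True
```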
  • Publication number: 20220253741
    Abstract: Certain aspects of the present disclosure provide techniques for performing a probabilistic convolution operation with quantum and non-quantum processing systems.
    Type: Application
    Filed: February 3, 2022
    Publication date: August 11, 2022
    Inventors: Roberto BONDESAN, Max WELLING
  • Publication number: 20220123966
    Abstract: A method performed by an artificial neural network includes determining a conditional probability distribution representing a channel based on a data set of transmit and receive sequences. The method also includes determining a latent representation of the channel based on the conditional probability distribution. The method further includes performing a channel-based function based on the latent representation.
    Type: Application
    Filed: October 18, 2021
    Publication date: April 21, 2022
    Inventors: Arash BEHBOODI, Simeng ZHENG, Joseph Binamira SORIAGA, Max WELLING, Tribhuvanesh OREKONDY
  • Publication number: 20220108154
    Abstract: Certain aspects of the present disclosure provide techniques for processing data in a quantum deformed binary neural network, including: determining an input state for a layer of the quantum deformed binary neural network; computing a mean and variance for one or more observables in the layer; and returning an output activation probability based on the mean and variance for the one or more observables in the layer.
    Type: Application
    Filed: September 30, 2021
    Publication date: April 7, 2022
    Inventors: Roberto BONDESAN, Max WELLING
  • Publication number: 20220108173
    Abstract: Certain aspects of the present disclosure provide techniques for performing operations with probabilistic numeric convolutional neural network, including: defining a Gaussian Process based on a mean and a covariance of input data; applying a linear operator to the Gaussian Process to generate pre-activation data; applying a nonlinear operation to the pre-activation data to form activation data; and applying a pooling operation to the activation data to generate an inference.
    Type: Application
    Filed: September 30, 2021
    Publication date: April 7, 2022
    Inventors: Marc Anton FINZI, Roberto BONDESAN, Max WELLING
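The steps in this entry reduce to pushing a Gaussian through linear and nonlinear operations. The sketch below shows that generic arithmetic under my own choices (a circular 3-tap convolution as the linear operator, a ReLU pushed through by sampling, and average pooling as a second linear operator); it is not the patented network.

```python
# Illustrative only: Gaussian mean/covariance propagated through linear ops and a sampled ReLU.
import numpy as np

rng = np.random.default_rng(10)
n = 8
mu = np.sin(np.linspace(0, np.pi, n))                      # mean of the input signal
Sigma = 0.05 * np.exp(-np.subtract.outer(range(n), range(n)) ** 2 / 4.0)

# Linear operator: a circular convolution with a 3-tap filter, written as a matrix.
kernel = np.array([0.25, 0.5, 0.25])
A = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(kernel):
        A[i, (i + k - 1) % n] = w

pre_mu = A @ mu                                            # pre-activation mean
pre_Sigma = A @ Sigma @ A.T + 1e-9 * np.eye(n)             # pre-activation covariance (+ jitter)

# Nonlinear activation (ReLU) pushed through by Monte Carlo moment matching.
samples = rng.multivariate_normal(pre_mu, pre_Sigma, size=5000)
act = np.maximum(samples, 0.0)
act_mu, act_Sigma = act.mean(axis=0), np.cov(act, rowvar=False)

# Average pooling over pairs of positions is again a linear operator.
P = np.kron(np.eye(n // 2), np.full((1, 2), 0.5))
pool_mu, pool_Sigma = P @ act_mu, P @ act_Sigma @ P.T
print("pooled mean:", np.round(pool_mu, 3))
print("pooled std: ", np.round(np.sqrt(np.diag(pool_Sigma)), 3))
```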
  • Publication number: 20220101074
    Abstract: A computer-implemented method for training a normalizing flow. The normalizing flow is configured to determine a first output signal characterizing a likelihood or a log-likelihood of an input signal. The normalizing flow includes at least one first layer which includes trainable parameters. A layer input to the first layer is based on the input signal and the first output signal is based on a layer output of the first layer. The training includes: determining at least one training input signal; determining a training output signal for each training input signal using the normalizing flow; determining a first loss value which is based on a likelihood or a log-likelihood of the at least one determined training output signal with respect to a predefined probability distribution; determining an approximation of a gradient of the trainable parameters; updating the trainable parameters of the first layer based on the approximation of the gradient.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 31, 2022
    Inventors: Jorn Peters, Thomas Andy Keller, Anna Khoreva, Emiel Hoogeboom, Max Welling, Patrick Forre, Priyank Jaini
  • Publication number: 20220101050
    Abstract: A computer-implemented method of training an image generation model. The image generation model comprises an argmax transformation configured to compute a discrete index feature indicating the index of the feature of a continuous feature vector that has an extreme value. The image generation model is trained using a log-likelihood optimization. This involves obtaining a value of the index feature for the training image, sampling values of the continuous feature vector given the value of the index feature according to a stochastic inverse transformation of the argmax transformation, and determining a likelihood contribution of the argmax transformation for the log-likelihood based on a probability that the stochastic inverse transformation generates the values of the continuous feature vector given the value of the index feature.
    Type: Application
    Filed: August 25, 2021
    Publication date: March 31, 2022
    Inventors: Emiel Hoogeboom, Didrik Nielsen, Max Welling, Patrick Forre, Priyank Jaini, William Harris Beluch
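The stochastic inverse of the argmax transformation has to produce continuous values whose maximum sits at the given index. The sketch below is one thresholding-style way to do that, written as an illustration of the abstract rather than the patented training procedure.

```python
# Illustrative only: thresholding-style stochastic inverse of the argmax transformation.
import numpy as np

rng = np.random.default_rng(11)

def argmax_transform(v):
    return int(np.argmax(v))

def softplus(x):
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def stochastic_inverse(index, dim):
    # Sample unconstrained values, then push every other coordinate strictly
    # below the coordinate at `index`, so that argmax(v) == index holds.
    u = rng.normal(size=dim)
    mask = np.arange(dim) != index
    v = u.copy()
    v[mask] = u[index] - softplus(u[index] - u[mask])
    return v

index = 2
v = stochastic_inverse(index, dim=5)
print("sampled continuous vector:", np.round(v, 3))
print("argmax recovers the index:", argmax_transform(v) == index)   # True
```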
  • Patent number: 11276140
    Abstract: A computer-implemented method for enhancing digital image data, digital video data, or digital audio data, and a computer-implemented method for encoding or decoding this data, in particular for transmission or storage. An element representing a part of said digital data comprises an indication of a position of the element in ordered input data of a plurality of data elements. A plurality of elements is transformed to a representation depending on an invertible linear mapping, wherein the invertible linear mapping maps the input of the plurality of elements to the representation and comprises at least one autoregressive convolution.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: March 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Emiel Hoogeboom, Dan Zhang, Max Welling
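In one dimension, an autoregressive convolution over ordered input data is a lower-triangular linear map, so it is invertible whenever the tap applied to the current position is nonzero, and the inverse can be computed by forward substitution. The sketch below demonstrates that round trip; it is my own illustration, not the patented codec.

```python
# Illustrative only: 1-D causal (autoregressive) convolution and its inverse by forward substitution.
import numpy as np

rng = np.random.default_rng(12)
taps = np.array([1.2, -0.4, 0.3])     # taps[0] multiplies the current element

def ar_conv(x):
    y = np.zeros_like(x)
    for i in range(len(x)):
        for k, w in enumerate(taps):
            if i - k >= 0:
                y[i] += w * x[i - k]  # only current and past positions are used
    return y

def ar_conv_inverse(y):
    x = np.zeros_like(y)
    for i in range(len(y)):           # forward substitution in input order
        acc = sum(taps[k] * x[i - k] for k in range(1, len(taps)) if i - k >= 0)
        x[i] = (y[i] - acc) / taps[0]
    return x

x = rng.normal(size=10)
y = ar_conv(x)
print("round trip ok:", np.allclose(ar_conv_inverse(y), x))     # True
# The Jacobian is triangular, so log|det| = len(x) * log|taps[0]|, which keeps
# exact likelihood evaluation cheap for encoding or decoding.
```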