Patents by Inventor Michael Tobias Tschannen

Michael Tobias Tschannen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12225239
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network configured to receive a data item and to process the data item to output a compressed representation of the data item. In one aspect, a method includes, for each training data item: processing the data item using the encoder neural network to generate a latent representation of the training data item; processing the latent representation using a hyper-encoder neural network to determine a conditional entropy model; generating a compressed representation of the training data item; processing the compressed representation using a decoder neural network to generate a reconstruction of the training data item; processing the reconstruction of the training data item using a discriminator neural network to generate a discriminator network output; evaluating a first loss function; and determining an update to the current values of the encoder network parameters. (A hedged code sketch of this training step appears after this listing.)
    Type: Grant
    Filed: August 25, 2023
    Date of Patent: February 11, 2025
    Assignee: Google LLC
    Inventors: George Dan Toderici, Fabian Julius Mentzer, Eirikur Thor Agustsson, Michael Tobias Tschannen
  • Publication number: 20240169629
    Abstract: A first image and textual content associated with the first image are obtained. A second image that depicts the textual content associated with the first image is rendered. The first image and the second image are processed with a machine-learned encoding model to respectively obtain a first image embedding and a second image embedding for an image embedding space including a plurality of image embeddings. The machine-learned encoding model is trained based on a difference between the first image embedding and the second image embedding. (A code sketch of this training signal appears after this listing.)
    Type: Application
    Filed: November 17, 2023
    Publication date: May 23, 2024
    Inventors: Michael Tobias Tschannen, Neil Matthew Tinmouth Houlsby, Basil Mustafa
  • Publication number: 20240169715
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network that is configured to process an input image to generate a network output for the input image. In one aspect, a method comprises, at each of a plurality of training steps: obtaining a plurality of training images for the training step; obtaining, for each of the plurality of training images, a respective target output; and selecting, from a plurality of image patch generation schemes, an image patch generation scheme for the training step, wherein, given an input image, each of the plurality of image patch generation schemes generates a different number of patches of the input image, and wherein each patch comprises a respective subset of the pixels of the input image. (A code sketch of this per-step patch-scheme selection appears after this listing.)
    Type: Application
    Filed: November 22, 2023
    Publication date: May 23, 2024
    Inventors: Lucas Klaus Beyer, Pavel Izmailov, Simon Kornblith, Alexander Kolesnikov, Mathilde Caron, Xiaohua Zhai, Matthias Johannes Lorenz Minderer, Ibrahim Alabdulmohsin, Michael Tobias Tschannen, Filip Pavetic
  • Publication number: 20240107079
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network configured to receive a data item and to process the data item to output a compressed representation of the data item. In one aspect, a method includes, for each training data item: processing the data item using the encoder neural network to generate a latent representation of the training data item; processing the latent representation using a hyper-encoder neural network to determine a conditional entropy model; generating a compressed representation of the training data item; processing the compressed representation using a decoder neural network to generate a reconstruction of the training data item; processing the reconstruction of the training data item using a discriminator neural network to generate a discriminator network output; evaluating a first loss function; and determining an update to the current values of the encoder network parameters.
    Type: Application
    Filed: August 25, 2023
    Publication date: March 28, 2024
    Inventors: George Dan Toderici, Fabian Julius Mentzer, Eirikur Thor Agustsson, Michael Tobias Tschannen
  • Patent number: 11750848
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network configured to receive a data item and to process the data item to output a compressed representation of the data item. In one aspect, a method includes, for each training data item: processing the data item using the encoder neural network to generate a latent representation of the training data item; processing the latent representation using a hyper-encoder neural network to determine a conditional entropy model; generating a compressed representation of the training data item; processing the compressed representation using a decoder neural network to generate a reconstruction of the training data item; processing the reconstruction of the training data item using a discriminator neural network to generate a discriminator network output; evaluating a first loss function; and determining an update to the current values of the encoder network parameters.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: September 5, 2023
    Assignee: Google LLC
    Inventors: George Dan Toderici, Fabian Julius Mentzer, Eirikur Thor Agustsson, Michael Tobias Tschannen
  • Publication number: 20220174328
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network configured to receive a data item and to process the data item to output a compressed representation of the data item. In one aspect, a method includes, for each training data item: processing the data item using the encoder neural network to generate a latent representation of the training data item; processing the latent representation using a hyper-encoder neural network to determine a conditional entropy model; generating a compressed representation of the training data item; processing the compressed representation using a decoder neural network to generate a reconstruction of the training data item; processing the reconstruction of the training data item using a discriminator neural network to generate a discriminator network output; evaluating a first loss function; and determining an update to the current values of the encoder network parameters.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: George Dan Toderici, Fabian Julius Mentzer, Eirikur Thor Agustsson, Michael Tobias Tschannen
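
The abstract shared by patent 12225239, patent 11750848, and publications 20240107079 and 20220174328 describes one training step of a GAN-style learned compression system. Below is a minimal, illustrative PyTorch sketch of that step; the module shapes, loss weights, straight-through rounding as a stand-in for entropy coding, and the Gaussian entropy model are assumptions made to keep the example runnable, not details taken from the patents.

```python
# Illustrative sketch only: shapes, loss weights, and the quantization shortcut
# are assumptions, not details from the patents. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, LATENT = 64, 16  # assumed data-item and latent dimensionalities

encoder = nn.Linear(D, LATENT)                 # data item -> latent representation
hyper_encoder = nn.Linear(LATENT, 2 * LATENT)  # latent -> (mean, log-scale) of a conditional entropy model
decoder = nn.Linear(LATENT, D)                 # compressed representation -> reconstruction
discriminator = nn.Linear(D, 1)                # reconstruction -> realism logit

gen_params = [*encoder.parameters(), *hyper_encoder.parameters(), *decoder.parameters()]
opt_g = torch.optim.Adam(gen_params, lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def training_step(batch):
    # 1) Encode each training data item into a latent representation.
    latent = encoder(batch)
    # 2) Hyper-encoder predicts a conditional entropy model (Gaussian mean / log-scale here).
    mean, log_scale = hyper_encoder(latent).chunk(2, dim=-1)
    # 3) "Compress": straight-through rounding as a stand-in for entropy coding
    #    under the predicted entropy model.
    compressed = latent + (torch.round(latent) - latent).detach()
    # Rate proxy: negative log-likelihood of the quantized latent under the entropy model.
    rate = (0.5 * ((compressed - mean) / log_scale.exp()) ** 2 + log_scale).mean()
    # 4) Decode the compressed representation into a reconstruction.
    recon = decoder(compressed)
    # 5) Discriminator scores the reconstruction.
    d_fake = discriminator(recon)
    # 6) First loss: distortion + rate + adversarial term for the encoder/decoder side.
    g_loss = (F.mse_loss(recon, batch) + 0.01 * rate
              + 0.01 * F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    # 7) Determine an update to the current values of the encoder (and decoder) parameters.
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    # Separate discriminator update on real items vs. (detached) reconstructions.
    d_real, d_fake = discriminator(batch), discriminator(recon.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    return g_loss.item(), d_loss.item()

training_step(torch.randn(8, D))
```

In practice the rate term would come from an actual entropy coder driven by the predicted conditional entropy model; the rounding shortcut above only mimics its gradient behavior during training.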
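Publication 20240169629's abstract describes training an image encoder so that an image and a rendering of its associated text map to nearby points in a single image embedding space. A minimal sketch, assuming a toy text renderer and a simple squared-distance objective (a batched contrastive loss would be more typical):

```python
# Illustrative sketch only: the text renderer, encoder, and loss are toy stand-ins,
# not the architecture or objective from the publication. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 32  # assumed image resolution

def render_text_as_image(text: str) -> torch.Tensor:
    # Stand-in renderer: in practice the textual content would be drawn onto an
    # image canvas with an image library; here characters are hashed into pixel
    # intensities so the example stays self-contained.
    pix = torch.tensor([ord(c) % 256 for c in text.ljust(H * W)[: H * W]], dtype=torch.float32)
    return pix.view(1, H, W) / 255.0

encoder = nn.Sequential(nn.Flatten(), nn.Linear(H * W, 128))  # shared image encoding model
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def training_step(first_image: torch.Tensor, text: str):
    # Render a second image that depicts the textual content associated with the first image.
    second_image = render_text_as_image(text)
    # Encode both images with the same machine-learned encoding model, so both
    # embeddings live in one image embedding space.
    emb_1 = F.normalize(encoder(first_image.unsqueeze(0)), dim=-1)
    emb_2 = F.normalize(encoder(second_image.unsqueeze(0)), dim=-1)
    # Train based on the difference between the two embeddings.
    loss = (emb_1 - emb_2).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

training_step(torch.rand(1, H, W), "a photo of a dog")
```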
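Publication 20240169715's abstract describes selecting, at each training step, one of several patch generation schemes that split the input image into different numbers of patches. A minimal sketch, assuming non-overlapping square patches, a per-scheme linear patch embedding, and a mean-pooled classification head (all illustrative choices, not taken from the application):

```python
# Illustrative sketch only: patch sizes, per-scheme projections, and the pooled
# classification head are assumptions made for a runnable example. Requires PyTorch.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 32
PATCH_SIZES = [4, 8, 16]   # each scheme yields a different number of patches
DIM, NUM_CLASSES = 64, 10

# One patch-embedding projection per image patch generation scheme.
embed = nn.ModuleDict({str(p): nn.Linear(p * p * 3, DIM) for p in PATCH_SIZES})
head = nn.Linear(DIM, NUM_CLASSES)
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-4)

def patches(images: torch.Tensor, p: int) -> torch.Tensor:
    # Split each image into non-overlapping p x p patches; every patch is a
    # subset of the input image's pixels.
    b = images.shape[0]
    return (images.unfold(2, p, p).unfold(3, p, p)
                  .reshape(b, 3, -1, p, p)
                  .permute(0, 2, 1, 3, 4)
                  .reshape(b, -1, 3 * p * p))

def training_step(images: torch.Tensor, targets: torch.Tensor):
    # Select an image patch generation scheme for this training step.
    p = random.choice(PATCH_SIZES)
    tokens = embed[str(p)](patches(images, p))      # (batch, num_patches, DIM)
    logits = head(tokens.mean(dim=1))               # simple pooled "network output"
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return p, loss.item()

training_step(torch.rand(8, 3, IMG, IMG), torch.randint(0, NUM_CLASSES, (8,)))
```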