Patents by Inventor Johannes Ball

Johannes Ball has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12265898
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Grant
    Filed: January 10, 2024
    Date of Patent: April 1, 2025
    Assignee: GOOGLE LLC
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
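A minimal, illustrative sketch of the idea in the abstract above (assumptions mine, not the patented implementation): model weights live in a "reparameterization" space, and the training objective adds an entropy penalty on their quantized latent representation, with a hyperparameter trading off task error against code length.

```python
# Toy rate-error objective for weight compression via reparameterization.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Stand-in for the learned decoding transform from latent space to weights.
decoder = rng.normal(size=(8, 8)) / np.sqrt(8)
phi = rng.normal(size=8)                 # latent ("reparameterized") weights

def task_error(phi, x, y):
    w = decoder @ phi                    # decode latents into actual weights
    return np.mean((x @ w - y) ** 2)

def rate_bits(phi):
    # Code length of round(phi) under a unit-bin Gaussian probability model
    # (learned jointly in the actual approach); this approximates what
    # arithmetic coding would spend to store the weights after training.
    q = np.round(phi)
    p = norm.cdf(q + 0.5) - norm.cdf(q - 0.5)
    return -np.sum(np.log2(np.maximum(p, 1e-12)))

# lmbda is the rate-error trade-off hyperparameter from the abstract.
# (Training would replace round() with a differentiable surrogate, e.g.
# additive uniform noise, so gradients can flow through quantization.)
lmbda = 0.01
x, y = rng.normal(size=(32, 8)), rng.normal(size=32)
print(task_error(phi, x, y) + lmbda * rate_bits(phi))
```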
  • Publication number: 20250045974
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.
    Type: Application
    Filed: October 22, 2024
    Publication date: February 6, 2025
    Inventors: Nicholas Johnston, Johannes Balle
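A minimal sketch of the integer-network idea from the abstract above (my own toy construction, not the patent's architecture): every operation is integer arithmetic, so the probability distribution fed to the entropy coder is bit-identical on any hardware or software platform, which is what makes encoder and decoder agree.

```python
# Integer-only "neural network" layer and a toy integer PMF for entropy coding.
import numpy as np

def int_layer(x, W, b, shift=6):
    # Integer matmul, bias, bit-shift "division", and clipping: no floating
    # point anywhere, hence no platform-dependent rounding.
    y = x.astype(np.int64) @ W.astype(np.int64) + b.astype(np.int64)
    return np.clip(y >> shift, 0, 14)

def integer_pmf(logits, total=1 << 16):
    # Map integer logits to integer symbol frequencies summing to `total`
    # (a toy normalization; a real coder needs every frequency >= 1).
    freq = np.left_shift(1, logits).astype(np.int64)       # 2**logit
    freq = np.maximum(1, (freq * total) // freq.sum())
    freq[np.argmax(freq)] += total - freq.sum()            # absorb rounding slack
    return freq

rng = np.random.default_rng(1)
context = rng.integers(0, 256, size=4)   # integer representations of prior symbols
W = rng.integers(-8, 8, size=(4, 16))    # integer weights
b = rng.integers(0, 1 << 10, size=16)    # integer biases
print(integer_pmf(int_layer(context, W, b)))   # deterministic integer PMF
```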
  • Patent number: 12154304
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.
    Type: Grant
    Filed: November 28, 2023
    Date of Patent: November 26, 2024
    Assignee: Google LLC
    Inventors: Nicholas Johnston, Johannes Balle
  • Publication number: 20240223817
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing video data. In one aspect, a method comprises: receiving a video sequence of frames; generating, using a flow prediction network, an optical flow between two sequential frames, wherein the two sequential frames comprise a first frame and a second frame that is subsequent to the first frame; generating from the optical flow, using a first autoencoder neural network, a predicted optical flow between the first frame and the second frame; and warping a reconstruction of the first frame according to the predicted optical flow and subsequently applying a blurring operation to obtain an initial predicted reconstruction of the second frame.
    Type: Application
    Filed: July 5, 2022
    Publication date: July 4, 2024
    Inventors: George Dan Toderici, Eirikur Thor Agustsson, Fabian Julius Mentzer, David Charles Minnen, Johannes Balle, Nicholas Johnston
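A minimal sketch of the prediction step from the abstract above, using toy numpy stand-ins (nearest-neighbor backward warping and a box blur are my assumptions, not the patent's operators): warp the previous frame's reconstruction by the predicted flow, then blur it to obtain the initial prediction of the next frame.

```python
# Warp-then-blur frame prediction for video compression (toy version).
import numpy as np

def warp(frame, flow):
    # Backward warping with nearest-neighbor sampling: each output pixel (y, x)
    # is taken from frame[y + flow_y, x + flow_x], clamped to the image border.
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 1]).round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

def box_blur(frame, k=3):
    # Simple box filter standing in for the blurring operation, which softens
    # warping artifacts before residuals are coded.
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

prev_recon = np.random.default_rng(2).random((16, 16))
flow = np.ones((16, 16, 2)) * 1.5              # predicted flow (from the autoencoder)
prediction = box_blur(warp(prev_recon, flow))  # initial predicted reconstruction
```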
  • Publication number: 20240220863
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Application
    Filed: January 10, 2024
    Publication date: July 4, 2024
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
  • Publication number: 20240104786
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.
    Type: Application
    Filed: November 28, 2023
    Publication date: March 28, 2024
    Inventors: Nicholas Johnston, Johannes Balle
  • Publication number: 20240078712
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Application
    Filed: April 25, 2023
    Publication date: March 7, 2024
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
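A minimal sketch of the two-level entropy-model idea in the abstract above, with linear maps standing in for the encoder, hyper-encoder, and hyper-decoder networks and a Gaussian entropy model as my assumption: a hyper-latent z is coded under a fixed prior and then decoded into the parameters of the model used to code the data latent y.

```python
# Hyperprior-style compressed size estimate (toy linear stand-ins).
import numpy as np
from scipy.stats import norm

def bits(symbols, scales):
    # Code length of rounded symbols under zero-mean Gaussians with the given
    # scales and unit quantization bins (what an arithmetic coder would spend).
    p = norm.cdf(symbols + 0.5, scale=scales) - norm.cdf(symbols - 0.5, scale=scales)
    return -np.sum(np.log2(np.maximum(p, 1e-12)))

rng = np.random.default_rng(3)
x = rng.normal(size=16)                       # the data
enc = rng.normal(size=(16, 8)) / 4            # "encoder network"
hyper_enc = rng.normal(size=(8, 2)) / 4       # "hyper-encoder network"
hyper_dec = rng.normal(size=(2, 8)) / 4       # "hyper-decoder network"

y = np.round(enc.T @ x)                       # quantized latent representation
z = np.round(hyper_enc.T @ y)                 # quantized latent of the entropy model
scales = np.exp(hyper_dec.T @ z)              # per-element scales for coding y

total_bits = bits(z, 1.0) + bits(y, scales)   # z under a fixed prior, y under the
print(f"compressed size ~ {total_bits:.1f} bits")  # conditional entropy model
```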
  • Patent number: 11907818
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
  • Patent number: 11869221
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventors: Nicholas Johnston, Johannes Balle
  • Publication number: 20230186166
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
  • Patent number: 11670010
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
  • Patent number: 11610124
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness; and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: March 21, 2023
    Assignee: Google LLC
    Inventors: Abhinav Shrivastava, Saurabh Singh, Johannes Balle, Sami Ahmad Abu-El-Haija, Nicholas Johnston, George Dan Toderici
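A minimal sketch of the claim in the abstract above, using my own toy proxies (not the patent's networks or metrics): compress a feature set and check that the compressed set keeps its informativeness, here proxied by nearest-class-mean accuracy before and after compression.

```python
# Compressible features with (approximately) preserved informativeness (toy).
import numpy as np

rng = np.random.default_rng(4)
labels = rng.integers(0, 2, size=200)
features = rng.normal(size=(200, 32)) + labels[:, None]   # "first set of features"

def accuracy(feats, labels):
    # Informativeness proxy: classify each point by distance to its class mean.
    mus = np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((feats[:, None, :] - mus) ** 2).sum(-1), axis=1)
    return (pred == labels).mean()

# "Compression method": project onto the top principal components, then quantize.
centered = features - features.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
compressed = np.round(centered @ vt[:4].T)                 # 32 dims -> 4 quantized dims

print(f"original: {accuracy(features, labels):.2f}, "
      f"compressed: {accuracy(compressed, labels):.2f}")   # similar informativeness
```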
  • Patent number: 11574232
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: February 7, 2023
    Assignee: GOOGLE LLC
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
  • Publication number: 20220138991
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Application
    Filed: January 19, 2022
    Publication date: May 5, 2022
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
  • Patent number: 11257254
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: February 22, 2022
    Assignee: Google LLC
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
  • Publication number: 20210358180
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.
    Type: Application
    Filed: September 18, 2019
    Publication date: November 18, 2021
    Inventors: Nicholas Johnston, Johannes Balle
  • Publication number: 20200364603
    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 19, 2020
    Inventors: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
  • Publication number: 20200311548
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness; and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
    Type: Application
    Filed: October 29, 2019
    Publication date: October 1, 2020
    Inventors: Abhinav Shrivastava, Saurabh Singh, Johannes Balle, Sami Ahmad Abu-El-Haija, Nicholas Johnston, George Dan Toderici
  • Publication number: 20200027247
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 23, 2020
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
  • Patent number: 5487558
    Abstract: The instrument panel in a motor vehicle exhibits an interior, rigid reinforcement panel, a foamed-plastic layer located above it, and an outer skin which covers said layer. A cover of the same construction is integrated into the instrument panel in front of an airbag unit; in the event of a crash, this cover can move away to release an opening through which an airbag can unfold out of its receiving container behind the instrument panel. The reinforcement portion of the cover is a separated-off part of the adjoining reinforcement-panel surface. The process for producing an instrument panel of this type is carried out such that, before or after foaming of the entire reinforcement panel, the opening cover is separated from said panel.
    Type: Grant
    Filed: September 29, 1994
    Date of Patent: January 30, 1996
    Assignee: Mercedes-Benz AG
    Inventors: Johannes Ball, Wolfgang Henseler, Uwe Gerstenberg, Thomas Fischer