Patents by Inventor Alexander Amir Alemi

Alexander Amir Alemi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11681924
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes receiving training data; training a neural network on the training data, wherein the neural network is configured to: receive a network input, convert the network input into a latent representation of the network input, and process the latent representation to generate a network output from the network input, and wherein training the neural network on the training data comprises training the neural network on a variational information bottleneck objective that encourages, for each training input, the latent representation generated for the training input to have low mutual information with the training input while the network output generated for the training input has high mutual information with the target output for the training input.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: June 20, 2023
    Assignee: Google LLC
    Inventor: Alexander Amir Alemi
  • Publication number: 20210103823
    Abstract: (identical to the abstract of patent 11681924 above)
    Type: Application
    Filed: December 18, 2020
    Publication date: April 8, 2021
    Inventor: Alexander Amir Alemi
  • Patent number: 10872296
    Abstract: (identical to the abstract of patent 11681924 above)
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: December 22, 2020
    Assignee: Google LLC
    Inventor: Alexander Amir Alemi
  • Publication number: 20190258937
    Abstract: (identical to the abstract of patent 11681924 above)
    Type: Application
    Filed: May 3, 2019
    Publication date: August 22, 2019
    Inventor: Alexander Amir Alemi
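
The abstracts above describe training a neural network on a variational information bottleneck objective: the latent representation should carry little information about the input (compression) while the output carries much information about the target (prediction). A minimal sketch of such an objective, assuming a Gaussian encoder q(z|x), a standard-normal prior, and a softmax classifier — concrete choices not specified in the abstract:

```python
import numpy as np

def vib_loss(mu, log_var, logits, labels, beta=1e-3):
    """Sketch of a variational information bottleneck objective.

    mu, log_var: parameters of a Gaussian encoder q(z|x), shape (batch, dim)
    logits: classifier outputs for sampled latents z, shape (batch, classes)
    labels: integer class targets, shape (batch,)
    beta: trade-off between the compression and prediction terms
    """
    # Prediction term: cross-entropy, a lower bound on I(Z; Y),
    # encourages the output to be informative about the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    cross_entropy = -log_probs[np.arange(len(labels)), labels].mean()

    # Compression term: KL(q(z|x) || N(0, I)) in closed form,
    # an upper bound on I(X; Z), encourages low mutual information
    # between the latent representation and the input.
    kl = 0.5 * (mu**2 + np.exp(log_var) - log_var - 1.0).sum(axis=1).mean()

    return cross_entropy + beta * kl
```

Minimizing this loss trades off the two terms via beta: larger beta forces the latent representation toward the prior (more compression), smaller beta favors predictive accuracy.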