Patents by Inventor Alhussein Fawzi

Alhussein Fawzi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127045
    Abstract: A method performed by one or more computers for obtaining an optimized algorithm that (i) is functionally equivalent to a target algorithm and (ii) optimizes one or more target properties when executed on a target set of one or more hardware devices. The method includes: initializing a target tensor representing the target algorithm; generating, using a neural network having a plurality of network parameters, a tensor decomposition of the target tensor that parametrizes a candidate algorithm; generating target property values for each of the target properties when executing the candidate algorithm on the target set of hardware devices; determining a benchmarking score for the tensor decomposition based on the target property values of the candidate algorithm; generating a training example from the tensor decomposition and the benchmarking score; and storing, in a training data store, the training example for use in updating the network parameters of the neural network.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Thomas Keisuke Hubert, Shih-Chieh Huang, Alexander Novikov, Alhussein Fawzi, Bernardino Romera-Paredes, David Silver, Demis Hassabis, Grzegorz Michal Swirszcz, Julian Schrittwieser, Pushmeet Kohli, Mohammadamin Barekatain, Matej Balog, Francisco Jesus Rodriguez Ruiz
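The abstract above describes representing a target algorithm as a tensor and searching over tensor decompositions, each of which parametrizes a candidate algorithm. A minimal sketch of that correspondence for 2×2 matrix multiplication, where the decomposition's rank (the number of factor triples, i.e. scalar multiplications) stands in for a benchmarking score — the setup and names here are illustrative, not taken from the patent:

```python
import numpy as np

n = 2
# Build the matrix multiplication tensor: T[a, b, c] = 1 iff entry a of A
# times entry b of B contributes to entry c of C = A @ B.
T = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + k, k * n + j, i * n + j] = 1

# The standard algorithm is a rank-8 decomposition of T: one factor
# triple (u, v, w) per scalar multiplication A[i,k] * B[k,j].
factors = []
for i in range(n):
    for j in range(n):
        for k in range(n):
            u = np.zeros(n * n); u[i * n + k] = 1
            v = np.zeros(n * n); v[k * n + j] = 1
            w = np.zeros(n * n); w[i * n + j] = 1
            factors.append((u, v, w))

# Functional equivalence: the factors must reconstruct T exactly.
T_hat = sum(np.einsum('a,b,c->abc', u, v, w) for u, v, w in factors)
assert np.array_equal(T, T_hat)

# Stand-in benchmarking score: the rank, i.e. the multiplication count.
score = len(factors)
```

Any decomposition whose factor triples reconstruct T exactly is functionally equivalent to the target algorithm, so the search reduces to finding decompositions with better scores — Strassen's classical rank-7 decomposition of this same tensor is the well-known example of a lower multiplication count.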
  • Patent number: 11775830
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes processing each training input using the neural network and in accordance with the current values of the network parameters to generate a network output for the training input; computing a respective loss for each of the training inputs by evaluating a loss function; identifying, from a plurality of possible perturbations, a maximally non-linear perturbation; and determining an update to the current values of the parameters of the neural network by performing an iteration of a neural network training procedure to decrease the respective losses for the training inputs and to decrease the non-linearity of the loss function for the identified maximally non-linear perturbation.
    Type: Grant
    Filed: December 12, 2022
    Date of Patent: October 3, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Chongli Qin, Sven Adrian Gowal, Soham De, Robert Stanforth, James Martens, Krishnamurthy Dvijotham, Dilip Krishnan, Alhussein Fawzi
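The training procedure in the abstract above can be caricatured with a toy model: measure the non-linearity of the loss at a perturbation as the gap between the perturbed loss and its first-order Taylor approximation, select the maximally non-linear perturbation from a finite candidate set, and take a descent step on the loss plus that non-linearity. The tanh model, finite-difference gradients, and all names below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # a single training input
y = 1.0                  # its target
w = rng.normal(size=3)   # "network" parameters

def loss(w, x):
    # Toy non-linear network: tanh of a linear map, squared error.
    return (np.tanh(w @ x) - y) ** 2

def grad_x(w, x, eps=1e-5):
    # Central finite-difference gradient of the loss w.r.t. the input.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (loss(w, x + e) - loss(w, x - e)) / (2 * eps)
    return g

def nonlinearity(w, x, delta):
    # Gap between L(x + delta) and its first-order Taylor approximation.
    return abs(loss(w, x + delta) - loss(w, x) - delta @ grad_x(w, x))

# A plurality of candidate perturbations; pick the maximally non-linear one.
candidates = [0.5 * rng.normal(size=3) for _ in range(8)]
delta_star = max(candidates, key=lambda d: nonlinearity(w, x, d))

def objective(w):
    # Decrease the loss AND the non-linearity at the chosen perturbation.
    return loss(w, x) + nonlinearity(w, x, delta_star)

# One finite-difference gradient step on the combined objective.
eps, lr = 1e-5, 1e-3
g = np.zeros_like(w)
for i in range(len(w)):
    e = np.zeros_like(w); e[i] = eps
    g[i] = (objective(w + e) - objective(w - e)) / (2 * eps)
before = objective(w)
w = w - lr * g
after = objective(w)
```

A real implementation would search for the worst-case perturbation rather than enumerate candidates and would use backpropagated gradients, but the structure — pick the perturbation where the loss is least linear, then jointly reduce loss and that non-linearity — follows the abstract's steps.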
  • Publication number: 20230252286
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes processing each training input using the neural network and in accordance with the current values of the network parameters to generate a network output for the training input; computing a respective loss for each of the training inputs by evaluating a loss function; identifying, from a plurality of possible perturbations, a maximally non-linear perturbation; and determining an update to the current values of the parameters of the neural network by performing an iteration of a neural network training procedure to decrease the respective losses for the training inputs and to decrease the non-linearity of the loss function for the identified maximally non-linear perturbation.
    Type: Application
    Filed: December 12, 2022
    Publication date: August 10, 2023
    Inventors: Chongli Qin, Sven Adrian Gowal, Soham De, Robert Stanforth, James Martens, Krishnamurthy Dvijotham, Dilip Krishnan, Alhussein Fawzi
  • Patent number: 11526755
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes processing each training input using the neural network and in accordance with the current values of the network parameters to generate a network output for the training input; computing a respective loss for each of the training inputs by evaluating a loss function; identifying, from a plurality of possible perturbations, a maximally non-linear perturbation; and determining an update to the current values of the parameters of the neural network by performing an iteration of a neural network training procedure to decrease the respective losses for the training inputs and to decrease the non-linearity of the loss function for the identified maximally non-linear perturbation.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 13, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Chongli Qin, Sven Adrian Gowal, Soham De, Robert Stanforth, James Martens, Krishnamurthy Dvijotham, Dilip Krishnan, Alhussein Fawzi