Patents by Inventor Laurent Sifre

Laurent Sifre is named as an inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119261
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of discrete tokens using a diffusion model. In one aspect, a method includes generating, by using the diffusion model, a final latent representation of the output sequence of discrete tokens that includes a determined value for each of a plurality of latent variables; applying a de-embedding matrix to the final latent representation of the output sequence of discrete tokens to generate a de-embedded final latent representation that includes, for each of the plurality of latent variables, a respective numeric score for each discrete token in a vocabulary of multiple discrete tokens; selecting, for each of the plurality of latent variables, a discrete token from among the multiple discrete tokens in the vocabulary that has a highest numeric score; and generating the output sequence of discrete tokens that includes the selected discrete tokens.
    Type: Application
    Filed: September 28, 2023
    Publication date: April 11, 2024
    Inventors: Robin Strudel, Rémi Leblond, Laurent Sifre, Sander Etienne Lea Dieleman, Nikolay Savinov, Will S. Grathwohl, Corentin Tallec, Florent Altché, Iaroslav Ganin, Arthur Mensch, Yilin Du
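The de-embedding and token-selection steps of this abstract can be sketched in a few lines. The diffusion sampling itself is elided; the function below starts from an already-computed final latent representation (one vector per latent variable), applies a de-embedding matrix to get a numeric score per vocabulary token, and selects the argmax token. All names and the tiny matrices are illustrative, not from the patent.

```python
def de_embed(final_latents, de_embedding_matrix, vocab):
    """final_latents: list of latent vectors (length d each);
    de_embedding_matrix: d x |vocab| matrix; vocab: list of token strings."""
    tokens = []
    for z in final_latents:
        # score[t] = sum_i z[i] * W[i][t] -- one numeric score per vocab token
        scores = [sum(z[i] * de_embedding_matrix[i][t] for i in range(len(z)))
                  for t in range(len(vocab))]
        # select the discrete token with the highest numeric score
        best = max(range(len(vocab)), key=lambda t: scores[t])
        tokens.append(vocab[best])
    return tokens

vocab = ["a", "b", "c"]
W = [[1.0, 0.0, 0.0],   # d = 2 latent dimensions, |vocab| = 3
     [0.0, 1.0, 0.5]]
latents = [[0.9, 0.1], [0.1, 0.9]]   # one vector per latent variable
output_tokens = de_embed(latents, W, vocab)
```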
  • Publication number: 20240104353
    Abstract: A computer-implemented method for generating an output token sequence from an input token sequence. The method combines a look ahead tree search, such as a Monte Carlo tree search, with a sequence-to-sequence neural network system. The sequence-to-sequence neural network system has a policy output defining a next token probability distribution, and may include a value neural network providing a value output to evaluate a sequence. An initial partial output sequence is extended using the look ahead tree search guided by the policy output and, in implementations, the value output, of the sequence-to-sequence neural network system until a complete output sequence is obtained.
    Type: Application
    Filed: February 8, 2022
    Publication date: March 28, 2024
    Inventors: Rémi Bertrand Francis Leblond, Jean-Baptiste Alayrac, Laurent Sifre, Miruna Pîslar, Jean-Baptiste Lespiau, Ioannis Antonoglou, Karen Simonyan, David Silver, Oriol Vinyals
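The core idea of this filing, extending a partial token sequence with a look-ahead search guided by a policy and a value output, can be illustrated with a much-simplified sketch. A real implementation uses Monte Carlo tree search with simulations and value backups over learned neural networks; here both the policy and the value are toy stand-ins, and the "search" is a repeated one-step look-ahead.

```python
def toy_policy(seq, vocab):
    # hypothetical stand-in for the policy output: a next-token
    # probability distribution that prefers repeating the last token
    probs = {t: 1.0 for t in vocab}
    if seq:
        probs[seq[-1]] += 2.0
    z = sum(probs.values())
    return {t: p / z for t, p in probs.items()}

def toy_value(seq):
    # hypothetical stand-in for the value output: rewards runs of
    # identical tokens, so it agrees with the toy policy above
    return sum(1 for a, b in zip(seq, seq[1:]) if a == b)

def lookahead_extend(seq, vocab, eos, max_len=5, width=2):
    """Extend `seq` until `eos` or `max_len`: expand the `width` most
    probable next tokens, then keep the child with the best combined
    value + policy score."""
    while len(seq) < max_len and (not seq or seq[-1] != eos):
        probs = toy_policy(seq, vocab)
        top = sorted(vocab, key=lambda t: -probs[t])[:width]
        seq = max((seq + [t] for t in top),
                  key=lambda s: toy_value(s) + probs[s[-1]])
    return seq
```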
  • Publication number: 20230315532
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to perform a machine learning task. In one aspect, a method performed by one or more computer is described. The method includes: obtaining data defining a compute budget that characterizes an amount of computing resources allocated for training a machine learning model to perform a machine learning task; processing the data defining the compute budget using an allocation mapping, in accordance with a set of allocation mapping parameters, to generate an allocation tuple defining: (i) a target model size for the machine learning model, and (ii) a target amount of training data for training the machine learning model; instantiating the machine learning model, where the machine learning model has the target model size; and obtaining the target amount of training data for training the machine learning model.
    Type: Application
    Filed: March 28, 2023
    Publication date: October 5, 2023
    Inventors: Jordan Hoffmann, Sebastian Borgeaud Dit Avocat, Laurent Sifre, Arthur Mensch
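The "allocation mapping" in this abstract takes a compute budget and returns a (target model size, target training data) pair. A minimal sketch under common scaling-law assumptions: compute spent is approximated as C ≈ 6·N·D (N parameters, D training tokens), and the mapping is a power law in C. The exponents of 0.5 and the constants are illustrative, not the patented allocation mapping parameters.

```python
def allocation_mapping(compute_budget_flops, a=0.5, b=0.5):
    """Map a compute budget (FLOPs) to an allocation tuple:
    (target model size in parameters, target amount of training tokens).
    Uses the common approximation C ~ 6 * N * D; with a = b = 0.5 the
    returned pair spends the whole budget."""
    n_params = (compute_budget_flops / 6.0) ** a   # target model size
    n_tokens = (compute_budget_flops / 6.0) ** b   # target training data
    return n_params, n_tokens

n, d = allocation_mapping(6.0e21)   # a 6e21-FLOP budget, for illustration
```

With equal exponents the model size and token count grow at the same rate as the budget increases, which is the qualitative shape reported in the compute-optimal-training literature.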
  • Publication number: 20230177334
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a final output sequence. In one aspect, a method comprises: receiving a current output sequence comprising one or more current output segments; receiving a set of reference segments and a respective reference segment embedding of each reference segment that has been generated using an embedding neural network; for each current output segment: processing the current output segment using the embedding neural network to generate a current output segment embedding of the current output segment; and selecting k most similar reference segments to the current output segment using the reference segment embeddings and the current output segment embedding; and processing the current output sequence and the k most similar reference segments for each current output segment to generate an additional output segment that follows the current output sequence in the final output sequence.
    Type: Application
    Filed: December 7, 2022
    Publication date: June 8, 2023
    Inventors: Sebastian Borgeaud Dit Avocat, Laurent Sifre, Arthur Mensch, Jordan Hoffmann
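The retrieval step this abstract describes, embedding the current output segment and selecting the k most similar reference segments, can be sketched with a toy embedder. The bag-of-letters "embedding" below is a hypothetical stand-in for the embedding neural network; in practice the reference embeddings are precomputed once and searched with an approximate nearest-neighbor index.

```python
def toy_embed(segment):
    # 26-dim letter-count vector; stand-in for a neural embedder
    v = [0.0] * 26
    for ch in segment.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def top_k_segments(query, references, k):
    """Return the k reference segments most similar to `query`."""
    q = toy_embed(query)
    ref_embs = [(r, toy_embed(r)) for r in references]  # precomputed in practice
    ranked = sorted(ref_embs, key=lambda re: -cosine(q, re[1]))
    return [r for r, _ in ranked[:k]]

refs = ["the cat sat", "stock prices fell", "a cat and a dog"]
nearest = top_k_segments("cats are cute", refs, k=2)
```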
  • Publication number: 20210407625
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing protein structure prediction. In one aspect, a method comprises generating a distance map for a given protein, wherein the given protein is defined by a sequence of amino acid residues arranged in a structure, wherein the distance map characterizes estimated distances between the amino acid residues in the structure, comprising: generating a plurality of distance map crops, wherein each distance map crop characterizes estimated distances between (i) amino acid residues in each of one or more respective first positions in the sequence and (ii) amino acid residues in each of one or more respective second positions in the sequence in the structure of the protein, wherein the first positions are a proper subset of the sequence; and generating the distance map for the given protein using the plurality of distance map crops.
    Type: Application
    Filed: September 16, 2019
    Publication date: December 30, 2021
    Inventors: Andrew W. Senior, James Kirkpatrick, Laurent Sifre, Richard Andrew Evans, Hugo Penedones, Chongli Qin, Ruoxi Sun, Karen Simonyan, John Jumper
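The crop-and-assemble scheme in this abstract can be sketched as pasting per-crop distance blocks back into the full L x L distance map. Real systems predict distance distributions for each crop with a neural network and blend overlapping crops; this toy version just writes each block in place, and the coordinates and distances are illustrative.

```python
def assemble_distance_map(length, crops):
    """crops: list of (row_start, col_start, block), where block is a 2-D
    list of estimated distances between the residues at those first and
    second sequence positions."""
    dist = [[0.0] * length for _ in range(length)]
    for r0, c0, block in crops:
        for i, row in enumerate(block):
            for j, d in enumerate(row):
                dist[r0 + i][c0 + j] = d
    return dist

crops = [
    (0, 0, [[0.0, 3.8], [3.8, 0.0]]),  # residues 0-1 vs residues 0-1
    (0, 2, [[9.1], [6.2]]),            # residues 0-1 vs residue 2
    (2, 0, [[9.1, 6.2, 0.0]]),         # residue 2 vs residues 0-2
]
dmap = assemble_distance_map(3, crops)
```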
  • Publication number: 20210313008
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing protein structure prediction and protein domain segmentation. In one aspect, a method comprises generating a plurality of predicted structures of a protein, wherein generating a predicted structure of the protein comprises: updating initial values of a plurality of structure parameters of the protein, comprising, at each of a plurality of update iterations: determining a gradient of a quality score for the current values of the structure parameters with respect to the current values of the structure parameters; and updating the current values of the structure parameters using the gradient.
    Type: Application
    Filed: September 16, 2019
    Publication date: October 7, 2021
    Inventors: Andrew W. Senior, James Kirkpatrick, Laurent Sifre, Richard Andrew Evans, Hugo Penedones, Chongli Qin, Ruoxi Sun, Karen Simonyan, John Jumper
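The inner loop of this abstract, repeatedly computing the gradient of a quality score with respect to the structure parameters and updating them with that gradient, reduces to plain gradient ascent. The quadratic quality score below is a toy stand-in for the learned potential used in practice, and the parameter values are illustrative.

```python
def quality_score(params, target):
    # toy quality score: higher when parameters are closer to `target`
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def quality_grad(params, target):
    # analytic gradient of the score w.r.t. the current parameter values:
    # d/dp of -(p - t)^2 is -2 * (p - t)
    return [-2.0 * (p - t) for p, t in zip(params, target)]

def refine_structure(params, target, lr=0.1, iters=100):
    for _ in range(iters):
        g = quality_grad(params, target)                    # gradient at current values
        params = [p + lr * gi for p, gi in zip(params, g)]  # ascend the quality score
    return params

torsions = refine_structure([0.0, 1.0, -2.0], target=[0.5, 0.5, 0.5])
```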
  • Publication number: 20210304847
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing protein structure prediction. In one aspect, a method comprises, at each of one or more iterations: determining an alternative predicted structure of a given protein defined by alternative values of structure parameters; processing, using a geometry neural network, a network input comprising: (i) a representation of a sequence of amino acid residues in the given protein, and (ii) the alternative values of the structure parameters, to generate an output characterizing an alternative geometry score that is an estimate of a similarity measure between the alternative predicted structure and the actual structure of the given protein.
    Type: Application
    Filed: September 16, 2019
    Publication date: September 30, 2021
    Inventors: Andrew W. Senior, James Kirkpatrick, Laurent Sifre, Richard Andrew Evans, Hugo Penedones, Chongli Qin, Ruoxi Sun, Karen Simonyan, John Jumper
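One way the geometry score in this abstract can be used: propose alternative structures, score each with the geometry network's estimate of similarity to the actual structure, and keep the best. The scorer below is a hypothetical stand-in for the geometry neural network, and the candidate parameter lists are illustrative.

```python
def toy_geometry_score(sequence, structure_params):
    # hypothetical stand-in for the geometry neural network: takes the
    # sequence representation and alternative structure parameters, and
    # here simply prefers small parameter values, scaled by length
    return -sum(p * p for p in structure_params) / max(len(sequence), 1)

def best_alternative(sequence, alternatives):
    """Keep the alternative predicted structure with the highest
    estimated geometry score."""
    return max(alternatives, key=lambda s: toy_geometry_score(sequence, s))

alts = [[1.0, 2.0], [0.2, -0.1], [3.0, 0.0]]
chosen = best_alternative("MKV", alts)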
  • Patent number: 11074481
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 27, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
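The auxiliary-update mechanism in this patent, backpropagating the gradient of a geometry-prediction loss into the action selection policy network's shared parameters, can be sketched with scalars. One shared weight stands in for the whole policy trunk and one weight for the geometry head; both names and numbers are illustrative.

```python
def aux_update(w_shared, w_geo, obs, true_depth, lr=0.01):
    """One geometry-based auxiliary update. `w_shared` stands in for the
    policy network's parameters; `w_geo` for the geometry-prediction head."""
    feature = w_shared * obs        # intermediate output of the policy network
    pred = w_geo * feature          # geometry head predicts e.g. scene depth
    err = pred - true_depth         # auxiliary loss is err ** 2
    # backpropagate d(err^2)/dw through the head AND the shared trunk
    grad_geo = 2.0 * err * feature
    grad_shared = 2.0 * err * w_geo * obs
    return w_shared - lr * grad_shared, w_geo - lr * grad_geo
```

Running the update repeatedly drives the geometry prediction toward the true value while also moving the shared (policy) parameters, which is the point of the auxiliary loss: it shapes the policy network's representation even when the main reward is sparse.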
  • Patent number: 10867242
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: December 15, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
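The three-stage pipeline in this abstract can be sketched structurally: train a supervised policy, initialize a reinforcement-learning policy of the same architecture from the trained weights, then train a value network that scores states. The "networks" below are toy parameter dicts and the training steps are stubs; only the initialization-by-copy structure reflects the abstract.

```python
def train_supervised_policy():
    # stub: pretend these parameters were fit to expert data
    return {"w": [0.3, -0.7], "b": 0.1}

def init_rl_policy_from(sl_params):
    # same architecture, so the trained SL values become the RL
    # policy's initial parameter values (copied, not shared)
    return {k: (list(v) if isinstance(v, list) else v)
            for k, v in sl_params.items()}

def value_scores(params, states):
    # stub value network: one score per state, standing in for the
    # predicted long-term reward of being in that state
    return [sum(w * s for w, s in zip(params["w"], st)) + params["b"]
            for st in states]

sl = train_supervised_policy()
rl = init_rl_policy_from(sl)                       # stage 2: initialization
values = value_scores(rl, [[1.0, 0.0], [0.0, 1.0]])
```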
  • Publication number: 20200151515
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: January 17, 2020
    Publication date: May 14, 2020
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Patent number: 10572776
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: February 25, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Publication number: 20190266449
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 29, 2019
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Publication number: 20180032864
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Application
    Filed: September 29, 2016
    Publication date: February 1, 2018
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
  • Publication number: 20180032863
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Application
    Filed: September 29, 2016
    Publication date: February 1, 2018
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison