Patents by Inventor Remi Munos

Remi Munos has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11977983
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: May 7, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
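    The noisy-layer mechanism this abstract describes can be sketched as a linear layer whose effective parameters are the learned values plus learned noise-scale parameters multiplied by freshly sampled noise. Below is a minimal PyTorch sketch; the class name NoisyLinear, the initialisation constants, and the plain Gaussian noise model are illustrative assumptions, not the claimed implementation.

      import torch
      import torch.nn as nn

      class NoisyLinear(nn.Module):
          """Linear layer whose effective parameters are mu + sigma * eps,
          with a fresh noise value eps sampled on every forward pass."""
          def __init__(self, in_features, out_features, sigma_init=0.017):
              super().__init__()
              self.weight_mu = nn.Parameter(
                  torch.empty(out_features, in_features).uniform_(-0.1, 0.1))
              self.weight_sigma = nn.Parameter(
                  torch.full((out_features, in_features), sigma_init))
              self.bias_mu = nn.Parameter(torch.zeros(out_features))
              self.bias_sigma = nn.Parameter(torch.full((out_features,), sigma_init))

          def forward(self, x):
              # a respective noise value is sampled for each layer parameter,
              # giving the "noisy current value" used to process the input
              weight = self.weight_mu + self.weight_sigma * torch.randn_like(self.weight_mu)
              bias = self.bias_mu + self.bias_sigma * torch.randn_like(self.bias_mu)
              return nn.functional.linear(x, weight, bias)

      # Action selection: because the parameter noise itself drives exploration,
      # the greedy action over the noisy network output already explores.
      net = nn.Sequential(NoisyLinear(4, 32), nn.ReLU(), NoisyLinear(32, 2))
      observation = torch.randn(1, 4)
      action = net(observation).argmax(dim=-1)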
  • Publication number: 20240127060
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a plurality of actor computing units and a plurality of learner computing units. The actor computing units generate experience tuple trajectories that are used by the learner computing units to update learner action selection neural network parameters using a reinforcement learning technique. The reinforcement learning technique may be an off-policy actor critic reinforcement learning technique.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 18, 2024
    Inventors: Hubert Josef Soyer, Lasse Espeholt, Karen Simonyan, Yotam Doron, Vlad Firoiu, Volodymyr Mnih, Koray Kavukcuoglu, Remi Munos, Thomas Ward, Timothy James Alexander Harley, Iain Robert Dunning
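    The actor/learner split this abstract describes can be illustrated with actor threads that roll out a possibly stale copy of the policy, record the behaviour probability in each experience tuple, and push trajectories onto a queue that a learner consumes with an importance-corrected update. The toy softmax policy, reward, and truncation constant below are placeholders; the full technique is an off-policy actor-critic with a learned value function, which this sketch omits.

      import queue
      import threading
      import numpy as np

      trajectory_queue = queue.Queue(maxsize=16)
      learner_params = np.zeros(4)   # toy policy parameters, one logit weight per action

      def actor(actor_params, n_steps=5, seed=None):
          """Roll out a (possibly stale) policy copy and emit one
          experience-tuple trajectory, recording behaviour probabilities."""
          rng = np.random.default_rng(seed)
          traj = []
          for _ in range(n_steps):
              obs = rng.normal(size=4)
              logits = actor_params * obs
              probs = np.exp(logits - logits.max()); probs /= probs.sum()
              action = int(rng.choice(4, p=probs))
              reward = float(obs[action])   # toy reward
              traj.append((obs, action, reward, probs[action]))
          trajectory_queue.put(traj)

      def learner_step(lr=0.01, rho_bar=1.0):
          """Consume a trajectory and apply an importance-corrected policy
          update; truncating the ratio at rho_bar bounds the variance that
          comes from the lag between actor and learner parameters."""
          global learner_params
          for obs, action, reward, behaviour_prob in trajectory_queue.get():
              logits = learner_params * obs
              probs = np.exp(logits - logits.max()); probs /= probs.sum()
              rho = min(rho_bar, probs[action] / behaviour_prob)
              score = (np.eye(4)[action] - probs) * obs   # grad log pi for this toy softmax
              learner_params = learner_params + lr * rho * reward * score

      actors = [threading.Thread(target=actor, args=(learner_params.copy(), 5, i))
                for i in range(4)]
      for t in actors: t.start()
      for t in actors: t.join()
      for _ in range(4):
          learner_step()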
  • Patent number: 11868894
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a plurality of actor computing units and a plurality of learner computing units. The actor computing units generate experience tuple trajectories that are used by the learner computing units to update learner action selection neural network parameters using a reinforcement learning technique. The reinforcement learning technique may be an off-policy actor critic reinforcement learning technique.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: January 9, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Hubert Josef Soyer, Lasse Espeholt, Karen Simonyan, Yotam Doron, Vlad Firoiu, Volodymyr Mnih, Koray Kavukcuoglu, Remi Munos, Thomas Ward, Timothy James Alexander Harley, Iain Robert Dunning
  • Patent number: 11727264
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining, from the pseudo-count for the first observation, an exploration reward bonus that incentivizes the agent to explore the environment; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: August 15, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
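    The pseudo-count construction can be sketched from a density model: measure the model's probability of an observation before and after one update, and convert that change into an implied visit count, which then sets the exploration bonus added to the actual reward. The tiny categorical model and the bonus scale beta below are illustrative assumptions; the conversion formula n = rho(1 - rho') / (rho' - rho) is the standard pseudo-count derivation.

      import numpy as np

      class CategoricalDensity:
          """Tiny density model over discrete observation ids. For this exact
          model the derived pseudo-count equals the model's current count for
          the observation, which makes the general formula easy to sanity-check."""
          def __init__(self, n_obs):
              self.counts = np.ones(n_obs)   # Laplace prior

          def prob(self, obs):
              return self.counts[obs] / self.counts.sum()

          def update(self, obs):
              self.counts[obs] += 1

      def pseudo_count(model, obs):
          rho = model.prob(obs)        # density before seeing obs
          model.update(obs)
          rho_next = model.prob(obs)   # "recoding" density after one update
          return rho * (1 - rho_next) / (rho_next - rho)

      model = CategoricalDensity(n_obs=10)
      beta = 0.5
      for obs, actual_reward in [(3, 0.0), (3, 0.0), (7, 1.0)]:
          n_hat = pseudo_count(model, obs)
          bonus = beta / np.sqrt(n_hat + 1e-8)   # exploration reward bonus
          combined = actual_reward + bonus       # combined reward used for training
          print(obs, round(n_hat, 3), round(combined, 3))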
  • Publication number: 20230153617
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a plurality of actor computing units and a plurality of learner computing units. The actor computing units generate experience tuple trajectories that are used by the learner computing units to update learner action selection neural network parameters using a reinforcement learning technique. The reinforcement learning technique may be an off-policy actor critic reinforcement learning technique.
    Type: Application
    Filed: January 4, 2023
    Publication date: May 18, 2023
    Inventors: Hubert Josef Soyer, Lasse Espeholt, Karen Simonyan, Yotam Doron, Vlad Firoiu, Volodymyr Mnih, Koray Kavukcuoglu, Remi Munos, Thomas Ward, Timothy James Alexander Harley, Iain Robert Dunning
  • Publication number: 20230083486
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an environment representation neural network of a reinforcement learning system that controls an agent to perform a given task. In one aspect, the method includes: receiving a current observation input and a future observation input; generating, from the future observation input, a future latent representation of the future state of the environment; processing the current observation input, using the environment representation neural network, to generate a current internal representation of the current state of the environment; generating, from the current internal representation, a predicted future latent representation; evaluating an objective function measuring a difference between the future latent representation and the predicted future latent representation; and determining, based on a determined gradient of the objective function, an update to the current values of the environment representation parameters.
    Type: Application
    Filed: February 8, 2021
    Publication date: March 16, 2023
    Inventors: Zhaohan Guo, Mohammad Gheshlaghi Azar, Bernardo Avila Pires, Florent Altché, Jean-Bastien François Laurent Grill, Bilal Piot, Remi Munos
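    The training step this abstract recites maps directly onto a few lines of PyTorch: embed the future observation into a target latent, produce a current internal representation, predict the future latent from it, and take a gradient step on the mismatch. The GRU-cell representation network, the linear embedding and predictor, and the mean-squared objective below are illustrative stand-ins, not the claimed architecture.

      import torch
      import torch.nn as nn

      obs_dim, repr_dim, latent_dim = 8, 16, 8

      env_repr_net = nn.GRUCell(obs_dim, repr_dim)    # environment representation network
      latent_embed = nn.Linear(obs_dim, latent_dim)   # future observation -> future latent
      predictor = nn.Linear(repr_dim, latent_dim)     # current repr -> predicted future latent

      opt = torch.optim.Adam(
          list(env_repr_net.parameters()) + list(predictor.parameters()), lr=1e-3)

      current_obs = torch.randn(1, obs_dim)
      future_obs = torch.randn(1, obs_dim)
      state = torch.zeros(1, repr_dim)

      future_latent = latent_embed(future_obs).detach()   # target side carries no gradient
      current_repr = env_repr_net(current_obs, state)     # current internal representation
      predicted_latent = predictor(current_repr)

      # objective measuring the difference between the two latents;
      # its gradient updates the environment representation parameters
      loss = nn.functional.mse_loss(predicted_latent, future_latent)
      opt.zero_grad(); loss.backward(); opt.step()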
  • Patent number: 11604997
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: March 14, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Gendron-Bellemare, Mohammad Gheshlaghi Azar, Audrunas Gruslys, Remi Munos
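    One published way to combine a "selected action update" with a "leave-one-out update" is the beta-LOO estimator: the sampled action's probability is moved by the return actually observed in the replayed trajectory, while every action's probability is moved by critic estimates. The sketch below computes that combined gradient with respect to softmax logits; the softmax parameterisation and the beta constant are assumptions, and this is not necessarily the exact form of the patent's claims.

      import numpy as np

      def beta_loo_gradient(probs, q_values, selected_action, observed_return, beta=1.0):
          """Gradient w.r.t. softmax logits combining the selected-action term,
          driven by the observed return, with a leave-one-out term that uses
          critic estimates Q(a) for every action."""
          n = len(probs)
          onehot = np.eye(n)[selected_action]
          # selected-action update: only the sampled action sees the real return,
          # with the critic's estimate subtracted so only the surprise flows through
          selected_term = (beta * (observed_return - q_values[selected_action])
                           * probs[selected_action] * (onehot - probs))
          # leave-one-out update: critic values stand in for the unselected actions
          loo_term = probs * (q_values - probs @ q_values)
          return selected_term + loo_term

      probs = np.array([0.2, 0.5, 0.3])
      q = np.array([0.0, 1.0, 0.5])
      print(beta_loo_gradient(probs, q, selected_action=1, observed_return=2.0))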
  • Patent number: 11593646
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a plurality of actor computing units and a plurality of learner computing units. The actor computing units generate experience tuple trajectories that are used by the learner computing units to update learner action selection neural network parameters using a reinforcement learning technique. The reinforcement learning technique may be an off-policy actor critic reinforcement learning technique.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: February 28, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Hubert Josef Soyer, Lasse Espeholt, Karen Simonyan, Yotam Doron, Vlad Firoiu, Volodymyr Mnih, Koray Kavukcuoglu, Remi Munos, Thomas Ward, Timothy James Alexander Harley, Iain Robert Dunning
  • Patent number: 11256990
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: February 22, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos
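    The store-versus-recompute trade at the heart of this patent can be demonstrated with checkpointed backpropagation through time: keep the hidden state only at segment boundaries and recompute the states inside each segment during the backward pass. The patent determines an optimal storage policy from the memory budget; the sketch below uses the simplest fixed-interval policy via PyTorch's stock checkpointing, with the segment length as an assumed stand-in for that policy.

      import torch
      import torch.nn as nn
      from torch.utils.checkpoint import checkpoint

      rnn_cell = nn.GRUCell(4, 8)
      inputs = [torch.randn(1, 4) for _ in range(100)]   # training sequence
      h = torch.zeros(1, 8, requires_grad=True)
      every = 10   # memory budget: keep ~T/every stored states instead of T

      def run_segment(h, *xs):
          for x in xs:
              h = rnn_cell(x, h)
          return h

      # forward: store forward-propagation information only at segment
      # boundaries; states inside a segment are recomputed during backprop
      for start in range(0, len(inputs), every):
          segment = inputs[start:start + every]
          h = checkpoint(run_segment, h, *segment, use_reentrant=False)

      loss = h.sum()
      loss.backward()   # recomputation happens here, trading compute for memory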
  • Publication number: 20210383225
Abstract: A computer-implemented method of training a neural network. The method comprises processing a first transformed view of a training data item, e.g. an image, with a target neural network to generate a target output, processing a second transformed view of the training data item, e.g. the image, with an online neural network to generate a prediction of the target output, updating parameters of the online neural network to minimize an error between the prediction of the target output and the target output, and updating parameters of the target neural network based on the parameters of the online neural network. The method can effectively train an encoder neural network without using labelled training data items, and without using a contrastive loss, i.e. without needing “negative examples” which comprise transformed views of different data items.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Jean-Bastien François Laurent Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Remi Munos, Michal Valko
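    The online/target bootstrap this abstract describes fits in a short PyTorch sketch: two stochastic views of the same item, an online encoder plus predictor trained to match the target encoder's output, and a target that trails the online network by exponential moving average. Gaussian perturbations stand in for image transformations here, and the network sizes and tau are arbitrary assumptions.

      import torch
      import torch.nn as nn

      def mlp(i, o):
          return nn.Sequential(nn.Linear(i, 64), nn.ReLU(), nn.Linear(64, o))

      online_encoder, target_encoder, predictor = mlp(32, 16), mlp(32, 16), mlp(16, 16)
      target_encoder.load_state_dict(online_encoder.state_dict())
      for p in target_encoder.parameters():
          p.requires_grad_(False)

      opt = torch.optim.Adam(
          list(online_encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

      def normalize(z):
          return z / z.norm(dim=-1, keepdim=True)

      item = torch.randn(8, 32)                       # batch of training data items
      view1 = item + 0.1 * torch.randn_like(item)     # two transformed views of
      view2 = item + 0.1 * torch.randn_like(item)     # the same data item

      target_out = target_encoder(view2).detach()     # target output, no gradient
      prediction = predictor(online_encoder(view1))   # online prediction of it
      loss = (normalize(prediction) - normalize(target_out)).pow(2).sum(-1).mean()
      opt.zero_grad(); loss.backward(); opt.step()

      # target parameters follow the online parameters via a moving average
      tau = 0.99
      with torch.no_grad():
          for pt, po in zip(target_encoder.parameters(), online_encoder.parameters()):
              pt.mul_(tau).add_((1 - tau) * po)

    Note there is no contrastive term: nothing pushes representations of different items apart; the asymmetric predictor and the slowly moving target are what keep the representations from collapsing.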
  • Publication number: 20210150355
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
    Type: Application
    Filed: January 27, 2021
    Publication date: May 20, 2021
    Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
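    The task-selection loop this abstract describes can be illustrated as an adversarial bandit over tasks whose reward is the measured learning progress on each selected batch. The Exp3-style update, the exploration mix-in, and the fake_train_on_batch stand-in below are illustrative assumptions; in the full method the progress signal comes from actually training the model on the selected batch.

      import numpy as np

      rng = np.random.default_rng(0)
      n_tasks = 3
      weights = np.zeros(n_tasks)   # task-selection policy parameters (Exp3-style)
      eta, eps = 0.3, 0.05

      def task_probs():
          p = np.exp(weights - weights.max())
          p /= p.sum()
          return (1 - eps) * p + eps / n_tasks   # mix in uniform exploration

      def fake_train_on_batch(task):
          """Stand-in for one model-training step; returns a learning-progress
          measure, e.g. the loss decrease on the batch. Task 2 yields the most."""
          return rng.normal(loc=[0.1, 0.3, 1.0][task], scale=0.1)

      for step in range(200):
          p = task_probs()
          task = int(rng.choice(n_tasks, p=p))   # select task per current policy
          progress = fake_train_on_batch(task)   # learning progress measure
          weights[task] += eta * progress / p[task]   # importance-weighted update

      print(np.round(task_probs(), 2))   # policy concentrates on high-progress tasks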
  • Publication number: 20210110271
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
    Type: Application
    Filed: June 11, 2018
    Publication date: April 15, 2021
    Inventors: Marc Gendron-Bellemare, Mohammad Gheshlaghi Azar, Audrunas Gruslys, Remi Munos
  • Publication number: 20210065012
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 4, 2021
    Inventors: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
  • Patent number: 10936949
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: March 2, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
  • Publication number: 20210034970
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a plurality of actor computing units and a plurality of learner computing units. The actor computing units generate experience tuple trajectories that are used by the learner computing units to update learner action selection neural network parameters using a reinforcement learning technique. The reinforcement learning technique may be an off-policy actor critic reinforcement learning technique.
    Type: Application
    Filed: February 5, 2019
    Publication date: February 4, 2021
    Inventors: Hubert Josef Soyer, Lasse Espeholt, Karen Simonyan, Yotam Doron, Vlad Firoiu, Volodymyr Mnih, Koray Kavukcuoglu, Remi Munos, Thomas Ward, Timothy James Alexander Harley, Iain Robert Dunning
  • Patent number: 10839293
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: November 17, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
  • Publication number: 20200327405
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining, from the pseudo-count for the first observation, an exploration reward bonus that incentivizes the agent to explore the environment; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
    Type: Application
    Filed: May 18, 2017
    Publication date: October 15, 2020
    Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
  • Publication number: 20190362238
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
    Type: Application
    Filed: June 12, 2019
    Publication date: November 28, 2019
    Inventors: Olivier Pietquin, Jacob Lee Menick, Mohammad Gheshlaghi Azar, Bilal Piot, Volodymyr Mnih, Charles Blundell, Meire Fortunato, Remi Munos
  • Publication number: 20190332938
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
    Type: Application
    Filed: July 10, 2019
    Publication date: October 31, 2019
    Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
  • Publication number: 20190188572
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
    Type: Application
    Filed: May 19, 2017
    Publication date: June 20, 2019
    Applicant: DeepMind Technologies Limited
    Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos