Patents by Inventor Marc Gendron-Bellemare
Marc Gendron-Bellemare has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240370707
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Application
Filed: June 26, 2024
Publication date: November 7, 2024
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
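The action-selection step described in this abstract can be illustrated with a short sketch. The code below assumes a categorical distribution over a fixed grid of possible returns (the "atoms") and uses the mean as the measure of central tendency; the support range, the use of NumPy, and the function names are illustrative choices, not details taken from the patent.

```python
import numpy as np

# Hypothetical discrete support of possible Q returns (the "atoms").
atoms = np.linspace(-10.0, 10.0, 51)

def select_action(probs_per_action: np.ndarray) -> int:
    """probs_per_action: shape (num_actions, num_atoms), rows sum to 1.

    For each action, compute a measure of central tendency (here: the mean)
    of its return distribution, then pick the action with the largest value.
    """
    q_values = probs_per_action @ atoms   # expected return per action
    return int(np.argmax(q_values))

# Usage: two actions, each with a distribution over the 51 atoms.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(len(atoms)), size=2)
action = select_action(probs)
```

In practice the per-action distributions would come from a trained network head rather than random draws; the key point is that actions are ranked by a statistic of a full return distribution rather than by a single scalar Q estimate.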
-
Patent number: 12056593
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Grant
Filed: November 16, 2020
Date of Patent: August 6, 2024
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
-
Patent number: 11727264
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining an exploration reward bonus that incentivizes the agent to explore the environment from the pseudo-count for the first observation; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
Type: Grant
Filed: May 18, 2017
Date of Patent: August 15, 2023
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
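A minimal sketch of the combined-reward step may help. Here an ordinary visit count stands in for the patent's pseudo-count (which is derived from a density model rather than exact counting), and a 1/sqrt(N) bonus is assumed as the exploration incentive; the constant BETA and the helper names are hypothetical.

```python
import math
from collections import defaultdict

# Visit counts as a stand-in for pseudo-counts; keyed by observation.
counts = defaultdict(int)
BETA = 0.05  # bonus scale; an assumed hyperparameter

def combined_reward(observation_key, actual_reward: float) -> float:
    counts[observation_key] += 1
    n = counts[observation_key]              # (pseudo-)count for the observation
    exploration_bonus = BETA / math.sqrt(n)  # shrinks as the state becomes familiar
    return actual_reward + exploration_bonus
```

The combined reward then replaces the raw reward in whatever update rule is used to adjust the network's parameters, so rarely visited states are worth more to the agent early in training.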
-
Publication number: 20230102544
Abstract: Approaches are described for training an action selection neural network system for use in controlling an agent interacting with an environment to perform a task, using a contrastive loss function based on a policy similarity metric. In one aspect, a method includes: obtaining a first observation of a first training environment; obtaining a plurality of second observations of a second training environment; for each second observation, determining a respective policy similarity metric between the second observation and the first observation; processing the first observation and the second observations using the representation neural network to generate a first representation of the first training observation and a respective second representation of each second training observation; and training the representation neural network on a contrastive loss function computed using the policy similarity metrics and the first and second representations.
Type: Application
Filed: September 28, 2021
Publication date: March 30, 2023
Inventors: Rishabh Agarwal, Marlos Cholodovskis Machado, Pablo Samuel Castro Rivadeneira, Marc Gendron-Bellemare
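A hedged sketch of how such a contrastive loss might be computed from the quantities named in the abstract follows. Treating the second observation with the highest policy similarity as the positive pair is a simplifying assumption made for brevity; the patented formulation may weight pairs differently, and all names below are illustrative.

```python
import torch
import torch.nn.functional as F

def psm_contrastive_loss(first_repr, second_reprs, psm, temperature=0.1):
    """first_repr: representation of the first observation, shape [d].
    second_reprs: representations of the second observations, shape [n, d].
    psm: policy similarity metric values, shape [n].
    """
    sims = F.cosine_similarity(first_repr.unsqueeze(0), second_reprs, dim=1)
    logits = sims / temperature
    # Assumption: the most policy-similar pair serves as the positive example.
    target = torch.argmax(psm)
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
```

The effect is to pull together representations of observations where the policies behave similarly and push apart the rest, which is the role the abstract assigns to the policy similarity metric.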
-
Patent number: 11604997
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
Type: Grant
Filed: June 11, 2018
Date of Patent: March 14, 2023
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, Mohammad Gheshlaghi Azar, Audrunas Gruslys, Remi Munos
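The leave-one-out idea can be sketched briefly: the selected action is credited with its observed return, while every other action falls back on a critic's estimate, so the update touches all actions without requiring extra samples. The surrogate loss below and its variable names are illustrative assumptions, not the patented update rule.

```python
import torch

def loo_policy_update(logits, q_estimates, taken_action, observed_return):
    """logits: policy logits over actions; q_estimates: critic values per action."""
    probs = torch.softmax(logits, dim=-1)
    # Substitute the observed return for the taken action's critic estimate;
    # every other action keeps its estimated value (the "leave-one-out" part).
    values = q_estimates.clone()
    values[taken_action] = observed_return
    # Policy-gradient surrogate: expected value under the current policy.
    return -(probs * values.detach()).sum()

# Usage: backpropagate through the surrogate loss.
logits = torch.zeros(4, requires_grad=True)
loss = loo_policy_update(logits, torch.tensor([0.1, 0.4, 0.2, 0.0]), 1, 0.9)
loss.backward()
```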
-
Patent number: 11429898
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating reinforcement learning policies. One of the methods includes receiving a plurality of training histories for a reinforcement learning agent; determining a total reward for each training observation in the training histories; partitioning the training observations into a plurality of partitions; determining, for each partition and from the partitioned training observations, a probability that the reinforcement learning agent will receive the total reward for the partition if the reinforcement learning agent performs the action for the partition in response to receiving the current observation; determining, from the probabilities and for each total reward, a respective estimated value of performing each action in response to receiving the current observation; and selecting an action from the pre-determined set of actions from the estimated values in accordance with an action selection policy.
Type: Grant
Filed: October 14, 2019
Date of Patent: August 30, 2022
Assignee: DeepMind Technologies Limited
Inventors: Joel William Veness, Marc Gendron-Bellemare
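A compact sketch of the partition-and-estimate procedure follows. Partitioning the training observations by action alone is a simplifying assumption made to keep the example short; the abstract leaves the partitioning scheme more general, and the helper names are hypothetical.

```python
from collections import defaultdict

def estimate_action_values(histories):
    """histories: iterable of (action, total_reward) pairs from training histories."""
    partitions = defaultdict(list)
    for action, total_reward in histories:
        partitions[action].append(total_reward)
    values = {}
    for action, rewards in partitions.items():
        n = len(rewards)
        # Empirical probability of each reward times the reward = expected value.
        values[action] = sum(r / n for r in rewards)
    return values

# Greedy action selection from the estimated values.
values = estimate_action_values([(0, 1.0), (0, 0.0), (1, 0.5), (1, 0.5)])
best_action = max(values, key=values.get)
```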
-
Publication number: 20210150355
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
Type: Application
Filed: January 27, 2021
Publication date: May 20, 2021
Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
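One way to picture the task selection loop is as a bandit over tasks whose reward is learning progress. In the sketch below, progress is the drop in training loss after a batch, smoothed with an exponential moving average and turned into selection probabilities with a softmax; these specific choices, and the class name, are assumptions rather than the claimed policy.

```python
import math
import random

class TaskSelector:
    def __init__(self, num_tasks, alpha=0.1, temperature=1.0):
        self.progress = [0.0] * num_tasks  # smoothed learning progress per task
        self.alpha = alpha
        self.temperature = temperature

    def select(self):
        # Softmax over progress estimates: favor tasks the model is learning from.
        weights = [math.exp(p / self.temperature) for p in self.progress]
        return random.choices(range(len(weights)), weights=weights)[0]

    def update(self, task, loss_before, loss_after):
        gain = loss_before - loss_after  # learning progress on this batch
        self.progress[task] += self.alpha * (gain - self.progress[task])
```

Each training step would then call `select()`, draw a batch from that task, train, and feed the before/after losses back through `update()`.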
-
Publication number: 20210124352
Abstract: The technology relates to navigating aerial vehicles using deep reinforcement learning techniques to generate flight policies. An operational system for controlling flight of an aerial vehicle may include a computing system configured to process an input vector representing a state of the aerial vehicle and output an action, an operation-ready policies server configured to store a trained neural network encoding a learned flight policy, and a controller configured to control the aerial vehicle. The input vector may be processed using the trained neural network encoding the learned flight policy.
Type: Application
Filed: October 29, 2019
Publication date: April 29, 2021
Applicant: LOON LLC
Inventors: Salvatore J. Candido, Jun Gong, Marc Gendron-Bellemare
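The inference path described here is short enough to sketch. The three-action set (ascend / descend / hold, as for an altitude-controlled balloon) and the function names below are assumptions for illustration; the abstract does not specify the action space.

```python
import numpy as np

# Assumed action set for an altitude-controlled aerial vehicle.
ACTIONS = ("ascend", "descend", "hold")

def control_step(policy_network, state_vector: np.ndarray) -> str:
    """Process the state input vector with the trained policy network and
    return the action to hand to the flight controller."""
    logits = policy_network(state_vector)   # trained network from the policies server
    return ACTIONS[int(np.argmax(logits))]
```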
-
Publication number: 20210123741
Abstract: The technology relates to navigating aerial vehicles using deep reinforcement learning techniques to generate flight policies. A computing system may include a simulator configured to produce simulations of a flight of the aerial vehicle in a region of an atmosphere, a replay buffer configured to store frames of the simulations, and a learning module having a deep reinforcement learning architecture configured to, by a reinforcement learning algorithm, process an input of a set of frames, and output a neural network encoding a learned flight policy. A meta-learning system may include stacks of learning systems, a coordinator configured to provide an instruction to the learning systems that includes a parameter and a start time, and an evaluation server configured to evaluate resulting rewards from learned flight policies generated by the learning systems.
Type: Application
Filed: October 29, 2019
Publication date: April 29, 2021
Applicant: LOON LLC
Inventors: Salvatore J. Candido, Jun Gong, Marc Gendron-Bellemare, Marlos Cholodovskis Machado
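Of the components listed, the replay buffer is the simplest to sketch. The capacity and uniform sampling below are illustrative choices; the abstract does not specify how simulation frames are stored or sampled.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores frames produced by the flight simulator; the learning module
    samples sets of frames from it for training."""

    def __init__(self, capacity: int = 100_000):
        self.frames = deque(maxlen=capacity)  # oldest frames evicted first

    def add(self, frame):
        self.frames.append(frame)

    def sample(self, batch_size: int):
        return random.sample(self.frames, batch_size)
```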
-
Publication number: 20210110271
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
Type: Application
Filed: June 11, 2018
Publication date: April 15, 2021
Inventors: Marc Gendron-Bellemare, Mohammad Gheshlaghi Azar, Audrunas Gruslys, Remi Munos
-
Publication number: 20210064970
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Application
Filed: November 16, 2020
Publication date: March 4, 2021
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
-
Patent number: 10936949
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
Type: Grant
Filed: July 10, 2019
Date of Patent: March 2, 2021
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
-
Patent number: 10860920
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Grant
Filed: July 10, 2019
Date of Patent: December 8, 2020
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
-
Publication number: 20200327405
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining an exploration reward bonus that incentivizes the agent to explore the environment from the pseudo-count for the first observation; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
Type: Application
Filed: May 18, 2017
Publication date: October 15, 2020
Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
-
Publication number: 20190332938
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
Type: Application
Filed: July 10, 2019
Publication date: October 31, 2019
Inventors: Marc Gendron-Bellemare, Jacob Lee Menick, Alexander Benjamin Graves, Koray Kavukcuoglu, Remi Munos
-
Publication number: 20190332923
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Application
Filed: July 10, 2019
Publication date: October 31, 2019
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
-
Patent number: 10445653
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating reinforcement learning policies. One of the methods includes receiving a plurality of training histories for a reinforcement learning agent; determining a total reward for each training observation in the training histories; partitioning the training observations into a plurality of partitions; determining, for each partition and from the partitioned training observations, a probability that the reinforcement learning agent will receive the total reward for the partition if the reinforcement learning agent performs the action for the partition in response to receiving the current observation; determining, from the probabilities and for each total reward, a respective estimated value of performing each action in response to receiving the current observation; and selecting an action from the pre-determined set of actions from the estimated values in accordance with an action selection policy.
Type: Grant
Filed: August 7, 2015
Date of Patent: October 15, 2019
Assignee: DeepMind Technologies Limited
Inventors: Joel William Veness, Marc Gendron-Bellemare