Patents by Inventor Hado Philip van Hasselt

Hado Philip van Hasselt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11886992
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that has the same architecture as the Q network but whose current parameter values may differ from the current values of the parameters of the Q network.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 30, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
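    A minimal sketch of the double Q-learning target this abstract describes, using tabular arrays as illustrative stand-ins for the two networks (the names q_online, q_target, and train_on_tuple are assumptions, not the patented implementation): the trained network selects the greedy next action, and the target network evaluates it.
    ```python
    import numpy as np

    n_states, n_actions, gamma = 5, 3, 0.99
    q_online = np.zeros((n_states, n_actions))  # Q network being trained
    q_target = np.zeros((n_states, n_actions))  # same-architecture target copy

    def train_on_tuple(state, action, reward, next_state, lr=0.1):
        """One update on an experience tuple (s, a, r, s'): the online
        network selects the greedy next action, the target network
        evaluates it."""
        best = int(np.argmax(q_online[next_state]))            # selection
        target = reward + gamma * q_target[next_state, best]   # evaluation
        q_online[state, action] += lr * (target - q_online[state, action])

    train_on_tuple(0, 1, 1.0, 2)
    q_target[:] = q_online  # periodically sync the target network's parameters
    ```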
  • Patent number: 11836620
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: December 5, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
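    A toy, self-contained illustration of the meta-gradient idea in this abstract, assuming scalar states and a linear value function v(s) = θ·s (the one-step setup, the held-out reference return g_ref, and all names are illustrative assumptions): an inner TD update moves θ toward the η-parameterised return G(η), and the meta-gradient dJ/dη is taken through that update by the chain rule.
    ```python
    # Inner update: TD step toward the eta-parameterised return G(eta);
    # meta update: chain rule dJ/deta = (dJ/dtheta') * (dtheta'/deta).
    alpha, beta = 0.1, 0.01        # inner and meta learning rates
    theta, eta = 0.5, 0.9          # value parameter theta, return parameter eta
    s1, r, s2 = 1.0, 1.0, 0.8      # one training transition
    s_val, g_ref = 1.2, 2.0        # held-out state and reference return

    for _ in range(100):
        G = r + eta * theta * s2                           # return depends on eta
        theta_new = theta + alpha * (G - theta * s1) * s1  # inner update of theta
        dJ_dtheta = -(g_ref - theta_new * s_val) * s_val   # outer objective gradient
        dtheta_deta = alpha * theta * s2 * s1              # inner update's eta-sensitivity
        eta -= beta * dJ_dtheta * dtheta_deta              # meta-gradient step on eta
        theta = theta_new
    ```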
  • Patent number: 11769051
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: September 26, 2023
    Assignee: DeepMind Technologies Limited
    Inventor: Hado Philip van Hasselt
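    The five steps of this abstract map onto a short training loop. The sketch below uses running first and second moments as the normalization parameters and a one-weight linear model as an illustrative stand-in for the neural network (the data and all names are assumptions):
    ```python
    import math

    mu, nu, rate = 0.0, 1.0, 0.05   # normalization parameters and their update rate
    w, lr = 0.0, 0.1                # one-weight "network" f(x) = w*x and its step size

    for x, y in [(1.0, 10.0), (2.0, 22.0), (3.0, 29.0)] * 50:
        mu = (1 - rate) * mu + rate * y            # 1. update normalization parameters
        nu = (1 - rate) * nu + rate * y * y
        sigma = math.sqrt(max(nu - mu * mu, 1e-8))
        y_norm = (y - mu) / sigma                  # 2. normalized target output
        out = w * x                                # 3. normalized output of the network
        err = out - y_norm                         # 4. error in normalized space
        w -= lr * err * x                          # 5. adjust the main parameters
    ```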
  • Publication number: 20230144995
    Abstract: A reinforcement learning system, method, and computer program code for controlling an agent to perform a plurality of tasks while interacting with an environment. The system learns options, where an option comprises a sequence of primitive actions performed by the agent under control of an option policy neural network. In implementations the system discovers options which are useful for multiple different tasks by meta-learning rewards for training the option policy neural network whilst the agent is interacting with the environment.
    Type: Application
    Filed: June 7, 2021
    Publication date: May 11, 2023
    Inventors: Vivek Veeriah Jeya Veeraiah, Tom Ben Zion Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado Philip van Hasselt, David Silver, Satinder Singh Baveja
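    A minimal sketch of what executing one such option looks like, assuming an option policy that emits primitive actions and a learned termination condition (env, option_policy, and should_terminate are illustrative stand-ins, not the patented networks):
    ```python
    def run_option(env, state, option_policy, should_terminate, max_steps=20):
        """Execute one option: primitive actions from the option policy until
        the option's termination condition fires (or a step cap is hit)."""
        trajectory = []
        for _ in range(max_steps):
            action = option_policy(state)        # primitive action
            state_next, reward = env.step(action)
            trajectory.append((state, action, reward))
            state = state_next
            if should_terminate(state):          # learned termination condition
                break
        return trajectory, state

    class ToyEnv:
        """Illustrative two-state environment."""
        def step(self, action):
            return action, float(action == 0)

    traj, end = run_option(ToyEnv(), 0,
                           option_policy=lambda s: (s + 1) % 2,
                           should_terminate=lambda s: s == 0)
    ```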
  • Publication number: 20210319316
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
    Type: Application
    Filed: June 24, 2021
    Publication date: October 14, 2021
    Inventor: Hado Philip van Hasselt
  • Patent number: 11062206
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: July 13, 2021
    Assignee: DeepMind Technologies Limited
    Inventor: Hado Philip van Hasselt
  • Publication number: 20210089915
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Application
    Filed: December 4, 2020
    Publication date: March 25, 2021
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
  • Patent number: 10860926
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 8, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
  • Publication number: 20200364569
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that has the same architecture as the Q network but whose current parameter values may differ from the current values of the parameters of the Q network.
    Type: Application
    Filed: August 3, 2020
    Publication date: November 19, 2020
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
  • Publication number: 20200327399
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Application
    Filed: June 25, 2020
    Publication date: October 15, 2020
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
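    The three networks in this abstract compose into an imagined rollout in internal-state space. A minimal sketch, with small linear maps as illustrative placeholders for the representation, prediction, and value prediction networks (all names and sizes are assumptions):
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_repr = rng.normal(size=(4, 4))          # state representation network
    W_state = rng.normal(size=(4, 4)) * 0.1   # prediction network: next internal state
    w_reward = rng.normal(size=4)             # prediction network: reward
    w_value = rng.normal(size=4)              # value prediction network

    def rollout_value(observation, steps=3, gamma=0.99):
        """Accumulate predicted rewards plus a bootstrapped value over an
        imagined rollout in internal-state space."""
        s = np.tanh(W_repr @ observation)                # internal state representation
        total, discount = 0.0, 1.0
        for _ in range(steps):
            reward = float(w_reward @ s)                 # predicted reward
            s = np.tanh(W_state @ s)                     # predicted next internal state
            total += discount * reward
            discount *= gamma
        return total + discount * float(w_value @ s)     # value prediction

    print(rollout_value(np.ones(4)))
    ```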
  • Patent number: 10733504
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that has the same architecture as the Q network but whose current parameter values may differ from the current values of the parameters of the Q network.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: August 4, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
  • Patent number: 10733501
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: August 4, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
  • Publication number: 20190354859
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 21, 2019
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
  • Publication number: 20190259051
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 22, 2019
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
  • Publication number: 20190244099
    Abstract: A method of training an action selection neural network for controlling an agent interacting with an environment to perform different tasks is described. The method includes obtaining a first trajectory of transitions generated while the agent was performing an episode of the first task from multiple tasks; and training the action selection neural network on the first trajectory to adjust the control policies for the multiple tasks. The training includes, for each transition in the first trajectory: generating respective policy outputs for the initial observation in the transition for each task in a subset of tasks that includes the first task and one other task; generating respective target policy outputs for each task using the reward in the transition, and determining an update to the current parameter values based on, for each task, a gradient of a loss between the policy output and the target policy output for the task.
    Type: Application
    Filed: February 5, 2019
    Publication date: August 8, 2019
    Inventors: Tom Schaul, Matteo Hessel, Hado Philip van Hasselt, Daniel J. Mankowitz
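    A loose sketch of the per-transition, per-task update structure this abstract describes, with a task-conditioned linear head and a squared loss as illustrative simplifications; the reward-filled target below is only a placeholder for the abstract's reward-derived target policy outputs, and all names are assumptions:
    ```python
    import numpy as np

    n_tasks, obs_dim, n_actions = 3, 4, 2
    W = np.zeros((n_tasks, n_actions, obs_dim))  # one policy head per task

    def update(obs, reward, tasks, lr=0.05):
        """One transition, updated for a subset of tasks (the episode's
        own task plus at least one other)."""
        for task in tasks:
            out = W[task] @ obs                    # policy output for this task
            target = np.full(n_actions, reward)    # toy reward-derived target
            grad = np.outer(out - target, obs)     # grad of 0.5*||out - target||^2
            W[task] -= lr * grad                   # apply the per-task gradient

    update(np.ones(obs_dim), reward=1.0, tasks=[0, 2])
    ```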
  • Publication number: 20170140268
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
    Type: Application
    Filed: November 11, 2016
    Publication date: May 18, 2017
    Inventor: Hado Philip van Hasselt
  • Publication number: 20170076201
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that has the same architecture as the Q network but whose current parameter values may differ from the current values of the parameters of the Q network.
    Type: Application
    Filed: September 9, 2016
    Publication date: March 16, 2017
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez