Patents by Inventor Hado Philip van Hasselt
Hado Philip van Hasselt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11886992
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that is identical to the Q network but with the current values of the parameters of the target Q network being different from the current values of the parameters of the Q network.
Type: Grant
Filed: August 3, 2020
Date of Patent: January 30, 2024
Assignee: DeepMind Technologies Limited
Inventors: Hado Philip van Hasselt, Arthur Clément Guez
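This is the target-network mechanism popularized by Double DQN, which these inventors are known for: the online Q network selects the next action and the slower-moving target network evaluates it. A minimal sketch of such a bootstrap target, assuming q_online and q_target are callables mapping a state to a vector of per-action values (all names here are illustrative, not from the patent):

```python
import numpy as np

def double_q_target(q_online, q_target, reward, next_state, gamma=0.99):
    """One-transition bootstrap target built from two Q networks.

    The target network shares the online network's architecture but holds
    older parameter values. The online network selects the next action and
    the target network evaluates it (the Double DQN decomposition), which
    reduces the overestimation bias of plain Q-learning.
    """
    best_action = int(np.argmax(q_online(next_state)))
    return reward + gamma * q_target(next_state)[best_action]

# Toy usage with fixed value tables standing in for the two networks.
online_vals = np.array([1.0, 2.0])
target_vals = np.array([0.5, 1.5])
y = double_q_target(lambda s: online_vals, lambda s: target_vals,
                    reward=1.0, next_state=None, gamma=0.9)
print(y)  # 1.0 + 0.9 * 1.5 = 2.35
```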
-
Patent number: 11836620
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
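The abstract describes a two-level optimization: an inner update of the policy parameters θ against the return G, and an outer update of the return parameters η obtained by differentiating a meta-objective J′ through that inner update. A toy sketch, assuming the only return parameter is the discount gamma, with a fixed reference return as the meta-objective and finite differences in place of the analytic meta-gradient (both simplifications; every name here is invented for illustration):

```python
def discounted_return(rewards, gamma):
    """Return function G_eta; here the only return parameter eta is gamma."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def inner_update(theta, rewards, gamma, lr=0.5):
    """One inner step moving a scalar value estimate theta toward G_eta
    (the gradient of (theta - G)^2 with respect to theta is 2*(theta - G))."""
    return theta - lr * 2.0 * (theta - discounted_return(rewards, gamma))

def meta_objective(gamma, theta, train_rewards, eval_rewards):
    """J': how well the updated theta matches a fixed reference return,
    i.e. how useful the inner update induced by this eta was."""
    theta_new = inner_update(theta, train_rewards, gamma)
    return (theta_new - discounted_return(eval_rewards, 0.99)) ** 2

def meta_gradient(gamma, theta, train_r, eval_r, eps=1e-4):
    """Finite-difference stand-in for the analytic dJ'/d eta."""
    return (meta_objective(gamma + eps, theta, train_r, eval_r)
            - meta_objective(gamma - eps, theta, train_r, eval_r)) / (2 * eps)

gamma = 0.9
for _ in range(100):  # adapt the return parameter as training proceeds
    gamma -= 0.01 * meta_gradient(gamma, 0.0, [1.0, 0.0, 1.0], [1.0, 0.0, 1.0])
print(gamma)  # converges toward the reference discount of 0.99
```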
-
Patent number: 11769051
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
Type: Grant
Filed: June 24, 2021
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventor: Hado Philip van Hasselt
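The abstract enumerates the steps per training item: update the normalization statistics for the new target, normalize the target, compare it with the network's normalized output, and use the error to adjust the main parameters. A minimal sketch with a scalar stand-in for the network and exponential moving statistics (the class, constants, and update rule are illustrative assumptions, not the patent's exact scheme):

```python
import numpy as np

class TargetNormalizer:
    """Running statistics for normalizing targets (an illustrative scheme)."""
    def __init__(self, beta=0.01):
        self.mean, self.sq_mean, self.beta = 0.0, 1.0, beta

    def update(self, target):
        # Step 1: update the normalization parameters for the new target.
        self.mean += self.beta * (target - self.mean)
        self.sq_mean += self.beta * (target ** 2 - self.sq_mean)

    @property
    def std(self):
        return max(np.sqrt(max(self.sq_mean - self.mean ** 2, 0.0)), 1e-6)

    def normalize(self, target):
        # Step 2: normalized target under the updated parameters.
        return (target - self.mean) / self.std

norm = TargetNormalizer()
w = 0.0                              # stand-in for the main parameters
for target in [100.0, 250.0, 90.0]:  # raw targets spanning a wide range
    norm.update(target)
    normalized_output = w            # Step 3: the network's normalized output
    error = norm.normalize(target) - normalized_output  # Step 4: the error
    w += 0.1 * error                 # Step 5: adjust the main parameters
```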
-
Publication number: 20230144995
Abstract: A reinforcement learning system, method, and computer program code for controlling an agent to perform a plurality of tasks while interacting with an environment. The system learns options, where an option comprises a sequence of primitive actions performed by the agent under control of an option policy neural network. In implementations the system discovers options which are useful for multiple different tasks by meta-learning rewards for training the option policy neural network whilst the agent is interacting with the environment.
Type: Application
Filed: June 7, 2021
Publication date: May 11, 2023
Inventors: Vivek Veeriah Jeya Veeraiah, Tom Ben Zion Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado Philip van Hasselt, David Silver, Satinder Singh Baveja
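The key object here is the option: a whole sequence of primitive actions executed under a single option policy, trained with rewards that are themselves meta-learned so that the resulting options are useful across tasks. A toy sketch of executing one option while accumulating such a meta-learned reward (the environment, the linear reward, and all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def intrinsic_reward(state, eta):
    """Meta-learned reward used to train the option policy; eta stands in
    for the learned reward parameters (here just a weight vector)."""
    return float(eta @ state)

def run_option(env_step, state, option_policy, eta, max_steps=10):
    """Execute one option: a sequence of primitive actions chosen by a
    single option policy, accumulating the meta-learned reward."""
    total = 0.0
    for _ in range(max_steps):
        action = option_policy(state)
        state, done = env_step(state, action)
        total += intrinsic_reward(state, eta)
        if done:
            break
    return state, total

# Toy environment: a random walk on a line that terminates beyond +/-3.
def env_step(state, action):
    new = state + np.array([float(action)])
    return new, abs(float(new[0])) > 3.0

state, r = run_option(env_step, np.zeros(1),
                      option_policy=lambda s: rng.choice([-1, 1]),
                      eta=np.ones(1))
print(state, r)
```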
-
Publication number: 20210319316
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
Type: Application
Filed: June 24, 2021
Publication date: October 14, 2021
Inventor: Hado Philip van Hasselt
-
Patent number: 11062206
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
Type: Grant
Filed: November 11, 2016
Date of Patent: July 13, 2021
Assignee: DeepMind Technologies Limited
Inventor: Hado Philip van Hasselt
-
Publication number: 20210089915
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Application
Filed: December 4, 2020
Publication date: March 25, 2021
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
-
Patent number: 10860926
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Grant
Filed: May 20, 2019
Date of Patent: December 8, 2020
Assignee: DeepMind Technologies Limited
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
-
Publication number: 20200364569
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that is identical to the Q network but with the current values of the parameters of the target Q network being different from the current values of the parameters of the Q network.
Type: Application
Filed: August 3, 2020
Publication date: November 19, 2020
Inventors: Hado Philip van Hasselt, Arthur Clément Guez
-
Publication number: 20200327399
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
Type: Application
Filed: June 25, 2020
Publication date: October 15, 2020
Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
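Together the three networks form a model that can be rolled forward entirely in an internal state space: encode an observation once, then repeatedly predict the next internal state and reward, and cap the rollout with a value prediction. A sketch with random linear maps standing in for the three trained networks (all dimensions and names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # size of the internal (abstract) state representation

# Stand-ins for the three networks in the abstract, as random linear maps.
W_repr = rng.normal(size=(D, 8))        # state representation network
W_model = rng.normal(size=(D + 1, D))   # prediction network: next state + reward
W_value = rng.normal(size=D)            # value prediction network

def represent(observation):
    """Map an observation to an internal state representation."""
    return np.tanh(W_repr @ observation)

def predict(internal_state):
    """Predict the subsequent internal state and its reward."""
    out = W_model @ internal_state
    return np.tanh(out[:-1]), out[-1]

def value(internal_state):
    """Value prediction for an internal state."""
    return float(W_value @ internal_state)

def k_step_estimate(observation, k=3, gamma=0.99):
    """Roll the model forward k steps in the internal state space,
    accumulating predicted rewards, then bootstrap with the value."""
    s = represent(observation)
    total, discount = 0.0, 1.0
    for _ in range(k):
        s, r = predict(s)
        total += discount * r
        discount *= gamma
    return total + discount * value(s)

print(k_step_estimate(rng.normal(size=8)))
```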
-
Patent number: 10733504
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that is identical to the Q network but with the current values of the parameters of the target Q network being different from the current values of the parameters of the Q network.
Type: Grant
Filed: September 9, 2016
Date of Patent: August 4, 2020
Assignee: DeepMind Technologies Limited
Inventors: Hado Philip van Hasselt, Arthur Clément Guez
-
Patent number: 10733501
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
Type: Grant
Filed: May 3, 2019
Date of Patent: August 4, 2020
Assignee: DeepMind Technologies Limited
Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
-
Publication number: 20190354859
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Application
Filed: May 20, 2019
Publication date: November 21, 2019
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
-
Publication number: 20190259051
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
Type: Application
Filed: May 3, 2019
Publication date: August 22, 2019
Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
-
Publication number: 20190244099
Abstract: A method of training an action selection neural network for controlling an agent interacting with an environment to perform different tasks is described. The method includes obtaining a first trajectory of transitions generated while the agent was performing an episode of the first task from multiple tasks; and training the action selection neural network on the first trajectory to adjust the control policies for the multiple tasks. The training includes, for each transition in the first trajectory: generating respective policy outputs for the initial observation in the transition for each task in a subset of tasks that includes the first task and one other task; generating respective target policy outputs for each task using the reward in the transition, and determining an update to the current parameter values based on, for each task, a gradient of a loss between the policy output and the target policy output for the task.
Type: Application
Filed: February 5, 2019
Publication date: August 8, 2019
Inventors: Tom Schaul, Matteo Hessel, Hado Philip van Hasselt, Daniel J. Mankowitz
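Per the abstract, each transition trains not only the task that generated it but also at least one other task, by comparing each task's policy output with a target policy output built from the transition's reward. A toy sketch of that per-transition update, where the target construction is a crude reward-based stand-in and all shapes and names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TASKS, N_ACTIONS, D = 3, 4, 5
theta = rng.normal(scale=0.1, size=(N_TASKS, N_ACTIONS, D))  # per-task heads

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def policy_output(task, obs):
    """Policy output for one task given the transition's initial observation."""
    return softmax(theta[task] @ obs)

def update_on_transition(obs, reward, first_task, lr=0.1):
    """Train on one transition for the first task plus one other task.

    The 'target policy output' here is a crude stand-in: the current
    policy sharpened toward its greedy action when the reward is positive.
    theta is adjusted in place.
    """
    other = (first_task + 1) % N_TASKS
    for task in (first_task, other):
        pi = policy_output(task, obs)
        target = np.eye(N_ACTIONS)[np.argmax(pi)] if reward > 0 else pi
        # Gradient of the cross-entropy between target and pi w.r.t. logits.
        grad_logits = pi - target
        theta[task] -= lr * np.outer(grad_logits, obs)

update_on_transition(rng.normal(size=D), reward=1.0, first_task=0)
```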
-
Publication number: 20170140268
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
Type: Application
Filed: November 11, 2016
Publication date: May 18, 2017
Inventor: Hado Philip van Hasselt
-
Publication number: 20170076201
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that is identical to the Q network but with the current values of the parameters of the target Q network being different from the current values of the parameters of the Q network.
Type: Application
Filed: September 9, 2016
Publication date: March 16, 2017
Inventors: Hado Philip van Hasselt, Arthur Clément Guez