Patents by Inventor Marc Lanctot
Marc Lanctot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240046112
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating control policies for controlling agents in an environment. One of the methods includes, at each of a plurality of iterations: obtaining a current joint control policy for a plurality of agents, the current joint control policy specifying a respective current control policy for each agent; and updating the current joint control policy, comprising, for each agent: generating a respective reward estimate for each of a plurality of alternate control policies that is an estimate of a reward received by the agent if the agent is controlled using the alternate control policy while the other agents are controlled using the respective current control policies; computing a best response for the agent from the respective reward estimates; and updating the respective current control policy for the agent using the best response for the agent.
Type: Application
Filed: February 7, 2022
Publication date: February 8, 2024
Inventors: Luke Christopher Marris, Paul Fernand Michel Muller, Marc Lanctot, Thore Kurt Hartwig Graepel
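The iterative update described in this abstract can be sketched on a two-player matrix game: each agent estimates the reward of each alternate (pure) policy against the others' current policies, computes a best response, and moves its current policy toward it. The payoff matrices, step size, and mixing rule below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def best_response(payoff, opponent_policy):
    # Reward estimate for each alternate pure policy of this agent,
    # holding the opponent fixed at its current policy.
    reward_estimates = payoff @ opponent_policy
    # The best response puts all probability on the highest-reward action.
    br = np.zeros_like(reward_estimates)
    br[np.argmax(reward_estimates)] = 1.0
    return br

def update_joint_policy(payoffs, joint_policy, step=0.1, iters=200):
    # payoffs[i] is agent i's payoff matrix, rows indexing agent i's actions.
    p0, p1 = joint_policy
    for _ in range(iters):
        br0 = best_response(payoffs[0], p1)
        br1 = best_response(payoffs[1], p0)
        # Update each current policy using its best response.
        p0 = (1 - step) * p0 + step * br0
        p1 = (1 - step) * p1 + step * br1
    return p0, p1

# Matching pennies: the unique equilibrium mixes both actions uniformly.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p0, p1 = update_joint_policy([A, -A],
                             (np.array([0.9, 0.1]), np.array([0.2, 0.8])))
```

With a decaying step this update becomes fictitious-play-style averaging; a constant step keeps the policies circling the uniform equilibrium, which is enough to show the shape of the iteration.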
-
Publication number: 20220261635
Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a policy neural network by repeatedly updating the policy neural network at each of a plurality of training iterations. One of the methods includes generating training data for the training iteration by controlling the agent in accordance with an improved policy that selects actions in response to input state representations. A best response computation is performed using (i) a candidate policy generated from respective policy neural networks as of one or more preceding iterations and (ii) a candidate value neural network. The candidate value neural network is configured to generate a value output that is an estimate of a value of the environment being in the state characterized by a state representation to complete a particular task. The policy neural network is updated by training the policy neural network on the training data.
Type: Application
Filed: January 7, 2022
Publication date: August 18, 2022
Inventors: Thomas William Anthony, Thomas Edward Eccles, Andrea Tacchetti, János Kramár, Ian Michael Gemp, Thomas Chalmers Hudson, Nicolas Pierre Mickaël Porcel, Marc Lanctot, Julien Perolat, Richard Everett, Thore Kurt Hartwig Graepel, Yoram Bachrach
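The loop in this abstract — improve the candidate policy using value estimates, then train the policy network on data from the improved policy — can be sketched in miniature for a single state. Everything here is an illustrative assumption: the best-response computation is replaced by a simple value-weighted sharpening of the policy, and the "network" is a bare logit vector.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate value output for each action in one state (illustrative numbers).
values = np.array([1.0, 0.0, -1.0])
policy_logits = np.zeros(3)          # stand-in for the policy neural network

for _ in range(100):                 # training iterations
    # "Improved policy": sharpen the candidate policy toward actions with
    # high estimated value (a stand-in for the best-response computation).
    target = softmax(policy_logits + values)
    # Train the policy network on the improved policy's behavior; here a
    # single cross-entropy gradient step on the logits.
    policy_logits += 0.5 * (target - softmax(policy_logits))

final = softmax(policy_logits)       # policy concentrates on the best action
```

The point of the sketch is the two-phase structure, improvement then distillation, which the patent applies with neural networks over full state representations.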
-
Patent number: 11256990
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
Type: Grant
Filed: May 19, 2017
Date of Patent: February 22, 2022
Assignee: DeepMind Technologies Limited
Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos
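The checkpointing idea described here can be sketched as follows: given a memory budget of stored hidden states, a training policy decides which time steps to store during the forward pass, and backpropagation recomputes the rest from the nearest stored state. The even-spacing policy and the trivial RNN cell below are illustrative assumptions; the patented method derives the policy from the sequence length and memory budget, and smarter schedules than even spacing exist.

```python
import math

def make_policy(num_steps, budget):
    # Training policy: which time steps' hidden states to store, given
    # memory for at most `budget` states (even spacing, for illustration).
    stride = math.ceil(num_steps / budget)
    return {t for t in range(0, num_steps, stride)}

def bptt_with_checkpoints(cell, h0, inputs, budget):
    T = len(inputs)
    store = make_policy(T, budget)
    saved, h = {}, h0
    for t, x in enumerate(inputs):        # forward propagation
        if t in store:
            saved[t] = h                  # store per the training policy
        h = cell(h, x)
    recomputations = 0
    for t in reversed(range(T)):          # backpropagation through time
        start = max(s for s in saved if s <= t)
        h = saved[start]
        for u in range(start, t):         # recompute h at step t from the
            h = cell(h, inputs[u])        # nearest stored state
            recomputations += 1
        # ... the gradient step for time t would use h here ...
    return recomputations

cell = lambda h, x: h + x                 # toy "RNN cell" for illustration
extra_work = bptt_with_checkpoints(cell, 0, list(range(8)), budget=4)
```

Storing fewer states costs more recomputation; with 8 steps and a budget of 4, the even-spacing policy above recomputes 4 extra steps, versus 0 when every state fits in memory.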
-
Patent number: 10572798
Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
Type: Grant
Filed: November 11, 2016
Date of Patent: February 25, 2020
Assignee: DeepMind Technologies Limited
Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
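The combining layer described in this abstract can be sketched in NumPy. The mean-subtraction variant (which makes the value/advantage split identifiable) and the tiny linear subnetworks below are illustrative assumptions, not the patent's specific architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(obs, w_value, w_adv):
    value = obs @ w_value                  # value subnetwork: V(s)
    advantages = obs @ w_adv               # advantage subnetwork: A(s, a)
    # Combining layer: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
    return value + (advantages - advantages.mean())

obs = rng.normal(size=4)                   # observation representation
w_value = rng.normal(size=4)               # value subnetwork (linear here)
w_adv = rng.normal(size=(4, 3))            # advantage subnetwork, 3 actions
q_values = dueling_q(obs, w_value, w_adv)
action = int(np.argmax(q_values))          # select an action via Q values
```

A side effect of subtracting the mean advantage is that the Q values average exactly to the value estimate, so the value subnetwork alone captures how good the state is.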
-
Publication number: 20190188572
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
Type: Application
Filed: May 19, 2017
Publication date: June 20, 2019
Applicant: DeepMind Technologies Limited
Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos
-
Patent number: 10296825
Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
Type: Grant
Filed: May 11, 2018
Date of Patent: May 21, 2019
Assignee: DeepMind Technologies Limited
Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
-
Publication number: 20180260689
Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
Type: Application
Filed: May 11, 2018
Publication date: September 13, 2018
Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
-
Publication number: 20170140266
Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
Type: Application
Filed: November 11, 2016
Publication date: May 18, 2017
Applicant: Google Inc.
Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot