Patents by Inventor Marc Lanctot

Marc Lanctot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046112
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating control policies for controlling agents in an environment. One of the methods includes, at each of a plurality of iterations: obtaining a current joint control policy for a plurality of agents, the joint policy specifying a respective current control policy for each agent; and updating the joint policy by, for each agent: generating a respective reward estimate for each of a plurality of alternate control policies, that is, an estimate of the reward the agent would receive if it were controlled using the alternate policy while the other agents were controlled using their respective current policies; computing a best response for the agent from the reward estimates; and updating the agent's current control policy using that best response. (A minimal illustrative sketch of this update follows the entry.)
    Type: Application
    Filed: February 7, 2022
    Publication date: February 8, 2024
    Inventors: Luke Christopher Marris, Paul Fernand Michel Muller, Marc Lanctot, Thore Kurt Hartwig Graepel
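
The update loop in the abstract above resembles iterative best-response dynamics over a game. The sketch below is purely illustrative, not the patented method: it assumes a two-player matrix game, NumPy, and invented names (payoffs, joint), and takes a small step toward each agent's best response in the style of fictitious play.

```python
# Illustrative sketch only: iterative best-response updates to a joint policy
# in a random two-player, three-action matrix game. All names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
payoffs = rng.uniform(-1.0, 1.0, size=(2, 3, 3))  # payoffs[agent, a0, a1]

# Current joint control policy: one mixed strategy per agent over 3 policies.
joint = [np.full(3, 1 / 3), np.full(3, 1 / 3)]

for iteration in range(100):
    new_joint = []
    for agent in (0, 1):
        # Reward estimate for each alternate policy of this agent, holding
        # the other agent fixed at its current policy.
        if agent == 0:
            estimates = payoffs[0] @ joint[1]
        else:
            estimates = joint[0] @ payoffs[1]
        # Best response: the alternate policy with the highest estimate.
        best_response = np.zeros(3)
        best_response[int(np.argmax(estimates))] = 1.0
        # Update the agent's current policy using its best response
        # (here: a small step toward it, as in fictitious play).
        new_joint.append(joint[agent] + 0.1 * (best_response - joint[agent]))
    joint = new_joint
```
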
  • Publication number: 20220261635
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a policy neural network by repeatedly updating it at each of a plurality of training iterations. One of the methods includes generating training data for the training iteration by controlling the agent in accordance with an improved policy that selects actions in response to input state representations. A best response computation is performed using (i) a candidate policy generated from the respective policy neural networks as of one or more preceding iterations and (ii) a candidate value neural network. The candidate value neural network is configured to generate a value output that is an estimate of the value, for completing a particular task, of the environment being in the state characterized by a state representation. The policy neural network is then updated by training it on the training data. (A minimal illustrative skeleton of this loop follows the entry.)
    Type: Application
    Filed: January 7, 2022
    Publication date: August 18, 2022
    Inventors: Thomas William Anthony, Thomas Edward Eccles, Andrea Tacchetti, János Kramár, Ian Michael Gemp, Thomas Chalmers Hudson, Nicolas Pierre Mickaël Porcel, Marc Lanctot, Julien Perolat, Richard Everett, Thore Kurt Hartwig Graepel, Yoram Bachrach
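
As a rough illustration of the loop described above, the skeleton below uses toy stand-ins for the policy network, the candidate value network, and the best-response computation. PolicyNet, candidate_value, and improved_policy are all invented names; this is a sketch of the loop's shape under those assumptions, not the patented implementation.

```python
# Hypothetical skeleton: act with an "improved" policy built by a best-response
# computation over candidate policies and a value estimate, then train the
# policy network on the generated data. Everything here is a toy stand-in.
import copy
import random

class PolicyNet:
    """Toy stand-in for the policy neural network (2 discrete actions)."""
    def __init__(self):
        self.logits = [0.0, 0.0]

    def act(self, state):
        return 0 if self.logits[0] >= self.logits[1] else 1

    def train_on(self, data):
        # Crude update toward actions that received high value estimates.
        for state, action, value in data:
            self.logits[action] += 0.1 * value

def candidate_value(state, action):
    """Toy candidate value network: task value of the resulting state."""
    return 1.0 if action == state % 2 else -1.0

def improved_policy(state, candidates):
    """Best-response computation over candidate policies and the value net."""
    actions = {c.act(state) for c in candidates}
    return max(actions, key=lambda a: candidate_value(state, a))

random.seed(0)
policy = PolicyNet()
candidates = [copy.deepcopy(policy)]  # snapshots from preceding iterations
for iteration in range(50):
    # Generate training data by controlling the agent with the improved policy.
    data = []
    for step in range(32):
        state = random.randrange(10)
        action = improved_policy(state, candidates + [policy])
        data.append((state, action, candidate_value(state, action)))
    policy.train_on(data)                     # update the policy network
    candidates.append(copy.deepcopy(policy))  # new candidate for later rounds
```
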
  • Patent number: 11256990
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy. (A sketch of the checkpoint-placement computation follows this entry.)
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: February 22, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos
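
The "training policy" here is a checkpointing schedule: with limited memory, decide which forward states to store and which to recompute during the backward pass. The dynamic program below is a simplified sketch of that trade-off under stated assumptions; cost and next_checkpoint are hypothetical names, and the recurrence (store a state after y steps, solve the right segment with one fewer free slot, then reuse all slots on the left segment) is one standard way to balance recomputation against memory, not necessarily the exact policy the patent claims.

```python
# Simplified sketch: dynamic program choosing where to store forward states
# when backpropagating through t time steps with at most m storable states.
from functools import lru_cache

@lru_cache(maxsize=None)
def cost(t, m):
    """Fewest forward-step recomputations to backprop t steps with m slots."""
    if t <= 1:
        return 0
    if m == 1:
        # Only the initial state fits: replay the prefix for every time step.
        return t * (t - 1) // 2
    # Store a state after y forward steps; the right segment then has m - 1
    # free slots, and the left segment reclaims all m slots afterwards.
    return min(y + cost(t - y, m - 1) + cost(y, m) for y in range(1, t))

def next_checkpoint(t, m):
    """The 'training policy': how far to run forward before storing a state."""
    return min(range(1, t), key=lambda y: y + cost(t - y, m - 1) + cost(y, m))

print(cost(100, 5))             # total recomputation under this memory budget
print(next_checkpoint(100, 5))  # first storage decision for the sequence
```
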
  • Patent number: 10572798
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions. (A minimal illustrative sketch of the combining step follows this entry.)
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: February 25, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
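
The combining layer described above can be illustrated with the widely used mean-subtracted form of the dueling decomposition, Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'). The sketch below assumes NumPy and toy subnetwork outputs; it is not drawn from the patent text itself.

```python
# Illustrative sketch of a dueling combining step. Shapes and names are
# assumptions; the subnetwork outputs below are made-up toy values.
import numpy as np

def combine(value, advantages):
    """Combine V(s) and A(s, a) into one Q value per action.

    Subtracting the mean advantage keeps the decomposition identifiable:
    only the relative advantages affect which action is selected.
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

# Toy outputs of the value and advantage subnetworks for one observation.
value = np.array([[0.5]])                  # V(s), shape (batch, 1)
advantages = np.array([[0.2, -0.1, 0.4]])  # A(s, a), shape (batch, actions)
q_values = combine(value, advantages)
action = int(np.argmax(q_values, axis=-1)[0])  # greedy action selection
```
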
  • Publication number: 20190188572
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
    Type: Application
    Filed: May 19, 2017
    Publication date: June 20, 2019
    Applicant: DeepMind Technologies Limited
    Inventors: Marc Lanctot, Audrunas Gruslys, Ivo Danihelka, Remi Munos
  • Patent number: 10296825
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: May 21, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
  • Publication number: 20180260689
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
    Type: Application
    Filed: May 11, 2018
    Publication date: September 13, 2018
    Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot
  • Publication number: 20170140266
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
    Type: Application
    Filed: November 11, 2016
    Publication date: May 18, 2017
    Applicant: Google Inc.
    Inventors: Ziyu Wang, Joao Ferdinando Gomes de Freitas, Marc Lanctot