Patents by Inventor Arthur Clément Guez

Arthur Clément Guez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11886992
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using both the Q network and a target Q network that is identical in architecture to the Q network but whose current parameter values differ from the current parameter values of the Q network. (A minimal sketch of this target-network technique appears after this listing.)
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 30, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
  • Publication number: 20220366245
    Abstract: A reinforcement learning method and system that selects actions to be performed by a reinforcement learning agent interacting with an environment. A causal model is implemented by a hindsight model neural network and trained using hindsight, i.e., using future environment state trajectories. Because this future information is not available when an action is selected, the hindsight model neural network is used to train a model neural network that is conditioned on data from current observations and learns to predict the output of the hindsight model neural network. (A minimal sketch of this hindsight-distillation setup appears after this listing.)
    Type: Application
    Filed: September 23, 2020
    Publication date: November 17, 2022
    Inventors: Arthur Clement Guez, Fabio Viola, Theophane Guillaume Weber, Lars Buesing, Nicolas Manfred Otto Heess
  • Patent number: 11328183
    Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation and/or historical observations, the trajectory data comprising a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation. (A minimal sketch of the imagination-core and rollout-encoder arrangement appears after this listing.)
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: May 10, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
  • Publication number: 20210073594
    Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation and/or historical observations, the trajectory data comprising a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 11, 2021
    Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
  • Patent number: 10867242
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing the values of the parameters of a reinforcement learning policy neural network, which has the same architecture as the supervised learning policy neural network, to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state. (A minimal sketch of this three-stage training pipeline appears after this listing.)
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: December 15, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
  • Patent number: 10860927
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling an agent interacting with an environment. One of the methods includes obtaining a representation of an observation for a time step; processing the representation using a convolutional long short-term memory (LSTM) neural network comprising a plurality of convolutional LSTM neural network layers; processing an action selection input comprising the final LSTM hidden state output for the time step using an action selection neural network that is configured to receive the action selection input and process it to generate an action selection output that defines an action to be performed by the agent at the time step; selecting, from the action selection output, the action to be performed by the agent at the time step in accordance with an action selection policy; and causing the agent to perform the selected action. (A minimal sketch of this convolutional-LSTM controller appears after this listing.)
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 8, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Mehdi Mirza Mohammadi, Arthur Clement Guez, Karol Gregor, Rishabh Kabra
  • Publication number: 20200364569
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using both the Q network and a target Q network that is identical in architecture to the Q network but whose current parameter values differ from the current parameter values of the Q network.
    Type: Application
    Filed: August 3, 2020
    Publication date: November 19, 2020
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
  • Patent number: 10776670
    Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation and/or historical observations, the trajectory data comprising a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: September 15, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
  • Patent number: 10733504
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using both the Q network and a target Q network that is identical in architecture to the Q network but whose current parameter values differ from the current parameter values of the Q network.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: August 4, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
  • Publication number: 20200104709
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling an agent interacting with an environment. One of the methods includes obtaining a representation of an observation for a time step; processing the representation using a convolutional long short-term memory (LSTM) neural network comprising a plurality of convolutional LSTM neural network layers; processing an action selection input comprising the final LSTM hidden state output for the time step using an action selection neural network that is configured to receive the action selection input and process it to generate an action selection output that defines an action to be performed by the agent at the time step; selecting, from the action selection output, the action to be performed by the agent at the time step in accordance with an action selection policy; and causing the agent to perform the selected action.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Mehdi Mirza Mohammadi, Arthur Clement Guez, Karol Gregor, Rishabh Kabra
  • Publication number: 20200090006
    Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation and/or historical observations, the trajectory data comprising a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
    Type: Application
    Filed: November 19, 2019
    Publication date: March 19, 2020
    Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
  • Publication number: 20180032863
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing the values of the parameters of a reinforcement learning policy neural network, which has the same architecture as the supervised learning policy neural network, to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Application
    Filed: September 29, 2016
    Publication date: February 1, 2018
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
  • Publication number: 20180032864
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing the values of the parameters of a reinforcement learning policy neural network, which has the same architecture as the supervised learning policy neural network, to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Application
    Filed: September 29, 2016
    Publication date: February 1, 2018
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
  • Publication number: 20170076201
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using both the Q network and a target Q network that is identical in architecture to the Q network but whose current parameter values differ from the current parameter values of the Q network.
    Type: Application
    Filed: September 9, 2016
    Publication date: March 16, 2017
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
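
Illustrative code sketches

The sketches below are editorial illustrations only. They are not taken from the patents: they use toy dimensions, invented data, and plain NumPy in place of the deep neural networks described in the claims, and every name in them is hypothetical.

First, the target-Q-network idea from patent numbers 11886992 and 10733504. This is a minimal tabular stand-in, assuming a double-Q-learning-style update in which the Q network chooses the next action and the target Q network, holding older parameter values, evaluates it; the table sizes, learning rate, and experience tuples are invented.

    import numpy as np

    # Tabular stand-in for the Q network and its target network (toy sizes).
    n_states, n_actions, gamma, alpha = 5, 3, 0.99, 0.1
    rng = np.random.default_rng(0)
    q_online = rng.normal(size=(n_states, n_actions))  # "Q network" parameters
    q_target = q_online.copy()                         # "target Q network": same shape, older values

    def td_target(reward, next_state):
        # The online network picks the best next action; the target network scores it.
        best_action = int(np.argmax(q_online[next_state]))
        return reward + gamma * q_target[next_state, best_action]

    # Experience tuples (state, action, reward, next_state) -- invented data.
    experience = [(0, 1, 1.0, 2), (2, 0, 0.0, 4), (4, 2, -1.0, 1)]
    for s, a, r, s_next in experience:
        q_online[s, a] += alpha * (td_target(r, s_next) - q_online[s, a])

    # Periodically copy the online parameters into the target network.
    q_target = q_online.copy()

Keeping the target network's parameter values out of step with the online network's is what makes the bootstrapped targets differ from the predictions being trained; in the double-Q variant sketched here it also decouples action selection from action evaluation.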
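Next, the hindsight-model arrangement from publication number 20220366245. This linear toy assumes the hindsight features are a fixed function of the future trajectory; the shapes, the toy dynamics tying the future to the present, and the training loop are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    obs_dim, future_dim, feat_dim, lr = 4, 8, 3, 0.05

    # Hindsight model: consumes the *future* state trajectory (training time only).
    W_hindsight = rng.normal(size=(feat_dim, future_dim))
    def hindsight_features(future_traj):
        return W_hindsight @ future_traj

    # Model network: conditioned only on the current observation, trained to
    # predict what the hindsight model would output.
    W_model = np.zeros((feat_dim, obs_dim))

    for _ in range(2000):
        obs = rng.normal(size=obs_dim)
        # Toy "future trajectory" correlated with the current observation.
        future = np.concatenate([obs, obs]) + 0.1 * rng.normal(size=future_dim)
        target = hindsight_features(future)    # teacher: uses future information
        pred = W_model @ obs                   # student: uses the present only
        W_model += lr * np.outer(target - pred, obs)

    # At action-selection time only the student network is available.
    print(W_model @ rng.normal(size=obs_dim))

The point mirrors the abstract: the hindsight network is never queried when acting, because the future it conditions on has not happened yet; its knowledge is distilled into a network that needs only current observations.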
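Next, the imagination-core, rollout-encoder, and output-stage arrangement from patent numbers 11328183 and 10776670. This sketch assumes a fixed linear environment model, mean-pooling as the rollout encoder, and a linear output stage; all of those choices, and the dimensions, are invented stand-ins for the learned components the abstract describes.

    import numpy as np

    rng = np.random.default_rng(2)
    state_dim, n_actions, rollout_len = 4, 3, 5

    # Imagination core: a model of the environment (here a fixed random linear model).
    A = rng.normal(scale=0.5, size=(state_dim, state_dim))
    B = rng.normal(scale=0.5, size=(state_dim, n_actions))

    def imagine_rollout(obs, action):
        # Roll the model forward to produce trajectory data: a sequence of
        # imagined future features of the environment.
        features, state = [], obs
        a = np.eye(n_actions)[action]
        for _ in range(rollout_len):
            state = np.tanh(A @ state + B @ a)
            features.append(state)
        return features

    # Rollout encoder: summarizes each imagined trajectory into one embedding.
    def encode_rollout(features):
        return np.mean(features, axis=0)

    # Output stage: maps the concatenated rollout embeddings to action policy data.
    W_out = rng.normal(size=(n_actions, state_dim * n_actions))

    def policy_logits(obs):
        embeddings = [encode_rollout(imagine_rollout(obs, a)) for a in range(n_actions)]
        return W_out @ np.concatenate(embeddings)

    obs = rng.normal(size=state_dim)
    print("action:", int(np.argmax(policy_logits(obs))))

One rollout is imagined per candidate action here, so the output stage can compare the encoded consequences of each action before producing the policy.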
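Next, the three-stage pipeline of patent number 10867242: train a supervised learning policy network, initialize a same-architecture reinforcement learning policy network from it, improve that network with reinforcement learning, then train a value network to predict long-term reward. The stand-in expert labels, REINFORCE-style update, and single-step rewards are assumptions for the sketch.

    import numpy as np

    rng = np.random.default_rng(3)
    obs_dim, n_actions, lr = 6, 4, 0.01

    # 1) Supervised learning policy network, trained on (state, expert action) pairs.
    W_sl = np.zeros((n_actions, obs_dim))
    for _ in range(1000):
        s = rng.normal(size=obs_dim)
        expert_a = int(np.argmax(s[:n_actions]))        # stand-in expert labels
        p = np.exp(W_sl @ s); p /= p.sum()
        grad = -np.outer(p, s); grad[expert_a] += s     # cross-entropy gradient
        W_sl += lr * grad

    # 2) Initialize the RL policy network (same architecture) from the SL weights.
    W_rl = W_sl.copy()

    # 3) Improve the RL policy with policy-gradient updates on sampled actions.
    for _ in range(1000):
        s = rng.normal(size=obs_dim)
        p = np.exp(W_rl @ s); p /= p.sum()
        a = rng.choice(n_actions, p=p)
        reward = 1.0 if a == int(np.argmax(s[:n_actions])) else -1.0
        grad = -np.outer(p, s); grad[a] += s            # REINFORCE gradient of log pi(a|s)
        W_rl += lr * reward * grad

    # 4) Train the value network to predict the long-term reward of a state
    #    under the RL policy (here, regression to the observed return).
    w_value = np.zeros(obs_dim)
    for _ in range(2000):
        s = rng.normal(size=obs_dim)
        p = np.exp(W_rl @ s); p /= p.sum()
        a = rng.choice(n_actions, p=p)
        ret = 1.0 if a == int(np.argmax(s[:n_actions])) else -1.0
        w_value += lr * (ret - w_value @ s) * s

Initializing the RL policy from the supervised one, as in step 2, means reinforcement learning starts from behavior that already imitates the expert data rather than from scratch.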
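Finally, the convolutional-LSTM controller of patent number 10860927. For brevity this sketch uses a single ConvLSTM cell with 1x1 convolutions (per-pixel linear maps) run for several internal ticks, rather than the plurality of layers with spatial kernels the claims describe; the grid size, channel counts, and greedy selection policy are invented.

    import numpy as np

    rng = np.random.default_rng(4)
    H, W_grid, C_in, C_hid, n_actions = 5, 5, 3, 8, 4

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One ConvLSTM cell; 1x1 convolutions keep the sketch short.
    Wx = rng.normal(scale=0.1, size=(4 * C_hid, C_in))
    Wh = rng.normal(scale=0.1, size=(4 * C_hid, C_hid))
    b = np.zeros((4 * C_hid, 1, 1))

    def convlstm_step(x, h, c):
        # x: (C_in, H, W); h, c: (C_hid, H, W). Gates computed per spatial location.
        gates = (np.tensordot(Wx, x, axes=([1], [0]))
                 + np.tensordot(Wh, h, axes=([1], [0])) + b)
        i, f, o, g = np.split(gates, 4, axis=0)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

    # Action selection network: maps the final LSTM hidden state to action logits.
    W_act = rng.normal(scale=0.1, size=(n_actions, C_hid * H * W_grid))

    h = np.zeros((C_hid, H, W_grid)); c = np.zeros((C_hid, H, W_grid))
    obs = rng.normal(size=(C_in, H, W_grid))   # representation of the observation
    for _ in range(3):                         # several internal ticks per time step
        h, c = convlstm_step(obs, h, c)
    logits = W_act @ h.reshape(-1)
    print("selected action:", int(np.argmax(logits)))  # greedy selection policy

Because the recurrent state keeps its spatial layout, the hidden state passed to the action selection network preserves where in the observation grid information was accumulated.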