Patents by Inventor Adrià Puigdomènech Badia
Adrià Puigdomènech Badia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240362481
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Application
  Filed: May 13, 2024
  Publication date: October 31, 2024
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
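As an illustration of the claimed pattern, here is a minimal sketch of asynchronous training with independent workers, each holding its own environment replica while all of them update one shared model. The `ToyEnv` reward model, the single shared parameter, the update rule, and the lock are hypothetical stand-ins, not details from the patent.

```python
import threading
import numpy as np

class ToyEnv:
    """Hypothetical stand-in for one worker's environment replica."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def step(self):
        # One interaction with the environment yields a noisy reward.
        return self.rng.normal(loc=1.0)

shared_params = np.zeros(1)   # shared model, updated by all workers
lock = threading.Lock()       # guards the shared update in this sketch

def worker(worker_id, steps=1000, lr=0.01):
    env = ToyEnv(seed=worker_id)          # each worker gets its own replica
    for _ in range(steps):
        reward = env.step()
        grad = reward - shared_params[0]  # toy gradient toward the mean reward
        with lock:
            shared_params[0] += lr * grad # asynchronous update of the shared model

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned estimate:", shared_params[0])
```

In a full system each worker would compute gradients of a deep neural network from its own actor's experience; the sketch only shows the independent-workers, shared-model structure the abstract describes.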
- Publication number: 20240320506
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling a reinforcement learning agent in an environment to perform a task using a retrieval-augmented action selection process.
  Type: Application
  Filed: October 5, 2022
  Publication date: September 26, 2024
  Inventors: Anirudh Goyal, Andrea Banino, Abram Luke Friesen, Theophane Guillaume Weber, Adrià Puigdomènech Badia, Nan Ke, Simon Osindero, Timothy Paul Lillicrap, Charles Blundell
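The abstract names only a retrieval-augmented action selection process, so the sketch below is one plausible reading under stated assumptions: embed the current observation, look up nearest neighbours in a dataset of past experience, and bias action scores toward actions that worked there. The dataset contents, the `embed` function, and the bonus-scoring rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical retrieval dataset: embeddings of past states with the actions taken.
dataset_keys = rng.normal(size=(100, 8))
dataset_actions = rng.integers(0, 4, size=100)

def embed(observation):
    # Stand-in for an embedding network.
    return observation[:8]

def retrieve(query, k=5):
    # Nearest neighbours of the query embedding in the retrieval dataset.
    dists = np.linalg.norm(dataset_keys - query, axis=1)
    return np.argsort(dists)[:k]

def select_action(observation, num_actions=4):
    neighbours = retrieve(embed(observation))
    base_logits = rng.normal(size=num_actions)  # stand-in for a policy network output
    # Retrieval augmentation: boost actions seen in the retrieved experience.
    bonus = np.bincount(dataset_actions[neighbours], minlength=num_actions)
    logits = base_logits + bonus
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(num_actions, p=probs)

print("chosen action:", select_action(rng.normal(size=8)))
```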
- Patent number: 12020155
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Grant
  Filed: April 29, 2022
  Date of Patent: June 25, 2024
  Assignee: DeepMind Technologies Limited
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Publication number: 20240095495
  Abstract: A system for controlling an agent interacting with an environment to perform a task. The system includes an action selection neural network configured to generate action selection outputs that are used to select actions to be performed by the agent. The action selection neural network includes an encoder sub network configured to generate encoded representations of the current observations; an attention sub network configured to generate attention sub network outputs with the use of an attention mechanism; a recurrent sub network configured to generate recurrent sub network outputs; and an action selection sub network configured to generate the action selection outputs that are used to select the actions to be performed by the agent in response to the current observations.
  Type: Application
  Filed: February 7, 2022
  Publication date: March 21, 2024
  Inventors: Andrea Banino, Adrià Puigdomènech Badia, Jacob Charles Walker, Timothy Anthony Julian Scholtes, Jovana Mitrovic, Charles Blundell
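The four sub-networks named in the abstract compose into a simple pipeline: encode the observation, attend over past encodings, update a recurrent state, and emit action scores. The sketch below shows that composition only; the fixed random weight matrices, the single-head dot-product attention, and the plain RNN cell are stand-ins for trained sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, D = 10, 16  # illustrative observation and feature widths

# Fixed random weights standing in for trained parameters of each sub network.
W_enc = rng.standard_normal((OBS, D)) * 0.1
W_rec_in = rng.standard_normal((D, D)) * 0.1
W_rec_h = rng.standard_normal((D, D)) * 0.1
W_act = rng.standard_normal((D, 4)) * 0.1

def encoder(obs):
    return np.tanh(obs @ W_enc)        # encoded representation of the observation

def attend(query, memory):
    scores = memory @ query            # dot-product attention over past encodings
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ memory                  # attention sub network output

def step(obs, h, memory):
    e = encoder(obs)
    a = attend(e, memory)
    h = np.tanh((e + a) @ W_rec_in + h @ W_rec_h)  # recurrent sub network output
    logits = h @ W_act                             # action selection output
    return logits, h, np.vstack([memory, e[None]])

h, memory = np.zeros(D), np.zeros((1, D))
for t in range(5):
    logits, h, memory = step(rng.normal(size=OBS), h, memory)
    print("step", t, "-> action", int(np.argmax(logits)))
```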
- Publication number: 20240028866
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
  Type: Application
  Filed: June 13, 2023
  Publication date: January 25, 2024
  Inventors: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
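The key step in this claim is combining the exploration reward with the exploration importance factor into an overall reward. Here is a minimal sketch of that combination; the novelty-based exploration reward (distance to recently visited embeddings) is an assumption, since the abstract does not define how the exploration reward is computed.

```python
import numpy as np

def exploration_reward(embedding, memory, k=5):
    # Assumed novelty bonus: larger when the embedding is far from visited states.
    if len(memory) == 0:
        return 1.0
    dists = np.sort(np.linalg.norm(np.asarray(memory) - embedding, axis=1))
    return float(np.mean(dists[:k]))

def overall_reward(extrinsic, exploratory, beta):
    # beta is the exploration importance factor from the abstract: it weights
    # (ii) the exploration reward against the task (extrinsic) reward.
    return extrinsic + beta * exploratory

rng = np.random.default_rng(0)
memory = []
for t in range(10):
    emb = rng.normal(size=4)  # stand-in for an embedded observation
    r_explore = exploration_reward(emb, memory)
    r_total = overall_reward(extrinsic=0.0, exploratory=r_explore, beta=0.3)
    memory.append(emb)
    print(f"step {t}: overall reward {r_total:.3f}")
```

Because the importance factor is also an input to the action selection network (per the abstract), a single network can learn both exploratory and exploitative behaviour, selected by the factor it is conditioned on.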
- Patent number: 11783182
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Grant
  Filed: February 8, 2021
  Date of Patent: October 10, 2023
  Assignee: DeepMind Technologies Limited
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Patent number: 11720796
  Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the multiple actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
  Type: Grant
  Filed: April 23, 2020
  Date of Patent: August 8, 2023
  Assignee: DeepMind Technologies Limited
  Inventors: Benigno Uria-Martínez, Alexander Pritzel, Charles Blundell, Adrià Puigdomènech Badia
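The claim maps almost directly onto code: a per-action memory of (key embedding, return estimate) pairs, a p-nearest-neighbour lookup under a distance measure, and a Q value aggregated from the retrieved returns. In the sketch below the memory contents are random and the inverse-distance weighting is an assumption; the abstract only says the Q value is determined from the retrieved return estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS, P, DIM = 3, 5, 8

# Per-action episodic memory: arrays of key embeddings and their return estimates.
memory = {a: (rng.normal(size=(50, DIM)), rng.normal(size=50))
          for a in range(NUM_ACTIONS)}

def embed(observation):
    return observation[:DIM]  # stand-in for the embedding neural network

def q_value(action, key):
    keys, returns = memory[action]
    dists = np.linalg.norm(keys - key, axis=1)   # the distance measure
    nearest = np.argsort(dists)[:P]              # the p nearest key embeddings
    weights = 1.0 / (dists[nearest] + 1e-3)      # inverse-distance weights (assumed)
    return float(np.average(returns[nearest], weights=weights))

obs = rng.normal(size=DIM)
key = embed(obs)                                 # current key embedding
qs = [q_value(a, key) for a in range(NUM_ACTIONS)]
print("Q values:", np.round(qs, 3), "-> action", int(np.argmax(qs)))
```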
- Patent number: 11714990
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
  Type: Grant
  Filed: May 22, 2020
  Date of Patent: August 1, 2023
  Assignee: DeepMind Technologies Limited
  Inventors: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
- Publication number: 20230059004
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning with adaptive return computation schemes. In one aspect, a method includes: maintaining data specifying a policy for selecting between multiple different return computation schemes, each return computation scheme assigning a different importance to exploring the environment while performing an episode of a task; selecting, using the policy, a return computation scheme from the multiple different return computation schemes; controlling an agent to perform the episode of the task to maximize a return computed according to the selected return computation scheme; identifying rewards that were generated as a result of the agent performing the episode of the task; and updating, using the identified rewards, the policy for selecting between multiple different return computation schemes.
  Type: Application
  Filed: February 8, 2021
  Publication date: February 23, 2023
  Inventors: Adrià Puigdomènech Badia, Bilal Piot, Pablo Sprechmann, Steven James Kapturowski, Alex Vitvitskyi, Zhaohan Guo, Charles Blundell
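The selection policy over return computation schemes can be treated like a multi-armed bandit: pick a scheme, run an episode under it, and update the policy with the rewards that episode produced. The sketch below uses a UCB rule as one plausible instantiation of the "policy" the abstract leaves abstract; the specific (discount, exploration-weight) pairs and the toy episode reward are also assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each scheme pairs a discount with an exploration weight (illustrative values).
schemes = [(0.99, 0.0), (0.99, 0.1), (0.97, 0.3)]
counts = np.zeros(len(schemes))
values = np.zeros(len(schemes))

def select_scheme(t, c=1.0):
    # Simple UCB policy over schemes: one way to realize the claimed
    # "policy for selecting between return computation schemes".
    if t < len(schemes):
        return t  # try each scheme once first
    ucb = values + c * np.sqrt(np.log(t + 1) / counts)
    return int(np.argmax(ucb))

def run_episode(gamma, beta):
    # Stand-in for controlling the agent for one episode; returns the episode
    # reward that is then used to update the selection policy.
    return rng.normal(loc=beta)  # toy signal: more exploration pays off here

for t in range(100):
    i = select_scheme(t)
    gamma, beta = schemes[i]
    reward = run_episode(gamma, beta)
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]  # running-mean update

print("scheme preference:", values.round(2))
```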
- Publication number: 20220261647
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Application
  Filed: April 29, 2022
  Publication date: August 18, 2022
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Publication number: 20220253698
  Abstract: A neural network based memory system with external memory for storing representations of knowledge items. The memory can be used to retrieve indirectly related knowledge items by recirculating queries, and is useful for relational reasoning. Implementations of the system control how many times queries are recirculated, and hence the degree of relational reasoning, to minimize computation.
  Type: Application
  Filed: May 22, 2020
  Publication date: August 11, 2022
  Inventors: Andrea Banino, Charles Blundell, Adrià Puigdomènech Badia, Raphael Koster, Sudarshan Kumaran
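Recirculating a query means the readout from one attention pass over memory becomes the query for the next pass, so items related only through intermediate items can be reached in a few hops. A minimal sketch follows; the confidence-based stopping test is a stand-in for the learned halting mechanism that, per the abstract, controls how many times queries are recirculated.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 8
memory = rng.normal(size=(N, D))  # stored representations of knowledge items

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query):
    w = softmax(memory @ query)   # attention weights over the memory slots
    return w @ memory             # content read out from memory

def answer(query, max_hops=5, halt_threshold=0.9):
    # Recirculate the query: each readout becomes the next query, letting the
    # system chain through indirectly related items. The confidence test is an
    # assumed stand-in for the learned halting mechanism limiting computation.
    hops = 0
    for hops in range(1, max_hops + 1):
        read = attend(query)
        query = read
        if softmax(memory @ read).max() > halt_threshold:
            break
    return query, hops

out, hops = answer(rng.normal(size=D))
print("answered after", hops, "hop(s)")
```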
- Patent number: 11334792
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Grant
  Filed: May 3, 2019
  Date of Patent: May 17, 2022
  Assignee: DeepMind Technologies Limited
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Patent number: 11328183
  Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation, and/or historical observations. The trajectory data comprises a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
  Type: Grant
  Filed: September 14, 2020
  Date of Patent: May 10, 2022
  Assignee: DeepMind Technologies Limited
  Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
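The structure the abstract describes has three parts: an imagination core that unrolls a learned environment model into an imagined trajectory, a rollout encoder that compresses the trajectory, and an output stage that turns the encoded rollouts into action policy data. The sketch below shows that flow under stated assumptions: the environment model and policy are random linear maps, and mean-pooling stands in for a learned rollout encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
D, A, HORIZON = 8, 3, 4

W_model = rng.standard_normal((D + A, D)) * 0.1  # stand-in environment model
W_policy = rng.standard_normal((D, A)) * 0.1     # stand-in policy weights

def env_model(state, action):
    onehot = np.eye(A)[action]
    # Imagined next features of the environment given the current features.
    return np.tanh(np.concatenate([state, onehot]) @ W_model)

def imagine_rollout(state, action):
    # Imagination core: unroll the model from the current observation.
    features = []
    for _ in range(HORIZON):
        state = env_model(state, action)
        features.append(state)
        action = int(np.argmax(state @ W_policy))  # follow a rollout policy
    return np.stack(features)

def rollout_encoder(features):
    return features.mean(axis=0)  # pooling stands in for a learned encoder

def act(observation):
    # One imagined rollout per candidate first action; the output stage maps
    # the aggregated rollout embeddings to action policy data.
    embeddings = [rollout_encoder(imagine_rollout(observation, a)) for a in range(A)]
    logits = np.sum(embeddings, axis=0) @ W_policy
    return int(np.argmax(logits))

print("action:", act(rng.normal(size=D)))
```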
- Publication number: 20210166127
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Application
  Filed: February 8, 2021
  Publication date: June 3, 2021
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Publication number: 20210073594
  Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation, and/or historical observations. The trajectory data comprises a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
  Type: Application
  Filed: September 14, 2020
  Publication date: March 11, 2021
  Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
- Patent number: 10936946
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
  Type: Grant
  Filed: November 11, 2016
  Date of Patent: March 2, 2021
  Assignee: DeepMind Technologies Limited
  Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
- Publication number: 20200372366
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
  Type: Application
  Filed: May 22, 2020
  Publication date: November 26, 2020
  Inventors: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
- Patent number: 10776670
  Abstract: A neural network system is proposed. The neural network can be trained by model-based reinforcement learning to select actions to be performed by an agent interacting with an environment, to perform a task in an attempt to achieve a specified result. The system may comprise at least one imagination core which receives a current observation characterizing a current state of the environment, and optionally historical observations, and which includes a model of the environment. The imagination core may be configured to output trajectory data in response to the current observation, and/or historical observations. The trajectory data comprises a sequence of future features of the environment imagined by the imagination core. The system may also include a rollout encoder to encode the features, and an output stage to receive data derived from the rollout embedding and to output action policy data for identifying an action based on the current observation.
  Type: Grant
  Filed: November 19, 2019
  Date of Patent: September 15, 2020
  Assignee: DeepMind Technologies Limited
  Inventors: Daniel Pieter Wierstra, Yujia Li, Razvan Pascanu, Peter William Battaglia, Theophane Guillaume Weber, Lars Buesing, David Paul Reichert, Arthur Clement Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Manfred Otto Heess, Sebastien Henri Andre Racaniere
- Publication number: 20200285940
  Abstract: There is described herein a computer-implemented method of processing an input data item. The method comprises processing the input data item using a parametric model to generate output data, wherein the parametric model comprises a first sub-model and a second sub-model. The processing comprises processing, by the first sub-model, the input data to generate a query data item, retrieving, from a memory storing data point-value pairs, at least one data point-value pair based upon the query data item and modifying weights of the second sub-model based upon the retrieved at least one data point-value pair. The output data is then generated based upon the modified second sub-model.
  Type: Application
  Filed: October 29, 2018
  Publication date: September 10, 2020
  Inventors: Pablo Sprechmann, Siddhant Jayakumar, Jack William Rae, Alexander Pritzel, Adrià Puigdomènech Badia, Oriol Vinyals, Razvan Pascanu, Charles Blundell
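The unusual step in this claim is that retrieval does not just condition the output: it modifies the weights of the second sub-model before the output is generated. One way to realize that, sketched below under stated assumptions, is to take a few gradient steps on the retrieved point-value pairs starting from the model's slow weights. The identity query map, the linear second sub-model, and the local-adaptation rule are all illustrative choices, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# Memory of (data point, value) pairs gathered earlier (illustrative contents).
mem_points = rng.normal(size=(50, D))
mem_values = mem_points @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)

W_query = np.eye(D)   # first sub-model: maps the input to a query (identity stand-in)
w_slow = np.zeros(D)  # second sub-model: linear weights before adaptation

def process(x, k=10, steps=20, lr=0.1):
    query = W_query @ x                                # first sub-model output
    nearest = np.argsort(np.linalg.norm(mem_points - query, axis=1))[:k]
    Xr, yr = mem_points[nearest], mem_values[nearest]  # retrieved point-value pairs
    w = w_slow.copy()
    for _ in range(steps):
        # Modify the second sub-model's weights using the retrieved pairs:
        # gradient steps on the local squared error (an assumed adaptation rule).
        grad = Xr.T @ (Xr @ w - yr) / k
        w -= lr * grad
    return x @ w                                       # output from the adapted model

x = rng.normal(size=D)
print("adapted prediction:", process(x))
```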
- Publication number: 20200265317
  Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the multiple actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
  Type: Application
  Filed: April 23, 2020
  Publication date: August 20, 2020
  Inventors: Benigno Uria-Martínez, Alexander Pritzel, Charles Blundell, Adrià Puigdomènech Badia