Patents by Inventor Maxwell Elliot Jaderberg
Maxwell Elliot Jaderberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240370725
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Application
Filed: July 12, 2024
Publication date: November 7, 2024
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
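The abstract above describes a self-play style loop: keep a pool of candidate action selection policies, pick opponents according to a matchmaking policy, and update the policy network with reinforcement learning. A minimal sketch of such a loop follows; the `environment.rollout` and `rl_update` interfaces, the uniform matchmaking choice, and the snapshot schedule are illustrative assumptions, not details taken from the patent.

```python
import copy
import random

def train_against_pool(policy_net, environment, rl_update, num_steps,
                       snapshot_every=1000):
    """Illustrative sketch only: train a policy network against a pool of
    frozen snapshots of itself, with opponents chosen by a matchmaking policy
    (here simply uniform random over the pool)."""
    pool = [copy.deepcopy(policy_net)]        # pool of candidate action selection policies
    for step in range(num_steps):
        opponent = random.choice(pool)        # matchmaking policy: uniform for illustration
        # Collect experience with the agent controlled by policy_net and the
        # other agent(s) controlled by the sampled opponent policy.
        trajectory = environment.rollout(agent=policy_net, opponent=opponent)
        rl_update(policy_net, trajectory)     # reinforcement learning update of the policy parameters
        if step % snapshot_every == 0:
            pool.append(copy.deepcopy(policy_net))   # grow the candidate pool
    return policy_net
```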
-
Publication number: 20240346310
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. A method includes: training a neural network having a plurality of network parameters to perform a particular neural network task and to determine trained values of the network parameters using an iterative training process having a plurality of hyperparameters, the method comprising: maintaining a plurality of candidate neural networks and, for each of the candidate neural networks, data specifying: (i) respective values of the network parameters for the candidate neural network, (ii) respective values of the hyperparameters for the candidate neural network, and (iii) a quality measure that measures a performance of the candidate neural network on the particular neural network task; and for each of the plurality of candidate neural networks, repeatedly performing additional training operations.
Type: Application
Filed: March 21, 2024
Publication date: October 17, 2024
Inventors: Maxwell Elliot Jaderberg, Wojciech Czarnecki, Timothy Frederick Goldie Green, Valentin Clement Dalibard
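This abstract (and the related grants further down) describes population based training: each population member carries its own network parameters, hyperparameters, and a quality measure, and weaker members periodically copy and perturb stronger ones. A minimal sketch of that idea is shown below; the `Candidate` container and the `train_steps` and `evaluate` callables are hypothetical stand-ins, and the multiplicative hyperparameter perturbation is one common choice rather than anything specified by the patent.

```python
import copy
import random

class Candidate:
    """One population member: network parameters, hyperparameters, quality measure."""
    def __init__(self, params, hyperparams):
        self.params = params
        self.hyperparams = hyperparams
        self.quality = float("-inf")

def population_based_training(population, train_steps, evaluate, rounds):
    for _ in range(rounds):
        for cand in population:
            train_steps(cand)                      # partial training using cand.hyperparams
            cand.quality = evaluate(cand)          # quality measure on the task
        for cand in population:
            rival = random.choice(population)
            if rival.quality > cand.quality:
                # Exploit: copy the better candidate's parameters and hyperparameters.
                cand.params = copy.deepcopy(rival.params)
                # Explore: perturb the copied hyperparameters.
                cand.hyperparams = {k: v * random.choice([0.8, 1.2])
                                    for k, v in rival.hyperparams.items()}
    return max(population, key=lambda c: c.quality)
```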
-
Publication number: 20240330701
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an agent neural network for use in controlling an agent to perform a plurality of tasks. One of the methods includes maintaining population data specifying a population of one or more candidate agent neural networks; and training each candidate agent neural network on a respective set of one or more tasks to update the parameter values of the parameters of the candidate agent neural networks in the population data, the training comprising, for each candidate agent neural network: obtaining data identifying a candidate task; obtaining data specifying a control policy for the candidate task; determining whether to train the candidate agent neural network on the candidate task; and in response to determining to train the candidate agent neural network on the candidate task, training the candidate agent neural network on the candidate task.
Type: Application
Filed: July 27, 2022
Publication date: October 3, 2024
Inventors: Maxwell Elliot Jaderberg, Wojciech Czarnecki
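Here a population of candidate agent networks is trained across many tasks, with a per-candidate decision about whether to train on a proposed task and control policy. The sketch below illustrates that control flow only; `propose_control_policy`, `should_train`, and `train_on_task` are hypothetical placeholders for whatever criteria and training routine the patent actually specifies.

```python
import random

def train_population_on_tasks(population, tasks, propose_control_policy,
                              should_train, train_on_task, rounds):
    """Illustrative sketch: each candidate agent network repeatedly receives a
    candidate task and a control policy for it, and is trained on that task
    only if the (hypothetical) should_train criterion accepts the pairing."""
    for _ in range(rounds):
        for agent in population:
            candidate_task = random.choice(tasks)              # obtain a candidate task
            control_policy = propose_control_policy(candidate_task)
            if should_train(agent, candidate_task, control_policy):
                train_on_task(agent, candidate_task, control_policy)
    return population
```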
-
Publication number: 20240320469
Abstract: A system including an attention neural network that is configured to receive an input sequence and to process the input sequence to generate an output is described. The attention neural network includes: an attention block configured to receive a query input, a key input, and a value input that are derived from an attention block input. The attention block includes an attention neural network layer configured to: receive an attention layer input derived from the query input, the key input, and the value input, and apply an attention mechanism to the query input, the key input, and the value input to generate an attention layer output for the attention neural network layer; and a gating neural network layer configured to apply a gating mechanism to the attention block input and the attention layer output of the attention neural network layer to generate a gated attention output.
Type: Application
Filed: May 30, 2024
Publication date: September 26, 2024
Inventors: Emilio Parisotto, Hasuk Song, Jack William Rae, Siddhant Madhu Jayakumar, Maxwell Elliot Jaderberg, Razvan Pascanu, Caglar Gulcehre
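The gated attention block described here combines the attention block input with the attention layer output through a gating mechanism rather than a plain residual connection. A PyTorch sketch follows; the GRU-style gate and the pre-attention LayerNorm are common choices assumed here for illustration, not necessarily the exact gating function claimed.

```python
import torch
from torch import nn

class GatedAttentionBlock(nn.Module):
    """Sketch of an attention block whose output is combined with the block
    input through a gating layer (a GRU-style gate is one common choice)."""
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.gate = nn.GRUCell(embed_dim, embed_dim)

    def forward(self, block_input):
        # Query, key, and value are all derived from the attention block input.
        qkv = self.norm(block_input)
        attn_out, _ = self.attention(qkv, qkv, qkv)
        # Gating mechanism applied to the block input and the attention output.
        batch, seq, dim = block_input.shape
        gated = self.gate(attn_out.reshape(-1, dim), block_input.reshape(-1, dim))
        return gated.reshape(batch, seq, dim)

# Example: a batch of 2 sequences of length 5 with embedding size 32.
x = torch.randn(2, 5, 32)
block = GatedAttentionBlock(embed_dim=32, num_heads=4)
print(block(x).shape)  # torch.Size([2, 5, 32])
```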
-
Patent number: 12067491
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Grant
Filed: April 6, 2023
Date of Patent: August 20, 2024
Assignee: DeepMind Technologies Limited
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
-
Publication number: 20240242091
Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network for performing a task. The system maintains data specifying (i) a plurality of candidate neural networks and (ii) a partitioning of the plurality of candidate neural networks into a plurality of partitions. The system repeatedly performs operations, including: training each of the candidate neural networks; evaluating each candidate neural network using a respective fitness function for the partition; and for each partition, updating the respective values of the one or more hyperparameters for at least one of the candidate neural networks in the partition based on the respective fitness metrics of the candidate neural networks in the partition. After repeatedly performing the operations, the system selects, from the maintained data, the respective values of the network parameters of one of the candidate neural networks.
Type: Application
Filed: May 30, 2022
Publication date: July 18, 2024
Inventors: Valentin Clement Dalibard, Maxwell Elliot Jaderberg
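Here the population of candidate networks is split into partitions, each evaluated with its own fitness function, and hyperparameters are updated within each partition. A rough sketch of that loop, reusing a candidate object with `params`, `hyperparams`, and `fitness` attributes like the `Candidate` container sketched earlier; the multiplicative hyperparameter perturbation is again an illustrative assumption.

```python
import random

def train_partitioned_population(partitions, train, fitness_fns, rounds):
    """Sketch: candidates are split into partitions, each partition has its own
    fitness function, and hyperparameters are updated within each partition
    based on the partition's fitness metrics."""
    for _ in range(rounds):
        for name, candidates in partitions.items():
            fitness_fn = fitness_fns[name]
            for cand in candidates:
                train(cand)
                cand.fitness = fitness_fn(cand)
            # Update hyperparameters of at least one weak candidate from a strong one.
            best = max(candidates, key=lambda c: c.fitness)
            worst = min(candidates, key=lambda c: c.fitness)
            if worst is not best:
                worst.hyperparams = {k: v * random.choice([0.8, 1.2])
                                     for k, v in best.hyperparams.items()}
    # Select the network parameters of one candidate from the maintained data.
    all_candidates = [c for cs in partitions.values() for c in cs]
    return max(all_candidates, key=lambda c: c.fitness).params
```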
-
Patent number: 12033055
Abstract: A system including an attention neural network that is configured to receive an input sequence and to process the input sequence to generate an output is described. The attention neural network includes: an attention block configured to receive a query input, a key input, and a value input that are derived from an attention block input. The attention block includes an attention neural network layer configured to: receive an attention layer input derived from the query input, the key input, and the value input, and apply an attention mechanism to the query input, the key input, and the value input to generate an attention layer output for the attention neural network layer; and a gating neural network layer configured to apply a gating mechanism to the attention block input and the attention layer output of the attention neural network layer to generate a gated attention output.
Type: Grant
Filed: September 7, 2020
Date of Patent: July 9, 2024
Assignee: DeepMind Technologies Limited
Inventors: Emilio Parisotto, Hasuk Song, Jack William Rae, Siddhant Madhu Jayakumar, Maxwell Elliot Jaderberg, Razvan Pascanu, Caglar Gulcehre
-
Publication number: 20240220774
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning. One of the methods includes selecting an action to be performed by the agent using both a slow updating recurrent neural network and a fast updating recurrent neural network that receives a fast updating input that includes the hidden state of the slow updating recurrent neural network.
Type: Application
Filed: December 11, 2023
Publication date: July 4, 2024
Inventors: Iain Robert Dunning, Wojciech Czarnecki, Maxwell Elliot Jaderberg
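The agent described here selects actions with two recurrent cores: a slow-updating RNN and a fast-updating RNN whose input includes the slow RNN's hidden state. A PyTorch sketch of that wiring is below; the GRU cells, the fixed `slow_period`, and the single policy head are assumptions made for illustration only.

```python
import torch
from torch import nn

class SlowFastAgent(nn.Module):
    """Sketch of an action selection core with a slow-updating RNN and a
    fast-updating RNN whose input includes the slow RNN's hidden state."""
    def __init__(self, obs_dim, hidden_dim, num_actions, slow_period=4):
        super().__init__()
        self.slow = nn.GRUCell(obs_dim, hidden_dim)
        self.fast = nn.GRUCell(obs_dim + hidden_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, num_actions)
        self.slow_period = slow_period

    def forward(self, obs_seq, slow_h, fast_h):
        logits = []
        for t, obs in enumerate(obs_seq):
            if t % self.slow_period == 0:
                slow_h = self.slow(obs, slow_h)            # slow core updates less often
            fast_in = torch.cat([obs, slow_h], dim=-1)     # fast input includes slow hidden state
            fast_h = self.fast(fast_in, fast_h)
            logits.append(self.policy(fast_h))
        return torch.stack(logits), slow_h, fast_h

agent = SlowFastAgent(obs_dim=16, hidden_dim=32, num_actions=4)
obs_seq = [torch.randn(1, 16) for _ in range(8)]
logits, sh, fh = agent(obs_seq, torch.zeros(1, 32), torch.zeros(1, 32))
```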
-
Publication number: 20240144015
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
Type: Application
Filed: November 3, 2023
Publication date: May 2, 2024
Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
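This abstract covers training an action selection policy network jointly with auxiliary control networks and a reward prediction network that consume its intermediate outputs. The sketch below shows one way such heads can share an intermediate representation; the specific encoder and head shapes are illustrative, and the auxiliary and reward-prediction losses themselves are omitted.

```python
import torch
from torch import nn

class ActorWithAuxiliaryHeads(nn.Module):
    """Sketch: an action selection policy network whose intermediate output
    also feeds an auxiliary control head and a reward prediction head."""
    def __init__(self, obs_dim, hidden_dim, num_actions, num_aux_actions):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.policy_head = nn.Linear(hidden_dim, num_actions)           # main policy output
        self.aux_control_head = nn.Linear(hidden_dim, num_aux_actions)  # auxiliary control task
        self.reward_head = nn.Linear(hidden_dim, 1)                     # predicted reward

    def forward(self, obs):
        features = self.encoder(obs)     # intermediate output shared by all heads
        return (self.policy_head(features),
                self.aux_control_head(features),
                self.reward_head(features))

policy_logits, aux_logits, predicted_reward = ActorWithAuxiliaryHeads(
    obs_dim=16, hidden_dim=64, num_actions=4, num_aux_actions=8)(torch.randn(2, 16))
```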
-
Patent number: 11941527
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. A method includes: training a neural network having a plurality of network parameters to perform a particular neural network task and to determine trained values of the network parameters using an iterative training process having a plurality of hyperparameters, the method comprising: maintaining a plurality of candidate neural networks and, for each of the candidate neural networks, data specifying: (i) respective values of the network parameters for the candidate neural network, (ii) respective values of the hyperparameters for the candidate neural network, and (iii) a quality measure that measures a performance of the candidate neural network on the particular neural network task; and for each of the plurality of candidate neural networks, repeatedly performing additional training operations.
Type: Grant
Filed: March 13, 2023
Date of Patent: March 26, 2024
Assignee: DeepMind Technologies Limited
Inventors: Maxwell Elliot Jaderberg, Wojciech Czarnecki, Timothy Frederick Goldie Green, Valentin Clement Dalibard
-
Patent number: 11907821
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model. A method includes: maintaining a plurality of training sessions; assigning, to each worker of one or more workers, a respective training session of the plurality of training sessions; repeatedly performing operations until meeting one or more termination criteria, the operations comprising: receiving an updated training session from a respective worker of the one or more workers, selecting a second training session, selecting, based on comparing the updated training session and the second training session using a fitness evaluation function, either the updated training session or the second training session as a parent training session, generating a child training session from the selected parent training session, and assigning the child training session to an available worker, and selecting a candidate model to be a trained model for the machine learning model.
Type: Grant
Filed: September 27, 2019
Date of Patent: February 20, 2024
Assignee: DeepMind Technologies Limited
Inventors: Ang Li, Valentin Clement Dalibard, David Budden, Ola Spyra, Maxwell Elliot Jaderberg, Timothy James Alexander Harley, Sagi Perel, Chenjie Gu, Pramod Gupta
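The method here distributes training sessions across workers, compares an updated session against a second session with a fitness evaluation function, and spawns a mutated child from the fitter parent. A simplified single-process sketch of that loop follows; `run_session`, `fitness`, and `mutate` are hypothetical callables standing in for the worker training, evaluation, and hyperparameter-perturbation steps, and the generation-based scheduling replaces true asynchronous workers.

```python
import copy
import random

def population_based_search(initial_sessions, workers, run_session, fitness,
                            mutate, num_generations):
    """Sketch of the loop in the abstract: a worker returns an updated training
    session, it is compared with a second session, the fitter one becomes the
    parent, and a mutated child session is assigned to an available worker."""
    sessions = list(initial_sessions)
    for _ in range(num_generations):
        for worker in workers:
            updated = run_session(worker, random.choice(sessions))  # worker trains a session
            second = random.choice(sessions)                        # select a second session
            parent = updated if fitness(updated) >= fitness(second) else second
            child = mutate(copy.deepcopy(parent))   # e.g. perturb the parent's hyperparameters
            sessions.append(child)                  # child handled by a worker next round
    # Select a candidate model from the best session as the trained model.
    return max(sessions, key=fitness)
```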
-
Patent number: 11842261
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning. One of the methods includes selecting an action to be performed by the agent using both a slow updating recurrent neural network and a fast updating recurrent neural network that receives a fast updating input that includes the hidden state of the slow updating recurrent neural network.
Type: Grant
Filed: December 14, 2020
Date of Patent: December 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Iain Robert Dunning, Wojciech Czarnecki, Maxwell Elliot Jaderberg
-
Patent number: 11842281
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
Type: Grant
Filed: February 24, 2021
Date of Patent: December 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
-
Publication number: 20230281445
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. A method includes: training a neural network having a plurality of network parameters to perform a particular neural network task and to determine trained values of the network parameters using an iterative training process having a plurality of hyperparameters, the method comprising: maintaining a plurality of candidate neural networks and, for each of the candidate neural networks, data specifying: (i) respective values of the network parameters for the candidate neural network, (ii) respective values of the hyperparameters for the candidate neural network, and (iii) a quality measure that measures a performance of the candidate neural network on the particular neural network task; and for each of the plurality of candidate neural networks, repeatedly performing additional training operations.
Type: Application
Filed: March 13, 2023
Publication date: September 7, 2023
Inventors: Maxwell Elliot Jaderberg, Wojciech Czarnecki, Timothy Frederick Goldie Green, Valentin Clement Dalibard
-
Patent number: 11734572
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using an image processing neural network system that includes a spatial transformer module. One of the methods includes receiving an input feature map derived from the one or more input images, and applying a spatial transformation to the input feature map to generate a transformed feature map, comprising: processing the input feature map to generate spatial transformation parameters for the spatial transformation, and sampling from the input feature map in accordance with the spatial transformation parameters to generate the transformed feature map.
Type: Grant
Filed: August 17, 2020
Date of Patent: August 22, 2023
Assignee: DeepMind Technologies Limited
Inventors: Maxwell Elliot Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
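The spatial transformer module described here predicts spatial transformation parameters from an input feature map and then samples the feature map according to those parameters. A PyTorch sketch using an affine transformation is shown below; the localisation network architecture and identity initialisation are illustrative assumptions rather than details taken from the claims.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Sketch of a spatial transformer module: a small network predicts affine
    transformation parameters from the input feature map, and the feature map
    is then sampled according to those parameters."""
    def __init__(self, channels):
        super().__init__()
        self.localisation = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(channels * 16, 6))
        # Initialise to the identity transformation.
        self.localisation[-1].weight.data.zero_()
        self.localisation[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, feature_map):
        theta = self.localisation(feature_map).view(-1, 2, 3)   # spatial transformation parameters
        grid = F.affine_grid(theta, feature_map.size(), align_corners=False)
        return F.grid_sample(feature_map, grid, align_corners=False)  # transformed feature map

x = torch.randn(1, 8, 32, 32)
print(SpatialTransformer(channels=8)(x).shape)  # torch.Size([1, 8, 32, 32])
```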
-
Publication number: 20230244936
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Application
Filed: April 6, 2023
Publication date: August 3, 2023
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
-
Patent number: 11715009
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network including a first subnetwork followed by a second subnetwork on training inputs by optimizing an objective function. In one aspect, a method includes processing a training input using the neural network to generate a training model output, including processing a subnetwork input for the training input using the first subnetwork to generate a subnetwork activation for the training input in accordance with current values of parameters of the first subnetwork, and providing the subnetwork activation as input to the second subnetwork; determining a synthetic gradient of the objective function for the first subnetwork by processing the subnetwork activation using a synthetic gradient model in accordance with current values of parameters of the synthetic gradient model; and updating the current values of the parameters of the first subnetwork using the synthetic gradient.
Type: Grant
Filed: May 19, 2017
Date of Patent: August 1, 2023
Assignee: DeepMind Technologies Limited
Inventors: Oriol Vinyals, Alexander Benjamin Graves, Wojciech Czarnecki, Koray Kavukcuoglu, Simon Osindero, Maxwell Elliot Jaderberg
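Here a synthetic gradient model predicts the gradient of the objective with respect to a subnetwork activation, so the first subnetwork can be updated without waiting for backpropagation through the second subnetwork. A toy PyTorch sketch of that update follows; the linear subnetworks and synthetic gradient model are stand-ins, and the step that trains the synthetic gradient model to match the true gradient is omitted.

```python
import torch
from torch import nn

# Toy sketch of a synthetic-gradient update for the first of two subnetworks.
first = nn.Linear(16, 32)                  # first subnetwork
second = nn.Linear(32, 1)                  # second subnetwork
synthetic_grad_model = nn.Linear(32, 32)   # predicts dObjective/dActivation from the activation
opt_first = torch.optim.SGD(first.parameters(), lr=0.1)

x = torch.randn(4, 16)
activation = first(x)                       # subnetwork activation
prediction = second(activation.detach())    # second subnetwork consumes the activation separately

# Update the first subnetwork with the synthetic gradient instead of waiting
# for the true gradient to be backpropagated through the second subnetwork.
synthetic_grad = synthetic_grad_model(activation.detach())
opt_first.zero_grad()
activation.backward(gradient=synthetic_grad)
opt_first.step()
```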
-
Patent number: 11627165
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Grant
Filed: January 24, 2020
Date of Patent: April 11, 2023
Assignee: DeepMind Technologies Limited
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
-
Patent number: 11604985
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. A method includes: training a neural network having multiple network parameters to perform a particular neural network task and to determine trained values of the network parameters using an iterative training process having multiple hyperparameters, the method includes: maintaining multiple candidate neural networks and, for each of the multiple candidate neural networks, data specifying: (i) respective values of network parameters for the candidate neural network, (ii) respective values of hyperparameters for the candidate neural network, and (iii) a quality measure that measures a performance of the candidate neural network on the particular neural network task; and for each of the multiple candidate neural networks, repeatedly performing additional training operations.
Type: Grant
Filed: November 22, 2018
Date of Patent: March 14, 2023
Assignee: DeepMind Technologies Limited
Inventors: Maxwell Elliot Jaderberg, Wojciech Czarnecki, Timothy Frederick Goldie Green, Valentin Clement Dalibard
-
Publication number: 20220366218
Abstract: A system including an attention neural network that is configured to receive an input sequence and to process the input sequence to generate an output is described. The attention neural network includes: an attention block configured to receive a query input, a key input, and a value input that are derived from an attention block input. The attention block includes an attention neural network layer configured to: receive an attention layer input derived from the query input, the key input, and the value input, and apply an attention mechanism to the query input, the key input, and the value input to generate an attention layer output for the attention neural network layer; and a gating neural network layer configured to apply a gating mechanism to the attention block input and the attention layer output of the attention neural network layer to generate a gated attention output.
Type: Application
Filed: September 7, 2020
Publication date: November 17, 2022
Inventors: Emilio Parisotto, Hasuk Song, Jack William Rae, Siddhant Madhu Jayakumar, Maxwell Elliot Jaderberg, Razvan Pascanu, Caglar Gulcehre