Patents by Inventor Charles Blundell

Charles Blundell has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046106
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 8, 2024
    Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha
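
A minimal numpy sketch of the path-routing idea in the abstract for publication 20240046106 above: a task-specific path names, for each layer of the super network, the proper subset of modules that is active, and only those modules process the input before a task-specific output head. The linear modules, the summation of active-module outputs, and the dictionary path representation are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_LAYERS, MODULES_PER_LAYER, DIM = 3, 4, 8

# Each layer of the super network holds several modular sub-networks
# (here: single linear maps, purely for illustration).
super_network = [
    [rng.normal(size=(DIM, DIM)) for _ in range(MODULES_PER_LAYER)]
    for _ in range(NUM_LAYERS)
]

# Task-specific output heads.
output_heads = {"task_a": rng.normal(size=(DIM, 2)),
                "task_b": rng.normal(size=(DIM, 5))}

# A path names, for every layer, the proper subset of modules active for the task.
paths = {"task_a": [[0, 2], [1], [0, 3]],
         "task_b": [[1, 3], [0, 2], [2]]}

def run_super_network(x, task):
    """Process x using only the modules the task's path marks as active."""
    h = x
    for layer, active in zip(super_network, paths[task]):
        # Combine the outputs of the active modules (a sum is one simple choice).
        h = np.tanh(sum(h @ layer[m] for m in active))
    return h @ output_heads[task]        # task-specific output layer

x = rng.normal(size=(DIM,))
print(run_super_network(x, "task_a").shape)   # (2,)
print(run_super_network(x, "task_b").shape)   # (5,)
```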
  • Publication number: 20240028866
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
    Type: Application
    Filed: June 13, 2023
    Publication date: January 25, 2024
    Inventors: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
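
A minimal sketch of the reward-mixing step in the abstract for publication 20240028866 above: an exploration importance factor is fed to the policy alongside the observation, and the overall reward combines the task reward with an exploration reward weighted by that factor. The linear toy policy and the additive combination are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS, OBS_DIM = 4, 6

# Toy "action selection network": a linear map over [observation, beta].
policy_weights = rng.normal(size=(OBS_DIM + 1, NUM_ACTIONS))

def select_action(observation, beta):
    """Condition the action scores on the exploration importance factor beta."""
    net_input = np.concatenate([observation, [beta]])
    logits = net_input @ policy_weights
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return rng.choice(NUM_ACTIONS, p=probs)

def overall_reward(task_reward, exploration_reward, beta):
    """Combine task and exploration rewards; beta sets the exploration weight."""
    return task_reward + beta * exploration_reward

obs = rng.normal(size=(OBS_DIM,))
print(select_action(obs, beta=0.3),
      overall_reward(task_reward=1.0, exploration_reward=0.5, beta=0.3))
```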
  • Patent number: 11836630
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: December 5, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Meire Fortunato, Charles Blundell, Oriol Vinyals
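
A minimal sketch of the training loop in the abstract for patent 11836630 above: each network weight carries a set of distribution parameters (here a mean and a log standard deviation), a current training value is drawn from that posterior, and the distribution parameters are updated from the resulting gradient. The Gaussian posterior, the reparameterisation w = mu + exp(log_sigma) * eps, and the squared-error loss are illustrative assumptions; the abstract does not fix these choices.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, OUT_DIM, LR = 3, 1, 0.05

# Distribution parameters: one mean and one log-std per network weight.
mu = np.zeros((IN_DIM, OUT_DIM))
log_sigma = np.full((IN_DIM, OUT_DIM), -2.0)

x = rng.normal(size=(16, IN_DIM))
y = x @ np.array([[1.0], [-2.0], [0.5]])      # toy regression targets

for _ in range(200):
    eps = rng.normal(size=mu.shape)
    w = mu + np.exp(log_sigma) * eps          # current training value of each weight
    err = x @ w - y
    grad_w = x.T @ err / len(x)               # temporary gradient w.r.t. the weights
    # Update the distribution parameters from the weight gradient.
    mu -= LR * grad_w
    log_sigma -= LR * grad_w * eps * np.exp(log_sigma)

print(mu.ravel())   # trained values read off the updated distribution parameters
```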
  • Publication number: 20230334288
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Application
    Filed: June 16, 2023
    Publication date: October 19, 2023
    Inventors: Charles Blundell, Oriol Vinyals
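
A minimal sketch of the attention-based classification in the abstract for publication 20230334288 above: attention weights over the comparison set are computed from similarities to the new example, and the label scores are the attention-weighted sum of the comparison examples' label vectors. Cosine similarity followed by a softmax is one common attention mechanism and an assumption here; the abstract does not mandate it.

```python
import numpy as np

def classify(new_example, comparison_examples, label_vectors):
    """Return a score per label for new_example."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    # Attention weight per comparison example, normalised with a softmax.
    sims = norm(comparison_examples) @ norm(new_example)
    weights = np.exp(sims - sims.max()); weights /= weights.sum()
    # Label scores: attention-weighted combination of the label vectors.
    return weights @ label_vectors

rng = np.random.default_rng(0)
comparison = rng.normal(size=(5, 4))           # 5 comparison examples
labels = np.eye(3)[[0, 1, 2, 0, 1]]            # one-hot label vectors over 3 labels
print(classify(rng.normal(size=4), comparison, labels))
```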
  • Patent number: 11790238
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: October 17, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha
  • Patent number: 11720796
    Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the plurality of actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: August 8, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Benigno Uria-Martínez, Alexander Pritzel, Charles Blundell, Adrià Puigdomènech Badia
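
A minimal sketch of the episodic-memory lookup in the abstract for patent 11720796 above: a per-action memory of (key embedding, return estimate) pairs, a p-nearest-neighbour lookup for the current key, and a Q value formed from the retrieved return estimates. The random stand-in embedding network, the Euclidean distance measure, and the plain average over neighbours are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS, OBS_DIM, KEY_DIM, P = 3, 5, 4, 2

embedding_weights = rng.normal(size=(OBS_DIM, KEY_DIM))   # stand-in embedding network

# Episodic memory per action: stored key embeddings and their return estimates.
memory = {a: (rng.normal(size=(20, KEY_DIM)), rng.normal(size=20))
          for a in range(NUM_ACTIONS)}

def q_value(current_key, action):
    keys, returns = memory[action]
    dists = np.linalg.norm(keys - current_key, axis=1)    # distance measure
    nearest = np.argsort(dists)[:P]                       # p nearest key embeddings
    return returns[nearest].mean()                        # Q value from their returns

observation = rng.normal(size=(OBS_DIM,))
current_key = observation @ embedding_weights             # current key embedding
q_values = [q_value(current_key, a) for a in range(NUM_ACTIONS)]
print(int(np.argmax(q_values)))                           # action selected via the Q values
```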
  • Patent number: 11714993
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Oriol Vinyals
  • Patent number: 11714990
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
  • Publication number: 20230124261
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a spatial embedding neural network that is configured to process data characterizing motion of an agent that is interacting with an environment to generate spatial embeddings. In one aspect, a method comprises: processing data characterizing the motion of the agent in the environment at the current time step using a spatial embedding neural network to generate a current spatial embedding for the current time step; determining a predicted score and a target score for each of a plurality of slots in an external memory, wherein each slot stores: (i) a representation of an observation characterizing a state of the environment, and (ii) a spatial embedding; and determining an update to values of the set of spatial embedding neural network parameters based on an error between the predicted scores and the target scores.
    Type: Application
    Filed: May 12, 2021
    Publication date: April 20, 2023
    Inventors: Benigno Uria-Martínez, Andrea Banino, Borja Ibarz Gabardos, Vinicius Zambaldi, Charles Blundell
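
A minimal sketch of the training signal in the abstract for publication 20230124261 above: a spatial embedding is produced for the current time step, a predicted and a target score are computed for every memory slot, and the embedding parameters are updated from the error between the two sets of scores. Using dot products for the predicted scores, observation similarity for the target scores, and a squared error are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
MOTION_DIM, EMB_DIM, OBS_DIM, NUM_SLOTS, LR = 6, 4, 8, 10, 0.01

spatial_weights = rng.normal(size=(MOTION_DIM, EMB_DIM))   # embedding-network parameters

# External memory: each slot stores an observation representation and a spatial embedding.
slot_obs = rng.normal(size=(NUM_SLOTS, OBS_DIM))
slot_emb = rng.normal(size=(NUM_SLOTS, EMB_DIM))

motion = rng.normal(size=(MOTION_DIM,))      # data characterising the agent's motion
current_obs = rng.normal(size=(OBS_DIM,))

current_emb = motion @ spatial_weights       # current spatial embedding
predicted = slot_emb @ current_emb           # predicted score per slot
target = slot_obs @ current_obs              # target score per slot

error = predicted - target
loss = 0.5 * np.sum(error ** 2)
# Gradient of the error w.r.t. the embedding-network parameters.
grad = np.outer(motion, slot_emb.T @ error)
spatial_weights -= LR * grad
print(loss)
```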
  • Publication number: 20230059004
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning with adaptive return computation schemes. In one aspect, a method includes: maintaining data specifying a policy for selecting between multiple different return computation schemes, each return computation scheme assigning a different importance to exploring the environment while performing an episode of a task; selecting, using the policy, a return computation scheme from the multiple different return computation schemes; controlling an agent to perform the episode of the task to maximize a return computed according to the selected return computation scheme; identifying rewards that were generated as a result of the agent performing the episode of the task; and updating, using the identified rewards, the policy for selecting between multiple different return computation schemes.
    Type: Application
    Filed: February 8, 2021
    Publication date: February 23, 2023
    Inventors: Adrià Puigdomènech Badia, Bilal Piot, Pablo Sprechmann, Steven James Kapturowski, Alex Vitvitskyi, Zhaohan Guo, Charles Blundell
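
A minimal sketch of the scheme-selection loop in the abstract for publication 20230059004 above: a simple bandit-style policy picks one of several return computation schemes (here pairs of exploration weight and discount), an episode is run under that scheme, and the policy is updated from the rewards the episode produced. The epsilon-greedy bandit and the toy episode simulator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each scheme assigns a different importance to exploration.
schemes = [(0.0, 0.99), (0.1, 0.99), (0.3, 0.97)]   # (beta, gamma) pairs
value_estimates = np.zeros(len(schemes))
counts = np.zeros(len(schemes))
EPSILON = 0.2

def run_episode(beta, gamma):
    """Stand-in for controlling the agent through one episode of the task."""
    rewards = rng.normal(loc=1.0 - beta, scale=0.1, size=20)
    return sum(g * r for g, r in zip(gamma ** np.arange(20), rewards))

for episode in range(100):
    # Policy for selecting between the return computation schemes.
    if rng.random() < EPSILON:
        k = rng.integers(len(schemes))
    else:
        k = int(np.argmax(value_estimates))
    episode_return = run_episode(*schemes[k])
    # Update the selection policy using the rewards from this episode.
    counts[k] += 1
    value_estimates[k] += (episode_return - value_estimates[k]) / counts[k]

print(value_estimates)
```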
  • Patent number: 11562209
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for content recommendation using neural networks. In one aspect, a method includes: receiving context information for an action recommendation from multiple possible actions; processing the context information using a neural network that includes Bayesian neural network layers to generate, for each of the actions, one or more parameters of a distribution over possible action scores for the action, where each parameter for each Bayesian layer is associated with data representing a probability distribution over multiple possible current values for the parameter; for each parameter of each Bayesian neural network layer, selecting the current value for the parameter using the data representing the probability distribution over possible current values for the parameter; and selecting an action from multiple possible actions using the parameters of the distributions over the possible action scores for the action.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: January 24, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Julien Robert Michel Cornebise
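
A minimal sketch of the sampling-based recommendation in the abstract for patent 11562209 above: every parameter of a Bayesian layer carries a probability distribution over its possible values, a current value is drawn per parameter, and the resulting network scores the candidate actions. The single linear Bayesian layer, the Gaussian parameter distributions, and greedy selection over the sampled scores are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
CONTEXT_DIM, NUM_ACTIONS = 5, 4

# Per-parameter probability distributions (mean and std of a Gaussian).
weight_mean = rng.normal(size=(CONTEXT_DIM, NUM_ACTIONS))
weight_std = np.full((CONTEXT_DIM, NUM_ACTIONS), 0.1)

def recommend(context):
    # Select a current value for every layer parameter from its distribution.
    sampled_weights = rng.normal(weight_mean, weight_std)
    # Score each possible action for this context using the sampled network.
    action_scores = context @ sampled_weights
    return int(np.argmax(action_scores))

print(recommend(rng.normal(size=(CONTEXT_DIM,))))
```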
  • Publication number: 20220383074
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing persistent message passing using graph neural networks.
    Type: Application
    Filed: May 31, 2022
    Publication date: December 1, 2022
    Inventors: Heiko Strathmann, Mohammadamin Barekatain, Charles Blundell, Petar Velickovic
  • Patent number: 11423300
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a system output using a remembered value of a neural network hidden state. In one aspect, a system comprises an external memory that maintains context experience tuples respectively comprising: (i) a key embedding of context data, and (ii) a value of a hidden state of a neural network at the respective previous time step. The neural network is configured to receive a system input and a remembered value of the hidden state of the neural network and to generate a system output. The system comprises a memory interface subsystem that is configured to determine a key embedding for current context data, determine a remembered value of the hidden state of the neural network based on the key embedding, and provide the remembered value of the hidden state as an input to the neural network.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: August 23, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Samuel Ritter, Xiao Jing Wang, Siddhant Jayakumar, Razvan Pascanu, Charles Blundell, Matthew Botvinick
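
A minimal sketch of the memory interface in the abstract for patent 11423300 above: an external memory of (context key embedding, stored hidden state) tuples, a nearest-key lookup for the current context, and a network step that receives both the system input and the remembered hidden state. The random key embedding, the single-nearest-neighbour retrieval, and the tanh recurrence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
CTX_DIM, KEY_DIM, HIDDEN_DIM, IN_DIM = 6, 4, 8, 3

key_weights = rng.normal(size=(CTX_DIM, KEY_DIM))        # context -> key embedding
memory_keys = rng.normal(size=(50, KEY_DIM))             # stored key embeddings
memory_states = rng.normal(size=(50, HIDDEN_DIM))        # stored hidden-state values

core_weights = rng.normal(size=(IN_DIM + HIDDEN_DIM, HIDDEN_DIM))

def remembered_state(context):
    """Memory interface: key the current context, return the closest stored state."""
    key = context @ key_weights
    nearest = np.argmin(np.linalg.norm(memory_keys - key, axis=1))
    return memory_states[nearest]

def step(system_input, context):
    """The network receives the system input plus the remembered hidden state."""
    h = remembered_state(context)
    return np.tanh(np.concatenate([system_input, h]) @ core_weights)

print(step(rng.normal(size=(IN_DIM,)), rng.normal(size=(CTX_DIM,))).shape)
```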
  • Publication number: 20220253698
    Abstract: A neural network based memory system with external memory for storing representations of knowledge items. The memory can be used to retrieve indirectly related knowledge items by recirculating queries, and is useful for relational reasoning. Implementations of the system control how many times queries are recirculated, and hence the degree of relational reasoning, to minimize computation.
    Type: Application
    Filed: May 22, 2020
    Publication date: August 11, 2022
    Inventors: Andrea Banino, Charles Blundell, Adrià Puigdomènech Badia, Raphael Koster, Sudarshan Kumaran
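
A minimal sketch of query recirculation over an external memory, as described in the abstract for publication 20220253698 above: a query attends over stored knowledge-item representations, the retrieved content is folded back into the query, and the loop repeats so indirectly related items can be reached. Softmax attention and the fixed hop count are assumptions; the abstract notes the system itself controls how many times queries are recirculated.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ITEMS, DIM, HOPS = 12, 8, 3

memory = rng.normal(size=(NUM_ITEMS, DIM))   # representations of knowledge items

def recirculate(query, hops=HOPS):
    for _ in range(hops):
        # Attend over the memory with the current query.
        scores = memory @ query
        attn = np.exp(scores - scores.max()); attn /= attn.sum()
        retrieved = attn @ memory
        # Fold the retrieved content back into the query for the next hop.
        query = (query + retrieved) / np.linalg.norm(query + retrieved)
    return query

print(recirculate(rng.normal(size=(DIM,))).shape)   # (8,)
```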
  • Publication number: 20210383228
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating prediction outputs characterizing a set of entities. In one aspect, a method comprises: obtaining data defining a graph, comprising: (i) a set of nodes, wherein each node represents a respective entity from the set of entities, (ii) a current set of edges, wherein each edge connects a pair of nodes, and (iii) a respective current embedding of each node; at each of a plurality of time steps: updating the respective current embedding of each node, comprising processing data defining the graph using a graph neural network; and updating the current set of edges based at least in part on the updated embeddings of the nodes; and at one or more of the plurality of time steps: generating a prediction output characterizing the set of entities based on the current embeddings of the nodes.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Petar Velickovic, Charles Blundell, Oriol Vinyals, Razvan Pascanu, Lars Buesing, Matthew Overlan
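
A minimal sketch of the loop in the abstract for publication 20210383228 above: node embeddings are updated with a message-passing step over the current edge set, the edge set is then updated from the new embeddings, and a prediction is read out from the embeddings. Mean aggregation of neighbour messages, thresholded-similarity edges, and a summed readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_NODES, DIM, STEPS, THRESHOLD = 6, 4, 3, 0.0

embeddings = rng.normal(size=(NUM_NODES, DIM))                       # current node embeddings
adjacency = np.ones((NUM_NODES, NUM_NODES)) - np.eye(NUM_NODES)      # current edge set
message_weights = rng.normal(size=(DIM, DIM))

for _ in range(STEPS):
    # Update node embeddings with one graph-network message-passing step.
    degrees = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adjacency @ embeddings) / degrees
    embeddings = np.tanh(embeddings + messages @ message_weights)
    # Update the edge set from the updated embeddings.
    similarity = embeddings @ embeddings.T
    adjacency = (similarity > THRESHOLD).astype(float)
    np.fill_diagonal(adjacency, 0.0)

prediction = embeddings.sum(axis=0)   # prediction output characterising the entities
print(prediction)
```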
  • Publication number: 20210224578
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Application
    Filed: April 6, 2021
    Publication date: July 22, 2021
    Inventors: Charles Blundell, Oriol Vinyals
  • Patent number: 10997472
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: May 4, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Oriol Vinyals
  • Publication number: 20210065012
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 4, 2021
    Inventors: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
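
A minimal sketch of the noisy-layer evaluation in the abstract for publication 20210065012 above: each layer parameter has a corresponding noise parameter, a fresh noise value is drawn, and the noisy current value used in the forward pass is the current value plus the noise parameter times the noise value. The single noisy linear layer and greedy action selection are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, NUM_ACTIONS = 5, 4

layer_values = rng.normal(size=(OBS_DIM, NUM_ACTIONS))         # current parameter values
noise_params = np.full((OBS_DIM, NUM_ACTIONS), 0.2)            # corresponding noise parameters

def select_action(observation):
    noise_values = rng.normal(size=layer_values.shape)         # per-parameter noise value
    noisy_values = layer_values + noise_params * noise_values  # noisy current values
    network_output = observation @ noisy_values
    return int(np.argmax(network_output))                      # action chosen from the output

print(select_action(rng.normal(size=(OBS_DIM,))))
```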
  • Publication number: 20210004689
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.
    Type: Application
    Filed: September 17, 2020
    Publication date: January 7, 2021
    Inventors: Meire Fortunato, Charles Blundell, Oriol Vinyals
  • Publication number: 20200380372
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
    Type: Application
    Filed: August 17, 2020
    Publication date: December 3, 2020
    Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha