Patents Assigned to DeepMind Technologies
-
Patent number: 11836620
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system, as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ, and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
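The meta-gradient idea in the abstract can be sketched in a toy form. This is illustrative only, not the patented method: here the return's discount `gamma` plays the role of the return parameter η, the "policy" is a single scalar `theta`, and a finite-difference estimate stands in for the analytic meta-gradient of the meta-objective J′.

```python
# Toy meta-gradient sketch (illustrative, not the patented method):
# tune the return parameter gamma so that the policy update it induces
# improves a fixed meta-objective.

def inner_return(rewards, gamma):
    # G = sum_t gamma^t * r_t, parameterised by the return parameter gamma
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def policy_update(theta, rewards, gamma, lr=0.1):
    # stand-in for a policy-gradient step: nudge theta toward the return
    return theta + lr * (inner_return(rewards, gamma) - theta)

def meta_gradient(theta, rewards, gamma, meta_obj, eps=1e-4):
    # d meta_obj(theta') / d gamma, estimated by central differences
    up = meta_obj(policy_update(theta, rewards, gamma + eps))
    dn = meta_obj(policy_update(theta, rewards, gamma - eps))
    return (up - dn) / (2 * eps)

rewards = [1.0, 0.0, 2.0]
meta_obj = lambda th: -(th - 2.0) ** 2   # best policies sit at theta = 2
gamma, theta = 0.5, 0.0
g = meta_gradient(theta, rewards, gamma, meta_obj)  # positive: raise gamma
gamma += 0.05 * g
theta = policy_update(theta, rewards, gamma)
```

The key point the sketch shows is that the return parameter receives a gradient *through* the policy update it produced.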
-
Patent number: 11836599
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
Type: Grant
Filed: May 26, 2021
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
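The selection step described above can be sketched as follows. All names here are illustrative (the patent does not specify these functions): each "model" maps a (state, slate) pair to a predicted power usage effectiveness (PUE), the ensemble's scores are averaged, and the slate with the best predicted efficiency wins.

```python
import statistics

# Hypothetical sketch of slate selection by an ensemble: lower
# predicted PUE means a more efficient data center.

def select_slate(state, slates, ensemble):
    def predicted_pue(slate):
        # average the ensemble's efficiency scores for this slate
        return statistics.mean(model(state, slate) for model in ensemble)
    return min(slates, key=predicted_pue)

# toy "models": PUE rises with total fan speed, plus a per-model bias
ensemble = [
    lambda s, a, b=b: 1.1 + 0.01 * sum(a) + b
    for b in (0.00, 0.02, -0.01)
]
slates = [(40, 60), (30, 50), (55, 45)]
best = select_slate(state={"load_kw": 750}, slates=slates, ensemble=ensemble)
```

Averaging over an ensemble, rather than trusting one model, makes the predicted score less sensitive to any single model's error.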
-
Patent number: 11836625
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network. One of the methods includes receiving an observation characterizing a current state of the environment; determining a target network output for the observation by performing a look ahead search of possible future states of the environment starting from the current state until the environment reaches a possible future state that satisfies one or more termination criteria, wherein the look ahead search is guided by the neural network in accordance with current values of the network parameters; selecting an action to be performed by the agent in response to the observation using the target network output generated by performing the look ahead search; and storing, in an exploration history data store, the target network output in association with the observation for use in updating the current values of the network parameters.
Type: Grant
Filed: September 19, 2022
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Karen Simonyan, David Silver, Julian Schrittwieser
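A drastically simplified look-ahead search can illustrate the control flow. This sketch is far simpler than the patented procedure (which uses a neural-network-guided tree search): it expands states to a fixed depth or until a termination criterion, scores leaves with a value function, and uses the search result as the action target.

```python
# Schematic look-ahead search (a simplification, not the patented
# method): back up the best leaf value reachable under each action.

def lookahead(state, step_fn, actions, value_fn, is_terminal, depth):
    if is_terminal(state) or depth == 0:
        return value_fn(state)
    return max(lookahead(step_fn(state, a), step_fn, actions,
                         value_fn, is_terminal, depth - 1)
               for a in actions)

def target_action(state, step_fn, actions, value_fn, is_terminal, depth=3):
    # the search result serves as the training target for the network
    return max(actions, key=lambda a: lookahead(
        step_fn(state, a), step_fn, actions, value_fn, is_terminal, depth - 1))

# toy chain environment: move left or right; value equals the position
step_fn = lambda s, a: s + a
best = target_action(0, step_fn, actions=(-1, +1),
                     value_fn=lambda s: s, is_terminal=lambda s: abs(s) >= 3)
```

Storing such search-derived targets and later regressing the network onto them is what lets the search "teach" the network.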
-
Patent number: 11836596
Abstract: A system including one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to implement a memory and memory-based neural network is described. The memory is configured to store a respective memory vector at each of a plurality of memory locations in the memory. The memory-based neural network is configured to: at each of a plurality of time steps: receive an input; determine an update to the memory, wherein determining the update comprises applying an attention mechanism over the memory vectors in the memory and the received input; update the memory using the determined update to the memory; and generate an output for the current time step using the updated memory.
Type: Grant
Filed: November 30, 2020
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Mike Chrzanowski, Jack William Rae, Ryan Faulkner, Theophane Guillaume Weber, David Nunes Raposo, Adam Anthony Santoro
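The attention-based memory update can be sketched with plain dot-product attention. This is a minimal illustration, not the patented architecture: each memory slot attends over all stored vectors plus the new input, and the attended read is applied as an additive update.

```python
import numpy as np

# Minimal sketch of an attention-driven memory update.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_update(memory, x):
    # memory: (slots, dim); x: (dim,) new input
    keys = np.vstack([memory, x])                    # attend over memory + input
    scores = memory @ keys.T / np.sqrt(memory.shape[1])
    weights = softmax(scores, axis=1)                # one distribution per slot
    update = weights @ keys                          # attended read per slot
    return memory + update                           # apply the additive update

rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
new_memory = attention_update(memory, rng.normal(size=8))
```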
-
Patent number: 11829884
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input that is a sequence to generate a network output. In one aspect, one of the methods includes, for each particular sequence of layer inputs: for each attention layer in the neural network: maintaining episodic memory data; maintaining compressed memory data; receiving a layer input to be processed by the attention layer; and applying an attention mechanism over (i) the compressed representation in the compressed memory data for the layer, (ii) the hidden states in the episodic memory data for the layer, and (iii) the respective hidden state at each of the plurality of input positions in the particular network input to generate a respective activation for each input position in the layer input.
Type: Grant
Filed: September 25, 2020
Date of Patent: November 28, 2023
Assignee: DeepMind Technologies Limited
Inventors: Jack William Rae, Anna Potapenko, Timothy Paul Lillicrap
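The three attention spans named in the abstract can be sketched schematically. This is a sketch only, not the patented system: old states are mean-pooled into a compressed memory (the actual compression function is learned), and attention runs jointly over compressed memory, episodic memory, and the current segment.

```python
import numpy as np

# Schematic of attending over compressed + episodic + current states.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, compressed, episodic, current):
    keys = np.vstack([compressed, episodic, current])
    weights = softmax(keys @ query / np.sqrt(query.size))
    return weights @ keys            # one attended read vector

def compress(old_hiddens, rate=2):
    # simplest stand-in compression: mean-pool groups of `rate` states
    return old_hiddens.reshape(-1, rate, old_hiddens.shape[-1]).mean(axis=1)

evicted = np.arange(32.0).reshape(4, 8)   # states evicted from episodic memory
compressed = compress(evicted)            # (2, 8): half the length to attend over
episodic = np.ones((3, 8))
current = np.zeros((2, 8))
read = attend(np.ones(8), compressed, episodic, current)
```

Compression is the point: old context stays attendable at a fraction of the memory cost.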
-
Patent number: 11830475
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform speech synthesis. One of the methods includes obtaining a training data set for training a first neural network to process a spectral representation of an audio sample and to generate a prediction of the audio sample, wherein, after training, the first neural network obtains spectral representations of audio samples from a second neural network; for a plurality of audio samples in the training data set: generating a ground-truth spectral representation of the audio sample; and processing the ground-truth spectral representation using a third neural network to generate an updated spectral representation of the audio sample; and training the first neural network using the updated spectral representations, wherein the third neural network is configured to generate updated spectral representations that resemble spectral representations generated by the second neural network.
Type: Grant
Filed: June 1, 2022
Date of Patent: November 28, 2023
Assignee: DeepMind Technologies Limited
Inventor: Norman Casagrande
-
Patent number: 11803746
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural programming. One of the methods includes processing a current neural network input using a core recurrent neural network to generate a neural network output; determining, from the neural network output, whether or not to end a currently invoked program and to return to a calling program from the set of programs; determining, from the neural network output, a next program to be called; determining, from the neural network output, contents of arguments to the next program to be called; receiving a representation of a current state of the environment; and generating a next neural network input from an embedding for the next program to be called and the representation of the current state of the environment.
Type: Grant
Filed: April 27, 2020
Date of Patent: October 31, 2023
Assignee: DeepMind Technologies Limited
Inventors: Scott Ellison Reed, Joao Ferdinando Gomes de Freitas
-
Patent number: 11803750
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
Type: Grant
Filed: September 14, 2020
Date of Patent: October 31, 2023
Assignee: DeepMind Technologies Limited
Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
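The "target neural network output" step can be illustrated with the standard bootstrapped actor-critic target. This is a plausible reading rather than the patent's exact formula: for an experience tuple (s, a, r, s'), the target is y = r + γ · Q_target(s', actor_target(s')), with the episode-end case handled separately.

```python
import numpy as np

# Schematic critic target for one experience tuple (sketch only).

def critic_target(reward, next_obs, done, actor_t, critic_t, gamma=0.99):
    if done:
        return reward                      # no bootstrap past a terminal state
    next_action = actor_t(next_obs)        # target actor picks the next action
    return reward + gamma * critic_t(next_obs, next_action)

# toy linear stand-ins for the target networks
actor_t = lambda s: 0.5 * s.sum()
critic_t = lambda s, a: s.mean() + a

y = critic_target(reward=1.0, next_obs=np.array([1.0, 3.0]), done=False,
                  actor_t=actor_t, critic_t=critic_t)
```

The critic is then regressed toward `y`, and the actor is updated to increase the critic's output.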
-
Patent number: 11790238
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
Type: Grant
Filed: August 17, 2020
Date of Patent: October 17, 2023
Assignee: DeepMind Technologies Limited
Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha
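The path-selection mechanism can be sketched concretely. Everything here is illustrative (the patent does not define these helpers): each layer holds several modules, a path names the proper subset active for a task, and only those modules process the input, with their outputs combined by summation.

```python
# Hypothetical sketch of a task-specific path through a super network.

def run_path(x, layers, path):
    for layer, active in zip(layers, path):
        # sum the outputs of only the modules the path marks active
        x = sum(layer[i](x) for i in active)
    return x

layers = [
    [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3],  # layer 0 modules
    [lambda v: v * 10, lambda v: v + 5],                  # layer 1 modules
]
path_task_a = [(0, 1), (0,)]   # modules 0,1 active in layer 0; module 0 in layer 1
out = run_path(2.0, layers, path_task_a)
```

Different tasks get different paths over the same pool of modules, so modules can be shared or kept task-specific.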
-
Patent number: 11790209
Abstract: Methods and systems, including computer programs encoded on computer storage media, for generating data items. A method includes reading a glimpse from a data item using a decoder hidden state vector of a decoder for a preceding time step, providing, as input to an encoder, the glimpse and decoder hidden state vector for the preceding time step for processing, receiving, as output from the encoder, a generated encoder hidden state vector for the time step, generating a decoder input from the generated encoder hidden state vector, providing the decoder input to the decoder for processing, receiving, as output from the decoder, a generated decoder hidden state vector for the time step, generating a neural network output update from the decoder hidden state vector for the time step, and combining the neural network output update with a current neural network output to generate an updated neural network output.
Type: Grant
Filed: July 23, 2021
Date of Patent: October 17, 2023
Assignee: DeepMind Technologies Limited
Inventors: Karol Gregor, Ivo Danihelka
-
Patent number: 11783182
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
Type: Grant
Filed: February 8, 2021
Date of Patent: October 10, 2023
Assignee: DeepMind Technologies Limited
Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
-
Patent number: 11775804
Abstract: Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.
Type: Grant
Filed: March 15, 2021
Date of Patent: October 3, 2023
Assignee: DeepMind Technologies Limited
Inventors: Neil Charles Rabinowitz, Guillaume Desjardins, Andrei-Alexandru Rusu, Koray Kavukcuoglu, Raia Thais Hadsell, Razvan Pascanu, James Kirkpatrick, Hubert Josef Soyer
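The layer-wiring rule in the abstract (layer i of a later column takes input from its own layer i−1 plus layer i−1 of every earlier column) can be sketched as follows. The summation used to combine inputs here is an illustrative choice, not taken from the patent.

```python
# Illustrative sketch of the column structure for sequential tasks.

def run_columns(x, columns):
    prev_acts = []                    # layer outputs of earlier, frozen columns
    for column in columns:
        acts, h = [], x
        for i, layer in enumerate(column):
            # lateral input: layer i-1 outputs of every earlier column
            lateral = sum(p[i - 1] for p in prev_acts) if i > 0 else 0.0
            h = layer(h + lateral)    # own preceding layer + lateral inputs
            acts.append(h)
        prev_acts.append(acts)
    return acts[-1]                   # output of the newest column

columns = [
    [lambda v: v + 1, lambda v: v * 2],   # column for task 1
    [lambda v: v * 3, lambda v: v + 1],   # column for task 2, with laterals
]
out = run_columns(1.0, columns)
```

Because earlier columns are kept and fed forward rather than overwritten, later tasks can reuse earlier features without erasing them.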
-
Patent number: 11775830
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes processing each training input using the neural network and in accordance with the current values of the network parameters to generate a network output for the training input; computing a respective loss for each of the training inputs by evaluating a loss function; identifying, from a plurality of possible perturbations, a maximally non-linear perturbation; and determining an update to the current values of the parameters of the neural network by performing an iteration of a neural network training procedure to decrease the respective losses for the training inputs and to decrease the non-linearity of the loss function for the identified maximally non-linear perturbation.
Type: Grant
Filed: December 12, 2022
Date of Patent: October 3, 2023
Assignee: DeepMind Technologies Limited
Inventors: Chongli Qin, Sven Adrian Gowal, Soham De, Robert Stanforth, James Martens, Krishnamurthy Dvijotham, Dilip Krishnan, Alhussein Fawzi
-
Patent number: 11769057
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Grant
Filed: June 6, 2022
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Patent number: 11769051
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using normalized target outputs. One of the methods includes updating current values of the normalization parameters to account for the target output for the training item; determining a normalized target output for the training item by normalizing the target output for the training item in accordance with the updated normalization parameter values; processing the training item using the neural network to generate a normalized output for the training item in accordance with current values of main parameters of the neural network; determining an error for the training item using the normalized target output and the normalized output; and using the error to adjust the current values of the main parameters of the neural network.
Type: Grant
Filed: June 24, 2021
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventor: Hado Philip van Hasselt
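The two-step order in the abstract (first update the normalization statistics for the new target, then normalize against the updated values) can be sketched with running moment estimates. The exponential-moving-average constants here are illustrative, not from the patent.

```python
import math

# Sketch of adaptively normalized targets: track running first and
# second moments, then train on (y - mean) / std.

class TargetNormalizer:
    def __init__(self, beta=0.1, eps=1e-6):
        self.beta, self.eps = beta, eps
        self.mean, self.sq_mean = 0.0, 1.0

    def update(self, target):
        # first account for the new target in the statistics...
        self.mean += self.beta * (target - self.mean)
        self.sq_mean += self.beta * (target ** 2 - self.sq_mean)

    def normalize(self, target):
        # ...then normalize with the updated values
        var = max(self.sq_mean - self.mean ** 2, self.eps)
        return (target - self.mean) / math.sqrt(var)

norm = TargetNormalizer()
for y in [10.0, 12.0, 8.0, 11.0]:
    norm.update(y)
normalized = norm.normalize(11.0)
```

Keeping the network's outputs in a normalized range avoids the instability of regressing onto targets whose scale drifts during training.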
-
Patent number: 11769049
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment to perform a specified task. One of the methods includes causing the agent to perform a task episode in which the agent attempts to perform the specified task; for each of one or more particular time steps in the sequence: generating a modified reward for the particular time step from (i) the actual reward at the time step and (ii) value predictions at one or more time steps that are more than a threshold number of time steps after the particular time step in the sequence; and training, through reinforcement learning, the neural network system using at least the modified rewards for the particular time steps.
Type: Grant
Filed: September 28, 2020
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventors: Gregory Duncan Wayne, Timothy Paul Lillicrap, Chia-Chun Hung, Joshua Simon Abramson
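One plausible reading of the reward modification (not the patent's exact formula) is: augment the reward at step t with a bonus computed from value predictions at steps more than a threshold number of steps later.

```python
# Sketch of reward modification from far-future value predictions.

def modified_rewards(rewards, values, horizon, weight=0.1):
    out = []
    for t in range(len(rewards)):
        later = values[t + horizon + 1:]      # strictly more than `horizon` ahead
        bonus = weight * sum(later) / len(later) if later else 0.0
        out.append(rewards[t] + bonus)
    return out

rewards = [0.0, 0.0, 1.0, 0.0, 0.0]
values  = [0.1, 0.2, 0.5, 0.8, 1.0]
mod = modified_rewards(rewards, values, horizon=1)
```

Crediting an action with value predicted long after it was taken is a way to bridge long delays between actions and their consequences.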
-
Patent number: 11755879
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for processing and storing inputs for use in a neural network. One of the methods includes receiving input data for storage in a memory system comprising a first set of memory blocks, the memory blocks having an associated order; passing the input data to a highest ordered memory block; for each memory block for which there is a lower ordered memory block: applying a filter function to data currently stored by the memory block to generate filtered data and passing the filtered data to a lower ordered memory block; and for each memory block: combining the data currently stored in the memory block with the data passed to the memory block to generate updated data, and storing the updated data in the memory block.
Type: Grant
Filed: February 11, 2019
Date of Patent: September 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Razvan Pascanu, William Clinton Dabney, Thomas Stepleton
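The cascade of ordered memory blocks can be sketched directly from the steps above: the new input enters the highest-ordered block, each block with a lower-ordered neighbour passes a filtered copy of its current contents down, and every block combines what it held with what it received. The halving filter and list contents here are illustrative stand-ins.

```python
# Illustrative sketch of the ordered memory-block cascade.

def step(blocks, x, filter_fn):
    # data arriving at each block: the input for block 0, filtered
    # contents of the block above for every other block
    passed = [x] + [filter_fn(b) for b in blocks[:-1]]
    # combine stored data with passed data, block by block
    return [b + p for b, p in zip(blocks, passed)]

blocks = [[], [], []]                    # blocks[0] is the highest-ordered block
decay = lambda b: [v * 0.5 for v in b]   # filter halves values moving down
blocks = step(blocks, [1.0], decay)
blocks = step(blocks, [2.0], decay)
```

After two steps the first input has cascaded, attenuated, into the second block while the newest input sits at the top.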
-
Patent number: 11756561
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating discrete latent representations of input audio data. Only the discrete latent representation needs to be transmitted from an encoder system to a decoder system in order for the decoder system to be able to effectively decode, i.e., reconstruct, the input audio data.
Type: Grant
Filed: February 17, 2022
Date of Patent: September 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Cristina Garbacea, Aaron Gerard Antonius van den Oord, Yazhe Li, Sze Chie Lim, Alejandro Luebs, Oriol Vinyals, Thomas Chadwick Walters
-
Patent number: 11741334
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
Type: Grant
Filed: May 22, 2020
Date of Patent: August 29, 2023
Assignee: DeepMind Technologies Limited
Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
-
Patent number: 11734572
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using an image processing neural network system that includes a spatial transformer module. One of the methods includes receiving an input feature map derived from the one or more input images, and applying a spatial transformation to the input feature map to generate a transformed feature map, comprising: processing the input feature map to generate spatial transformation parameters for the spatial transformation, and sampling from the input feature map in accordance with the spatial transformation parameters to generate the transformed feature map.
Type: Grant
Filed: August 17, 2020
Date of Patent: August 22, 2023
Assignee: DeepMind Technologies Limited
Inventors: Maxwell Elliot Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
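The two stages of the spatial transformer (predict transformation parameters, then sample the feature map accordingly) can be shown in a toy form. In this sketch the parameter-predicting network is replaced by a fixed affine matrix, and nearest-neighbour sampling stands in for the differentiable interpolation the full method uses.

```python
import numpy as np

# Toy sketch of affine sampling from a feature map.

def affine_sample(feature_map, theta):
    # theta: 2x3 affine matrix mapping output (x, y) to input coordinates
    H, W = feature_map.shape
    out = np.zeros_like(feature_map)
    for i in range(H):
        for j in range(W):
            x, y = theta @ np.array([j, i, 1.0])
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < H and 0 <= xi < W:     # out-of-range samples stay zero
                out[i, j] = feature_map[yi, xi]
    return out

fmap = np.arange(9, dtype=float).reshape(3, 3)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
shift_x = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # sample one column right
same = affine_sample(fmap, identity)
shifted = affine_sample(fmap, shift_x)
```

Because the real module's sampling is differentiable, gradients flow back into the network that predicts `theta`, so the transformation itself is learned.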