Patents by Inventor Daniel Pieter Wierstra

Daniel Pieter Wierstra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190354868
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
    Type: Application
    Filed: July 30, 2019
    Publication date: November 21, 2019
    Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha
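As a rough illustration of the per-task routing mechanism described in the abstract above, the following Python/NumPy sketch selects, for each layer of a "super network", a proper subset of modules designated active for a task and routes the input only through those modules plus a task-specific output layer. The layer count, module shapes, task names, and the use of simple linear modules are assumptions made for the example, not details taken from the patent.

```python
# Minimal sketch of per-task routing through a "super network" of modular layers.
import numpy as np

rng = np.random.default_rng(0)
NUM_LAYERS, MODULES_PER_LAYER, DIM = 3, 4, 8

# Each layer holds several small modular networks; here each module is a
# single linear map followed by a ReLU, purely for illustration.
super_net = [
    [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(MODULES_PER_LAYER)]
    for _ in range(NUM_LAYERS)
]

# A path designates, per layer, the proper subset of modules that are active
# for a given task (hypothetical task names and module indices).
paths = {
    "task_a": [[0, 2], [1], [0, 3]],
    "task_b": [[1, 3], [0, 2], [2]],
}

# One output layer per task (assumed here to be a single linear readout).
output_layers = {t: rng.standard_normal((DIM, 2)) * 0.1 for t in paths}


def run(task, x):
    """Process x using only the modules the task's path marks as active."""
    h = x
    for layer, active in zip(super_net, paths[task]):
        # Combine (here: average) the outputs of the active modules only.
        h = np.mean([np.maximum(h @ layer[i], 0.0) for i in active], axis=0)
    return h @ output_layers[task]


x = rng.standard_normal(DIM)
print(run("task_a", x))
print(run("task_b", x))
```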
  • Patent number: 10432953
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for compressing images using neural networks. One of the methods includes receiving an image; processing the image using an encoder neural network, wherein the encoder neural network is configured to receive the image and to process the image to generate an output defining values of a first number of latent variables that each represent a feature of the image; generating a compressed representation of the image using the output defining the values of the first number of latent variables; and providing the compressed representation of the image for use in generating a reconstruction of the image.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 1, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Daniel Pieter Wierstra, Karol Gregor, Frederic Olivier Besse
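To make the compression pipeline in the abstract above concrete, the sketch below encodes an image into a small number of latent variables, quantizes them into a compressed representation, and decodes an approximate reconstruction. The network shapes, the plain linear maps, and the uniform quantizer are illustrative assumptions, not the patented method.

```python
# Minimal sketch of latent-variable image compression: encode -> quantize -> decode.
import numpy as np

rng = np.random.default_rng(0)
IMAGE_DIM, NUM_LATENTS = 64, 8        # e.g. an 8x8 grayscale image, 8 latents

# Stand-ins for trained encoder/decoder weights.
W_enc = rng.standard_normal((IMAGE_DIM, NUM_LATENTS)) / np.sqrt(IMAGE_DIM)
W_dec = rng.standard_normal((NUM_LATENTS, IMAGE_DIM)) / np.sqrt(NUM_LATENTS)


def encode(image):
    """Encoder output defining the values of the latent variables."""
    return image @ W_enc


def compress(latents, step=0.1):
    """Compressed representation: coarsely quantized latent values."""
    return np.round(latents / step).astype(np.int32)


def reconstruct(code, step=0.1):
    """Decode the quantized latents back into an approximate image."""
    return (code * step) @ W_dec


image = rng.random(IMAGE_DIM)
code = compress(encode(image))
print("compressed code:", code)
print("reconstruction error:", np.mean((reconstruct(code) - image) ** 2))
```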
  • Publication number: 20190266475
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for environment simulation. In one aspect, a system comprises a recurrent neural network configured to, at each of a plurality of time steps, receive a preceding action for a preceding time step, update a preceding initial hidden state of the recurrent neural network from the preceding time step using the preceding action, update a preceding cell state of the recurrent neural network from the preceding time step using at least the initial hidden state for the time step, and determine a final hidden state for the time step using the cell state for the time step. The system further comprises a decoder neural network configured to receive the final hidden state for the time step and process the final hidden state to generate a predicted observation characterizing a predicted state of the environment at the time step.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 29, 2019
    Inventors: Daniel Pieter Wierstra, Shakir Mohamed, Silvia Chiappa, Sebastien Henri Andre Racaniere
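The recurrent simulation loop in the abstract above can be sketched as follows: at each step the preceding action updates an initial hidden state, the cell state is updated from that hidden state, a final hidden state is derived from the cell state, and a decoder turns it into a predicted observation. The gate-free update rules, the sizes, and the linear decoder are assumptions for the example only.

```python
# Minimal sketch of an action-conditional environment simulation step.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, ACTION, OBS = 16, 4, 6

W_act = rng.standard_normal((ACTION, HIDDEN)) * 0.1   # action -> hidden update
W_cell = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # hidden -> cell update
W_out = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1   # cell -> final hidden
W_dec = rng.standard_normal((HIDDEN, OBS)) * 0.1      # decoder


def step(prev_hidden, prev_cell, prev_action):
    # Update the preceding hidden state using the preceding action.
    initial_hidden = np.tanh(prev_hidden + prev_action @ W_act)
    # Update the preceding cell state using (at least) the initial hidden state.
    cell = prev_cell + np.tanh(initial_hidden @ W_cell)
    # Determine the final hidden state for the step from the cell state.
    final_hidden = np.tanh(cell @ W_out)
    # Decoder: predicted observation characterizing the environment at this step.
    observation = final_hidden @ W_dec
    return final_hidden, cell, observation


hidden, cell = np.zeros(HIDDEN), np.zeros(HIDDEN)
for t in range(3):
    action = np.eye(ACTION)[t % ACTION]            # dummy one-hot actions
    hidden, cell, obs = step(hidden, cell, action)
    print(f"step {t}: predicted observation {obs.round(3)}")
```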
  • Publication number: 20170228637
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory. One of the systems includes a controller neural network that includes a Least Recently Used Access (LRUA) subsystem configured to: maintain a respective usage weight for each of a plurality of locations in the external memory, and for each of the plurality of time steps: generate a respective reading weight for each location using a read key, read data from the locations in accordance with the reading weights, generate a respective writing weight for each of the locations from a respective reading weight from a preceding time step and the respective usage weight for the location, write a write vector to the locations in accordance with the writing weights, and update the respective usage weight from the respective reading weight and the respective writing weight.
    Type: Application
    Filed: December 30, 2016
    Publication date: August 10, 2017
    Inventors: Adam Anthony Santoro, Daniel Pieter Wierstra, Timothy Paul Lillicrap, Sergey Bartunov, Ivo Danihelka
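The read/write cycle of the memory-access scheme described above can be sketched roughly as below: reading weights come from a read key, writing weights blend the previous reading weights with the least-used location, and per-location usage weights are updated from both. The memory size, the cosine-similarity softmax read, and the specific blending and decay are illustrative assumptions rather than the patented formulation.

```python
# Minimal sketch of a least-recently-used-access (LRUA) style memory step.
import numpy as np

rng = np.random.default_rng(0)
LOCATIONS, WIDTH, GAMMA = 5, 4, 0.95    # memory shape and usage decay

memory = rng.standard_normal((LOCATIONS, WIDTH)) * 0.1
usage = np.zeros(LOCATIONS)             # usage weight per location
prev_read = np.zeros(LOCATIONS)         # reading weights from the last step


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def lrua_step(read_key, write_vector):
    global memory, usage, prev_read
    # Reading weights: similarity of the read key to each stored row.
    sims = memory @ read_key / (np.linalg.norm(memory, axis=1)
                                * np.linalg.norm(read_key) + 1e-8)
    read_w = softmax(sims)
    read_out = read_w @ memory                       # read in proportion to weights
    # Writing weights: blend of the previous reading weights and a one-hot
    # vector on the currently least-used location.
    least_used = np.eye(LOCATIONS)[np.argmin(usage)]
    write_w = 0.5 * prev_read + 0.5 * least_used
    memory = memory + np.outer(write_w, write_vector)
    # Usage update from this step's reading and writing weights.
    usage = GAMMA * usage + read_w + write_w
    prev_read = read_w
    return read_out


out = lrua_step(rng.standard_normal(WIDTH), rng.standard_normal(WIDTH))
print("read vector:", out.round(3))
print("usage weights:", usage.round(3))
```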
  • Publication number: 20170230675
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for compressing images using neural networks. One of the methods includes receiving an image; processing the image using an encoder neural network, wherein the encoder neural network is configured to receive the image and to process the image to generate an output defining values of a first number of latent variables that each represent a feature of the image; generating a compressed representation of the image using the output defining the values of the first number of latent variables; and providing the compressed representation of the image for use in generating a reconstruction of the image.
    Type: Application
    Filed: December 30, 2016
    Publication date: August 10, 2017
    Inventors: Daniel Pieter Wierstra, Karol Gregor, Frederic Olivier Besse
  • Publication number: 20170024643
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
    Type: Application
    Filed: July 22, 2016
    Publication date: January 26, 2017
    Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
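A compact illustration of the actor-critic update described in the abstract above: for each experience tuple the critic is fit to a bootstrapped target, and the actor is then nudged up the critic's action-value gradient. The linear actor and critic, the target-network copies, and all hyperparameters are assumptions chosen to keep the sketch small, not details from the patent.

```python
# Minimal sketch of a DDPG-style actor-critic update over a minibatch.
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, GAMMA, LR = 3, 2, 0.99, 0.01

actor = rng.standard_normal((OBS, ACT)) * 0.1       # action = obs @ actor
critic = rng.standard_normal(OBS + ACT) * 0.1       # Q = concat(obs, act) @ critic
actor_target, critic_target = actor.copy(), critic.copy()


def q(weights, obs, act):
    return np.concatenate([obs, act]) @ weights


def update(minibatch):
    global actor, critic
    for obs, act, reward, next_obs in minibatch:
        # Target output: reward plus the discounted target-critic value of the
        # target-actor's action at the next observation.
        next_act = next_obs @ actor_target
        target = reward + GAMMA * q(critic_target, next_obs, next_act)
        # Critic update: reduce the error between critic output and target.
        error = q(critic, obs, act) - target
        critic -= LR * error * np.concatenate([obs, act])
        # Actor update: follow the critic's gradient with respect to the action.
        dq_da = critic[OBS:]                      # linear critic => constant gradient
        actor += LR * np.outer(obs, dq_da)


batch = [(rng.standard_normal(OBS), rng.standard_normal(ACT),
          rng.random(), rng.standard_normal(OBS)) for _ in range(4)]
update(batch)
print("updated critic weights:", critic.round(3))
```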
  • Patent number: 9342781
    Abstract: We describe a signal processor, the signal processor comprising: a probability vector generation system, wherein said probability vector generation system has an input to receive a category vector for a category of output example and an output to provide a probability vector for said category of output example, wherein said output example comprises a set of data points, and wherein said probability vector defines a probability of each of said set of data points for said category of output example; a memory storing a plurality of said category vectors, one for each of a plurality of said categories of output example; and a stochastic selector to select a said stored category of output example for presentation of the corresponding category vector to said probability vector generation system; wherein said signal processor is configured to output data for an output example corresponding to said selected stored category.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: May 17, 2016
    Assignee: Google Inc.
    Inventors: Julien Robert Michel Cornebise, Danilo Jimenez Rezende, Daniël Pieter Wierstra
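The generative signal processor in the abstract above (and in the related application below) can be sketched loosely as a memory of stored category vectors, a stochastic selector that picks one, and a generator that maps the chosen category vector to a probability for each data point of the output example. The linear-plus-sigmoid generator, the uniform selector, and the binary data points are assumptions for the example only.

```python
# Minimal sketch: stochastic category selection followed by probability-vector generation.
import numpy as np

rng = np.random.default_rng(0)
NUM_CATEGORIES, CATEGORY_DIM, NUM_POINTS = 4, 6, 10

# Memory: one stored category vector per category of output example.
category_memory = rng.standard_normal((NUM_CATEGORIES, CATEGORY_DIM))

# Stand-in weights for the probability vector generation system.
W_prob = rng.standard_normal((CATEGORY_DIM, NUM_POINTS)) * 0.5


def probability_vector(category_vector):
    """Probability of each data point for the given category of output example."""
    return 1.0 / (1.0 + np.exp(-(category_vector @ W_prob)))   # sigmoid


def generate():
    # Stochastic selector: sample one of the stored categories (uniform here).
    chosen = rng.integers(NUM_CATEGORIES)
    probs = probability_vector(category_memory[chosen])
    # Output example: a set of binary data points drawn from those probabilities.
    return chosen, (rng.random(NUM_POINTS) < probs).astype(int)


category, example = generate()
print(f"selected category {category}: {example}")
```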
  • Publication number: 20140279777
    Abstract: We describe a signal processor, the signal processor comprising: a probability vector generation system, wherein said probability vector generation system has an input to receive a category vector for a category of output example and an output to provide a probability vector for said category of output example, wherein said output example comprises a set of data points, and wherein said probability vector defines a probability of each of said set of data points for said category of output example; a memory storing a plurality of said category vectors, one for each of a plurality of said categories of output example; and a stochastic selector to select a said stored category of output example for presentation of the corresponding category vector to said probability vector generation system; wherein said signal processor is configured to output data for an output example corresponding to said selected stored category.
    Type: Application
    Filed: June 24, 2013
    Publication date: September 18, 2014
    Applicant: Google Inc.
    Inventors: Julien Robert Michel Cornebise, Danilo Jimenez Rezende, Daniël Pieter Wierstra