Patents by Inventor Matthew Botvinick

Matthew Botvinick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240020972
    Abstract: A video processing system is configured to analyze a sequence of video frames to detect objects in the video frames and to provide information relating to the detected objects in response to a query. The query may comprise, for example, a request for a prediction of a future event, of the location of an object, or of what would happen if an object were modified. The system uses a transformer neural network subsystem to process representations of objects in the video.
    Type: Application
    Filed: October 1, 2021
    Publication date: January 18, 2024
    Inventors: Fengning Ding, Adam Anthony Santoro, Felix George Hill, Matthew Botvinick, Luis Piloto
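The abstract above describes a transformer operating over per-object representations, with the answer conditioned on a query. The sketch below is purely illustrative and is not the patented implementation: the single attention layer, the weight names, and the convention of reading the answer off an appended query token are all assumptions made for exposition.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    # x: (num_tokens, d) -- object and query embeddings as one token sequence
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def answer_query(object_tokens, query_token, wq, wk, wv):
    # Append the query as one extra token, attend over objects + query,
    # and read the prediction off the query position.
    tokens = np.concatenate([object_tokens, query_token[None]], axis=0)
    attended = self_attention(tokens, wq, wk, wv)
    return attended[-1]  # answer embedding at the query slot
```

In practice the object tokens would come from a per-frame object detector and the output embedding would be decoded into the requested prediction; both stages are omitted here.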
  • Patent number: 11769057
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: September 26, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
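The pipeline in this abstract (symbols in, Gaussian parameters per visual factor, reparameterised sampling, image out) can be sketched as below. This is a minimal toy version, not the patented system: the mean-pooling encoder, the split of the encoder output into mean and log-variance halves, and the linear sigmoid decoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def symbol_encoder(symbol_ids, embed, num_factors):
    # Pool the symbol embeddings, then split the result into per-factor
    # Gaussian parameters (mean and log-variance for each visual factor).
    h = embed[symbol_ids].mean(axis=0)
    mu, log_var = h[:num_factors], h[num_factors:2 * num_factors]
    return mu, log_var

def sample_factors(mu, log_var):
    # Reparameterised sample: one value per visual factor.
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def image_decoder(z, w_dec):
    # Map the sampled factor values to a flat image; sigmoid keeps
    # pixel intensities in (0, 1).
    return 1.0 / (1.0 + np.exp(-(z @ w_dec)))
```

A usage pass would encode a symbol input such as "blue square", sample one value per factor (colour, shape, ...), and decode those values into an image depicting the named concepts.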
  • Publication number: 20230244907
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a sequence of data elements that includes a respective data element at each position in a sequence of positions. In one aspect, a method includes: for each position after a first position in the sequence of positions: obtaining a current sequence of data element embeddings that includes a respective data element embedding of each data element at a position that precedes the current position, obtaining a sequence of latent embeddings, and processing: (i) the current sequence of data element embeddings, and (ii) the sequence of latent embeddings, using a neural network to generate the data element at the current position. The neural network includes a sequence of neural network blocks including: (i) a cross-attention block, (ii) one or more self-attention blocks, and (iii) an output block.
    Type: Application
    Filed: January 30, 2023
    Publication date: August 3, 2023
    Inventors: Curtis Glenn-Macway Hawthorne, Andrew Coulter Jaegle, Catalina-Codruta Cangea, Sebastian Borgeaud Dit Avocat, Charlie Thomas Curtis Nash, Mateusz Malinowski, Sander Etienne Lea Dieleman, Oriol Vinyals, Matthew Botvinick, Ian Stuart Simon, Hannah Rachel Sheahan, Neil Zeghidour, Jean-Baptiste Alayrac, Joao Carreira, Jesse Engel
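The three-block structure named in the abstract (cross-attention, self-attention, output block) can be sketched in a few lines. The version below is an illustrative toy, not the patented architecture: it uses one unprojected attention step per block and treats the final latent as the output, which are simplifying assumptions.

```python
import numpy as np

def attention(q_in, kv_in, d):
    # Scaled dot-product attention; kv_in serves as both keys and values.
    scores = q_in @ kv_in.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv_in

def generate_next(prefix_embeds, latents):
    # prefix_embeds: embeddings of the data elements generated so far.
    # latents: a short, fixed-length sequence of latent embeddings.
    d = latents.shape[-1]
    # (i) cross-attention block: latents read from the (long) prefix
    latents = latents + attention(latents, prefix_embeds, d)
    # (ii) self-attention block over the (short) latent sequence
    latents = latents + attention(latents, latents, d)
    # (iii) output block: here, simply the final latent embedding
    return latents[-1]
```

The point of the design is cost: self-attention runs over the short latent sequence rather than the full prefix, so the quadratic term scales with the number of latents, not the number of generated data elements.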
  • Publication number: 20230088555
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Application
    Filed: June 6, 2022
    Publication date: March 23, 2023
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
  • Patent number: 11423300
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a system output using a remembered value of a neural network hidden state. In one aspect, a system comprises an external memory that maintains context experience tuples respectively comprising: (i) a key embedding of context data, and (ii) a value of a hidden state of a neural network at the respective previous time step. The neural network is configured to receive a system input and a remembered value of the hidden state of the neural network and to generate a system output. The system comprises a memory interface subsystem that is configured to determine a key embedding for current context data, determine a remembered value of the hidden state of the neural network based on the key embedding, and provide the remembered value of the hidden state as an input to the neural network.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: August 23, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Samuel Ritter, Xiao Jing Wang, Siddhant Jayakumar, Razvan Pascanu, Charles Blundell, Matthew Botvinick
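The memory-interface mechanism in this abstract (store key embeddings of context paired with hidden-state values; later, look up a remembered hidden state by key similarity and feed it back into the network) can be sketched as follows. This is a hedged illustration, not the patented system: the softmax-weighted read and the tanh recurrent step are assumptions chosen for brevity.

```python
import numpy as np

class EpisodicMemory:
    """Stores (context-key, hidden-state) tuples from previous time steps."""

    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, hidden):
        self.keys.append(key)
        self.values.append(hidden)

    def read(self, query_key):
        # Softmax-weighted average of stored hidden states, weighted by
        # similarity between the stored keys and the current context key.
        keys = np.stack(self.keys)
        sims = keys @ query_key
        w = np.exp(sims - sims.max())
        w /= w.sum()
        return w @ np.stack(self.values)

def rnn_step(x, h_prev, remembered, w_x, w_h, w_m):
    # The remembered hidden state enters the update as an extra input.
    return np.tanh(x @ w_x + h_prev @ w_h + remembered @ w_m)
```

Each step would embed the current context into a key, `read` a remembered hidden state, run `rnn_step`, then `write` the new (key, hidden) pair back for future reinstatement.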
  • Patent number: 11354823
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: June 7, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
  • Publication number: 20200234468
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Application
    Filed: July 11, 2018
    Publication date: July 23, 2020
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick