Patents by Inventor Matthew Botvinick
Matthew Botvinick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250103856
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for using a neural network to generate a network output that characterizes an entity. In one aspect, a method includes: obtaining a representation of the entity as a set of data element embeddings, obtaining a set of latent embeddings, and processing: (i) the set of data element embeddings, and (ii) the set of latent embeddings, using the neural network to generate the network output. The neural network includes a sequence of neural network blocks including: (i) one or more local cross-attention blocks, and (ii) an output block. Each local cross-attention block partitions the set of latent embeddings and the set of data element embeddings into proper subsets, and updates each proper subset of the set of latent embeddings using attention over only the corresponding proper subset of the set of data element embeddings.
Type: Application
Filed: January 30, 2023
Publication date: March 27, 2025
Inventors: Joao Carreira, Andrew Coulter Jaegle, Skanda Kumar Koppula, Daniel Zoran, Adrià Recasens Continente, Catalin-Dumitru Ionescu, Olivier Jean Hénaff, Evan Gerard Shelhamer, Relja Arandjelovic, Matthew Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman
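The local cross-attention mechanism in this abstract — partitioning both the latent embeddings and the data element embeddings and letting each latent subset attend only to its corresponding data subset — can be illustrated with a minimal NumPy sketch. The function name, shapes, single attention head, and residual update below are illustrative assumptions, not the patented architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_cross_attention(latents, data, num_groups):
    """Update each subset of latents using attention over only the
    matching subset of data element embeddings.

    latents: (num_latents, dim), data: (num_data, dim).
    Assumes both counts are divisible by num_groups (illustration only).
    """
    latent_groups = np.split(latents, num_groups)
    data_groups = np.split(data, num_groups)
    updated = []
    for q, kv in zip(latent_groups, data_groups):
        # Single-head dot-product attention restricted to one partition.
        scores = softmax(q @ kv.T / np.sqrt(q.shape[-1]))
        updated.append(q + scores @ kv)  # residual update of the latents
    return np.concatenate(updated, axis=0)

latents = np.random.randn(16, 32)   # set of latent embeddings
data = np.random.randn(1024, 32)    # set of data element embeddings
out = local_cross_attention(latents, data, num_groups=4)
print(out.shape)  # (16, 32)
```

Restricting each latent subset to its own data subset means the attention cost grows with the size of a partition rather than with the full set of data element embeddings.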
-
Publication number: 20240221362
Abstract: A computer-implemented video generation neural network system, configured to determine a value for each of a set of object latent variables by sampling from a respective prior object latent distribution for the object latent variable. The system comprises a trained image frame decoder neural network configured to, for each pixel of each generated image frame and for each generated image frame time step, process determined values of the object latent variables to determine parameters of a pixel distribution for each of the object latent variables, combine the pixel distributions for each of the object latent variables to determine a combined pixel distribution, and sample from the combined pixel distribution to determine a value for the pixel and for the time step.
Type: Application
Filed: May 27, 2022
Publication date: July 4, 2024
Inventors: Rishabh Kabra, Daniel Zoran, Goker Erdogan, Antonia Phoebe Nina Creswell, Loic Matthey-de-l'Endroit, Matthew Botvinick, Alexander Lerchner, Christopher Paul Burgess
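The decode step in this abstract — turning sampled object latents into per-object pixel distributions, combining them into one distribution per pixel and time step, and sampling a pixel value — can be sketched as a toy mixture model. The linear "decoders", shapes, and noise scale below are placeholders, not the trained networks described in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

num_objects, latent_dim = 4, 8
height, width, timesteps = 16, 16, 5

# Sample each object latent from its (here: standard normal) prior.
object_latents = rng.standard_normal((num_objects, latent_dim))

# Stand-in "decoder": maps object latents to per-pixel Gaussian means
# and unnormalised mixing logits.
W_mean = rng.standard_normal((latent_dim, height * width)) * 0.1
W_logit = rng.standard_normal((latent_dim, height * width)) * 0.1

def decode_frame(t):
    means = (object_latents @ W_mean).reshape(num_objects, height, width) + 0.01 * t
    logits = (object_latents @ W_logit).reshape(num_objects, height, width)
    weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    # Combine the per-object pixel distributions into a mixture, then sample:
    # pick which object's distribution each pixel uses, then draw a value.
    component = np.array([
        rng.choice(num_objects, p=weights[:, i // width, i % width])
        for i in range(height * width)
    ]).reshape(height, width)
    chosen_means = np.take_along_axis(means, component[None], axis=0)[0]
    return chosen_means + 0.05 * rng.standard_normal((height, width))

video = np.stack([decode_frame(t) for t in range(timesteps)])
print(video.shape)  # (5, 16, 16)
```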
-
Publication number: 20240020972
Abstract: A video processing system configured to analyze a sequence of video frames to detect objects in the video frames and provide information relating to the detected objects in response to a query. The query may comprise, for example, a request for a prediction of a future event, or of the location of an object, or a request for a prediction of what would happen if an object were modified. The system uses a transformer neural network subsystem to process representations of objects in the video.
Type: Application
Filed: October 1, 2021
Publication date: January 18, 2024
Inventors: Fengning Ding, Adam Anthony Santoro, Felix George Hill, Matthew Botvinick, Luis Piloto
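A rough sketch of the data flow, using PyTorch's standard TransformerEncoder over per-frame object representations plus a query token. The object detector, query encoder, and output head are stand-ins here (random tensors and a linear layer); none of this is the system's actual component structure:

```python
import torch
import torch.nn as nn

d_model = 64
num_frames, objects_per_frame = 8, 6

# Assumed stand-ins: object tokens would normally come from a detector run
# over each video frame, and the query token from an encoded question.
object_tokens = torch.randn(1, num_frames * objects_per_frame, d_model)
query_token = torch.randn(1, 1, d_model)

# Transformer reasons jointly over all object tokens and the query token.
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.cat([query_token, object_tokens], dim=1)
output = transformer(tokens)

# Read the prediction (e.g. a future-event score) off the query position.
answer_head = nn.Linear(d_model, 1)
prediction = answer_head(output[:, 0])
print(prediction.shape)  # torch.Size([1, 1])
```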
-
Patent number: 11769057
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Grant
Filed: June 6, 2022
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
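The generation path in this abstract (symbol input → encoder distribution over visual factors → sample → image decoder) can be sketched with toy linear maps standing in for the encoder and decoder networks. The vocabulary, shapes, and sigmoid output below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["red", "blue", "square", "circle", "small", "large"]
num_factors, image_pixels = 6, 64

# Stand-in "symbol encoder": maps a multi-hot symbol input to the
# parameters (mean, log-variance) of a distribution over visual factors.
W_mu = rng.standard_normal((len(vocab), num_factors)) * 0.1
W_logvar = rng.standard_normal((len(vocab), num_factors)) * 0.1

# Stand-in "image decoder": maps sampled factor values to pixel intensities.
W_dec = rng.standard_normal((num_factors, image_pixels)) * 0.1

def generate(symbols):
    x = np.array([1.0 if w in symbols else 0.0 for w in vocab])
    mu, logvar = x @ W_mu, x @ W_logvar
    # Sample a value for each visual factor from the encoder's distribution.
    factors = mu + np.exp(0.5 * logvar) * rng.standard_normal(num_factors)
    # Decode the sampled factor values into an output image.
    return 1.0 / (1.0 + np.exp(-(factors @ W_dec)))

image = generate(["red", "square", "small"])
print(image.shape)  # (64,)
```

The same abstract also appears under the related application and publication entries further down this listing.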
-
Publication number: 20230244907
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a sequence of data elements that includes a respective data element at each position in a sequence of positions. In one aspect, a method includes: for each position after a first position in the sequence of positions: obtaining a current sequence of data element embeddings that includes a respective data element embedding of each data element at a position that precedes the current position, obtaining a sequence of latent embeddings, and processing: (i) the current sequence of data element embeddings, and (ii) the sequence of latent embeddings, using a neural network to generate the data element at the current position. The neural network includes a sequence of neural network blocks including: (i) a cross-attention block, (ii) one or more self-attention blocks, and (iii) an output block.
Type: Application
Filed: January 30, 2023
Publication date: August 3, 2023
Inventors: Curtis Glenn-Macway Hawthorne, Andrew Coulter Jaegle, Catalina-Codruta Cangea, Sebastian Borgeaud Dit Avocat, Charlie Thomas Curtis Nash, Mateusz Malinowski, Sander Etienne Lea Dieleman, Oriol Vinyals, Matthew Botvinick, Ian Stuart Simon, Hannah Rachel Sheahan, Neil Zeghidour, Jean-Baptiste Alayrac, Joao Carreira, Jesse Engel
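One decoding step of the block structure named in the abstract — a cross-attention block from the latents onto the already-generated prefix, self-attention over the latents, and an output block — might look roughly like this in PyTorch. The module choices and the read-out from the last latent are assumptions, not the patented design:

```python
import torch
import torch.nn as nn

dim, num_latents, vocab_size = 64, 16, 100

cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
output_block = nn.Linear(dim, vocab_size)

def next_element_logits(prefix_embeddings, latents):
    """One step: latents attend over the embeddings of all previously
    generated data elements, then over themselves."""
    # Cross-attention block: queries are latents, keys/values are the prefix.
    latents, _ = cross_attn(latents, prefix_embeddings, prefix_embeddings)
    # Self-attention block over the (much shorter) latent sequence.
    latents, _ = self_attn(latents, latents, latents)
    # Output block reads the next-element prediction from the last latent.
    return output_block(latents[:, -1])

prefix = torch.randn(1, 120, dim)          # embeddings of elements generated so far
latents = torch.randn(1, num_latents, dim)
logits = next_element_logits(prefix, latents)
print(logits.shape)  # torch.Size([1, 100])
```

Because only the fixed-size latent sequence passes through the self-attention blocks, the expensive quadratic attention never runs over the full generated prefix.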
-
Publication number: 20230088555
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Application
Filed: June 6, 2022
Publication date: March 23, 2023
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Patent number: 11423300
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a system output using a remembered value of a neural network hidden state. In one aspect, a system comprises an external memory that maintains context experience tuples respectively comprising: (i) a key embedding of context data, and (ii) a value of a hidden state of a neural network at the respective previous time step. The neural network is configured to receive a system input and a remembered value of the hidden state of the neural network and to generate a system output. The system comprises a memory interface subsystem that is configured to determine a key embedding for current context data, determine a remembered value of the hidden state of the neural network based on the key embedding, and provide the remembered value of the hidden state as an input to the neural network.
Type: Grant
Filed: February 8, 2019
Date of Patent: August 23, 2022
Assignee: DeepMind Technologies Limited
Inventors: Samuel Ritter, Xiao Jing Wang, Siddhant Jayakumar, Razvan Pascanu, Charles Blundell, Matthew Botvinick
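The memory interface described here — keying an external memory by embeddings of context data, then retrieving the hidden-state value stored under the most similar key — can be sketched in a few lines of NumPy. The linear context encoder and the cosine-similarity lookup are illustrative assumptions, not the patented subsystem:

```python
import numpy as np

rng = np.random.default_rng(2)
key_dim, hidden_dim = 8, 16

# External memory: (key embedding of context data, stored hidden state) tuples.
memory = [(rng.standard_normal(key_dim), rng.standard_normal(hidden_dim))
          for _ in range(10)]

W_key = rng.standard_normal((key_dim, key_dim)) * 0.1  # stand-in context encoder

def remembered_hidden_state(context):
    """Memory interface: embed the current context, find the most similar
    stored key, and return the hidden state remembered alongside it."""
    query = context @ W_key
    keys = np.stack([k for k, _ in memory])
    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
    return memory[int(np.argmax(sims))][1]

context = rng.standard_normal(key_dim)
h_remembered = remembered_hidden_state(context)
# The neural network would then receive both the system input and this
# remembered hidden state when producing the system output.
print(h_remembered.shape)  # (16,)
```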
-
Patent number: 11354823
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Grant
Filed: July 11, 2018
Date of Patent: June 7, 2022
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Publication number: 20200234468
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Application
Filed: July 11, 2018
Publication date: July 23, 2020
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick