Patents by Inventor Alexander Lerchner
Alexander Lerchner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240221362
Abstract: A computer-implemented video generation neural network system determines a value for each of a set of object latent variables by sampling from a respective prior object latent distribution for that variable. The system comprises a trained image frame decoder neural network configured to, for each pixel of each generated image frame and for each generated time step: process the determined values of the object latent variables to determine parameters of a pixel distribution for each of the object latent variables; combine the pixel distributions for the object latent variables into a combined pixel distribution; and sample from the combined pixel distribution to determine a value for the pixel at that time step.
Type: Application
Filed: May 27, 2022
Publication date: July 4, 2024
Inventors: Rishabh Kabra, Daniel Zoran, Goker Erdogan, Antonia Phoebe Nina Creswell, Loic Matthey-de-l'Endroit, Matthew Botvinick, Alexander Lerchner, Christopher Paul Burgess
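As a rough illustration of the pixel-mixture sampling this abstract describes, the NumPy sketch below samples object latents from unit-Gaussian priors and, per pixel and time step, combines per-object pixel distributions into a mixture before sampling a value. The random linear decode function, the fixed pixel standard deviation, and all sizes are toy stand-ins assumed for illustration, not anything specified in the filing.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4          # number of object latent variables
D = 8          # dimensionality of each object latent
T = 3          # number of generated frames (time steps)
H = W = 16     # frame height and width in pixels

# 1) Determine a value for each object latent by sampling from its prior
#    (a standard normal is assumed here).
object_latents = rng.standard_normal((K, D))

# Toy stand-in for the trained image frame decoder: a fixed random linear map
# from (latent, pixel coordinates, time) to a mixing logit and an RGB mean.
w_dec = rng.standard_normal((D + 3, 4)) * 0.1

def decode(latent, y, x, t):
    """Parameters of one object's pixel distribution at one pixel and time step."""
    feats = np.concatenate([latent, [y / H, x / W, t / T]])
    out = feats @ w_dec
    return out[0], out[1:]           # mixing logit, 3-vector RGB mean

sigma = 0.05                         # fixed per-component pixel standard deviation
video = np.zeros((T, H, W, 3))

for t in range(T):
    for y in range(H):
        for x in range(W):
            params = [decode(z, y, x, t) for z in object_latents]
            logits = np.array([p[0] for p in params])
            means = np.array([p[1] for p in params])
            # 2) Combine the per-object pixel distributions into a mixture...
            weights = np.exp(logits - logits.max())
            weights /= weights.sum()
            # 3) ...and sample from the combined distribution: pick a component,
            #    then draw the pixel value from that component.
            k = rng.choice(K, p=weights)
            video[t, y, x] = rng.normal(means[k], sigma)

print(video.shape)  # (3, 16, 16, 3)
```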
-
Patent number: 11842270
Abstract: We describe an artificial neural network comprising an input layer of input neurons, one or more hidden layers of neurons in successive layers above the input layer, and at least one further, concept-identifying layer of neurons above the hidden layers. The neural network includes an activation memory coupled to an intermediate hidden layer of neurons, between the input and concept-identifying layers, to store a pattern of activation of the intermediate layer. The neural network further includes a system to determine an overlap between a plurality of the stored patterns of activation and to activate an overlap pattern in the intermediate hidden layer, such that the concept-identifying layer of neurons is configured to identify features of the overlap patterns. We also describe related methods, processor control code, and computing systems for the neural network. Optionally, further, higher-level concept-identifying layers of neurons may be included.
Type: Grant
Filed: June 30, 2020
Date of Patent: December 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Demis Hassabis
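The sketch below is a loose, assumed interpretation of the activation-memory idea in this abstract: store binary activation patterns of an intermediate hidden layer, compute their overlap, and re-activate the layer with that overlap pattern so a concept-identifying layer can read out the shared features. The thresholded linear units and the 80% agreement rule are arbitrary choices for illustration only, not the patented mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

INPUT, HIDDEN, CONCEPTS = 16, 32, 4

w_in = rng.standard_normal((INPUT, HIDDEN))

def hidden_activation(x):
    """Toy intermediate hidden layer: thresholded linear units give a binary pattern."""
    return (x @ w_in > 0).astype(float)

# Activation memory: store the intermediate-layer pattern produced by several inputs.
memory = [hidden_activation(rng.standard_normal(INPUT)) for _ in range(5)]

# Overlap between the stored patterns: units that were active in most of them
# (an 80% agreement rule, chosen arbitrarily for this sketch).
overlap = (np.stack(memory).mean(axis=0) >= 0.8).astype(float)

# Re-activate the intermediate hidden layer with the overlap pattern so that a
# concept-identifying layer above can read out the shared features.
w_concept = rng.standard_normal((HIDDEN, CONCEPTS))
concept_scores = overlap @ w_concept
print(concept_scores.round(2))
```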
-
Patent number: 11769057
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Grant
Filed: June 6, 2022
Date of Patent: September 26, 2023
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
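A minimal sketch of the symbol-to-image generation path this abstract (shared by the related publications and patents listed below) outlines: a symbol encoder produces per-factor distribution parameters, each visual factor is sampled from that distribution, and an image decoder maps the sampled factors to an output image. The vocabulary, the random linear "networks", and the factor count are hypothetical stand-ins, assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

VOCAB = ["red", "blue", "square", "circle"]   # hypothetical symbol vocabulary
N_FACTORS = 6                                  # number of visual factors
IMG = 8                                        # output image side length

# Toy random weights standing in for the trained encoder and decoder networks.
w_mu = rng.standard_normal((len(VOCAB), N_FACTORS)) * 0.5
w_logvar = rng.standard_normal((len(VOCAB), N_FACTORS)) * 0.1
w_dec = rng.standard_normal((N_FACTORS, IMG * IMG)) * 0.5

def symbol_encoder(symbols):
    """Map a symbol input to per-factor distribution parameters (mean, log-variance)."""
    onehot = np.zeros(len(VOCAB))
    for s in symbols:
        onehot[VOCAB.index(s)] = 1.0
    return onehot @ w_mu, onehot @ w_logvar

def image_decoder(factors):
    """Map sampled visual-factor values to an output image."""
    return (factors @ w_dec).reshape(IMG, IMG)

# 1) Process the new symbol input with the symbol encoder.
mu, logvar = symbol_encoder(["red", "square"])
# 2) Sample a respective value for each visual factor from the parameterized distribution.
factors = mu + np.exp(0.5 * logvar) * rng.standard_normal(N_FACTORS)
# 3) Process the sampled factors with the image decoder to generate the new output image.
image = image_decoder(factors)
print(image.shape)  # (8, 8)
```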
-
Publication number: 20230088555
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Application
Filed: June 6, 2022
Publication date: March 23, 2023
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Patent number: 11354823
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Grant
Filed: July 11, 2018
Date of Patent: June 7, 2022
Assignee: DeepMind Technologies Limited
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Publication number: 20200234468
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
Type: Application
Filed: July 11, 2018
Publication date: July 23, 2020
Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
-
Patent number: 10643131
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a variational auto-encoder (VAE) to generate disentangled latent factors on unlabeled training images. In one aspect, a method includes receiving the plurality of unlabeled training images, and, for each unlabeled training image, processing the unlabeled training image using the VAE to determine the latent representation of the unlabeled training image and to generate a reconstruction of the unlabeled training image in accordance with current values of the parameters of the VAE, and adjusting current values of the parameters of the VAE by optimizing a loss function that depends on a quality of the reconstruction and also on a degree of independence between the latent factors in the latent representation of the unlabeled training image.
Type: Grant
Filed: August 5, 2019
Date of Patent: May 5, 2020
Assignee: DeepMind Technologies Limited
Inventors: Loic Matthey-de-l'Endroit, Arka Tilak Pal, Shakir Mohamed, Xavier Glorot, Irina Higgins, Alexander Lerchner
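A hedged sketch of the kind of loss this abstract (and the earlier grant below) describes: a reconstruction term plus a term reflecting how independent the latent factors are. Here that second term is expressed as a beta-weighted KL divergence of a diagonal-Gaussian posterior from an isotropic unit Gaussian, as in the related beta-VAE literature; the exact loss, weights, and reconstruction measure in the patent may differ.

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Reconstruction quality plus a weighted pressure toward independent latent factors.

    The KL divergence of a diagonal-Gaussian posterior from an isotropic unit
    Gaussian penalizes correlated, overly informative latents; beta > 1
    strengthens that pressure relative to reconstruction quality.
    """
    recon = np.sum((x - x_recon) ** 2)                             # reconstruction term
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)     # independence term
    return recon + beta * kl

# Toy usage with random stand-ins for an image, its VAE reconstruction, and the
# posterior parameters produced by the encoder for that image.
rng = np.random.default_rng(3)
x = rng.random((8, 8))
x_recon = x + 0.05 * rng.standard_normal((8, 8))
mu = 0.1 * rng.standard_normal(6)
logvar = 0.1 * rng.standard_normal(6)
print(vae_loss(x, x_recon, mu, logvar))
```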
-
Patent number: 10373055
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a variational auto-encoder (VAE) to generate disentangled latent factors on unlabeled training images. In one aspect, a method includes receiving the plurality of unlabeled training images, and, for each unlabeled training image, processing the unlabeled training image using the VAE to determine the latent representation of the unlabeled training image and to generate a reconstruction of the unlabeled training image in accordance with current values of the parameters of the VAE, and adjusting current values of the parameters of the VAE by optimizing a loss function that depends on a quality of the reconstruction and also on a degree of independence between the latent factors in the latent representation of the unlabeled training image.
Type: Grant
Filed: May 19, 2017
Date of Patent: August 6, 2019
Assignee: Deepmind Technologies Limited
Inventors: Loic Matthey-de-l'Endroit, Arka Tilak Pal, Shakir Mohamed, Xavier Glorot, Irina Higgins, Alexander Lerchner