Patents by Inventor Irina Higgins

Irina Higgins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104391
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a language model for performing a reasoning task. The system obtains a plurality of training examples. Each training example includes a respective sample query text sequence characterizing a respective sample query and a respective reference response text sequence that includes a reference final answer to the respective sample query. The system trains a reward model on the plurality of training examples. The reward model is configured to receive an input including a query text sequence characterizing a query and one or more reasoning steps that have been generated in response to the query and process the input to compute a reward score indicating how successful the one or more reasoning steps are in yielding a correct final answer to the query. The system trains the language model using the trained reward model.
    Type: Application
    Filed: September 27, 2023
    Publication date: March 28, 2024
    Inventors: Irina Higgins, Jonathan Ken Uesato, Nathaniel Arthur Kushman, Ramana Kumar
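The reward-model interface this abstract describes (query text plus reasoning steps in, a score for how likely those steps yield a correct final answer out) can be sketched with a toy classifier. Everything here is an illustrative assumption, not the patented method: the bag-of-words featurization, the logistic scorer, and the tiny hand-made training set.

```python
import numpy as np

VOCAB = ["add", "subtract", "carry", "guess", "total", "answer"]

def featurize(query: str, steps: list[str]) -> np.ndarray:
    # Crude stand-in for a learned text encoder: vocabulary word counts.
    text = (query + " " + " ".join(steps)).lower()
    return np.array([text.count(w) for w in VOCAB], dtype=float)

class TinyRewardModel:
    def __init__(self, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def score(self, query: str, steps: list[str]) -> float:
        # Reward score in (0, 1): estimated chance the steps so far
        # lead to the reference final answer.
        z = self.w @ featurize(query, steps) + self.b
        return float(1.0 / (1.0 + np.exp(-z)))

    def fit(self, examples, lr=0.5, epochs=200):
        # examples: (query, steps, label) with label 1 iff the steps
        # yield the reference final answer; plain log-loss descent.
        for _ in range(epochs):
            for q, s, y in examples:
                x = featurize(q, s)
                g = self.score(q, s) - y
                self.w -= lr * g * x
                self.b -= lr * g

examples = [
    ("what is 12 + 7?", ["add the units", "carry nothing", "total 19"], 1),
    ("what is 12 + 7?", ["guess 20"], 0),
]
rm = TinyRewardModel(dim=len(VOCAB))
rm.fit(examples)
good = rm.score("what is 12 + 7?", ["add the units", "total 19"])
bad = rm.score("what is 12 + 7?", ["guess 21"])
```

Once such a scorer exists, it can rank partial reasoning traces during language-model training, which is the role the abstract assigns to the reward model.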
  • Patent number: 11769057
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: September 26, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
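The generation pipeline in this abstract (symbol encoder parameterizing a distribution, per-factor sampling, image decoder) can be sketched end to end. The linear maps below are illustrative stand-ins for the symbol encoder and image decoder neural networks, and the vocabulary, sizes, and variable names are all assumptions for the sketch.

```python
import numpy as np

VOCAB = ["red", "blue", "square", "circle"]
N_FACTORS, IMG_SIZE = 3, 8
rng = np.random.default_rng(0)

# Stand-ins for the trained networks: random linear maps.
W_enc = rng.normal(size=(len(VOCAB), 2 * N_FACTORS))     # -> (mu, log_sigma)
W_dec = rng.normal(size=(N_FACTORS, IMG_SIZE * IMG_SIZE))

def encode(symbols):
    # Symbol encoder: symbols -> parameters of a distribution over
    # the visual factors (a mean and scale per factor).
    x = np.array([1.0 if w in symbols else 0.0 for w in VOCAB])
    out = x @ W_enc
    mu, log_sigma = out[:N_FACTORS], out[N_FACTORS:]
    return mu, np.exp(log_sigma)

def sample_factors(mu, sigma):
    # Sample a value for each visual factor from that distribution.
    return mu + sigma * rng.normal(size=mu.shape)

def decode(factors):
    # Image decoder: sampled factor values -> output image.
    return (factors @ W_dec).reshape(IMG_SIZE, IMG_SIZE)

mu, sigma = encode(["red", "square"])
image = decode(sample_factors(mu, sigma))
```

With trained networks in place of the random matrices, the same three calls implement the encode, sample, and decode steps the claim recites.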
  • Publication number: 20230088555
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Application
    Filed: June 6, 2022
    Publication date: March 23, 2023
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
  • Patent number: 11354823
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: June 7, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
  • Publication number: 20220121934
    Abstract: A method for automatically identifying a computer-implemented neural network which is able to generate a disentangled latent variable representation of an input data item. The method involves obtaining a pool of trained neural networks, encoding a set of evaluation data items using each of the trained neural networks to determine a respective set of latent representations for each of the trained neural networks, and determining a measure of similarity between the sets of latent representations in order to select a trained neural network with a disentangled latent variable representation.
    Type: Application
    Filed: January 23, 2020
    Publication date: April 21, 2022
    Inventors: Sunny Yunchen Duan, Irina Higgins
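The selection procedure in this abstract (encode one evaluation set with every network in a pool, compare the latent sets pairwise, pick the network whose latents agree most with the rest) can be sketched as follows. The linear "trained encoders" and the correlation-based similarity measure are illustrative assumptions; the patent does not fix a specific similarity function here.

```python
import numpy as np

rng = np.random.default_rng(1)
eval_data = rng.normal(size=(200, 5))          # shared evaluation items

# A pool of toy "trained encoders": each is just a projection matrix.
pool = [rng.normal(size=(5, 3)) for _ in range(4)]
latents = [eval_data @ W for W in pool]        # one latent set per network

def similarity(a, b):
    # Mean absolute correlation between best-matched latent dimensions,
    # so agreement is insensitive to permutation of the latents.
    c = np.abs(np.corrcoef(a.T, b.T)[:3, 3:])
    return float(c.max(axis=1).mean())

n = len(pool)
scores = [np.mean([similarity(latents[i], latents[j])
                   for j in range(n) if j != i]) for i in range(n)]
best = int(np.argmax(scores))                  # selected network index
```

The intuition, as the abstract frames it, is that networks which found a disentangled representation tend to agree with one another up to permutation, so high average similarity flags a good candidate.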
  • Publication number: 20220005603
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an auto-encoder to de-noise task specific electroencephalogram (EEG) signals. One of the methods includes training a variational auto-encoder (VAE) including to learn a plurality of parameter values of the VAE by applying, as first training input to the VAE, training data, the training data comprising electroencephalogram (EEG) data representing brain activities of individual persons when performing different tasks; and after the training, adapting the VAE for a specific task by applying, as second training input to the VAE, adaptation data, the adaptation data comprising task-specific EEG data representing brain activities of individual persons when performing the specific task.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 6, 2022
    Inventors: Garrett Raymond Honke, Pramod Gupta, Irina Higgins
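The two-stage recipe in this abstract (pretrain an auto-encoder on broad EEG data, then adapt the same weights on task-specific EEG) can be sketched with the stages made explicit. A plain linear auto-encoder stands in for the VAE, and synthetic signals stand in for real EEG; the hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4                                   # signal dim, latent dim

def train(W, data, lr=0.001, epochs=300):
    # Gradient descent on reconstruction error ||X - X W W^T||^2 / n.
    for _ in range(epochs):
        err = data @ W @ W.T - data
        grad = 2 * (data.T @ err @ W + err.T @ data @ W) / len(data)
        W = W - lr * grad
    return W

general_eeg = rng.normal(size=(256, D))              # broad pretraining corpus
task_eeg = rng.normal(size=(64, D)) * 0.5 + 1.0      # task-specific signals

W = rng.normal(scale=0.1, size=(D, H))
W = train(W, general_eeg)                  # stage 1: pretrain on general data
loss_before = np.mean((task_eeg @ W @ W.T - task_eeg) ** 2)
W = train(W, task_eeg)                     # stage 2: adapt to the task
loss_after = np.mean((task_eeg @ W @ W.T - task_eeg) ** 2)
```

The point of the sketch is only the control flow: the same parameters pass through a generic training phase and then a task-specific adaptation phase, which is the structure the claim describes.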
  • Publication number: 20200234468
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for learning visual concepts using neural networks. One of the methods includes receiving a new symbol input comprising one or more symbols from a vocabulary; and generating a new output image that depicts concepts referred to by the new symbol input, comprising: processing the new symbol input using a symbol encoder neural network to generate a new symbol encoder output for the new symbol input; sampling, from the distribution parameterized by the new symbol encoder output, a respective value for each of a plurality of visual factors; and processing a new image decoder input comprising the respective values for the visual factors using an image decoder neural network to generate the new output image.
    Type: Application
    Filed: July 11, 2018
    Publication date: July 23, 2020
    Inventors: Alexander Lerchner, Irina Higgins, Nicolas Sonnerat, Arka Tilak Pal, Demis Hassabis, Loic Matthey-de-l'Endroit, Christopher Paul Burgess, Matthew Botvinick
  • Patent number: 10643131
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a variational auto-encoder (VAE) to generate disentangled latent factors on unlabeled training images. In one aspect, a method includes receiving the plurality of unlabeled training images, and, for each unlabeled training image, processing the unlabeled training image using the VAE to determine the latent representation of the unlabeled training image and to generate a reconstruction of the unlabeled training image in accordance with current values of the parameters of the VAE, and adjusting current values of the parameters of the VAE by optimizing a loss function that depends on a quality of the reconstruction and also on a degree of independence between the latent factors in the latent representation of the unlabeled training image.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: May 5, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Loic Matthey-de-l'Endroit, Arka Tilak Pal, Shakir Mohamed, Xavier Glorot, Irina Higgins, Alexander Lerchner
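The loss described in this abstract balances reconstruction quality against the independence of the latent factors. One published instance of this idea (the beta-VAE objective) weights the KL divergence from a factorized unit-Gaussian prior by a coefficient beta > 1; the sketch below computes that loss for given encoder outputs, with all concrete values illustrative.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction term: how faithfully the image was reproduced.
    recon = np.mean((x - x_recon) ** 2)
    # KL(N(mu, sigma^2) || N(0, I)): pressure toward independent,
    # unit-Gaussian latent factors, in closed form per dimension.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

x = np.ones(64)                                # a flattened "image"
x_recon = np.full(64, 0.9)                     # its imperfect reconstruction
mu = np.array([0.1, -0.2, 0.0])                # encoder means
log_var = np.array([-0.1, 0.0, 0.05])          # encoder log-variances
loss = beta_vae_loss(x, x_recon, mu, log_var)
```

Raising beta trades reconstruction fidelity for a stronger independence pressure, which is the "degree of independence between the latent factors" term the abstract refers to.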
  • Patent number: 10373055
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a variational auto-encoder (VAE) to generate disentangled latent factors on unlabeled training images. In one aspect, a method includes receiving the plurality of unlabeled training images, and, for each unlabeled training image, processing the unlabeled training image using the VAE to determine the latent representation of the unlabeled training image and to generate a reconstruction of the unlabeled training image in accordance with current values of the parameters of the VAE, and adjusting current values of the parameters of the VAE by optimizing a loss function that depends on a quality of the reconstruction and also on a degree of independence between the latent factors in the latent representation of the unlabeled training image.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: August 6, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Loic Matthey-de-l'Endroit, Arka Tilak Pal, Shakir Mohamed, Xavier Glorot, Irina Higgins, Alexander Lerchner