Patents by Inventor Sergio Gomez Colmenarejo

Sergio Gomez Colmenarejo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240042600
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: June 8, 2023
    Publication date: February 8, 2024
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
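The reward-annotation pipeline summarized in this abstract (maintain experience data, train a reward model on annotations, use it to label a subset of experiences, then train a policy on the result) can be sketched roughly as follows. Everything here is an illustrative stand-in, not from the patent: the experience pool is random vectors and the "trained" reward model is a fixed linear scorer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a pool of logged robot experiences (observation
# vectors) and a "trained" reward model, here just a fixed linear scorer.
experiences = [{"observation": rng.normal(size=4)} for _ in range(10)]
reward_weights = rng.normal(size=4)

def reward_model(observation):
    # Predicts a scalar task reward for an observation.
    return float(observation @ reward_weights)

# Annotate a subset of the experience pool with predicted rewards to form
# task-specific training data; a policy network would then be trained on it.
task_specific_data = []
for exp in experiences[:5]:
    task_specific_data.append({**exp, "reward": reward_model(exp["observation"])})

print(len(task_specific_data))  # 5 annotated experiences
```

The point of the structure is that reward labels come from a learned model rather than hand-written reward functions, so the same logged experience pool can be relabeled for many different tasks.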
  • Publication number: 20230376771
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Application
    Filed: March 8, 2023
    Publication date: November 23, 2023
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
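The core idea in this abstract is replacing a hand-designed gradient-descent update with an update rule produced by a recurrent network. A minimal sketch, with the trained RNN replaced by a hand-written recurrent function (a momentum-like state update) purely for illustration:

```python
# Minimal sketch of a learned update rule: instead of the fixed SGD step
# theta -= lr * grad, a recurrent function maps (gradient, hidden state)
# to a parameter update. The "RNN" here is a hand-written stand-in with a
# momentum-like hidden state, not a trained network.
def rnn_update(grad, hidden, decay=0.9, lr=0.1):
    hidden = decay * hidden + (1 - decay) * grad
    return -lr * hidden, hidden

def objective_grad(theta):
    return 2.0 * (theta - 3.0)  # gradient of the objective (theta - 3)^2

theta, hidden = 0.0, 0.0
for _ in range(200):  # sequence of time steps
    update, hidden = rnn_update(objective_grad(theta), hidden)
    theta += update

print(theta)  # converges to the minimum near 3.0
```

In the patented method the recurrent function's own parameters are themselves trained so that applying its updates optimizes the objective quickly; the stand-in above only shows the shape of the inner loop.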
  • Patent number: 11734797
    Abstract: A method of generating an output image having an output resolution of N pixels×N pixels, each pixel in the output image having a respective color value for each of a plurality of color channels, the method comprising: obtaining a low-resolution version of the output image; and upscaling the low-resolution version of the output image to generate the output image having the output resolution by repeatedly performing the following operations: obtaining a current version of the output image having a current K×K resolution; and processing the current version of the output image using a set of convolutional neural networks that are specific to the current resolution to generate an updated version of the output image having a 2K×2K resolution.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: August 22, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Nal Emmerich Kalchbrenner, Daniel Belov, Sergio Gomez Colmenarejo, Aaron Gerard Antonius van den Oord, Ziyu Wang, Joao Ferdinando Gomes de Freitas, Scott Ellison Reed
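The repeated K×K → 2K×2K upscaling loop in this abstract can be sketched as below. The per-resolution convolutional networks are replaced by plain nearest-neighbour doubling, which is only a placeholder for the learned upscaling step:

```python
import numpy as np

def upscale_2x(image):
    # Placeholder for the resolution-specific CNNs: nearest-neighbour
    # doubling of both spatial dimensions.
    return image.repeat(2, axis=0).repeat(2, axis=1)

def generate(low_res, target_size):
    # Repeatedly upscale the current K x K version to 2K x 2K until the
    # output reaches the target N x N resolution.
    image = low_res
    while image.shape[0] < target_size:
        image = upscale_2x(image)
    return image

low_res = np.arange(16, dtype=float).reshape(4, 4)
out = generate(low_res, 32)
print(out.shape)  # (32, 32)
```

Because each doubling step uses networks specific to that resolution, a 4×4 → 32×32 generation runs three distinct models in sequence rather than one monolithic upscaler; the loop structure above is the part the sketch preserves.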
  • Patent number: 11712799
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 11663441
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network, wherein the action selection policy neural network is configured to process an observation characterizing a state of an environment to generate an action selection policy output, wherein the action selection policy output is used to select an action to be performed by an agent interacting with an environment. In one aspect, a method comprises: obtaining an observation characterizing a state of the environment subsequent to the agent performing a selected action; generating a latent representation of the observation; processing the latent representation of the observation using a discriminator neural network to generate an imitation score; determining a reward from the imitation score; and adjusting the current values of the action selection policy neural network parameters based on the reward using a reinforcement learning training technique.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: May 30, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Scott Ellison Reed, Yusuf Aytar, Ziyu Wang, Tom Paine, Sergio Gomez Colmenarejo, David Budden, Tobias Pfaff, Aaron Gerard Antonius van den Oord, Oriol Vinyals, Alexander Novikov
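The observation → latent representation → discriminator score → reward chain in this abstract can be sketched as follows. All components are toy stand-ins: the latent map is a fixed projection, the discriminator is a fixed logistic scorer, and the log-score reward is one common choice, not necessarily the one claimed:

```python
import numpy as np

rng = np.random.default_rng(0)

proj = rng.normal(size=(4, 2))   # stand-in latent projection
disc_w = rng.normal(size=2)      # stand-in discriminator weights

def latent(observation):
    return observation @ proj

def discriminator(z):
    # Probability-like score that the latent came from expert behaviour.
    return 1.0 / (1.0 + np.exp(-(z @ disc_w)))

def reward_from_score(score, eps=1e-8):
    # Higher reward when the agent's behaviour looks expert-like.
    return float(np.log(score + eps))

obs = rng.normal(size=4)
score = discriminator(latent(obs))
r = reward_from_score(score)
print(0.0 < score < 1.0, r < 0.0)
```

The reward produced this way then feeds a standard reinforcement-learning update of the action selection policy network's parameters, which is the last step the abstract describes.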
  • Patent number: 11615310
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 28, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
  • Publication number: 20220284546
    Abstract: A method of generating an output image having an output resolution of N pixels×N pixels, each pixel in the output image having a respective color value for each of a plurality of color channels, the method comprising: obtaining a low-resolution version of the output image; and upscaling the low-resolution version of the output image to generate the output image having the output resolution by repeatedly performing the following operations: obtaining a current version of the output image having a current K×K resolution; and processing the current version of the output image using a set of convolutional neural networks that are specific to the current resolution to generate an updated version of the output image having a 2K×2K resolution.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Nal Emmerich Kalchbrenner, Daniel Belov, Sergio Gomez Colmenarejo, Aaron Gerard Antonius van den Oord, Ziyu Wang, Joao Ferdinando Gomes de Freitas, Scott Ellison Reed
  • Publication number: 20220261639
    Abstract: A method is proposed of training a neural network to generate action data for controlling an agent to perform a task in an environment. The method includes obtaining, for each of a plurality of performances of the task, one or more first tuple datasets, each first tuple dataset comprising state data characterizing a state of the environment at a corresponding time during the performance of the task; and a concurrent process of training the neural network and a discriminator network. The training process comprises a plurality of neural network update steps and a plurality of discriminator network update steps.
    Type: Application
    Filed: July 16, 2020
    Publication date: August 18, 2022
    Inventors: Konrad Zolna, Scott Ellison Reed, Ziyu Wang, Alexander Novikov, Sergio Gomez Colmenarejo, Joao Ferdinando Gomes de Freitas, David Budden, Serkan Cabi
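The "concurrent process" this abstract describes interleaves neural-network update steps with discriminator update steps. A bare scheduling sketch, with the actual update computations replaced by call counters and the 2:1 step ratio chosen arbitrarily for illustration:

```python
# Sketch of the concurrent training schedule: alternating discriminator
# update steps with policy-network update steps. The update functions are
# placeholders that only count how often each network is updated.
counts = {"policy": 0, "discriminator": 0}

def policy_update_step():
    counts["policy"] += 1

def discriminator_update_step():
    counts["discriminator"] += 1

def train(num_rounds, policy_steps_per_round=1, disc_steps_per_round=2):
    for _ in range(num_rounds):
        for _ in range(disc_steps_per_round):
            discriminator_update_step()
        for _ in range(policy_steps_per_round):
            policy_update_step()

train(10)
print(counts)  # {'policy': 10, 'discriminator': 20}
```

In the real method each discriminator step consumes the first tuple datasets (state data from recorded task performances) as positives, and each policy step uses the discriminator's output as a training signal.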
  • Patent number: 11361403
    Abstract: A method of generating an output image having an output resolution of N pixels×N pixels, each pixel in the output image having a respective color value for each of a plurality of color channels, the method comprising: obtaining a low-resolution version of the output image; and upscaling the low-resolution version of the output image to generate the output image having the output resolution by repeatedly performing the following operations: obtaining a current version of the output image having a current K×K resolution; and processing the current version of the output image using a set of convolutional neural networks that are specific to the current resolution to generate an updated version of the output image having a 2K×2K resolution.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: June 14, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Nal Emmerich Kalchbrenner, Daniel Belov, Sergio Gomez Colmenarejo, Aaron Gerard Antonius van den Oord, Ziyu Wang, Joao Ferdinando Gomes de Freitas, Scott Ellison Reed
  • Publication number: 20210078169
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 18, 2021
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Publication number: 20210027425
    Abstract: A method of generating an output image having an output resolution of N pixels×N pixels, each pixel in the output image having a respective color value for each of a plurality of color channels, the method comprising: obtaining a low-resolution version of the output image; and upscaling the low-resolution version of the output image to generate the output image having the output resolution by repeatedly performing the following operations: obtaining a current version of the output image having a current K×K resolution; and processing the current version of the output image using a set of convolutional neural networks that are specific to the current resolution to generate an updated version of the output image having a 2K×2K resolution.
    Type: Application
    Filed: February 26, 2018
    Publication date: January 28, 2021
    Inventors: Nal Emmerich Kalchbrenner, Daniel Belov, Sergio Gomez Colmenarejo, Aaron Gerard Antonius van den Oord, Ziyu Wang, Joao Ferdinando Gomes de Freitas, Scott Ellison Reed
  • Publication number: 20200167633
    Abstract: A reinforcement learning system is proposed comprising a plurality of property detector neural networks. Each property detector neural network is arranged to receive data representing an object within an environment, and to generate property data associated with a property of the object. A processor is arranged to receive an instruction indicating a task associated with an object having an associated property, and process the output of the plurality of property detector neural networks based upon the instruction to generate a relevance data item. The relevance data item indicates objects within the environment associated with the task. The processor also generates a plurality of weights based upon the relevance data item, and, based on the weights, generates modified data representing the plurality of objects within the environment. A neural network is arranged to receive the modified data and to output an action associated with the task.
    Type: Application
    Filed: May 22, 2018
    Publication date: May 28, 2020
    Inventors: Misha Man Ray Denil, Sergio Gomez Colmenarejo, Serkan Cabi, David William Saxton, Joao Ferdinando Gomes de Freitas
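The flow in this abstract (property detectors score each object, an instruction selects the relevant property, and the resulting relevance weights modify the object representation before action selection) can be sketched as below. The detectors, property names, and thresholding are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

objects = rng.normal(size=(3, 4))          # 3 objects, 4 features each
detectors = {
    "red":  lambda o: float(o[0] > 0),     # toy property detector networks
    "blue": lambda o: float(o[1] > 0),
}

def relevance(instruction_property):
    # Score every object for the property named in the instruction.
    det = detectors[instruction_property]
    return np.array([det(o) for o in objects])

def modified_representation(instruction_property):
    # Reweight object features so irrelevant objects are suppressed before
    # the action-selection network sees them.
    weights = relevance(instruction_property)
    return objects * weights[:, None]

modified = modified_representation("red")
print(modified.shape)  # (3, 4)
```

The design choice worth noting is the factoring: detectors are trained per property, so a new instruction such as "pick up the red object" only needs to route through the matching detector rather than retrain the whole agent.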
  • Publication number: 20200104680
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network, wherein the action selection policy neural network is configured to process an observation characterizing a state of an environment to generate an action selection policy output, wherein the action selection policy output is used to select an action to be performed by an agent interacting with an environment. In one aspect, a method comprises: obtaining an observation characterizing a state of the environment subsequent to the agent performing a selected action; generating a latent representation of the observation; processing the latent representation of the observation using a discriminator neural network to generate an imitation score; determining a reward from the imitation score; and adjusting the current values of the action selection policy neural network parameters based on the reward using a reinforcement learning training technique.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Scott Ellison Reed, Yusuf Aytar, Ziyu Wang, Tom Paine, Sergio Gomez Colmenarejo, David Budden, Tobias Pfaff, Aaron Gerard Antonius van den Oord, Oriol Vinyals, Alexander Novikov
  • Publication number: 20190220748
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Application
    Filed: May 19, 2017
    Publication date: July 18, 2019
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau