Patents by Inventor Misha Man Ray Denil

Misha Man Ray Denil has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240042600
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: June 8, 2023
    Publication date: February 8, 2024
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
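The abstract above outlines a reward-relabelling pipeline: annotate a subset of logged robot experience with task rewards, fit a reward model to those annotations, use the trained model to attach predicted rewards to further experience, and train a policy on the result. The sketch below illustrates that flow in PyTorch; the network shapes, the synthetic data, and the reward-weighted cloning objective standing in for the RL step are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the annotate -> reward model -> relabel -> policy flow.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 16, 4

class RewardModel(nn.Module):
    """Predicts a scalar task reward from an observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs):
        return self.net(obs).squeeze(-1)

class PolicyNetwork(nn.Module):
    """Maps an observation to a policy output (here, continuous action means)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))

    def forward(self, obs):
        return self.net(obs)

# Maintained robot experience data (observations, actions taken), plus a
# small annotated subset carrying task-specific reward labels.
exp_obs, exp_act = torch.randn(1000, OBS_DIM), torch.randn(1000, ACT_DIM)
annot_obs, annot_reward = torch.randn(100, OBS_DIM), torch.rand(100)

# Train the reward model on the annotation data.
reward_model = RewardModel()
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(reward_model(annot_obs), annot_reward)
    opt.zero_grad(); loss.backward(); opt.step()

# Generate task-specific training data: relabel a second subset of the
# experience with rewards predicted by the trained reward model.
with torch.no_grad():
    pred_reward = reward_model(exp_obs)

# Train the policy network on the relabelled data. Reward-weighted
# behavioural cloning here stands in for whichever RL algorithm is used.
policy = PolicyNetwork()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
weights = torch.softmax(pred_reward, dim=0)
for _ in range(200):
    per_example = ((policy(exp_obs) - exp_act) ** 2).mean(dim=-1)
    loss = (weights * per_example).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point is that only the small annotated subset needs human effort; the reward model then turns the whole experience store into task-specific training data.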
  • Publication number: 20230376771
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Application
    Filed: March 8, 2023
    Publication date: November 23, 2023
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
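The abstract above describes replacing a hand-designed optimizer with an update rule computed by a recurrent neural network. A minimal sketch of that idea follows: a coordinate-wise LSTM reads each parameter's gradient and emits an update at every time step. All sizes, the quadratic optimizee, and the step scale are illustrative assumptions; in the full method the LSTM itself is meta-trained by backpropagating the optimizee's loss through the unrolled updates, an outer loop omitted here.

```python
# Hypothetical sketch of an RNN-determined update rule ("learned optimizer").
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    """LSTM that maps each coordinate's gradient to a parameter update."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        # Coordinates are treated as a batch, so one small LSTM is shared
        # across all parameters of the optimizee.
        h, c = self.cell(grad.unsqueeze(-1), state)
        return self.out(h).squeeze(-1), (h, c)

# Optimizee: a quadratic objective f(theta) = ||A theta - b||^2.
A, b = torch.randn(8, 8), torch.randn(8)
theta = torch.randn(8, requires_grad=True)

rnn_opt = RNNOptimizer()
state = (torch.zeros(8, 20), torch.zeros(8, 20))

# Unroll the learned update rule over a sequence of time steps: at each
# step the LSTM reads the current gradient and proposes the next update.
for _ in range(50):
    loss = ((A @ theta - b) ** 2).sum()
    (grad,) = torch.autograd.grad(loss, theta)
    with torch.no_grad():
        update, state = rnn_opt(grad, state)
        theta += 0.01 * update
```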
  • Patent number: 11712799
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 11615310
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 28, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
  • Publication number: 20230061411
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent to interact with an environment using an action selection neural network. In one aspect, a method comprises, at each time step in a sequence of time steps: generating a current representation of a state of a task being performed by the agent in the environment as of the current time step as a sequence of data elements; autoregressively generating a sequence of data elements representing a current action to be performed by the agent at the current time step; and after autoregressively generating the sequence of data elements representing the current action, causing the agent to perform the current action at the current time step.
    Type: Application
    Filed: August 24, 2021
    Publication date: March 2, 2023
    Inventors: Tom Erez, Alexander Novikov, Emilio Parisotto, Jack William Rae, Konrad Zolna, Misha Man Ray Denil, Joao Ferdinando Gomes de Freitas, Oriol Vinyals, Scott Ellison Reed, Sergio Gomez, Ashley Deloris Edwards, Jacob Bruce, Gabriel Barth-Maron
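The abstract above treats control as sequence modelling: the task state is serialized into a sequence of data elements, and the action is itself decoded autoregressively, one element at a time, before being executed. The sketch below illustrates the decoding loop; the vocabulary, the GRU decoder, the token counts, and the random "observation" are illustrative assumptions rather than the patented architecture.

```python
# Hypothetical sketch of autoregressive action decoding over token sequences.
import torch
import torch.nn as nn

VOCAB, D, ACT_TOKENS = 128, 64, 3

class ActionDecoder(nn.Module):
    """Sequence model over data elements (tokens) for states and actions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.rnn = nn.GRU(D, D, batch_first=True)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # next-token logits at each position

def select_action(model, state_tokens):
    """Autoregressively generate ACT_TOKENS action elements after the state."""
    seq = state_tokens.clone()
    for _ in range(ACT_TOKENS):
        logits = model(seq.unsqueeze(0))[0, -1]
        next_tok = torch.distributions.Categorical(logits=logits).sample()
        seq = torch.cat([seq, next_tok.unsqueeze(0)])
    return seq[-ACT_TOKENS:]  # the action, as a sequence of data elements

model = ActionDecoder()
state_tokens = torch.randint(0, VOCAB, (10,))  # serialized current state
action = select_action(model, state_tokens)    # then executed by the agent
```

Because each generated element is conditioned on the state sequence and on the action elements already emitted, structured multi-part actions can be produced by a single decoder.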
  • Patent number: 11074481
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 27, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
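The abstract above describes an auxiliary task: a geometry-prediction head consumes an intermediate output of the policy network and predicts a geometric feature of the environment (depth is a natural instance), and the gradient of that auxiliary loss is backpropagated into the shared trunk alongside the RL update. A minimal sketch follows; the conv shapes, the random depth target, the cross-entropy stand-in for the RL loss, and the 0.1 weighting are illustrative assumptions.

```python
# Hypothetical sketch of a geometry-based auxiliary loss on a shared trunk.
import torch
import torch.nn as nn

N_ACTIONS = 6

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.policy_head = nn.Linear(32 * 21 * 21, N_ACTIONS)

    def forward(self, img):
        feats = self.trunk(img)                      # intermediate output
        return self.policy_head(feats.flatten(1)), feats

class GeometryHead(nn.Module):
    """Predicts a depth map from the policy network's intermediate output."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(32, 1, 1)

    def forward(self, feats):
        return self.conv(feats)

policy, geom = PolicyNet(), GeometryHead()
opt = torch.optim.Adam(list(policy.parameters()) + list(geom.parameters()), lr=1e-4)

img = torch.randn(8, 3, 84, 84)            # observation images
depth_target = torch.randn(8, 1, 21, 21)   # geometric feature, e.g. depth
action_target = torch.randint(0, N_ACTIONS, (8,))

logits, feats = policy(img)
rl_loss = nn.functional.cross_entropy(logits, action_target)   # stand-in RL loss
aux_loss = nn.functional.mse_loss(geom(feats), depth_target)   # geometry loss
(rl_loss + 0.1 * aux_loss).backward()   # auxiliary gradient reaches the trunk
opt.step()
```

The auxiliary gradient shapes the trunk's representation toward geometry-aware features even when task reward is sparse, which is the motivation for attaching the head to an intermediate output rather than training it separately.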
  • Publication number: 20210078169
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 18, 2021
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Publication number: 20200167633
    Abstract: A reinforcement learning system is proposed comprising a plurality of property detector neural networks. Each property detector neural network is arranged to receive data representing an object within an environment, and to generate property data associated with a property of the object. A processor is arranged to receive an instruction indicating a task associated with an object having an associated property, and process the output of the plurality of property detector neural networks based upon the instruction to generate a relevance data item. The relevance data item indicates objects within the environment associated with the task. The processor also generates a plurality of weights based upon the relevance data item, and, based on the weights, generates modified data representing the plurality of objects within the environment. A neural network is arranged to receive the modified data and to output an action associated with the task.
    Type: Application
    Filed: May 22, 2018
    Publication date: May 28, 2020
    Inventors: Misha Man Ray Denil, Sergio Gomez Colmenarejo, Serkan Cabi, David William Saxton, Joao Ferdinando Gomes de Freitas
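The abstract above describes instruction-conditioned object weighting: one property detector network per property scores every object, the instruction picks out the relevant property, the resulting relevance scores become per-object weights, and the re-weighted object data drives action selection. The sketch below illustrates that pipeline; the two example properties, all shapes, the softmax weighting, and the pooled action network are illustrative assumptions.

```python
# Hypothetical sketch of property detectors gating objects by task relevance.
import torch
import torch.nn as nn

OBJ_DIM, N_OBJ, N_ACTIONS = 8, 5, 4
PROPERTIES = ["red", "round"]

# One property detector network per property; each maps an object's
# feature vector to a scalar property score.
detectors = nn.ModuleDict({
    p: nn.Sequential(nn.Linear(OBJ_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
    for p in PROPERTIES})
action_net = nn.Sequential(
    nn.Linear(OBJ_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))

objects = torch.randn(N_OBJ, OBJ_DIM)   # data representing objects in the scene
instruction = "red"                      # task concerns objects with this property

# Relevance data item: each object's score under the instructed property.
relevance = detectors[instruction](objects).squeeze(-1)   # shape (N_OBJ,)
weights = torch.softmax(relevance, dim=0)                 # per-object weights

# Modified data: objects re-weighted by task relevance, pooled, and fed
# to the network that outputs an action associated with the task.
modified = (weights.unsqueeze(-1) * objects).sum(dim=0)
action_logits = action_net(modified)
```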
  • Publication number: 20200151515
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: January 17, 2020
    Publication date: May 14, 2020
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Patent number: 10572776
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: February 25, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Publication number: 20190266449
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 29, 2019
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Publication number: 20190220748
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
    Type: Application
    Filed: May 19, 2017
    Publication date: July 18, 2019
    Inventors: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau