Patents by Inventor Mel Vecerik

Mel Vecerik has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062035
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
    Type: Application
    Filed: July 12, 2023
    Publication date: February 22, 2024
    Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
  • Publication number: 20240042600
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: June 8, 2023
    Publication date: February 8, 2024
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 11886997
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: January 30, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Patent number: 11868882
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 9, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Publication number: 20230281966
    Abstract: A method for training a neural network to predict keypoints of unseen objects using a training data set including labeled and unlabeled training data is described. The method comprising: receiving the training data set comprising a plurality of training samples, each training sample comprising a set of synchronized images of one or more objects from a respective scene, wherein each image in the set is synchronously taken by a respective camera from a different point of view, and wherein a subset of the set of synchronized images is labeled with ground-truth keypoints and the remaining images in the set are unlabeled; and for each of one or more training samples of the plurality of training samples: training the neural network on the training sample by updating current values of parameters of the neural network to minimize a loss function which is a combination of a supervised loss function and an unsupervised loss function.
    Type: Application
    Filed: July 28, 2021
    Publication date: September 7, 2023
    Inventors: Mel Vecerik, Jonathan Karl Scholz, Jean-Baptiste Regli
  • Patent number: 11741334
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 29, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
  • Patent number: 11712799
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Publication number: 20230023189
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Application
    Filed: October 7, 2022
    Publication date: January 26, 2023
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Patent number: 11534911
    Abstract: A system includes a neural network system implemented by one or more computers. The neural network system is configured to receive an observation characterizing a current state of a real-world environment being interacted with by a robotic agent to perform a robotic task and to process the observation to generate a policy output that defines an action to be performed by the robotic agent in response to the observation. The neural network system includes: (i) a sequence of deep neural networks (DNNs), in which the sequence of DNNs includes a simulation-trained DNN that has been trained on interactions of a simulated version of the robotic agent with a simulated version of the real-world environment to perform a simulated version of the robotic task, and (ii) a first robot-trained DNN that is configured to receive the observation and to process the observation to generate the policy output.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: December 27, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Mel Vecerik, Thomas Rothoerl, Andrei-Alexandru Rusu, Nicolas Manfred Otto Heess
  • Publication number: 20220355472
    Abstract: A system includes a neural network system implemented by one or more computers. The neural network system is configured to receive an observation characterizing a current state of a real-world environment being interacted with by a robotic agent to perform a robotic task and to process the observation to generate a policy output that defines an action to be performed by the robotic agent in response to the observation. The neural network system includes: (i) a sequence of deep neural networks (DNNs), in which the sequence of DNNs includes a simulation-trained DNN that has been trained on interactions of a simulated version of the robotic agent with a simulated version of the real-world environment to perform a simulated version of the robotic task, and (ii) a first robot-trained DNN that is configured to receive the observation and to process the observation to generate the policy output.
    Type: Application
    Filed: July 25, 2022
    Publication date: November 10, 2022
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Mel Vecerik, Thomas Rothoerl, Andrei-Alexandru Rusu, Nicolas Manfred Otto Heess
  • Patent number: 11468321
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Publication number: 20210078169
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 18, 2021
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 10872294
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network. In one aspect, a method comprises: obtaining an expert observation; processing the expert observation using a generative neural network system to generate a given observation-given action pair, wherein the generative neural network system has been trained to be more likely to generate a particular observation-particular action pair if performing the particular action in response to the particular observation is more likely to result in the environment later reaching the state characterized by a target observation; processing the given observation using the action selection policy neural network to generate a given action score for the given action; and adjusting the current values of the action selection policy neural network parameters to increase the given action score for the given action.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 22, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Mel Vecerik, Yannick Schroecker, Jonathan Karl Scholz
  • Publication number: 20200285909
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
    Type: Application
    Filed: May 22, 2020
    Publication date: September 10, 2020
    Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
  • Publication number: 20200223063
    Abstract: A system includes a neural network system implemented by one or more computers. The neural network system is configured to receive an observation characterizing a current state of a real-world environment being interacted with by a robotic agent to perform a robotic task and to process the observation to generate a policy output that defines an action to be performed by the robotic agent in response to the observation. The neural network system includes: (i) a sequence of deep neural networks (DNNs), in which the sequence of DNNs includes a simulation-trained DNN that has been trained on interactions of a simulated version of the robotic agent with a simulated version of the real-world environment to perform a simulated version of the robotic task, and (ii) a first robot-trained DNN that is configured to receive the observation and to process the observation to generate the policy output.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Mel Vecerik, Thomas Rothoerl, Andrei-Alexandru Rusu, Nicolas Manfred Otto Heess
  • Patent number: 10664725
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: May 26, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
  • Publication number: 20200151562
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Application
    Filed: June 28, 2018
    Publication date: May 14, 2020
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Patent number: 10632618
    Abstract: A system includes a neural network system implemented by one or more computers. The neural network system is configured to receive an observation characterizing a current state of a real-world environment being interacted with by a robotic agent to perform a robotic task and to process the observation to generate a policy output that defines an action to be performed by the robotic agent in response to the observation. The neural network system includes: (i) a sequence of deep neural networks (DNNs), in which the sequence of DNNs includes a simulation-trained DNN that has been trained on interactions of a simulated version of the robotic agent with a simulated version of the real-world environment to perform a simulated version of the robotic task, and (ii) a first robot-trained DNN that is configured to receive the observation and to process the observation to generate the policy output.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: April 28, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Mel Vecerik, Thomas Rothoerl, Andrei-Alexandru Rusu, Nicolas Manfred Otto Heess
  • Publication number: 20200104684
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network. In one aspect, a method comprises: obtaining an expert observation; processing the expert observation using a generative neural network system to generate a given observation-given action pair, wherein the generative neural network system has been trained to be more likely to generate a particular observation-particular action pair if performing the particular action in response to the particular observation is more likely to result in the environment later reaching the state characterized by a target observation; processing the given observation using the action selection policy neural network to generate a given action score for the given action; and adjusting the current values of the action selection policy neural network parameters to increase the given action score for the given action.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Mel Vecerik, Yannick Schroecker, Jonathan Karl Scholz
  • Publication number: 20190354813
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
    Type: Application
    Filed: July 31, 2019
    Publication date: November 21, 2019
    Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess