Patents by Inventor David BUDDEN

David BUDDEN has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948085
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Grant
    Filed: April 19, 2023
    Date of Patent: April 2, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
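    A minimal sketch of the idea in the abstract above: an actor network selects continuous actions, and a critic ("distribution Q network") outputs a categorical distribution over returns rather than a single value; the actor is updated to increase the critic's expected return. This is an illustrative PyTorch toy, not the patented implementation; all sizes, names, and the atom support are assumptions, and the critic's own (projected Bellman) update is omitted.

    ```python
    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM, N_ATOMS = 8, 2, 51
    V_MIN, V_MAX = -10.0, 10.0
    atoms = torch.linspace(V_MIN, V_MAX, N_ATOMS)  # support of the return distribution

    # Actor maps observations to bounded continuous actions.
    actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM), nn.Tanh())
    # Distributional critic maps (observation, action) to logits over return atoms.
    critic = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, N_ATOMS))
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

    def expected_q(obs, act):
        """Expected return under the critic's categorical distribution."""
        logits = critic(torch.cat([obs, act], dim=-1))
        probs = torch.softmax(logits, dim=-1)
        return (probs * atoms).sum(dim=-1)

    # Actor update: ascend the expected value of the distributional critic.
    obs = torch.randn(32, OBS_DIM)   # stand-in for a sampled batch of observations
    actor_loss = -expected_q(obs, actor(obs)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    ```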
  • Patent number: 11907821
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model. A method includes: maintaining a plurality of training sessions; assigning, to each worker of one or more workers, a respective training session of the plurality of training sessions; repeatedly performing operations until meeting one or more termination criteria, the operations comprising: receiving an updated training session from a respective worker of the one or more workers, selecting a second training session, selecting, based on comparing the updated training session and the second training session using a fitness evaluation function, either the updated training session or the second training session as a parent training session, generating a child training session from the selected parent training session, and assigning the child training session to an available worker, and selecting a candidate model to be a trained model for the machine learning model.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: February 20, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Ang Li, Valentin Clement Dalibard, David Budden, Ola Spyra, Maxwell Elliot Jaderberg, Timothy James Alexander Harley, Sagi Perel, Chenjie Gu, Pramod Gupta
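    A hedged sketch of the evolutionary loop the abstract above describes: a population of training sessions is maintained, a worker returns an updated session, it is compared against a second session with a fitness function, the winner becomes the parent, and a perturbed child replaces the worker's slot. Everything here is a toy stand-in (a "session" is a hyperparameter dict plus a score; the worker step just nudges the score); none of the names come from the patent.

    ```python
    import random

    def worker_step(session):
        """Stand-in for a worker running some training on its session."""
        hp, score = session
        return hp, score + random.gauss(hp["lr"], 0.01)

    def fitness(session):
        return session[1]

    def make_child(parent):
        """Copy the parent and perturb its hyperparameters (explore)."""
        hp, _score = parent
        return {"lr": hp["lr"] * random.choice([0.8, 1.2])}, _score

    population = [({"lr": 0.01}, 0.0) for _ in range(4)]
    for step in range(100):                    # until termination criteria are met
        i = random.randrange(len(population))
        updated = worker_step(population[i])   # updated session from a worker
        other = random.choice(population)      # a second session to compare against
        parent = max(updated, other, key=fitness)
        population[i] = make_child(parent)     # child assigned to an available worker

    best = max(population, key=fitness)        # select the candidate model
    print("selected candidate:", best)
    ```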
  • Publication number: 20240042600
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: June 8, 2023
    Publication date: February 8, 2024
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
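    A minimal sketch of the data flow in the abstract above: fit a reward model on a small annotated subset of stored robot experience, then use it to attach predicted rewards to further experiences, producing task-specific training data for a policy. The scikit-learn regressor and random data are illustrative stand-ins for the actual reward model and experience store.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    experience = rng.normal(size=(1000, 16))              # stored observations
    annotated_idx = rng.choice(1000, size=50, replace=False)
    annotated_rewards = rng.uniform(size=50)              # annotation data stand-in

    # 1) Train the reward model on the annotated subset.
    reward_model = Ridge().fit(experience[annotated_idx], annotated_rewards)

    # 2) Relabel a second subset of the experience with predicted rewards.
    predicted = reward_model.predict(experience)
    task_data = list(zip(experience, predicted))          # (observation, reward) pairs

    # 3) `task_data` would now feed the policy neural network's training step.
    print(len(task_data), "labelled experiences")
    ```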
  • Publication number: 20230409907
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Application
    Filed: April 19, 2023
    Publication date: December 21, 2023
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
  • Publication number: 20230252288
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 10, 2023
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
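    An illustrative sketch of the actor/learner split in the abstract above: several actor threads each hold a replica of the network parameters, generate experience into a shared queue, and periodically refresh their replica, while a learner consumes that experience and updates the canonical parameters. The threading, queue size, and toy "gradient step" are assumptions for demonstration, not the patented system.

    ```python
    import queue, threading, random

    experience_q = queue.Queue(maxsize=1000)
    params = {"weights": 0.0}        # canonical parameters held by the learner
    lock = threading.Lock()

    def actor(n_steps):
        with lock:
            replica = dict(params)                 # local replica of the network
        for _ in range(n_steps):
            action = replica["weights"] + random.gauss(0, 1)  # act in the environment
            experience_q.put((action, random.random()))       # (action, reward) tuple
            with lock:
                replica = dict(params)             # refresh the replica

    def learner(n_updates):
        for _ in range(n_updates):
            action, reward = experience_q.get()
            with lock:
                params["weights"] += 0.01 * reward  # stand-in learner update

    actors = [threading.Thread(target=actor, args=(50,)) for _ in range(4)]
    learn = threading.Thread(target=learner, args=(200,))
    for t in actors:
        t.start()
    learn.start()
    for t in actors:
        t.join()
    learn.join()
    print("learned weights:", params["weights"])
    ```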
  • Patent number: 11712799
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 11663475
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: May 30, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
  • Patent number: 11663441
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network, wherein the action selection policy neural network is configured to process an observation characterizing a state of an environment to generate an action selection policy output, wherein the action selection policy output is used to select an action to be performed by an agent interacting with an environment. In one aspect, a method comprises: obtaining an observation characterizing a state of the environment subsequent to the agent performing a selected action; generating a latent representation of the observation; processing the latent representation of the observation using a discriminator neural network to generate an imitation score; determining a reward from the imitation score; and adjusting the current values of the action selection policy neural network parameters based on the reward using a reinforcement learning training technique.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: May 30, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Scott Ellison Reed, Yusuf Aytar, Ziyu Wang, Tom Paine, Sergio Gomez Colmenarejo, David Budden, Tobias Pfaff, Aaron Gerard Antonius van den Oord, Oriol Vinyals, Alexander Novikov
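    A hedged PyTorch sketch of the reward computation described in the abstract above: encode an observation into a latent representation, score it with a discriminator, and map that imitation score to a reward for a reinforcement learning update. The network shapes and the log-score reward mapping are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())          # latent representation
    discriminator = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())  # imitation score in (0, 1)

    obs = torch.randn(1, 32)               # observation after the agent's selected action
    with torch.no_grad():
        latent = encoder(obs)
        score = discriminator(latent)      # closer to 1 => more demonstrator-like
        reward = torch.log(score + 1e-8)   # one common score-to-reward mapping
    print(float(reward))
    # `reward` would then drive a standard RL update of the policy parameters.
    ```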
  • Patent number: 11625604
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 11, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
  • Publication number: 20230079338
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for training a neural network to control a real-world agent interacting with a real-world environment to cause the real-world agent to perform a particular task. One of the methods includes training the neural network to determine first values of the parameters by optimizing a first task-specific objective that measures a performance of the policy neural network in controlling a simulated version of the real-world agent; obtaining real-world data generated from interactions of the real-world agent with the real-world environment; and training the neural network to determine trained values of the parameters from the first values of the parameters by jointly optimizing (i) a self-supervised objective that measures at least a performance of internal representations generated by the neural network on a self-supervised task performed on the real-world data and (ii) a second task-specific objective.
    Type: Application
    Filed: October 8, 2020
    Publication date: March 16, 2023
    Inventors: Eren Sezener, Joel William Veness, Marcus Hutter, Jianan Wang, David Budden
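    A minimal sketch of the two-phase recipe in the abstract above, with toy losses: phase 1 optimizes a task-specific objective on simulated data; phase 2 continues from those weights, jointly minimizing a self-supervised loss on real-world data plus a second task objective. Both loss functions here are illustrative stand-ins, not the patented objectives.

    ```python
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def task_loss(batch):
        """Stand-in task-specific objective."""
        return net(batch).pow(2).mean()

    def self_supervised_loss(batch):
        """Toy self-supervised task: be robust to masking one input feature."""
        noisy = batch.clone()
        noisy[:, 0] = 0.0
        return (net(noisy) - net(batch).detach()).pow(2).mean()

    sim_batch = torch.randn(64, 8)   # phase 1: simulated agent interactions
    for _ in range(100):
        opt.zero_grad()
        task_loss(sim_batch).backward()
        opt.step()

    real_batch = torch.randn(64, 8)  # phase 2: real-world data
    for _ in range(100):
        opt.zero_grad()
        loss = self_supervised_loss(real_batch) + task_loss(real_batch)
        loss.backward()
        opt.step()
    ```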
  • Publication number: 20230020071
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Application
    Filed: September 15, 2022
    Publication date: January 19, 2023
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
  • Patent number: 11481629
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: October 25, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
  • Publication number: 20220261639
    Abstract: A method is proposed of training a neural network to generate action data for controlling an agent to perform a task in an environment. The method includes obtaining, for each of a plurality of performances of the task, one or more first tuple datasets, each first tuple dataset comprising state data characterizing a state of the environment at a corresponding time during the performance of the task; and a concurrent process of training the neural network and a discriminator network. The training process comprises a plurality of neural network update steps and a plurality of discriminator network update steps.
    Type: Application
    Filed: July 16, 2020
    Publication date: August 18, 2022
    Inventors: Konrad Zolna, Scott Ellison Reed, Ziyu Wang, Alexander Novikov, Sergio Gomez Colmenarejo, Joao Ferdinando Gomes de Freitas, David Budden, Serkan Cabi
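    An illustrative sketch of the concurrent training schedule in the abstract above: updates alternate between a discriminator network, trained to separate demonstrated state data from policy-generated state data, and the policy network, trained to fool it. The architectures and alternation scheme are assumptions for demonstration.

    ```python
    import torch
    import torch.nn as nn

    policy = nn.Sequential(nn.Linear(8, 8), nn.Tanh())  # state -> generated state
    disc = nn.Sequential(nn.Linear(8, 1))                # state -> real/fake logit
    p_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    demo_states = torch.randn(256, 8)  # state data from task performances

    for step in range(200):
        gen_states = policy(torch.randn(64, 8))
        if step % 2 == 0:              # discriminator network update step
            d_loss = (bce(disc(demo_states[:64]), torch.ones(64, 1))
                      + bce(disc(gen_states.detach()), torch.zeros(64, 1)))
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()
        else:                          # neural network (policy) update step
            p_loss = bce(disc(gen_states), torch.ones(64, 1))
            p_opt.zero_grad()
            p_loss.backward()
            p_opt.step()
    ```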
  • Patent number: 11138744
    Abstract: A method for determining whether a goal is achieved by a trajectory of a ball using a mobile computer device comprises capturing a sequence of video frames of the ball with a camera of the mobile computer device; detecting the ball in at least three of the video frames; computing a trajectory of the ball using the detections of the ball; detecting a goal image in at least one of the video frames; and computing whether the trajectory of the ball intersects, or nearly intersects, a goal plane computed from the goal image, according to a goal criterion.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: October 5, 2021
    Assignee: FORMALYTICS HOLDINGS PTY LTD
    Inventors: Andrew Hall, David Budden, Grant Etherington, Holly Ade Simpson, Tres Kani, Se Yeun Kim
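    A hedged geometric sketch of the check described in the abstract above: fit a quadratic-in-time trajectory to three or more ball detections, solve for where it crosses the goal plane, and test whether that point lies inside the goal mouth. The detection values, goal-plane distance, and goal dimensions are illustrative numbers, not values from the patent.

    ```python
    import numpy as np

    # Ball detections as (t, x, y, z), with z the distance toward the goal.
    detections = np.array([
        [0.00, 0.0, 1.0, 0.0],
        [0.10, 0.2, 1.4, 1.2],
        [0.20, 0.4, 1.6, 2.3],
        [0.30, 0.6, 1.6, 3.3],
    ])
    t = detections[:, 0]
    A = np.stack([np.ones_like(t), t, t**2], axis=1)
    # Least-squares quadratic fit per coordinate: pos(t) = c0 + c1*t + c2*t^2.
    coef, *_ = np.linalg.lstsq(A, detections[:, 1:], rcond=None)

    GOAL_Z = 5.0                                  # goal plane from the detected goal image
    GOAL_W, GOAL_H = (-3.66, 3.66), (0.0, 2.44)   # regulation goal mouth, metres

    # Solve c0 + c1*t + c2*t^2 = GOAL_Z for the crossing time; keep real roots.
    cz = coef[:, 2]
    roots = np.roots([cz[2], cz[1], cz[0] - GOAL_Z])
    times = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    goal = False
    for tc in times:
        x, y, _ = coef[0] + coef[1] * tc + coef[2] * tc**2
        if GOAL_W[0] <= x <= GOAL_W[1] and GOAL_H[0] <= y <= GOAL_H[1]:
            goal = True   # trajectory meets the goal criterion
    print("goal scored:", goal)
    ```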
  • Publication number: 20210097443
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model. A method includes: maintaining a plurality of training sessions; assigning, to each worker of one or more workers, a respective training session of the plurality of training sessions; repeatedly performing operations until meeting one or more termination criteria, the operations comprising: receiving an updated training session from a respective worker of the one or more workers, selecting a second training session, selecting, based on comparing the updated training session and the second training session using a fitness evaluation function, either the updated training session or the second training session as a parent training session, generating a child training session from the selected parent training session, and assigning the child training session to an available worker, and selecting a candidate model to be a trained model for the machine learning model.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Ang Li, Valentin Clement Dalibard, David Budden, Ola Spyra, Maxwell Elliot Jaderberg, Timothy James Alexander Harley, Sagi Perel, Chenjie Gu, Pramod Gupta
  • Publication number: 20210078169
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 18, 2021
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Publication number: 20200293883
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network.
    Type: Application
    Filed: October 29, 2018
    Publication date: September 17, 2020
    Inventors: David Budden, Matthew William Hoffman, Gabriel Barth-Maron
  • Publication number: 20200265305
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Application
    Filed: October 29, 2018
    Publication date: August 20, 2020
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
  • Publication number: 20200104680
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network, wherein the action selection policy neural network is configured to process an observation characterizing a state of an environment to generate an action selection policy output, wherein the action selection policy output is used to select an action to be performed by an agent interacting with an environment. In one aspect, a method comprises: obtaining an observation characterizing a state of the environment subsequent to the agent performing a selected action; generating a latent representation of the observation; processing the latent representation of the observation using a discriminator neural network to generate an imitation score; determining a reward from the imitation score; and adjusting the current values of the action selection policy neural network parameters based on the reward using a reinforcement learning training technique.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Scott Ellison Reed, Yusuf Aytar, Ziyu Wang, Tom Paine, Sergio Gomez Colmenarejo, David Budden, Tobias Pfaff, Aaron Gerard Antonius van den Oord, Oriol Vinyals, Alexander Novikov
  • Patent number: 10482285
    Abstract: User events of a platform are processed to extract aggregate information about users of the platform at an event processing system. A query relating to the user events is received at the system and at least one query parameter is determined from the query. Various privacy controls are disclosed for ensuring that any information released in response to the query cannot be used to identify users individually or to infer information about individual users.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: November 19, 2019
    Assignee: Mediasift Limited
    Inventors: Lorenzo Alberton, Alistair Joseph Bastian, Timothy David Budden
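    A hedged sketch of one privacy control of the kind the abstract above describes: an aggregate answer is released for a query bucket only when enough distinct users contribute to it, so that no released figure can single out an individual. The threshold value and event schema are illustrative assumptions.

    ```python
    from collections import defaultdict

    K_THRESHOLD = 10  # minimum distinct users per released bucket (assumed)

    events = [
        {"user": f"u{i % 15}", "country": "GB" if i % 3 else "FR"}
        for i in range(100)
    ]

    def answer_query(events, group_by):
        """Count distinct users per bucket, suppressing small buckets."""
        buckets = defaultdict(set)
        for e in events:
            buckets[e[group_by]].add(e["user"])
        return {
            key: len(users) if len(users) >= K_THRESHOLD else "suppressed"
            for key, users in buckets.items()
        }

    # The "GB" bucket has enough distinct users to be released; "FR" does not.
    print(answer_query(events, "country"))
    ```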