Patents by Inventor David Silver

David Silver has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230084700
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network. One of the methods includes receiving an observation characterizing a current state of the environment; determining a target network output for the observation by performing a look ahead search of possible future states of the environment starting from the current state until the environment reaches a possible future state that satisfies one or more termination criteria, wherein the look ahead search is guided by the neural network in accordance with current values of the network parameters; selecting an action to be performed by the agent in response to the observation using the target network output generated by performing the look ahead search; and storing, in an exploration history data store, the target network output in association with the observation for use in updating the current values of the network parameters.
    Type: Application
    Filed: September 19, 2022
    Publication date: March 16, 2023
    Inventors: Karen Simonyan, David Silver, Julian Schrittwieser
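The look-ahead-search training loop this abstract claims can be pictured with a toy, self-contained sketch. Everything below is invented for illustration and is not the patented implementation: a chain environment, an exhaustive depth-limited search standing in for the network-guided search, and a plain list standing in for the exploration history data store.

```python
# Toy sketch of the claimed loop: a look-ahead search over possible future
# states produces a target output that both selects the action and is stored
# for later parameter updates. All pieces here are illustrative stand-ins.
GOAL = 5

def next_state(state, action):
    return max(0, state + action)

def lookahead_value(state, policy, depth=3):
    """Search future states (guided by the current policy values) until a
    termination criterion -- here, reaching the goal or a fixed depth."""
    if state >= GOAL or depth == 0:
        return policy.get(state, 0.0) + (1.0 if state >= GOAL else 0.0)
    return max(lookahead_value(next_state(state, a), policy, depth - 1)
               for a in (+1, -1))

policy = {s: 0.0 for s in range(GOAL + 1)}        # "network" value estimates
exploration_history = []                          # stands in for the data store
state = 0
while state < GOAL:
    # Target network output for this observation, from the look-ahead search.
    targets = {a: lookahead_value(next_state(state, a), policy) for a in (+1, -1)}
    action = max(targets, key=targets.get)        # act using the search result
    exploration_history.append((state, targets))  # keep it for later updates
    state = next_state(state, action)

print(f"reached {state} with {len(exploration_history)} stored targets")
```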
  • Patent number: 11568250
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: January 31, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
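The expected-learning-progress measure in this abstract is commonly proxied by the absolute TD error, as in the prioritized experience replay literature. A minimal sketch under that assumption, with a small value table standing in for the neural network:

```python
# Prioritized replay sketch: each stored transition carries a learning-
# progress measure (here, |TD error|), sampling favors higher measures, and
# the measure is refreshed after training on the sampled transition.
import random

replay = []                                   # [state, reward, next_state, priority]
values = [0.0] * 6                            # toy stand-in for the network

def add(state, reward, next_state):
    td_error = abs(reward + values[next_state] - values[state])
    replay.append([state, reward, next_state, td_error + 1e-3])

def sample():
    # Prioritized selection: probability proportional to the stored measure.
    total = sum(item[3] for item in replay)
    return random.choices(replay, weights=[item[3] / total for item in replay])[0]

for s in range(5):                            # fill the memory with a toy episode
    add(s, 1.0 if s == 4 else 0.0, s + 1)

for _ in range(100):                          # train on prioritized samples
    s, r, s2, _ = item = sample()
    td_error = r + values[s2] - values[s]
    values[s] += 0.5 * td_error               # update the "network"
    item[3] = abs(td_error) + 1e-3            # refresh the priority

print([round(v, 2) for v in values])
```

The published prioritized-replay work pairs this kind of sampling with importance-sampling corrections for the bias it introduces; those are omitted here for brevity.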
  • Patent number: 11507827
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed training of reinforcement learning systems. One of the methods includes receiving, by a learner, current values of the parameters of the Q network from a parameter server, wherein each learner maintains a respective learner Q network replica and a respective target Q network replica; updating, by the learner, the parameters of the learner Q network replica maintained by the learner using the current values; selecting, by the learner, an experience tuple from a respective replay memory; computing, by the learner, a gradient from the experience tuple using the learner Q network replica maintained by the learner and the target Q network replica maintained by the learner; and providing, by the learner, the computed gradient to the parameter server.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: November 22, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Praveen Deepak Srinivasan, Rory Fearon, Cagdas Alcicek, Arun Sarath Nair, Samuel Blackwell, Vedavyas Panneershelvam, Alessandro De Maria, Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Mustafa Suleyman
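A rough single-process rendering of the claimed learner loop, with a one-parameter linear Q function replacing the Q network replicas; the parameter-server class and the toy experience tuples are invented for the sketch:

```python
# Schematic of the claimed learner steps: (1) receive current parameters,
# (2) select an experience tuple from replay, (3) compute a gradient using
# the learner and target Q replicas, (4) provide the gradient to the server.
import random

class ParameterServer:
    def __init__(self):
        self.theta = 0.0
    def apply(self, grad, lr=0.1):
        self.theta -= lr * grad

def q(theta, state):                      # toy Q "network": Q(s) = theta * s
    return theta * state

def learner_step(server, target_theta, replay):
    theta = server.theta                  # 1. receive current parameters
    s, r, s2 = random.choice(replay)      # 2. select an experience tuple
    target = r + q(target_theta, s2)      # 3. target from the target replica
    grad = 2 * (q(theta, s) - target) * s #    gradient of the squared TD error
    server.apply(grad)                    # 4. provide the gradient to the server

server = ParameterServer()
replay = [(1.0, 1.0, 0.0), (2.0, 2.0, 0.0)]
target_theta = 0.0
for step in range(200):
    learner_step(server, target_theta, replay)
    if step % 50 == 0:
        target_theta = server.theta       # periodic target-network sync
print(round(server.theta, 2))             # approaches 1.0, where Q(s) = r
```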
  • Patent number: 11449750
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network. One of the methods includes receiving an observation characterizing a current state of the environment; determining a target network output for the observation by performing a look ahead search of possible future states of the environment starting from the current state until the environment reaches a possible future state that satisfies one or more termination criteria, wherein the look ahead search is guided by the neural network in accordance with current values of the network parameters; selecting an action to be performed by the agent in response to the observation using the target network output generated by performing the look ahead search; and storing, in an exploration history data store, the target network output in association with the observation for use in updating the current values of the network parameters.
    Type: Grant
    Filed: May 28, 2018
    Date of Patent: September 20, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Karen Simonyan, David Silver, Julian Schrittwieser
  • Publication number: 20220283088
    Abstract: The technology described herein provides a system and method for measuring an amount of virus in a sample to be tested. The system comprises a light emitting diode operable to emit UV light towards a sample to be tested, and a detector operable to detect light from fluorescence events induced in a sample by UV light emitted from the light emitting diode. An amount of virus in the sample is then estimated based on at least the light from fluorescence events that is detected by the detector.
    Type: Application
    Filed: September 9, 2021
    Publication date: September 8, 2022
    Inventor: Joshua David Silver
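The final estimation step, mapping detected fluorescence to a virus amount, might in practice run through a calibration curve. A hypothetical sketch, with invented calibration points and units:

```python
# Illustrative reading of the estimation step: interpolate the detector's
# fluorescence reading against known calibration samples. The calibration
# pairs and units below are invented for the sketch.
from bisect import bisect_left

# (detected fluorescence counts, known virus amount) calibration pairs
calibration = [(0.0, 0.0), (120.0, 1e3), (480.0, 1e4), (950.0, 1e5)]

def estimate_virus(counts):
    xs = [c for c, _ in calibration]
    ys = [v for _, v in calibration]
    if counts <= xs[0]:
        return ys[0]
    if counts >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, counts)
    frac = (counts - xs[i - 1]) / (xs[i] - xs[i - 1])   # linear interpolation
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

print(f"{estimate_virus(300.0):.0f} copies")   # detector reading -> estimate
```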
  • Publication number: 20220261647
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Application
    Filed: April 29, 2022
    Publication date: August 18, 2022
    Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
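A compressed sketch of the claimed arrangement: several workers run independently, each actor interacting with its own replica of the environment, while all of them update one shared parameter vector. The bandit-style environment and learning rate are stand-ins:

```python
# Asynchronous-workers sketch: four threads, each with its own environment
# replica, all updating a single shared parameter. The "environment" is a
# noisy constant-reward bandit, invented for illustration.
import random
import threading

shared_theta = [0.0]                       # shared network parameter
lock = threading.Lock()

class EnvReplica:                          # each worker gets its own replica
    def step(self):                        # noisy reward around a true value 1.0
        return 1.0 + random.gauss(0.0, 0.1)

def worker(env, steps=500):
    for _ in range(steps):
        reward = env.step()                # actor interacts with its replica
        with lock:                         # asynchronous shared update
            shared_theta[0] += 0.01 * (reward - shared_theta[0])

threads = [threading.Thread(target=worker, args=(EnvReplica(),)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(round(shared_theta[0], 2))           # shared estimate approaches 1.0
```

The published asynchronous-RL work often drops the lock entirely (Hogwild!-style lock-free updates); a lock is used here only to keep the toy deterministic in spirit.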
  • Patent number: 11334792
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: May 17, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu

  • Publication number: 20220042821
    Abstract: Aspects of the disclosure relate to generating scouting objectives in order to update map information used to control a fleet of vehicles in an autonomous driving mode. For instance, a notification from a vehicle of the fleet identifying a feature and a location of the feature may be received. A first bound for a scouting area may be identified based on the location of the feature. A second bound for the scouting area may be identified based on a lane closest to the feature. A scouting objective may be generated for the feature based on the first bound and the second bound.
    Type: Application
    Filed: August 10, 2020
    Publication date: February 10, 2022
    Inventors: Katharine Patterson, Joshua Herbach, David Silver, David Margines
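The two-bound construction can be sketched as follows; the coordinates, radius, and lane list are invented, and the real geometry is surely richer:

```python
# Hedged sketch of the claimed steps: a first bound around the reported
# feature location, a second bound from the lane nearest the feature, and a
# scouting objective built from both. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class ScoutingObjective:
    feature: str
    bounds: tuple          # ((min_x, min_y), (max_x, max_y))

# Lane centerlines as (start, end) points; invented map data.
lanes = [((0.0, 0.0), (100.0, 0.0)), ((0.0, 10.0), (100.0, 10.0))]

def make_objective(feature, loc, radius=5.0):
    # First bound: a box of fixed radius around the feature's location.
    first = ((loc[0] - radius, loc[1] - radius), (loc[0] + radius, loc[1] + radius))
    # Second bound: extend toward the lane whose centerline is closest.
    nearest = min(lanes, key=lambda lane: abs(loc[1] - lane[0][1]))
    lane_y = nearest[0][1]
    lo_y, hi_y = min(first[0][1], lane_y), max(first[1][1], lane_y)
    return ScoutingObjective(feature, ((first[0][0], lo_y), (first[1][0], hi_y)))

print(make_objective("construction cone", (42.0, 7.0)))
```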
  • Publication number: 20210397827
    Abstract: Aspects of the disclosure relate to detecting and responding to malfunctioning traffic signals for a vehicle having an autonomous driving mode. For instance, information identifying a detected state of a traffic signal for an intersection may be received. An anomaly for the traffic signal may be detected based on the detected state and prestored information about expected states of the traffic signal. The vehicle may be controlled in the autonomous driving mode based on the detected anomaly.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Inventors: David Silver, Carl Kershaw, Jonathan Hsiao, Edward Hsiao
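One plausible reading of the anomaly check, with an invented expected-state table and transition set standing in for the prestored information:

```python
# Minimal sketch: compare a detected signal state against prestored
# expectations for that signal, and flag anomalies. States, transitions and
# the fallback response are illustrative only.
EXPECTED_STATES = {"signal_17": {"red", "yellow", "green"}}
EXPECTED_TRANSITIONS = {("green", "yellow"), ("yellow", "red"), ("red", "green")}

def check_signal(signal_id, previous, detected):
    if detected not in EXPECTED_STATES.get(signal_id, set()):
        return "anomaly: unknown state"
    if (previous and previous != detected
            and (previous, detected) not in EXPECTED_TRANSITIONS):
        return "anomaly: unexpected transition"
    return "ok"

status = check_signal("signal_17", previous="green", detected="red")
if status != "ok":
    print(f"{status} -> fall back to cautious driving behavior")
```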
  • Publication number: 20210270969
    Abstract: Imaging apparatus (22) includes a radiation source (40), which emits pulsed beams (42) of optical radiation toward a target scene (24). An array (52) of sensing elements outputs signals indicative of respective times of incidence of photons on the sensing elements. Objective optics (54) form a first image of the target scene on the array of sensing elements. An image sensor (64) captures a second image of the target scene. Processing and control circuitry (56, 58) is configured to process the second image so as to detect a relative motion between at least one object in the target scene and the apparatus, to construct, responsively to the signals from the array, histograms of the times of incidence of the photons on the sensing elements, to adjust the histograms responsively to the detected relative motion, and to generate a depth map of the target scene based on the adjusted histograms.
    Type: Application
    Filed: September 2, 2019
    Publication date: September 2, 2021
    Inventors: David Silver, Eitan Hirsh, Moshe Laifenfeld, Tal Kaitz
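The histogram-adjustment idea can be illustrated with 1-D toy histograms; the bin width, the motion model (a pure bin shift), and the time-to-distance conversion are simplifications, not the patented method:

```python
# Toy rendering: per-pixel histograms of photon arrival times are corrected
# for relative motion (estimated here as a known bin shift) before depth is
# read off the histogram peak. All numbers are invented.
C = 3e8                                      # speed of light, m/s

def depth_from_histogram(hist, bin_width_s=1e-9):
    peak_bin = max(range(len(hist)), key=hist.__getitem__)
    return 0.5 * C * peak_bin * bin_width_s  # round-trip time -> distance

def adjust_for_motion(hist, shift_bins):
    # Relative motion between frames shows up as a shift of the arrival-time
    # peak; undo it so photons from both frames accumulate in the same bins.
    n = len(hist)
    return [hist[(i + shift_bins) % n] for i in range(n)]

frame_a = [0, 1, 9, 2, 0, 0, 0, 0]           # peak at bin 2
frame_b = [0, 0, 0, 1, 9, 2, 0, 0]           # same surface, shifted by motion
aligned = [a + b for a, b in zip(frame_a, adjust_for_motion(frame_b, 2))]
print(f"depth = {depth_from_histogram(aligned):.2f} m")
```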
  • Publication number: 20210208262
    Abstract: Imaging apparatus (22) includes a radiation source (40), which emits pulsed beams (42) of optical radiation toward a target scene (24). An array (52) of sensing elements (78) outputs signals indicative of respective times of incidence of photons in a first image of the target scene that is formed on the array of sensing elements. An image sensor (64) captures a second image of the target scene in registration with the first image. Processing and control circuitry (56, 58) identifies, responsively to the signals, areas of the array on which the pulses of optical radiation reflected from corresponding regions of the target scene are incident, and processes the signals from the sensing elements in the identified areas in order to measure depth coordinates of the corresponding regions of the target scene based on the times of incidence, while identifying, responsively to the second image, one or more of the regions of the target scene as no-depth regions.
    Type: Application
    Filed: September 2, 2019
    Publication date: July 8, 2021
    Inventors: David Silver, Moshe Laifenfeld, Tal Kaitz
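A sketch of the selective, no-depth-region processing, with invented 1-D "images" and thresholds:

```python
# Sketch: the registered second image flags regions (here, very dark pixels
# with too few photon returns) as no-depth, and arrival times are only
# converted to depth elsewhere. Thresholds and data are invented.
intensity = [200, 180,  5, 10, 220]      # registered conventional image
returns   = [ 40,  35,  1,  2,  50]      # photon counts per region
times_ns  = [4.0, 4.2, 0., 0., 6.0]      # peak arrival time where available

def region_depths(min_intensity=20, min_returns=5):
    depths = []
    for inten, count, t in zip(intensity, returns, times_ns):
        if inten < min_intensity or count < min_returns:
            depths.append(None)          # identified as a no-depth region
        else:
            # time of incidence (round trip) -> meters
            depths.append(round(0.5 * 3e8 * t * 1e-9, 2))
    return depths

print(region_depths())   # [0.6, 0.63, None, None, 0.9]
```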
  • Publication number: 20210182688
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
    Type: Application
    Filed: February 24, 2021
    Publication date: June 17, 2021
    Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
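The shared-trunk arrangement, in which auxiliary control and reward-prediction heads backpropagate into the action-selection network's intermediate output, can be caricatured with scalar weights; all numbers below are invented:

```python
# Scalar caricature: auxiliary-control and reward-prediction heads read the
# trunk's intermediate output, and their gradients train the shared trunk
# weight that the policy head also uses. Purely illustrative.
def trunk(obs, w):                    # shared representation (intermediate output)
    return w * obs

def policy_head(feat):  return feat            # action-selection output
def aux_head(feat):     return 2.0 * feat      # auxiliary control output
def reward_head(feat):  return feat + 1.0      # predicted reward

w = 0.5                               # the shared weight both heads shape
for obs, true_reward in [(1.0, 2.0), (2.0, 3.0)] * 300:
    feat = trunk(obs, w)
    # Gradients of both auxiliary losses with respect to the shared weight:
    d_reward = 2.0 * (reward_head(feat) - true_reward) * obs
    d_aux = 2.0 * (aux_head(feat) - 2.0 * obs) * 2.0 * obs
    w -= 0.01 * (d_reward + d_aux)    # auxiliary gradients train the trunk

print(round(w, 2), policy_head(trunk(1.0, w)))   # trunk settles where both agree
```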
  • Publication number: 20210166127
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Application
    Filed: February 8, 2021
    Publication date: June 3, 2021
    Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
  • Publication number: 20210089915
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Application
    Filed: December 4, 2020
    Publication date: March 25, 2021
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
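A numeric toy of the meta-gradient scheme: the return G depends on a return parameter η (here a discount-like factor), θ is trained against G(η), and η itself is updated by differentiating a meta-objective J′, approximated below with finite differences. The dynamics are invented:

```python
# Meta-gradient toy: eta parameterizes the return G, theta is trained toward
# G(eta), and eta follows the gradient of a meta-objective J' through the
# theta update. Rewards and objectives are invented stand-ins.
rewards = [0.0, 0.0, 1.0]

def G(eta):                       # return function with meta-parameter eta
    return sum(r * eta ** t for t, r in enumerate(rewards))

def update_theta(theta, eta, lr=0.5):
    return theta + lr * (G(eta) - theta)    # train theta toward the return

def meta_objective(theta):        # J': closeness to the undiscounted return
    true_return = sum(rewards)
    return -(theta - true_return) ** 2

theta, eta, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    # Differentiate J' with respect to eta through the theta update.
    j_plus = meta_objective(update_theta(theta, eta + eps))
    j_minus = meta_objective(update_theta(theta, eta - eps))
    eta += 0.1 * (j_plus - j_minus) / (2 * eps)
    theta = update_theta(theta, eta)

print(round(eta, 2), round(theta, 2))   # eta drifts toward 1, theta toward 1
```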
  • Patent number: 10956820
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: March 23, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
  • Patent number: 10936946
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: March 2, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
  • Publication number: 20200410351
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
    Type: Application
    Filed: September 14, 2020
    Publication date: December 31, 2020
    Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
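A compressed actor-critic sketch along the lines of the abstract, with quadratic and linear toys replacing the actor, critic, and target computations; the environment and learning rates are invented:

```python
# Actor-critic toy: the critic is fit to target outputs built from replayed
# experience, then the actor follows the critic's action-gradient. The
# one-parameter models are illustrative, not the patented networks.
import random

w, u = 0.0, 0.0                                 # critic and actor parameters

def q(s, a):      return -(a - w * s) ** 2      # critic: Q(s, a)
def reward(s, a): return -(a - 2.0 * s) ** 2    # environment; optimum is a = 2s

minibatch = [1.0, 2.0]                          # states from replayed tuples
for _ in range(3000):
    s = random.choice(minibatch)
    a = u * s + random.gauss(0.0, 0.3)          # training action with exploration
    target = reward(s, a)                       # target output for the tuple
    # Critic update: descend the squared error between Q(s, a) and the target.
    dq_dw = 2.0 * s * (a - w * s)
    w -= 0.01 * 2.0 * (q(s, a) - target) * dq_dw
    # Actor update: follow the critic's gradient with respect to the action.
    dq_da = -2.0 * (u * s - w * s)
    u += 0.01 * dq_da * s

print(round(w, 2), round(u, 2))                 # both drift toward 2.0
```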
  • Patent number: 10867242
    Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: December 15, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Thore Kurt Hartwig Graepel, Shih-Chieh Huang, David Silver, Arthur Clement Guez, Laurent Sifre, Ilya Sutskever, Christopher Maddison
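The three claimed training stages can be staged in miniature; every "network" below is a single scalar, and the expert data, environment, and rewards are invented:

```python
# Staged toy of the claimed pipeline: (1) supervised policy from expert data,
# (2) reinforcement-learning policy initialized from it and improved, (3) a
# value function regressed on the RL policy's long-term reward.
import random

expert_moves = [1.0] * 50
sl_policy = sum(expert_moves) / len(expert_moves)   # stage 1: supervised policy

rl_policy = sl_policy                               # stage 2: init from stage 1
for _ in range(1000):
    a = rl_policy + random.gauss(0.0, 0.1)          # explore around the policy
    baseline = -(rl_policy - 1.5) ** 2              # reward at the mean action
    reward = -(a - 1.5) ** 2                        # environment prefers a = 1.5
    rl_policy += 0.5 * (a - rl_policy) * (reward - baseline)   # REINFORCE-style

value = 0.0                                         # stage 3: value "network"
for _ in range(500):                                # regress on long-term reward
    outcome = -(rl_policy - 1.5) ** 2 + random.gauss(0.0, 0.05)
    value += 0.05 * (outcome - value)

print(round(sl_policy, 2), round(rl_policy, 2), round(value, 2))
```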
  • Patent number: 10860926
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 8, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
  • Publication number: 20200327399
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Application
    Filed: June 25, 2020
    Publication date: October 15, 2020
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
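The three claimed components compose naturally into an imagined rollout; the linear maps below are invented stand-ins for the state-representation, prediction, and value networks:

```python
# Composition sketch: map an observation to an internal state, roll the state
# forward with predicted rewards, and bootstrap with a value prediction --
# all in the learned internal state space. Every map here is illustrative.
def represent(observation):            # state representation "network"
    return 2.0 * observation

def predict(internal):                 # prediction "network": next state + reward
    next_internal = 0.9 * internal
    predicted_reward = 0.1 * internal
    return next_internal, predicted_reward

def value(internal):                   # value prediction "network"
    return internal

def planned_return(observation, steps=5, discount=0.95):
    """Accumulate predicted rewards over an imagined rollout, then add the
    discounted value prediction for the final internal state."""
    s, total = represent(observation), 0.0
    for t in range(steps):
        s, r = predict(s)
        total += (discount ** t) * r
    return total + (discount ** steps) * value(s)

print(round(planned_return(1.0), 3))
```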