Patents by Inventor David Silver

David Silver has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10776692
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: September 15, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
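    A minimal sketch of the update loop this abstract describes, assuming linear actor and critic networks, a NumPy-only setting, and illustrative hyperparameters; none of the names or values below come from the patent itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    obs_dim, act_dim, gamma, lr, tau = 4, 2, 0.99, 1e-2, 0.01

    # Linear critic Q(s, a) = [s; a] @ w and linear actor pi(s) = s @ A.
    w = rng.normal(size=obs_dim + act_dim) * 0.1   # critic parameters
    w_target = w.copy()                            # target critic network
    A = rng.normal(size=(obs_dim, act_dim)) * 0.1  # actor parameters
    A_target = A.copy()                            # target actor network

    def q(w_, s, a):
        # Critic output for one (observation, action) pair.
        return np.concatenate([s, a]) @ w_

    # A fake minibatch of (observation, action, reward, next observation).
    minibatch = [(rng.normal(size=obs_dim), rng.normal(size=act_dim),
                  rng.normal(), rng.normal(size=obs_dim)) for _ in range(32)]

    for s, a, r, s2 in minibatch:
        # Target network output for the tuple: r + gamma * Q'(s', pi'(s')).
        target = r + gamma * q(w_target, s2, s2 @ A_target)
        # Critic update from the error between target and current outputs.
        err = target - q(w, s, a)
        w += lr * err * np.concatenate([s, a])
        # Actor update: follow dQ/da through the critic at a = pi(s).
        dq_da = w[obs_dim:]
        A += lr * np.outer(s, dq_da)

    # Slowly track the learned networks with the target networks.
    w_target += tau * (w - w_target)
    A_target += tau * (A - A_target)
    ```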
  • Publication number: 20200265312
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Inventors: Tom Schaul, John Quan, David Silver
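    The selection rule in this abstract can be sketched as a small replay buffer that samples in proportion to a per-item priority. The sketch below assumes absolute TD error as the "expected learning progress measure", a common proxy; the class name and parameters are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class PrioritizedReplay:
        """Replay memory that prioritizes experience for selection."""

        def __init__(self, capacity, alpha=0.6):
            self.capacity, self.alpha = capacity, alpha
            self.data, self.priorities = [], []

        def add(self, experience, priority=1.0):
            if len(self.data) >= self.capacity:   # evict the oldest entry
                self.data.pop(0)
                self.priorities.pop(0)
            self.data.append(experience)
            self.priorities.append(priority)

        def sample(self):
            # Higher expected-progress items get a higher selection chance.
            p = np.array(self.priorities) ** self.alpha
            p /= p.sum()
            i = int(rng.choice(len(self.data), p=p))
            return i, self.data[i]

        def update_priority(self, i, td_error):
            # Refresh the progress measure after training on the item;
            # the small constant keeps every item sampleable.
            self.priorities[i] = abs(td_error) + 1e-6

    memory = PrioritizedReplay(capacity=1000)
    for _ in range(100):
        memory.add(("obs", "action", "reward", "next_obs"))
    idx, experience = memory.sample()             # train on this, then:
    memory.update_priority(idx, td_error=0.5)
    ```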
  • Patent number: 10733501
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: August 4, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
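    The three components this abstract names (a state representation network, a prediction network, and a value prediction network) compose into a learned rollout that predicts an outcome. A minimal sketch follows, assuming linear stand-ins for all three networks and a discounted-sum readout; shapes and names are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    obs_dim, state_dim, k_steps, gamma = 8, 4, 3, 0.99

    W_repr = rng.normal(size=(obs_dim, state_dim)) * 0.1    # state representation net
    W_next = rng.normal(size=(state_dim, state_dim)) * 0.1  # prediction net: next state
    w_reward = rng.normal(size=state_dim) * 0.1             # prediction net: reward
    w_value = rng.normal(size=state_dim) * 0.1              # value prediction net

    def predict_outcome(observation):
        """Unroll the learned model k steps, accumulating predicted
        rewards, then close the sum with the value prediction."""
        s = observation @ W_repr              # internal state representation
        total, discount = 0.0, 1.0
        for _ in range(k_steps):
            s = s @ W_next                    # predicted subsequent state
            reward = s @ w_reward             # predicted reward for that state
            total += discount * reward
            discount *= gamma
        return total + discount * (s @ w_value)

    print(predict_outcome(rng.normal(size=obs_dim)))
    ```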
  • Publication number: 20200244707
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 30, 2020
    Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
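    The pool of candidate policies plus a matchmaking policy from this abstract can be illustrated as a small league loop. The win-rate-weighted matchmaking rule below is one plausible choice, not necessarily the patent's; all names and numbers are hypothetical.

    ```python
    import random

    random.seed(0)

    # A pool ("league") of frozen candidate action selection policies,
    # each tagged with the learner's current win rate against it.
    league = [{"name": f"policy_{i}", "winrate_vs_learner": random.random()}
              for i in range(5)]

    def matchmake(pool):
        # Prefer opponents the learner still loses to, focusing training
        # on the weakest matchups.
        weights = [1.0 - p["winrate_vs_learner"] for p in pool]
        return random.choices(pool, weights=weights, k=1)[0]

    def reinforcement_learning_update(opponent):
        # Placeholder for the actual policy-parameter update from playing
        # the task against this opponent.
        print("training against", opponent["name"])

    for step in range(4):
        reinforcement_learning_update(matchmake(league))
        if step % 2 == 1:   # periodically snapshot the learner into the pool
            league.append({"name": f"policy_{len(league)}",
                           "winrate_vs_learner": 0.5})
    ```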
  • Publication number: 20200175364
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning. A reinforcement learning neural network selects actions to be performed by an agent interacting with an environment to perform a task in an attempt to achieve a specified result. The reinforcement learning neural network has at least one input to receive an input observation characterizing a state of the environment and at least one output for determining an action to be performed by the agent in response to the input observation. The system includes a reward function network coupled to the reinforcement learning neural network. The reward function network has an input to receive reward data characterizing a reward provided by one or more states of the environment and is configured to determine a reward function to provide one or more target values for training the reinforcement learning neural network.
    Type: Application
    Filed: May 22, 2018
    Publication date: June 4, 2020
    Inventors: Zhongwen Xu, Hado Phillip van Hasselt, Joseph Varughese Modayil, Andre da Motta Salles Barreto, David Silver
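    The coupling described here, where a reward function network supplies target values for training the reinforcement learning network, can be sketched with two linear stand-ins. The one-step bootstrapped target below is an assumed concrete form, not taken from the patent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    state_dim, gamma, lr = 4, 0.99, 1e-2

    w_reward_fn = rng.normal(size=2) * 0.1       # reward function network
    w_value = rng.normal(size=state_dim) * 0.1   # RL network (a value function)

    def target_value(raw_reward, next_state):
        # Reward data -> learned reward via the reward function network,
        # then a one-step bootstrapped target for the RL network.
        learned_reward = np.array([raw_reward, 1.0]) @ w_reward_fn
        return learned_reward + gamma * (next_state @ w_value)

    s, s2 = rng.normal(size=state_dim), rng.normal(size=state_dim)
    td_error = target_value(raw_reward=1.0, next_state=s2) - s @ w_value
    w_value += lr * td_error * s                 # train toward the target
    ```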
  • Patent number: 10650310
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: May 12, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20200143239
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network. One of the methods includes receiving an observation characterizing a current state of the environment; determining a target network output for the observation by performing a look ahead search of possible future states of the environment starting from the current state until the environment reaches a possible future state that satisfies one or more termination criteria, wherein the look ahead search is guided by the neural network in accordance with current values of the network parameters; selecting an action to be performed by the agent in response to the observation using the target network output generated by performing the look ahead search; and storing, in an exploration history data store, the target network output in association with the observation for use in updating the current values of the network parameters.
    Type: Application
    Filed: May 28, 2018
    Publication date: May 7, 2020
    Inventors: Karen Simonyan, David Silver, Julian Schrittwieser
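    The loop in this abstract, a look-ahead search guided by the network whose output is then stored as a training target, can be sketched on a toy environment. The depth-limited search and hand-written value function below are illustrative placeholders for the patent's search procedure and neural network.

    ```python
    ACTIONS = (-1, +1)            # toy environment: walk on the integers

    def value_net(state):
        # Stand-in for the neural network guiding the search.
        return -abs(state - 10)   # prefers states near 10

    def lookahead(state, depth):
        """Search possible future states until a termination criterion
        (a fixed depth here) and back up the best network value."""
        if depth == 0:
            return value_net(state), None
        best_action = max(ACTIONS,
                          key=lambda a: lookahead(state + a, depth - 1)[0])
        best_value, _ = lookahead(state + best_action, depth - 1)
        return best_value, best_action

    exploration_history = []      # (observation, target network output)
    state = 0
    for _ in range(5):
        target, action = lookahead(state, depth=3)
        # Store the search result for later parameter updates.
        exploration_history.append((state, target))
        state += action           # act using the search output
    print(exploration_history)
    ```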
  • Patent number: 10628733
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning using goals and observations. One of the methods includes receiving an observation characterizing a current state of the environment; receiving a goal characterizing a target state from a set of target states of the environment; processing the observation using an observation neural network to generate a numeric representation of the observation; processing the goal using a goal neural network to generate a numeric representation of the goal; combining the numeric representation of the observation and the numeric representation of the goal to generate a combined representation; processing the combined representation using an action score neural network to generate a respective score for each action in the predetermined set of actions; and selecting the action to be performed using the respective scores for the actions in the predetermined set of actions.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: April 21, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, Daniel George Horgan, Karol Gregor, David Silver
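    The pipeline in this abstract (embed the observation and the goal separately, combine the representations, then score actions) is straightforward to sketch with linear maps. The elementwise product used for combining is one assumed choice among several; all shapes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    obs_dim, goal_dim, embed_dim, n_actions = 6, 3, 4, 5

    W_obs = rng.normal(size=(obs_dim, embed_dim)) * 0.1      # observation network
    W_goal = rng.normal(size=(goal_dim, embed_dim)) * 0.1    # goal network
    W_score = rng.normal(size=(embed_dim, n_actions)) * 0.1  # action score network

    def select_action(observation, goal):
        h_obs = observation @ W_obs   # numeric representation of the observation
        h_goal = goal @ W_goal        # numeric representation of the goal
        combined = h_obs * h_goal     # combined representation (one option)
        scores = combined @ W_score   # a score per action in the set
        return int(np.argmax(scores)) # select using the scores

    print(select_action(rng.normal(size=obs_dim), rng.normal(size=goal_dim)))
    ```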
  • Publication number: 20200117992
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed training of reinforcement learning systems. One of the methods includes receiving, by a learner, current values of the parameters of the Q network from a parameter server, wherein each learner maintains a respective learner Q network replica and a respective target Q network replica; updating, by the learner, the parameters of the learner Q network replica maintained by the learner using the current values; selecting, by the learner, an experience tuple from a respective replay memory; computing, by the learner, a gradient from the experience tuple using the learner Q network replica maintained by the learner and the target Q network replica maintained by the learner; and providing, by the learner, the computed gradient to the parameter server.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 16, 2020
    Inventors: Praveen Deepak Srinivasan, Rory Fearon, Cagdas Alcicek, Arun Sarath Nair, Samuel Blackwell, Vedavyas Panneershelvam, Alessandro De Maria, Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Mustafa Suleyman
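    The learner loop in this abstract can be sketched in a single process, with a dict standing in for the remote parameter server. The one-step TD-error gradient below is an assumed concrete form; in the real system each learner, replay memory, and the server run as separate distributed components.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim, gamma, lr = 4, 0.99, 0.01

    parameter_server = {"params": rng.normal(size=dim) * 0.1}
    replay_memory = [(rng.normal(size=dim), rng.normal(), rng.normal(size=dim))
                     for _ in range(100)]       # (state, reward, next state)

    learner_q = parameter_server["params"].copy()  # learner Q network replica
    target_q = learner_q.copy()                    # target Q network replica

    for step in range(20):
        # 1. Receive current parameter values and update the learner replica.
        learner_q = parameter_server["params"].copy()
        # 2. Select an experience tuple from the replay memory.
        s, r, s2 = replay_memory[rng.integers(len(replay_memory))]
        # 3. Compute a gradient using both replicas (one-step TD error here).
        td_error = (r + gamma * (s2 @ target_q)) - s @ learner_q
        gradient = -td_error * s
        # 4. Provide the gradient to the parameter server, which applies it.
        parameter_server["params"] -= lr * gradient
        if step % 10 == 0:                         # periodically refresh the target
            target_q = parameter_server["params"].copy()
    ```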
  • Patent number: 10564428
    Abstract: A near eye display comprising: a display device; and a waveguide; wherein the waveguide comprises an entry diffractive element and an exit diffractive element; wherein the display device is arranged to direct light into the waveguide via the entry diffractive element and wherein the exit diffractive element is arranged to direct light from the waveguide towards a user's eye; wherein the display device is arranged to output images in a repeating sequence of two or more different colours. Provided the images are transmitted in sufficiently quick succession, the brain will not perceive them as separate images, but will instead essentially merge them (as if they were overlaid on top of one another). As the brain is good at pattern recognition, it can compensate for any minor misalignments that occur between the different images. In this way the time-multiplexing approach avoids the need for multiple parallel waveguides for each specific colour or colour band.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: February 18, 2020
    Inventor: Joshua David Silver
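    The colour time-multiplexing idea can be illustrated with a trivial frame scheduler: emit full images in a repeating colour sequence, fast enough that the eye fuses them into one multi-colour image through a single waveguide. The sequence and refresh rate below are illustrative values only.

    ```python
    import itertools
    import time

    COLOUR_SEQUENCE = ("red", "green", "blue")
    SUBFRAME_HZ = 180   # three colour sub-frames per 60 Hz perceived frame

    def drive_display(subframes=9):
        for colour in itertools.islice(itertools.cycle(COLOUR_SEQUENCE),
                                       subframes):
            # Placeholder for pushing this colour's image into the waveguide
            # via the entry diffractive element.
            print(f"emit {colour} sub-frame")
            time.sleep(1.0 / SUBFRAME_HZ)

    drive_display()
    ```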
  • Publication number: 20190354859
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 21, 2019
    Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
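    The meta-gradient idea in this abstract can be sketched on a toy problem: an inner update trains θ toward a return G that depends on a return parameter η, and an outer update adjusts η to reduce a meta-objective J′. The sketch uses a numerical gradient of J′ where the patent differentiates analytically; all values and the evaluation target are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    eta = 0.9                              # return parameter (a discount here)
    theta = 0.0                            # policy/value parameter
    rewards = rng.normal(loc=1.0, size=20)
    mc_return = rewards.sum()              # fixed evaluation target for J'

    def g_return(eta_):
        # Return G under the current return parameter.
        return sum(r * eta_ ** t for t, r in enumerate(rewards))

    def meta_objective(eta_):
        # J': where would one inner update using G(eta_) leave theta,
        # relative to the fixed evaluation return?
        theta_next = theta + 0.1 * (g_return(eta_) - theta)
        return (theta_next - mc_return) ** 2

    for _ in range(50):
        theta += 0.1 * (g_return(eta) - theta)  # inner update toward G(eta)
        eps = 1e-4                              # numerical dJ'/d(eta)
        meta_grad = (meta_objective(eta + eps)
                     - meta_objective(eta - eps)) / (2 * eps)
        eta = float(np.clip(eta - 0.01 * meta_grad, 0.0, 1.0))

    print(f"eta={eta:.3f}, theta={theta:.3f}")
    ```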
  • Patent number: 10445641
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed training of reinforcement learning systems. One of the methods includes receiving, by a learner, current values of the parameters of the Q network from a parameter server, wherein each learner maintains a respective learner Q network replica and a respective target Q network replica; updating, by the learner, the parameters of the learner Q network replica maintained by the learner using the current values; selecting, by the learner, an experience tuple from a respective replay memory; computing, by the learner, a gradient from the experience tuple using the learner Q network replica maintained by the learner and the target Q network replica maintained by the learner; and providing, by the learner, the computed gradient to the parameter server.
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: October 15, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Praveen Deepak Srinivasan, Rory Fearon, Cagdas Alcicek, Arun Sarath Nair, Samuel Blackwell, Vedavyas Panneershelvam, Alessandro De Maria, Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Mustafa Suleyman
  • Publication number: 20190258929
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 22, 2019
    Inventors: Volodymyr Mnih, Adria Puigdomenech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
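    The asynchronous structure in this abstract (multiple workers, each operating independently with its own environment replica, updating shared parameters) can be sketched with threads. The scalar "gradient" below is contrived so the shared parameter visibly converges; everything is illustrative.

    ```python
    import random
    import threading

    shared_params = [0.0]         # parameters of the deep neural network
    lock = threading.Lock()

    def worker(worker_id):
        # Each worker's actor interacts with its own environment replica.
        env_replica = random.Random(worker_id)
        for _ in range(1000):
            # A fake gradient that pulls the parameter toward 1.0.
            gradient = env_replica.gauss(1.0 - shared_params[0], 0.1)
            with lock:            # apply the update to the shared parameters
                shared_params[0] += 0.001 * gradient

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared_params[0])       # close to 1.0
    ```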
  • Publication number: 20190258938
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 22, 2019
    Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
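    The multi-head arrangement in this abstract, where auxiliary control heads and a reward prediction head read intermediate outputs of the shared action selection policy network, can be sketched with a few linear layers. All shapes and the tanh trunk are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    obs_dim, hidden_dim, n_actions = 8, 4, 3

    W_shared = rng.normal(size=(obs_dim, hidden_dim)) * 0.1    # policy trunk
    W_policy = rng.normal(size=(hidden_dim, n_actions)) * 0.1  # main policy head
    W_aux = rng.normal(size=(hidden_dim, n_actions)) * 0.1     # auxiliary control head
    w_reward = rng.normal(size=hidden_dim) * 0.1               # reward prediction head

    def forward(observation):
        hidden = np.tanh(observation @ W_shared)  # shared intermediate output
        return {
            "policy_logits": hidden @ W_policy,       # main task
            "aux_policy_logits": hidden @ W_aux,      # auxiliary control task
            "predicted_reward": hidden @ w_reward,    # reward prediction task
        }

    outputs = forward(rng.normal(size=obs_dim))
    # Each head gets its own loss; all gradients flow into W_shared, which
    # is how the auxiliary tasks shape the shared representation.
    print({k: np.round(v, 3) for k, v in outputs.items()})
    ```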
  • Publication number: 20190259051
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 22, 2019
    Inventors: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
  • Patent number: 10346741
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: July 9, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
  • Patent number: 10282662
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: May 7, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20190086667
    Abstract: A near eye display comprising: a display device; and a waveguide; wherein the waveguide comprises an entry diffractive element and an exit diffractive element; wherein the display device is arranged to direct light into the waveguide via the entry diffractive element and wherein the exit diffractive element is arranged to direct light from the waveguide towards a user's eye; wherein the display device is arranged to output images in a repeating sequence of two or more different colours. Provided the images are transmitted in sufficiently quick succession, the brain will not perceive them as separate images, but will instead essentially merge them (as if they were overlaid on top of one another). As the brain is good at pattern recognition, it can compensate for any minor misalignments that occur between the different images. In this way the time-multiplexing approach avoids the need for multiple parallel waveguides for each specific colour or colour band.
    Type: Application
    Filed: October 6, 2016
    Publication date: March 21, 2019
    Inventor: Joshua David Silver
  • Patent number: 10095229
    Abstract: Example passenger validation systems and methods are described. In one implementation, a method receives, at a vehicle, a transport request indicating a passenger and a pick-up location. The vehicle drives to the pick-up location and authenticates the passenger at the pick-up location. If the passenger is successfully authenticated, the method unlocks the vehicle doors to allow access to the vehicle, determines a number of people entering the vehicle, and confirms that the number of people entering the vehicle matches a number of passengers associated with the transport request.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: October 9, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Praveen Narayanan, Harpreetsingh Banvait, Mark Crawford, Alexandru Mihai Gurghian, David Silver
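    The validation flow in this abstract maps onto a short procedure: drive, authenticate, unlock, count, compare. The request fields and vehicle methods below are hypothetical stand-ins for the vehicle's actual sensing and authentication systems.

    ```python
    def handle_transport_request(request, vehicle):
        vehicle.drive_to(request["pickup_location"])
        if not vehicle.authenticate(request["passenger_id"]):
            return "authentication failed; doors stay locked"
        vehicle.unlock_doors()
        entered = vehicle.count_people_entering()  # e.g. interior sensors
        if entered != request["num_passengers"]:
            return f"mismatch: expected {request['num_passengers']}, saw {entered}"
        return "validated; starting trip"

    class FakeVehicle:
        # Stub vehicle used only to make the sketch runnable.
        def drive_to(self, location): print("driving to", location)
        def authenticate(self, passenger_id): return passenger_id == "alice"
        def unlock_doors(self): print("doors unlocked")
        def count_people_entering(self): return 2

    request = {"passenger_id": "alice", "pickup_location": "5th & Main",
               "num_passengers": 2}
    print(handle_transport_request(request, FakeVehicle()))
    ```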
  • Publication number: 20180260707
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: May 11, 2018
    Publication date: September 13, 2018
    Inventors: Tom Schaul, John Quan, David Silver