Patents by Inventor David Silver
David Silver has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144015
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection policy neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
Type: Application
Filed: November 3, 2023
Publication date: May 2, 2024
Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
-
Publication number: 20240138981
Abstract: Delivery devices for delivering a self-expanding prosthetic heart valve are disclosed. The delivery device includes at least one cord that is configured to selectively cinch the self-expanding prosthetic heart valve. The delivery device further includes at least one self-expanding release pin comprising a proximal segment, a distal segment, and an intermediate segment positioned between the proximal segment and the distal segment. The at least one self-expanding release pin includes a normal, expanded condition wherein the intermediate segment extends radially outward relative to a central axis of the spindle from the proximal segment to the distal segment. The delivery device further includes a capsule configured to be distally extended relative to the spindle to collapse the at least one self-expanding release pin from the normal, expanded condition to a collapsed condition.
Type: Application
Filed: January 11, 2024
Publication date: May 2, 2024
Inventors: Jill Mendelson, Michele Silver, Michael Gloss, Timothy Groen, Paul Rothstein, Jeffrey Sandstrom, Phil Haarstad, Joel Racchini, David Blaeser
-
Publication number: 20240125619
Abstract: Aspects of the disclosure relate to generating scouting objectives in order to update map information used to control a fleet of vehicles in an autonomous driving mode. For instance, a notification from a vehicle of the fleet identifying a feature and a location of the feature may be received. A first bound for a scouting area may be identified based on the location of the feature. A second bound for the scouting area may be identified based on a lane closest to the feature. A scouting objective may be generated for the feature based on the first bound and the second bound.
Type: Application
Filed: December 11, 2023
Publication date: April 18, 2024
Inventors: Katharine Patterson, Joshua Herbach, David Silver, David Margines
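The two-bound construction described in the abstract can be sketched as follows. Everything here is a hypothetical illustration, not Waymo's implementation: the helper names, the 30-meter radius, and snapping to the nearest lane vertex are all assumptions.

```python
import math

def closest_lane_point(feature, lane):
    # Hypothetical helper: nearest lane vertex to the reported feature.
    # A real map system would project onto lane segments, not vertices.
    return min(lane, key=lambda p: math.dist(p, feature))

def scouting_bounds(feature, lane, radius=30.0):
    # First bound: a circle of `radius` meters around the feature's
    # reported location. Second bound: the closest point on the lane.
    first_bound = (feature, radius)
    second_bound = closest_lane_point(feature, lane)
    return first_bound, second_bound

lane = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]  # toy lane polyline
feature = (12.0, 5.0)                          # reported feature location
first, second = scouting_bounds(feature, lane)
```

A scouting objective would then cover the region between the two bounds.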
-
Publication number: 20240127045
Abstract: A method performed by one or more computers for obtaining an optimized algorithm that (i) is functionally equivalent to a target algorithm and (ii) optimizes one or more target properties when executed on a target set of one or more hardware devices. The method includes: initializing a target tensor representing the target algorithm; generating, using a neural network having a plurality of network parameters, a tensor decomposition of the target tensor that parametrizes a candidate algorithm; generating target property values for each of the target properties when executing the candidate algorithm on the target set of hardware devices; determining a benchmarking score for the tensor decomposition based on the target property values of the candidate algorithm; generating a training example from the tensor decomposition and the benchmarking score; and storing, in a training data store, the training example for use in updating the network parameters of the neural network.
Type: Application
Filed: October 3, 2022
Publication date: April 18, 2024
Inventors: Thomas Keisuke Hubert, Shih-Chieh Huang, Alexander Novikov, Alhussein Fawzi, Bernardino Romera-Paredes, David Silver, Demis Hassabis, Grzegorz Michal Swirszcz, Julian Schrittwieser, Pushmeet Kohli, Mohammadamin Barekatain, Matej Balog, Francisco Jesus Rodriguez Ruiz
-
Publication number: 20240104353
Abstract: A computer-implemented method for generating an output token sequence from an input token sequence. The method combines a look ahead tree search, such as a Monte Carlo tree search, with a sequence-to-sequence neural network system. The sequence-to-sequence neural network system has a policy output defining a next token probability distribution, and may include a value neural network providing a value output to evaluate a sequence. An initial partial output sequence is extended using the look ahead tree search guided by the policy output and, in implementations, the value output, of the sequence-to-sequence neural network system until a complete output sequence is obtained.
Type: Application
Filed: February 8, 2022
Publication date: March 28, 2024
Inventors: Rémi Bertrand Francis Leblond, Jean-Baptiste Alayrac, Laurent Sifre, Miruna Pîslar, Jean-Baptiste Lespiau, Ioannis Antonoglou, Karen Simonyan, David Silver, Oriol Vinyals
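A minimal sketch of policy- and value-guided search over partial token sequences, assuming a toy vocabulary and stand-in policy/value heads. The real system uses a trained sequence-to-sequence network and a Monte Carlo tree search; the best-first variant below only illustrates how the two outputs jointly steer extension of a partial sequence.

```python
import heapq
import math

VOCAB = ["a", "b", "<eos>"]

def policy(seq):
    # Stand-in for the policy output: a next-token probability
    # distribution over VOCAB (toy numbers, independent of seq).
    return {"a": 0.5, "b": 0.3, "<eos>": 0.2}

def value(seq):
    # Stand-in for the value output: here it mildly prefers short sequences.
    return -0.1 * len(seq)

def guided_search(max_len=4):
    # Best-first search: partial sequences are ranked by accumulated
    # policy log-probability plus the value estimate, until a sequence
    # ends with <eos> or reaches max_len.
    frontier = [(0.0, 0.0, [])]  # (-priority, log-prob so far, tokens)
    while frontier:
        _, logp, seq = heapq.heappop(frontier)
        if (seq and seq[-1] == "<eos>") or len(seq) >= max_len:
            return seq
        for tok, p in policy(seq).items():
            child = seq + [tok]
            child_logp = logp + math.log(p)
            priority = child_logp + value(child)
            heapq.heappush(frontier, (-priority, child_logp, child))

result = guided_search()
```

With these toy heads the search terminates on a sequence ending in `<eos>`.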
-
Publication number: 20240092392
Abstract: Aspects of the disclosure relate to detecting and responding to malfunctioning traffic signals for a vehicle having an autonomous driving mode. For instance, information identifying a detected state of a traffic signal for an intersection may be received. An anomaly for the traffic signal may be detected based on the detected state and prestored information about expected states of the traffic signal. The vehicle may be controlled in the autonomous driving mode based on the detected anomaly.
Type: Application
Filed: November 29, 2023
Publication date: March 21, 2024
Inventors: David Silver, Carl Kershaw, Jonathan Hsiao, Edward Hsiao
-
Patent number: 11914078
Abstract: Imaging apparatus (22) includes a radiation source (40), which emits pulsed beams (42) of optical radiation toward a target scene (24). An array (52) of sensing elements (78) outputs signals indicative of respective times of incidence of photons in a first image of the target scene that is formed on the array of sensing elements. An image sensor (64) captures a second image of the target scene in registration with the first image. Processing and control circuitry (56, 58) identifies, responsively to the signals, areas of the array on which the pulses of optical radiation reflected from corresponding regions of the target scene are incident, and processes the signals from the sensing elements in the identified areas in order to measure depth coordinates of the corresponding regions of the target scene based on the times of incidence, while identifying, responsively to the second image, one or more of the regions of the target scene as no-depth regions.
Type: Grant
Filed: September 2, 2019
Date of Patent: February 27, 2024
Assignee: APPLE INC.
Inventors: David Silver, Moshe Laifenfeld, Tal Kaitz
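The time-of-flight arithmetic behind the depth measurement is direct: depth is half the round-trip distance of the pulse. The masking below is only a toy stand-in for the second-image analysis that flags no-depth regions.

```python
C = 299_792_458.0  # speed of light in m/s

def depth_from_time(t_seconds):
    # Round-trip time of a reflected pulse -> depth (half the path length).
    return C * t_seconds / 2.0

def depth_map(times, no_depth_mask):
    # Compute depth where a reflected pulse was detected; regions flagged
    # from the second image (e.g. sky, absorbing surfaces) get None.
    return [None if masked else depth_from_time(t)
            for t, masked in zip(times, no_depth_mask)]

times = [2e-8, 5e-8, 1e-7]          # photon times of incidence (s)
mask = [False, True, False]         # True = identified no-depth region
depths = depth_map(times, mask)     # a 20 ns round trip is ~3 m of depth
```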
-
Patent number: 11885639
Abstract: Aspects of the disclosure relate to generating scouting objectives in order to update map information used to control a fleet of vehicles in an autonomous driving mode. For instance, a notification from a vehicle of the fleet identifying a feature and a location of the feature may be received. A first bound for a scouting area may be identified based on the location of the feature. A second bound for the scouting area may be identified based on a lane closest to the feature. A scouting objective may be generated for the feature based on the first bound and the second bound.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 30, 2024
Assignee: Waymo LLC
Inventors: Katharine Patterson, Joshua Herbach, David Silver, David Margines
-
Patent number: 11866068
Abstract: Aspects of the disclosure relate to detecting and responding to malfunctioning traffic signals for a vehicle having an autonomous driving mode. For instance, information identifying a detected state of a traffic signal for an intersection may be received. An anomaly for the traffic signal may be detected based on the detected state and prestored information about expected states of the traffic signal. The vehicle may be controlled in the autonomous driving mode based on the detected anomaly.
Type: Grant
Filed: June 19, 2020
Date of Patent: January 9, 2024
Assignee: Waymo LLC
Inventors: David Silver, Carl Kershaw, Jonathan Hsiao, Edward Hsiao
-
Patent number: 11842281
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection policy neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
Type: Grant
Filed: February 24, 2021
Date of Patent: December 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
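A minimal sketch of the shared-trunk training signal this abstract describes: auxiliary heads consume the trunk's intermediate output and add their losses to the main policy loss. The linear layers, head names, and 0.5 loss weights are toy assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_trunk = rng.normal(size=(8, 4))   # shared trunk (toy linear layer)
W_policy = rng.normal(size=(3, 8))  # main action-selection policy head
W_aux = rng.normal(size=(3, 8))     # auxiliary-control head
W_reward = rng.normal(size=(1, 8))  # reward-prediction head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_loss(obs, action, aux_action, reward):
    h = np.tanh(W_trunk @ obs)                     # intermediate output
    main = -np.log(softmax(W_policy @ h)[action])  # main policy loss (toy)
    aux = -np.log(softmax(W_aux @ h)[aux_action])  # auxiliary control loss
    rew = ((W_reward @ h)[0] - reward) ** 2        # reward prediction loss
    return main + 0.5 * aux + 0.5 * rew

loss = combined_loss(np.ones(4), action=0, aux_action=1, reward=0.0)
```

Minimizing the combined loss trains the trunk on all three signals at once, which is the point of the auxiliary tasks.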
-
Patent number: 11836625
Abstract: Methods, systems and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network. One of the methods includes receiving an observation characterizing a current state of the environment; determining a target network output for the observation by performing a look ahead search of possible future states of the environment starting from the current state until the environment reaches a possible future state that satisfies one or more termination criteria, wherein the look ahead search is guided by the neural network in accordance with current values of the network parameters; selecting an action to be performed by the agent in response to the observation using the target network output generated by performing the look ahead search; and storing, in an exploration history data store, the target network output in association with the observation for use in updating the current values of the network parameters.
Type: Grant
Filed: September 19, 2022
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Karen Simonyan, David Silver, Julian Schrittwieser
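One common way to form a target network output from a look ahead search is to normalize the search's visit counts (an AlphaZero-style assumption; the abstract itself does not fix the form of the target):

```python
import numpy as np

def search_policy_target(visit_counts, temperature=1.0):
    # Assumption: the stored target is the temperature-scaled,
    # normalized visit-count distribution produced by the search.
    counts = np.asarray(visit_counts, dtype=float) ** (1.0 / temperature)
    return counts / counts.sum()

# The (observation, target) pair is what goes into the exploration
# history data store for later parameter updates.
exploration_history = []
target = search_policy_target([90, 8, 2])
exploration_history.append(({"state": "s0"}, target))
```

Sampling an action from `target` gives the "select an action using the target network output" step, and the stored pairs later supply supervised targets for the network.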
-
Patent number: 11836620
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning. The embodiments described herein apply meta-learning (and in particular, meta-gradient reinforcement learning) to learn an optimum return function G so that the training of the system is improved. This provides a more effective and efficient means of training a reinforcement learning system as the system is able to converge on an optimum set of one or more policy parameters θ more quickly by training the return function G as it goes. In particular, the return function G is made dependent on the one or more policy parameters θ and a meta-objective function J′ is used that is differentiated with respect to the one or more return parameters η to improve the training of the return function G.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 5, 2023
Assignee: DeepMind Technologies Limited
Inventors: Zhongwen Xu, Hado Philip van Hasselt, David Silver
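Under a fixed setting of the return parameters η = (γ, λ), the return function G is an ordinary λ-return; the meta-gradient step that adjusts η by differentiating the meta-objective is omitted here, so this only shows what a parameterized return function computes.

```python
def lambda_return(rewards, values, gamma, lam):
    # G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    # bootstrapped from the final value estimate. `values` holds
    # V(s_0) ... V(s_T) for rewards r_0 ... r_{T-1}.
    g = values[-1]
    out = []
    for r, v_next in zip(reversed(rewards), reversed(values[1:])):
        g = r + gamma * ((1 - lam) * v_next + lam * g)
        out.append(g)
    return list(reversed(out))

returns = lambda_return([1.0, 0.0], [0.0, 1.0, 2.0], gamma=0.5, lam=0.5)
```

With λ = 0 this collapses to one-step bootstrapped targets; with λ = 1 it becomes the Monte Carlo return, which is why η is a useful knob for meta-learning.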
-
Patent number: 11803750
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
Type: Grant
Filed: September 14, 2020
Date of Patent: October 31, 2023
Assignee: DeepMind Technologies Limited
Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
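A toy sketch of the minibatch update, with linear stand-ins for the actor and critic. For brevity it computes targets from the live networks rather than the lagged target networks the full method uses, and the step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
w_critic = rng.normal(size=3)  # Q(s, a) ~ w_critic . [s, a, 1]  (toy)
w_actor = rng.normal(size=2)   # mu(s)   ~ w_actor  . [s, 1]     (toy)
gamma, lr = 0.99, 0.01

def q(s, a):
    return w_critic @ np.array([s, a, 1.0])

def mu(s):
    return w_actor @ np.array([s, 1.0])

def update(minibatch):
    # For each (s, a, r, s') tuple: form the target output, step the
    # critic on the squared error, then step the actor along dQ/da
    # (a deterministic policy gradient).
    global w_critic, w_actor
    for s, a, r, s2 in minibatch:
        target = r + gamma * q(s2, mu(s2))   # target neural network output
        err = q(s, a) - target               # error vs. critic output
        w_critic -= lr * err * np.array([s, a, 1.0])
        dq_da = w_critic[1]                  # dQ/da for the linear critic
        w_actor += lr * dq_da * np.array([s, 1.0])

update([(0.0, 0.5, 1.0, 0.1), (0.2, -0.3, 0.0, 0.3)])
```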
-
Patent number: 11783182
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
Type: Grant
Filed: February 8, 2021
Date of Patent: October 10, 2023
Assignee: DeepMind Technologies Limited
Inventors: Volodymyr Mnih, Adrià Puigdomènech Badia, Alexander Benjamin Graves, Timothy James Alexander Harley, David Silver, Koray Kavukcuoglu
-
Publication number: 20230244936
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Application
Filed: April 6, 2023
Publication date: August 3, 2023
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
-
Publication number: 20230244933
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Application
Filed: January 30, 2023
Publication date: August 3, 2023
Inventors: Tom Schaul, John Quan, David Silver
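The selection rule can be sketched with a proportional scheme, using a stored priority (e.g. TD-error magnitude) as the expected-learning-progress measure; the `alpha` exponent is an assumption taken from the prioritized-replay literature, not the claim language.

```python
import random

class PrioritizedReplay:
    # Proportional scheme: experience is sampled with probability
    # proportional to priority ** alpha, where the priority stands in
    # for the expected learning progress measure.
    def __init__(self, alpha=0.6):
        self.data, self.priorities, self.alpha = [], [], alpha

    def add(self, experience, priority):
        self.data.append(experience)
        self.priorities.append(priority)

    def sample(self):
        weights = [p ** self.alpha for p in self.priorities]
        return random.choices(self.data, weights=weights, k=1)[0]

buf = PrioritizedReplay()
buf.add(("s0", "a0", 0.0, "s1"), priority=0.01)  # little left to learn
buf.add(("s1", "a1", 1.0, "s2"), priority=5.0)   # large TD error
```

Sampling from `buf` now overwhelmingly returns the high-priority tuple, which is exactly the "prioritizing for selection" behavior the abstract describes.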
-
Patent number: 11651208
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning. A reinforcement learning neural network selects actions to be performed by an agent interacting with an environment to perform a task in an attempt to achieve a specified result. The reinforcement learning neural network has at least one input to receive an input observation characterizing a state of the environment and at least one output for determining an action to be performed by the agent in response to the input observation. The system includes a reward function network coupled to the reinforcement learning neural network. The reward function network has an input to receive reward data characterizing a reward provided by one or more states of the environment and is configured to determine a reward function to provide one or more target values for training the reinforcement learning neural network.
Type: Grant
Filed: May 22, 2018
Date of Patent: May 16, 2023
Assignee: DeepMind Technologies Limited
Inventors: Zhongwen Xu, Hado Phillip van Hasselt, Joseph Varughese Modayil, Andre da Motta Salles Barreto, David Silver
-
Publication number: 20230144995
Abstract: A reinforcement learning system, method, and computer program code for controlling an agent to perform a plurality of tasks while interacting with an environment. The system learns options, where an option comprises a sequence of primitive actions performed by the agent under control of an option policy neural network. In implementations the system discovers options which are useful for multiple different tasks by meta-learning rewards for training the option policy neural network whilst the agent is interacting with the environment.
Type: Application
Filed: June 7, 2021
Publication date: May 11, 2023
Inventors: Vivek Veeriah Jeya Veeraiah, Tom Ben Zion Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado Philip van Hasselt, David Silver, Satinder Singh Baveja
-
Patent number: 11627165
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
Type: Grant
Filed: January 24, 2020
Date of Patent: April 11, 2023
Assignee: DeepMind Technologies Limited
Inventors: David Silver, Oriol Vinyals, Maxwell Elliot Jaderberg
-
Patent number: D991413
Type: Grant
Filed: August 19, 2021
Date of Patent: July 4, 2023
Assignee: Bubble Bundt, LLC
Inventors: David Silver, Mercedes Mane, Zachary John Allen