Patents by Inventor Sudarshan Kumaran

Sudarshan Kumaran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11662210
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: May 30, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
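    The two-network architecture in the abstract above can be sketched, very loosely, in plain Python. Everything below — the layer sizes, the linear maps, the tanh nonlinearity, and the argmax action choice — is an illustrative assumption, not the patented implementation:

    ```python
    import math, random

    random.seed(0)

    def linear(dim_in, dim_out):
        """Make a random linear layer as a (weights, bias) pair."""
        w = [[random.uniform(-0.5, 0.5) for _ in range(dim_in)] for _ in range(dim_out)]
        return w, [0.0] * dim_out

    def apply(layer, x):
        w, b = layer
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

    class GridCellNetwork:
        """Velocity input -> grid cell representation -> position estimate."""
        def __init__(self, rep_dim=8):
            self.to_rep = linear(2, rep_dim)   # input: (vx, vy) velocity
            self.to_pos = linear(rep_dim, 2)   # output: (x, y) position estimate
        def __call__(self, velocity):
            rep = [math.tanh(v) for v in apply(self.to_rep, velocity)]
            return rep, apply(self.to_pos, rep)

    class ActionSelectionNetwork:
        """(grid cell representation, observation) -> chosen action index."""
        def __init__(self, rep_dim=8, obs_dim=4, n_actions=3):
            self.head = linear(rep_dim + obs_dim, n_actions)
        def __call__(self, rep, observation):
            scores = apply(self.head, rep + observation)
            return max(range(len(scores)), key=lambda a: scores[a])

    grid_net = GridCellNetwork()
    policy = ActionSelectionNetwork()
    rep, pos_estimate = grid_net([0.3, -0.1])       # velocity input
    action = policy(rep, [1.0, 0.0, 0.0, 0.5])      # observation input
    ```

    The point of the decomposition is that the same grid cell representation serves two consumers: a position decoder and the action selection policy.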
  • Publication number: 20220276056
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Application
    Filed: May 18, 2022
    Publication date: September 1, 2022
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
  • Publication number: 20220253698
    Abstract: A neural network based memory system with external memory for storing representations of knowledge items. The memory can be used to retrieve indirectly related knowledge items by recirculating queries, and is useful for relational reasoning. Implementations of the system control how many times queries are recirculated, and hence the degree of relational reasoning, to minimize computation.
    Type: Application
    Filed: May 22, 2020
    Publication date: August 11, 2022
    Inventors: Andrea Banino, Charles Blundell, Adrià Puigdomènech Badia, Raphael Koster, Sudarshan Kumaran
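    The query-recirculation idea in the abstract above can be sketched with a toy attention-based memory: the readout from one memory read becomes the query for the next, so each extra hop reaches one more link of indirection. The dot-product attention read, the softmax temperature, and the one-hot toy data are illustrative assumptions, not the patented system:

    ```python
    import math

    def softmax(xs, beta=10.0):
        """Sharp softmax so the toy read is nearly a hard lookup."""
        exps = [math.exp(beta * x) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    class RecirculatingMemory:
        """External memory read by key attention; the readout is fed back in
        as the next query, so hop count controls the degree of relational
        reasoning (and hence the computation spent)."""
        def __init__(self, keys, values):
            self.keys, self.values = keys, values
        def read(self, query):
            weights = softmax([sum(k * q for k, q in zip(key, query))
                               for key in self.keys])
            dim = len(self.values[0])
            return [sum(w * v[i] for w, v in zip(weights, self.values))
                    for i in range(dim)]
        def query(self, query, hops=1):
            for _ in range(hops):
                query = self.read(query)   # recirculate the readout
            return query

    # Toy knowledge base over one-hot codes: item 0 links A->B, item 1 links B->C.
    A, B, C = [1, 0, 0], [0, 1, 0], [0, 0, 1]
    mem = RecirculatingMemory(keys=[A, B], values=[B, C])
    one_hop = mem.query(A, hops=1)    # ~B: directly related item
    two_hops = mem.query(A, hops=2)   # ~C: indirectly related, via recirculation
    ```

    Querying for A with one hop retrieves roughly B; recirculating for a second hop retrieves roughly C, the indirectly related item.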
  • Patent number: 11365972
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: June 21, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
  • Patent number: 11074481
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 27, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
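    The geometry-based auxiliary update described above can be sketched with a minimal manual-gradient example: a geometry head predicts a geometric feature (here a scalar standing in for something like distance to the nearest wall) from the policy trunk's intermediate output, and the squared-error gradient is backpropagated through that head into the trunk weights. The linear layers, shapes, learning rate, and the scalar geometry target are all illustrative assumptions:

    ```python
    import random

    random.seed(1)
    OBS, HID = 4, 3
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(OBS)]
          for _ in range(HID)]                                   # policy trunk
    w_geo = [random.uniform(-0.5, 0.5) for _ in range(HID)]      # geometry head

    def forward(obs):
        hidden = [sum(w * o for w, o in zip(row, obs)) for row in W1]  # intermediate output
        geo_pred = sum(w * h for w, h in zip(w_geo, hidden))           # predicted geometry feature
        return hidden, geo_pred

    def auxiliary_update(obs, geo_target, lr=0.05):
        """One gradient step on the auxiliary loss (geo_pred - geo_target)^2,
        backpropagated through the geometry head into the policy trunk W1."""
        hidden, geo_pred = forward(obs)
        d_pred = 2.0 * (geo_pred - geo_target)      # dLoss/dPred
        for j in range(HID):
            for i in range(OBS):                    # trunk gradient via chain rule
                W1[j][i] -= lr * d_pred * w_geo[j] * obs[i]
            w_geo[j] -= lr * d_pred * hidden[j]     # geometry-head gradient
        return (geo_pred - geo_target) ** 2         # loss before the step

    obs, target = [1.0, 0.5, -0.3, 0.2], 2.0
    loss_before = auxiliary_update(obs, target)
    loss_after = (forward(obs)[1] - target) ** 2
    ```

    In the full method this auxiliary gradient is combined with the reinforcement learning update; the sketch isolates only the auxiliary path to show where the geometry signal shapes the trunk's representation.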
  • Publication number: 20200191574
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Application
    Filed: February 19, 2020
    Publication date: June 18, 2020
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
  • Publication number: 20200151515
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: January 17, 2020
    Publication date: May 14, 2020
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Patent number: 10605608
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: March 31, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
  • Patent number: 10572776
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: February 25, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil
  • Publication number: 20190346272
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment. In one aspect, a system comprises a grid cell neural network and an action selection neural network. The grid cell network is configured to: receive an input comprising data characterizing a velocity of the agent; process the input to generate a grid cell representation; and process the grid cell representation to generate an estimate of a position of the agent in the environment. The action selection neural network is configured to: receive an input comprising a grid cell representation and an observation characterizing a state of the environment; and process the input to generate an action selection network output.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 14, 2019
    Inventors: Andrea Banino, Sudarshan Kumaran, Raia Thais Hadsell, Benigno Uria-Martínez
  • Publication number: 20190266449
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. In one aspect, a method of training an action selection policy neural network for use in selecting actions to be performed by an agent navigating through an environment to accomplish one or more goals comprises: receiving an observation image characterizing a current state of the environment; processing, using the action selection policy neural network, an input comprising the observation image to generate an action selection output; processing, using a geometry-prediction neural network, an intermediate output generated by the action selection policy neural network to predict a value of a feature of a geometry of the environment when in the current state; and backpropagating a gradient of a geometry-based auxiliary loss into the action selection policy neural network to determine a geometry-based auxiliary update for current values of the network parameters.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 29, 2019
    Inventors: Fabio Viola, Piotr Wojciech Mirowski, Andrea Banino, Razvan Pascanu, Hubert Josef Soyer, Andrew James Ballard, Sudarshan Kumaran, Raia Thais Hadsell, Laurent Sifre, Rostislav Goroshin, Koray Kavukcuoglu, Misha Man Ray Denil