Patents by Inventor Yee Whye Teh

Yee Whye Teh is named as an inventor on the following patent filings. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240394540
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scalable continual learning using neural networks. One of the methods includes receiving new training data for a new machine learning task; training an active subnetwork on the new training data to determine trained values of the active network parameters from initial values of the active network parameters while holding current values of the knowledge parameters fixed; and training a knowledge subnetwork on the new training data to determine updated values of the knowledge parameters from the current values of the knowledge parameters by training the knowledge subnetwork to generate knowledge outputs for the new training inputs that match active outputs generated by the trained active subnetwork for the new training inputs.
    Type: Application
    Filed: May 24, 2024
    Publication date: November 28, 2024
    Inventors: Jonathan Schwarz, Razvan Pascanu, Raia Thais Hadsell, Wojciech Czarnecki, Yee Whye Teh, Jelena Luketina
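The two-phase training loop in the abstract above (train an active subnetwork on the new task with the knowledge parameters frozen, then distill the active subnetwork's outputs into the knowledge subnetwork) can be sketched in miniature. This is a hypothetical illustration using plain NumPy with one-parameter linear "subnetworks" and a toy regression task, not the patented implementation; all names and dimensions here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "new task": noiseless 1-D linear regression targets.
X = rng.normal(size=(64, 1))
y = 3.0 * X[:, 0] + 0.5

def forward(w, b, X):
    return X[:, 0] * w + b

def mse_step(w, b, X, t, lr=0.1):
    # One gradient-descent step on mean squared error.
    err = forward(w, b, X) - t
    return w - lr * np.mean(err * X[:, 0]), b - lr * np.mean(err)

# Phase 1: train the active subnetwork on the new training data,
# holding the knowledge parameters (kw, kb) fixed.
kw, kb = 0.0, 0.0   # knowledge parameters (frozen in this phase)
aw, ab = 0.0, 0.0   # active parameters
for _ in range(500):
    aw, ab = mse_step(aw, ab, X, y)

# Phase 2: update the knowledge parameters by training the knowledge
# subnetwork to match the trained active subnetwork's outputs on the
# new training inputs (a distillation step).
targets = forward(aw, ab, X)
for _ in range(500):
    kw, kb = mse_step(kw, kb, X, targets)
```

After phase 2 the knowledge subnetwork reproduces the active subnetwork's behavior on the new inputs, which is the mechanism that lets a single knowledge store absorb a sequence of tasks without retraining on old data.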
  • Patent number: 12067758
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting objects in images. One of the methods includes obtaining an input image; processing the input image to generate predicted part feature data, the predicted part feature data comprising, for each of a plurality of possible object parts: a part presence probability representing a likelihood that the possible object part is depicted in the input image, a predicted pose of the possible object part in the input image given that the possible object part is depicted in the input image, and an object part feature vector characterizing the depiction of the possible object part given that the possible object part is depicted in the input image; and processing the predicted part feature data for the plurality of possible object parts to generate an object detection output that identifies one or more objects depicted in the input image.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 20, 2024
    Assignee: Google LLC
    Inventors: Adam Roman Kosiorek, Geoffrey E. Hinton, Sara Sabour Rouh Aghdam, Yee Whye Teh
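The part-based detection pipeline in the abstract above has two stages: predict, for each possible object part, a presence probability, a pose, and a feature vector; then combine those per-part predictions into an object detection output. A minimal NumPy sketch of that data flow follows; the random linear maps stand in for learned networks, and the thresholding/voting rule in `detect_objects` is an invented placeholder, not the method claimed in the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

NUM_PARTS, FEAT_DIM = 4, 8

def predict_part_features(image, W_pres, W_pose, W_feat):
    """Map a flattened image to per-part predictions: a presence
    probability, a 2-D pose, and a feature vector per possible part."""
    h = image.flatten()
    presence = 1.0 / (1.0 + np.exp(-(h @ W_pres)))         # (NUM_PARTS,)
    pose = (h @ W_pose).reshape(NUM_PARTS, 2)              # (NUM_PARTS, 2)
    feats = (h @ W_feat).reshape(NUM_PARTS, FEAT_DIM)      # (NUM_PARTS, FEAT_DIM)
    return presence, pose, feats

def detect_objects(presence, pose, feats, threshold=0.5):
    """Toy combination rule: parts whose presence probability clears
    the threshold vote for one object whose pose is the
    presence-weighted mean of the voting parts' poses."""
    active = presence > threshold
    if not active.any():
        return None
    w = presence[active] / presence[active].sum()
    return {"object_pose": w @ pose[active], "num_parts": int(active.sum())}

image = rng.normal(size=(8, 8))
D = image.size
W_pres = rng.normal(size=(D, NUM_PARTS))
W_pose = rng.normal(size=(D, NUM_PARTS * 2)) * 0.1
W_feat = rng.normal(size=(D, NUM_PARTS * FEAT_DIM)) * 0.1

presence, pose, feats = predict_part_features(image, W_pres, W_pose, W_feat)
detection = detect_objects(presence, pose, feats)
```

The key structural point the sketch preserves is that pose and features are predicted *conditionally* (given that the part is present), while the presence probability separately gates how much each part contributes to the detection output.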
  • Patent number: 12020164
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scalable continual learning using neural networks. One of the methods includes receiving new training data for a new machine learning task; training an active subnetwork on the new training data to determine trained values of the active network parameters from initial values of the active network parameters while holding current values of the knowledge parameters fixed; and training a knowledge subnetwork on the new training data to determine updated values of the knowledge parameters from the current values of the knowledge parameters by training the knowledge subnetwork to generate knowledge outputs for the new training inputs that match active outputs generated by the trained active subnetwork for the new training inputs.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: June 25, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Jonathan Schwarz, Razvan Pascanu, Raia Thais Hadsell, Wojciech Czarnecki, Yee Whye Teh, Jelena Luketina
  • Patent number: 11983634
    Abstract: A method is proposed for training a multitask computer system, such as a multitask neural network system. The system comprises a set of trainable workers and a shared module. The trainable workers and shared module are trained on a plurality of different tasks, such that each worker learns to perform a corresponding one of the tasks according to a respective task policy, and said shared policy network learns a multitask policy which represents common behavior for the tasks. The coordinated training is performed by optimizing an objective function comprising, for each task: a reward term indicative of an expected reward earned by a worker in performing the corresponding task according to the task policy; and at least one entropy term which regularizes the distribution of the task policy towards the distribution of the multitask policy.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: May 14, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Victor Constant Bapst, Wojciech Czarnecki, James Kirkpatrick, Yee Whye Teh, Nicolas Manfred Otto Heess
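The objective described in the abstract above combines, per task, an expected-reward term with entropy-style terms that pull each worker's task policy toward the shared multitask policy. As a hedged illustration, the sketch below evaluates such an objective for discrete action distributions, writing the regularizer as a KL term toward the shared policy plus an entropy bonus; the coefficients `alpha` and `beta` and the exact decomposition are assumptions for the example, not the patent's formulation.

```python
import numpy as np

def kl(p, q):
    # KL divergence between two discrete distributions (no zero entries).
    return float(np.sum(p * np.log(p / q)))

def entropy(p):
    return float(-np.sum(p * np.log(p)))

def multitask_objective(task_policies, task_rewards, shared_policy,
                        alpha=0.5, beta=0.1):
    """Sum over tasks of: expected reward under the task policy,
    minus a KL term regularizing the task policy toward the shared
    multitask policy, plus an entropy bonus on the task policy."""
    total = 0.0
    for pi, r in zip(task_policies, task_rewards):
        total += pi @ r - alpha * kl(pi, shared_policy) + beta * entropy(pi)
    return total

shared = np.array([0.25, 0.25, 0.25, 0.25])
task_pis = [np.array([0.7, 0.1, 0.1, 0.1]),
            np.array([0.1, 0.7, 0.1, 0.1])]
task_rs = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0])]

score = multitask_objective(task_pis, task_rs, shared)
```

Maximizing this objective jointly over the task policies and the shared policy is what lets the shared module distill common behavior across tasks: each worker trades raw task reward against staying close to the multitask policy.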
  • Publication number: 20220230425
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting objects in images. One of the methods includes obtaining an input image; processing the input image to generate predicted part feature data, the predicted part feature data comprising, for each of a plurality of possible object parts: a part presence probability representing a likelihood that the possible object part is depicted in the input image, a predicted pose of the possible object part in the input image given that the possible object part is depicted in the input image, and an object part feature vector characterizing the depiction of the possible object part given that the possible object part is depicted in the input image; and processing the predicted part feature data for the plurality of possible object parts to generate an object detection output that identifies one or more objects depicted in the input image.
    Type: Application
    Filed: May 22, 2020
    Publication date: July 21, 2022
    Inventors: Adam Roman Kosiorek, Geoffrey E. Hinton, Sara Sabour Rouh Aghdam, Yee Whye Teh
  • Publication number: 20220083869
    Abstract: A method is proposed for training a multitask computer system, such as a multitask neural network system. The system comprises a set of trainable workers and a shared module. The trainable workers and shared module are trained on a plurality of different tasks, such that each worker learns to perform a corresponding one of the tasks according to a respective task policy, and said shared policy network learns a multitask policy which represents common behavior for the tasks. The coordinated training is performed by optimizing an objective function comprising, for each task: a reward term indicative of an expected reward earned by a worker in performing the corresponding task according to the task policy; and at least one entropy term which regularizes the distribution of the task policy towards the distribution of the multitask policy.
    Type: Application
    Filed: September 27, 2021
    Publication date: March 17, 2022
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Victor Constant Bapst, Wojciech Czarnecki, James Kirkpatrick, Yee Whye Teh, Nicolas Manfred Otto Heess
  • Patent number: 11132609
    Abstract: A method is proposed for training a multitask computer system, such as a multitask neural network system. The system comprises a set of trainable workers and a shared module. The trainable workers and shared module are trained on a plurality of different tasks, such that each worker learns to perform a corresponding one of the tasks according to a respective task policy, and said shared policy network learns a multitask policy which represents common behavior for the tasks. The coordinated training is performed by optimizing an objective function comprising, for each task: a reward term indicative of an expected reward earned by a worker in performing the corresponding task according to the task policy; and at least one entropy term which regularizes the distribution of the task policy towards the distribution of the multitask policy.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: September 28, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Victor Constant Bapst, Wojciech Czarnecki, James Kirkpatrick, Yee Whye Teh, Nicolas Manfred Otto Heess
  • Publication number: 20210117786
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scalable continual learning using neural networks. One of the methods includes receiving new training data for a new machine learning task; training an active subnetwork on the new training data to determine trained values of the active network parameters from initial values of the active network parameters while holding current values of the knowledge parameters fixed; and training a knowledge subnetwork on the new training data to determine updated values of the knowledge parameters from the current values of the knowledge parameters by training the knowledge subnetwork to generate knowledge outputs for the new training inputs that match active outputs generated by the trained active subnetwork for the new training inputs.
    Type: Application
    Filed: April 18, 2019
    Publication date: April 22, 2021
    Inventors: Jonathan Schwarz, Razvan Pascanu, Raia Thais Hadsell, Wojciech Czarnecki, Yee Whye Teh, Jelena Luketina
  • Publication number: 20210097401
    Abstract: According to a first aspect, a network system to generate output data values from input data values according to one or more learned data distributions comprises an input to receive a set of observations, each comprising a respective first data value for a first variable and a respective second data value for a second variable dependent upon the first variable. The system may comprise an encoder neural network system configured to encode each observation of the set of observations to provide an encoded output for each observation. The system may further comprise an aggregator configured to aggregate the encoded outputs for the set of observations and provide an aggregated output. The system may further comprise a decoder neural network system configured to receive a combination of the aggregated output and a target input value and to provide a decoder output. The target input value may comprise a value for the first variable and the decoder output may predict a corresponding value for the second variable.
    Type: Application
    Filed: February 11, 2019
    Publication date: April 1, 2021
    Inventors: Tiago Miguel Sargento Pires Ramalho, Dan Rosenbaum, Marta Garnelo, Christopher Maddison, Seyed Mohammadali Eslami, Yee Whye Teh, Danilo Jimenez Rezende
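The encoder–aggregator–decoder structure in the abstract above can be sketched end to end: encode each (x, y) observation, aggregate the encodings into a single order-invariant summary, then decode that summary together with a target x to predict the corresponding y. In this hypothetical NumPy sketch, random linear maps stand in for the learned encoder and decoder, and mean pooling is one possible choice of aggregator; none of this is the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
HID = 16

# Random linear maps stand in for the trained encoder/decoder networks.
W_enc = rng.normal(size=(2, HID)) * 0.1       # encodes one (x, y) observation
W_dec = rng.normal(size=(HID + 1,)) * 0.1     # decodes (aggregate, x_target)

def encode(x, y):
    # Encode a single observation (first variable, second variable).
    return np.tanh(np.array([x, y]) @ W_enc)

def aggregate(encodings):
    # Mean pooling: permutation-invariant over the observation set.
    return np.mean(encodings, axis=0)

def decode(agg, x_target):
    # Combine the aggregated output with a target input value and
    # predict the corresponding second-variable value.
    return float(np.concatenate([agg, [x_target]]) @ W_dec)

# Observation set: pairs (x, y) with y dependent on x.
context = [(0.0, 0.1), (0.5, 0.6), (1.0, 1.1)]
agg = aggregate([encode(x, y) for x, y in context])
prediction = decode(agg, 0.75)
```

Because the aggregator is a mean, the summary (and hence the prediction) does not depend on the order in which observations arrive, which is what lets the system condition on an arbitrary set of observations.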
  • Publication number: 20200090048
    Abstract: A method is proposed for training a multitask computer system, such as a multitask neural network system. The system comprises a set of trainable workers and a shared module. The trainable workers and shared module are trained on a plurality of different tasks, such that each worker learns to perform a corresponding one of the tasks according to a respective task policy, and said shared policy network learns a multitask policy which represents common behavior for the tasks. The coordinated training is performed by optimizing an objective function comprising, for each task: a reward term indicative of an expected reward earned by a worker in performing the corresponding task according to the task policy; and at least one entropy term which regularizes the distribution of the task policy towards the distribution of the multitask policy.
    Type: Application
    Filed: November 19, 2019
    Publication date: March 19, 2020
    Inventors: Razvan Pascanu, Raia Thais Hadsell, Victor Constant Bapst, Wojciech Czarnecki, James Kirkpatrick, Yee Whye Teh, Nicolas Manfred Otto Heess