Patents by Inventor Neil Matthew Tinmouth Houlsby

Neil Matthew Tinmouth Houlsby has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928854
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object detection. In one aspect, a method comprises: obtaining: (i) an image, and (ii) a set of one or more query embeddings, wherein each query embedding represents a respective category of object; processing the image and the set of query embeddings using an object detection neural network to generate object detection data for the image, comprising: processing the image using an image encoding subnetwork of the object detection neural network to generate a set of object embeddings; processing each object embedding using a localization subnetwork to generate localization data defining a corresponding region of the image; and processing: (i) the set of object embeddings, and (ii) the set of query embeddings, using a classification subnetwork to generate, for each object embedding, a respective classification score distribution over the set of query embeddings.
    Type: Grant
    Filed: May 5, 2023
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Matthias Johannes Lorenz Minderer, Alexey Alexeevich Gritsenko, Austin Charles Stone, Dirk Weissenborn, Alexey Dosovitskiy, Neil Matthew Tinmouth Houlsby
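
The abstract above describes a three-part pipeline: an image encoding subnetwork that yields object embeddings, a localization subnetwork that maps each object embedding to a region, and a classification subnetwork that scores object embeddings against query embeddings. Below is a minimal numpy sketch of that data flow only; the shapes, the random stand-in encoder, the sigmoid box head, and the softmax scoring are illustrative assumptions, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64          # embedding width (illustrative)
N_OBJ = 16      # number of object embeddings produced by the image encoder
N_QUERY = 3     # number of query embeddings (one per object category)

def encode_image(image):
    # Stand-in for the image encoding subnetwork: one embedding per candidate object.
    return rng.normal(size=(N_OBJ, D))

def localization_head(obj_emb, W_box):
    # Stand-in localization subnetwork: maps each object embedding to a box
    # (cx, cy, w, h) in [0, 1] defining a region of the image.
    return 1.0 / (1.0 + np.exp(-obj_emb @ W_box))

def classification_head(obj_emb, query_emb):
    # Score every object embedding against every query embedding, then normalize
    # into a classification score distribution over the query set.
    logits = obj_emb @ query_emb.T                      # (N_OBJ, N_QUERY)
    logits -= logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)

image = rng.normal(size=(224, 224, 3))
queries = rng.normal(size=(N_QUERY, D))                 # one embedding per category
W_box = rng.normal(size=(D, 4)) * 0.1

obj_emb = encode_image(image)
boxes = localization_head(obj_emb, W_box)               # (N_OBJ, 4) image regions
scores = classification_head(obj_emb, queries)          # (N_OBJ, N_QUERY) distributions
print(boxes.shape, scores.shape)
```

One consequence of this arrangement, at least in the sketch, is that the set of scoreable categories is determined by the query embeddings supplied at inference time rather than fixed inside the detector.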
  • Publication number: 20240062426
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using self-attention based neural networks. One of the methods includes obtaining one or more images comprising a plurality of pixels; determining, for each image of the one or more images, a plurality of image patches of the image, wherein each image patch comprises a different subset of the pixels of the image; processing, for each image of the one or more images, the corresponding plurality of image patches to generate an input sequence comprising a respective input element at each of a plurality of input positions, wherein a plurality of the input elements correspond to respective different image patches; and processing the input sequences using a neural network to generate a network output that characterizes the one or more images, wherein the neural network comprises one or more self-attention neural network layers.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Inventors: Neil Matthew Tinmouth Houlsby, Sylvain Gelly, Jakob D. Uszkoreit, Xiaohua Zhai, Georg Heigold, Lucas Klaus Beyer, Alexander Kolesnikov, Matthias Johannes Lorenz Minderer, Dirk Weissenborn, Mostafa Dehghani, Alexey Dosovitskiy, Thomas Unterthiner
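
For reference, a short numpy sketch of the input construction the abstract walks through: split the image into non-overlapping patches (each patch a different subset of pixels), flatten each patch, and project it to an input element at one input position. The 16x16 patch size, the linear projection, and the additive position embeddings are illustrative assumptions; the self-attention layers that consume this sequence are sketched under publication 20220108478 below.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 224
C = 3
P = 16            # patch size (illustrative)
D = 64            # embedding width (illustrative)

image = rng.normal(size=(H, W, C))

# Split the image into non-overlapping P x P patches; each patch covers a
# different subset of the image's pixels.
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * C)                 # (num_patches, P*P*C)

# Project each flattened patch to an input element and add a position embedding,
# giving one input element per input position.
W_embed = rng.normal(size=(P * P * C, D)) * 0.02
pos_embed = rng.normal(size=(patches.shape[0], D)) * 0.02
input_sequence = patches @ W_embed + pos_embed           # (num_patches, D)

print(input_sequence.shape)   # the sequence the self-attention layers consume
```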
  • Publication number: 20230360365
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object detection. In one aspect, a method comprises: obtaining: (i) an image, and (ii) a set of one or more query embeddings, wherein each query embedding represents a respective category of object; processing the image and the set of query embeddings using an object detection neural network to generate object detection data for the image, comprising: processing the image using an image encoding subnetwork of the object detection neural network to generate a set of object embeddings; processing each object embedding using a localization subnetwork to generate localization data defining a corresponding region of the image; and processing: (i) the set of object embeddings, and (ii) the set of query embeddings, using a classification subnetwork to generate, for each object embedding, a respective classification score distribution over the set of query embeddings.
    Type: Application
    Filed: May 5, 2023
    Publication date: November 9, 2023
    Inventors: Matthias Johannes Lorenz Minderer, Alexey Alexeevich Gritsenko, Austin Charles Stone, Dirk Weissenborn, Alexey Dosovitskiy, Neil Matthew Tinmouth Houlsby
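
This publication shares its abstract with patent 11928854 above, so rather than repeating the full pipeline sketch, the snippet below isolates the step the abstract ends on: producing, for each object embedding, a classification score distribution over the set of query embeddings. The cosine-similarity logits, the temperature, and the final argmax (one way a caller might consume the distributions) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N_OBJ, N_QUERY = 64, 8, 4

obj_emb = rng.normal(size=(N_OBJ, D))        # from the image encoding subnetwork (stand-in)
query_emb = rng.normal(size=(N_QUERY, D))    # one embedding per object category (stand-in)

# Cosine-similarity logits between object and query embeddings; a softmax over the
# query set then gives each object embedding a classification score distribution.
obj_n = obj_emb / np.linalg.norm(obj_emb, axis=-1, keepdims=True)
qry_n = query_emb / np.linalg.norm(query_emb, axis=-1, keepdims=True)
logits = obj_n @ qry_n.T / 0.07              # temperature is an illustrative choice
probs = np.exp(logits - logits.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)        # (N_OBJ, N_QUERY)

# One illustrative way to consume the output: label each region with its best-scoring query.
best_query = probs.argmax(axis=-1)
print(probs.shape, best_query)
```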
  • Publication number: 20230196211
    Abstract: Generally, the present disclosure is directed to systems and methods that provide a simple, scalable, yet effective strategy to perform transfer learning with a mixture of experts (MoE). In particular, the transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. In contrast, example systems and methods of the present disclosure use expert representations for transfer with a simple, yet effective, strategy.
    Type: Application
    Filed: June 7, 2021
    Publication date: June 22, 2023
    Inventors: Carlos Riquelme Ruiz, André Susano Pinto, Joan Puigcerver, Basil Mustafa, Neil Matthew Tinmouth Houlsby, Sylvain Gelly, Cedric Benjamin Renggli, Daniel Martin Keysers
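
The abstract describes transferring pre-trained expert representations to a downstream task with a "simple, yet effective" strategy, but does not spell that strategy out. The sketch below therefore shows only one generic possibility: score each candidate expert representation with a cheap proxy (here, nearest-class-centroid accuracy on a small labelled sample) and transfer from the best-scoring expert. The proxy, the synthetic data, and the random-projection expert stand-ins are assumptions for illustration, not the claimed method.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_SAMPLES, N_CLASSES = 32, 40, 4

# Three pre-trained "expert" feature extractors (stand-ins: fixed random projections).
experts = {name: rng.normal(size=(D, D)) for name in ("expert_a", "expert_b", "expert_c")}

# A small labelled sample from the downstream task (illustrative synthetic data).
x = rng.normal(size=(N_SAMPLES, D))
y = np.arange(N_SAMPLES) % N_CLASSES

def proxy_score(feats, labels):
    # Nearest-class-centroid accuracy as a cheap proxy for downstream usefulness.
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in range(N_CLASSES)])
    pred = ((feats[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(-1)
    return (pred == labels).mean()

# Score every expert representation on the sample and transfer from the best one.
scores = {name: proxy_score(x @ W, y) for name, W in experts.items()}
best = max(scores, key=scores.get)
print(scores, "-> transfer from", best)
```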
  • Publication number: 20230107409
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes a neural network configured to perform the machine learning task, the neural network including one or more expert neural network blocks that each include multiple routers and multiple expert neural networks.
    Type: Application
    Filed: October 5, 2022
    Publication date: April 6, 2023
    Inventors: Rodolphe Jenatton, Carlos Riquelme Ruiz, Dustin Tran, James Urquhart Allingham, Florian Wenzel, Zelda Elaine Mariet, Basil Mustafa, Joan Puigcerver i Perez, Neil Matthew Tinmouth Houlsby, Ghassen Jerfel
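
A minimal numpy sketch of an expert block with the ingredients the abstract names: multiple routers and multiple expert neural networks inside one block. The top-1 routing rule, the averaging over routers, and the single-layer experts are illustrative assumptions rather than the claimed design.

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, R = 16, 4, 2     # width, number of experts, number of routers (illustrative)

# Experts: small stand-in networks (here, single linear layers).
expert_weights = rng.normal(size=(E, D, D)) * 0.1
# Each router scores the experts from the token representation.
router_weights = rng.normal(size=(R, D, E)) * 0.1

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def moe_block(x):
    # x: (tokens, D). Every router sends each token to its top-scoring expert;
    # the block output averages the routers' contributions.
    outputs = np.zeros_like(x)
    for r in range(R):
        gate = softmax(x @ router_weights[r])          # (tokens, E) routing weights
        choice = gate.argmax(-1)                       # top-1 expert per token (illustrative)
        for t, e in enumerate(choice):
            outputs[t] += gate[t, e] * (x[t] @ expert_weights[e])
    return outputs / R

tokens = rng.normal(size=(5, D))
print(moe_block(tokens).shape)   # (5, 16)
```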
  • Publication number: 20220383630
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training Vision Transformer (ViT) neural networks.
    Type: Application
    Filed: May 31, 2022
    Publication date: December 1, 2022
    Inventors: Lucas Klaus Beyer, Neil Matthew Tinmouth Houlsby, Alexander Kolesnikov, Xiaohua Zhai
  • Publication number: 20220375211
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using mixer neural networks. One of the methods includes obtaining one or more images comprising a plurality of pixels; determining, for each image of the one or more images, a plurality of image patches of the image, wherein each image patch comprises a different subset of the pixels of the image; processing, for each image of the one or more images, the corresponding plurality of image patches to generate an input sequence comprising a respective input element at each of a plurality of input positions, wherein a plurality of the input elements correspond to respective different image patches; and processing the input sequences using a neural network to generate a network output that characterizes the one or more images, wherein the neural network comprises one or more mixer neural network layers.
    Type: Application
    Filed: May 5, 2022
    Publication date: November 24, 2022
    Inventors: Ilya Tolstikhin, Neil Matthew Tinmouth Houlsby, Alexander Kolesnikov, Lucas Klaus Beyer, Alexey Dosovitskiy, Mario Lucic, Xiaohua Zhai, Thomas Unterthiner, Daniel M. Keysers, Jakob D. Uszkoreit, Yin Ching Jessica Yung, Andreas Steiner
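
The abstract builds the same patch sequence as the self-attention publications above but processes it with mixer layers. The sketch below shows one common form of such a layer (as in MLP-Mixer-style models): a token-mixing MLP applied across the patch dimension followed by a channel-mixing MLP applied within each patch embedding. The layer sizes, GELU, LayerNorm, and skip connections are illustrative assumptions rather than the claimed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
S, C = 196, 64            # patches (tokens) and channels per patch embedding (illustrative)

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True) + 1e-6
    return (x - mu) / sd

def mlp(x, w1, w2):
    return gelu(x @ w1) @ w2

# Illustrative mixer-layer parameters.
w_tok1, w_tok2 = rng.normal(size=(S, 2 * S)) * 0.02, rng.normal(size=(2 * S, S)) * 0.02
w_ch1, w_ch2 = rng.normal(size=(C, 2 * C)) * 0.02, rng.normal(size=(2 * C, C)) * 0.02

def mixer_layer(x):
    # Token mixing: the MLP acts across the patch dimension, mixing information between patches.
    y = x + mlp(layer_norm(x).T, w_tok1, w_tok2).T
    # Channel mixing: the MLP acts across channels within each patch embedding.
    return y + mlp(layer_norm(y), w_ch1, w_ch2)

patch_sequence = rng.normal(size=(S, C))     # input sequence built from image patches
print(mixer_layer(patch_sequence).shape)     # (196, 64)
```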
  • Publication number: 20220189612
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a downstream computer vision task. One of the methods includes pre-training an initial neural network that shares layers with the neural network to perform an initial computer vision task and then training the neural network on the downstream computer vision task.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 16, 2022
    Inventors: Xiaohua Zhai, Sylvain Gelly, Alexander Kolesnikov, Yin Ching Jessica Yung, Joan Puigcerver i Perez, Lucas Klaus Beyer, Neil Matthew Tinmouth Houlsby, Wen Yau Aaron Loh, Alan Prasana Karthikesalingam, Basil Mustafa, Jan Freyberg, Patricia Leigh MacWilliams, Vivek Natarajan
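
The abstract describes a two-stage recipe: pre-train an initial network that shares layers with the target network on an initial computer vision task, then train the target network on the downstream task. The toy numpy sketch below mirrors that flow with a tiny shared layer and task-specific heads; the model, loss, and optimizer are illustrative assumptions, and real uses of such a method would pre-train and fine-tune far larger vision models.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16

def forward(x, W_shared, w_head):
    h = np.tanh(x @ W_shared)       # shared layers ("backbone")
    return h, h @ w_head            # task-specific head

def train(x, y, W_shared, w_head, train_shared, steps=200, lr=0.05):
    # Plain gradient descent on mean squared error (illustrative).
    for _ in range(steps):
        h, pred = forward(x, W_shared, w_head)
        err = (pred - y) / len(x)
        if train_shared:
            grad_h = err @ w_head.T * (1 - h ** 2)
            W_shared -= lr * (x.T @ grad_h)
        w_head -= lr * (h.T @ err)
    return W_shared, w_head

# 1) Pre-train the shared layers on an initial (upstream) task.
W_shared = rng.normal(size=(D, H)) * 0.1
x_up, y_up = rng.normal(size=(512, D)), rng.normal(size=(512, 1))
W_shared, _ = train(x_up, y_up, W_shared, np.zeros((H, 1)), train_shared=True)

# 2) Transfer: keep the pre-trained shared layers, fit a fresh head on the downstream task.
x_down, y_down = rng.normal(size=(64, D)), rng.normal(size=(64, 1))
_, head_down = train(x_down, y_down, W_shared, np.zeros((H, 1)), train_shared=False)
print(head_down.shape)
```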
  • Publication number: 20220108478
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using self-attention based neural networks. One of the methods includes obtaining one or more images comprising a plurality of pixels; determining, for each image of the one or more images, a plurality of image patches of the image, wherein each image patch comprises a different subset of the pixels of the image; processing, for each image of the one or more images, the corresponding plurality of image patches to generate an input sequence comprising a respective input element at each of a plurality of input positions, wherein a plurality of the input elements correspond to respective different image patches; and processing the input sequences using a neural network to generate a network output that characterizes the one or more images, wherein the neural network comprises one or more self-attention neural network layers.
    Type: Application
    Filed: October 1, 2021
    Publication date: April 7, 2022
    Inventors: Neil Matthew Tinmouth Houlsby, Sylvain Gelly, Jakob D. Uszkoreit, Xiaohua Zhai, Georg Heigold, Lucas Klaus Beyer, Alexander Kolesnikov, Matthias Johannes Lorenz Minderer, Dirk Weissenborn, Mostafa Dehghani, Alexey Dosovitskiy, Thomas Unterthiner
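
This abstract is identical to publication 20240062426 above, where the patch-to-sequence construction is sketched; to avoid repeating it, the snippet below sketches the other component the abstract names, a self-attention layer applied to the input sequence. The single head, the 1/sqrt(D) scaling, and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, D = 196, 64                     # sequence length (patches) and width (illustrative)

W_q, W_k, W_v = (rng.normal(size=(D, D)) * 0.02 for _ in range(3))

def self_attention(x):
    # Each input element attends to every element of the patch sequence.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(D)                 # (S, S) attention logits
    scores -= scores.max(-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(-1, keepdims=True)
    return attn @ v                               # (S, D) mixed representation

input_sequence = rng.normal(size=(S, D))          # built from image patches as sketched above
print(self_attention(input_sequence).shape)
```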
  • Publication number: 20220108171
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training neural networks using transfer learning.
    Type: Application
    Filed: September 28, 2021
    Publication date: April 7, 2022
    Inventors: Joan Puigcerver i Perez, Basil Mustafa, André Susano Pinto, Carlos Riquelme Ruiz, Neil Matthew Tinmouth Houlsby, Daniel M. Keysers
  • Publication number: 20220092416
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining neural network architectures. One of the methods includes receiving training data for training a task neural network to perform a particular machine learning task; and selecting, from a space of possible architectures, an architecture for the task neural network, wherein the space of possible architectures is represented as a graph of nodes connected by edges, each node in the graph representing a decision point in selecting the architecture and each edge in the graph representing an action.
    Type: Application
    Filed: December 27, 2019
    Publication date: March 24, 2022
    Inventors: Neil Matthew Tinmouth Houlsby, Quentin Lascombes de Laroussilhe, Stanislaw Kamil Jastrzebski, Andrea Gesmundo
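
The abstract represents the space of possible architectures as a graph whose nodes are decision points and whose edges are actions. The sketch below walks such a graph, taking one action per decision point to assemble an architecture; the toy search space and the random stand-in policy (where a learned controller would sit) are illustrative assumptions, not the patented search space or selection method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative decision graph: each node is a decision point, each outgoing edge an action
# leading to the next decision point (None marks the end of the walk).
graph = {
    "backbone": [("use_small", "width"), ("use_large", "width")],
    "width":    [("width_64", "depth"), ("width_128", "depth")],
    "depth":    [("depth_6", None), ("depth_12", None)],
}

def select_architecture(policy):
    # Walk the graph from the first decision point, taking one action (edge) per node.
    node, choices = "backbone", []
    while node is not None:
        edges = graph[node]
        action, node = edges[policy(node, len(edges))]
        choices.append(action)
    return choices

# Stand-in policy: pick an edge at random; a learned controller would go here.
random_policy = lambda node, n_edges: rng.integers(n_edges)
print(select_architecture(random_policy))
```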