Patents by Inventor Basil Mustafa

Basil Mustafa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139432
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes a neural network configured to perform the machine learning task, the neural network including one or more merger neural network blocks that each generate a block output sequence that has fewer elements than the block input sequence processed by the merger neural network block.
    Type: Application
    Filed: February 6, 2023
    Publication date: May 1, 2025
    Inventors: Cédric Benjamin Renggli, Carlos Riquelme Ruiz, André Susano Pinto, Basil Mustafa, Joan Puigcerver i Perez, Neil Matthew Tinmouth Houlsby
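A minimal NumPy sketch of the sequence-merging idea in the abstract above, assuming the merger block pools its input sequence with attention-style weights computed from a small set of learned queries. The patent does not commit to this exact mechanism; all names and shapes here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def merger_block(block_input, queries):
    """Pool an (n, d) block input sequence down to (m, d), m < n, using
    attention-style weights computed from m learned merger queries."""
    scores = queries @ block_input.T / np.sqrt(block_input.shape[-1])  # (m, n)
    weights = softmax(scores, axis=-1)
    return weights @ block_input  # (m, d): fewer elements than the input

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))   # 16-element block input sequence
q = rng.normal(size=(4, 8))    # 4 learned merger queries (hypothetical)
y = merger_block(x, q)
assert y.shape == (4, 8)       # block output sequence has fewer elements
```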
  • Patent number: 12272442
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a downstream computer vision task. One of the methods includes pre-training an initial neural network that shares layers with the neural network to perform an initial computer vision task and then training the neural network on the downstream computer vision task.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: April 8, 2025
    Assignee: Google LLC
    Inventors: Xiaohua Zhai, Sylvain Gelly, Alexander Kolesnikov, Yin Ching Jessica Yung, Joan Puigcerver i Perez, Lucas Klaus Beyer, Neil Matthew Tinmouth Houlsby, Wen Yau Aaron Loh, Alan Prasana Karthikesalingam, Basil Mustafa, Jan Freyberg, Patricia Leigh MacWilliams, Vivek Natarajan
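A minimal sketch of the two-stage recipe this abstract describes: pre-train shared layers on an initial computer vision task, then reuse them, with a fresh task-specific head, on the downstream task. The single linear layer, squared-error loss, and placeholder data are stand-ins for illustration, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(32, 16))       # shared layers ("backbone")
head_a = rng.normal(scale=0.1, size=(16, 10))  # initial-task head
head_b = rng.normal(scale=0.1, size=(16, 3))   # downstream-task head

def sgd_step(x, y, W, head, lr=1e-2):
    """One SGD step on a squared-error loss for tanh(x @ W) @ head."""
    h = np.tanh(x @ W)
    err = h @ head - y
    g_head = h.T @ err / len(x)
    g_W = x.T @ ((err @ head.T) * (1 - h**2)) / len(x)
    return W - lr * g_W, head - lr * g_head

# Stage 1: pre-train the shared layers on the initial computer vision task.
x_a, y_a = rng.normal(size=(256, 32)), rng.normal(size=(256, 10))
for _ in range(100):
    W, head_a = sgd_step(x_a, y_a, W, head_a)

# Stage 2: keep the pre-trained shared layers, attach a fresh head, and
# continue training on the downstream computer vision task.
x_b, y_b = rng.normal(size=(64, 32)), rng.normal(size=(64, 3))
for _ in range(100):
    W, head_b = sgd_step(x_b, y_b, W, head_b)
```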
  • Publication number: 20240354593
    Abstract: Heteroscedastic (HET) classifiers, which learn a multivariate Gaussian distribution over prediction logits, perform well on image classification problems with hundreds to thousands of classes. However, compared to standard deterministic (DET) classifiers, they introduce extra parameters that scale linearly with the number of classes, which makes them infeasible to apply to larger-scale problems. HET classifiers also introduce a temperature hyperparameter that must ordinarily be tuned. Disclosed are HET classifiers whose parameter count, compared to a DET classifier, scales independently of the number of classes. In large-scale settings of the embodiments, the need to tune the temperature hyperparameter is removed by learning it directly on the training data.
    Type: Application
    Filed: July 20, 2023
    Publication date: October 24, 2024
    Inventors: Rodolphe René Willy Jenatton, Mark Patrick Collier, Effrosyni Kokiopoulou, Basil Mustafa, Neil Matthew Tinmouth Houlsby, Jesse Berent
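A sketch of the parameter-scaling idea, assuming the logit noise is parameterized in feature space through a low-rank factor so that the extra parameter count does not grow with the number of classes, and the temperature is a learned scalar. The patent's exact construction may differ; all shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, r = 64, 1000, 8   # feature dim, number of classes, noise rank

W = rng.normal(scale=0.05, size=(d, c))   # standard (DET) classifier weights
V = rng.normal(scale=0.05, size=(d, r))   # low-rank noise factor: its size
                                          # is independent of the class count c
log_temp = np.array(0.0)                  # temperature, learned from data

def het_logits(x, n_samples=16):
    """Sample noisy logits: the noise lives in feature space (d), not class
    space (c), so extra parameters do not scale with the number of classes."""
    eps = rng.normal(size=(n_samples, len(x), V.shape[1]))   # (s, n, r)
    noisy_feats = x + eps @ V.T                              # (s, n, d)
    return (noisy_feats @ W) / np.exp(log_temp)              # (s, n, c)

def predict_probs(x):
    z = het_logits(x)
    z = z - z.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.mean(axis=0)   # Monte Carlo average over noise samples

x = rng.normal(size=(4, d))
print(predict_probs(x).shape)   # (4, 1000)
```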
  • Publication number: 20240289926
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating predictions about images. One of the systems includes a neural network comprising a sequence of one or more network blocks that are each configured to perform operations comprising: obtaining a block input that represents an intermediate representation of an input image; determining a plurality of patches of the block input or of an updated representation of the block input, wherein each patch comprises a different subset of elements of the block input or of the updated representation of the block input; assigning each patch to one or more respective expert modules of a plurality of expert modules of the network block; for each patch of the plurality of patches, processing the patch using the corresponding expert modules to generate respective module outputs; and generating a block output by combining the module outputs.
    Type: Application
    Filed: May 27, 2022
    Publication date: August 29, 2024
    Inventors: Carlos Riquelme Ruiz, André Susano Pinto, Basil Mustafa, Daniel M. Keysers, Joan Puigcerver i Perez, Maxim Neumann, Neil Matthew Tinmouth Houlsby, Rodolphe Jenatton
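A minimal sketch of the patch-to-expert routing the abstract describes: each patch of the block input is assigned to its top-k expert modules, and the expert outputs are combined into the block output. The softmax router, tanh experts, and top-k rule are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_block(patches, router_w, experts, k=1):
    """Assign each patch to its top-k experts, process it with them, and
    combine the module outputs into the block output."""
    gates = softmax(patches @ router_w)            # (n_patches, n_experts)
    top_k = np.argsort(gates, axis=-1)[:, -k:]     # chosen experts per patch
    out = np.zeros_like(patches)
    for i, patch in enumerate(patches):
        for e in top_k[i]:
            out[i] += gates[i, e] * np.tanh(patch @ experts[e])  # expert module
    return out

rng = np.random.default_rng(0)
n, d, n_experts = 16, 8, 4
patches = rng.normal(size=(n, d))              # patches of the block input
router = rng.normal(size=(d, n_experts))
experts = rng.normal(scale=0.3, size=(n_experts, d, d))
block_out = moe_block(patches, router, experts, k=2)
```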
  • Publication number: 20240256835
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing an input through each of a plurality of layers of a neural network to generate an output using a plurality of hardware accelerators. The plurality of layers comprise a fully connected layer having a plurality of parameters arranged in a row dimension and a column dimension. One of the methods comprises: generating a plurality of parameter blocks by partitioning the plurality of parameters along the row dimension and the column dimension; determining a ratio of a number of parameters along the row dimension relative to a number of parameters along the column dimension; and determining whether to use row sharding or column sharding with the plurality of hardware accelerators to calculate an output for the fully connected layer and then calculating the output for the fully connected layer using either row sharding or column sharding.
    Type: Application
    Filed: January 26, 2024
    Publication date: August 1, 2024
    Inventors: Mostafa Dehghani, Josip Djolonga, Jonathan Heek, Basil Mustafa, Piotr Michal Padlewski, Justin Morgan Gilmer, Neil Matthew Tinmouth Houlsby
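A simulation of the row-versus-column sharding choice for a fully connected layer. The abstract says the choice is made from the ratio of row parameters to column parameters; the specific threshold used below is an assumption, and the device loop stands in for real accelerators and their collectives.

```python
import numpy as np

def shard_matmul(x, W, n_devices, mode):
    """Simulate sharding a fully connected layer's parameter matrix W
    across accelerators along the row or column dimension."""
    if mode == "row":
        # Each device holds a slice of W's rows; partial products are
        # summed across devices (an all-reduce in a real system).
        xs = np.array_split(x, n_devices, axis=1)
        Ws = np.array_split(W, n_devices, axis=0)
        return sum(xi @ Wi for xi, Wi in zip(xs, Ws))
    # Column sharding: each device computes a slice of the output columns,
    # which are concatenated (an all-gather in a real system).
    Ws = np.array_split(W, n_devices, axis=1)
    return np.concatenate([x @ Wi for Wi in Ws], axis=1)

def choose_sharding(W):
    """Pick a sharding from the row/column parameter ratio, as the abstract
    describes; this exact decision rule is an illustrative assumption."""
    rows, cols = W.shape
    return "row" if rows / cols >= 1.0 else "column"

rng = np.random.default_rng(0)
x, W = rng.normal(size=(4, 512)), rng.normal(size=(512, 128))
mode = choose_sharding(W)                      # "row" here, since 512 >= 128
y = shard_matmul(x, W, n_devices=4, mode=mode)
assert np.allclose(y, x @ W)                   # sharding preserves the output
```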
  • Publication number: 20240169629
    Abstract: A first image and textual content associated with the first image is obtained. A second image that depicts the textual content associated with the first image is rendered. The first image and the second image are processed with a machine-learned encoding model to respectively obtain a first image embedding and a second image embedding for an image embedding space including a plurality of image embeddings. The machine-learned encoding model is trained based on a difference between the first image embedding and the second image embedding.
    Type: Application
    Filed: November 17, 2023
    Publication date: May 23, 2024
    Inventors: Michael Tobias Tschannen, Neil Matthew Tinmouth Houlsby, Basil Mustafa
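A sketch of the training setup the abstract describes: render the associated text as a second image, encode both images with one shared encoder, and train on the difference between the two embeddings. The character-based renderer and the squared-difference objective below are crude stand-ins for a real rasterizer and training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_text_as_image(text, size=32):
    """Stand-in for a real text renderer: map characters to pixel rows.
    A real system would rasterize the string with a font."""
    img = np.zeros((size, size))
    for i, ch in enumerate(text[:size]):
        img[i, : ord(ch) % size] = 1.0
    return img

W_enc = rng.normal(scale=0.05, size=(32 * 32, 64))  # one shared image encoder

def encode(img):
    z = img.reshape(-1) @ W_enc
    return z / np.linalg.norm(z)

first_image = rng.random((32, 32))                  # placeholder photo
second_image = render_text_as_image("a photo of a dog")  # depicts the text

z1, z2 = encode(first_image), encode(second_image)
# Training objective sketch: pull the two embeddings together, e.g. by
# minimizing their squared difference (or a contrastive loss over a batch).
loss = np.sum((z1 - z2) ** 2)
```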
  • Publication number: 20240153256
    Abstract: A method may include obtaining a pretrained image encoder and a training sample comprising a training image and a training text string corresponding to the training image. The method may also include initializing a text encoder in an untrained state, determining, using the pretrained image encoder and based on the training image, a first latent representation of the training image, and determining, using the text encoder and based on the training text string, a second latent representation of the training text string. The method may further include determining a loss value based on the first latent representation and the second latent representation, updating, based on the loss value, one or more parameters of the text encoder while holding fixed parameters of the pretrained image encoder, and outputting the text encoder in a trained state.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 9, 2024
    Inventors: Daniel Keysers, Xiaohua Zhai, Xiao Wang, Lucas Beyer, Basil Mustafa, Andreas Steiner, Alexander Kolesnikov
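A minimal sketch of the locked-image training loop the abstract lays out: the pre-trained image encoder stays frozen while the text encoder, starting untrained, is updated to match it. For brevity the sketch uses linear encoders, placeholder features, and a squared-distance alignment loss in place of the loss described in the claims.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32

W_img = rng.normal(scale=0.1, size=(100, d))  # pretrained image encoder (frozen)
W_txt = np.zeros((50, d))                     # text encoder in an untrained state

imgs = rng.normal(size=(64, 100))  # training images (placeholder features)
toks = rng.normal(size=(64, 50))   # matching text strings (placeholder features)

for _ in range(200):
    z_img = imgs @ W_img           # first latent representation (held fixed)
    z_txt = toks @ W_txt           # second latent representation (learned)
    err = z_txt - z_img            # loss: squared distance between the pair
    W_txt -= 1e-3 * toks.T @ err / len(toks)  # update the text encoder only;
                                              # W_img is never touched
```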
  • Publication number: 20230260652
    Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain, significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
    Type: Application
    Filed: December 10, 2021
    Publication date: August 17, 2023
    Inventors: Shekoofeh Azizi, Wen Yau Aaron Loh, Zachary William Beaver, Ting Chen, Jonathan Paul Deaton, Jan Freyberg, Alan Prasana Karthikesalingam, Simon Kornblith, Basil Mustafa, Mohammad Norouzi, Vivek Natarajan, Fiona Keleher Ryan
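The MICLe pair construction lends itself to a short sketch: instead of pairing two augmentations of one image, any two distinct images that share an attribute (here, the same patient) form a positive pair. The grouping key and placeholder data below are illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Unlabeled medical images grouped by a shared attribute (e.g., the same
# patient); keys and arrays are placeholders.
images_by_patient = {
    "patient_a": [rng.random((8, 8)) for _ in range(3)],
    "patient_b": [rng.random((8, 8)) for _ in range(2)],
}

def micle_positive_pairs(groups):
    """Multi-Instance Contrastive Learning pair construction: every pair of
    distinct images sharing an attribute becomes a positive pair for the
    contrastive objective."""
    pairs = []
    for imgs in groups.values():
        pairs.extend(combinations(imgs, 2))
    return pairs

pairs = micle_positive_pairs(images_by_patient)
print(len(pairs))   # 3 pairs from patient_a, 1 from patient_b
```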
  • Publication number: 20230196211
    Abstract: Generally, the present disclosure is directed to systems and methods that provide a simple, scalable, yet effective strategy to perform transfer learning with a mixture of experts (MoE). In particular, the transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. In contrast, example systems and methods of the present disclosure use expert representations for transfer with a simple, yet effective, strategy.
    Type: Application
    Filed: June 7, 2021
    Publication date: June 22, 2023
    Inventors: Carlos Riquelme Ruiz, André Susano Pinto, Joan Puigcerver, Basil Mustafa, Neil Matthew Tinmouth Houlsby, Sylvain Gelly, Cedric Benjamin Renggli, Daniel Martin Keysers
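A sketch of transfer with expert representations: given a pool of pre-trained experts, score how well each expert's frozen representation fits the downstream data and transfer the best one. The least-squares linear probe used as the selection criterion here is an assumption, not necessarily the strategy claimed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pool of pre-trained "expert" feature extractors (placeholder linear maps).
experts = [rng.normal(scale=0.1, size=(32, 16)) for _ in range(5)]

def linear_probe_score(W, x, y):
    """Fit a least-squares probe on the expert's frozen features and score
    the fit; a held-out split would be used in practice."""
    feats = np.tanh(x @ W)
    head, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return -np.mean((feats @ head - y) ** 2)

x, y = rng.normal(size=(128, 32)), rng.normal(size=(128, 3))
best = max(range(len(experts)),
           key=lambda i: linear_probe_score(experts[i], x, y))
# The selected expert's representation is then transferred (fine-tuned)
# on the downstream task.
```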
  • Publication number: 20230107409
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes a neural network configured to perform the machine learning task, the neural network including one or more expert neural network blocks that each include multiple routers and multiple expert neural networks.
    Type: Application
    Filed: October 5, 2022
    Publication date: April 6, 2023
    Inventors: Rodolphe Jenatton, Carlos Riquelme Ruiz, Dustin Tran, James Urquhart Allingham, Florian Wenzel, Zelda Elaine Mariet, Basil Mustafa, Joan Puigcerver i Perez, Neil Matthew Tinmouth Houlsby, Ghassen Jerfel
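A sketch of an expert block with multiple routers over shared experts, per the abstract above. Assumed details: each router produces its own soft mixture over the experts, and the block averages the per-router mixtures, ensemble-style; the patent may combine them differently.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_router_block(x, routers, experts):
    """Expert block with several routers: each router forms its own mixture
    over the shared expert networks, and the mixtures are averaged."""
    outs = []
    for R in routers:
        gates = softmax(x @ R)                                    # (n, e)
        expert_out = np.stack([np.tanh(x @ E) for E in experts])  # (e, n, d)
        outs.append(np.einsum("ne,end->nd", gates, expert_out))
    return np.mean(outs, axis=0)

rng = np.random.default_rng(0)
n, d, n_experts, n_routers = 8, 16, 4, 3
x = rng.normal(size=(n, d))
routers = rng.normal(size=(n_routers, d, n_experts))
experts = rng.normal(scale=0.2, size=(n_experts, d, d))
y = multi_router_block(x, routers, experts)   # (8, 16) block output
```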
  • Publication number: 20220189612
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a downstream computer vision task. One of the methods includes pre-training an initial neural network that shares layers with the neural network to perform an initial computer vision task and then training the neural network on the downstream computer vision task.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 16, 2022
    Inventors: Xiaohua Zhai, Sylvain Gelly, Alexander Kolesnikov, Yin Ching Jessica Yung, Joan Puigcerver i Perez, Lucas Klaus Beyer, Neil Matthew Tinmouth Houlsby, Wen Yau Aaron Loh, Alan Prasana Karthikesalingam, Basil Mustafa, Jan Freyberg, Patricia Leigh MacWilliams, Vivek Natarajan
  • Publication number: 20220108171
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training neural networks using transfer learning.
    Type: Application
    Filed: September 28, 2021
    Publication date: April 7, 2022
    Inventors: Joan Puigcerver i Perez, Basil Mustafa, André Susano Pinto, Carlos Riquelme Ruiz, Neil Matthew Tinmouth Houlsby, Daniel M. Keysers