Patents by Inventor Simon Kornblith

Simon Kornblith has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240169715
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network that is configured to process an input image to generate a network output for the input image. In one aspect, a method comprises, at each of a plurality of training steps: obtaining a plurality of training images for the training step; obtaining, for each of the plurality of training images, a respective target output; and selecting, from a plurality of image patch generation schemes, an image patch generation scheme for the training step, wherein, given an input image, each of the plurality of image patch generation schemes generates a different number of patches of the input image, and wherein each patch comprises a respective subset of the pixels of the input image.
    Type: Application
    Filed: November 22, 2023
    Publication date: May 23, 2024
    Inventors: Lucas Klaus Beyer, Pavel Izmailov, Simon Kornblith, Alexander Kolesnikov, Mathilde Caron, Xiaohua Zhai, Matthias Johannes Lorenz Minderer, Ibrahim Alabdulmohsin, Michael Tobias Tschannen, Filip Pavetic
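The patch-generation idea above can be sketched with a minimal example: splitting one input image into non-overlapping patches, where choosing a different patch size yields a different number of patches. This is an illustrative sketch only, not the patented method; the function name and shapes are assumptions.

```python
import numpy as np

def make_patches(image, patch_size):
    """Split a square image (H, W, C) into non-overlapping patches.

    Each patch is flattened to a (patch_size * patch_size * C,) vector,
    so different patch sizes yield different numbers of patches.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    patches = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size].reshape(-1))
    return np.stack(patches)

# Two hypothetical patch-generation schemes for a 32x32 RGB image:
# one produces 64 patches of 4x4 pixels, the other 16 patches of 8x8.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
scheme_a = make_patches(img, 4)   # 64 patches, 48 pixels each
scheme_b = make_patches(img, 8)   # 16 patches, 192 pixels each
```

A training loop in the spirit of the abstract would select one such scheme per training step, so the network sees varying numbers of patches across steps.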
  • Patent number: 11847571
    Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: December 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Ting Chen, Geoffrey Everest Hinton, Simon Kornblith, Mohammad Norouzi
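The "learnable nonlinear transformation between the representation and the contrastive loss" can be sketched as a small projection head feeding a normalized-temperature contrastive loss. This is a minimal sketch under assumed shapes, not the claimed implementation; the weight matrices stand in for learned parameters.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Contrastive loss over 2N projected views, where z[2k] and z[2k+1]
    are projections of two augmented views of the same image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    pos = np.arange(n) ^ 1                   # positive partner: (0,1), (2,3), ...
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 32))                 # encoder outputs: 4 images x 2 views
W1 = rng.normal(size=(32, 32)) * 0.1         # illustrative projection-head weights
W2 = rng.normal(size=(16, 32)) * 0.1
z = np.maximum(h @ W1.T, 0.0) @ W2.T         # nonlinear projection g(h)
loss = nt_xent_loss(z)
```

The loss is computed on the projected vectors `z` rather than the representations `h`, which is the role the nonlinear transformation plays in the abstract.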
  • Publication number: 20230260652
    Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
    Type: Application
    Filed: December 10, 2021
    Publication date: August 17, 2023
    Inventors: Shekoofeh Azizi, Wen Yau Aaron Loh, Zachary William Beaver, Ting Chen, Jonathan Paul Deaton, Jan Freyberg, Alan Prasana Karthikesalingam, Simon Kornblith, Basil Mustafa, Mohammad Norouzi, Vivek Natarajan, Fiona Keleher Ryan
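The multi-instance pairing idea can be sketched as follows: instead of always pairing two augmentations of one image, sample two different images that share an attribute (e.g., the same patient) as the positive pair. The function and identifiers below are illustrative, not the patented MICLe implementation.

```python
import random

def multi_instance_positive_pairs(image_ids, patient_ids, rng):
    """Form positive pairs from two *different* images of the same patient
    when available, falling back to self-pairing (two augmentations)
    otherwise. A sketch of the multi-instance pairing idea."""
    by_patient = {}
    for img, pat in zip(image_ids, patient_ids):
        by_patient.setdefault(pat, []).append(img)
    pairs = []
    for imgs in by_patient.values():
        if len(imgs) >= 2:
            a, b = rng.sample(imgs, 2)       # two distinct images, same patient
            pairs.append((a, b))
        else:
            pairs.append((imgs[0], imgs[0])) # single image: pair with itself
    return pairs

rng = random.Random(0)
pairs = multi_instance_positive_pairs(
    image_ids=["im0", "im1", "im2", "im3"],
    patient_ids=["p1", "p1", "p2", "p2"],
    rng=rng,
)
```

Such pairs then plug into a standard contrastive loss, making the positives more informative than pure augmentation pairs.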
  • Publication number: 20220374658
    Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning.
    Type: Application
    Filed: July 12, 2022
    Publication date: November 24, 2022
    Inventors: Ting Chen, Geoffrey Everest Hinton, Simon Kornblith, Mohammad Norouzi
  • Patent number: 11475277
    Abstract: Generally, the present disclosure is directed to novel machine-learned classification models that operate with hard attention to make discrete attention actions. The present disclosure also provides a self-supervised pre-training procedure that initializes the model to a state with more frequent rewards. Given only the ground truth classification labels for a set of training inputs (e.g., images), the proposed models are able to learn a policy over discrete attention locations that identifies certain portions of the input (e.g., patches of the images) that are relevant to the classification. In such fashion, the models are able to provide high accuracy classifications while also providing an explicit and interpretable basis for the decision.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: October 18, 2022
    Assignee: GOOGLE LLC
    Inventors: Gamaleldin Elsayed, Simon Kornblith, Quoc V. Le
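The hard-attention idea of selecting discrete patch locations can be sketched as scoring candidate patches and classifying from only the top-scoring ones. Here `score_fn` and `classify_fn` stand in for learned networks; this is an illustrative sketch, not the patented policy-learning procedure.

```python
import numpy as np

def hard_attention_classify(image, score_fn, classify_fn, num_glimpses=3, patch=8):
    """Pick a small set of discrete patch locations (highest scores) and
    classify using only those patches -- a sketch of hard attention."""
    h, w = image.shape
    locs, scores = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            locs.append((i, j))
            scores.append(score_fn(image[i:i + patch, j:j + patch]))
    order = np.argsort(scores)[::-1][:num_glimpses]   # discrete attention actions
    chosen = [locs[k] for k in order]
    glimpses = [image[i:i + patch, j:j + patch] for i, j in chosen]
    return classify_fn(glimpses), chosen

# Toy input: a bright 8x8 square on a dark background.
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
label, chosen = hard_attention_classify(
    img, score_fn=np.mean,
    classify_fn=lambda gs: int(np.mean(gs) > 0.5),
    num_glimpses=1,
)
```

Because only the chosen locations contribute to the decision, the attended patches themselves serve as an explicit, interpretable basis for the classification, as the abstract describes.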
  • Patent number: 11386302
    Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: July 12, 2022
    Assignee: GOOGLE LLC
    Inventors: Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Everest Hinton, Kevin Jordan Swersky
  • Patent number: 11354778
    Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: June 7, 2022
    Assignee: GOOGLE LLC
    Inventors: Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Everest Hinton
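The data-augmentation side of the framework can be sketched as generating two correlated views of one image, e.g., via random crop and horizontal flip. This is a minimal stand-in for SimCLR-style augmentation, not the full patented pipeline; crop size and transforms are assumptions.

```python
import numpy as np

def two_views(image, rng, crop=24):
    """Generate two correlated views of one image via random crop plus
    random horizontal flip (a minimal augmentation sketch)."""
    views = []
    h, w = image.shape[:2]
    for _ in range(2):
        i = rng.integers(0, h - crop + 1)
        j = rng.integers(0, w - crop + 1)
        v = image[i:i + crop, j:j + crop]
        if rng.random() < 0.5:
            v = v[:, ::-1]                   # horizontal flip
        views.append(v)
    return views

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
v1, v2 = two_views(img, rng)
```

In the full framework, each view is encoded, passed through the nonlinear projection, and the two projections are treated as a positive pair in the contrastive loss; no memory bank or specialized architecture is required.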
  • Publication number: 20210327029
    Abstract: Provided are systems and methods for contrastive learning of visual representations. In particular, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. In contrast to certain existing techniques, the contrastive self-supervised learning algorithms described herein do not require specialized architectures or a memory bank. Some example implementations of the proposed approaches can be referred to as a simple framework for contrastive learning of representations or “SimCLR.” Further example aspects are described below and provide the following benefits and insights.
    Type: Application
    Filed: April 13, 2020
    Publication date: October 21, 2021
    Inventors: Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Everest Hinton
  • Publication number: 20210319266
    Abstract: Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning.
    Type: Application
    Filed: September 11, 2020
    Publication date: October 14, 2021
    Inventors: Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Everest Hinton
  • Publication number: 20210248472
    Abstract: The present disclosure provides a neural network including one or more layers with relaxed spatial invariance. Each of the one or more layers can be configured to receive a respective layer input. Each of the one or more layers can be configured to convolve a plurality of different kernels against the respective layer input to generate a plurality of intermediate outputs, each of the plurality of intermediate outputs having a plurality of portions. Each of the one or more layers can be configured to apply, for each of the plurality of intermediate outputs, a respective plurality of weights respectively associated with the plurality of portions to generate a respective weighted output. Each of the one or more layers can be configured to generate a respective layer output based on the weighted outputs.
    Type: Application
    Filed: December 14, 2020
    Publication date: August 12, 2021
    Inventors: Gamaleldin Elsayed, Prajit Ramachandran, Jon Shlens, Simon Kornblith
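The layer described above can be sketched as: convolve several kernels over the input to get intermediate outputs, then combine them with weights that vary by spatial position, so different regions can favor different kernels. This is an illustrative dense-loop sketch under assumed shapes, not the patented layer.

```python
import numpy as np

def relaxed_invariance_layer(x, kernels, weights):
    """One layer with relaxed spatial invariance (sketch): K kernels produce
    K intermediate outputs, which are mixed by per-position weights of
    shape (K, H', W')."""
    outs = []
    for k in kernels:
        kh, kw = k.shape
        h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        outs.append(out)
    outs = np.stack(outs)                    # (K, H', W') intermediate outputs
    return np.sum(weights * outs, axis=0)    # weighted layer output

rng = np.random.default_rng(0)
x = rng.random((6, 6))
kernels = [rng.random((3, 3)) for _ in range(2)]
w = np.zeros((2, 4, 4))
w[0, :, :2] = 1.0   # left half of the output uses kernel 0
w[1, :, 2:] = 1.0   # right half uses kernel 1
y = relaxed_invariance_layer(x, kernels, w)
```

Setting the per-position weights equal everywhere recovers an ordinary spatially invariant convolution, which is why the invariance here is "relaxed" rather than removed.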
  • Publication number: 20200364540
    Abstract: Generally, the present disclosure is directed to novel machine-learned classification models that operate with hard attention to make discrete attention actions. The present disclosure also provides a self-supervised pre-training procedure that initializes the model to a state with more frequent rewards. Given only the ground truth classification labels for a set of training inputs (e.g., images), the proposed models are able to learn a policy over discrete attention locations that identifies certain portions of the input (e.g., patches of the images) that are relevant to the classification. In such fashion, the models are able to provide high accuracy classifications while also providing an explicit and interpretable basis for the decision.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 19, 2020
    Inventors: Gamaleldin Elsayed, Simon Kornblith, Quoc V. Le
  • Publication number: 20200104710
    Abstract: A method for training a target neural network on a target machine learning task is described.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Vijay Vasudevan, Ruoming Pang, Quoc V. Le, Daiyi Peng, Jiquan Ngiam, Simon Kornblith