Patents by Inventor Martin Stumpe

Martin Stumpe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11994664
    Abstract: An augmented reality (AR) subsystem including one or more machine learning models automatically overlays an augmented reality image, e.g., a border or outline, that identifies cells of potential interest in the field of view of the specimen as seen through the eyepiece of an LCM microscope. The operator does not have to manually identify the cells of interest for subsequent LCM, e.g., on a workstation monitor, as in the prior art. The operator is provided with a switch, operator interface tool, or other mechanism to select the identification of the cells, that is, to indicate approval of the identification of the cells, while viewing the specimen through the eyepiece. Activation of the switch or other mechanism invokes laser excising and capture of the cells of interest via a known and conventional LCM subsystem.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: May 28, 2024
    Assignee: Google LLC
    Inventors: Jason Hipp, Martin Stumpe
  • Patent number: 11983912
    Abstract: A method for training a pattern recognizer to identify regions of interest in unstained images of tissue samples is provided. Pairs of images of tissue samples are obtained, each pair including an unstained image of a given tissue sample and a stained image of the given tissue sample. An annotation (e.g., drawing operation) is then performed on the stained image to indicate a region of interest. The annotation information, in the form of a mask surrounding the region of interest, is then applied to the corresponding unstained image. The unstained image and mask are then supplied to train a pattern recognizer. The trained pattern recognizer can then be used to identify regions of interest within novel unstained images.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: May 14, 2024
    Assignee: VERILY LIFE SCIENCES LLC
    Inventors: Martin Stumpe, Lily Peng
  • Publication number: 20230419694
    Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
    Type: Application
    Filed: September 7, 2023
    Publication date: December 28, 2023
    Applicant: Verily Life Sciences LLC
    Inventors: Martin Stumpe, Philip Nelson, Lily Peng
  • Patent number: 11783603
    Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: October 10, 2023
    Assignee: VERILY LIFE SCIENCES LLC
    Inventors: Martin Stumpe, Philip Nelson, Lily Peng
  • Patent number: 11783604
    Abstract: A method for generating a ground truth mask for a microscope slide having a tissue specimen placed thereon includes a step of staining the tissue specimen with hematoxylin and eosin (H&E) staining agents. A first magnified image of the H&E stained tissue specimen is obtained, e.g., with a whole slide scanner. The H&E staining agents are then washed from the tissue specimen. A second, different stain is applied to the tissue specimen, e.g., a special stain such as an IHC stain. A second magnified image of the tissue specimen stained with the second, different stain is obtained. The first and second magnified images are then registered to each other. An annotation (e.g., drawing operation) is then performed on either the first or the second magnified image so as to form a ground truth mask in the form of a closed polygon region enclosing tumor cells present in either the first or second magnified image.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: October 10, 2023
    Assignee: Google LLC
    Inventors: Lily Peng, Martin Stumpe
  • Publication number: 20230207134
    Abstract: One example method includes obtaining one or more histopathology images of a sample from a cancer patient; selecting a plurality of tissue image patches from the one or more histopathology images; determining, by a deep learning system comprising a plurality of trained machine learning (ML) models, a plurality of image features for the plurality of tissue image patches, wherein each tissue image patch is analyzed by one of the trained ML models; determining, by the deep learning system, probabilities of patient survival based on the determined plurality of image features; and generating, by the deep learning system, a prediction of patient survival based on the determined probabilities.
    Type: Application
    Filed: July 7, 2021
    Publication date: June 29, 2023
    Applicant: Verily Life Sciences LLC
    Inventors: Narayan HEGDE, Yun LIU, Craig MERMEL, David F. STEINER, Ellery WULCZYN, Po-Hsuan Cameron CHEN, Martin STUMPE, Zhaoyang XU, Apaar SADHWANI
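The two-stage prediction described in the abstract above, per-patch features pooled and converted into a survival probability, can be sketched in miniature. This is an illustrative toy, not the patented system: the feature values, the mean pooling, and the logistic output head (with invented weights) are all assumptions for demonstration.

```python
# A toy sketch of patch-level features pooled into a survival probability.
# All numbers and the logistic head are illustrative stand-ins.
import math

def predict_survival(patch_features, weights, bias):
    """Average the per-patch feature vectors, then apply a logistic output."""
    n = len(patch_features)
    pooled = [sum(f[i] for f in patch_features) / n
              for i in range(len(patch_features[0]))]
    logit = bias + sum(w * x for w, x in zip(weights, pooled))
    return 1.0 / (1.0 + math.exp(-logit))  # probability of survival

features = [[0.2, 0.7], [0.4, 0.5], [0.3, 0.6]]  # one feature vector per patch
prob = predict_survival(features, weights=[1.5, -2.0], bias=0.5)
```

In the patent's formulation, each patch is analyzed by one of several trained ML models before aggregation; the single hand-coded head here merely stands in for that learned final stage.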
  • Patent number: 11657487
    Abstract: A method is described for generating a prediction of a disease classification error for a magnified, digital microscope slide image of a tissue sample. The image is composed of a multitude of patches or tiles of pixel image data. An out-of-focus degree per patch is computed using a machine learning out-of-focus classifier. Data representing expected disease classifier error statistics of a machine learning disease classifier for a plurality of out-of-focus degrees is retrieved. A mapping of the expected disease classifier error statistics to each of the patches of the digital microscope slide image based on the computed out-of-focus degree per patch is computed, thereby generating a disease classifier error prediction for each of the patches. The disease classifier error predictions thus generated are aggregated over all of the patches.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: May 23, 2023
    Assignee: Google LLC
    Inventors: Martin Stumpe, Timo Kohlberger
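The error-prediction pipeline in the abstract above (per-patch out-of-focus degree, lookup of expected error statistics, aggregation over patches) can be sketched as follows. The OOF "classifier" and the error-statistics table are invented placeholders; the patent does not publish concrete values, and mean aggregation is just one plausible choice.

```python
# Hypothetical sketch: map each patch's out-of-focus (OOF) degree to an
# expected disease-classifier error, then aggregate over all patches.

def predict_disease_classifier_error(patches, oof_classifier, error_stats):
    """Per-patch OOF degree -> expected error, aggregated by mean."""
    per_patch_errors = [error_stats[oof_classifier(p)] for p in patches]
    return sum(per_patch_errors) / len(per_patch_errors)

# Stand-in "classifier" returning a precomputed OOF degree per patch,
# and error statistics that grow with defocus (invented numbers).
oof_degrees = {"p1": 0, "p2": 2, "p3": 4}
error_stats = {0: 0.02, 1: 0.05, 2: 0.10, 3: 0.20, 4: 0.35}
result = predict_disease_classifier_error(
    ["p1", "p2", "p3"], lambda p: oof_degrees[p], error_stats)
```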
  • Patent number: 11436741
    Abstract: A method for aligning two different magnified images of the same subject includes a first step of precomputing rigid transformations for the two different images globally (i.e., for all or approximately all regions of the images). Pairs of corresponding features in the two different magnified images are identified and transformation values are assigned to each of the pairs of corresponding features. In a second step, while images are being generated for display to a user, a locally optimal rigid transformation for the current field of view is computed. In a third step the images are aligned as per the locally optimal rigid transformation. Non-zero weight is given to transformation values for pairs of features that are outside the current field of view. Typically, the second and third steps are repeated many times as the images are generated for display and the user either changes magnification or pans/navigates to a different location in the images.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: September 6, 2022
    Assignee: Google LLC
    Inventors: Martin Stumpe, Varun Godbole
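The second step above, computing a locally optimal transformation from precomputed per-feature-pair values while still giving non-zero weight to pairs outside the field of view, can be illustrated with a minimal sketch. It assumes translation-only transformations and Gaussian distance weighting, both simplifications not specified by the patent.

```python
# Minimal sketch: weighted average of precomputed per-pair translation
# offsets, with weights that decay with distance from the field-of-view
# center but never reach zero (so out-of-view pairs still contribute).
import math

def local_transform(feature_pairs, fov_center, sigma=100.0):
    """feature_pairs: list of ((x, y), (dx, dy)) tuples, where (x, y) is a
    feature location and (dx, dy) its precomputed translation offset."""
    wx = wy = wsum = 0.0
    for (x, y), (dx, dy) in feature_pairs:
        dist = math.hypot(x - fov_center[0], y - fov_center[1])
        w = math.exp(-(dist ** 2) / (2 * sigma ** 2)) + 1e-6  # never zero
        wx += w * dx
        wy += w * dy
        wsum += w
    return (wx / wsum, wy / wsum)

# A nearby pair dominates; a distant pair contributes a small non-zero weight.
pairs = [((0, 0), (5.0, 2.0)), ((1000, 1000), (9.0, 6.0))]
dx, dy = local_transform(pairs, fov_center=(0, 0))
```

Recomputing this weighted transform on every pan or zoom matches the patent's repeat of the second and third steps as the displayed region changes.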
  • Publication number: 20220261668
    Abstract: An artificial intelligence engine for directed hypothesis generation and ranking uses multiple heterogeneous knowledge graphs integrating disease-specific multi-omic data specific to a patient or cohort of patients. The engine also uses a knowledge graph representation of ‘what the world knows’ in the relevant bio-medical subspace. The engine applies a hypothesis generation module, a semantic search analysis component to allow fast acquiring and construction of cohorts, as well as aggregating, summarizing, visualizing and returning ranked multi-omic alterations in terms of clinical actionability and degree of surprise for individual samples and cohorts. The engine also applies a moderator module that ranks and filters hypotheses, where the most promising hypotheses can be presented to domain experts (e.g., physicians, oncologists, pathologists, radiologists and researchers) for feedback.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 18, 2022
    Inventors: Martin Stumpe, Alena Harley
  • Patent number: 11379516
    Abstract: A system for searching for similar medical images includes a reference library in the form of a multitude of medical images, at least some of which are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. A computer system is configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more medical images in the reference library that are similar to the input image. The reference library is represented as an embedding of each of the medical images projected in a feature space having a plurality of axes, wherein the embedding is characterized by two aspects of a similarity ranking: (1) visual similarity, and (2) semantic similarity such that neighboring images in the feature space are visually similar and semantic information is represented by the axes of the feature space.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: July 5, 2022
    Assignee: Google LLC
    Inventors: Lily Peng, Martin Stumpe, Daniel Smilkov, Jason Hipp
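The lookup step implied by the abstract above, matching a query embedding against reference-image embeddings in the feature space, amounts to a nearest-neighbor search. The sketch below is hypothetical: the embedding values are invented, and plain Euclidean distance stands in for whatever similarity measure the trained system uses.

```python
# Hypothetical illustration: reference images are points in a feature
# space; a query embedding is matched to its k nearest neighbors.
import math

def nearest_images(query, library, k=2):
    """Return names of the k library entries closest to the query embedding."""
    def dist(entry):
        return math.dist(query, entry[1])
    return [name for name, _ in sorted(library, key=dist)[:k]]

# Invented 2-D embeddings; real systems use much higher-dimensional spaces,
# with clinical metadata accompanying each reference image.
library = [
    ("slide_a", (0.1, 0.9)),
    ("slide_b", (0.8, 0.2)),
    ("slide_c", (0.15, 0.85)),
]
matches = nearest_images((0.12, 0.88), library)
```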
  • Publication number: 20220148169
    Abstract: One example method for AI prediction of prostate cancer outcomes involves receiving an image of prostate tissue; assigning Gleason pattern values to one or more regions within the image using an artificial intelligence Gleason grading model, the model trained to identify Gleason patterns on a patch-by-patch basis in a prostate tissue image; determining relative areal proportions of the Gleason patterns within the image; assigning at least one of a risk score or risk group value to the image based on the determined relative areal proportions; and outputting at least one of the risk score or the risk group value.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Applicant: Verily Life Sciences LLC
    Inventors: Craig Mermel, Yun Liu, Naren Manoj, Matthew Symonds, Martin Stumpe, Lily Peng, Kunal Nagpal, Ellery Wulczyn, Davis Foote, David F. Steiner, Po-Hsuan Cameron Chen
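The risk-scoring step described above, relative areal proportions of Gleason patterns mapped to a score, can be sketched as a weighted sum over pattern proportions. The per-patch grades and pattern weights below are illustrative assumptions, not values from the patent or the underlying grading model.

```python
# Sketch: turn per-patch Gleason pattern assignments into relative areal
# proportions, then into a single risk score via (invented) weights.

def risk_score(patch_patterns, weights):
    """Weighted sum of the relative areal proportion of each pattern."""
    total = len(patch_patterns)
    proportions = {p: patch_patterns.count(p) / total
                   for p in set(patch_patterns)}
    return sum(weights[p] * frac for p, frac in proportions.items())

# Per-patch pattern assignments from a hypothetical grading model; each
# patch is treated as one unit of area.
patches = [3, 3, 3, 4, 4, 5]
weights = {3: 1.0, 4: 2.0, 5: 3.0}  # higher patterns contribute more risk
score = risk_score(patches, weights)
```

Thresholding such a continuous score into bins would yield the risk group value the abstract also mentions.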
  • Publication number: 20220027678
    Abstract: A method is described for generating a prediction of a disease classification error for a magnified, digital microscope slide image of a tissue sample. The image is composed of a multitude of patches or tiles of pixel image data. An out-of-focus degree per patch is computed using a machine learning out-of-focus classifier. Data representing expected disease classifier error statistics of a machine learning disease classifier for a plurality of out-of-focus degrees is retrieved. A mapping of the expected disease classifier error statistics to each of the patches of the digital microscope slide image based on the computed out-of-focus degree per patch is computed, thereby generating a disease classifier error prediction for each of the patches. The disease classifier error predictions thus generated are aggregated over all of the patches.
    Type: Application
    Filed: October 4, 2021
    Publication date: January 27, 2022
    Inventors: Martin Stumpe, Timo Kohlberger
  • Publication number: 20220019069
    Abstract: An augmented reality (AR) subsystem including one or more machine learning models automatically overlays an augmented reality image, e.g., a border or outline, that identifies cells of potential interest in the field of view of the specimen as seen through the eyepiece of an LCM microscope. The operator does not have to manually identify the cells of interest for subsequent LCM, e.g., on a workstation monitor, as in the prior art. The operator is provided with a switch, operator interface tool, or other mechanism to select the identification of the cells, that is, to indicate approval of the identification of the cells, while viewing the specimen through the eyepiece. Activation of the switch or other mechanism invokes laser excising and capture of the cells of interest via a known and conventional LCM subsystem.
    Type: Application
    Filed: October 31, 2019
    Publication date: January 20, 2022
    Inventors: Jason Hipp, Martin Stumpe
  • Publication number: 20210358140
    Abstract: A method for aligning two different magnified images of the same subject includes a first step of precomputing rigid transformations for the two different images globally (i.e., for all or approximately all regions of the images). Pairs of corresponding features in the two different magnified images are identified and transformation values are assigned to each of the pairs of corresponding features. In a second step, while images are being generated for display to a user, a locally optimal rigid transformation for the current field of view is computed. In a third step the images are aligned as per the locally optimal rigid transformation. Non-zero weight is given to transformation values for pairs of features that are outside the current field of view. Typically, the second and third steps are repeated many times as the images are generated for display and the user either changes magnification or pans/navigates to a different location in the images.
    Type: Application
    Filed: March 6, 2018
    Publication date: November 18, 2021
    Inventors: Martin Stumpe, Varun Godbole
  • Patent number: 11164048
    Abstract: A method is described for generating a prediction of a disease classification error for a magnified, digital microscope slide image of a tissue sample. The image is composed of a multitude of patches or tiles of pixel image data. An out-of-focus degree per patch is computed using a machine learning out-of-focus classifier. Data representing expected disease classifier error statistics of a machine learning disease classifier for a plurality of out-of-focus degrees is retrieved. A mapping of the expected disease classifier error statistics to each of the patches of the digital microscope slide image based on the computed out-of-focus degree per patch is computed, thereby generating a disease classifier error prediction for each of the patches. The disease classifier error predictions thus generated are aggregated over all of the patches.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: November 2, 2021
    Assignee: Google LLC
    Inventors: Martin Stumpe, Timo Kohlberger
  • Publication number: 20210064845
    Abstract: A method for training a pattern recognizer to identify regions of interest in unstained images of tissue samples is provided. Pairs of images of tissue samples are obtained, each pair including an unstained image of a given tissue sample and a stained image of the given tissue sample. An annotation (e.g., drawing operation) is then performed on the stained image to indicate a region of interest. The annotation information, in the form of a mask surrounding the region of interest, is then applied to the corresponding unstained image. The unstained image and mask are then supplied to train a pattern recognizer. The trained pattern recognizer can then be used to identify regions of interest within novel unstained images.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 4, 2021
    Inventors: Martin STUMPE, Lily PENG
  • Publication number: 20210018742
    Abstract: A microscope of the type used by a pathologist to view slides containing biological samples such as tissue or blood is provided with the projection of enhancements to the field of view, such as a heatmap, border, or annotations, or quantitative biomarker data, substantially in real time as the slide is moved to new locations or changes in magnification or focus occur. The enhancements assist the pathologist in characterizing or classifying the sample, such as being positive for the presence of cancer cells or pathogens.
    Type: Application
    Filed: March 4, 2019
    Publication date: January 21, 2021
    Inventor: Martin STUMPE
  • Publication number: 20210019342
    Abstract: A system for searching for similar medical images includes a reference library in the form of a multitude of medical images, at least some of which are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. A computer system is configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more medical images in the reference library that are similar to the input image. The reference library is represented as an embedding of each of the medical images projected in a feature space having a plurality of axes, wherein the embedding is characterized by two aspects of a similarity ranking: (1) visual similarity, and (2) semantic similarity such that neighboring images in the feature space are visually similar and semantic information is represented by the axes of the feature space.
    Type: Application
    Filed: March 29, 2018
    Publication date: January 21, 2021
    Inventors: Lily PENG, Martin STUMPE, Daniel SMILKOV, Jason HIPP
  • Publication number: 20200394825
    Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
    Type: Application
    Filed: March 7, 2018
    Publication date: December 17, 2020
    Inventors: Martin STUMPE, Philip NELSON, Lily PENG
  • Publication number: 20200372235
    Abstract: A method for generating a ground truth mask for a microscope slide having a tissue specimen placed thereon includes a step of staining the tissue specimen with hematoxylin and eosin (H&E) staining agents. A first magnified image of the H&E stained tissue specimen is obtained, e.g., with a whole slide scanner. The H&E staining agents are then washed from the tissue specimen. A second, different stain is applied to the tissue specimen, e.g., a special stain such as an IHC stain. A second magnified image of the tissue specimen stained with the second, different stain is obtained. The first and second magnified images are then registered to each other. An annotation (e.g., drawing operation) is then performed on either the first or the second magnified image so as to form a ground truth mask in the form of a closed polygon region enclosing tumor cells present in either the first or second magnified image.
    Type: Application
    Filed: January 11, 2018
    Publication date: November 26, 2020
    Inventors: Lily PENG, Martin STUMPE