Patents by Inventor Lily Peng
Lily Peng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 11983912
Abstract: A method for training a pattern recognizer to identify regions of interest in unstained images of tissue samples is provided. Pairs of images of tissue samples are obtained, each pair including an unstained image of a given tissue sample and a stained image of the given tissue sample. An annotation (e.g., drawing operation) is then performed on the stained image to indicate a region of interest. The annotation information, in the form of a mask surrounding the region of interest, is then applied to the corresponding unstained image. The unstained image and mask are then supplied to train a pattern recognizer. The trained pattern recognizer can then be used to identify regions of interest within novel unstained images.
Type: Grant
Filed: September 7, 2018
Date of Patent: May 14, 2024
Assignee: Verily Life Sciences LLC
Inventors: Martin Stumpe, Lily Peng
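The training-pair construction described in this abstract can be sketched in a few lines: an annotation drawn on the stained image is rasterized into a binary mask, and because the stained and unstained images of the same sample are pixel-aligned, the mask transfers directly to the unstained image. The function names, rectangular annotation, and array shapes below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def mask_from_annotation(shape, top_left, bottom_right):
    """Rasterize a rectangular region-of-interest annotation into a binary mask."""
    mask = np.zeros(shape, dtype=np.uint8)
    (r0, c0), (r1, c1) = top_left, bottom_right
    mask[r0:r1, c0:c1] = 1
    return mask

def make_training_pair(unstained, stained, annotation):
    """The annotation is drawn on the stained image; because the two images
    are assumed pixel-aligned, the same mask applies to the unstained image."""
    assert unstained.shape[:2] == stained.shape[:2]
    mask = mask_from_annotation(stained.shape[:2], *annotation)
    return unstained, mask  # (image, mask) pair fed to the pattern recognizer

rng = np.random.default_rng(0)
stained = rng.random((64, 64, 3))
unstained = rng.random((64, 64, 3))
img, mask = make_training_pair(unstained, stained, ((10, 10), (30, 40)))
```

In practice the annotation would be an arbitrary polygon and the recognizer a segmentation network; the key idea shown is that the label is authored on the stained image but attached to the unstained one.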
Publication number: 20230419694
Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
Type: Application
Filed: September 7, 2023
Publication date: December 28, 2023
Applicant: Verily Life Sciences LLC
Inventors: Martin Stumpe, Philip Nelson, Lily Peng
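The training setup described above, fitting a predictor on precisely aligned (H&E, special-stain) image pairs, can be sketched with a deliberately tiny stand-in model: a per-pixel linear color transform fit by least squares. The real predictor is a deep network; the transform, function names, and synthetic data here are illustrative assumptions only.

```python
import numpy as np

def fit_color_transform(he_images, special_images):
    """Least-squares fit of a 3x3 color matrix plus bias from aligned image pairs."""
    X = np.concatenate([im.reshape(-1, 3) for im in he_images])
    Y = np.concatenate([im.reshape(-1, 3) for im in special_images])
    X1 = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    W, *_ = np.linalg.lstsq(X1, Y, rcond=None)  # (4, 3) transform
    return W

def predict_special_stain(he_image, W):
    """Predict the special-stain appearance of an H&E image, pixel by pixel."""
    X1 = np.hstack([he_image.reshape(-1, 3), np.ones((he_image.size // 3, 1))])
    return (X1 @ W).reshape(he_image.shape)

rng = np.random.default_rng(0)
he = [rng.random((32, 32, 3)) for _ in range(4)]          # toy aligned H&E images
true_W = rng.random((4, 3))                               # hidden "stain mapping"
special = [predict_special_stain(im, true_W) for im in he]  # matching special-stain images
W = fit_color_transform(he, special)
pred = predict_special_stain(he[0], W)
```

The point of the sketch is the data flow, not the model class: aligned pairs supply the supervision, and at inference only the H&E image is needed to produce the predicted special-stain image.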
Patent number: 11783603
Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
Type: Grant
Filed: March 7, 2018
Date of Patent: October 10, 2023
Assignee: Verily Life Sciences LLC
Inventors: Martin Stumpe, Philip Nelson, Lily Peng
Patent number: 11783604
Abstract: A method for generating a ground truth mask for a microscope slide having a tissue specimen placed thereon includes a step of staining the tissue specimen with hematoxylin and eosin (H&E) staining agents. A first magnified image of the H&E stained tissue specimen is obtained, e.g., with a whole slide scanner. The H&E staining agents are then washed from the tissue specimen. A second, different stain is applied to the tissue specimen, e.g., a special stain such as an IHC stain. A second magnified image of the tissue specimen stained with the second, different stain is obtained. The first and second magnified images are then registered to each other. An annotation (e.g., drawing operation) is then performed on either the first or the second magnified image so as to form a ground truth mask, the ground truth mask taking the form of a closed polygon region enclosing tumor cells present in either the first or second magnified image.
Type: Grant
Filed: January 11, 2018
Date of Patent: October 10, 2023
Assignee: Google LLC
Inventors: Lily Peng, Martin Stumpe
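The registration step described above, aligning the H&E image with the re-stained image so an annotation on one applies to the other, can be sketched with a brute-force translation search: find the integer shift that best matches the two images. Real whole-slide registration is far more elaborate; this toy version and its names are illustrative assumptions.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Return the (dy, dx) shift of `moving` that minimizes mean squared
    difference against `fixed`, searched over small integer translations."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((fixed - shifted) ** 2)
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
he_image = rng.random((40, 40))                        # first magnified image
ihc_image = np.roll(he_image, (-3, 2), axis=(0, 1))    # re-stained image, shifted
dy, dx = register_translation(he_image, ihc_image)     # recovered alignment
```

Once the shift is known, a polygon drawn on one image can be translated by the same offset to serve as the ground truth mask for the other.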
Patent number: 11379516
Abstract: A system for searching for similar medical images includes a reference library in the form of a multitude of medical images, at least some of which are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. A computer system is configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more medical images in the reference library that are similar to the input image. The reference library is represented as an embedding of each of the medical images projected in a feature space having a plurality of axes, wherein the embedding is characterized by two aspects of a similarity ranking: (1) visual similarity, and (2) semantic similarity, such that neighboring images in the feature space are visually similar and semantic information is represented by the axes of the feature space.
Type: Grant
Filed: March 29, 2018
Date of Patent: July 5, 2022
Assignee: Google LLC
Inventors: Lily Peng, Martin Stumpe, Daniel Smilkov, Jason Hipp
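The retrieval mechanism described above, each library image stored as an embedding in a feature space and a query matched to its nearest neighbors, can be sketched as a cosine-similarity lookup. The choice of cosine similarity and the two-dimensional toy embeddings are illustrative assumptions; the patent's embeddings would come from a trained model.

```python
import numpy as np

def nearest_images(query, library, k=2):
    """Return indices of the k library embeddings most similar to the query."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = lib @ q                 # cosine similarity to every library image
    return np.argsort(-sims)[:k]   # indices sorted by descending similarity

library = np.array([[1.0, 0.0],    # embeddings of 3 library images
                    [0.9, 0.1],
                    [0.0, 1.0]])
hits = nearest_images(np.array([1.0, 0.05]), library)
```

The returned indices would be used to fetch the images and their clinical metadata; in the described system, the axes of the space also carry semantic meaning, so neighbors are similar both visually and semantically.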
Publication number: 20220148169
Abstract: One example method for AI prediction of prostate cancer outcomes involves receiving an image of prostate tissue; assigning Gleason pattern values to one or more regions within the image using an artificial intelligence Gleason grading model, the model trained to identify Gleason patterns on a patch-by-patch basis in a prostate tissue image; determining relative areal proportions of the Gleason patterns within the image; assigning at least one of a risk score or risk group value to the image based on the determined relative areal proportions; and outputting at least one of the risk score or the risk group value.
Type: Application
Filed: November 8, 2021
Publication date: May 12, 2022
Applicant: Verily Life Sciences LLC
Inventors: Craig Mermel, Yun Liu, Naren Manoj, Matthew Symonds, Martin Stumpe, Lily Peng, Kunal Nagpal, Ellery Wulczyn, Davis Foote, David F. Steiner, Po-Hsuan Cameron Chen
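The post-grading step described above, turning per-patch Gleason pattern assignments into relative areal proportions and a score, can be sketched as follows. The proportion computation follows the abstract; the specific weighted score at the end is an illustrative assumption, not the publication's risk model.

```python
import numpy as np

def areal_proportions(patch_patterns):
    """patch_patterns: 2D grid of per-patch Gleason pattern values (0 = benign).
    Returns each pattern's share of the total tumor area."""
    tumor = patch_patterns[patch_patterns > 0]
    patterns, counts = np.unique(tumor, return_counts=True)
    return dict(zip(patterns.tolist(), (counts / counts.sum()).tolist()))

grid = np.array([[0, 3, 3, 4],
                 [0, 3, 4, 4],
                 [0, 0, 3, 5]])          # toy per-patch grading output
props = areal_proportions(grid)
# Toy continuous score: area-weighted average pattern (hypothetical mapping).
risk_score = sum(p * frac for p, frac in props.items())
```

In the described method the proportions (or a score derived from them) would then be bucketed into a risk group value for output.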
Method and system for assisting pathologist identification of tumor cells in magnified tissue images
Patent number: 11170897
Abstract: A method, system and machine for assisting a pathologist in identifying the presence of tumor cells in lymph node tissue is disclosed. The digital image of lymph node tissue at a first magnification (e.g., 40×) is subdivided into a multitude of rectangular “patches.” A likelihood of malignancy score is then determined for each of the patches. The score is obtained by analyzing pixel data from the patch (e.g., pixel data centered on and including the patch) using a computer system programmed as an ensemble of deep neural network pattern recognizers, each operating on different magnification levels of the patch. A representation or “heatmap” of the slide is generated.
Type: Grant
Filed: February 23, 2017
Date of Patent: November 9, 2021
Assignee: Google LLC
Inventors: Martin Christian Stumpe, Lily Peng, Yun Liu, Krishna K. Gadepalli, Timo Kohlberger
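The pipeline described above, subdividing the slide image into rectangular patches, scoring each patch, and assembling the scores into a "heatmap", can be sketched with a trivial scorer standing in for the deep-network ensemble. The mean-intensity scorer and all names here are illustrative assumptions.

```python
import numpy as np

def heatmap(image, patch=8, score=lambda p: float(p.mean())):
    """Score every non-overlapping patch x patch tile of a 2D image and
    return the grid of scores (one cell per patch)."""
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            out[r, c] = score(tile)  # stand-in for the malignancy-likelihood model
    return out

slide = np.zeros((16, 16))
slide[8:, :8] = 1.0                # a bright (suspicious) region in one corner
hm = heatmap(slide, patch=8)
```

In the described system the scorer is an ensemble of deep networks, each seeing the patch at a different magnification; the resulting heatmap overlays the slide to direct the pathologist's attention.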
Publication number: 20210064845
Abstract: A method for training a pattern recognizer to identify regions of interest in unstained images of tissue samples is provided. Pairs of images of tissue samples are obtained, each pair including an unstained image of a given tissue sample and a stained image of the given tissue sample. An annotation (e.g., drawing operation) is then performed on the stained image to indicate a region of interest. The annotation information, in the form of a mask surrounding the region of interest, is then applied to the corresponding unstained image. The unstained image and mask are then supplied to train a pattern recognizer. The trained pattern recognizer can then be used to identify regions of interest within novel unstained images.
Type: Application
Filed: September 7, 2018
Publication date: March 4, 2021
Inventors: Martin Stumpe, Lily Peng
Publication number: 20210019342
Abstract: A system for searching for similar medical images includes a reference library in the form of a multitude of medical images, at least some of which are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. A computer system is configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more medical images in the reference library that are similar to the input image. The reference library is represented as an embedding of each of the medical images projected in a feature space having a plurality of axes, wherein the embedding is characterized by two aspects of a similarity ranking: (1) visual similarity, and (2) semantic similarity, such that neighboring images in the feature space are visually similar and semantic information is represented by the axes of the feature space.
Type: Application
Filed: March 29, 2018
Publication date: January 21, 2021
Inventors: Lily Peng, Martin Stumpe, Daniel Smilkov, Jason Hipp
Publication number: 20200394825
Abstract: A machine learning predictor model is trained to generate a prediction of the appearance of a tissue sample stained with a special stain such as an IHC stain from an input image that is either unstained or stained with H&E. Training data takes the form of thousands of pairs of precisely aligned images, one of which is an image of a tissue specimen stained with H&E or unstained, and the other of which is an image of the tissue specimen stained with the special stain. The model can be trained to predict special stain images for a multitude of different tissue types and special stain types. In use, an input image, e.g., an H&E image of a given tissue specimen at a particular magnification level, is provided to the model, and the model generates a prediction of the appearance of the tissue specimen as if it were stained with the special stain. The predicted image is provided to a user and displayed, e.g., on a pathology workstation.
Type: Application
Filed: March 7, 2018
Publication date: December 17, 2020
Inventors: Martin Stumpe, Philip Nelson, Lily Peng
Publication number: 20200372235
Abstract: A method for generating a ground truth mask for a microscope slide having a tissue specimen placed thereon includes a step of staining the tissue specimen with hematoxylin and eosin (H&E) staining agents. A first magnified image of the H&E stained tissue specimen is obtained, e.g., with a whole slide scanner. The H&E staining agents are then washed from the tissue specimen. A second, different stain is applied to the tissue specimen, e.g., a special stain such as an IHC stain. A second magnified image of the tissue specimen stained with the second, different stain is obtained. The first and second magnified images are then registered to each other. An annotation (e.g., drawing operation) is then performed on either the first or the second magnified image so as to form a ground truth mask, the ground truth mask taking the form of a closed polygon region enclosing tumor cells present in either the first or second magnified image.
Type: Application
Filed: January 11, 2018
Publication date: November 26, 2020
Inventors: Lily Peng, Martin Stumpe
Method and System for Assisting Pathologist Identification of Tumor Cells in Magnified Tissue Images
Publication number: 20200066407
Abstract: A method, system and machine for assisting a pathologist in identifying the presence of tumor cells in lymph node tissue is disclosed. The digital image of lymph node tissue at a first magnification (e.g., 40×) is subdivided into a multitude of rectangular “patches.” A likelihood of malignancy score is then determined for each of the patches. The score is obtained by analyzing pixel data from the patch (e.g., pixel data centered on and including the patch) using a computer system programmed as an ensemble of deep neural network pattern recognizers, each operating on different magnification levels of the patch. A representation or “heatmap” of the slide is generated.
Type: Application
Filed: February 23, 2017
Publication date: February 27, 2020
Inventors: Martin Christian Stumpe, Lily Peng, Yun Liu, Krishna K. Gadepalli, Timo Kohlberger