Patents by Inventor Mohammad Reza Hosseinzadeh Taher

Mohammad Reza Hosseinzadeh Taher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078666
    Abstract: A self-supervised machine learning method and system for learning visual representations in medical images. The system receives a plurality of medical images of similar anatomy, divides each of the plurality of medical images into its own sequence of non-overlapping patches, wherein a unique portion of each medical image appears in each patch in the sequence of non-overlapping patches. The system then randomizes the sequence of non-overlapping patches for each of the plurality of medical images, and randomly distorts the unique portion of each medical image that appears in each patch in the sequence of non-overlapping patches for each of the plurality of medical images. Thereafter, the system learns, via a vision transformer network, patch-wise high-level contextual features in the plurality of medical images, and simultaneously, learns, via the vision transformer network, fine-grained features embedded in the plurality of medical images.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 7, 2024
    Inventors: Jiaxuan PANG, Fatemeh Haghighi, DongAo Ma, Nahid Ul Islam, Mohammad Reza Hosseinzadeh Taher, Jianming Liang
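The pre-processing this abstract describes — dividing each image into a sequence of non-overlapping patches, randomizing that sequence, and randomly distorting each patch — can be sketched as follows. This is an illustrative sketch only: the function name is invented, square grayscale images are assumed, and additive Gaussian noise stands in for whatever distortion the patent covers; the vision-transformer pre-training itself is omitted.

```python
import numpy as np

def make_shuffled_distorted_patches(image, patch_size, rng):
    """Divide an image into non-overlapping patches so each patch holds a
    unique portion of the image, randomize the patch order, then randomly
    distort each patch's content (here: additive Gaussian noise)."""
    h, w = image.shape
    ph, pw = patch_size
    patches = [
        image[i:i + ph, j:j + pw]
        for i in range(0, h, ph)
        for j in range(0, w, pw)
    ]
    order = rng.permutation(len(patches))            # randomized sequence
    shuffled = [patches[k] for k in order]
    distorted = [p + rng.normal(0.0, 0.1, p.shape) for p in shuffled]
    return distorted, order

rng = np.random.default_rng(0)
image = rng.random((64, 64))                          # stand-in "medical image"
patches, order = make_shuffled_distorted_patches(image, (16, 16), rng)
```

A 64×64 image with 16×16 patches yields a sequence of 16 patches; the returned `order` records the permutation, which a model could be asked to recover as a pretext task.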
  • Publication number: 20230306723
    Abstract: Described herein are systems, methods, and apparatuses for implementing self-supervised domain-adaptive pre-training via a transformer for use with medical image classification in the context of medical image analysis.
    Type: Application
    Filed: March 24, 2023
    Publication date: September 28, 2023
    Inventors: DongAo Ma, Jiaxuan Pang, Nahid Ul Islam, Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang
  • Patent number: 11763952
    Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: September 19, 2023
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
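The three pseudo-tasks named in this abstract can be illustrated with a toy numeric sketch. Everything below is an assumption for illustration — the nearest-anchor pseudo-labeling, the softmax classifier, and the MSE restoration score are stand-ins, not the patented method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-discovery: group recurring anatomical patterns across images and use
# the group index as a pseudo-label (toy version: nearest of a few anchors).
anchors = rng.random((4, 16))                    # anchor patterns (flattened)
patch = anchors[2] + rng.normal(0.0, 0.01, 16)   # a patch resembling anchor 2
pseudo_label = int(np.argmin(((anchors - patch) ** 2).sum(axis=1)))

# Self-classification: predict the pseudo-label from a distorted view
# (toy softmax over negative distances to the anchors, cross-entropy loss).
distorted = patch + rng.normal(0.0, 0.05, 16)
logits = -((anchors - distorted) ** 2).sum(axis=1)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
l_cls = -np.log(probs[pseudo_label])

# Self-restoration: restore the original patch from the distorted view
# (toy version scores a candidate restoration with MSE).
l_res = np.mean((distorted - patch) ** 2)
```

In the patented framework these roles would be played by a trained deep network rather than distance computations; the sketch only shows how one input can supervise all three objectives at once.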
  • Publication number: 20230281805
    Abstract: A Discriminative, Restorative, and Adversarial (DiRA) learning framework for self-supervised medical image analysis is described. For instance, a pre-trained DiRA framework may be applied to diagnosis and detection of new medical images which form no part of the training data. The exemplary DiRA framework includes means for receiving training data having medical images therein and applying discriminative learning, restorative learning, and adversarial learning via the DiRA framework by cropping patches from the medical images; inputting the cropped patches to the discriminative and restorative learning branches to generate discriminative latent features and synthesized images from each; and applying adversarial learning by executing an adversarial discriminator to perform a min-max function for distinguishing the synthesized restorative image from real medical images. The pre-trained model of the DiRA framework is then provided as output for use in generating predictions of disease within medical images.
    Type: Application
    Filed: February 17, 2023
    Publication date: September 7, 2023
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Jianming Liang
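As a rough illustration of how the three DiRA branches combine, here is a toy sketch with stand-in linear "networks". Every function, weight shape, and loss form here is an assumption for illustration; the patent does not specify these forms:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                   # flattened crop size (toy)

# Stand-ins for the three DiRA branches.
W_enc = rng.normal(0.0, 0.1, (d, 8))     # "encoder" weights
W_dec = rng.normal(0.0, 0.1, (8, d))     # "decoder" weights
w_adv = rng.normal(0.0, 0.1, d)          # "adversarial discriminator" weights

def encode(x):                           # discriminative branch: latent features
    return np.tanh(x @ W_enc)

def restore(z):                          # restorative branch: synthesized image
    return z @ W_dec

def discriminate(x):                     # adversarial branch: real vs. fake
    return 1.0 / (1.0 + np.exp(-(x @ w_adv)))

crop = rng.random(d)                     # a patch cropped from a "medical image"
z1 = encode(crop)
z2 = encode(crop + rng.normal(0.0, 0.05, d))   # second augmented view
synth = restore(z1)                      # synthesized (restored) image

# Discriminative loss: pull latents of the two views together (cosine).
l_dis = 1.0 - (z1 @ z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
# Restorative loss: reconstruct the original crop.
l_res = np.mean((synth - crop) ** 2)
# Adversarial min-max term: distinguish the synthesized image from the real one.
p_real, p_fake = discriminate(crop), discriminate(synth)
l_adv = -(np.log(p_real) + np.log(1.0 - p_fake))

total = l_dis + l_res + l_adv            # a weighted sum in practice
```

The point of the sketch is the structure: one cropped input feeds a discriminative objective, a restorative objective, and an adversarial discriminator, and the framework trains against their combination.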
  • Publication number: 20230196642
    Abstract: A self-supervised learning framework for empowering instance discrimination in medical imaging using Context-Aware instance Discrimination (CAiD), in which the trained deep models are then utilized for the processing of medical imaging. An exemplary system receives a plurality of medical images; trains a self-supervised learning framework to increase instance discrimination for medical imaging using a Context-Aware instance Discrimination (CAiD) model with the received plurality of medical images; generates multiple cropped image samples and augments the samples using image distortion; applies instance discrimination learning to map each sample back to its corresponding original image; reconstructs the cropped image samples and applies an auxiliary context-aware learning loss operation; and generates as output a pre-trained CAiD model based on the application of both (i) the instance discrimination learning and (ii) the auxiliary context-aware learning loss operation.
    Type: Application
    Filed: December 20, 2022
    Publication date: June 22, 2023
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang
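The two objectives this abstract combines — mapping a distorted crop back to its source image, and reconstructing the crop's content — can be sketched on toy feature vectors. The nearest-neighbour matching and the MSE loss below are illustrative stand-ins, not the patented model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature vectors: 5 "original images" and one distorted crop of each.
originals = rng.random((5, 32))
crops = originals + rng.normal(0.0, 0.01, (5, 32))

def match_to_original(crop, originals):
    """Instance discrimination as nearest-neighbour matching: map a
    distorted sample back to its corresponding original image."""
    sims = (originals @ crop) / (
        np.linalg.norm(originals, axis=1) * np.linalg.norm(crop))
    return int(np.argmax(sims))

def context_loss(reconstruction, crop):
    """Auxiliary context-aware loss: penalize reconstruction error (MSE)."""
    return float(np.mean((reconstruction - crop) ** 2))

matches = [match_to_original(c, originals) for c in crops]
l_ctx = context_loss(crops[0], originals[0])    # toy reconstruction pair
```

In the described system both signals train one shared encoder, so the instance-level discrimination is regularized by the context-aware reconstruction term.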
  • Publication number: 20230116897
    Abstract: Described herein are means for implementing systematic benchmarking analysis to improve transfer learning for medical image analysis.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 13, 2023
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Ruibin Feng, Jianming Liang
  • Publication number: 20220309811
    Abstract: Described herein are means for the generation of Transferable Visual Word (TransVW) models through self-supervised learning in the absence of manual labeling, in which the trained TransVW models are then utilized for the processing of medical imaging.
    Type: Application
    Filed: February 19, 2022
    Publication date: September 29, 2022
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Patent number: 11436725
    Abstract: Not only is annotating medical images tedious and time consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second best individual AUC scores on 3 diseases.
    Type: Grant
    Filed: November 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang
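The unsupervised visual-word extraction that TransVW builds on (bags of visual words) is conventionally done by clustering patch features. The following is a minimal k-means sketch under stated assumptions — the function name, the naive initialization, and the toy data are all illustrative, not the patented procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_visual_words(patches, k, iters=10):
    """Cluster patch features into k 'visual words' with plain k-means,
    the unsupervised extraction step behind bags of visual words.
    Naive init: seed the words from evenly spaced patches."""
    centers = patches[np.linspace(0, len(patches) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign every patch to its nearest visual word.
        dists = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Move each word to the mean of its assigned patches.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated groups of toy patch features.
patches = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                     rng.normal(5.0, 0.1, (20, 8))])
words, labels = extract_visual_words(patches, k=2)
```

The cluster indices serve as free pseudo-labels: a network can then be pre-trained to predict each patch's visual word without any manual annotation, which is the annotation-efficiency argument the abstract makes.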
  • Publication number: 20210343014
    Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging.
    Type: Application
    Filed: April 30, 2021
    Publication date: November 4, 2021
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Publication number: 20210265043
    Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
    Type: Application
    Filed: February 19, 2021
    Publication date: August 26, 2021
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Publication number: 20210150710
    Abstract: Not only is annotating medical images tedious and time consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second best individual AUC scores on 3 diseases.
    Type: Application
    Filed: November 15, 2020
    Publication date: May 20, 2021
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang