Patents by Inventor Zongwei Zhou

Zongwei Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12277687
    Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: April 15, 2025
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Publication number: 20250111508
    Abstract: A system comprising a memory to store instructions and a processor to execute the instructions stored in the memory to receive a plurality of Computed Tomography Pulmonary Angiography (CTPA) exams as input image data, each exam in the plurality of exams comprising a varying plurality of individual images (slices). The system annotates each of the slices with either a pulmonary embolism (PE) present label or a PE absent label, annotates each of the exams with one of a plurality of labels each indicating a different PE state and location, performs a slice-level PE classification to determine the presence or absence of PE for each of the slices, and performs and outputs, via an Embedding-based Vision Transformer (E-ViT), an exam-level diagnosis using the slice-level classifications.
    Type: Application
    Filed: October 2, 2024
    Publication date: April 3, 2025
    Inventors: Nahid Ul ISLAM, Zongwei ZHOU, Shiv GEHLOT, Jianming LIANG
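
A rough sense of the exam-level aggregation described above can be given in code: per-slice embeddings from any 2D backbone are pooled by a transformer encoder into a single exam-level prediction. This is a minimal, hypothetical PyTorch sketch only; the class name, dimensions, and use of a learnable classification token are assumptions, not the patented E-ViT implementation.

```python
import torch
import torch.nn as nn

class ExamLevelAggregator(nn.Module):
    """Illustrative exam-level head: a transformer over per-slice embeddings."""
    def __init__(self, embed_dim=256, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.exam_head = nn.Linear(embed_dim, 1)   # exam-level PE present/absent logit

    def forward(self, slice_embeddings):
        # slice_embeddings: (batch, num_slices, embed_dim) from a slice-level backbone
        b = slice_embeddings.size(0)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), slice_embeddings], dim=1)
        encoded = self.encoder(tokens)
        return self.exam_head(encoded[:, 0])       # read out the [CLS] position

# Example: one exam with 120 slices, each embedded to 256-d by a slice-level CNN
exam_logit = ExamLevelAggregator()(torch.randn(1, 120, 256))
```
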
  • Patent number: 12260622
    Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: March 25, 2025
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
  • Patent number: 12236592
    Abstract: Described herein are means for systematically determining an optimal approach for the computer-aided diagnosis of a pulmonary embolism, in the context of processing medical imaging. According to a particular embodiment, there is a system specially configured for diagnosing a Pulmonary Embolism (PE) within new medical images which form no part of the dataset upon which the AI model was trained.
    Type: Grant
    Filed: September 14, 2022
    Date of Patent: February 25, 2025
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Nahid Ul Islam, Shiv Gehlot, Zongwei Zhou, Jianming Liang
  • Patent number: 12229920
    Abstract: Described herein are means for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs). For instance, an exemplary system is specially configured for implementing a new framework, called a Fixed-Point GAN, which improves upon prior known methodologies by enhancing the quality of the images generated through global, local, and identity transformation. The Fixed-Point GAN, as introduced and described herein, improves many applications dependent on image-to-image translation, including those in the field of medical image processing for the purposes of disease detection and localization. Other related embodiments are disclosed.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: February 18, 2025
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Jianming Liang, Zongwei Zhou, Nima Tajbakhsh, Md Mahfuzur Rahman Siddiquee
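
The defining constraint named in the abstract above is the fixed point: when the generator is asked to translate an image into its own source domain, it should leave the image unchanged. Below is a minimal sketch of that constraint, assuming a StarGAN-style conditional generator G(image, domain_code); the adversarial terms, loss weighting, and training schedule of the actual Fixed-Point GAN are omitted.

```python
import torch
import torch.nn.functional as F

def fixed_point_losses(G, x, src_domain, tgt_domain):
    """Illustrative loss terms for fixed-point image-to-image translation.

    G is an assumed conditional generator G(image, domain_code); x is a batch of
    images. The same-domain branch enforces the fixed-point (identity) property,
    while the cross-domain branch keeps ordinary translation with cycle consistency.
    """
    # Fixed-point constraint: translating into the source domain must be an identity map.
    same = G(x, src_domain)
    identity_loss = F.l1_loss(same, x)

    # Cross-domain translation, then cycle back to the source domain.
    fake = G(x, tgt_domain)
    cycle = G(fake, src_domain)
    cycle_loss = F.l1_loss(cycle, x)

    return identity_loss, cycle_loss
```
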
  • Patent number: 12216737
    Abstract: Described herein are systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements, in which the trained networks are then utilized in the context of medical imaging. The success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, it is tedious, laborious, and time-consuming to create large annotated datasets, and doing so demands costly, specialty-oriented skills. A novel method that naturally integrates active learning and transfer learning (fine-tuning) into a single framework is presented to dramatically reduce annotation cost: it starts with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: February 4, 2025
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Zongwei Zhou, Jae Shin, Jianming Liang
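
The loop implied by the abstract above alternates between scoring the unlabeled pool with the current network, sending the "worthiest" samples out for annotation, and continually fine-tuning the same network on the growing labeled set. The sketch below is illustrative only: the entropy criterion, the model/oracle interfaces, and the budget are assumptions, not the claimed method.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def active_finetune(model, unlabeled_pool, oracle, rounds=5, budget=100):
    """Illustrative active + continual fine-tuning loop (not the patented method).

    `model` is assumed to expose predict_proba() and finetune(); `oracle` returns
    labels on request (e.g., an expert annotator).
    """
    labeled_x, labeled_y = [], []
    for _ in range(rounds):
        # Score the remaining pool with the current (pre-trained / fine-tuned) model.
        probs = model.predict_proba(unlabeled_pool)          # (N, num_classes)
        worthiness = entropy(probs)                          # uncertain samples are "worthy"
        picked = np.argsort(worthiness)[-budget:]

        # Query annotations only for the selected samples.
        labeled_x += [unlabeled_pool[i] for i in picked]
        labeled_y += [oracle(unlabeled_pool[i]) for i in picked]
        unlabeled_pool = [x for i, x in enumerate(unlabeled_pool) if i not in set(picked)]

        # Continual fine-tuning: keep refining the same network rather than retraining.
        model.finetune(labeled_x, labeled_y)
    return model
```
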
  • Patent number: 12118455
    Abstract: Systems for selecting candidates for labelling and use in training a convolutional neural network (CNN) are provided, the systems comprising: a memory device; and at least one hardware processor configured to: receive a plurality of input candidates, wherein each candidate includes a plurality of identically labelled patches; and for each of the plurality of candidates: determine a plurality of probabilities, each of the plurality of probabilities being a probability that a unique patch of the plurality of identically labelled patches of the candidate corresponds to a label using a pre-trained CNN; identify a subset of candidates of the plurality of input candidates, wherein the subset does not include all of the plurality of candidates, based on the determined probabilities; query an external source to label the subset of candidates to produce labelled candidates; and train the pre-trained CNN using the labelled candidates.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: October 15, 2024
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Jianming Liang, Zongwei Zhou, Jae Shin
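
In the claims summarized above, each candidate is a set of identically labelled patches; a pre-trained CNN scores every patch, and the per-patch probabilities drive selection of a subset to send out for labelling. The following sketch shows one plausible scoring-and-selection step; the aggregation rule (mean patch entropy) is an illustrative assumption rather than the claimed criterion.

```python
import numpy as np

def select_candidates(cnn, candidates, k):
    """Pick k candidates whose identically labelled patches look most informative.

    `cnn(patches)` is assumed to return per-patch class probabilities of shape
    (num_patches, num_classes); the aggregation rule here is illustrative only.
    """
    scores = []
    for patches in candidates:                       # one candidate = many patches
        probs = cnn(patches)
        patch_entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
        scores.append(patch_entropy.mean())          # aggregate patch-level uncertainty
    ranked = np.argsort(scores)[::-1]
    return ranked[:k]                                # indices of the chosen subset

# The chosen subset would then be labelled by an external source and used to
# further train the pre-trained CNN.
```
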
  • Patent number: 11922628
    Abstract: Described herein are means for generation of self-taught generic models, named Models Genesis, without requiring any manual labeling, in which the Models Genesis are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for learning general-purpose image representations by recovering original sub-volumes of 3D input images from transformed 3D images. Such a system operates by cropping a sub-volume from each 3D input image; performing image transformations upon each of the sub-volumes cropped from the 3D input images to generate transformed sub-volumes; and training an encoder-decoder architecture with skip connections to learn a common image representation by restoring the original sub-volumes cropped from the 3D input images from the transformed sub-volumes generated via the image transformations.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: March 5, 2024
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Zongwei Zhou, Vatsal Sodha, Jiaxuan Pang, Jianming Liang
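
The Models Genesis recipe above builds training pairs by cropping a 3D sub-volume, deliberately distorting it, and asking an encoder-decoder with skip connections to restore the original crop. A minimal sketch of the pair generation and one training step follows; the particular distortions and the generic 3D encoder-decoder interface are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F

def make_restoration_pair(volume, crop=(64, 64, 32)):
    """Crop a random sub-volume and return (transformed, original) as a training pair."""
    z, y, x = [np.random.randint(0, volume.shape[d] - crop[d] + 1) for d in range(3)]
    sub = volume[z:z + crop[0], y:y + crop[1], x:x + crop[2]].copy()

    # Illustrative distortions (assumes intensities normalized to [0, 1]):
    # non-linear intensity remapping plus an occasional flip as a stand-in
    # for the patented family of image transformations.
    distorted = np.clip(sub, 0, 1) ** np.random.uniform(0.5, 2.0)
    if np.random.rand() < 0.5:
        distorted = distorted[:, :, ::-1].copy()
    return distorted, sub

def restoration_step(model, optimizer, volume):
    """One self-supervised step: restore the original sub-volume from its distortion."""
    distorted, original = make_restoration_pair(volume)
    inp = torch.from_numpy(distorted).float()[None, None]      # (1, 1, D, H, W)
    target = torch.from_numpy(original).float()[None, None]
    loss = F.mse_loss(model(inp), target)                      # model: 3D encoder-decoder
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
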
  • Patent number: 11915417
    Abstract: Described herein are means for training a deep model to learn contrastive representations embedded within part-whole semantics via a self-supervised learning framework, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specifically configured for performing a random cropping operation to crop a 3D cube from each of a plurality of medical images received at the system as input; performing a resize operation of the cropped 3D cubes; performing an image reconstruction operation of the resized and cropped 3D cubes to predict the resized whole image represented by the original medical images received; and generating a reconstructed image, which is analyzed for reconstruction loss against the original image, the original serving as the known ground truth for the reconstruction loss function. Other related embodiments are disclosed.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: February 27, 2024
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Ruibin Feng, Zongwei Zhou, Jianming Liang
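
The part-whole scheme above crops a random 3D cube (the part), resizes it, and trains a network to reconstruct the resized whole image from that part alone, scoring the output against the original as ground truth. A schematic PyTorch version, with shapes and interpolation mode chosen only for illustration:

```python
import torch
import torch.nn.functional as F

def part_whole_step(model, optimizer, volume,
                    part_size=(32, 32, 32), out_size=(64, 64, 64)):
    """One illustrative step: predict the (resized) whole volume from a random crop.

    `volume` is assumed to be a (1, 1, D, H, W) tensor; `model` maps a part to a
    whole-sized output.
    """
    _, _, D, H, W = volume.shape
    d0 = torch.randint(0, D - part_size[0] + 1, (1,)).item()
    h0 = torch.randint(0, H - part_size[1] + 1, (1,)).item()
    w0 = torch.randint(0, W - part_size[2] + 1, (1,)).item()
    part = volume[:, :, d0:d0 + part_size[0], h0:h0 + part_size[1], w0:w0 + part_size[2]]

    part = F.interpolate(part, size=out_size, mode="trilinear", align_corners=False)
    whole = F.interpolate(volume, size=out_size, mode="trilinear", align_corners=False)

    loss = F.mse_loss(model(part), whole)   # resized original serves as the ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
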
  • Patent number: 11763952
    Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: September 19, 2023
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
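
Semantic Genesis, as summarized above, couples two self-supervised objectives on the same network: classify each patch into a self-discovered pseudo-class (a recurring anatomical pattern) and restore the patch from a deliberate distortion. The joint loss below is a sketch under the assumption that the model returns both class logits and a restored patch; the weighting is arbitrary.

```python
import torch
import torch.nn.functional as F

def semantic_genesis_loss(model, distorted_patch, original_patch, pseudo_label, w=1.0):
    """Illustrative joint objective: self-classification + self-restoration.

    `model(x)` is assumed to return (class_logits, restored_patch); `pseudo_label`
    comes from a self-discovery step that groups recurring anatomical patterns.
    """
    logits, restored = model(distorted_patch)
    cls_loss = F.cross_entropy(logits, pseudo_label)     # self-classification
    rec_loss = F.mse_loss(restored, original_patch)      # self-restoration
    return cls_loss + w * rec_loss
```
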
  • Publication number: 20230081305
    Abstract: Described herein are means for systematically determining an optimal approach for the computer-aided diagnosis of a pulmonary embolism, in the context of processing medical imaging. According to a particular embodiment, there is a system specially configured for diagnosing a Pulmonary Embolism (PE) within new medical images which form no part of the dataset upon which the AI model was trained.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 16, 2023
    Inventors: Nahid Ul Islam, Shiv Gehlot, Zongwei Zhou, Jianming Liang
  • Publication number: 20220328189
    Abstract: Embodiments described herein include systems for implementing annotation-efficient deep learning in computer-aided diagnosis.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 13, 2022
    Inventors: Zongwei Zhou, Jianming Liang
  • Publication number: 20220309811
    Abstract: Described herein are means for the generation of Transferable Visual Word (TransVW) models through self-supervised learning in the absence of manual labeling, in which the trained TransVW models are then utilized for the processing of medical imaging.
    Type: Application
    Filed: February 19, 2022
    Publication date: September 29, 2022
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Publication number: 20220300769
    Abstract: Described herein are systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements, in which the trained networks are then utilized in the context of medical imaging. The success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, it is tedious, laborious, and time-consuming to create large annotated datasets, and doing so demands costly, specialty-oriented skills. A novel method that naturally integrates active learning and transfer learning (fine-tuning) into a single framework is presented to dramatically reduce annotation cost: it starts with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 22, 2022
    Inventors: Zongwei Zhou, Jae Shin, Jianming Liang
  • Publication number: 20220262105
    Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging.
    Type: Application
    Filed: July 17, 2020
    Publication date: August 18, 2022
    Inventors: Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
  • Patent number: 11328430
    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi-1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j-1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 10, 2022
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
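
The Xi,j notation in the abstract above describes a nested (aggregate) U-Net: column j = 0 is the ordinary down-sampling encoder, and each column j > 0 convolves the up-sampled output of the node below together with the feature maps already computed at the same level. The following is a deliberately tiny three-level sketch of that layout; the channel widths, the 2D setting, and the single output head are illustrative, and the patented aggregate of U-Nets of differing depths with deep supervision goes beyond this sketch.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyNestedUNet(nn.Module):
    """Illustrative 3-level nested U-Net with nodes X(i, j).

    i indexes the down-sampling level and j the convolution column, mirroring the
    Xi,j / xi,j notation in the abstract above.
    """
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.x00 = conv_block(1, ch[0])
        self.x10 = conv_block(ch[0], ch[1])
        self.x20 = conv_block(ch[1], ch[2])
        self.x01 = conv_block(ch[0] + ch[1], ch[0])      # X(0,1): [x00, up(x10)]
        self.x11 = conv_block(ch[1] + ch[2], ch[1])      # X(1,1): [x10, up(x20)]
        self.x02 = conv_block(2 * ch[0] + ch[1], ch[0])  # X(0,2): [x00, x01, up(x11)]
        self.head = nn.Conv2d(ch[0], 1, 1)

    def forward(self, x):
        x00 = self.x00(x)                    # j = 0: plain down-sampling path
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))   # j > 0: up-sample + concat
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))
        return self.head(x02)                # deepest dense node yields the segmentation map

# Example: segment a single-channel 64x64 image
seg = TinyNestedUNet()(torch.randn(1, 1, 64, 64))
```
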
  • Publication number: 20220114733
    Abstract: Described herein are means for implementing contrastive learning via reconstruction within a self-supervised learning framework, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for performing a random cropping operation to crop a 3D cube from each of a plurality of medical images received at the system as input; performing a resize operation of the cropped 3D cubes; performing an image reconstruction operation of the resized and cropped 3D cubes to predict the whole image represented by the original medical images received; and generating a reconstructed image, which is analyzed for reconstruction loss against the original image, the original serving as the known ground truth for the reconstruction loss function. Other related embodiments are disclosed.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 14, 2022
    Inventors: Ruibin Feng, Zongwei Zhou, Jianming Liang
  • Publication number: 20220084173
    Abstract: Described herein are means for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs). For instance, an exemplary system is specially configured for implementing a new framework, called a Fixed-Point GAN, which improves upon prior known methodologies by enhancing the quality of the images generated through global, local, and identity transformation. The Fixed-Point GAN, as introduced and described herein, improves many applications dependent on image-to-image translation, including those in the field of medical image processing for the purposes of disease detection and localization. Other related embodiments are disclosed.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 17, 2022
    Inventors: Jianming Liang, Zongwei Zhou, Nima Tajbakhsh, Md Mahfuzur Rahman Siddiquee
  • Patent number: 11204912
    Abstract: Techniques for using commit coalescing when performing micro-journal-based transaction logging are provided. In one embodiment a computer system can maintain, in a volatile memory, a globally ascending identifier, a first list of free micro-journals, and a second list of in-flight micro-journals. The computer system can further receive a transaction comprising a plurality of modifications to data or metadata stored in the byte-addressable persistent memory, select a micro-journal from the first list, obtain a lock on the globally ascending identifier, write a current value of the globally ascending identifier as a journal commit identifier into a header of the micro-journal, and write journal entries into the micro-journal corresponding to the plurality of modifications included in the transaction. The computer system can then commit the micro-journal to the byte-addressable persistent memory, increment the current value of the globally ascending identifier, and release the lock.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: December 21, 2021
    Assignee: VMWARE, INC.
    Inventors: Pratap Subrahmanyam, Zongwei Zhou, Xavier Deguillard, Rajesh Venkatasubramanian
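
The flow recited above can be sketched directly: select a micro-journal from the free list, take the lock guarding the globally ascending identifier, stamp the journal header with the current value, record the transaction's modifications, commit, increment, and release. The Python sketch below simulates persistence with an ordinary list and omits the commit-coalescing optimization named in the title; all names and structures are illustrative.

```python
import threading
from collections import deque

class MicroJournalLog:
    """Illustrative transaction log keyed by a globally ascending commit identifier.

    Persistence is simulated with an in-memory list; a real system would write the
    journal to byte-addressable persistent memory before recycling it.
    """
    def __init__(self, num_journals=8):
        self.commit_id = 0                        # globally ascending identifier
        self.id_lock = threading.Lock()
        self.free = deque({"header": None, "entries": []} for _ in range(num_journals))
        self.in_flight = []
        self.persistent_log = []                  # stand-in for persistent memory

    def log_transaction(self, modifications):
        journal = self.free.popleft()             # select a micro-journal from the free list
        self.in_flight.append(journal)
        with self.id_lock:                        # lock the globally ascending identifier
            journal["header"] = self.commit_id    # commit id written into the journal header
            journal["entries"] = list(modifications)
            self.persistent_log.append(dict(journal))   # "commit" to persistent memory
            self.commit_id += 1                   # advance the identifier before release
        self.in_flight.remove(journal)
        journal["header"], journal["entries"] = None, []
        self.free.append(journal)                 # return the journal to the free list

# Example: log a transaction consisting of two metadata updates
log = MicroJournalLog()
log.log_transaction([("inode", 7, "size", 4096), ("bitmap", 3, "bit", 1)])
```
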
  • Patent number: 11200350
    Abstract: This invention provides a method for providing trusted display to security sensitive applications on untrusted computing platforms. This invention has a minimal trusted code base and maintains full compatibility with the computing platforms, including their software and hardware. The core of the invention is a GPU separation kernel that (1) defines different types of GPU objects, (2) mediates access to security-sensitive GPU objects, and (3) emulates accesses to security-sensitive GPU objects whenever required by computing platform compatibility.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: December 14, 2021
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Virgil D. Gligor, Zongwei Zhou, Miao Yu
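
The three roles listed in the abstract above (defining GPU object types, mediating access to the security-sensitive ones, and emulating accesses when platform compatibility requires it) suggest a simple dispatch structure. The sketch below is a loose illustration of that dispatch only; the object taxonomy, policy, and emulation hooks are placeholders, not the patented separation kernel.

```python
from enum import Enum, auto

class GpuObjectType(Enum):
    """Illustrative split of GPU objects into sensitivity classes."""
    DISPLAY_DATA = auto()      # e.g., framebuffers backing the trusted display
    CONFIG_REGISTER = auto()   # security-sensitive configuration state
    GENERAL = auto()           # everything else: forwarded unmodified

SENSITIVE = {GpuObjectType.DISPLAY_DATA, GpuObjectType.CONFIG_REGISTER}

def mediate_access(obj_type, access, policy, emulate, passthrough):
    """Sketch of the separation kernel's dispatch: mediate, emulate, or pass through.

    `policy(obj_type, access)` decides whether a sensitive access is allowed,
    `emulate(access)` services it on behalf of the untrusted OS when required,
    and `passthrough(access)` forwards non-sensitive accesses to the real GPU.
    """
    if obj_type in SENSITIVE:
        if not policy(obj_type, access):
            raise PermissionError("access to security-sensitive GPU object denied")
        return emulate(access)       # emulate to preserve platform compatibility
    return passthrough(access)       # non-sensitive objects need no mediation
```
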