Patents by Inventor Nima Tajbakhsh

Nima Tajbakhsh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220405933
    Abstract: Described herein are means for implementing annotation-efficient deep learning models utilizing sparsely-annotated or annotation-free training, in which trained models are then utilized for the processing of medical imaging. An exemplary system includes at least a processor and a memory to execute instructions for learning anatomical embeddings by forcing embeddings learned from multiple modalities; initiating a training sequence of an AI model by learning dense anatomical embeddings from unlabeled data, then deriving application-specific models to diagnose diseases with a small number of examples; executing collaborative learning to generate pretrained multimodal models; training the AI model using zero-shot or few-shot learning; embedding physiological and anatomical knowledge; embedding known physical principles to refine the AI model; and outputting a trained AI model for use in diagnosing diseases and abnormal conditions in medical imaging. Other related embodiments are disclosed.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 22, 2022
    Inventors: Nima Tajbakhsh, Jianming Liang
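The few-shot idea sketched in the abstract above, deriving an application-specific classifier from pretrained embeddings and only a handful of labeled examples, can be illustrated with a nearest-centroid classifier. This is a generic sketch, not the patented method: the `embed` function and the class labels are hypothetical stand-ins for learned anatomical embeddings.

```python
import numpy as np

# Toy sketch of few-shot classification on top of fixed, pretrained
# embeddings. embed() is a hypothetical stand-in for a learned embedding.

def embed(image):
    # Stand-in "pretrained embedding": mean intensity and spread.
    return np.array([image.mean(), image.std()])

def fit_centroids(examples):
    # examples: {label: [a few images]} -> per-class embedding centroid.
    return {label: np.mean([embed(x) for x in imgs], axis=0)
            for label, imgs in examples.items()}

def classify(image, centroids):
    # Assign the label whose centroid is nearest in embedding space.
    e = embed(image)
    return min(centroids, key=lambda label: np.linalg.norm(e - centroids[label]))

rng = np.random.default_rng(1)
examples = {
    "normal":   [rng.random((4, 4)) * 0.2 for _ in range(3)],        # dark
    "abnormal": [rng.random((4, 4)) * 0.2 + 0.8 for _ in range(3)],  # bright
}
centroids = fit_centroids(examples)
print(classify(rng.random((4, 4)) * 0.2 + 0.8, centroids))  # "abnormal"
```

With frozen embeddings, only the centroids are fit, which is why a small number of examples per class can suffice.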
  • Publication number: 20220262105
    Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging.
    Type: Application
    Filed: July 17, 2020
    Publication date: August 18, 2022
    Inventors: Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
  • Patent number: 11328430
    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi-1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j-1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 10, 2022
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
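The node recurrence in the abstract above (xi,j from a down-sampled Xi-1,j when j=0, and from skip connections plus an up-sampled Xi+1,j-1 when j>0) can be sketched as follows. The `conv`, `down`, and `up` functions are simplistic stand-ins (averaging, average pooling, nearest-neighbor upsampling), not the trained convolution blocks:

```python
import numpy as np

def conv(*inputs):
    # Stand-in for a convolution block: average the stacked inputs.
    return np.mean(np.stack(inputs), axis=0)

def down(x):
    # 2x down-sampling by average pooling.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    # 2x nearest-neighbor up-sampling.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def build_feature_maps(image, depth=3):
    # x[(i, j)]: conv of down(x[i-1, 0]) when j == 0; conv of the skip
    # outputs x[i, 0..j-1] plus up(x[i+1, j-1]) when j > 0.
    x = {(0, 0): conv(image)}
    for i in range(1, depth):
        x[(i, 0)] = conv(down(x[(i - 1, 0)]))
    for j in range(1, depth):
        for i in range(depth - j):
            skips = [x[(i, k)] for k in range(j)]
            x[(i, j)] = conv(*skips, up(x[(i + 1, j - 1)]))
    return x

maps = build_feature_maps(np.ones((8, 8)), depth=3)
print(sorted(maps))        # the populated node indices (i, j)
print(maps[(0, 2)].shape)  # full-resolution output of the deepest pathway
```

The nested loop makes the "different depths" explicit: nodes (0, 1), (0, 2), … are the full-resolution outputs of progressively deeper embedded U-Nets.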
  • Publication number: 20220084173
    Abstract: Described herein are means for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs). For instance, an exemplary system is specially configured for implementing a new framework, called a Fixed-Point GAN, which improves upon prior known methodologies by enhancing the quality of the images generated through global, local, and identity transformation. The Fixed-Point GAN, as introduced and described herein, improves many applications dependent on image-to-image translation, including those in the field of medical image processing for the purposes of disease detection and localization. Other related embodiments are disclosed.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 17, 2022
    Inventors: Jianming Liang, Zongwei Zhou, Nima Tajbakhsh, Md Mahfuzur Rahman Siddiquee
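The identity-transformation property named in the abstract above can be illustrated with a toy translator: mapping an image into its own domain should be a fixed point (leave it unchanged), while cross-domain translation may alter it. The additive per-domain shift below is a deliberately trivial stand-in for the GAN generator:

```python
import numpy as np

# Toy stand-in for a conditional translator: each domain adds a fixed shift.
DOMAIN_SHIFT = {0: 0.0, 1: 0.3}

def translate(image, source, target):
    return image + (DOMAIN_SHIFT[target] - DOMAIN_SHIFT[source])

def identity_penalty(image, domain):
    # Fixed-point constraint: same-domain translation must be the identity.
    return float(np.abs(translate(image, domain, domain) - image).mean())

img = np.full((2, 2), 0.5)
print(identity_penalty(img, 0))            # 0.0 for a true fixed point
print(float(translate(img, 0, 1).mean()))  # cross-domain shift: 0.8
```

In training, a penalty of this shape discourages the generator from making spurious changes, which is what makes same-domain translation useful for localizing disease (the residual highlights only what must change).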
  • Patent number: 11164021
    Abstract: Methods, systems, and media for discriminating and generating translated images are provided. In some embodiments, the method comprises: identifying a set of training images, wherein each image is associated with at least one domain from a plurality of domains; training a generator network to generate: i) a first fake image that is associated with a first domain; and ii) a second fake image that is associated with a second domain; training a discriminator network, using as inputs to the discriminator network: i) an image from the set of training images; ii) the first fake image; and iii) the second fake image; and using the generator network to generate, for an image not included in the set of training images, at least one of: i) a third fake image that is associated with the first domain; and ii) a fourth fake image that is associated with the second domain.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: November 2, 2021
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
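The data flow in the abstract above, one conditional generator producing fakes in two domains and one discriminator scoring the real image alongside both fakes, can be sketched with trivial stand-in "networks" (the real generator and discriminator are trained models, not these functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(image, domain):
    # Placeholder conditional generator: a per-domain intensity shift.
    return image + (0.5 if domain == 0 else -0.5)

def discriminator(image):
    # Placeholder discriminator: mean intensity as a realness score.
    return float(image.mean())

real = rng.random((4, 4))
fake_a = generator(real, domain=0)  # first fake, first domain
fake_b = generator(real, domain=1)  # second fake, second domain

# The discriminator is trained on the real image and both fakes.
scores = [discriminator(x) for x in (real, fake_a, fake_b)]
print([round(s, 3) for s in scores])
```

Once trained, the same generator is applied to unseen images to produce third and fourth fakes in either domain, which is the inference step the abstract describes.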
  • Patent number: 11164067
    Abstract: Disclosed are systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging. For example, a system having means to execute a neural network model formed from a plurality of layer blocks, including an encoder layer block which precedes a plurality of decoder layer blocks, includes: means for associating a resolution value with each of the plurality of layer blocks; means for processing, via the encoder layer block, a respective layer block input including a down-sampled layer block output; means for processing, via decoder layer blocks, a respective layer block input including an up-sampled layer block output and a layer block output of a previous layer block associated with a prior resolution value of a layer block which precedes the respective decoder layer block; and means for generating the respective layer block output by summing or concatenating the processed layer block inputs.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: November 2, 2021
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jianming Liang, Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh
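The decoder step in the abstract above, combining an up-sampled decoder input with the same-resolution output of an earlier block by summing or concatenating, can be sketched as below. The nearest-neighbor `up` and the bare arithmetic are illustrative stand-ins for the learned layers:

```python
import numpy as np

def up(x):
    # 2x nearest-neighbor up-sampling to reach the skip's resolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def decoder_block(upsampled, skip, mode="sum"):
    # Merge the up-sampled decoder input with the earlier block's output
    # at the same resolution, by summing or concatenating.
    if mode == "sum":
        return upsampled + skip
    return np.concatenate([upsampled, skip], axis=-1)

skip = np.ones((4, 4))                               # earlier block's output
decoded = decoder_block(up(np.ones((2, 2))), skip)   # summed merge
print(decoded.shape)  # (4, 4)
```

Summing keeps the channel count fixed while concatenation widens it; which merge is used changes the shape (and parameter count) of the following convolution.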
  • Patent number: 11100685
    Abstract: Detecting a pulmonary embolism (PE) in an image dataset of a blood vessel involves obtaining a volume of interest (VOI) in the blood vessel, generating a plurality of PE candidates within the VOI, generating a set of voxels for each PE candidate, estimating for each PE candidate an orientation of the blood vessel that contains the PE candidate, given the set of voxels for the PE candidate, and generating a visualization of the blood vessel that contains the PE candidate using the estimated orientation of the blood vessel that contains the PE candidate.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: August 24, 2021
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jianming Liang, Nima Tajbakhsh, Jaeyul Shin
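One plausible way to estimate the orientation step described above, the vessel direction given the candidate's set of voxels, is the principal axis of the voxel cloud (PCA). The patent does not specify this estimator; the sketch only illustrates the idea:

```python
import numpy as np

def vessel_orientation(voxels):
    # voxels: (N, 3) array of (x, y, z) coordinates inside the candidate.
    centered = voxels - voxels.mean(axis=0)
    # Eigenvector of the covariance matrix with the largest eigenvalue
    # (np.linalg.eigh returns eigenvalues in ascending order).
    cov = centered.T @ centered / len(voxels)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, -1]  # unit vector along the dominant axis

# Synthetic elongated voxel cloud running along the x-axis.
t = np.linspace(-5, 5, 50)
voxels = np.stack([t, 0.1 * np.sin(t), 0.1 * np.cos(t)], axis=1)
axis = vessel_orientation(voxels)
print(np.round(np.abs(axis), 2))  # dominant direction close to the x-axis
```

With the orientation in hand, the visualization step can resample the volume in planes perpendicular to this axis, so the vessel appears consistently oriented to the reader.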
  • Patent number: 10861151
    Abstract: Mechanisms for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy are provided. In some embodiments, the mechanisms can include a quality monitoring system that uses a first trained classifier to monitor image frames from a colonoscopic video to determine which image frames are informative frames and which image frames are non-informative frames. The informative image frames can be passed to an automatic polyp detection system that uses a second trained classifier to localize and identify whether a polyp or any other suitable object is present in one or more of the informative image frames.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: December 8, 2020
    Assignee: THE ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jianming Liang, Nima Tajbakhsh
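The two-classifier pipeline in the abstract above can be sketched in a few lines: a quality classifier gates the frames, and only informative frames reach the polyp detector. Both threshold "classifiers" below are hypothetical stand-ins for the trained ones:

```python
# Stand-in quality classifier: treat high-contrast frames as informative.
def is_informative(frame):
    return max(frame) - min(frame) > 0.5

# Stand-in polyp detector: flag frames containing a strongly bright pixel.
def detect_polyp(frame):
    return any(p > 0.9 for p in frame)

video = [
    [0.1, 0.1, 0.2],   # low contrast -> non-informative, never inspected
    [0.0, 0.5, 0.95],  # informative, polyp-like response
    [0.1, 0.8, 0.3],   # informative, no polyp
]
detections = [detect_polyp(f) for f in video if is_informative(f)]
print(detections)  # [True, False]
```

Gating on quality first means the detector never wastes effort, or raises false alarms, on blurry or washed-out frames.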
  • Publication number: 20200380695
    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi-1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j-1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
    Type: Application
    Filed: May 28, 2020
    Publication date: December 3, 2020
    Inventors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
  • Publication number: 20200364477
    Abstract: Methods, systems, and media for discriminating and generating translated images are provided. In some embodiments, the method comprises: identifying a set of training images, wherein each image is associated with at least one domain from a plurality of domains; training a generator network to generate: i) a first fake image that is associated with a first domain; and ii) a second fake image that is associated with a second domain; training a discriminator network, using as inputs to the discriminator network: i) an image from the set of training images; ii) the first fake image; and iii) the second fake image; and using the generator network to generate, for an image not included in the set of training images, at least one of: i) a third fake image that is associated with the first domain; and ii) a fourth fake image that is associated with the second domain.
    Type: Application
    Filed: May 15, 2020
    Publication date: November 19, 2020
    Inventors: Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
  • Publication number: 20200074701
    Abstract: Detecting a pulmonary embolism (PE) in an image dataset of a blood vessel involves obtaining a volume of interest (VOI) in the blood vessel, generating a plurality of PE candidates within the VOI, generating a set of voxels for each PE candidate, estimating for each PE candidate an orientation of the blood vessel that contains the PE candidate, given the set of voxels for the PE candidate, and generating a visualization of the blood vessel that contains the PE candidate using the estimated orientation of the blood vessel that contains the PE candidate.
    Type: Application
    Filed: August 29, 2019
    Publication date: March 5, 2020
    Inventors: Jianming Liang, Nima Tajbakhsh, Jaeyul Shin
  • Publication number: 20200074271
    Abstract: Disclosed are systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging. For example, a system having means to execute a neural network model formed from a plurality of layer blocks, including an encoder layer block which precedes a plurality of decoder layer blocks, includes: means for associating a resolution value with each of the plurality of layer blocks; means for processing, via the encoder layer block, a respective layer block input including a down-sampled layer block output; means for processing, via decoder layer blocks, a respective layer block input including an up-sampled layer block output and a layer block output of a previous layer block associated with a prior resolution value of a layer block which precedes the respective decoder layer block; and means for generating the respective layer block output by summing or concatenating the processed layer block inputs.
    Type: Application
    Filed: August 29, 2019
    Publication date: March 5, 2020
    Inventors: Jianming Liang, Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh
  • Patent number: 10052027
    Abstract: A system and methods for polyp detection using optical colonoscopy images are provided. In some aspects, the system includes an input configured to receive a series of optical images, and a processor configured to process the series of optical images with steps comprising receiving an optical image from the input, constructing an edge map corresponding to the optical image, the edge map comprising a plurality of edge pixels, and generating a refined edge map by applying a classification scheme based on patterns of intensity variation to the plurality of edge pixels in the edge map. The processor may also process the series with steps of identifying polyp candidates using the refined edge map, computing probabilities that identified polyp candidates are polyps, and generating a report, using the computed probabilities, indicating detected polyps. The system also includes an output for displaying the report.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: August 21, 2018
    Assignees: Mayo Foundation for Medical Education and Research, Arizona Board of Regents on behalf of Arizona State University
    Inventors: Nima Tajbakhsh, Jianming Liang, Suryakanth R. Gurudu
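The edge-refinement step in the abstract above, keeping only edge pixels whose local pattern of intensity variation looks like a genuine boundary, can be sketched with a fixed-threshold stand-in for the trained classification scheme:

```python
import numpy as np

def refine_edge_map(image, edges, threshold=0.3):
    # Keep an edge pixel only if intensity changes sharply across it;
    # the threshold stands in for the trained intensity-pattern classifier.
    refined = []
    for r, c in edges:
        left, right = float(image[r, c - 1]), float(image[r, c + 1])
        if abs(right - left) > threshold:
            refined.append((r, c))
    return refined

image = np.array([
    [0.1, 0.1, 0.9, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9, 0.9],
])
edges = [(0, 1), (0, 3), (1, 1)]  # raw edge-detector output (with a spurious hit)
print(refine_edge_map(image, edges))  # the flat-region edge (0, 3) is discarded
```

Pruning spurious edges before candidate generation shrinks the search space, so the downstream probability computation runs on far fewer, better-supported candidates.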
  • Patent number: 10055843
    Abstract: A system and methods for detecting polyps using optical images acquired during a colonoscopy. In some aspects, a method includes receiving the set of optical images from the input and generating polyp candidates by analyzing the received set of optical images. The method also includes generating a plurality of image patches around locations associated with each polyp candidate, applying a set of convolutional neural networks to the corresponding image patches, and computing probabilities indicative of a maximum response for each convolutional neural network. The method further includes identifying polyps using the computed probabilities for each polyp candidate, and generating a report indicating identified polyps.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: August 21, 2018
    Assignees: Mayo Foundation for Medical Education and Research, Arizona Board of Regents on behalf of Arizona State University
    Inventors: Nima Tajbakhsh, Suryakanth R. Gurudu, Jianming Liang
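The scoring step in the abstract above, multiple networks each evaluating several image patches around a candidate and keeping the maximum response, can be sketched as follows. The two "networks" are trivial scoring functions standing in for trained CNNs, and the patch sizes are illustrative:

```python
import numpy as np

def patches_around(image, r, c, sizes=(1, 2)):
    # Crop square patches at several scales centered on the candidate.
    return [image[max(r - s, 0):r + s + 1, max(c - s, 0):c + s + 1]
            for s in sizes]

def candidate_probability(image, r, c, networks):
    patches = patches_around(image, r, c)
    # Maximum response of each network over the candidate's patches,
    # then a combined score across networks.
    per_net = [max(net(p) for p in patches) for net in networks]
    return float(np.mean(per_net))

# Stand-in "networks": mean- and max-intensity responses.
networks = [lambda p: float(p.mean()), lambda p: float(p.max())]
image = np.zeros((5, 5)); image[2, 2] = 1.0   # single bright candidate pixel
print(round(candidate_probability(image, 2, 2, networks), 3))  # 0.556
```

Taking the maximum over patches makes the score robust to the exact patch placement around the candidate, since at least one scale tends to frame the lesion well.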
  • Publication number: 20180225820
    Abstract: Mechanisms for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy are provided. In some embodiments, the mechanisms can include a quality monitoring system that uses a first trained classifier to monitor image frames from a colonoscopic video to determine which image frames are informative frames and which image frames are non-informative frames. The informative image frames can be passed to an automatic polyp detection system that uses a second trained classifier to localize and identify whether a polyp or any other suitable object is present in one or more of the informative image frames.
    Type: Application
    Filed: August 8, 2016
    Publication date: August 9, 2018
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jianming Liang, Nima Tajbakhsh
  • Patent number: 9978142
    Abstract: A system for quality assessment of optical colonoscopy images includes an input device configured to acquire a series of images during an optical colonoscopy. A computing device is coupled in communication with the input device and configured to acquire from the input device an input image from the series of images captured during the optical colonoscopy; form a cell grid including a plurality of cells on the input image; perform an image transformation onto the input image with each cell of the plurality of cells within the cell grid; reconstruct each cell to form a reconstructed image; compute a difference image of a sum of a plurality of differences between the input image and the reconstructed image; compute a histogram of the difference image; and apply a probabilistic classifier to the histogram to calculate an informativeness score for the input image.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: May 22, 2018
    Assignees: Arizona Board of Regents on Behalf of Arizona State University, Mayo Foundation for Medical Education and Research
    Inventors: Changching Chi, Nima Tajbakhsh, Haripriya Sharma, Suryakanth Gurudu, Jianming Liang
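The scoring pipeline in the abstract above, per-cell transform and reconstruction, a histogram of the difference image, then a classifier, can be sketched as below. The per-cell mean "reconstruction" and the fraction-based score are crude stand-ins for the patented transform and probabilistic classifier:

```python
import numpy as np

def informativeness_score(image, cell=4):
    h, w = image.shape
    recon = np.zeros_like(image)
    # Reconstruct each grid cell from a crude low-pass summary (its mean).
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            block = image[r:r + cell, c:c + cell]
            recon[r:r + cell, c:c + cell] = block.mean()
    diff = np.abs(image - recon)
    hist, _ = np.histogram(diff, bins=4, range=(0, 1))
    # Stand-in classifier: fraction of pixels the reconstruction missed.
    return 1.0 - hist[0] / diff.size

flat = np.ones((8, 8))                                        # uniform, blurry-like
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # detailed texture
print(informativeness_score(flat), informativeness_score(checker))  # 0.0 1.0
```

The intuition matches the abstract: uninformative (blurry or uniform) frames reconstruct almost perfectly from coarse summaries, so a large reconstruction error is evidence of real image detail.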
  • Patent number: 9959615
    Abstract: A system and method for detecting pulmonary embolisms in a subject's vasculature are provided. In some aspects, the method includes acquiring a set of images representing a vasculature of the subject, and analyzing the set of images to identify pulmonary embolism candidates associated with the vasculature. The method also includes generating, for identified pulmonary embolism candidates, image patches based on a vessel-aligned image representation, and applying a set of convolutional neural networks to the generated image patches to identify pulmonary embolisms. The method further includes generating a report indicating identified pulmonary embolisms.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: May 1, 2018
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Jianming Liang, Nima Tajbakhsh
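The vessel-aligned image representation named in the abstract above can be sketched by building an orthonormal frame whose third axis is the vessel direction and sampling a cross-sectional grid in the perpendicular plane. The nearest-neighbor sampling and patch layout are assumptions; the patent's interpolation scheme is not specified here:

```python
import numpy as np

def orthonormal_frame(direction):
    # Build (u, v, w) with w along the vessel and u, v spanning its cross-section.
    w = direction / np.linalg.norm(direction)
    seed = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(w, seed); u /= np.linalg.norm(u)
    v = np.cross(w, u)
    return u, v, w

def cross_section(volume, center, direction, size=5):
    # Sample a size x size grid in the plane perpendicular to the vessel.
    u, v, w = orthonormal_frame(direction)
    patch = np.zeros((size, size))
    half = size // 2
    for a in range(size):
        for b in range(size):
            p = np.rint(center + (a - half) * u + (b - half) * v).astype(int)
            patch[a, b] = volume[tuple(np.clip(p, 0, np.array(volume.shape) - 1))]
    return patch

# Synthetic volume with a bright "vessel" running along the z-axis.
vol = np.zeros((9, 9, 9)); vol[4, 4, :] = 1.0
patch = cross_section(vol, center=np.array([4.0, 4.0, 4.0]),
                      direction=np.array([0.0, 0.0, 1.0]))
print(patch[2, 2], patch.sum())  # only the vessel's center pixel is bright
```

Feeding such aligned patches to the convolutional networks means an embolus always appears in a consistent pose, so the networks need not learn every possible vessel orientation.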
  • Patent number: 9924927
    Abstract: A system for automatically determining a thickness of a wall of an artery of a subject includes an ECG monitoring device that captures an electrocardiogram (ECG) signal from the subject, and an ultrasound video imaging device, coupled to the ECG monitoring device, that receives the ECG signal from the ECG monitoring device, and captures a corresponding ultrasound video of the wall of the artery of the subject. The system produces a plurality of frames of video comprising the ultrasound video of the wall of the artery of the subject and an image of the ECG signal. A processor is configured to select a subset of the plurality of frames of the ultrasound video based on the image of the ECG signal, locate automatically a region of interest (ROI) in each frame of the subset of the plurality of frames of the video using a machine-based artificial neural network, and measure automatically a thickness of the wall of the artery in each ROI using the machine-based artificial neural network.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: March 27, 2018
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jae Yul Shin, Nima Tajbakhsh, Jianming Liang
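The ECG-gated frame selection described above, choosing the ultrasound frames that coincide with a recurring point in the cardiac cycle, can be sketched with a crude threshold-crossing R-peak picker. The simple peak detector is an assumption, not the patented selection logic:

```python
import numpy as np

def r_peak_frames(ecg, threshold=0.8):
    # Frame indices where the ECG crosses the threshold upward
    # (a crude R-peak gate; one frame per heartbeat).
    above = ecg > threshold
    return [i for i in range(1, len(ecg)) if above[i] and not above[i - 1]]

ecg = np.zeros(60); ecg[[10, 30, 50]] = 1.0   # synthetic R-peaks
print(r_peak_frames(ecg))  # [10, 30, 50] -> frames used for wall measurement
```

Measuring only at a fixed cardiac phase keeps the arterial wall in a comparable state across beats, which stabilizes the downstream thickness measurement.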
  • Publication number: 20180075599
    Abstract: A system and methods for detecting polyps using optical images acquired during a colonoscopy. In some aspects, a method includes receiving the set of optical images from the input and generating polyp candidates by analyzing the received set of optical images. The method also includes generating a plurality of image patches around locations associated with each polyp candidate, applying a set of convolutional neural networks to the corresponding image patches, and computing probabilities indicative of a maximum response for each convolutional neural network. The method further includes identifying polyps using the computed probabilities for each polyp candidate, and generating a report indicating identified polyps.
    Type: Application
    Filed: March 31, 2016
    Publication date: March 15, 2018
    Inventors: Nima Tajbakhsh, Suryakanth R. Gurudu, Jianming Liang
  • Publication number: 20170265747
    Abstract: A system and methods for polyp detection using optical colonoscopy images are provided. In some aspects, the system includes an input configured to receive a series of optical images, and a processor configured to process the series of optical images with steps comprising receiving an optical image from the input, constructing an edge map corresponding to the optical image, the edge map comprising a plurality of edge pixels, and generating a refined edge map by applying a classification scheme based on patterns of intensity variation to the plurality of edge pixels in the edge map. The processor may also process the series with steps of identifying polyp candidates using the refined edge map, computing probabilities that identified polyp candidates are polyps, and generating a report, using the computed probabilities, indicating detected polyps. The system also includes an output for displaying the report.
    Type: Application
    Filed: June 8, 2017
    Publication date: September 21, 2017
    Inventors: Nima Tajbakhsh, Jianming Liang, Suryakanth R. Gurudu