Patents by Inventor Jianming Liang

Jianming Liang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11436725
    Abstract: Not only is annotating medical images tedious and time consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second best individual AUC scores on 3 diseases.
    Type: Grant
    Filed: November 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang
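As a rough illustration of the self-supervised idea summarized in patent 11436725 above, the sketch below clusters unlabeled image patches into "visual words" and uses the cluster indices as pseudo-labels for pretraining. The KMeans step, patch size, and tiny CNN are illustrative assumptions, not the patented TransVW pipeline.

```python
# Hedged sketch of the visual-word pretraining idea (assumptions throughout).
import numpy as np
from sklearn.cluster import KMeans
import torch
import torch.nn as nn

def extract_patches(images, patch=16, per_image=8, rng=np.random.default_rng(0)):
    """images: (N, H, W) array. Returns flattened random patches."""
    patches = []
    for img in images:
        for _ in range(per_image):
            y = rng.integers(0, img.shape[0] - patch)
            x = rng.integers(0, img.shape[1] - patch)
            patches.append(img[y:y + patch, x:x + patch].ravel())
    return np.stack(patches)

# 1) Self-discover "visual words": cluster patches; cluster ids become pseudo-labels.
images = np.random.rand(32, 64, 64).astype(np.float32)      # stand-in unlabeled images
patches = extract_patches(images)
words = KMeans(n_clusters=10, n_init=10, random_state=0).fit(patches)
pseudo_labels = torch.from_numpy(words.labels_).long()

# 2) Pretrain a small CNN to classify each patch into its visual word.
x = torch.from_numpy(patches).float().view(-1, 1, 16, 16)
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
                    nn.Linear(8 * 16 * 16, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(net(x), pseudo_labels)
    loss.backward()
    opt.step()

# 3) The pretrained weights (minus the head) would then be transferred and
#    fine-tuned on a small annotated target task, reducing annotation effort.
print("pretraining loss:", float(loss))
```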
  • Publication number: 20220270357
    Abstract: Described herein are means for implementing medical image segmentation using interactive refinement, in which the trained deep models are then utilized for the processing of medical imaging.
    Type: Application
    Filed: February 18, 2022
    Publication date: August 25, 2022
    Inventors: Diksha Goyal, Jianming Liang
  • Publication number: 20220262105
    Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging.
    Type: Application
    Filed: July 17, 2020
    Publication date: August 18, 2022
    Inventors: Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
  • Patent number: 11328430
    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi-1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j-1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 10, 2022
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
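The feature-map recurrence in patent 11328430 above can be pictured with the PyTorch sketch below: node Xi,j convolves a down-sampled output of Xi-1,j when j=0 and an up-sampled output of Xi+1,j-1 when j>0. The channel sizes, depth, and the concatenation of same-level feature maps are assumptions added to make a working example, not the patented implementation.

```python
# Hedged sketch of an aggregate (nested) U-Net feature computation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyAggregateUNet(nn.Module):
    """Aggregate of U-Nets of different depths sharing one encoder."""
    def __init__(self, in_ch=1, base=16, depth=3):
        super().__init__()
        self.depth = depth
        ch = [base * 2 ** i for i in range(depth + 1)]
        self.blocks = nn.ModuleDict()
        for i in range(depth + 1):
            for j in range(depth + 1 - i):
                if j == 0:
                    cin = in_ch if i == 0 else ch[i - 1]
                else:
                    # j same-level feature maps plus one up-sampled map (assumption)
                    cin = ch[i] * j + ch[i + 1]
                self.blocks[f"{i},{j}"] = conv_block(cin, ch[i])
        self.head = nn.Conv2d(ch[0], 1, kernel_size=1)

    def forward(self, x):
        feats = {}
        for j in range(self.depth + 1):
            for i in range(self.depth + 1 - j):
                if j == 0:
                    # x^{i,0}: convolution on a down-sampling of x^{i-1,0}
                    inp = x if i == 0 else F.max_pool2d(feats[(i - 1, 0)], 2)
                else:
                    # x^{i,j}: convolution on same-level maps concatenated with
                    # an up-sampling of x^{i+1,j-1}
                    up = F.interpolate(feats[(i + 1, j - 1)], scale_factor=2,
                                       mode="bilinear", align_corners=False)
                    inp = torch.cat([feats[(i, k)] for k in range(j)] + [up], dim=1)
                feats[(i, j)] = self.blocks[f"{i},{j}"](inp)
        # segmentation predicted from the top-level node of the deepest U-Net
        return torch.sigmoid(self.head(feats[(0, self.depth)]))

if __name__ == "__main__":
    model = TinyAggregateUNet()
    print(model(torch.randn(1, 1, 64, 64)).shape)  # -> torch.Size([1, 1, 64, 64])
```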
  • Publication number: 20220114733
    Abstract: Described herein are means for implementing contrastive learning via reconstruction within a self-supervised learning framework, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for performing a random cropping operation to crop a 3D cube from each of a plurality of medical images received at the system as input; performing a resize operation of the cropped 3D cubes; performing an image reconstruction operation of the resized and cropped 3D cubes to predict the whole image represented by the original medical images received; and generating a reconstructed image which is analyzed for reconstruction loss against the original image representing a known ground truth image to the reconstruction loss function. Other related embodiments are disclosed.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 14, 2022
    Inventors: Ruibin Feng, Zongwei Zhou, Jianming Liang
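The steps listed in publication 20220114733 above map onto a short pipeline: random 3D crop, resize, reconstruction of the whole image, and a reconstruction loss against the original. The sketch below follows those steps under stated assumptions (a stand-in network, MSE as the loss); it is not the patented system.

```python
# Hedged sketch of the crop -> resize -> reconstruct -> loss pipeline.
import torch
import torch.nn.functional as F

def random_crop_3d(volume, crop=(32, 32, 32)):
    """volume: (C, D, H, W). Returns a randomly located 3D cube."""
    _, D, H, W = volume.shape
    d = torch.randint(0, D - crop[0] + 1, (1,)).item()
    h = torch.randint(0, H - crop[1] + 1, (1,)).item()
    w = torch.randint(0, W - crop[2] + 1, (1,)).item()
    return volume[:, d:d + crop[0], h:h + crop[1], w:w + crop[2]]

def reconstruction_step(model, volume, crop=(32, 32, 32)):
    cube = random_crop_3d(volume, crop)                      # random cropping
    resized = F.interpolate(cube.unsqueeze(0), size=volume.shape[1:],
                            mode="trilinear", align_corners=False)  # resize
    recon = model(resized)                                   # image reconstruction
    return F.mse_loss(recon, volume.unsqueeze(0))            # loss vs. ground truth

if __name__ == "__main__":
    model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # stand-in network
    vol = torch.randn(1, 64, 64, 64)                         # one "medical image"
    loss = reconstruction_step(model, vol)
    loss.backward()
    print(float(loss))
```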
  • Patent number: 11296561
    Abstract: A ceiling fan has a motor and multiple fan blades. The motor has a stator assembly and a rotor assembly. The stator assembly has a frame, a stator core, and multiple coils. The frame has multiple branch containers and multiple protrusions on the branch containers. The stator core is securely mounted in the frame. The coils are wound on the branch containers and protrusions, so each coil may be elliptical and thus carry a larger magnetic flux, converting electric energy into torque to drive the rotor assembly with higher efficiency. In addition, the rotor assembly has multiple magnetic components, and each magnetic component has a magnetic pole. The number of magnetic poles is larger than the number of branch containers. The motor can therefore provide a higher torque even at a lower rotating speed.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: April 5, 2022
    Assignee: Foshan Carro Electrical Co., Ltd.
    Inventors: Jiansheng Zhang, Ruhui Huang, Hanhua Huang, Jianming Liang
  • Publication number: 20220084173
    Abstract: Described herein are means for implementing fixed-point image-to-image translation using improved Generative Adversarial Networks (GANs). For instance, an exemplary system is specially configured for implementing a new framework, called a Fixed-Point GAN, which improves upon prior known methodologies by enhancing the quality of the images generated through global, local, and identity transformation. The Fixed-Point GAN, as introduced and described herein, improves many applications dependent on image-to-image translation, including those in the field of medical image processing for the purposes of disease detection and localization. Other related embodiments are disclosed.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 17, 2022
    Inventors: Jianming Liang, Zongwei Zhou, Nima Tajbakhsh, Md Mahfuzur Rahman Siddiquee
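One way to read the "identity transformation" mentioned in publication 20220084173 above is as a fixed-point constraint: translating an image to its own domain should return the image unchanged, and translating to another domain lets a difference map localize what changed. The sketch below illustrates that reading only; the tiny translator and L1 identity loss are assumptions, not the patented Fixed-Point GAN.

```python
# Hedged sketch of a fixed-point (identity) constraint and difference-map localization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTranslator(nn.Module):
    def __init__(self, channels=1, n_domains=2):
        super().__init__()
        self.net = nn.Conv2d(channels + n_domains, channels, 3, padding=1)

    def forward(self, x, domain_onehot):
        code = domain_onehot[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return self.net(torch.cat([x, code], dim=1))

G = TinyTranslator()
x = torch.rand(1, 1, 32, 32)                 # input image
same = torch.tensor([[1.0, 0.0]])            # the image's own domain
other = torch.tensor([[0.0, 1.0]])           # target ("healthy") domain

identity_loss = F.l1_loss(G(x, same), x)     # fixed-point constraint: G(x, same) ~ x
translated = G(x, other)
localization = (translated - x).abs()        # difference map highlights changed regions
print(float(identity_loss), localization.shape)
```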
  • Patent number: 11177977
    Abstract: A method for control of a soft generic routing encapsulation (GRE) tunnel based on client activity includes: receiving a data packet from a first external device; storing an identifier associated with the first external device in a client table and a corresponding timestamp associated with receipt of the data packet; creating a soft GRE tunnel between a local interface of the computing device and a remote gateway; updating the client table, wherein updating the client table includes adding a new identifier and corresponding timestamp associated with additional external devices upon receipt of respective data packets, and updating the timestamp corresponding to the respective identifier upon receipt of an additional packet from an additional external device; and halting a GRE health-check process associated with the soft GRE tunnel once a predetermined period of time has elapsed since the timestamp corresponding to each identifier stored in the client table.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: November 16, 2021
    Assignee: ARRIS ENTERPRISES LLC
    Inventors: Jianxiang Chen, Jianming Liang
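The client-activity bookkeeping described in patent 11177977 above can be pictured with the Python sketch below: record an identifier and timestamp per external device, refresh the timestamp on each new packet, and halt the soft GRE tunnel's health-check once every entry has been idle past a threshold. The class name, in-memory dict, and timeout are illustrative assumptions, not the ARRIS implementation.

```python
# Hedged sketch of per-client timestamp tracking for a soft GRE tunnel.
import time

class SoftGreTunnelMonitor:
    def __init__(self, idle_timeout_s=300):
        self.idle_timeout_s = idle_timeout_s
        self.client_table = {}          # identifier (e.g., client MAC) -> last-seen time
        self.health_check_active = False

    def on_packet(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        if not self.client_table:
            self._create_tunnel()       # first client triggers soft GRE tunnel setup
        self.client_table[client_id] = now   # add new entry or refresh timestamp

    def maybe_halt_health_check(self, now=None):
        now = time.monotonic() if now is None else now
        all_idle = all(now - ts > self.idle_timeout_s
                       for ts in self.client_table.values())
        if self.client_table and all_idle and self.health_check_active:
            self.health_check_active = False      # stop GRE keepalive/health checks
        return self.health_check_active

    def _create_tunnel(self):
        # Placeholder for establishing the tunnel between the local interface
        # and the remote gateway, then starting its health-check process.
        self.health_check_active = True

if __name__ == "__main__":
    mon = SoftGreTunnelMonitor(idle_timeout_s=60)
    mon.on_packet("aa:bb:cc:dd:ee:01", now=0.0)
    mon.on_packet("aa:bb:cc:dd:ee:02", now=10.0)
    print(mon.maybe_halt_health_check(now=30.0))   # True: clients still recent
    print(mon.maybe_halt_health_check(now=500.0))  # False: all idle, checks halted
```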
  • Publication number: 20210342646
    Abstract: Described herein are means for training a deep model to learn contrastive representations embedded within part-whole semantics via a self-supervised learning framework, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specifically configured for performing a random cropping operation to crop a 3D cube from each of a plurality of medical images received at the system as input, performing a resize operation of the cropped 3D cubes, performing an image reconstruction operation of the resized and cropped 3D cubes to predict the resized whole image represented by the original medical images received; and generating a reconstructed image which is analyzed for reconstruction loss against the original image representing a known ground truth image to the reconstruction loss function. Other related embodiments are disclosed.
    Type: Application
    Filed: April 26, 2021
    Publication date: November 4, 2021
    Inventors: Ruibin Feng, Zongwei Zhou, Jianming Liang
  • Publication number: 20210343014
    Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging.
    Type: Application
    Filed: April 30, 2021
    Publication date: November 4, 2021
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Patent number: 11164021
    Abstract: Methods, systems, and media for discriminating and generating translated images are provided. In some embodiments, the method comprises: identifying a set of training images, wherein each image is associated with at least one domain from a plurality of domains; training a generator network to generate: i) a first fake image that is associated with a first domain; and ii) a second fake image that is associated with a second domain; training a discriminator network, using as inputs to the discriminator network: i) an image from the set of training images; ii) the first fake image; and iii) the second fake image; and using the generator network to generate, for an image not included in the set of training images at least one of: i) a third fake image that is associated with the first domain; and ii) a fourth fake image that is associated with the second domain.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: November 2, 2021
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
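The training procedure summarized in patent 11164021 above, where the generator produces one fake per target domain and the discriminator is trained on the real image plus both fakes, is sketched below. The architectures, the one-hot domain conditioning, and the adversarial losses are assumptions for a working example, not the patented method.

```python
# Hedged sketch of a two-domain generator/discriminator training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, channels=3, n_domains=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + n_domains, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh())

    def forward(self, x, domain):          # domain: (N, n_domains) one-hot
        code = domain[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return self.net(torch.cat([x, code], dim=1))

class Discriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))     # patch real/fake scores

    def forward(self, x):
        return self.net(x)

def training_step(G, D, opt_g, opt_d, real, dom_a, dom_b):
    fake_a, fake_b = G(real, dom_a), G(real, dom_b)   # one fake per domain

    # Discriminator: real image vs. both generated (fake) images.
    d_real, d_fa, d_fb = D(real), D(fake_a.detach()), D(fake_b.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fa, torch.zeros_like(d_fa))
              + F.binary_cross_entropy_with_logits(d_fb, torch.zeros_like(d_fb)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator for both target domains.
    g_fa, g_fb = D(fake_a), D(fake_b)
    g_loss = (F.binary_cross_entropy_with_logits(g_fa, torch.ones_like(g_fa))
              + F.binary_cross_entropy_with_logits(g_fb, torch.ones_like(g_fb)))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return float(d_loss), float(g_loss)

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    real = torch.randn(2, 3, 32, 32)
    dom_a = torch.tensor([[1., 0.], [1., 0.]])
    dom_b = torch.tensor([[0., 1.], [0., 1.]])
    print(training_step(G, D, opt_g, opt_d, real, dom_a, dom_b))
```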
  • Patent number: 11164067
    Abstract: Disclosed are systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging-intensive applications including medical imaging. For example, a system having means to execute a neural network model formed from a plurality of layer blocks, including an encoder layer block which precedes a plurality of decoder layer blocks, includes: means for associating a resolution value with each of the plurality of layer blocks; means for processing, via the encoder layer block, a respective layer block input including a down-sampled layer block output; means for processing, via the decoder layer blocks, a respective layer block input including an up-sampled layer block output and a layer block output of a previous layer block associated with a prior resolution value which precedes the respective decoder layer block; and means for generating the respective layer block output by summing or concatenating the processed layer block inputs.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: November 2, 2021
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jianming Liang, Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh
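The decoder-block wiring described in patent 11164067 above, where an up-sampled output from the block below is combined with an earlier block's output at the same resolution by summing or concatenating, is sketched below. The 2D case, channel sizes, and activation are assumptions for illustration.

```python
# Hedged sketch of a decoder block that sums or concatenates its inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBlock(nn.Module):
    def __init__(self, ch, mode="sum"):
        super().__init__()
        self.mode = mode
        in_ch = ch if mode == "sum" else 2 * ch
        self.conv = nn.Conv2d(in_ch, ch, 3, padding=1)

    def forward(self, below, same_resolution_skip):
        up = F.interpolate(below, scale_factor=2, mode="bilinear",
                           align_corners=False)          # up-sampled block output
        merged = (up + same_resolution_skip if self.mode == "sum"
                  else torch.cat([up, same_resolution_skip], dim=1))
        return F.relu(self.conv(merged))

if __name__ == "__main__":
    block = DecoderBlock(ch=16, mode="concat")
    below = torch.randn(1, 16, 8, 8)       # output of the deeper block
    skip = torch.randn(1, 16, 16, 16)      # earlier block at the target resolution
    print(block(below, skip).shape)        # -> torch.Size([1, 16, 16, 16])
```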
  • Publication number: 20210326653
    Abstract: Described herein are means for generation of self-taught generic models, named Models Genesis, without requiring any manual labeling, in which the Models Genesis are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for learning general-purpose image representations by recovering original sub-volumes of 3D input images from transformed 3D images. Such a system operates by cropping a sub-volume from each 3D input image; performing image transformations upon each of the sub-volumes cropped from the 3D input images to generate transformed sub-volumes; and training an encoder-decoder architecture with skip connections to learn a common image representation by restoring the original sub-volumes cropped from the 3D input images from the transformed sub-volumes generated via the image transformations.
    Type: Application
    Filed: April 7, 2021
    Publication date: October 21, 2021
    Inventors: Zongwei Zhou, Vatsal Sodha, Jiaxuan Pang, Jianming Liang
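The self-supervised recipe in publication 20210326653 above, which crops a sub-volume, transforms it, and trains an encoder-decoder with skip connections to restore the original, is sketched below. The tiny 3D network and the single noise transform are assumptions, not the published Models Genesis transformations.

```python
# Hedged sketch of crop -> transform -> restore pretraining on 3D volumes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoderDecoder3D(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Conv3d(1, ch, 3, padding=1)
        self.dec = nn.Conv3d(ch + 1, 1, 3, padding=1)   # +1 channel for the skip

    def forward(self, x):
        e = F.relu(self.enc(x))
        return self.dec(torch.cat([e, x], dim=1))       # skip connection from input

def crop_subvolume(vol, size=32):
    d, h, w = (torch.randint(0, s - size + 1, (1,)).item() for s in vol.shape[1:])
    return vol[:, d:d + size, h:h + size, w:w + size]

def transform(sub):
    return sub + 0.1 * torch.randn_like(sub)            # stand-in for the image transformations

model = TinyEncoderDecoder3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
volume = torch.rand(1, 64, 64, 64)                      # one unlabeled 3D image
original = crop_subvolume(volume).unsqueeze(0)          # ground truth to restore
restored = model(transform(original))
loss = F.mse_loss(restored, original)                   # restoration objective
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```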
  • Publication number: 20210265043
    Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
    Type: Application
    Filed: February 19, 2021
    Publication date: August 26, 2021
    Inventors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Jianming Liang
  • Patent number: 11100685
    Abstract: Detecting a pulmonary embolism (PE) in an image dataset of a blood vessel involves obtaining a volume of interest (VOI) in the blood vessel, generating a plurality of PE candidates within the VOI, generating a set of voxels for each PE candidate, estimating for each PE candidate an orientation of the blood vessel that contains the PE candidate, given the set of voxels for the PE candidate, and generating a visualization of the blood vessel that contains the PE candidate using the estimated orientation of the blood vessel that contains the PE candidate.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: August 24, 2021
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jianming Liang, Nima Tajbakhsh, Jaeyul Shin
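One step in patent 11100685 above is estimating the orientation of the vessel containing a PE candidate from that candidate's voxels. A common way to do this is a principal-axis fit over the voxel coordinates, sketched below; PCA via SVD is an assumption for illustration, not necessarily the patented estimator.

```python
# Hedged sketch: vessel-orientation estimate from a candidate's voxel cloud.
import numpy as np

def estimate_vessel_orientation(voxels):
    """voxels: (N, 3) array of candidate voxel coordinates (z, y, x)."""
    coords = voxels - voxels.mean(axis=0)
    # Principal axis of the voxel cloud serves as the vessel direction estimate.
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    direction = vt[0]
    return direction / np.linalg.norm(direction)

if __name__ == "__main__":
    # Synthetic elongated candidate, roughly aligned with the x axis.
    rng = np.random.default_rng(0)
    voxels = np.column_stack([rng.normal(0, 1, 200),
                              rng.normal(0, 1, 200),
                              rng.normal(0, 10, 200)])
    print(estimate_vessel_orientation(voxels))   # ~ [0, 0, +/-1]
```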
  • Publication number: 20210150710
    Abstract: Not only is annotating medical images tedious and time consuming, but it also demands costly, specialty-oriented expertise, which is not easily accessible. To address this challenge, a new self-supervised framework is introduced: TransVW (transferable visual words), exploiting the prowess of transfer learning with convolutional neural networks and the unsupervised nature of visual word extraction with bags of visual words, resulting in an annotation-efficient solution to medical image analysis. TransVW was evaluated using NIH ChestX-ray14 to demonstrate its annotation efficiency. When compared with training from scratch and ImageNet-based transfer learning, TransVW reduces the annotation efforts by 75% and 12%, respectively, in addition to significantly accelerating the convergence speed. More importantly, TransVW sets new records: achieving the best average AUC on all 14 diseases, the best individual AUC scores on 10 diseases, and the second best individual AUC scores on 3 diseases.
    Type: Application
    Filed: November 15, 2020
    Publication date: May 20, 2021
    Inventors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Jianming Liang
  • Publication number: 20210091975
    Abstract: A method for control of a soft generic routing encapsulation (GRE) tunnel based on client activity includes: receiving a data packet from a first external device; storing an identifier associated with the first external device in a client table and a corresponding timestamp associated with receipt of the data packet; creating a soft GRE tunnel between a local interface of the computing device and a remote gateway; updating the client table, wherein updating the client table includes adding a new identifier and corresponding timestamp associated with additional external devices upon receipt of respective data packets, and updating the timestamp corresponding to the respective identifier upon receipt of an additional packet from an additional external device; and halting a GRE health-check process associated with the soft GRE tunnel once a predetermined period of time has elapsed since the timestamp corresponding to each identifier stored in the client table.
    Type: Application
    Filed: December 21, 2017
    Publication date: March 25, 2021
    Inventors: Jianxiang CHEN, Jianming LIANG
  • Patent number: 10956785
    Abstract: Methods, systems, and media for selecting candidates for annotation for use in training classifiers are provided. In some embodiments, the method comprises: identifying, for a trained Convolutional Neural Network (CNN), a group of candidate training samples, wherein each candidate training sample includes a plurality of patches; for each patch of the plurality of patches, determining a plurality of probabilities, each probability being a probability that the patch corresponds to a label of a plurality of labels; identifying a subset of the patches in the plurality of patches; for each patch in the subset of the patches, calculating a metric that indicates a variance of the probabilities assigned to each patch; selecting a subset of the candidate training samples based on the metric; labeling candidate training samples in the subset of the candidate training samples by querying an external source; and re-training the CNN using the labeled candidate training samples.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: March 23, 2021
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jianming Liang, Zongwei Zhou, Jae Shin
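The candidate-selection idea in patent 10956785 above, where a trained CNN assigns class probabilities to each candidate's patches and the candidates with the most variable predictions are sent for labeling, is sketched below. The toy CNN and the exact variance metric are assumptions, not the patented criteria.

```python
# Hedged sketch of variance-based selection of candidates for annotation.
import torch
import torch.nn as nn

def patch_probabilities(cnn, patches):
    """patches: (P, C, H, W). Returns (P, n_labels) softmax probabilities."""
    with torch.no_grad():
        return torch.softmax(cnn(patches), dim=1)

def candidate_score(probs):
    # Variance of the predicted probabilities across a candidate's patches;
    # high variance suggests the CNN is inconsistent, so labeling helps most.
    return probs.var(dim=0).mean().item()

def select_for_annotation(cnn, candidates, k=2):
    scores = [candidate_score(patch_probabilities(cnn, patches))
              for patches in candidates]
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return order[:k]       # indices of candidates to send to the external labeler

if __name__ == "__main__":
    cnn = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 3))
    candidates = [torch.randn(8, 1, 16, 16) for _ in range(5)]   # 5 candidates, 8 patches each
    print(select_for_annotation(cnn, candidates))
```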
  • Patent number: 10861151
    Abstract: Mechanisms for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy are provided. In some embodiments, the mechanisms can include a quality monitoring system that uses a first trained classifier to monitor image frames from a colonoscopic video to determine which image frames are informative frames and which image frames are non-informative frames. The informative image frames can be passed to an automatic polyp detection system that uses a second trained classifier to localize and identify whether a polyp or any other suitable object is present in one or more of the informative image frames.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: December 8, 2020
    Assignee: THE ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Jianming Liang, Nima Tajbakhsh
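The two-stage pipeline in patent 10861151 above, where a first classifier keeps only informative colonoscopy frames and a second classifier then localizes polyps in them, is sketched below. Both models are stand-in callables; the real classifiers and thresholds are not shown.

```python
# Hedged sketch of the quality-monitoring + polyp-detection pipeline.
from typing import Callable, List, Tuple

def process_video(frames: List, is_informative: Callable, detect_polyps: Callable
                  ) -> List[Tuple[int, list]]:
    """Returns (frame index, detections) for informative frames with findings."""
    results = []
    for idx, frame in enumerate(frames):
        if not is_informative(frame):       # quality-monitoring classifier
            continue                        # blurry/occluded frames are dropped
        detections = detect_polyps(frame)   # polyp localization classifier
        if detections:
            results.append((idx, detections))
    return results

if __name__ == "__main__":
    frames = list(range(6))                                  # stand-in "frames"
    informative = lambda f: f % 2 == 0                       # pretend even frames are clear
    detector = lambda f: [("polyp", (10, 20, 30, 40))] if f == 4 else []
    print(process_video(frames, informative, detector))      # -> [(4, [('polyp', ...)])]
```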
  • Publication number: 20200380695
    Abstract: Methods, systems, and media for segmenting images are provided. In some embodiments, the method comprises: generating an aggregate U-Net comprised of a plurality of U-Nets, wherein each U-Net in the plurality of U-Nets has a different depth, wherein each U-Net is comprised of a plurality of nodes Xi,j, wherein i indicates a down-sampling layer of the U-Net, and wherein j indicates a convolution layer of the U-Net; training the aggregate U-Net by: for each training sample in a group of training samples, calculating, for each node in the plurality of nodes Xi,j, a feature map xi,j, wherein xi,j is based on a convolution operation performed on a down-sampling of an output from Xi-1,j when j=0, and wherein xi,j is based on a convolution operation performed on an up-sampling operation of an output from Xi+1,j-1 when j>0; and predicting a segmentation of a test image using the trained aggregate U-Net.
    Type: Application
    Filed: May 28, 2020
    Publication date: December 3, 2020
    Inventors: Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang