Patents by Inventor Shaohua Kevin Zhou

Shaohua Kevin Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200219259
    Abstract: Methods and apparatus for automated medical image analysis using deep learning networks are disclosed. In a method of automatically performing a medical image analysis task on a medical image of a patient, a medical image of a patient is received. The medical image is input to a trained deep neural network. An output model that provides a result of a target medical image analysis task on the input medical image is automatically estimated using the trained deep neural network. The trained deep neural network is trained in one of a discriminative adversarial network or a deep image-to-image dual inverse network.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Inventors: Shaohua Kevin Zhou, Mingqing Chen, Daguang Xu, Zhoubing Xu, Dong Yang
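As a rough illustration of the adversarial training scheme mentioned in this abstract, the sketch below pairs a small image-to-image network with a discriminator in PyTorch. The tiny architectures, the loss weighting, and the random tensors standing in for medical images are assumptions for demonstration only, not the patented design.

```python
# Minimal sketch of adversarial training for an image-to-image network.
# Architectures, loss weights, and synthetic data are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(            # image -> predicted output map
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(        # map -> probability that it is a ground-truth map
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

image, gt_mask = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32).round()

for _ in range(10):
    # Discriminator: distinguish ground-truth maps from generated maps.
    pred = generator(image).detach()
    d_loss = bce(discriminator(gt_mask), torch.ones(4, 1)) + \
             bce(discriminator(pred), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: match the ground truth while fooling the discriminator.
    pred = generator(image)
    g_loss = bce(pred, gt_mask) + 0.1 * bce(discriminator(pred), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```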
  • Patent number: 10643105
    Abstract: Intelligent multi-scale image parsing determines the optimal size of each observation by an artificial agent at a given point in time while searching for the anatomical landmark. The artificial agent begins searching image data with a coarse field-of-view and iteratively decreases the field-of-view to locate the anatomical landmark. After searching at a coarse field-of-view, the artificial agent increases resolution to a finer field-of-view to analyze context and appearance factors to converge on the anatomical landmark. The artificial agent determines applicable context and appearance factors at each effective scale.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: May 5, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Bogdan Georgescu, Florin Cristian Ghesu, Yefeng Zheng, Dominik Neumann, Tommaso Mansi, Dorin Comaniciu, Wen Liu, Shaohua Kevin Zhou
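The coarse-to-fine search described in this abstract can be sketched, in a much simplified form, as a greedy multi-scale hill climb. The score function standing in for the learned agent policy, the scale schedule, and the toy image are assumptions.

```python
# Coarse-to-fine landmark search: start at a coarse scale, refine at finer scales.
# The greedy score-based stepping stands in for the patent's learned agent policy.
import numpy as np
from scipy.ndimage import zoom

def search_landmark(image, score_fn, scales=(4, 2, 1), steps=20):
    """Greedy multi-scale search; score_fn(img, y, x) is an assumed stand-in
    for the agent's learned value of being at (y, x)."""
    pos = np.array(image.shape) // 2          # start at the image center
    for s in scales:                          # coarse field-of-view first
        coarse = zoom(image, 1.0 / s, order=1)
        p = pos // s
        for _ in range(steps):
            # Evaluate the 4-neighborhood (and staying put), move toward the best score.
            cands = [p + d for d in ([1, 0], [-1, 0], [0, 1], [0, -1], [0, 0])]
            cands = [np.clip(c, 0, np.array(coarse.shape) - 1) for c in cands]
            p = max(cands, key=lambda c: score_fn(coarse, *c))
        pos = p * s                           # carry the estimate to the next, finer scale
    return pos

# Toy usage: the "agent score" is just image intensity, peaking at the true landmark.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 40) ** 2 + (xx - 24) ** 2) / 50.0)
found = search_landmark(img, lambda im, y, x: im[int(y), int(x)])   # approx. [40, 24]
```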
  • Patent number: 10636141
    Abstract: Methods and apparatus for automated medical image analysis using deep learning networks are disclosed. In a method of automatically performing a medical image analysis task on a medical image of a patient, a medical image of a patient is received. The medical image is input to a trained deep neural network. An output model that provides a result of a target medical image analysis task on the input medical image is automatically estimated using the trained deep neural network. The trained deep neural network is trained in one of a discriminative adversarial network or a deep image-to-image dual inverse network.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: April 28, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Shaohua Kevin Zhou, Mingqing Chen, Daguang Xu, Zhoubing Xu, Dong Yang
  • Patent number: 10627470
    Abstract: A learning-based magnetic resonance fingerprinting (MRF) reconstruction method for reconstructing an MR image of a tissue space in an MR scan subject for a particular MR sequence is disclosed. The method involves using a machine-learning algorithm that has been trained to generate a set of tissue parameters from acquired MR signal evolution without using a dictionary or dictionary matching.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: April 21, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Xiao Chen, Boris Mailhe, Qiu Wang, Shaohua Kevin Zhou, Yefeng Zheng, Xiaoguang Lu, Puneet Sharma, Benjamin L. Odry, Bogdan Georgescu, Mariappan S. Nadar
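A minimal sketch of the dictionary-free idea: train a regressor that maps a simulated signal evolution directly to tissue parameters. The bi-exponential signal model and the small scikit-learn MLP are illustrative assumptions, not the patented sequence model or network.

```python
# Dictionary-free MRF reconstruction idea: regress tissue parameters (here T1, T2)
# directly from the signal evolution, with no dictionary matching.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 100)                     # acquisition time points (s)

def simulate(t1, t2):                               # toy signal evolution per voxel
    return (1 - np.exp(-t / t1)) * np.exp(-t / t2)

params = rng.uniform([0.3, 0.02], [2.0, 0.3], size=(2000, 2))   # (T1, T2) pairs
signals = np.stack([simulate(*p) for p in params])
signals += 0.01 * rng.standard_normal(signals.shape)            # acquisition noise

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(signals, params)                          # learn signal -> parameters

test = simulate(1.2, 0.08) + 0.01 * rng.standard_normal(t.size)
print(model.predict(test[None, :]))                 # estimated (T1, T2) for the test voxel
```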
  • Patent number: 10607342
    Abstract: Embodiments can provide a method for atlas-based contouring, comprising constructing a relevant atlas database; selecting one or more optimal atlases from the relevant atlas database; propagating one or more atlases; fusing the one or more atlases; and assessing the quality of one or more propagated contours.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: March 31, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Li Zhang, Shanhui Sun, Shaohua Kevin Zhou, Daguang Xu, Zhoubing Xu, Tommaso Mansi, Ying Chi, Yefeng Zheng, Pavlo Dyban, Nora Hünemohr, Julian Krebs, David Liu
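The four pipeline stages in this abstract can be sketched with toy binary masks as below; atlas selection by normalized cross-correlation, the identity placeholder standing in for deformable registration, majority-vote fusion, and a Dice-based agreement score are all assumptions.

```python
# Atlas-based contouring pipeline sketch: select atlases, propagate, fuse, assess.
import numpy as np

def select_atlases(target_img, atlas_imgs, k=3):
    # Rank atlases by normalized cross-correlation with the target image.
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    order = sorted(range(len(atlas_imgs)),
                   key=lambda i: ncc(target_img, atlas_imgs[i]), reverse=True)
    return order[:k]

def propagate(atlas_mask, target_img):
    return atlas_mask           # placeholder for deformable registration + contour warping

def fuse(masks):
    return (np.mean(masks, axis=0) >= 0.5).astype(np.uint8)   # majority-vote label fusion

def assess(masks, fused):
    # Quality proxy: mean Dice agreement between each propagated contour and the fusion.
    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)
    return float(np.mean([dice(m, fused) for m in masks]))

rng = np.random.default_rng(1)
target = rng.random((32, 32))
atlas_imgs = [rng.random((32, 32)) for _ in range(5)]
atlas_masks = [(rng.random((32, 32)) > 0.5).astype(np.uint8) for _ in range(5)]
chosen = select_atlases(target, atlas_imgs)
warped = [propagate(atlas_masks[i], target) for i in chosen]
contour = fuse(warped)
quality = assess(warped, contour)
```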
  • Patent number: 10600185
    Abstract: A method and apparatus for automated liver segmentation in a 3D medical image of a patient is disclosed. A 3D medical image, such as a 3D computed tomography (CT) volume, of a patient is received. The 3D medical image of the patient is input to a trained deep image-to-image network. The trained deep image-to-image network is trained in an adversarial network together with a discriminative network that distinguishes between predicted liver segmentation masks generated by the deep image-to-image network from input training volumes and ground truth liver segmentation masks. A liver segmentation mask defining a segmented liver region in the 3D medical image of the patient is generated using the trained deep image-to-image network.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: March 24, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Dong Yang, Daguang Xu, Shaohua Kevin Zhou, Bogdan Georgescu, Mingqing Chen, Dorin Comaniciu
  • Publication number: 20200082525
    Abstract: Systems and methods are provided for automatic detection and quantification of traumatic bleeding. Image data is acquired using a full body dual energy CT scanner. A machine-learned network detects one or more bleeding areas on a bleeding map from the dual energy CT scan image data. A visualization is generated from the bleeding map. The predicted bleeding areas are quantified, and a risk value is generated. The visualization and risk value are presented to an operator.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 12, 2020
    Inventors: Zhoubing Xu, Sasa Grbic, Shaohua Kevin Zhou, Philipp Hölzer, Grzegorz Sosa
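The quantification and risk steps might look roughly like the following: threshold the predicted bleeding map, measure each connected region's volume from the voxel spacing, and map total volume to a risk value. The threshold, spacing, and volume-to-risk mapping are assumptions.

```python
# Quantify predicted bleeding areas and derive a simple risk value.
import numpy as np
from scipy import ndimage

def quantify_bleeding(bleeding_map, spacing_mm=(1.0, 1.0, 1.0), threshold=0.5):
    voxel_ml = np.prod(spacing_mm) / 1000.0               # mm^3 per voxel -> ml
    labels, n = ndimage.label(bleeding_map > threshold)   # connected bleeding regions
    volumes_ml = ndimage.sum(np.ones_like(bleeding_map), labels,
                             index=range(1, n + 1)) * voxel_ml
    total = float(np.sum(volumes_ml))
    risk = min(1.0, total / 500.0)                        # assumed: 500 ml caps the score
    return volumes_ml, total, risk

prob = np.random.default_rng(2).random((64, 64, 64))      # stand-in for the network output
vols, total_ml, risk = quantify_bleeding(prob, spacing_mm=(0.8, 0.8, 1.5))
```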
  • Patent number: 10582907
    Abstract: A method and apparatus for deep learning based automatic bone removal in medical images, such as computed tomography angiography (CTA) volumes, is disclosed. Bone structures are segmented in a 3D medical image of a patient by classifying voxels of the 3D medical image as bone or non-bone voxels using a deep neural network trained for bone segmentation. A 3D visualization of non-bone structures in the 3D medical image is generated by removing voxels classified as bone voxels from a 3D visualization of the 3D medical image.
    Type: Grant
    Filed: October 9, 2017
    Date of Patent: March 10, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Mingqing Chen, Tae Soo Kim, Jan Kretschmer, Sebastian Seifert, Shaohua Kevin Zhou, Max Schöbinger, David Liu, Zhoubing Xu, Sasa Grbic, He Zhang
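The downstream masking step can be sketched as below: once voxels are classified as bone, they are replaced with a background value before rendering, here a simple maximum-intensity projection. The fill value and the MIP rendering are illustrative choices.

```python
# Remove voxels classified as bone before visualizing the remaining structures.
import numpy as np

def remove_bone(cta_volume_hu, bone_probability, threshold=0.5, fill_hu=-1024):
    non_bone = cta_volume_hu.copy()
    non_bone[bone_probability >= threshold] = fill_hu    # suppress bone voxels
    return non_bone

def mip(volume_hu, axis=1):
    return volume_hu.max(axis=axis)                      # simple MIP "visualization"

rng = np.random.default_rng(3)
cta = rng.integers(-1024, 2000, size=(64, 64, 64)).astype(np.float32)
bone_prob = rng.random((64, 64, 64))                     # stand-in for the network output
projection = mip(remove_bone(cta, bone_prob))
```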
  • Patent number: 10565707
    Abstract: A computer-implemented method for identifying features in 3D image volumes includes dividing a 3D volume into a plurality of 2D slices and applying a pre-trained 2D multi-channel global convolutional network (MC-GCN) to the plurality of 2D slices until convergence. Following convergence of the 2D MC-GCN, a plurality of parameters are extracted from a first feature encoder network in the 2D MC-GCN. The plurality of parameters are transferred to a second feature encoder network in a 3D Anisotropic Hybrid Network (AH-Net). The 3D AH-Net is applied to the 3D volume to yield a probability map. Then, using the probability map, one or more of (a) coordinates of the objects with non-maximum suppression or (b) a label map of objects of interest in the 3D volume are generated.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: February 18, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Siqi Liu, Daguang Xu, Shaohua Kevin Zhou, Thomas Mertelmeier, Julia Wicklein, Anna Jerebko, Sasa Grbic, Olivier Pauly, Dorin Comaniciu
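One common way to transfer 2D encoder parameters into a 3D network, sketched below, is to copy each pretrained 2D kernel into a 3D kernel with a through-plane extent of one, preserving the learned in-plane filters. This expansion rule is an assumption for illustration and not necessarily the AH-Net procedure.

```python
# Transfer pretrained 2D convolution weights into a 3D (anisotropic) convolution
# by giving each kernel a through-plane extent of 1.
import torch
import torch.nn as nn

conv2d = nn.Conv2d(16, 32, kernel_size=3, padding=1)        # pretrained 2D layer (stand-in)
conv3d = nn.Conv3d(16, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1))

with torch.no_grad():
    # (out, in, k, k) -> (out, in, 1, k, k): reuse the learned in-plane filters.
    conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
    conv3d.bias.copy_(conv2d.bias)

volume = torch.rand(1, 16, 8, 64, 64)                        # (N, C, D, H, W)
features = conv3d(volume)                                    # same in-plane response per slice
```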
  • Patent number: 10521911
    Abstract: A method of reviewing neural scans includes receiving at least one landmark corresponding to an anatomical region. A plurality of images of tissue including the anatomical region is received and a neural network configured to differentiate between healthy tissue and unhealthy tissue within the anatomical region is generated. The neural network is generated by a machine learning process configured to receive the plurality of images of tissue and generate a plurality of weighting factors configured to differentiate between healthy tissue and unhealthy tissue. At least one patient image of tissue including the anatomical region is received and a determination is made by the neural network whether the at least one patient image of tissue includes healthy or unhealthy tissue.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: December 31, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Benjamin L. Odry, Hasan Ertan Cetingul, Mariappan S. Nadar, Puneet Sharma, Shaohua Kevin Zhou, Dorin Comaniciu
  • Patent number: 10489673
    Abstract: A method and apparatus for detecting vascular landmarks in a 3D image volume, such as a CT volume, is disclosed. One or more guide slices are detected in a 3D image volume. A set of landmark candidates for multiple target vascular landmarks are then detected based on the guide slices. A node potential value for each landmark candidate is generated based on an error value determined using spatial histogram-based error regression, and edge potential values for pairs of landmark candidates are generated based on a bifurcation analysis of the image volume using vessel tracing. The optimal landmark candidate for each target landmark is then determined using a Markov random field model based on the node potential values and the edge potential values.
    Type: Grant
    Filed: March 22, 2010
    Date of Patent: November 26, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: David Liu, Shaohua Kevin Zhou, Sascha Seifert, Dominik Bernhardt, Dorin Comaniciu
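For the final optimization step, a Viterbi-style dynamic program over candidate sets gives the flavor of maximizing node and edge potentials, assuming a chain-structured model. The toy potentials and the chain topology are assumptions; in the patent they come from spatial-histogram error regression and vessel tracing over a Markov random field.

```python
# Pick one candidate per landmark by maximizing node + edge potentials over a chain.
import numpy as np

def best_candidates(node_pot, edge_pot):
    """node_pot[i][c]: score of candidate c for landmark i.
    edge_pot[i][c, d]: compatibility of candidate c (landmark i) with d (landmark i+1)."""
    n = len(node_pot)
    score = [np.asarray(node_pot[0], dtype=float)]
    back = []
    for i in range(1, n):
        total = score[-1][:, None] + edge_pot[i - 1] + np.asarray(node_pot[i])[None, :]
        back.append(np.argmax(total, axis=0))     # best predecessor per candidate
        score.append(np.max(total, axis=0))
    pick = [int(np.argmax(score[-1]))]
    for i in range(n - 2, -1, -1):                # backtrack to recover the full selection
        pick.append(int(back[i][pick[-1]]))
    return pick[::-1]

# Toy example: three landmarks with 4, 3, and 4 candidates each.
rng = np.random.default_rng(4)
nodes = [rng.random(4), rng.random(3), rng.random(4)]
edges = [rng.random((4, 3)), rng.random((3, 4))]
selection = best_candidates(nodes, edges)         # one candidate index per landmark
```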
  • Patent number: 10482600
    Abstract: Methods and apparatus for cross-domain medical image analysis and cross-domain medical image synthesis using deep image-to-image networks and adversarial networks are disclosed. In a method for cross-domain medical image analysis a medical image of a patient from a first domain is received. The medical image is input to a first encoder of a cross-domain deep image-to-image network (DI2IN) that includes the first encoder for the first domain, a second encoder for a second domain, and a decoder. The first encoder converts the medical image to a feature map and the decoder generates an output image that provides a result of a medical image analysis task from the feature map.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: November 19, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Shaohua Kevin Zhou, Shun Miao, Rui Liao, Ahmet Tuysuzoglu, Yefeng Zheng
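A rough PyTorch sketch of the architecture described here: one encoder per imaging domain feeding a shared decoder. The layer sizes and the simple two-level encoder/decoder are placeholders, not the patented network.

```python
# Cross-domain image-to-image network sketch: one encoder per domain, shared decoder.
import torch
import torch.nn as nn

class CrossDomainDI2IN(nn.Module):
    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.encoder_a = encoder()               # encoder for the first imaging domain
        self.encoder_b = encoder()               # encoder for the second imaging domain
        self.decoder = nn.Sequential(            # shared across domains
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, image, domain):
        features = self.encoder_a(image) if domain == "a" else self.encoder_b(image)
        return self.decoder(features)            # analysis result as an output image

net = CrossDomainDI2IN()
result = net(torch.rand(1, 1, 64, 64), domain="a")   # (1, 1, 64, 64) output map
```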
  • Patent number: 10481234
    Abstract: A medical imaging phantom is three-dimensionally printed. In one specific approach, three-dimensional printing allows for any number of variations in phantoms. A library of different phantoms, different inserts, different textures, different densities, different organs, different pathologies, different sizes, different shapes, and/or other differences allows for defining a specific phantom as needed. The defined phantom is then printed for calibration or other use in medical imaging.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: November 19, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Bernhard Geiger, Shaohua Kevin Zhou
  • Publication number: 20190343418
    Abstract: A method of visualizing spinal nerves includes receiving a 3D image volume depicting a spinal cord and a plurality of spinal nerves. For each spinal nerve, a 2D spinal nerve image is generated by defining a surface within the 3D volume comprising the spinal nerve. The surface is curved such that it passes through the spinal cord while encompassing the spinal nerve. Then, the 2D spinal nerve images are generated based on voxels on the surface included in the 3D volume. A visualization of the 2D spinal nerve images is presented in a graphical user interface that allows all of the 2D spinal nerve images to be viewed simultaneously.
    Type: Application
    Filed: July 12, 2019
    Publication date: November 14, 2019
    Inventors: Atilla Peter Kiraly, David Liu, Shaohua Kevin Zhou, Dorin Comaniciu, Gunnar Krüger
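The voxel-sampling step can be sketched with scipy's map_coordinates: define a parametric curved surface through the volume and interpolate the volume at those 3D coordinates to form the 2D image. The sinusoidally bent surface below is a toy stand-in for a surface that actually follows the spinal cord.

```python
# Sample a 3D volume on a curved surface to obtain a 2D image.
import numpy as np
from scipy.ndimage import map_coordinates

volume = np.random.default_rng(5).random((64, 128, 128))   # (z, y, x) image volume

u = np.arange(128)                      # along the cord (here: the y axis)
v = np.arange(64)                       # across the surface (here: the z axis)
vv, uu = np.meshgrid(v, u, indexing="ij")

# Curved surface: x varies smoothly with the position along the cord.
x_surface = 64 + 20 * np.sin(uu / 128.0 * np.pi)
coords = np.stack([vv.astype(float), uu.astype(float), x_surface])   # (3, 64, 128)

nerve_image = map_coordinates(volume, coords, order=1)      # 2D curved-planar reformat
```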
  • Patent number: 10467759
    Abstract: A computer-implemented method for generating contours of anatomy based on user click points includes a computer displaying an image comprising an anatomical structure and receiving a first user selection of a first click point at a first position on an outward facing edge of the anatomical structure. The computer applies a contour inference algorithm to generate an inferred contour around the outward facing edge based on the first position. Following generation of the inferred contour, the computer receives a second user selection of a second click point at a second position on the image. Then, the computer creates a visual indicator on a segment of the inferred contour between the first position and the second position as indicative of the user's confirmation of accuracy of the segment.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: November 5, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Shaohua Kevin Zhou, Daguang Xu, Jan Kretschmer, Han Xiao
  • Patent number: 10467495
    Abstract: A method and system for anatomical landmark detection in medical images using deep neural networks is disclosed. For each of a plurality of image patches centered at a respective one of a plurality of voxels in the medical image, a subset of voxels within the image patch is input to a trained deep neural network based on a predetermined sampling pattern. A location of a target landmark in the medical image is detected using the trained deep neural network based on the subset of voxels input to the trained deep neural network from each of the plurality of image patches.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: November 5, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: David Liu, Bogdan Georgescu, Yefeng Zheng, Hien Nguyen, Shaohua Kevin Zhou, Vivek Kumar Singh, Dorin Comaniciu
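The patch-subsampling idea might be sketched as extracting a patch around each voxel and keeping only the voxels selected by a predetermined pattern before they are fed to the network. The regular strided pattern below is a placeholder for whatever pattern the trained network expects.

```python
# Feed only a predetermined subset of each image patch's voxels to the network.
import numpy as np

def sample_patch(volume, center, size=16, stride=4):
    z, y, x = center
    h = size // 2
    patch = volume[z - h:z + h, y - h:y + h, x - h:x + h]
    # Predetermined sampling pattern: a regular stride over the patch.
    return patch[::stride, ::stride, ::stride].ravel()

volume = np.random.default_rng(6).random((64, 64, 64))
features = sample_patch(volume, center=(32, 32, 32))   # 64 values instead of 4096
# `features` would then be the input vector for the trained deep neural network.
```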
  • Patent number: 10430551
    Abstract: In scan data retrieval, a mesh is fit to surface data of a current patient, such as data from an optical or depth sensor. Meshes are also fit to medical scan data, such as fitting to skin surface segments of computed tomography data. The meshes or parameters derived from the meshes may be more efficiently compared to identify a previous patient with similar body shape and/or size. The scan configuration for that patient, or for that patient as altered to account for differences from the current patient, is used. In some embodiments, the parameter vector used for searching includes principal component analysis coefficients. In further embodiments, the principal component analysis coefficients may be projected to a more discriminative space using metric learning.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: October 1, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Jiangping Wang, Kai Ma, Vivek Singh, Mingqing Chen, Yao-Jen Chang, Shaohua Kevin Zhou, Terrence Chen, Andreas Krauss
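The shape-retrieval step can be sketched as fitting PCA to flattened mesh vertices from previous patients and retrieving the nearest previous patient in coefficient space. The random vertex data stands in for fitted meshes, and the metric-learning projection mentioned in the abstract is omitted.

```python
# Retrieve the previous patient whose body mesh is closest in PCA-coefficient space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
previous_meshes = rng.random((200, 500 * 3))     # 200 patients, 500 vertices each, flattened
current_mesh = rng.random((1, 500 * 3))          # mesh fit to the current patient's surface

pca = PCA(n_components=10).fit(previous_meshes)
prev_coeffs = pca.transform(previous_meshes)     # compact shape descriptors
curr_coeff = pca.transform(current_mesh)

nearest = int(np.argmin(np.linalg.norm(prev_coeffs - curr_coeff, axis=1)))
# Reuse (and, if needed, adapt) the scan configuration stored for patient `nearest`.
```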
  • Patent number: 10410093
    Abstract: A method of classifying signals using non-linear sparse representations includes learning a plurality of non-linear dictionaries based on a plurality of training signals, each respective non-linear dictionary corresponding to one of a plurality of class labels. A non-linear sparse coding process is performed on a test signal for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test signal. For each respective non-linear dictionary included in the plurality of non-linear dictionaries, a reconstruction error is measured using the test signal and the distinct sparse coding corresponding to the respective non-linear dictionary. A particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries is identified and a class label corresponding to the particular non-linear dictionary is assigned to the test signal.
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: September 10, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Hien Nguyen, Shaohua Kevin Zhou
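The classify-by-reconstruction-error idea can be sketched with per-class dictionaries as below; for brevity the sketch uses linear dictionary learning from scikit-learn rather than the non-linear (kernelized) dictionaries of the patent.

```python
# Classify a test signal by which per-class dictionary reconstructs it best.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(8)
train = {0: rng.random((100, 20)) + 1.0,          # class 0 training signals
         1: rng.random((100, 20)) - 1.0}          # class 1 training signals

dictionaries = {label: MiniBatchDictionaryLearning(n_components=8,
                                                   transform_algorithm="omp",
                                                   transform_n_nonzero_coefs=3,
                                                   random_state=0).fit(signals)
                for label, signals in train.items()}

test_signal = rng.random((1, 20)) + 1.0           # should be assigned class 0

errors = {}
for label, dico in dictionaries.items():
    code = dico.transform(test_signal)            # sparse coding against this dictionary
    reconstruction = code @ dico.components_
    errors[label] = float(np.linalg.norm(test_signal - reconstruction))

predicted = min(errors, key=errors.get)           # smallest reconstruction error wins
```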
  • Patent number: 10409235
    Abstract: The process of creating a 3D printer ready model of patient specific anatomy is automated. Instead of manual manipulation of the 3D mesh from imaging to create the 3D printer ready model, an automated manipulation is provided. The segmentation may be automated as well. In one approach, a transform between a predetermined mesh of anatomy and patient specific 3D mesh is calculated. The predetermined mesh has a corresponding 3D printer ready model. By applying the transform to the 3D printer ready model, the 3D printer ready model is altered to become specific to the patient. In addition, target manipulation that alters semantic parts of the anatomical structure may be included in 3D printing.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: September 10, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Shaohua Kevin Zhou, Bernhard Geiger
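The core step, estimating the transform that maps the predetermined mesh onto the patient-specific mesh and applying it to the printer-ready model, can be sketched with an affine least-squares fit on corresponding vertices. Known vertex correspondence and a purely affine transform are simplifying assumptions.

```python
# Estimate the template-to-patient transform from corresponding mesh vertices
# and apply it to the 3D printer ready model.
import numpy as np

rng = np.random.default_rng(9)
template_vertices = rng.random((500, 3))                       # predetermined anatomy mesh
true_affine = np.array([[1.1, 0.0, 0.0, 5.0],
                        [0.0, 0.9, 0.1, -2.0],
                        [0.0, 0.0, 1.2, 0.0]])
patient_vertices = template_vertices @ true_affine[:, :3].T + true_affine[:, 3]

# Least-squares affine fit: [template | 1] @ A_T ~= patient.
ones = np.ones((len(template_vertices), 1))
A_T, *_ = np.linalg.lstsq(np.hstack([template_vertices, ones]),
                          patient_vertices, rcond=None)

printer_model_vertices = rng.random((2000, 3))                 # printer-ready model (stand-in)
patient_specific_model = np.hstack([printer_model_vertices,
                                    np.ones((2000, 1))]) @ A_T  # transformed model vertices
```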
  • Publication number: 20190261945
    Abstract: For three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging, the three-dimensional segmentation is output by a machine-learnt multi-task generator. Rather than the brute force approach of training the generator from 2D ICE images to output a 2D segmentation, the generator is trained from 3D information, such as a sparse ICE volume assembled from the 2D ICE images. Where sufficient ground truth data is not available, computed tomography or magnetic resonance data may be used as the ground truth for the sample sparse ICE volumes. The generator is trained to output both the 3D segmentation and a complete volume (i.e., more voxels represented than in the sparse ICE volume). The 3D segmentation may be further used to project to 2D as an input with an ICE image to another network trained to output a 2D segmentation for the ICE image. Display of the 3D segmentation and/or 2D segmentation may guide ablation of tissue in the patient.
    Type: Application
    Filed: September 13, 2018
    Publication date: August 29, 2019
    Inventors: Gareth Funka-Lea, Haofu Liao, Shaohua Kevin Zhou, Yefeng Zheng, Yucheng Tang
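A rough PyTorch sketch of the multi-task generator: a shared 3D encoder with two decoder heads, one producing the 3D segmentation and one completing the sparse volume. The layer sizes and random input are placeholders, not the patented network.

```python
# Multi-task generator sketch: shared 3D encoder, two output heads.
import torch
import torch.nn as nn

class MultiTaskGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        def head(out_channels, activation):
            return nn.Sequential(
                nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose3d(8, out_channels, 2, stride=2), activation)
        self.segmentation_head = head(1, nn.Sigmoid())   # 3D segmentation mask
        self.completion_head = head(1, nn.Identity())    # densified volume

    def forward(self, sparse_ice_volume):
        features = self.encoder(sparse_ice_volume)
        return self.segmentation_head(features), self.completion_head(features)

sparse_volume = torch.rand(1, 1, 32, 32, 32)             # assembled from 2D ICE frames
segmentation_3d, complete_volume = MultiTaskGenerator()(sparse_volume)
```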