Patents by Inventor Shaohua Kevin Zhou

Shaohua Kevin Zhou has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10096107
    Abstract: Intelligent image parsing for anatomical landmark and/or organ detection and/or segmentation is provided. A state space of an artificial agent is specified for discrete portions of a test image. A set of actions is determined, each specifying a possible change in a parametric space with respect to the test image. A reward system is established based on applying each action of the set of actions and based on at least one target state. The artificial agent learns an optimal action-value function approximator specifying the behavior of the artificial agent to maximize a cumulative future reward value of the reward system. The behavior of the artificial agent is a sequence of actions moving the agent towards at least one target state. The learned artificial agent is applied to a test image to automatically parse image content.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: October 9, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Florin Cristian Ghesu, Bogdan Georgescu, Dominik Neumann, Tommaso Mansi, Dorin Comaniciu, Wen Liu, Shaohua Kevin Zhou
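
The entry above describes an artificial agent whose discrete actions move it through an image toward a target anatomical landmark, guided by a learned action-value (Q) function. Below is a minimal sketch of that search loop on a toy volume; the patch-based state, the one-voxel action set, the distance-based reward, and the tiny Q-network are illustrative assumptions, not the patented method.

```python
# Sketch: deep-Q landmark agent on a toy 3D volume (illustrative assumptions only).
import numpy as np
import torch
import torch.nn as nn

ACTIONS = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])  # +/-1 per axis

class QNet(nn.Module):
    """Action-value approximator: local intensity patch -> one Q-value per action."""
    def __init__(self, patch=5, n_actions=6):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(patch**3, 64),
                                 nn.ReLU(), nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

def patch_at(vol, pos, k=2):
    """Extract a (2k+1)^3 patch centred at pos (pos is kept away from the border)."""
    x, y, z = pos
    return vol[x-k:x+k+1, y-k:y+k+1, z-k:z+k+1]

def run_episode(vol, target, qnet, eps=0.1, steps=50):
    """Epsilon-greedy walk; reward = decrease in distance to the target landmark."""
    pos, total_reward = np.array(vol.shape) // 2, 0.0
    for _ in range(steps):
        state = torch.tensor(patch_at(vol, pos), dtype=torch.float32)[None]
        a = np.random.randint(len(ACTIONS)) if np.random.rand() < eps else int(qnet(state).argmax())
        new_pos = np.clip(pos + ACTIONS[a], 2, np.array(vol.shape) - 3)
        total_reward += np.linalg.norm(pos - target) - np.linalg.norm(new_pos - target)
        pos = new_pos
        # A full DQN would store (state, action, reward, next state) transitions here
        # and periodically fit qnet to the Bellman targets.
    return pos, total_reward

vol = np.random.rand(32, 32, 32).astype(np.float32)
print(run_episode(vol, target=np.array([20, 12, 25]), qnet=QNet()))
```
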
  • Publication number: 20180271460
    Abstract: A computer-implemented method for providing a multi-modality visualization of a patient includes receiving one or more image datasets. Each image dataset corresponds to a distinct image modality. The image datasets are segmented into a plurality of anatomical objects. A list of clinical tasks associated with displaying the one or more image datasets is received. A machine learning model is used to determine visualization parameters for each anatomical object based on the list of clinical tasks. Then, a synthetic display of the image datasets is created by presenting each anatomical object according to its corresponding visualization parameters.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Bernhard Geiger, Shaohua Kevin Zhou, Carol L. Novak, Daguang Xu, David Liu
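
The abstract above hinges on a learned mapping from (anatomical object, clinical task) to visualization parameters. The sketch below shows one plausible realization with a multi-output regressor; the object and task names, the [opacity, window, level] targets, and the random-forest model are invented for illustration.

```python
# Sketch: learn a mapping from (anatomical object, clinical task) to visualization
# parameters. The training examples below are invented, not values from the application.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

objects = ["liver", "aorta", "rib", "liver", "aorta", "rib"]
tasks = ["tumor_review"] * 3 + ["stent_planning"] * 3
params = np.array([[0.9, 150, 60], [0.3, 600, 200], [0.1, 1800, 400],
                   [0.4, 150, 60], [0.9, 600, 200], [0.2, 1800, 400]])  # opacity, window, level

obj_idx = {o: i for i, o in enumerate(sorted(set(objects)))}
task_idx = {t: i for i, t in enumerate(sorted(set(tasks)))}

def encode(obj, task):
    """One-hot encode an (object, task) pair."""
    v = np.zeros(len(obj_idx) + len(task_idx))
    v[obj_idx[obj]] = 1.0
    v[len(obj_idx) + task_idx[task]] = 1.0
    return v

X = np.array([encode(o, t) for o, t in zip(objects, tasks)])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, params)

# Predicted per-object parameters for a new display request drive the synthetic display.
print(model.predict([encode("aorta", "stent_planning")]))
```
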
  • Publication number: 20180267127
    Abstract: A medical imaging phantom is three-dimensionally printed. In one specific approach, three-dimensional printing allows for any number of variations in phantoms. A library of different phantoms, different inserts, different textures, different densities, different organs, different pathologies, different sizes, different shapes, and/or other differences allows for defining a specific phantom as needed. The defined phantom is then printed for calibration or other use in medical imaging.
    Type: Application
    Filed: February 23, 2015
    Publication date: September 20, 2018
    Inventors: Bernhard Geiger, Shaohua Kevin Zhou
  • Patent number: 10079071
    Abstract: A method and apparatus for whole body bone removal and vasculature visualization in medical image data, such as computed tomography angiography (CTA) scans, is disclosed. Bone structures are segmented in a 3D medical image, resulting in a bone mask of the 3D medical image. Vessel structures are segmented in the 3D medical image, resulting in a vessel mask of the 3D medical image. The bone mask and the vessel mask are refined by fusing information from the bone mask and the vessel mask. Bone voxels are removed from the 3D medical image using the refined bone mask, in order to generate a visualization of the vessel structures in the 3D medical image.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 18, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Nathan Lay, David Liu, Shaohua Kevin Zhou, Bernhard Geiger, Li Zhang, Vincent Ordy, Daguang Xu, Chris Schwemmer, Philipp Wolber, Noha Youssry El-Zehiry
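
The refinement-and-removal step described above can be pictured with a few lines of array code: reconcile the bone and vessel masks where they overlap, then blank out the refined bone voxels before rendering. The probability maps, the 0.5 thresholds, and the -1000 HU background value in this sketch are stand-ins, not details from the patent.

```python
# Sketch: fuse bone and vessel masks, then remove bone voxels before visualization.
# The probability maps and thresholds are stand-ins for the segmentation outputs.
import numpy as np

def fuse_and_remove(volume, bone_prob, vessel_prob, thr=0.5):
    bone_mask = bone_prob > thr
    vessel_mask = vessel_prob > thr
    # Fusion: where both detectors fire, trust the stronger response so that
    # vessels running close to bone are not carved out of the visualization.
    overlap = bone_mask & vessel_mask
    bone_mask = np.where(overlap, bone_prob > vessel_prob, bone_mask)
    vessel_mask = np.where(overlap, vessel_prob >= bone_prob, vessel_mask)
    # Bone removal: set refined bone voxels to a background value (approx. air in HU).
    vis = volume.copy()
    vis[bone_mask] = -1000
    return vis, vessel_mask

ct = np.random.uniform(-1000, 1500, size=(64, 64, 64)).astype(np.float32)
bone_p = np.random.rand(*ct.shape)
vessel_p = np.random.rand(*ct.shape)
vis, vessels = fuse_and_remove(ct, bone_p, vessel_p)
print(vis.shape, int(vessels.sum()))
```
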
  • Publication number: 20180260951
    Abstract: A method and apparatus for automated vertebra localization and identification in 3D computed tomography (CT) volumes is disclosed. Initial vertebra locations in a 3D CT volume of a patient are predicted for a plurality of vertebrae corresponding to a plurality of vertebra labels using a trained deep image-to-image network (DI2IN). The initial vertebra locations for the plurality of vertebrae predicted using the DI2IN are refined using a trained recurrent neural network, resulting in an updated set of vertebra locations for the plurality of vertebrae corresponding to the plurality of vertebra labels. Final vertebra locations in the 3D CT volume for the plurality of vertebrae corresponding to the plurality of vertebra labels are determined by refining the updated set of vertebra locations using a trained shape-basis deep neural network.
    Type: Application
    Filed: February 2, 2018
    Publication date: September 13, 2018
    Inventors: Dong Yang, Tao Xiong, Daguang Xu, Shaohua Kevin Zhou, Mingqing Chen, Zhoubing Xu, Dorin Comaniciu, Jin-hyeong Park
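
The last stage above constrains the detected vertebra locations with a shape-basis model. The sketch below shows the classical linear version of that idea: project noisy detections onto a low-rank basis (mean shape plus a few modes) by least squares. The synthetic spine data and the PCA basis are assumptions; the application uses a learned shape-basis deep network for this step.

```python
# Sketch: refine noisy vertebra locations with a low-rank shape basis (linear stand-in
# for the shape-basis deep network). Training shapes and noise here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_vertebrae, n_modes = 24, 5

# Build a toy shape model from synthetic training spines (n_samples x (24*3) coordinates).
train = rng.normal(size=(200, n_vertebrae * 3)).cumsum(axis=1)
mean_shape = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_shape, full_matrices=False)
basis = vt[:n_modes].T                      # (72, n_modes) principal shape modes

# Noisy detections from the earlier stages (flattened x, y, z per vertebra).
detected = mean_shape + basis @ rng.normal(size=n_modes) + rng.normal(0, 0.3, mean_shape.shape)

# Least-squares projection onto the shape basis = refined, anatomically plausible spine.
coeffs, *_ = np.linalg.lstsq(basis, detected - mean_shape, rcond=None)
refined = (mean_shape + basis @ coeffs).reshape(n_vertebrae, 3)
print(refined.shape)
```
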
  • Publication number: 20180260957
    Abstract: A method and apparatus for automated liver segmentation in a 3D medical image of a patient is disclosed. A 3D medical image, such as a 3D computed tomography (CT) volume, of a patient is received. The 3D medical image of the patient is input to a trained deep image-to-image network. The trained deep image-to-image network is trained in an adversarial network together with a discriminative network that distinguishes between predicted liver segmentation masks, generated by the deep image-to-image network from input training volumes, and ground truth liver segmentation masks. A liver segmentation mask defining a segmented liver region in the 3D medical image of the patient is generated using the trained deep image-to-image network.
    Type: Application
    Filed: January 23, 2018
    Publication date: September 13, 2018
    Inventors: Dong Yang, Daguang Xu, Shaohua Kevin Zhou, Bogdan Georgescu, Mingqing Chen, Dorin Comaniciu
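
A compact sketch of the adversarial training scheme described above: a segmentation network proposes masks, a discriminator learns to tell predicted masks from ground-truth masks, and the segmenter is updated with a voxel-wise loss plus a term for fooling the discriminator. The toy 2D tensors, layer sizes, and the 0.1 loss weight are assumptions, not the application's architecture.

```python
# Sketch: adversarial training of a segmentation network (toy 2D tensors stand in
# for 3D CT volumes; architectures and loss weights are illustrative).
import torch
import torch.nn as nn

seg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())       # image -> mask
disc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(8 * 16 * 16, 1))          # mask -> real/fake logit
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

image = torch.rand(4, 1, 32, 32)
gt_mask = (torch.rand(4, 1, 32, 32) > 0.5).float()

for _ in range(10):
    # Discriminator step: ground-truth masks -> 1, predicted masks -> 0.
    pred = seg(image).detach()
    d_loss = bce(disc(gt_mask), torch.ones(4, 1)) + bce(disc(pred), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Segmenter step: voxel-wise loss plus "fool the discriminator" adversarial term.
    pred = seg(image)
    s_loss = nn.functional.binary_cross_entropy(pred, gt_mask) \
             + 0.1 * bce(disc(pred), torch.ones(4, 1))
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()

print(float(s_loss))
```
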
  • Publication number: 20180225823
    Abstract: Methods and apparatus for automated medical image analysis using deep learning networks are disclosed. In a method of automatically performing a medical image analysis task on a medical image of a patient, a medical image of a patient is received. The medical image is input to a trained deep neural network. An output model that provides a result of a target medical image analysis task on the input medical image is automatically estimated using the trained deep neural network. The trained deep neural network is trained in one of a discriminative adversarial network or a deep image-to-image dual inverse network.
    Type: Application
    Filed: January 11, 2018
    Publication date: August 9, 2018
    Inventors: Shaohua Kevin Zhou, Mingqing Chen, Daguang Xu, Zhoubing Xu, Dong Yang
  • Publication number: 20180225822
    Abstract: Systems and methods are provided for performing medical imaging analysis. Input medical imaging data is received for performing a particular one of a plurality of medical imaging analyses. An output that provides a result of the particular medical imaging analysis on the input medical imaging data is generated using a neural network trained to perform the plurality of medical imaging analyses. The neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses. The generated output is outputted for performing the particular medical imaging analysis.
    Type: Application
    Filed: January 9, 2018
    Publication date: August 9, 2018
    Inventors: Shaohua Kevin Zhou, Mingqing Chen, Daguang Xu, Zhoubing Xu, Shun Miao, Dong Yang, He Zhang
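
One way to read "learning weights for one analysis using weights from another" is a shared trunk with per-task heads, where a new task is warm-started from a related task before fine-tuning. The sketch below illustrates that pattern; the task names, layer sizes, and warm-start rule are assumptions, not the claimed training procedure.

```python
# Sketch: one network, several medical-imaging analyses. A shared trunk feeds
# per-task heads, and a new task's head is warm-started from a related task's weights.
# Tasks, sizes, and the warm-start rule are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, tasks=("liver_seg", "lesion_detect")):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.heads = nn.ModuleDict({t: nn.Linear(8, 2) for t in tasks})

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

net = MultiTaskNet()
# Warm-start a new analysis from an existing one before fine-tuning.
net.heads["vessel_seg"] = nn.Linear(8, 2)
net.heads["vessel_seg"].load_state_dict(net.heads["liver_seg"].state_dict())

x = torch.rand(2, 1, 16, 16, 16)
print(net(x, "vessel_seg").shape)  # torch.Size([2, 2])
```
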
  • Publication number: 20180214105
    Abstract: For breast cancer detection with an x-ray scanner, a cascade of multiple classifiers is trained or used. One or more of the classifiers uses a deep-learnt network trained on non-x-ray data, at least initially, to extract features. Alternatively or additionally, one or more of the classifiers is trained using classification of patches rather than pixels and/or classification with regression to create additional cancer-positive partial samples.
    Type: Application
    Filed: January 31, 2017
    Publication date: August 2, 2018
    Inventors: Yaron Anavi, Atilla Peter Kiraly, David Liu, Shaohua Kevin Zhou, Zhoubing Xu, Dorin Comaniciu
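
The cascade idea above is easy to sketch: a cheap first-stage classifier passes only promising patches on to a richer second-stage classifier. In the sketch below, generic synthetic features and off-the-shelf models stand in for the deep-learnt features and the patch/regression training described in the abstract.

```python
# Sketch: two-stage cascade over candidate patches. Stage 1 is cheap and rejects
# most negatives; stage 2 (standing in for the deep-feature classifier) sees the rest.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_cheap = rng.normal(size=(500, 8))          # cheap per-patch features
X_rich = rng.normal(size=(500, 64))          # richer (e.g. deep) per-patch features
y = (rng.random(500) > 0.9).astype(int)      # sparse positives, as in screening

stage1 = LogisticRegression(max_iter=1000).fit(X_cheap, y)
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_rich, y)

def cascade_predict(x_cheap, x_rich, gate_thr=0.05):
    keep = stage1.predict_proba(x_cheap)[:, 1] > gate_thr    # permissive first gate
    scores = np.zeros(len(x_cheap))
    if keep.any():
        scores[keep] = stage2.predict_proba(x_rich[keep])[:, 1]
    return scores

print(cascade_predict(X_cheap[:20], X_rich[:20]))
```
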
  • Patent number: 10037603
    Abstract: A method and apparatus for whole body bone removal and vasculature visualization in medical image data, such as computed tomography angiography (CTA) scans, is disclosed. Bone structures are segmented in a 3D medical image, resulting in a bone mask of the 3D medical image. Vessel structures are segmented in the 3D medical image, resulting in a vessel mask of the 3D medical image. The bone mask and the vessel mask are refined by fusing information from the bone mask and the vessel mask. Bone voxels are removed from the 3D medical image using the refined bone mask, in order to generate a visualization of the vessel structures in the 3D medical image.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: July 31, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Nathan Lay, David Liu, Shaohua Kevin Zhou, Bernhard Geiger, Li Zhang, Vincent Ordy, Daguang Xu, Chris Schwemmer, Philipp Wolber, Noha Youssry El-Zehiry
  • Publication number: 20180168536
    Abstract: An anatomical structure is detected in a volume of ultrasound data by identifying the anatomical structure in another volume of ultrasound data and generating an image of the anatomical structure and an anatomical landmark. A group of images is generated of the original volume and compared to the image of the other volume. An image of the group of images is selected as including the anatomical structure based on the comparison.
    Type: Application
    Filed: July 2, 2015
    Publication date: June 21, 2018
    Inventors: Jin-hyeong Park, Michal Sofka, Shaohua Kevin Zhou
  • Patent number: 9984493
    Abstract: A method and apparatus for volume rendering based 3D image filtering and real-time cinematic volume rendering is disclosed. A set of 2D projection images of the 3D volume is generated using cinematic volume rendering. A reconstructed 3D volume is generated from the set of 2D projection images using an inverse linear volumetric ray tracing operator. The reconstructed 3D volume inherits noise suppression and structure enhancement from the projection images generated using cinematic rendering, and is thus non-linearly filtered. Real-time volume rendering can be performed on the reconstructed 3D volume using volumetric ray tracing, and each projected image of the reconstructed 3D volume is an approximation of a cinematic rendered image of the original volume.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: May 29, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Shaohua Kevin Zhou, Klaus Engel
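
The reconstruction step above treats rendering as an operator to be inverted. The toy below makes the linear case concrete: build an explicit projection operator A for a tiny volume, "render" projections p = A v, and recover a volume by least squares. Real cinematic rendering is non-linear and the volumes are far larger, so this is only an illustration of the inverse-operator idea.

```python
# Sketch: reconstruct a volume from projection images with an inverse linear operator.
# A tiny 6^3 volume keeps the explicit projection matrix small; this is illustrative only.
import numpy as np

n = 6
vol = np.random.rand(n, n, n)

def projection_matrix(axis, n):
    """Linear operator that sums the volume along one axis (one parallel 'ray' per pixel)."""
    A = np.zeros((n * n, n ** 3))
    idx = np.arange(n ** 3).reshape(n, n, n)
    rays = np.moveaxis(idx, axis, 0).reshape(n, n * n)   # voxel indices hit by each ray
    for ray in range(n * n):
        A[ray, rays[:, ray]] = 1.0
    return A

# "Render" projections along all three axes, then invert the stacked operator.
A = np.vstack([projection_matrix(ax, n) for ax in range(3)])
projections = A @ vol.ravel()
recon = np.linalg.lstsq(A, projections, rcond=None)[0].reshape(n, n, n)
print(np.abs(recon.sum() - vol.sum()) < 1e-6)   # total intensity is preserved
```
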
  • Publication number: 20180137393
    Abstract: A method of classifying signals using non-linear sparse representations includes learning a plurality of non-linear dictionaries based on a plurality of training signals, each respective non-linear dictionary corresponding to one of a plurality of class labels. A non-linear sparse coding process is performed on a test signal for each of the plurality of non-linear dictionaries, thereby associating each of the plurality of non-linear dictionaries with a distinct sparse coding of the test signal. For each respective non-linear dictionary included in the plurality of non-linear dictionaries, a reconstruction error is measured using the test signal and the distinct sparse coding corresponding to the respective non-linear dictionary. A particular non-linear dictionary corresponding to a smallest value for the reconstruction error among the plurality of non-linear dictionaries is identified, and a class label corresponding to the particular non-linear dictionary is assigned to the test signal.
    Type: Application
    Filed: June 4, 2015
    Publication date: May 17, 2018
    Inventors: Hien Nguyen, Shaohua Kevin Zhou
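
The classification rule above, sparse-code the test signal against each class's dictionary and pick the class with the smallest reconstruction error, is sketched below with linear dictionaries and orthogonal matching pursuit. The publication's dictionaries are non-linear, so treat this as the linear baseline of the same decision rule.

```python
# Sketch: classify a signal by per-class sparse coding and reconstruction error.
# Linear dictionaries + orthogonal matching pursuit stand in for non-linear dictionaries.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
dim, atoms, sparsity = 32, 20, 3

# One random, column-normalized dictionary per class label.
dictionaries = {c: rng.normal(size=(dim, atoms)) for c in ("healthy", "lesion")}
for D in dictionaries.values():
    D /= np.linalg.norm(D, axis=0)

# Test signal generated from the "lesion" dictionary, so that class should win.
signal = dictionaries["lesion"][:, [2, 7, 11]] @ np.array([1.5, -0.8, 0.6])

errors = {}
for label, D in dictionaries.items():
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False).fit(D, signal)
    code = omp.coef_                               # sparse coding against this dictionary
    errors[label] = np.linalg.norm(signal - D @ code)

print(min(errors, key=errors.get), errors)   # expected: 'lesion'
```
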
  • Publication number: 20180116620
    Abstract: A method and apparatus for deep learning based automatic bone removal in medical images, such as computed tomography angiography (CTA) volumes, is disclosed. Bone structures are segmented in a 3D medical image of a patient by classifying voxels of the 3D medical image as bone or non-bone voxels using a deep neural network trained for bone segmentation. A 3D visualization of non-bone structures in the 3D medical image is generated by removing voxels classified as bone voxels from a 3D visualization of the 3D medical image.
    Type: Application
    Filed: October 9, 2017
    Publication date: May 3, 2018
    Inventors: Mingqing Chen, Tae Soo Kim, Jan Kretschmer, Sebastian Seifert, Shaohua Kevin Zhou, Max Schöbinger, David Liu, Zhoubing Xu, Sasa Grbic, He Zhang
  • Publication number: 20180096478
    Abstract: Embodiments can provide a method for atlas-based contouring, comprising constructing a relevant atlas database; selecting one or more optimal atlases from the relevant atlas database; propagating one or more atlases; fusing the one or more atlases; and assessing the quality of one or more propagated contours.
    Type: Application
    Filed: June 16, 2017
    Publication date: April 5, 2018
    Inventors: Li Zhang, Shanhui Sun, Shaohua Kevin Zhou, Daguang Xu, Zhoubing Xu, Tommaso Mansi, Ying Chi, Yefeng Zheng, Pavlo Dyban, Nora Hünemohr, Julian Krebs, David Liu
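
A small sketch of the select-propagate-fuse-assess pipeline named above: rank atlases by similarity to the target image, fuse the propagated contours by majority vote, and score each contour against the fused result. Registration and propagation are omitted (the atlases are assumed already aligned), and the normalized cross-correlation similarity is an assumption.

```python
# Sketch: pick the most relevant atlases by image similarity and fuse their
# propagated contours by majority vote. All data is synthetic and pre-aligned.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((32, 32, 32))
atlas_images = [rng.random((32, 32, 32)) for _ in range(5)]
atlas_masks = [(rng.random((32, 32, 32)) > 0.7) for _ in range(5)]

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Atlas selection: keep the top-k atlases most similar to the target image.
k = 3
order = np.argsort([ncc(target, img) for img in atlas_images])[::-1][:k]

# Label fusion: majority vote over the selected (already propagated) contours.
fused = np.mean([atlas_masks[i] for i in order], axis=0) > 0.5

# Simple quality assessment per propagated contour: agreement with the fused result.
for i in order:
    dice = 2 * (atlas_masks[i] & fused).sum() / (atlas_masks[i].sum() + fused.sum())
    print(f"atlas {i}: Dice vs. fused contour = {dice:.2f}")
```
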
  • Publication number: 20180089530
    Abstract: A method and system for anatomical landmark detection in medical images using deep neural networks is disclosed. For each of a plurality of image patches centered at a respective one of a plurality of voxels in the medical image, a subset of voxels within the image patch is input to a trained deep neural network based on a predetermined sampling pattern. A location of a target landmark in the medical image is detected using the trained deep neural network based on the subset of voxels input to the trained deep neural network from each of the plurality of image patches.
    Type: Application
    Filed: May 11, 2015
    Publication date: March 29, 2018
    Inventors: David Liu, Bogdan Georgescu, Yefeng Zheng, Hien Nguyen, Shaohua Kevin Zhou, Vivek Kumar Singh, Dorin Comaniciu
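
The key input reduction above is feeding the network only a predetermined subset of voxels from each patch. The sketch below uses a simple strided sampling pattern and a tiny classifier; both the pattern and the network are illustrative choices, not the ones in the application.

```python
# Sketch: feed a deep network only a predetermined subset of voxels from each patch.
# The strided sampling pattern and the tiny classifier below are illustrative choices.
import numpy as np
import torch
import torch.nn as nn

PATCH = 17                                        # full patch side length
pattern = np.array([(x, y, z)                     # predetermined sampling pattern:
                    for x in range(0, PATCH, 4)   # every 4th voxel per axis
                    for y in range(0, PATCH, 4)
                    for z in range(0, PATCH, 4)])

def sample_patch(volume, center):
    """Return only the sampled voxel values of the patch centred at 'center'."""
    cx, cy, cz = center
    half = PATCH // 2
    coords = pattern + np.array([cx - half, cy - half, cz - half])
    return volume[coords[:, 0], coords[:, 1], coords[:, 2]]

classifier = nn.Sequential(nn.Linear(len(pattern), 32), nn.ReLU(), nn.Linear(32, 2))

vol = np.random.rand(64, 64, 64).astype(np.float32)
features = torch.tensor(sample_patch(vol, center=(30, 30, 30)))[None]
print(classifier(features).softmax(dim=1))   # landmark vs. background score
```
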
  • Publication number: 20180075597
    Abstract: Tissue is characterized using machine-learnt classification. The prognosis, diagnosis or evidence in the form of a similar case is found by machine-learnt classification from features extracted from frames of medical scan data. The texture features for tissue characterization may be learned using deep learning. Using the features, therapy response is predicted from magnetic resonance functional measures before and after treatment in one example. Using the machine-learnt classification, the number of measures after treatment may be reduced as compared to RECIST for predicting the outcome of the treatment, allowing earlier termination or alteration of the therapy.
    Type: Application
    Filed: September 9, 2016
    Publication date: March 15, 2018
    Inventors: Shaohua Kevin Zhou, David Liu, Berthold Kiefer, Atilla Peter Kiraly, Benjamin L. Odry, Robert Grimm, Li Pan, Ihab Kamel
  • Publication number: 20180042564
    Abstract: A method and apparatus for medical image synthesis is disclosed, which synthesizes a target medical image based on a source medical image. The method can be used for synthesizing a high dose computed tomography (CT) image or a high kV CT image from a low dose CT image or a low kV CT image. A plurality of image patches are extracted from a source medical image. A synthesized target medical image is generated from the source medical image by calculating voxel values in the synthesized target medical image based on the image patches extracted from the source medical image using a machine learning based probabilistic model.
    Type: Application
    Filed: April 28, 2015
    Publication date: February 15, 2018
    Inventor: Shaohua Kevin Zhou
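
The synthesis scheme above predicts each target voxel from a patch of the source image. Below, a random-forest regressor on simulated low/high-dose 2D data stands in for the machine-learning-based probabilistic model; the patch size and the noise model are assumptions.

```python
# Sketch: synthesize a target image voxel-by-voxel from source-image patches.
# A random-forest regressor on toy 2D data stands in for the probabilistic model;
# the low/high-dose pairing is simulated with additive noise.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
high = rng.random((64, 64))                      # pretend "high dose" image
low = high + rng.normal(0, 0.2, high.shape)      # simulated "low dose" counterpart
k = 2                                            # patch radius -> 5x5 patches

def patches_and_centers(src, tgt):
    """Pair each interior source patch with the corresponding target center voxel."""
    X, y = [], []
    for i in range(k, src.shape[0] - k):
        for j in range(k, src.shape[1] - k):
            X.append(src[i - k:i + k + 1, j - k:j + k + 1].ravel())
            y.append(tgt[i, j])
    return np.array(X), np.array(y)

X, y = patches_and_centers(low, high)
model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# Synthesis: predict every interior voxel of the target from its source patch.
synth = low.copy()
synth[k:-k, k:-k] = model.predict(X).reshape(60, 60)
print(float(np.abs(synth - high)[k:-k, k:-k].mean()))   # error vs. the real high-dose image
```
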
  • Patent number: 9892361
    Abstract: A method and apparatus for cross-domain medical image synthesis is disclosed. A source domain medical image is received. A synthesized target domain medical image is generated using a trained contextual deep network (CtDN) to predict intensities of voxels of the target domain medical image based on intensities and contextual information of voxels in the source domain medical image. The contextual deep network is a multi-layer network in which hidden nodes of at least one layer of the contextual deep network are modeled as products of intensity responses and contextual responses.
    Type: Grant
    Filed: January 21, 2016
    Date of Patent: February 13, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Hien Nguyen, Shaohua Kevin Zhou
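
The distinctive piece above is the hidden unit built as a product of an intensity response and a contextual response. The sketch below writes such a layer directly; the sigmoid responses, layer sizes, and the coordinate-based context are assumptions, not the patented CtDN.

```python
# Sketch: a hidden layer whose units are products of an intensity response and a
# contextual response, as the abstract describes. Sizes and sigmoids are assumptions.
import torch
import torch.nn as nn

class ContextualLayer(nn.Module):
    def __init__(self, intensity_dim, context_dim, hidden_dim):
        super().__init__()
        self.w_int = nn.Linear(intensity_dim, hidden_dim)
        self.w_ctx = nn.Linear(context_dim, hidden_dim)

    def forward(self, intensity, context):
        # h_k = sigma(W_int x)_k * sigma(W_ctx c)_k  (element-wise product of responses)
        return torch.sigmoid(self.w_int(intensity)) * torch.sigmoid(self.w_ctx(context))

layer = ContextualLayer(intensity_dim=27, context_dim=3, hidden_dim=64)
readout = nn.Linear(64, 1)   # predicts the target-domain voxel intensity

patch = torch.rand(8, 27)    # 3x3x3 source-domain intensities around each voxel
coords = torch.rand(8, 3)    # contextual information, e.g. normalized voxel position
print(readout(layer(patch, coords)).shape)   # torch.Size([8, 1])
```
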
  • Publication number: 20180005083
    Abstract: Intelligent multi-scale image parsing determines the optimal size of each observation by an artificial agent at a given point in time while searching for the anatomical landmark. The artificial agent begins searching image data with a coarse field-of-view and iteratively decreases the field-of-view to locate the anatomical landmark. After searching at a coarse field-of-view, the artificial agent increases resolution to a finer field-of-view to analyze context and appearance factors to converge on the anatomical landmark. The artificial agent determines applicable context and appearance factors at each effective scale.
    Type: Application
    Filed: August 29, 2017
    Publication date: January 4, 2018
    Inventors: Bogdan Georgescu, Florin Cristian Ghesu, Yefeng Zheng, Dominik Neumann, Tommaso Mansi, Dorin Comaniciu, Wen Liu, Shaohua Kevin Zhou
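
The coarse-to-fine loop described above can be sketched with an image pyramid: search the whole image at a coarse scale, then repeatedly halve the downsampling factor and search only a neighborhood of the previous estimate. Here a simple intensity score on a synthetic bright blob stands in for the learned agent; the scales and window sizes are assumptions.

```python
# Sketch: coarse-to-fine landmark search. A plain intensity score and a synthetic
# Gaussian "landmark" stand in for the learned agent; scales/windows are illustrative.
import numpy as np

rng = np.random.default_rng(0)
ys, xs = np.mgrid[0:256, 0:256]
true_pos = (170, 90)
image = 0.01 * rng.random((256, 256)) \
        + 3.0 * np.exp(-((ys - true_pos[0])**2 + (xs - true_pos[1])**2) / (2 * 4.0**2))

def downsample(img, f):
    """Block-average the image by factor f (the coarse field-of-view)."""
    h, w = img.shape[0] // f, img.shape[1] // f
    return img[:h * f, :w * f].reshape(h, f, w, f).mean(axis=(1, 3))

estimate = None
for factor in (16, 8, 4, 2, 1):                 # coarse field-of-view first, then finer
    small = downsample(image, factor)
    if estimate is None:                        # coarse pass: search everywhere
        rows, cols = np.arange(small.shape[0]), np.arange(small.shape[1])
    else:                                       # finer pass: search near the last estimate
        cy, cx = estimate[0] * 2, estimate[1] * 2
        rows = np.arange(max(cy - 4, 0), min(cy + 5, small.shape[0]))
        cols = np.arange(max(cx - 4, 0), min(cx + 5, small.shape[1]))
    window = small[np.ix_(rows, cols)]
    r, c = np.unravel_index(window.argmax(), window.shape)
    estimate = (rows[r], cols[c])

print("estimate:", estimate, "true:", true_pos)
```
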