Patents by Inventor Pascal Ceccaldi

Pascal Ceccaldi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12112844
    Abstract: Systems and methods for performing a medical imaging analysis task for making a clinical decision are provided. One or more input medical images of a patient are received. A medical imaging analysis task is performed on the one or more input medical images using a machine learning based network. The machine learning based network generates a probability score associated with the medical imaging analysis task. An uncertainty measure associated with the probability score is determined. A clinical decision is made based on the probability score and the uncertainty measure.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: October 8, 2024
    Assignee: Siemens Healthineers AG
    Inventors: Eli Gibson, Bogdan Georgescu, Pascal Ceccaldi, Youngjin Yoo, Jyotipriya Das, Thomas Re, Eva Eibenberger, Andrei Chekkoury, Barbara Brehm, Thomas Flohr, Dorin Comaniciu, Pierre-Hugo Trigan
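
The abstract above pairs a task probability score with an uncertainty measure before a clinical decision is committed. The filing does not disclose a specific implementation; the sketch below is one hypothetical way to obtain both quantities with Monte Carlo dropout and to defer when uncertainty is high. The model, thresholds, and deferral rule are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: uncertainty-aware decision making with Monte Carlo dropout.
# The network, thresholds, and the deferral rule are assumptions for illustration.
import torch

def mc_dropout_predict(model: torch.nn.Module, image: torch.Tensor, n_samples: int = 20):
    """Run the classifier several times with dropout active to obtain a
    probability score and an uncertainty measure (predictive standard deviation)."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

def clinical_decision(prob: torch.Tensor, uncertainty: torch.Tensor,
                      prob_threshold: float = 0.5, max_uncertainty: float = 0.1) -> str:
    """Commit to a decision only when the uncertainty is acceptably low."""
    if uncertainty.item() > max_uncertainty:
        return "defer to radiologist"  # uncertainty too high to act on automatically
    return "positive finding" if prob.item() >= prob_threshold else "negative finding"
```
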
  • Patent number: 11861828
    Abstract: Systems and methods for quantifying a shift of an anatomical object of a patient are provided. A 3D medical image of an anatomical object of a patient is received. An initial location of landmarks on the anatomical object in the 3D medical image is determined using a first machine learning network. A 2D slice depicting the initial location of the landmarks is extracted from the 3D medical image. The initial location of the landmarks in the 2D slice is refined using a second machine learning network. A shift of the anatomical object is quantified based on the refined location of the landmarks in the 2D slice. The quantified shift of the anatomical object is output.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: January 2, 2024
    Assignee: Siemens Healthcare GmbH
    Inventors: Nguyen Nguyen, Youngjin Yoo, Pascal Ceccaldi, Eli Gibson, Andrei Chekkoury
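
The two-stage pipeline described above (coarse 3D landmark detection, 2D slice extraction, in-plane refinement, shift quantification) could look roughly like the following hypothetical sketch. The network interfaces, the axial-slice choice, and the use of Euclidean distance between two refined landmarks as the shift measure are all assumptions made for illustration.

```python
# Hypothetical two-stage landmark pipeline; model outputs and the shift metric
# (in-plane distance between two refined landmarks) are illustrative assumptions.
import numpy as np
import torch

def quantify_shift(volume: np.ndarray, coarse_net: torch.nn.Module,
                   refine_net: torch.nn.Module, voxel_spacing=(1.0, 1.0, 1.0)) -> float:
    vol = torch.from_numpy(volume).float()[None, None]   # (1, 1, D, H, W)

    # Stage 1: coarse 3D locations of two landmarks, as (z, y, x) voxel coordinates.
    with torch.no_grad():
        coarse = coarse_net(vol).view(2, 3)

    # Extract the axial 2D slice containing the mean landmark depth.
    z = int(coarse[:, 0].mean().round().item())
    slice_2d = vol[:, :, z]                               # (1, 1, H, W)

    # Stage 2: refine the in-plane (y, x) positions on that slice.
    with torch.no_grad():
        refined = refine_net(slice_2d).view(2, 2)

    # Quantify the shift as the physical in-plane distance between the landmarks.
    delta = (refined[0] - refined[1]).numpy() * np.array(voxel_spacing[1:])
    return float(np.linalg.norm(delta))
```
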
  • Patent number: 11783484
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: October 10, 2023
    Assignee: Siemens Healthcare GmbH
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
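
The key architectural idea above is a block that receives both image features and protocol meta-data features and passes both on, so that later blocks can keep adapting to the acquisition. A minimal hypothetical sketch follows; the FiLM-style scale-and-shift modulation and all layer sizes are assumptions, not the patented architecture.

```python
# Hypothetical sketch of a block that consumes and produces both an image-feature
# stream and a meta-data stream. FiLM-style modulation is an assumption here.
import torch
import torch.nn as nn

class ImageMetaBlock(nn.Module):
    def __init__(self, channels: int, meta_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.meta_mlp = nn.Sequential(nn.Linear(meta_dim, meta_dim), nn.ReLU())
        self.to_scale_shift = nn.Linear(meta_dim, 2 * channels)

    def forward(self, image_feat: torch.Tensor, meta_feat: torch.Tensor):
        meta_feat = self.meta_mlp(meta_feat)
        scale, shift = self.to_scale_shift(meta_feat).chunk(2, dim=-1)
        x = self.conv(image_feat)
        # Modulate image features with the protocol/scanner meta-data.
        x = x * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.relu(x), meta_feat   # both streams are passed to the next block

# Example: a 64-channel feature map conditioned on an 8-dimensional protocol vector.
block = ImageMetaBlock(channels=64, meta_dim=8)
feats, meta = block(torch.randn(1, 64, 128, 128), torch.randn(1, 8))
```

Because the block returns both streams, several such blocks can be chained, which is how the abstract describes preserving acquisition-relevant features through at least part of the network.
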
  • Patent number: 11783485
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: October 10, 2023
    Assignee: Siemens Healthcare GmbH
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
  • Patent number: 11776128
    Abstract: Systems and methods for automatic segmentation of lesions from a 3D input medical image are provided. A 3D input medical image depicting one or more lesions is received. The one or more lesions are segmented from one or more 2D slices extracted from the 3D input medical image using a trained 2D segmentation network. 2D features are extracted from results of the segmentation of the one or more lesions from the one or more 2D slices. The one or more lesions are segmented from a 3D patch extracted from the 3D input medical image using a trained 3D segmentation network. 3D features are extracted from results of the segmentation of the one or more lesions from the 3D patch. The extracted 2D features and the extracted 3D features are fused to generate final segmentation results. The final segmentation results are output.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: October 3, 2023
    Assignee: Siemens Healthcare GmbH
    Inventors: Youngjin Yoo, Pascal Ceccaldi, Eli Gibson
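
The abstract above fuses features extracted from a 2D slice-wise segmentation and a 3D patch-wise segmentation into a final result. A minimal hypothetical sketch of such a fusion step follows; the concatenation-plus-convolution fusion head and the tensor shapes are illustrative assumptions.

```python
# Hypothetical fusion of 2D slice-wise and 3D patch-wise segmentation features;
# the 1x1x1 convolutional fusion head is an illustrative assumption.
import torch
import torch.nn as nn

def fuse_2d_3d(seg_2d_feats: torch.Tensor, seg_3d_feats: torch.Tensor,
               fusion_head: nn.Module) -> torch.Tensor:
    """seg_2d_feats: (B, C, D, H, W) features stacked from the 2D network's
    per-slice results; seg_3d_feats: (B, C, D, H, W) features from the 3D
    network. Returns a fused lesion probability map of shape (B, 1, D, H, W)."""
    fused = torch.cat([seg_2d_feats, seg_3d_feats], dim=1)
    return torch.sigmoid(fusion_head(fused))

fusion_head = nn.Conv3d(in_channels=2 * 16, out_channels=1, kernel_size=1)
mask = fuse_2d_3d(torch.randn(1, 16, 32, 64, 64),
                  torch.randn(1, 16, 32, 64, 64), fusion_head)
```
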
  • Publication number: 20220293247
    Abstract: Systems and methods for performing a medical imaging analysis task for making a clinical decision are provided. One or more input medical images of a patient are received. A medical imaging analysis task is performed on the one or more input medical images using a machine learning based network. The machine learning based network generates a probability score associated with the medical imaging analysis task. An uncertainty measure associated with the probability score is determined. A clinical decision is made based on the probability score and the uncertainty measure.
    Type: Application
    Filed: March 12, 2021
    Publication date: September 15, 2022
    Inventors: Eli Gibson, Bogdan Georgescu, Pascal Ceccaldi, Youngjin Yoo, Jyotipriya Das, Thomas Re, Eva Eibenberger, Andrei Chekkoury, Barbara Brehm, Thomas Flohr, Dorin Comaniciu, Pierre-Hugo Trigan
  • Publication number: 20220189028
    Abstract: Systems and methods for automatic segmentation of lesions from a 3D input medical image are provided. A 3D input medical image depicting one or more lesions is received. The one or more lesions are segmented from one or more 2D slices extracted from the 3D input medical image using a trained 2D segmentation network. 2D features are extracted from results of the segmentation of the one or more lesions from the one or more 2D slices. The one or more lesions are segmented from a 3D patch extracted from the 3D input medical image using a trained 3D segmentation network. 3D features are extracted from results of the segmentation of the one or more lesions from the 3D patch. The extracted 2D features and the extracted 3D features are fused to generate final segmentation results. The final segmentation results are output.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Inventors: Youngjin Yoo, Pascal Ceccaldi, Eli Gibson
  • Publication number: 20220164959
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Application
    Filed: February 8, 2022
    Publication date: May 26, 2022
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
  • Publication number: 20220156938
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Application
    Filed: February 8, 2022
    Publication date: May 19, 2022
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
  • Patent number: 11288806
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
  • Patent number: 11282203
    Abstract: Method and system for image registration or image segmentation. The method includes receiving an image which is to be processed by a first machine-learning model to perform, for example, image registration or segmentation, and using a second machine-learning model to determine if the received image is of a quality suitable for the first machine-learning model to act upon.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: March 22, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Pascal Ceccaldi, Serkan Cimen, Peter Mountney
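
The entry above uses a second machine-learning model as a quality gate in front of the registration or segmentation model. A minimal hypothetical sketch follows; the sigmoid quality score, the 0.5 threshold, and the rejection behaviour are assumptions made for illustration.

```python
# Hypothetical quality gate: a second model decides whether the image is good
# enough for the first model to act on. The threshold value is an assumption.
import torch

def gated_inference(image: torch.Tensor, task_model: torch.nn.Module,
                    quality_model: torch.nn.Module, threshold: float = 0.5):
    with torch.no_grad():
        quality = torch.sigmoid(quality_model(image)).item()
    if quality < threshold:
        return None, quality      # image rejected; e.g. request a re-acquisition
    with torch.no_grad():
        return task_model(image), quality
```
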
  • Publication number: 20220067929
    Abstract: Systems and methods for quantifying a shift of an anatomical object of a patient are provided. A 3D medical image of an anatomical object of a patient is received. An initial location of landmarks on the anatomical object in the 3D medical image is determined using a first machine learning network. A 2D slice depicting the initial location of the landmarks is extracted from the 3D medical image. The initial location of the landmarks in the 2D slice is refined using a second machine learning network. A shift of the anatomical object is quantified based on the refined location of the landmarks in the 2D slice. The quantified shift of the anatomical object is output.
    Type: Application
    Filed: June 10, 2021
    Publication date: March 3, 2022
    Inventors: Nguyen Nguyen, Youngjin Yoo, Pascal Ceccaldi, Eli Gibson, Andrei Chekkoury
  • Patent number: 11263744
    Abstract: For saliency mapping, a machine-learned classifier is used to classify input data. A perturbation encoder is trained and/or applied for saliency mapping of the machine-learned classifier. The training and/or application (testing) of the perturbation encoder uses only a subset of the feature maps of the machine-learned classifier, such as different feature maps from different hidden layers in a multiscale approach. The subset used is selected based on gradients from back-projection. The training of the perturbation encoder may be unsupervised, such as using an entropy score, or semi-supervised, such as using the entropy score and the difference of a perturbation mask from a ground truth segmentation.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: March 1, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Youngjin Yoo, Pascal Ceccaldi, Eli Gibson, Mariappan S. Nadar
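
Two ingredients of the method above can be illustrated concretely: selecting a subset of a hidden layer's feature maps by back-propagated gradient magnitude, and an entropy score that can drive unsupervised training of a perturbation mask. The sketch below is a hypothetical illustration of those two pieces only; how the activations and gradients are captured (e.g., via forward/backward hooks) and how the mask is optimized are assumptions, not disclosed details.

```python
# Hypothetical sketch: (1) pick the k feature maps with the largest gradient
# magnitude at one hidden layer, (2) compute an entropy score of the classifier
# output that a perturbation mask could be trained to maximize.
import torch

def select_feature_maps(activations: torch.Tensor, gradients: torch.Tensor, k: int):
    """activations/gradients: (B, C, H, W) captured at one hidden layer.
    Returns the k channels with the largest mean gradient magnitude."""
    scores = gradients.abs().flatten(2).mean(dim=2).mean(dim=0)   # per-channel score
    top_k = torch.topk(scores, k).indices
    return activations[:, top_k], top_k

def entropy_score(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the classifier output; a perturbation that maximizes
    this entropy removes the evidence the classifier relied on."""
    probs = torch.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
```
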
  • Publication number: 20210174497
    Abstract: For saliency mapping, a machine-learned classifier is used to classify input data. A perturbation encoder is trained and/or applied for saliency mapping of the machine-learned classifier. The training and/or application (testing) of the perturbation encoder uses only a subset of the feature maps of the machine-learned classifier, such as different feature maps from different hidden layers in a multiscale approach. The subset used is selected based on gradients from back-projection. The training of the perturbation encoder may be unsupervised, such as using an entropy score, or semi-supervised, such as using the entropy score and the difference of a perturbation mask from a ground truth segmentation.
    Type: Application
    Filed: December 9, 2019
    Publication date: June 10, 2021
    Inventors: Youngjin Yoo, Pascal Ceccaldi, Eli Gibson, Mariappan S. Nadar
  • Patent number: 10997717
    Abstract: In a system and method for analyzing images, an input image is provided to a computer and is processed therein with a first deep learning model to generate an output result for the input image; a second deep learning model is applied to the input image to generate an output confidence score indicative of the reliability of any output result from the first deep learning model for that input image.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: May 4, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Pascal Ceccaldi, Peter Mountney, Daniel Toth, Serkan Cimen
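
Unlike the quality gate in the earlier entry, here the second model produces a confidence score for how reliable the first model's result is expected to be on this input. A minimal hypothetical sketch follows; the sigmoid score, the threshold, and the "flag for review" handling are assumptions made for illustration.

```python
# Hypothetical sketch: a second network scores the expected reliability of the
# first network's result for this input; the review threshold is an assumption.
import torch

def predict_with_confidence(image: torch.Tensor, analysis_model: torch.nn.Module,
                            confidence_model: torch.nn.Module,
                            min_confidence: float = 0.8) -> dict:
    with torch.no_grad():
        result = analysis_model(image)
        confidence = torch.sigmoid(confidence_model(image)).item()
    # Flag low-confidence results for human review instead of using them blindly.
    return {"result": result, "confidence": confidence,
            "reliable": confidence >= min_confidence}
```
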
  • Publication number: 20210097690
    Abstract: For medical imaging such as MRI, machine training is used to train a network for segmentation using both the imaging data and protocol data (e.g., meta-data). The network is trained to segment based, in part, on the configuration and/or scanner, not just the imaging data, allowing the trained network to adapt to the way each image is acquired. In one embodiment, the network architecture includes one or more blocks that receive both types of data as input and output both types of data, preserving relevant features for adaptation through at least part of the trained network.
    Type: Application
    Filed: May 1, 2020
    Publication date: April 1, 2021
    Inventors: Mahmoud Mostapha, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi, Youngjin Yoo
  • Publication number: 20210042930
    Abstract: Method and system for image registration or image segmentation. The method includes receiving an image which is to be processed by a first machine-learning model to perform, for example, image registration or segmentation, and using a second machine-learning model to determine if the received image is of a quality suitable for the first machine-learning model to act upon.
    Type: Application
    Filed: August 6, 2020
    Publication date: February 11, 2021
    Applicant: Siemens Healthcare GmbH
    Inventors: Pascal Ceccaldi, Serkan Cimen, Peter Mountney
  • Patent number: 10852379
    Abstract: For artifact reduction in a magnetic resonance imaging system, deep learning trains an image-to-image neural network to generate an image with reduced artifact from artifacted input MR data. For application, the image-to-image network may be applied in real time with a lower computational burden than typical post-processing methods. To handle a range of different imaging situations, the image-to-image network may (a) use an auxiliary map as an input together with the MR data from the patient, (b) use sequence metadata as a controller of the encoder of the image-to-image network, and/or (c) be trained to generate contrast-invariant features in the encoder using a discriminator that receives the encoder features.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: December 1, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Xiao Chen, Boris Mailhe, Benjamin L. Odry, Pascal Ceccaldi, Mariappan S. Nadar
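
Items (a) and (b) from the abstract above can be illustrated with a small hypothetical network: the auxiliary map enters as an extra input channel, and sequence meta-data gates the encoder features. The tiny encoder/decoder, the gating mechanism, and all dimensions below are assumptions for illustration, not the patented network.

```python
# Hypothetical sketch: auxiliary map as an extra input channel (a) and sequence
# meta-data controlling the encoder features (b). Sizes are illustrative only.
import torch
import torch.nn as nn

class ArtifactReductionNet(nn.Module):
    def __init__(self, meta_dim: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU())
        self.meta_gate = nn.Sequential(nn.Linear(meta_dim, 32), nn.Sigmoid())
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, mr_image, aux_map, sequence_meta):
        x = torch.cat([mr_image, aux_map], dim=1)    # (a) auxiliary map as input
        feats = self.encoder(x)
        gate = self.meta_gate(sequence_meta)[..., None, None]
        feats = feats * gate                         # (b) meta-data controls the encoder
        return self.decoder(feats)                   # image with reduced artifact

net = ArtifactReductionNet()
clean = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64), torch.randn(1, 6))
```

Item (c), contrast-invariant encoder features, would additionally require an adversarial discriminator on the encoder output during training, which is omitted from this sketch.
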
  • Patent number: 10832392
    Abstract: A method of training a computer system for use in determining a transformation between coordinate frames of image data representing an imaged subject. The method trains a learning agent, according to a machine learning algorithm, to determine a transformation between respective coordinate frames of a number of different views of an anatomical structure simulated using a 3D model. The views are images containing labels. The learning agent includes a domain classifier comprising a feature map generated by the learning agent during the training operation. The classifier is configured to generate a classification output indicating whether image data is synthesized or real image data. Training includes using unlabeled real image data to train the computer system to determine a transformation between a coordinate frame of a synthesized view of the imaged structure and a view of the structure within a real image.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: November 10, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Pascal Ceccaldi, Tanja Kurzendorfer, Tommaso Mansi, Peter Mountney, Sebastien Piat, Daniel Toth
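
The domain-classifier idea above is commonly realised with a gradient-reversal layer: the classifier learns to tell synthesized from real images while the shared features are pushed to become indistinguishable between the two domains. The sketch below shows that general domain-adaptation technique as a hypothetical illustration; the patent does not state that gradient reversal is used, and the classifier interface is an assumption.

```python
# Hypothetical domain-adversarial sketch: a gradient-reversal layer lets the
# domain classifier distinguish synthesized from real images while encouraging
# domain-invariant shared features. This is a generic technique, not a quote
# from the patent.
import torch

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output   # flip the gradient sign on the way back

def domain_loss(features: torch.Tensor, domain_classifier: torch.nn.Module,
                is_real: torch.Tensor) -> torch.Tensor:
    """is_real: (B,) tensor of 0/1 labels for synthesized vs. real images."""
    reversed_feats = GradientReversal.apply(features)
    logits = domain_classifier(reversed_feats).squeeze(1)
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, is_real.float())
```
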
  • Patent number: 10753997
    Abstract: Systems and methods are provided for synthesizing protocol independent magnetic resonance images. A patient is scanned by a magnetic resonance imaging system to acquire magnetic resonance data. The magnetic resonance data is input to a machine learnt generator network trained to extract features from input magnetic resonance data and synthesize protocol independent images using the extracted features. The machine learnt generator network generates a protocol independent segmented magnetic resonance image from the input magnetic resonance data. The protocol independent magnetic resonance image is displayed.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: August 25, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Benjamin L. Odry, Boris Mailhe, Mariappan S. Nadar, Pascal Ceccaldi
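
At inference time, the workflow described above reduces to applying the trained generator to acquired MR data and displaying the protocol-independent segmented output. The hypothetical sketch below assumes a 2D single-channel output and uses matplotlib only as one possible display path; neither is specified by the patent.

```python
# Hypothetical inference sketch: a trained generator maps acquired MR data to a
# protocol-independent segmented image, which is then displayed. The generator
# architecture and the display step are assumptions for illustration.
import torch
import matplotlib.pyplot as plt

def synthesize_and_display(mr_data: torch.Tensor, generator: torch.nn.Module):
    generator.eval()
    with torch.no_grad():
        protocol_independent = generator(mr_data)   # extract features, synthesize image
    plt.imshow(protocol_independent[0, 0].cpu().numpy(), cmap="gray")
    plt.title("Protocol-independent segmented MR image")
    plt.show()
```
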