Patents by Inventor Máté Fejes

Máté Fejes has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078669
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image, and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: October 30, 2023
    Publication date: March 7, 2024
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Patent number: 11842485
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image, and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: December 12, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Publication number: 20230342427
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Application
    Filed: June 28, 2023
    Publication date: October 26, 2023
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
  • Patent number: 11727086
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: August 15, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
  • Publication number: 20220284570
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image, and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Publication number: 20220101048
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Application
    Filed: November 10, 2020
    Publication date: March 31, 2022
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
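The thickness-and-volume abstracts above all end with the same concrete step: determining a volume of the object class of interest from the per-pixel thickness mask and the segmentation mask. A minimal sketch of that final step is shown below; the function name, array shapes, and pixel-area parameter are illustrative assumptions, not taken from the patent documents.

```python
import numpy as np

def estimate_volume(thickness_mask: np.ndarray,
                    segmentation_mask: np.ndarray,
                    pixel_area_mm2: float) -> float:
    """Integrate per-pixel thickness over the segmented region.

    thickness_mask    -- per-pixel thickness estimates in mm, shape (H, W)
    segmentation_mask -- binary mask for the object class, shape (H, W)
    pixel_area_mm2    -- in-plane area covered by one pixel, in mm^2
    """
    # Volume = sum over segmented pixels of (thickness x pixel area).
    return float(np.sum(thickness_mask * segmentation_mask) * pixel_area_mm2)

# Toy example: uniform 10 mm thickness, three of four pixels segmented,
# each pixel covering 1 mm^2 in-plane.
thickness = np.array([[10.0, 10.0], [10.0, 10.0]])
mask = np.array([[1, 1], [1, 0]])
print(estimate_volume(thickness, mask, 1.0))  # 30.0 (mm^3)
```

In the patents the two masks would come from the two CNN heads operating on shared image features; here they are hard-coded only to make the volume computation itself runnable.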