Patents by Inventor Bipul Das

Bipul Das has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078669
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: October 30, 2023
    Publication date: March 7, 2024
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
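
The two-branch design in publication 20240078669 above can be pictured with a short sketch: a shared feature extractor feeds one CNN head that predicts the segmentation mask and a second head that predicts per-pixel thickness, and volume follows by integrating thickness over the segmented pixels. The Python below is a minimal illustration under those assumptions; the layer sizes, pixel spacing, and module names are hypothetical stand-ins, not the claimed implementation.

```python
# Minimal sketch (not the patented implementation): shared features feed a
# segmentation head and a thickness head; volume is thickness integrated
# over segmented pixels, scaled by the per-pixel area.
import torch
import torch.nn as nn

class ThicknessVolumeNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.features = nn.Sequential(                      # shared feature extractor
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(channels, 1, 1)           # first CNN: segmentation logits
        self.thick_head = nn.Conv2d(channels, 1, 1)         # second CNN: per-pixel thickness (mm)

    def forward(self, image):
        f = self.features(image)
        seg_mask = torch.sigmoid(self.seg_head(f))          # probability pixel belongs to the class
        thickness = torch.relu(self.thick_head(f))          # non-negative thickness at each pixel
        return seg_mask, thickness

def estimate_volume(seg_mask, thickness, pixel_area_mm2=0.25):
    """Volume (mm^3) = sum over pixels of thickness * mask * pixel area."""
    return (seg_mask * thickness).sum(dim=(-2, -1)) * pixel_area_mm2

# Example: one 256x256 radiograph (random stand-in data).
net = ThicknessVolumeNet()
image = torch.rand(1, 1, 256, 256)
mask, thick = net(image)
print(estimate_volume(mask, thick))
```
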
  • Publication number: 20240062331
    Abstract: Systems/techniques that facilitate deep learning robustness against display field of view (DFOV) variations are provided. In various embodiments, a system can access a deep learning neural network and a medical image. In various aspects, a first DFOV, and thus a first spatial resolution, on which the deep learning neural network is trained can fail to match a second DFOV, and thus a second spatial resolution, exhibited by the medical image. In various instances, the system can execute the deep learning neural network on a resampled version of the medical image, where the resampled version of the medical image can exhibit the first DFOV and thus the first spatial resolution. In various cases, the system can generate the resampled version of the medical image by up-sampling or down-sampling the medical image until it exhibits the first DFOV and thus the first spatial resolution.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Rajesh Langoju, Prasad Sudhakara Murthy, Utkarsh Agrawal, Risa Shigemasa, Bhushan Patil, Bipul Das, Yasuhiro Imai
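
The DFOV-robustness technique in publication 20240062331 above comes down to resampling the incoming image to the pixel spacing the network was trained on before running inference. The sketch below illustrates that resampling step; the DFOV values, matrix sizes, and choice of bilinear interpolation are assumptions for illustration only.

```python
# Minimal sketch: resample an image acquired at one display field of view
# (DFOV) to the DFOV / pixel spacing the network was trained on, then infer.
import torch
import torch.nn.functional as F

def resample_to_training_dfov(image, acquired_dfov_mm, training_dfov_mm, training_size):
    """image: (N, C, H, W) tensor covering acquired_dfov_mm on each side.

    Up- or down-samples so each pixel covers the same physical extent the
    network saw during training (training_dfov_mm / training_size mm/pixel).
    """
    acquired_size = image.shape[-1]
    acquired_spacing = acquired_dfov_mm / acquired_size
    training_spacing = training_dfov_mm / training_size
    target_size = int(round(acquired_size * acquired_spacing / training_spacing))
    return F.interpolate(image, size=(target_size, target_size),
                         mode="bilinear", align_corners=False)

# Hypothetical example: network trained at 350 mm DFOV on 512x512 images,
# incoming scan reconstructed at 500 mm DFOV on 512x512 images.
scan = torch.rand(1, 1, 512, 512)
resampled = resample_to_training_dfov(scan, acquired_dfov_mm=500.0,
                                      training_dfov_mm=350.0, training_size=512)
print(resampled.shape)  # spatial size now reflects the training pixel spacing
```
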
  • Patent number: 11842485
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: December 12, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Patent number: 11823354
    Abstract: A computer-implemented method for correcting artifacts in computed tomography data is provided. The method includes inputting a sinogram into a trained sinogram correction network, wherein the sinogram is missing a pixel value for at least one pixel. The method also includes processing the sinogram via one or more layers of the trained sinogram correction network, wherein processing the sinogram includes deriving complementary information from the sinogram and estimating the pixel value for the at least one pixel based on the complementary information. The method further includes outputting from the trained sinogram correction network a corrected sinogram having the estimated pixel value.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: November 21, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Bhushan Dayaram Patil, Rajesh Langoju, Utkarsh Agrawal, Bipul Das, Jiang Hsieh
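
The correction network in patent 11823354 above (and the related application 20220327664 below) can be sketched as a small convolutional network that receives the sinogram together with a mask of missing pixels and fills in only the flagged values. The code below is a hypothetical stand-in; the actual architecture and the way complementary information is derived are not specified here.

```python
# Minimal sketch: a small convolutional network estimates missing sinogram
# pixels; known pixel values are kept, only the flagged pixels are replaced.
import torch
import torch.nn as nn

class SinogramCorrectionNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Input: the sinogram (missing pixels zeroed) plus a mask channel
        # flagging which pixels are missing.
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, sinogram, missing_mask):
        x = torch.cat([sinogram * (1 - missing_mask), missing_mask], dim=1)
        estimate = self.net(x)                      # network's estimate everywhere
        # Keep measured values, substitute estimates only where data is missing.
        return sinogram * (1 - missing_mask) + estimate * missing_mask

# Hypothetical example: 360 views x 512 detector channels, one dead detector column.
sino = torch.rand(1, 1, 360, 512)
mask = torch.zeros_like(sino)
mask[..., 100] = 1.0                                # column 100 is missing
corrected = SinogramCorrectionNet()(sino, mask)
print(corrected.shape)
```
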
  • Publication number: 20230342427
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Application
    Filed: June 28, 2023
    Publication date: October 26, 2023
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
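
A rough way to picture the method of publication 20230342427 above: project a 3D volume along one axis to obtain a synthetic 2D image, and project the 3D ground-truth mask the same way so the annotation travels with it. The sketch below uses a simple parallel projection as a stand-in for the modality conversion; the actual synthesis described in the application may differ.

```python
# Minimal sketch: make a synthetic 2D image from a 3D volume by projecting
# along one axis, and transfer the 3D ground-truth mask the same way.
# A simple parallel projection stands in for the modality conversion here.
import numpy as np

def project_volume(volume, axis=1):
    """Mean-intensity projection of a 3D volume -> synthetic 2D image."""
    return volume.mean(axis=axis)

def project_labels(label_volume, axis=1):
    """A voxel class is present at a 2D pixel if it appears anywhere along the ray."""
    return (label_volume > 0).any(axis=axis).astype(np.uint8)

# Hypothetical example: a 128^3 CT volume with a 3D organ mask.
volume = np.random.rand(128, 128, 128).astype(np.float32)
organ_mask_3d = np.zeros_like(volume, dtype=np.uint8)
organ_mask_3d[40:80, 50:90, 30:70] = 1

synthetic_2d = project_volume(volume, axis=1)             # plays the role of the synthetic X-ray
ground_truth_2d = project_labels(organ_mask_3d, axis=1)   # transferred ground truth
print(synthetic_2d.shape, ground_truth_2d.sum())
```
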
  • Publication number: 20230298136
    Abstract: Systems/techniques that facilitate deep learning multi-planar reformatting of medical images are provided. In various embodiments, a system can access a three-dimensional medical image. In various aspects, the system can localize, via execution of a machine learning model, a set of landmarks depicted in the three-dimensional medical image, a set of principal anatomical planes depicted in the three-dimensional medical image, and a set of organs depicted in the three-dimensional medical image. In various instances, the system can determine an anatomical orientation exhibited by the three-dimensional medical image, based on the set of landmarks, the set of principal anatomical planes, or the set of organs. In various cases, the system can rotate the three-dimensional medical image, such that the anatomical orientation now matches a predetermined anatomical orientation.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Bipul Das, Rakesh Mullick, Deepa Anand, Sandeep Dutta, Uday Damodar Patil, Maud Bonnard
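
The reformatting step in publication 20230298136 above, once landmarks or planes have been localized, amounts to computing the rotation that brings the detected anatomical orientation onto a predetermined one and resampling the volume accordingly. The sketch below illustrates that geometric step only, with three hypothetical landmark points standing in for the machine learning model's output.

```python
# Minimal sketch: once a principal anatomical plane has been localized
# (stood in here by three landmark points), rotate the volume so that the
# plane's normal lines up with a predetermined axis.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.ndimage import affine_transform

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three landmark points."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def reorient_volume(volume, detected_normal, target_normal=np.array([1.0, 0.0, 0.0])):
    """Rotate the volume so detected_normal maps onto target_normal."""
    rot, _ = Rotation.align_vectors([target_normal], [detected_normal])
    R = rot.as_matrix()
    center = (np.array(volume.shape) - 1) / 2.0
    # affine_transform maps output coords to input coords: i = R.T @ o + offset
    return affine_transform(volume, R.T, offset=center - R.T @ center, order=1)

# Hypothetical example: landmarks on the mid-sagittal plane of a 96^3 volume.
vol = np.random.rand(96, 96, 96).astype(np.float32)
p0, p1, p2 = np.array([48., 10., 10.]), np.array([48., 80., 10.]), np.array([50., 10., 80.])
aligned = reorient_volume(vol, plane_normal(p0, p1, p2))
print(aligned.shape)
```
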
  • Patent number: 11727086
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: August 15, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
  • Patent number: 11704804
    Abstract: Techniques are described for domain adaptation of image processing models using post-processing model correction. According to an embodiment, a method comprises training, by a system operatively coupled to a processor, a post-processing model to correct an image-based inference output of a source image processing model that results from application of the source image processing model to a target image from a target domain that differs from a source domain, wherein the source image processing model was trained on source images from the source domain. In one or more implementations, the source image processing model comprises an organ segmentation model and the post-processing model can comprise a shape autoencoder. The method further comprises applying, by the system, the source image processing model and the post-processing model to target images from the target domain to generate optimized image-based inference outputs for the target images.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 18, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Sidharth Abrol, Bipul Das, Sandeep Dutta, Saad A. Sirohey
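
The post-processing correction in patent 11704804 above can be sketched as a shape autoencoder applied to the mask produced by the source-domain segmentation model. The code below is a minimal, untrained stand-in; the architectures, sizes, and the placeholder source model are assumptions, not the patented design.

```python
# Minimal sketch: a shape autoencoder post-processes the segmentation mask
# produced by a source-domain model so the corrected shape is plausible on
# target-domain images. Architectures and sizes here are placeholders.
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1),
        )

    def forward(self, mask):
        return torch.sigmoid(self.decoder(self.encoder(mask)))

def corrected_inference(source_model, shape_ae, target_image):
    """Apply the source model, then correct its output with the autoencoder."""
    with torch.no_grad():
        raw_mask = torch.sigmoid(source_model(target_image))   # source-domain model output
        return shape_ae(raw_mask)                               # post-processed, corrected mask

# Hypothetical example with a stand-in source segmentation model.
source_model = nn.Conv2d(1, 1, 3, padding=1)
shape_ae = ShapeAutoencoder()
target_image = torch.rand(1, 1, 128, 128)
print(corrected_inference(source_model, shape_ae, target_image).shape)
```
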
  • Publication number: 20230177747
    Abstract: Systems/techniques that facilitate machine learning generation of low-noise and high structural conspicuity images are provided. In various embodiments, a system can access an image and can apply at least one of image denoising or image resolution enhancement to the image, thereby yielding a first intermediary image. In various instances, the system can generate, via execution of a plurality of machine learning models, a plurality of second intermediary images based on the first intermediary image, wherein a given machine learning model in the plurality of machine learning models receives as input the first intermediary image, wherein the given machine learning model produces as output a given second intermediary image in the plurality of second intermediary images, and wherein the given second intermediary image represents a kernel-transformed version of the first intermediary image. In various cases, the system can generate a blended image based on the plurality of second intermediary images.
    Type: Application
    Filed: December 6, 2021
    Publication date: June 8, 2023
    Inventors: Rajesh Veera Venkata Lakshmi Langoju, Utkarsh Agrawal, Bipul Das, Risa Shigemasa, Yasuhiro Imai, Jiang Hsieh
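
The pipeline in publication 20230177747 above can be read as: denoise or resolution-enhance the image, run several kernel-transform models on that intermediary, then blend their outputs. The sketch below mirrors that flow with untrained placeholder networks and illustrative blend weights.

```python
# Minimal sketch: denoise (or resolution-enhance) the image, run several
# kernel-transform models on the intermediate result, then blend their
# outputs. All models here are untrained stand-ins.
import torch
import torch.nn as nn

def blend(images, weights):
    """Weighted blend of a list of equally sized images."""
    weights = torch.tensor(weights) / sum(weights)
    return sum(w * img for w, img in zip(weights, images))

denoiser = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))              # yields the first intermediary image
kernel_models = nn.ModuleList([
    nn.Conv2d(1, 1, 3, padding=1),   # e.g. a soft-tissue-kernel look
    nn.Conv2d(1, 1, 3, padding=1),   # e.g. a sharper bone-kernel look
])

ct_image = torch.rand(1, 1, 256, 256)
with torch.no_grad():
    intermediate = denoiser(ct_image)                                # low-noise intermediary
    kernel_views = [m(intermediate) for m in kernel_models]          # second intermediary images
    blended = blend(kernel_views, weights=[0.6, 0.4])                # final blended image
print(blended.shape)
```
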
  • Patent number: 11657501
    Abstract: Techniques are provided for generating enhanced image representations from original X-ray images using deep learning techniques. In one embodiment, a system is provided that includes a memory storing computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can include a reception component, an analysis component, and an artificial intelligence component. The analysis component analyzes the original X-ray image using an AI-based model with respect to a set of features of interest. The AI component generates a plurality of enhanced image representations. Each enhanced image representation highlights a subset of the features of interest and suppresses remaining features of interest in the set that are external to the subset.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: May 23, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Vikram Melapudi, Bipul Das, Krishna Seetharam Shriram, Prasad Sudhakar, Rakesh Mullick, Sohan Rashmi Ranjan, Utkarsh Agarwal
  • Publication number: 20230084202
    Abstract: Techniques are described that facilitate securely deploying artificial intelligence (AI) models and distributing inferences generated therefrom. According to an embodiment, a system is provided that comprises a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise an algorithm execution component that applies an AI model to input data and generates output data, and an encryption component that encrypts the output data using a proprietary encryption mechanism, resulting in encrypted output data. The proprietary encryption mechanism can include a mechanism that prevents usage and rendering of the encrypted output data without decryption of the encrypted output data using a proprietary decryption mechanism.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Inventors: Abhijit Patil, Rakesh Mullick, Bipul Das
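
Publication 20230084202 above leaves the proprietary encryption mechanism unspecified; as a stand-in, the sketch below encrypts a model's output with symmetric Fernet encryption from the `cryptography` package, so the output cannot be used or rendered without the matching key.

```python
# Minimal sketch: run an AI model, then encrypt its output so it cannot be
# used or rendered without the matching decryption step. Fernet symmetric
# encryption is only a stand-in for the proprietary mechanism described above.
import json
from cryptography.fernet import Fernet

def run_model(input_data):
    """Stand-in for the algorithm execution component."""
    return {"finding": "example inference", "score": 0.93}

key = Fernet.generate_key()          # held only by the authorized consumer
cipher = Fernet(key)

output = run_model({"study_id": "12345"})
encrypted_output = cipher.encrypt(json.dumps(output).encode("utf-8"))

# Without the key, encrypted_output is opaque; with it, the output is restored.
restored = json.loads(cipher.decrypt(encrypted_output).decode("utf-8"))
print(restored)
```
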
  • Publication number: 20230071535
    Abstract: Systems/techniques that facilitate learning-based domain transformation for medical images are provided. In various embodiments, a system can access a medical image. In various aspects, the medical image can depict an anatomical structure according to a first medical scanning domain. In various instances, the system can generate, via execution of a machine learning model, a predicted image based on the medical image. In various aspects, the predicted image can depict the anatomical structure according to a second medical scanning domain that is different from the first medical scanning domain. In some cases, the first and second medical scanning domains can be first and second energy levels of a computed tomography (CT) scanning modality. In other cases, the first and second medical scanning domains can be first and second contrast phases of the CT scanning modality.
    Type: Application
    Filed: September 9, 2021
    Publication date: March 9, 2023
    Inventors: Sidharth Abrol, Bipul Das, Vanika Singhal, Amy Deubig, Sandeep Dutta, Daphné Gerbaud, Bianca Sintini, Ronny Büchel, Philipp Kaufmann
  • Publication number: 20230048231
    Abstract: Various methods and systems are provided for computed tomography imaging. In one embodiment, a method includes acquiring, with an x-ray detector and an x-ray source coupled to a gantry, a three-dimensional image volume of a subject while the subject moves through a bore of the gantry and the gantry rotates the x-ray detector and the x-ray source around the subject, inputting the three-dimensional image volume to a trained deep neural network to generate a corrected three-dimensional image volume with a reduction in aliasing artifacts present in the three-dimensional image volume, and outputting the corrected three-dimensional image volume. In this way, aliasing artifacts caused by sub-sampling may be removed from computed tomography images while preserving details, texture, and sharpness in the computed tomography images.
    Type: Application
    Filed: August 11, 2021
    Publication date: February 16, 2023
    Inventors: Rajesh Langoju, Utkarsh Agrawal, Risa Shigemasa, Bipul Das, Yasuhiro Imai, Jiang Hsieh
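
One plausible (assumed) reading of publication 20230048231 above is a residual formulation: a 3D convolutional network estimates the aliasing-artifact component, which is subtracted from the input volume. The sketch below illustrates that formulation with an untrained placeholder network; the actual trained architecture is not described here.

```python
# Minimal sketch: a 3D convolutional network predicts the aliasing-artifact
# component of a reconstructed volume, which is then subtracted to yield the
# corrected volume (a residual-learning formulation; untrained stand-in).
import torch
import torch.nn as nn

class AliasingCorrectionNet(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, volume):
        artifact_estimate = self.net(volume)
        return volume - artifact_estimate      # corrected three-dimensional image volume

# Hypothetical example: a 64x64x64 sub-volume from a helical CT scan.
volume = torch.rand(1, 1, 64, 64, 64)
corrected = AliasingCorrectionNet()(volume)
print(corrected.shape)
```
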
  • Publication number: 20230052595
    Abstract: Techniques are described for enhancing the quality of three-dimensional (3D) anatomy scan images using deep learning. According to an embodiment, a system is provided that comprises a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component that receives a scan image generated from 3D scan data relative to a first axis of a 3D volume, and an enhancement component that applies an enhancement model to the scan image to generate an enhanced scan image having a higher resolution relative to the scan image. The enhancement model comprises a deep learning neural network model trained on training image pairs respectively comprising a low-resolution scan image and a corresponding high-resolution scan image respectively generated relative to a second axis of the 3D volume.
    Type: Application
    Filed: August 16, 2021
    Publication date: February 16, 2023
    Inventors: Rajesh Veera Venkata Lakshmi Langoju, Utkarsh Agrawal, Bipul Das, Risa Shigemasa, Yasuhiro Imai, Jiang Hsieh
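
The training strategy in publication 20230052595 above pairs low- and high-resolution slices generated along a second, finely sampled axis of the volume, then applies the trained 2D model to slices along the first axis. The sketch below shows one assumed way to build such a pair and run the enhancer; the degradation model and network are placeholders.

```python
# Minimal sketch: build low-/high-resolution training pairs from slices taken
# along a second (finely sampled) axis of the volume, then use the trained 2D
# model to enhance slices along the first axis. Everything here is a stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_training_pair(hr_slice, factor=2):
    """Degrade a high-resolution slice to synthesize its low-resolution input."""
    hr = hr_slice.unsqueeze(0).unsqueeze(0)                 # (1, 1, H, W)
    lr = F.interpolate(hr, scale_factor=1 / factor, mode="bilinear", align_corners=False)
    lr = F.interpolate(lr, size=hr.shape[-2:], mode="bilinear", align_corners=False)
    return lr, hr

enhancer = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

volume = torch.rand(64, 256, 256)          # (slices, H, W): finely sampled in-plane
lr, hr = make_training_pair(volume[10])    # pair drawn along the second axis
loss = F.l1_loss(enhancer(lr), hr)         # one training step's loss (optimizer omitted)

# At inference time the trained enhancer is applied to a slice along the first axis.
coronal_slice = volume[:, 128, :].unsqueeze(0).unsqueeze(0)  # (1, 1, 64, 256)
enhanced = enhancer(coronal_slice)
print(loss.item(), enhanced.shape)
```
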
  • Publication number: 20230029188
    Abstract: The current disclosure provides methods and systems to reduce an amount of structured and unstructured noise in image data. Specifically, a multi-stage deep learning method is provided, comprising training a deep learning network using a set of training pairs interchangeably including input data from a first noisy dataset with a first noise level and target data from a second noisy dataset with a second noise level, and input data from the second noisy dataset and target data from the first noisy dataset; generating an ultra-low noise data equivalent based on a low noise data fed into the trained deep learning network; and retraining the deep learning network on the set of training pairs using the target data of the set of training pairs in a first retraining step, and using the ultra-low noise data equivalent as target data in a second retraining step.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Inventors: Rajesh Langoju, Utkarsh Agrawal, Bhushan Patil, Vanika Singhal, Bipul Das, Jiang Hsieh
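
The multi-stage scheme in publication 20230029188 above can be sketched as: train with the two noisy realizations used interchangeably as input and target, generate an ultra-low-noise target by passing low-noise data through the trained network, then retrain against both targets. The code below compresses that into a few illustrative optimizer steps with a placeholder network and random stand-in data.

```python
# Minimal sketch of the multi-stage idea: train with swapped noisy pairs,
# synthesize an ultra-low-noise target with the trained network, then retrain.
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

noisy_a = torch.rand(4, 1, 128, 128)       # first noise level
noisy_b = torch.rand(4, 1, 128, 128)       # second noise level (same anatomy, stand-in data)

# Stage 1: swap input and target between the two noisy datasets.
for inp, tgt in [(noisy_a, noisy_b), (noisy_b, noisy_a)]:
    optimizer.zero_grad()
    F.mse_loss(denoiser(inp), tgt).backward()
    optimizer.step()

# Stage 2: ultra-low-noise equivalent generated from the low-noise data.
with torch.no_grad():
    ultra_low_noise_target = denoiser(noisy_b)

# Stage 3: retrain, first against the original targets, then the ultra-low-noise target.
for inp, tgt in [(noisy_a, noisy_b), (noisy_a, ultra_low_noise_target)]:
    optimizer.zero_grad()
    F.mse_loss(denoiser(inp), tgt).backward()
    optimizer.step()
print("done")
```
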
  • Publication number: 20230018833
    Abstract: Techniques are described for generating multimodal training data cohorts tailored to specific clinical machine learning (ML) model inferencing tasks. In an embodiment, a method comprises accessing, by a system comprising a processor, multimodal clinical data for a plurality of subjects included in one or more clinical data sources. The method further comprises selecting, by the system, datasets from the multimodal clinical data based on the datasets respectively comprising subsets of the multimodal clinical data that satisfy criteria determined to be relevant to a clinical processing task. The method further comprises generating, by the system, a training data cohort comprising the datasets for training a clinical inferencing model to perform the clinical processing task.
    Type: Application
    Filed: July 19, 2021
    Publication date: January 19, 2023
    Inventors: Bipul Das, Rakesh Mullick, Utkarsh Agrawal, KS Shriram, Sohan Ranjan, Tao Tan
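
The cohort-building step in publication 20230018833 above reduces to filtering multimodal records against task-specific criteria. The sketch below shows one assumed realization with pandas; the field names, criteria, and example task are invented for illustration.

```python
# Minimal sketch: select, from multimodal clinical records, only the subjects
# whose data satisfy the criteria relevant to a given task, and emit them as a
# training cohort. Field names and criteria are illustrative assumptions.
import pandas as pd

records = pd.DataFrame([
    {"subject": "s1", "has_ct": True,  "has_report": True,  "age": 64, "contrast_phase": "arterial"},
    {"subject": "s2", "has_ct": True,  "has_report": False, "age": 41, "contrast_phase": "venous"},
    {"subject": "s3", "has_ct": False, "has_report": True,  "age": 70, "contrast_phase": None},
])

def build_cohort(df, task_criteria):
    """Keep only the datasets satisfying every criterion for the clinical task."""
    mask = pd.Series(True, index=df.index)
    for column, allowed in task_criteria.items():
        mask &= df[column].isin(allowed)
    return df[mask]

# Hypothetical task: contrast-enhanced CT liver segmentation with paired reports.
criteria = {"has_ct": [True], "has_report": [True], "contrast_phase": ["arterial", "venous"]}
cohort = build_cohort(records, criteria)
print(cohort["subject"].tolist())   # -> ['s1']
```
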
  • Publication number: 20230013779
    Abstract: Systems/techniques that facilitate self-supervised deblurring are provided. In various embodiments, a system can access an input image generated by an imaging device. In various aspects, the system can train, in a self-supervised manner based on a point spread function of the imaging device, a machine learning model to deblur the input image. More specifically, the system can append to the model one or more non-trainable convolution layers having a blur kernel that is based on the point spread function of the imaging device. In various aspects, the system can feed the input image to the model, the model can generate a first output image based on the input image, the one or more non-trainable convolution layers can generate a second output image by convolving the first output image with the blur kernel, and the system can update parameters of the model based on a difference between the input image and the second output image.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 19, 2023
    Inventors: Rajesh Veera Venkata Lakshmi Langoju, Prasad Sudhakara Murthy, Utkarsh Agrawal, Bhushan D. Patil, Bipul Das
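
Publication 20230013779 above describes the mechanism fairly concretely: the deblurring model's output is passed through a non-trainable convolution whose kernel is the device's point spread function, and the model is updated so the re-blurred output matches the observed input. The sketch below follows that recipe, with a Gaussian kernel standing in for the device's measured PSF and a placeholder deblurring network.

```python
# Minimal sketch: append a non-trainable convolution whose kernel is the
# imaging device's point spread function (PSF); the deblurring model is
# trained so that re-blurring its output reproduces the observed input.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_psf(size=9, sigma=1.5):
    """Stand-in PSF; in practice this comes from the imaging device."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    k = torch.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, size, size)

deblur_model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))
psf = gaussian_psf()                               # fixed blur kernel; never updated
optimizer = torch.optim.Adam(deblur_model.parameters(), lr=1e-4)

blurred_input = torch.rand(1, 1, 128, 128)         # image from the imaging device
for _ in range(3):                                 # a few self-supervised steps
    optimizer.zero_grad()
    deblurred = deblur_model(blurred_input)                            # first output image
    reblurred = F.conv2d(deblurred, psf, padding=psf.shape[-1] // 2)   # non-trainable blur layer
    loss = F.mse_loss(reblurred, blurred_input)    # compare to the original input
    loss.backward()                                # only the deblurring model is updated
    optimizer.step()
print(loss.item())
```
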
  • Publication number: 20220327664
    Abstract: A computer-implemented method for correcting artifacts in computed tomography data is provided. The method includes inputting a sinogram into a trained sinogram correction network, wherein the sinogram is missing a pixel value for at least one pixel. The method also includes processing the sinogram via one or more layers of the trained sinogram correction network, wherein processing the sinogram includes deriving complementary information from the sinogram and estimating the pixel value for the at least one pixel based on the complementary information. The method further includes outputting from the trained sinogram correction network a corrected sinogram having the estimated pixel value.
    Type: Application
    Filed: April 8, 2021
    Publication date: October 13, 2022
    Inventors: Bhushan Dayaram Patil, Rajesh Langoju, Utkarsh Agrawal, Bipul Das, Jiang Hsieh
  • Publication number: 20220284570
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Publication number: 20220101048
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Application
    Filed: November 10, 2020
    Publication date: March 31, 2022
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs