Patents by Inventor Krishna Seetharam Shriram

Krishna Seetharam Shriram has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078669
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: October 30, 2023
    Publication date: March 7, 2024
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
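As a rough illustration of the approach described in the abstract above (not the patented implementation), the sketch below pairs a shared encoder with two convolutional heads, one producing the segmentation mask and one the per-pixel thickness mask, and integrates volume over the segmented pixels. Layer sizes, the input shape, and the pixel area are placeholder assumptions.

```python
# Minimal sketch: shared encoder, two CNN heads (segmentation + thickness),
# volume integrated over segmented pixels. Not the patented implementation.
import torch
import torch.nn as nn

class ThicknessVolumeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor for the 2D medical image (1 input channel).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # First CNN head: per-pixel probability of the object class (segmentation mask).
        self.seg_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        # Second CNN head: per-pixel thickness estimate (non-negative).
        self.thick_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.ReLU())

    def forward(self, image):
        features = self.encoder(image)
        return self.seg_head(features), self.thick_head(features)

def estimate_volume(seg_mask, thickness_mask, pixel_area_mm2):
    # Volume = sum over segmented pixels of (thickness * in-plane pixel area).
    return ((seg_mask > 0.5).float() * thickness_mask).sum() * pixel_area_mm2

model = ThicknessVolumeNet()
image = torch.rand(1, 1, 128, 128)            # placeholder 2D image
seg, thick = model(image)
print("volume estimate:", estimate_volume(seg, thick, pixel_area_mm2=0.25).item())
```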
  • Patent number: 11903760
    Abstract: The current disclosure provides systems and methods for providing guidance information to an operator of a medical imaging device. In an embodiment, a method is provided, comprising training a deep learning neural network on training pairs including a first medical image of an anatomical neighborhood and a second medical image of the anatomical neighborhood as input data, and a ground truth displacement between a first scan plane of the first medical image and a second scan plane of the second medical image as target data; using the neural network to predict a displacement between a first scan plane of a new medical image of the anatomical neighborhood and a target scan plane of a reference medical image of the anatomical neighborhood; and displaying guidance information for an imaging device used to acquire the new medical image on a display screen.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: February 20, 2024
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Chandan Kumar Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
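A minimal sketch of the displacement-regression idea in the abstract above, assuming the two images of the anatomical neighborhood are stacked as input channels and the displacement is a small translation/rotation vector; the network shape, loss, and three-component output are illustrative assumptions, not the patent's architecture.

```python
# Rough sketch: a CNN regresses the scan-plane displacement between a current
# image and a reference image of the same anatomical neighborhood.
import torch
import torch.nn as nn

class ScanPlaneDisplacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 3),   # e.g. translation (dx, dy) plus one rotation angle
        )

    def forward(self, current_img, reference_img):
        # Stack the two images of the anatomical neighborhood as channels.
        return self.net(torch.cat([current_img, reference_img], dim=1))

model = ScanPlaneDisplacementNet()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a synthetic pair with a known displacement.
img_a, img_b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
gt_displacement = torch.rand(4, 3)              # ground truth target data
pred = model(img_a, img_b)
loss = loss_fn(pred, gt_displacement)
opt.zero_grad(); loss.backward(); opt.step()
print("training loss:", loss.item())
```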
  • Patent number: 11903768
    Abstract: A system and method for automatically adjusting beamformer parameters based on ultrasound image analysis to enhance ultrasound image acquisition is provided. The method includes acquiring, by an ultrasound system, an ultrasound image. The method includes segmenting, by at least one processor, the ultrasound image to identify anatomical structure(s) and/or image artifact(s) in the ultrasound image. The method includes detecting, by the at least one processor, a location of each of the identified anatomical structure(s) and/or image artifact(s). The method includes automatically adjusting, by the at least one processor, at least one beamformer parameter based on the detected location of one or more of the identified anatomical structure(s) and/or the image artifact(s). The method includes acquiring, by the ultrasound system, an enhanced ultrasound image based on the automatically adjusted at least one beamformer parameter. The method includes presenting, at a display system, the enhanced ultrasound image.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: February 20, 2024
    Assignee: GE Precision Healthcare LLC
    Inventors: Abhijit Patil, Vikram Melapudi, Krishna Seetharam Shriram, Chandan Kumar Mallappa Aladahalli
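A toy sketch of the adjust-then-reacquire loop described above: a stand-in threshold segmentation replaces the learned analysis, and the detected structure depth re-centres a hypothetical `focal_depth_mm` beamformer parameter. Parameter names and thresholds are assumptions.

```python
# Illustrative sketch: segment the frame, locate the structure's depth, and
# re-focus the beamformer on it before the next acquisition.
import numpy as np

def segment_structure(image):
    # Placeholder segmentation: bright pixels stand in for the detected structure.
    return image > 0.7

def detect_depth_mm(mask, mm_per_row):
    rows = np.nonzero(mask.any(axis=1))[0]
    return float(rows.mean() * mm_per_row) if rows.size else None

def adjust_beamformer(params, structure_depth_mm):
    # Re-centre the transmit focus on the detected structure (hypothetical parameter).
    if structure_depth_mm is not None:
        params["focal_depth_mm"] = structure_depth_mm
    return params

image = np.random.rand(256, 256)                # placeholder B-mode frame
params = {"focal_depth_mm": 40.0}
mask = segment_structure(image)
params = adjust_beamformer(params, detect_depth_mm(mask, mm_per_row=0.2))
print(params)
```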
  • Patent number: 11850090
    Abstract: Systems and methods for guided lung coverage and automated detection using ultrasound devices are disclosed. The method for guided coverage and automated detection of pathologies of a subject includes positioning an ultrasound probe on a region of the subject body to be imaged. The method includes capturing a video of the subject and processing the video to generate a torso image of the subject and identify the location of the ultrasound probe on the subject body. The method includes registering the video to an anatomical atlas to generate a mask of the region of the subject body comprising a plurality of sub-regions of the subject body to be imaged and superimposing the mask over the torso image. The method further includes displaying an indicator corresponding to a location of each of the plurality of sub-regions on the torso image.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: December 26, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Vikram Melapudi, Chandan Kumar Mallappa Aladahalli, Krishna Seetharam Shriram
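A heavily reduced sketch of the coverage-tracking idea from the abstract above (camera capture, atlas registration, and probe localization are all stubbed out): the torso image carries a mask of named lung sub-regions, and each sub-region's indicator flips to "scanned" once the tracked probe position falls inside it. Sub-region names and layout are invented for illustration.

```python
# Very reduced sketch: map a tracked probe position on the torso image to a
# named sub-region and update that sub-region's on-screen indicator.
import numpy as np

TORSO_SHAPE = (200, 100)                      # rows, cols of the torso image
SUBREGIONS = {                                # hypothetical sub-region layout
    "right_upper": (slice(0, 100), slice(0, 50)),
    "left_upper":  (slice(0, 100), slice(50, 100)),
    "right_lower": (slice(100, 200), slice(0, 50)),
    "left_lower":  (slice(100, 200), slice(50, 100)),
}

def build_mask():
    # Integer mask over the torso image, one label per sub-region.
    mask = np.zeros(TORSO_SHAPE, dtype=int)
    for label, (rs, cs) in enumerate(SUBREGIONS.values(), start=1):
        mask[rs, cs] = label
    return mask

def subregion_at(mask, probe_rc):
    label = mask[probe_rc]
    return list(SUBREGIONS)[label - 1] if label else None

mask = build_mask()
status = {name: "pending" for name in SUBREGIONS}
for probe_rc in [(30, 20), (150, 80)]:        # simulated probe positions on the torso
    region = subregion_at(mask, probe_rc)
    if region:
        status[region] = "scanned"
print(status)
```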
  • Patent number: 11842485
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: December 12, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram
  • Patent number: 11810294
    Abstract: Various methods and systems are provided for individually analyzing a plurality of subregions within an ultrasound image for acoustic shadowing. In one embodiment, a method includes acquiring ultrasound data along a plurality of receive lines, generating an ultrasound image based on the ultrasound data, dividing the ultrasound image into a plurality of subregions, and individually analyzing each of the plurality of subregions for acoustic shadowing. The method includes detecting acoustic shadowing in one or more of the plurality of subregions, displaying the ultrasound image, and graphically indicating the one or more of the plurality of subregions in which the acoustic shadowing was detected on the ultrasound image while the ultrasound image is displayed on a display device.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: November 7, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Krishna Seetharam Shriram, Chandan Kumar Aladahalli, Vikram Melapudi
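One plausible, simplified reading of the per-subregion analysis above: split the image into a grid, compare each subregion's mean intensity against the global mean, and flag abnormally dark cells as shadowed. The grid size and darkness threshold are assumptions, and a learned detector could replace the intensity rule.

```python
# Simplified sketch: flag grid subregions whose mean intensity is far below the
# image mean as candidate acoustic shadows, for graphical highlighting.
import numpy as np

def detect_shadowed_subregions(image, grid=(4, 4), darkness_ratio=0.4):
    flagged = []
    rows = np.array_split(np.arange(image.shape[0]), grid[0])
    cols = np.array_split(np.arange(image.shape[1]), grid[1])
    global_mean = image.mean()
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            sub = image[np.ix_(r, c)]
            if sub.mean() < darkness_ratio * global_mean:
                flagged.append((i, j))        # subregion indices to outline on screen
    return flagged

image = np.random.rand(256, 256)
image[:, 180:220] *= 0.1                      # simulate an acoustic shadow band
print("shadowed subregions:", detect_shadowed_subregions(image))
```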
  • Publication number: 20230342427
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Application
    Filed: June 28, 2023
    Publication date: October 26, 2023
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
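A hand-wavy sketch of the core idea above rather than the described pipeline: a 3D volume of one capture modality is collapsed along a projection axis to mimic a 2D image of another modality, and the 3D ground-truth mask is collapsed the same way so its labels carry over to the synthetic 2D image. The projection rule and shapes are assumptions.

```python
# Sketch: synthesize a 2D image from a 3D volume by projection and transfer the
# 3D ground-truth annotation to the synthetic 2D image the same way.
import numpy as np

def synthesize_2d(volume_3d, axis=0):
    # Mean-intensity projection as a crude stand-in for a modality transform.
    return volume_3d.mean(axis=axis)

def transfer_ground_truth(mask_3d, axis=0):
    # A 2D pixel inherits the label if any voxel along the projection ray is labelled.
    return mask_3d.any(axis=axis).astype(np.uint8)

volume = np.random.rand(64, 128, 128)          # placeholder 3D scan
gt_mask = np.zeros_like(volume, dtype=bool)
gt_mask[:, 40:60, 50:70] = True                # placeholder 3D annotation

synthetic_2d = synthesize_2d(volume)
gt_2d = transfer_ground_truth(gt_mask)
print(synthetic_2d.shape, int(gt_2d.sum()), "labelled pixels in the synthetic 2D ground truth")
```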
  • Publication number: 20230260142
    Abstract: Systems/techniques that facilitate multi-modal image registration via modality-neutral machine learning transformation are provided. In various embodiments, a system can access a first image and a second image, where the first image can depict an anatomical structure according to a first imaging modality, and where the second image can depict the anatomical structure according to a second imaging modality that is different from the first imaging modality. In various aspects, the system can generate, via execution of a machine learning model on the first image and the second image, a modality-neutral version of the first image and a modality-neutral version of the second image. In various instances, the system can register the first image with the second image, based on the modality-neutral version of the first image and the modality-neutral version of the second image.
    Type: Application
    Filed: January 24, 2022
    Publication date: August 17, 2023
    Inventors: Sudhanya Chatterjee, Dattesh Dayanand Shanbhag, Krishna Seetharam Shriram
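A toy sketch of the workflow above only: a gradient-magnitude map stands in for the learned modality-neutral transformation, and a brute-force translation search stands in for the registration step.

```python
# Sketch: map both images into a modality-neutral space, then register them there.
import numpy as np

def modality_neutral(image):
    # Placeholder for the machine learning model: edge strength is roughly
    # modality-independent for the same underlying anatomy.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def register_translation(fixed, moving, max_shift=5):
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = (fixed * shifted).sum()    # similarity in the neutral space
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

img_mod1 = np.random.rand(64, 64)                                      # first modality
img_mod2 = np.roll(img_mod1, 3, axis=1) + 0.05 * np.random.rand(64, 64)  # shifted "second modality"
shift = register_translation(modality_neutral(img_mod1), modality_neutral(img_mod2))
print("estimated (dy, dx):", shift)
```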
  • Patent number: 11727086
    Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: August 15, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
  • Patent number: 11712224
    Abstract: Various methods and systems are provided for generating a context awareness graph for a medical scan image. In one example, the context awareness graph includes relative size and relative position annotations with regard to one or more internal anatomical features in the scan image to enable a user to determine a current scan plane and further, to guide the user to a target scan plane.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: August 1, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Chandan Kumar Mallappa Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
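A small sketch under assumed data structures: detected internal anatomical features are reduced to bounding boxes, and the "context awareness graph" becomes a set of annotations giving each feature's size and offset relative to a reference feature in the current scan plane. Feature names and the box format are hypothetical.

```python
# Sketch: build relative-size and relative-position annotations for detected
# features with respect to a reference feature in the scan image.
def context_graph(features, reference="liver"):
    ref = features[reference]
    ref_area = ref["w"] * ref["h"]
    graph = {}
    for name, box in features.items():
        if name == reference:
            continue
        graph[name] = {
            "relative_size": round(box["w"] * box["h"] / ref_area, 2),
            "offset_px": (box["x"] - ref["x"], box["y"] - ref["y"]),
        }
    return graph

# Hypothetical detections in one frame (x, y, width, height in pixels).
features = {
    "liver":  {"x": 100, "y": 120, "w": 180, "h": 140},
    "kidney": {"x": 220, "y": 200, "w": 90,  "h": 70},
}
print(context_graph(features))
```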
  • Publication number: 20230200778
    Abstract: Methods and systems are provided for generating ultrasound probe motion recommendations. In one example, a method includes obtaining an ultrasound image of a source scan plane, the ultrasound image acquired with an ultrasound probe at a first location relative to a patient, entering the ultrasound image as input to a probe recommendation model trained to output a set of recommendations to move the ultrasound probe from the first location to a plurality of additional locations at which a plurality of target scan planes can be imaged, and displaying the set of recommendations on a display device.
    Type: Application
    Filed: December 27, 2021
    Publication date: June 29, 2023
    Inventors: Rahul Venkataramani, Chandan Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
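A minimal sketch with an assumed output format for the recommendation model above: the current source-plane image maps to one suggested probe movement per target scan plane. The plane names, movement vocabulary, and tiny backbone are placeholders, and a random untrained network stands in for the trained model.

```python
# Sketch: one classification head per target scan plane, each picking a probe
# movement recommendation for the current image.
import torch
import torch.nn as nn

TARGET_PLANES = ["four_chamber", "lvot", "rvot"]   # hypothetical target scan planes
MOVES = ["rotate_cw", "rotate_ccw", "tilt_up", "tilt_down", "slide_left", "slide_right"]

class ProbeRecommendationModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per target plane, each selecting one of the candidate moves.
        self.heads = nn.ModuleList(nn.Linear(8, len(MOVES)) for _ in TARGET_PLANES)

    def forward(self, image):
        feats = self.backbone(image)
        return {plane: MOVES[head(feats).argmax(dim=1).item()]
                for plane, head in zip(TARGET_PLANES, self.heads)}

model = ProbeRecommendationModel().eval()
with torch.no_grad():
    recommendations = model(torch.rand(1, 1, 64, 64))   # current source-plane image
print(recommendations)
```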
  • Patent number: 11657501
    Abstract: Techniques are provided for generating enhanced image representations from original X-ray images using deep learning techniques. In one embodiment, a system is provided that includes a memory storing computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can include a reception component, an analysis component, and an artificial intelligence component. The analysis component analyzes the original X-ray image using an AI-based model with respect to a set of features of interest. The AI component generates a plurality of enhanced image representations. Each enhanced image representation highlights a subset of the features of interest and suppresses remaining features of interest in the set that are external to the subset.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: May 23, 2023
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Vikram Melapudi, Bipul Das, Krishna Seetharam Shriram, Prasad Sudhakar, Rakesh Mullick, Sohan Rashmi Ranjan, Utkarsh Agarwal
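A crude sketch of the output structure only: given per-feature masks from some analysis step (stubbed here with rectangles), one enhanced representation is produced per feature subset by amplifying that subset and attenuating the rest. The boost and suppress factors are arbitrary assumptions, not the patent's enhancement method.

```python
# Sketch: build one enhanced image per feature subset by boosting that subset's
# pixels and suppressing the remaining features.
import numpy as np

def enhance(xray, feature_masks, highlight, boost=1.5, suppress=0.3):
    out = xray.copy()
    for name, mask in feature_masks.items():
        factor = boost if name in highlight else suppress
        out = np.where(mask, np.clip(out * factor, 0, 1), out)
    return out

xray = np.random.rand(128, 128)               # placeholder X-ray image
feature_masks = {                             # stand-ins for detected feature masks
    "lungs": np.zeros_like(xray, dtype=bool),
    "ribs":  np.zeros_like(xray, dtype=bool),
}
feature_masks["lungs"][20:100, 10:60] = True
feature_masks["ribs"][30:110, 70:120] = True

# One enhanced representation per feature subset of interest.
representations = {name: enhance(xray, feature_masks, {name}) for name in feature_masks}
print({name: round(float(img.mean()), 3) for name, img in representations.items()})
```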
  • Publication number: 20230075063
    Abstract: The current disclosure provides systems and methods for providing guidance information to an operator of a medical imaging device. In an embodiment, a method is provided, comprising training a deep learning neural network on training pairs including a first medical image of an anatomical neighborhood and a second medical image of the anatomical neighborhood as input data, and a ground truth displacement between a first scan plane of the first medical image and a second scan plane of the second medical image as target data; using the neural network to predict a displacement between a first scan plane of a new medical image of the anatomical neighborhood and a target scan plane of a reference medical image of the anatomical neighborhood; and displaying guidance information for an imaging device used to acquire the new medical image on a display screen.
    Type: Application
    Filed: September 8, 2021
    Publication date: March 9, 2023
    Inventors: Chandan Kumar Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
  • Patent number: 11593933
    Abstract: Methods and systems are provided for assessing image quality of ultrasound images. In one example, a method includes determining a probe position quality parameter of an ultrasound image, the probe position quality parameter representative of a level of quality of the ultrasound image with respect to a position of an ultrasound probe used to acquire the ultrasound image, determining one or more acquisition settings quality parameters of the ultrasound image, each acquisition settings quality parameter representative of a respective level of quality of the ultrasound image with respect to a respective acquisition setting used to acquire the ultrasound image, and providing feedback to a user of the ultrasound system based on the probe position quality parameter and/or the one or more acquisition settings quality parameters, the probe position quality parameter and each acquisition settings quality parameter determined based on output from separate image quality assessment models.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: February 28, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Krishna Seetharam Shriram, Rahul Venkataramani, Aditi Garg, Chandan Kumar Mallappa Aladahalli
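A sketch with stand-in scorers, assuming the separate quality assessment models each return a score in [0, 1]: one stub scores probe-position quality, per-setting stubs score gain and depth, and the displayed feedback is assembled from whichever scores fall below a threshold.

```python
# Sketch: combine a probe-position quality score and per-setting quality scores
# into operator feedback messages.
def assess_probe_position(image):
    return 0.45                               # stub for the probe-position quality model

def assess_acquisition_settings(image):
    return {"gain": 0.9, "depth": 0.3}        # stubs for per-setting quality models

def feedback(image, threshold=0.6):
    messages = []
    if assess_probe_position(image) < threshold:
        messages.append("Adjust probe position for a better view.")
    for setting, score in assess_acquisition_settings(image).items():
        if score < threshold:
            messages.append(f"Adjust the {setting} setting.")
    return messages or ["Image quality OK."]

print(feedback(image=None))
```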
  • Publication number: 20220414449
    Abstract: Systems, computer-implemented methods, and computer program products that facilitate temporalizing and/or spatializing a machine learning and/or artificial intelligence network are provided. In various embodiments, a processor can combine output data from different layers of an artificial neural network trained on static image data. In various embodiments, the processor can employ the artificial neural network to infer an outcome from an image instance in a sequence of images based on combined output data from the different layers of the artificial neural network.
    Type: Application
    Filed: June 25, 2021
    Publication date: December 29, 2022
    Inventors: Chandan Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
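A loose sketch of the idea above, not the patented architecture: a network trained on static images exposes pooled activations from two of its layers, those combined multi-layer features are averaged over the frames of a sequence, and a small head produces a sequence-level inference. Layer choices and the time-pooling rule are assumptions.

```python
# Sketch: combine activations from different layers of a static-image network,
# pool them over a frame sequence, and infer a sequence-level outcome.
import torch
import torch.nn as nn

class StaticImageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        # Return pooled activations from both layers, not just the final one.
        return torch.cat([self.pool(f1).flatten(1), self.pool(f2).flatten(1)], dim=1)

static_net = StaticImageNet().eval()
sequence_head = nn.Linear(24, 2)               # 8 + 16 combined features -> outcome

frames = torch.rand(10, 1, 64, 64)             # an image sequence of 10 frames
with torch.no_grad():
    per_frame = static_net(frames)             # combined multi-layer features per frame
    sequence_features = per_frame.mean(dim=0, keepdim=True)   # pool over time
    outcome = sequence_head(sequence_features).softmax(dim=1)
print("sequence-level prediction:", outcome)
```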
  • Publication number: 20220309649
    Abstract: Various methods and systems are provided for individually analyzing a plurality of subregions within an ultrasound image for acoustic shadowing. In one embodiment, a method includes acquiring ultrasound data along a plurality of receive lines, generating an ultrasound image based on the ultrasound data, dividing the ultrasound image into a plurality of subregions, and individually analyzing each of the plurality of subregions for acoustic shadowing. The method includes detecting acoustic shadowing in one or more of the plurality of subregions, displaying the ultrasound image, and graphically indicating the one or more of the plurality of subregions in which the acoustic shadowing was detected on the ultrasound image while the ultrasound image is displayed on a display device.
    Type: Application
    Filed: March 26, 2021
    Publication date: September 29, 2022
    Inventors: Krishna Seetharam Shriram, Chandan Kumar Aladahalli, Vikram Melapudi
  • Publication number: 20220309315
    Abstract: Systems and techniques that facilitate extension of existing neural networks without affecting existing outputs are provided. In various embodiments, a receiver component can access a neural network, wherein the neural network includes a first set of layers trained to perform a first computing task. In various instances, an extension component can insert a second set of layers into the neural network, wherein the second set of layers receive as input latent activations from the first set of layers. In various aspects, a training component can train, without changing the first set of layers, the second set of layers to perform a second computing task that is different from the first computing task.
    Type: Application
    Filed: March 25, 2021
    Publication date: September 29, 2022
    Inventors: Chandan Aladahalli, Vikram Melapudi, Krishna Seetharam Shriram
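A compact sketch of the freeze-and-extend pattern described above: the original layers and first-task head are frozen so the first task's outputs are untouched, and a new set of layers trained on the frozen latent activations handles the second task. Layer sizes are arbitrary.

```python
# Sketch: extend an existing network with new layers for a second task without
# changing the first set of layers or their outputs.
import torch
import torch.nn as nn

# Existing network: shared trunk plus a head for the first computing task.
trunk = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
task1_head = nn.Linear(64, 5)
for p in list(trunk.parameters()) + list(task1_head.parameters()):
    p.requires_grad = False                    # first task stays frozen

# Extension: new layers consuming the trunk's latent activations for task 2.
task2_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(task2_head.parameters(), lr=1e-3)

x, task2_labels = torch.rand(16, 32), torch.randint(0, 3, (16,))
latent = trunk(x)                              # shared, frozen representation
loss = nn.functional.cross_entropy(task2_head(latent), task2_labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("task-2 loss:", loss.item(), "| task-1 output shape unchanged:", task1_head(latent).shape)
```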
  • Patent number: 11452494
    Abstract: Systems and methods are provided for projection profile enabled computer aided detection (CAD). A volumetric ultrasound dataset may be generated based on echo ultrasound signals, and based on the volumetric ultrasound dataset, a three-dimensional (3D) ultrasound volume may be generated. Selective structure detection may be applied to the three-dimensional (3D) ultrasound volume.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: September 27, 2022
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Krishna Seetharam Shriram, Arathi Sreekumari, Rakesh Mullick
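A sketch under an assumed reading of "projection profile": the 3D ultrasound volume is summed down to 1D intensity profiles, and slices whose profile values stand out are flagged as candidate regions for selective structure detection. The z-score rule and axis choice are assumptions.

```python
# Sketch: compute a 1D projection profile of a 3D ultrasound volume and flag
# outlier slices as candidates for structure detection.
import numpy as np

def projection_profile(volume, axis):
    # Collapse all axes except `axis` to get a 1D intensity profile.
    other = tuple(a for a in range(volume.ndim) if a != axis)
    return volume.sum(axis=other)

def candidate_slices(profile, z=1.5):
    # Flag profile positions more than z standard deviations above the mean.
    return np.nonzero(profile > profile.mean() + z * profile.std())[0]

volume = np.random.rand(64, 96, 96)            # placeholder 3D ultrasound volume
volume[30:34] += 0.5                           # simulate a bright structure
depth_profile = projection_profile(volume, axis=0)
print("candidate depth slices:", candidate_slices(depth_profile))
```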
  • Publication number: 20220296219
    Abstract: Methods and systems are provided for generating user guidance for ultrasound imaging. In one example, a method includes determining, with a probe recommendation model, a user action to an ultrasound probe prior to and/or during acquisition of a current ultrasound image frame, one or more anatomical features in the current ultrasound image frame, and an anatomy view of the current ultrasound image frame, and outputting, for display on a display device, a probe motion recommendation based on the user action, the one or more anatomical features, and the anatomy view.
    Type: Application
    Filed: March 22, 2021
    Publication date: September 22, 2022
    Inventors: Chandan Kumar Mallappa Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
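A minimal rule-based sketch of how the three model outputs above might combine into one displayed recommendation; the patent describes a learned model, and the rules, view names, and movement phrases below are invented for illustration only.

```python
# Sketch: combine the detected user action, anatomical features, and anatomy view
# into a single probe motion recommendation string for display.
def probe_motion_recommendation(user_action, anatomical_features, anatomy_view,
                                target_view="four_chamber"):
    if anatomy_view == target_view:
        return "Hold probe steady: target view acquired."
    if "heart" not in anatomical_features:
        return "Slide probe toward the apex until the heart is visible."
    # Take the last user action into account so the guidance does not oscillate.
    if user_action == "rotate_cw":
        return "Rotate slightly counter-clockwise and tilt up."
    return "Tilt the probe up toward the patient's head."

print(probe_motion_recommendation("rotate_cw", {"heart", "ribs"}, "five_chamber"))
```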
  • Publication number: 20220284570
    Abstract: Methods and systems are provided for inferring thickness and volume of one or more object classes of interest in two-dimensional (2D) medical images, using deep neural networks. In an exemplary embodiment, a thickness of an object class of interest may be inferred by acquiring a 2D medical image, extracting features from the 2D medical image, mapping the features to a segmentation mask for an object class of interest using a first convolutional neural network (CNN), mapping the features to a thickness mask for the object class of interest using a second CNN, wherein the thickness mask indicates a thickness of the object class of interest at each pixel of a plurality of pixels of the 2D medical image; and determining a volume of the object class of interest based on the thickness mask and the segmentation mask.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Inventors: Tao Tan, Máté Fejes, Gopal Avinash, Ravi Soni, Bipul Das, Rakesh Mullick, Pál Tegzes, Lehel Ferenczi, Vikram Melapudi, Krishna Seetharam Shriram