Patents by Inventor Utkarsh Agrawal
Utkarsh Agrawal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240160915
Abstract: Systems/techniques that facilitate explainable deep interpolation are provided. In various embodiments, a system can access a data candidate, wherein a set of numerical elements of the data candidate are missing. In various aspects, the system can generate, via execution of a deep learning neural network on the data candidate, a set of weight maps for the set of missing numerical elements. In various instances, the system can compute the set of missing numerical elements by respectively combining, according to the set of weight maps, available interpolation neighbors of the set of missing numerical elements.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Prasad Sudhakara Murthy, Utkarsh Agrawal, Bipul Das
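The weighted-neighbor step in this abstract can be sketched without the network: the `weighted_neighbor_fill` helper below (an invented name) combines each missing pixel's available neighbors according to a supplied weight map. In the invention the weights would come from the deep learning neural network; here they are just an input array.

```python
import numpy as np

def weighted_neighbor_fill(image, missing_mask, weights):
    """Fill each missing pixel as a weighted combination of its four
    available interpolation neighbors (up, down, left, right).
    `weights` is an (H, W, 4) map; in the invention it would come
    from a neural network, but any non-negative weights work."""
    filled = image.copy()
    H, W = image.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for y, x in zip(*np.nonzero(missing_mask)):
        total = wsum = 0.0
        for k, (dy, dx) in enumerate(offsets):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not missing_mask[ny, nx]:
                total += weights[y, x, k] * image[ny, nx]
                wsum += weights[y, x, k]
        filled[y, x] = total / wsum if wsum else 0.0
    return filled
```

With uniform weights this reduces to plain neighbor averaging; the "explainable" aspect is that each filled value is an inspectable weighted sum of real measurements.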
-
Patent number: 11929141
Abstract: Sparsity-aware reconfigurable compute-in-memory (CIM) static random access memory (SRAM) systems are disclosed. In one aspect, a reconfigurable-precision successive approximation register (SAR) analog-to-digital converter (ADC) that has the ability to form (n+m)-bit precision using n-bit and m-bit sub-ADCs is provided. By controlling which sub-ADCs are used based on data sparsity, precision may be maintained as needed while providing a more energy-efficient design.
Type: Grant
Filed: December 8, 2021
Date of Patent: March 12, 2024
Assignee: Purdue Research Foundation
Inventors: Kaushik Roy, Amogh Agrawal, Mustafa Fayez Ahmed Ali, Indranil Chakraborty, Aayush Ankit, Utkarsh Saxena
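A behavioral sketch of the successive-approximation search, plus a toy wrapper that picks n-bit or (n+m)-bit precision from a sparsity flag. This is a software model for intuition only; the patent concerns the CIM SRAM circuit implementation, and both function names are invented here.

```python
def sar_adc(v_in, v_ref, bits):
    """Behavioral successive-approximation conversion: binary-search
    the input against a DAC threshold one bit at a time, MSB first."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if v_in >= v_ref * trial / (1 << bits):  # comparator vs. DAC level
            code = trial
    return code

def reconfigurable_sar(v_in, v_ref, n, m, sparse):
    """Toy precision selection: when the data is sparse, the n-bit
    sub-ADC alone suffices and saves energy; otherwise the n-bit and
    m-bit sub-ADCs are chained to form (n+m)-bit precision."""
    bits = n if sparse else n + m
    return sar_adc(v_in, v_ref, bits), bits
```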
-
Publication number: 20240062331
Abstract: Systems/techniques that facilitate deep learning robustness against display field of view (DFOV) variations are provided. In various embodiments, a system can access a deep learning neural network and a medical image. In various aspects, a first DFOV, and thus a first spatial resolution, on which the deep learning neural network is trained can fail to match a second DFOV, and thus a second spatial resolution, exhibited by the medical image. In various instances, the system can execute the deep learning neural network on a resampled version of the medical image, where the resampled version of the medical image can exhibit the first DFOV and thus the first spatial resolution. In various cases, the system can generate the resampled version of the medical image by up-sampling or down-sampling the medical image until it exhibits the first DFOV and thus the first spatial resolution.
Type: Application
Filed: August 19, 2022
Publication date: February 22, 2024
Inventors: Rajesh Langoju, Prasad Sudhakara Murthy, Utkarsh Agrawal, Risa Shigemasa, Bhushan Patil, Bipul Das, Yasuhiro Imai
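The resampling idea can be sketched as scaling the image matrix by the ratio of the acquired DFOV to the training DFOV, so that the pixel spacing matches what the network saw during training. The function name, nearest-neighbor indexing, and the exact resampling rule below are illustrative assumptions, not the patented method.

```python
import numpy as np

def resample_to_dfov(image, acquired_dfov_mm, trained_dfov_mm):
    """Resample a square image so its pixel spacing matches the DFOV
    the network was trained on: scale the matrix by the DFOV ratio.
    Nearest-neighbor indexing keeps the sketch short; a real pipeline
    would use a band-limited interpolator."""
    n = image.shape[0]
    new_n = int(round(n * acquired_dfov_mm / trained_dfov_mm))
    idx = (np.arange(new_n) * n / new_n).astype(int)
    return image[np.ix_(idx, idx)]
```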
-
Publication number: 20230409673
Abstract: Systems/techniques that facilitate improved uncertainty scoring for neural networks via stochastic weight perturbations are provided. In various embodiments, a system can access a trained neural network and/or a data candidate on which the trained neural network is to be executed. In various aspects, the system can generate an uncertainty indicator representing how confidently executable or how unconfidently executable the trained neural network is with respect to the data candidate, based on a set of perturbed instantiations of the trained neural network.
Type: Application
Filed: June 20, 2022
Publication date: December 21, 2023
Inventors: Ravishankar Hariharan, Rohan Keshav Patil, Rahul Venkataramani, Prasad Sudhakara Murthy, Deepa Anand, Utkarsh Agrawal
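A minimal sketch of the perturbed-instantiation idea: run the model several times under small random weight perturbations and report the spread of its outputs as the uncertainty indicator. The `predict` callable and the Gaussian perturbation scheme are stand-ins assumed for illustration.

```python
import numpy as np

def perturbation_uncertainty(predict, weights, x, n_runs=20, sigma=0.01, seed=0):
    """Execute the model under small random weight perturbations and
    report the spread of its outputs: a network that is confidently
    executable on x barely changes its prediction when its weights
    are jittered."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_runs):
        perturbed = [w + sigma * rng.standard_normal(w.shape) for w in weights]
        outputs.append(predict(perturbed, x))
    return float(np.std(outputs))
```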
-
Patent number: 11823354
Abstract: A computer-implemented method for correcting artifacts in computed tomography data is provided. The method includes inputting a sinogram into a trained sinogram correction network, wherein the sinogram is missing a pixel value for at least one pixel. The method also includes processing the sinogram via one or more layers of the trained sinogram correction network, wherein processing the sinogram includes deriving complementary information from the sinogram and estimating the pixel value for the at least one pixel based on the complementary information. The method further includes outputting from the trained sinogram correction network a corrected sinogram having the estimated pixel value.
Type: Grant
Filed: April 8, 2021
Date of Patent: November 21, 2023
Assignee: GE Precision Healthcare LLC
Inventors: Bhushan Dayaram Patil, Rajesh Langoju, Utkarsh Agrawal, Bipul Das, Jiang Hsieh
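One classical source of "complementary information" in a sinogram is conjugate-ray redundancy: over a full 360° parallel-beam scan, the ray (θ, s) and the ray (θ + π, −s) measure the same line integral. The sketch below fills missing samples from their conjugates; the patented network learns such relationships from data rather than hard-coding this rule.

```python
import numpy as np

def fill_from_conjugate(sino, missing):
    """Estimate missing detector samples from their conjugate rays:
    over a full 360-degree parallel-beam scan, (theta, s) and
    (theta + pi, -s) measure the same line integral.
    `sino` has shape (n_angles, n_det) covering [0, 2*pi)."""
    n_angles, n_det = sino.shape
    filled = sino.copy()
    for a, d in zip(*np.nonzero(missing)):
        conj_a = (a + n_angles // 2) % n_angles  # theta + pi
        conj_d = n_det - 1 - d                   # s -> -s
        filled[a, d] = sino[conj_a, conj_d]
    return filled
```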
-
Publication number: 20230342427
Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
Type: Application
Filed: June 28, 2023
Publication date: October 26, 2023
Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
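The core data-generation step can be sketched as projecting a 3D volume to a synthetic 2D image and carrying its voxel ground truth along the same rays. A mean projection stands in for a modality-specific renderer here; the function and its labeling rule are illustrative assumptions.

```python
import numpy as np

def synth_2d_with_labels(volume, mask3d, axis=0):
    """Collapse a 3D scan into a synthetic 2D image (a mean projection
    standing in for a modality-specific renderer) and project its
    voxel-level ground truth onto the same 2D grid."""
    image2d = volume.mean(axis=axis)
    # a pixel is positive if any voxel along its ray is positive
    label2d = mask3d.any(axis=axis)
    return image2d, label2d
```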
-
Patent number: 11727086
Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
Type: Grant
Filed: November 10, 2020
Date of Patent: August 15, 2023
Assignee: GE PRECISION HEALTHCARE LLC
Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
-
Publication number: 20230177747
Abstract: Systems/techniques that facilitate machine learning generation of low-noise and high structural conspicuity images are provided. In various embodiments, a system can access an image and can apply at least one of image denoising or image resolution enhancement to the image, thereby yielding a first intermediary image. In various instances, the system can generate, via execution of a plurality of machine learning models, a plurality of second intermediary images based on the first intermediary image, wherein a given machine learning model in the plurality of machine learning models receives as input the first intermediary image, wherein the given machine learning model produces as output a given second intermediary image in the plurality of second intermediary images, and wherein the given second intermediary image represents a kernel-transformed version of the first intermediary image. In various cases, the system can generate a blended image based on the plurality of second intermediary images.
Type: Application
Filed: December 6, 2021
Publication date: June 8, 2023
Inventors: Rajesh Veera Venkata Lakshmi Langoju, Utkarsh Agrawal, Bipul Das, Risa Shigemasa, Yasuhiro Imai, Jiang Hsieh
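The final blending stage can be sketched as a normalized weighted average over the kernel-transformed outputs. The models below are plain callables standing in for the trained networks, and the blend weights are an assumed input.

```python
import numpy as np

def blend_kernel_outputs(image, kernel_models, blend_weights):
    """Run each kernel-transform model on the (already denoised or
    enhanced) input and return a normalized weighted average of the
    outputs; the models are plain callables standing in for trained
    networks."""
    outs = np.stack([model(image) for model in kernel_models])
    w = np.asarray(blend_weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, outs, axes=1)
```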
-
Publication number: 20230048231
Abstract: Various methods and systems are provided for computed tomography imaging. In one embodiment, a method includes acquiring, with an x-ray detector and an x-ray source coupled to a gantry, a three-dimensional image volume of a subject while the subject moves through a bore of the gantry and the gantry rotates the x-ray detector and the x-ray source around the subject, inputting the three-dimensional image volume to a trained deep neural network to generate a corrected three-dimensional image volume with a reduction in aliasing artifacts present in the three-dimensional image volume, and outputting the corrected three-dimensional image volume. In this way, aliasing artifacts caused by sub-sampling may be removed from computed tomography images while preserving details, texture, and sharpness in the computed tomography images.
Type: Application
Filed: August 11, 2021
Publication date: February 16, 2023
Inventors: Rajesh Langoju, Utkarsh Agrawal, Risa Shigemasa, Bipul Das, Yasuhiro Imai, Jiang Hsieh
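At inference time the correction reduces to applying the trained network to the reconstructed volume. The sketch below models the network as predicting the aliasing residual to subtract, which is one common formulation for artifact-reduction networks, not necessarily the patented one.

```python
import numpy as np

def correct_aliasing(volume, artifact_net):
    """Inference wrapper: the trained network (a callable stand-in
    here) predicts the aliasing residual present in the sub-sampled
    reconstruction, which is then subtracted out."""
    return volume - artifact_net(volume)
```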
-
Publication number: 20230052595
Abstract: Techniques are described for enhancing the quality of three-dimensional (3D) anatomy scan images using deep learning. According to an embodiment, a system is provided that comprises a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component that receives a scan image generated from 3D scan data relative to a first axis of a 3D volume, and an enhancement component that applies an enhancement model to the scan image to generate an enhanced scan image having a higher resolution relative to the scan image. The enhancement model comprises a deep learning neural network model trained on training image pairs respectively comprising a low-resolution scan image and a corresponding high-resolution scan image respectively generated relative to a second axis of the 3D volume.
Type: Application
Filed: August 16, 2021
Publication date: February 16, 2023
Inventors: Rajesh Veera Venkata Lakshmi Langoju, Utkarsh Agrawal, Bipul Das, Risa Shigemasa, Yasuhiro Imai, Jiang Hsieh
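The training-pair construction can be sketched as slicing the volume along its high-resolution axis and decimating each slice to simulate the low-resolution counterpart; the trained model is then applied to slices along the other axis. Decimation as the low-resolution simulator is an assumption made for brevity.

```python
import numpy as np

def make_training_pairs(volume, down_factor=2):
    """Build (low-res, high-res) slice pairs along the axis of a 3D
    volume where resolution is high; simple decimation simulates the
    low-resolution input. The trained model is later applied to
    slices along the other, genuinely low-resolution axis."""
    pairs = []
    for hr_slice in volume:  # slices along the first (high-res) axis
        lr_slice = hr_slice[::down_factor, ::down_factor]
        pairs.append((lr_slice, hr_slice))
    return pairs
```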
-
Publication number: 20230029188
Abstract: The current disclosure provides methods and systems to reduce an amount of structured and unstructured noise in image data. Specifically, a multi-stage deep learning method is provided, comprising training a deep learning network using a set of training pairs interchangeably including input data from a first noisy dataset with a first noise level and target data from a second noisy dataset with a second noise level, and input data from the second noisy dataset and target data from the first noisy dataset; generating an ultra-low-noise data equivalent based on low-noise data fed into the trained deep learning network; and retraining the deep learning network on the set of training pairs using the target data of the set of training pairs in a first retraining step, and using the ultra-low-noise data equivalent as target data in a second retraining step.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Rajesh Langoju, Utkarsh Agrawal, Bhushan Patil, Vanika Singhal, Bipul Das, Jiang Hsieh
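The "interchangeable" pairing in the first stage, where each noisy scan serves as both input and target, can be sketched as:

```python
def interchanged_pairs(scans_a, scans_b):
    """Stage-one training set: each pair of independent noise
    realizations is used in both directions (a -> b and b -> a), so
    the network learns to map one noise level to the other rather
    than to a clean target."""
    pairs = []
    for a, b in zip(scans_a, scans_b):
        pairs.append((a, b))
        pairs.append((b, a))
    return pairs
```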
-
Publication number: 20230018833
Abstract: Techniques are described for generating multimodal training data cohorts tailored to specific clinical machine learning (ML) model inferencing tasks. In an embodiment, a method comprises accessing, by a system comprising a processor, multimodal clinical data for a plurality of subjects included in one or more clinical data sources. The method further comprises selecting, by the system, datasets from the multimodal clinical data based on the datasets respectively comprising subsets of the multimodal clinical data that satisfy criteria determined to be relevant to a clinical processing task. The method further comprises generating, by the system, a training data cohort comprising the datasets for training a clinical inferencing model to perform the clinical processing task.
Type: Application
Filed: July 19, 2021
Publication date: January 19, 2023
Inventors: Bipul Das, Rakesh Mullick, Utkarsh Agrawal, KS Shriram, Sohan Ranjan, Tao Tan
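The dataset-selection step amounts to filtering records against task-relevance predicates. A minimal sketch, with field names that are illustrative rather than taken from the patent:

```python
def build_cohort(records, criteria):
    """Keep only the multimodal records whose fields satisfy every
    task-relevance predicate. Field names and predicates are
    illustrative, not from the patent."""
    return [
        rec for rec in records
        if all(predicate(rec.get(field)) for field, predicate in criteria.items())
    ]
```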
-
Publication number: 20230013779
Abstract: Systems/techniques that facilitate self-supervised deblurring are provided. In various embodiments, a system can access an input image generated by an imaging device. In various aspects, the system can train, in a self-supervised manner based on a point spread function of the imaging device, a machine learning model to deblur the input image. More specifically, the system can append to the model one or more non-trainable convolution layers having a blur kernel that is based on the point spread function of the imaging device. In various aspects, the system can feed the input image to the model, the model can generate a first output image based on the input image, the one or more non-trainable convolution layers can generate a second output image by convolving the first output image with the blur kernel, and the system can update parameters of the model based on a difference between the input image and the second output image.
Type: Application
Filed: July 6, 2021
Publication date: January 19, 2023
Inventors: Rajesh Veera Venkata Lakshmi Langoju, Prasad Sudhakara Murthy, Utkarsh Agrawal, Bhushan D. Patil, Bipul Das
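The training signal can be sketched as: re-blur the model's output with the frozen PSF kernel and penalize the difference from the original input. The helper names are invented, and a hand-rolled "same" convolution stands in for the non-trainable convolution layer.

```python
import numpy as np

def conv2_same(img, kernel):
    """Zero-padded 'same' 2-D convolution (small odd kernels only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def self_supervised_loss(deblur, image, psf):
    """Re-blurring the model's output with the frozen device PSF
    should reproduce the blurry input; the mismatch is the training
    loss, so no sharp ground truth is required. `deblur` stands in
    for the trainable network."""
    reblurred = conv2_same(deblur(image), psf)
    return float(np.mean((image - reblurred) ** 2))
```

With an identity model and a delta-function PSF the loss is exactly zero, which is a handy sanity check for the frozen convolution.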
-
Publication number: 20220327664
Abstract: A computer-implemented method for correcting artifacts in computed tomography data is provided. The method includes inputting a sinogram into a trained sinogram correction network, wherein the sinogram is missing a pixel value for at least one pixel. The method also includes processing the sinogram via one or more layers of the trained sinogram correction network, wherein processing the sinogram includes deriving complementary information from the sinogram and estimating the pixel value for the at least one pixel based on the complementary information. The method further includes outputting from the trained sinogram correction network a corrected sinogram having the estimated pixel value.
Type: Application
Filed: April 8, 2021
Publication date: October 13, 2022
Inventors: Bhushan Dayaram Patil, Rajesh Langoju, Utkarsh Agrawal, Bipul Das, Jiang Hsieh
-
Publication number: 20220101048
Abstract: Techniques are described for generating mono-modality training image data from multi-modality image data and using the mono-modality training image data to train and develop mono-modality image inferencing models. A method embodiment comprises generating, by a system comprising a processor, a synthetic 2D image from a 3D image of a first capture modality, wherein the synthetic 2D image corresponds to a 2D version of the 3D image in a second capture modality, and wherein the 3D image and the synthetic 2D image depict a same anatomical region of a same patient. The method further comprises transferring, by the system, ground truth data for the 3D image to the synthetic 2D image. In some embodiments, the method further comprises employing the synthetic 2D image to facilitate transfer of the ground truth data to a native 2D image captured of the same anatomical region of the same patient using the second capture modality.
Type: Application
Filed: November 10, 2020
Publication date: March 31, 2022
Inventors: Tao Tan, Gopal B. Avinash, Máté Fejes, Ravi Soni, Dániel Attila Szabó, Rakesh Mullick, Vikram Melapudi, Krishna Seetharam Shriram, Sohan Rashmi Ranjan, Bipul Das, Utkarsh Agrawal, László Ruskó, Zita Herczeg, Barbara Darázs
-
Publication number: 20210406681
Abstract: Techniques are provided for learning loss functions using DL networks and integrating these loss functions into DL-based image transformation architectures. In one embodiment, a method is provided that comprises facilitating training, by a system operatively coupled to a processor, a first deep learning network to predict a loss function metric value of a loss function. The method further comprises employing, by the system, the first deep learning network to predict the loss function metric value in association with training a second deep learning network to perform a defined deep learning task. In various embodiments, the loss function comprises a computationally complex loss function that is not easily implementable in existing deep learning packages, such as a non-differentiable loss function, a feature similarity index match (FSIM) loss function, a system transfer function, a visual information fidelity (VIF) loss function, and the like.
Type: Application
Filed: August 7, 2020
Publication date: December 30, 2021
Inventors: Dattesh Shanbhag, Hariharan Ravishankar, Utkarsh Agrawal
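The first stage, fitting a differentiable predictor of a metric that is awkward to backpropagate through, can be sketched with a toy linear model trained by least-mean-squares. Everything here (the features, the model, the training rule) is a stand-in for the first deep learning network, chosen only to make the idea concrete.

```python
import numpy as np

def train_loss_predictor(metric_fn, make_pair, steps=200, lr=0.1, seed=0):
    """Fit a tiny linear model (w . features) to predict an
    image-quality metric by least-mean-squares; the learned predictor
    is differentiable and can then serve as a surrogate loss when
    training a second network. A toy stand-in for the first deep
    learning network in the abstract."""
    rng = np.random.default_rng(seed)
    w = np.zeros(2)
    for _ in range(steps):
        a, b = make_pair(rng)
        feats = np.array([np.mean((a - b) ** 2), np.mean(np.abs(a - b))])
        pred = w @ feats
        target = metric_fn(a, b)
        w += lr * (target - pred) * feats  # gradient step on squared error
    return w
```

In the patent the metric would be something like FSIM or VIF evaluated on image pairs; the learned network's prediction, unlike the metric itself, is cheap and differentiable inside standard deep learning packages.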