Patents by Inventor Gopalkrishna Veni
Gopalkrishna Veni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11551034
  Abstract: Described herein are systems, methods, and other techniques for training a generative adversarial network (GAN) to perform an image-to-image transformation for recognizing text. A pair of training images are provided to the GAN. The pair of training images include a training image containing a set of characters in handwritten form and a reference training image containing the set of characters in machine-recognizable form. The GAN includes a generator and a discriminator. The generated image is generated using the generator based on the training image. Update data is generated using the discriminator based on the generated image and the reference training image. The GAN is trained by modifying one or both of the generator and the discriminator using the update data.
  Type: Grant
  Filed: October 8, 2020
  Date of Patent: January 10, 2023
  Assignee: Ancestry.com Operations Inc.
  Inventors: Mostafa Karimi, Gopalkrishna Veni, Yen-Yun Yu
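  As a rough illustration of the training loop this abstract describes (the generator produces a machine-readable rendering from a handwriting image, the discriminator compares it against the reference rendering, and the resulting update data modifies one or both networks), here is a minimal PyTorch sketch. The tiny network definitions, loss weighting, and optimizer settings are assumptions chosen for brevity, not the patented method.

  ```python
  # Hypothetical sketch of one GAN training step for handwritten-to-machine-font
  # image translation, loosely following the abstract above. Shapes, losses, and
  # hyperparameters are illustrative assumptions only.
  import torch
  import torch.nn as nn

  class TinyGenerator(nn.Module):
      """Maps a handwriting image to a machine-recognizable rendering (toy model)."""
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
              nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
          )
      def forward(self, x):
          return self.net(x)

  class TinyDiscriminator(nn.Module):
      """Scores whether an image looks like a real machine-font rendering."""
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
          )
      def forward(self, x):
          return self.net(x)

  def train_step(G, D, opt_g, opt_d, handwriting, reference):
      bce = nn.BCEWithLogitsLoss()
      # Generator produces a machine-readable rendering from the handwriting image.
      fake = G(handwriting)

      # Discriminator update: real reference images vs. generated images.
      d_real = D(reference)
      d_fake = D(fake.detach())
      loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
      opt_d.zero_grad(); loss_d.backward(); opt_d.step()

      # Generator update: fool the discriminator and stay close to the reference.
      d_fake = D(fake)
      loss_g = bce(d_fake, torch.ones_like(d_fake)) + nn.functional.l1_loss(fake, reference)
      opt_g.zero_grad(); loss_g.backward(); opt_g.step()
      return loss_d.item(), loss_g.item()

  if __name__ == "__main__":
      G, D = TinyGenerator(), TinyDiscriminator()
      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      handwriting = torch.rand(4, 1, 64, 64)   # stand-in for word-image crops
      reference = torch.rand(4, 1, 64, 64)     # stand-in for machine-font renderings
      print(train_step(G, D, opt_g, opt_d, handwriting, reference))
  ```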
- Patent number: 11282206
  Abstract: Image segmentation based on the combination of a deep learning network and a shape-guided deformable model is provided. In various embodiments, a time sequence of images is received. The sequence of images is provided to a convolutional network to obtain a sequence of preliminary segmentations. The sequence of preliminary segmentations labels a region of interest in each of the images of the sequence. A reference and auxiliary mask are generated from the sequence of preliminary segmentations. The reference mask corresponds to the region of interest. The auxiliary mask corresponds to areas outside the region of interest. A final segmentation corresponding to the region of interest is generated for each of the sequence of images by applying a deformable model to the composite mask with reference to the auxiliary mask.
  Type: Grant
  Filed: June 3, 2020
  Date of Patent: March 22, 2022
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Gopalkrishna Veni, Mehdi Moradi
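  The two-stage idea in this abstract (a convolutional network yields preliminary masks, a reference mask and an auxiliary mask are derived from them, and a deformable model refines the result while staying out of the auxiliary regions) can be sketched roughly as below. The stand-in "network", the smoothing-based refinement rule, and all function names are illustrative assumptions, not the patented algorithm.

  ```python
  # Hypothetical two-stage segmentation sketch: preliminary masks -> reference and
  # auxiliary masks -> smoothing-based "deformable" refinement constrained by the
  # auxiliary mask. Illustrative only.
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def preliminary_segmentation(frames):
      """Stand-in for the convolutional network: threshold each frame."""
      return [(f > f.mean()).astype(np.uint8) for f in frames]

  def build_masks(prelim):
      """Reference mask: pixels labeled ROI in most frames; auxiliary: the rest."""
      stack = np.stack(prelim)
      reference = (stack.mean(axis=0) > 0.5).astype(np.uint8)
      auxiliary = 1 - reference
      return reference, auxiliary

  def refine(frame, reference, auxiliary, iters=20, sigma=1.5):
      """Smooth the mask, pull it toward bright image regions, avoid non-ROI areas."""
      mask = reference.astype(float)
      image = (frame - frame.min()) / (frame.max() - frame.min() + 1e-8)
      for _ in range(iters):
          smoothed = gaussian_filter(mask, sigma)            # curvature-like smoothing
          mask = ((smoothed + 0.1 * image) > 0.55).astype(float)
          mask[auxiliary == 1] = 0.0                         # constraint from the auxiliary mask
      return mask.astype(np.uint8)

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      frames = [rng.random((64, 64)) for _ in range(5)]      # toy "time sequence"
      prelim = preliminary_segmentation(frames)
      ref, aux = build_masks(prelim)
      final = [refine(f, ref, aux) for f in frames]
      print(final[0].sum(), "ROI pixels in first frame")
  ```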
- Patent number: 11094069
  Abstract: A mechanism is provided in a data processing system comprising a processor and a memory, the memory comprising instructions executed by the processor to specifically configure the processor to implement a multi-atlas segmentation engine. An offline registration component performs registration of a plurality of atlases with a set of image templates to thereby generate and store, in a first registration storage device, a plurality of offline registrations. The atlases are annotated training medical images and the image templates are non-annotated medical images. The multi-atlas segmentation engine receives a target image. An image selection component selects a subset of image templates in the set of image templates based on the target image. An online registration component performs registration of the subset of image templates with the target image to generate a plurality of online registrations.
  Type: Grant
  Filed: January 23, 2019
  Date of Patent: August 17, 2021
  Assignee: International Business Machines Corporation
  Inventors: Deepika Kakrania, Tanveer F. Syeda-Mahmood, Gopalkrishna Veni, Hongzhi Wang, Rui Zhang
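  A toy sketch of the offline/online split this abstract describes follows: atlas-to-template registrations are computed and cached offline, a subset of templates is selected for the target image online, the templates are registered to the target, and the composed transforms propagate atlas labels for fusion. The translation-only "registration" and the majority-vote fusion are stand-ins chosen for brevity, not the patented engine.

  ```python
  # Hypothetical multi-atlas segmentation sketch with cached offline registrations
  # and a small online registration step. Illustrative stand-ins only.
  import numpy as np

  def register_translation(moving, fixed, max_shift=5):
      """Return the (dy, dx) shift of `moving` that best matches `fixed` (SSD)."""
      best, best_err = (0, 0), np.inf
      for dy in range(-max_shift, max_shift + 1):
          for dx in range(-max_shift, max_shift + 1):
              err = np.sum((np.roll(moving, (dy, dx), axis=(0, 1)) - fixed) ** 2)
              if err < best_err:
                  best, best_err = (dy, dx), err
      return best

  def warp(labels, shift):
      return np.roll(labels, shift, axis=(0, 1))

  rng = np.random.default_rng(1)
  # Atlases: (image, annotation) pairs; templates: non-annotated images.
  atlases = [(rng.random((32, 32)), (rng.random((32, 32)) > 0.5).astype(int)) for _ in range(3)]
  templates = [rng.random((32, 32)) for _ in range(4)]

  # Offline phase: register every atlas to every template and cache the shifts.
  offline = {(a, t): register_translation(atlases[a][0], templates[t])
             for a in range(len(atlases)) for t in range(len(templates))}

  # Online phase: select the templates closest to the target, register them to it.
  target = rng.random((32, 32))
  scores = [np.sum((tpl - target) ** 2) for tpl in templates]
  selected = np.argsort(scores)[:2]                      # subset of image templates
  online = {t: register_translation(templates[t], target) for t in selected}

  # Compose offline (atlas->template) with online (template->target) shifts,
  # propagate atlas labels, and fuse with a majority vote.
  votes = []
  for a in range(len(atlases)):
      for t in selected:
          total = tuple(np.add(offline[(a, t)], online[t]))
          votes.append(warp(atlases[a][1], total))
  segmentation = (np.mean(votes, axis=0) > 0.5).astype(int)
  print("fused segmentation foreground pixels:", segmentation.sum())
  ```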
- Publication number: 20210110205
  Abstract: Described herein are systems, methods, and other techniques for training a generative adversarial network (GAN) to perform an image-to-image transformation for recognizing text. A pair of training images are provided to the GAN. The pair of training images include a training image containing a set of characters in handwritten form and a reference training image containing the set of characters in machine-recognizable form. The GAN includes a generator and a discriminator. The generated image is generated using the generator based on the training image. Update data is generated using the discriminator based on the generated image and the reference training image. The GAN is trained by modifying one or both of the generator and the discriminator using the update data.
  Type: Application
  Filed: October 8, 2020
  Publication date: April 15, 2021
  Applicant: Ancestry.com Operations Inc.
  Inventors: Mostafa Karimi, Gopalkrishna Veni, Yen-Yun Yu
- Patent number: 10937172
  Abstract: A mechanism is provided in a data processing system comprising a processor and a memory, the memory comprising instructions executed by the processor to specifically configure the processor to implement a multi-atlas segmentation engine. An offline registration component performs registration of a plurality of atlases with a set of image templates to thereby generate and store, in a first registration storage device, a plurality of offline registrations. The atlases are annotated training medical images and the image templates are non-annotated medical images. The multi-atlas segmentation engine receives a target image. An image selection component selects a subset of image templates in the set of image templates based on the target image. An online registration component performs registration of the subset of image templates with the target image to generate a plurality of online registrations.
  Type: Grant
  Filed: July 10, 2018
  Date of Patent: March 2, 2021
  Assignee: International Business Machines Corporation
  Inventors: Deepika Kakrania, Tanveer F. Syeda-Mahmood, Gopalkrishna Veni, Hongzhi Wang, Rui Zhang
- Patent number: 10896508
  Abstract: A method comprises (a) collecting (i) a set of chest computed tomography angiography (CTA) images scanned in the axial view and (ii) a manual segmentation of the images, for each one of multiple organs; (b) preprocessing the images such that they share the same field of view (FOV); (c) using both the images and their manual segmentation to train a supervised deep learning segmentation network, wherein loss is determined from a multi-dice score that is the summation of the dice scores for all the multiple organs, each dice score being computed as the similarity between the manual segmentation and the output of the network for one of the organs; (d) testing a given (input) pre-processed image on the trained network, thereby obtaining segmented output of the given image; and (e) smoothing the segmented output of the given image.
  Type: Grant
  Filed: April 26, 2018
  Date of Patent: January 19, 2021
  Assignee: International Business Machines Corporation
  Inventors: Ahmed El Harouni, Mehdi Moradi, Prasanth Prasanna, Tanveer F. Syeda-Mahmood, Hui Tang, Gopalkrishna Veni, Hongzhi Wang
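  The multi-dice score described here, a summation of per-organ Dice scores between the manual segmentation and the network output, maps directly onto a short loss function. The sketch below assumes per-organ probability maps of shape (batch, organs, H, W) and adds a small smoothing term; both are illustrative choices rather than the patent's exact formulation.

  ```python
  # Minimal sketch of a multi-dice loss: per-organ Dice scores are summed and the
  # loss is the number of organs minus that sum (0 when every organ is perfect).
  import torch

  def multi_dice_loss(pred, target, eps=1e-6):
      """pred, target: (batch, organs, H, W); pred holds per-organ probabilities,
      target holds per-organ binary manual segmentations."""
      dims = (0, 2, 3)
      intersection = (pred * target).sum(dims)
      denom = pred.sum(dims) + target.sum(dims)
      dice_per_organ = (2 * intersection + eps) / (denom + eps)   # one score per organ
      multi_dice = dice_per_organ.sum()                           # summation over all organs
      return dice_per_organ.numel() - multi_dice

  if __name__ == "__main__":
      pred = torch.rand(2, 4, 64, 64)                  # e.g. 4 organs in a chest CTA slice
      target = (torch.rand(2, 4, 64, 64) > 0.5).float()
      print(multi_dice_loss(pred, target))
  ```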
- Publication number: 20200294242
  Abstract: Image segmentation based on the combination of a deep learning network and a shape-guided deformable model is provided. In various embodiments, a time sequence of images is received. The sequence of images is provided to a convolutional network to obtain a sequence of preliminary segmentations. The sequence of preliminary segmentations labels a region of interest in each of the images of the sequence. A reference and auxiliary mask are generated from the sequence of preliminary segmentations. The reference mask corresponds to the region of interest. The auxiliary mask corresponds to areas outside the region of interest. A final segmentation corresponding to the region of interest is generated for each of the sequence of images by applying a deformable model to the composite mask with reference to the auxiliary mask.
  Type: Application
  Filed: June 3, 2020
  Publication date: September 17, 2020
  Inventors: Gopalkrishna Veni, Mehdi Moradi
- Patent number: 10699414
  Abstract: Image segmentation based on the combination of a deep learning network and a shape-guided deformable model is provided. In various embodiments, a time sequence of images is received. The sequence of images is provided to a convolutional network to obtain a sequence of preliminary segmentations. The sequence of preliminary segmentations labels a region of interest in each of the images of the sequence. A reference and auxiliary mask are generated from the sequence of preliminary segmentations. The reference mask corresponds to the region of interest. The auxiliary mask corresponds to areas outside the region of interest. A final segmentation corresponding to the region of interest is generated for each of the sequence of images by applying a deformable model to the composite mask with reference to the auxiliary mask.
  Type: Grant
  Filed: April 3, 2018
  Date of Patent: June 30, 2020
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Gopalkrishna Veni, Mehdi Moradi
- Publication number: 20200020106
  Abstract: A mechanism is provided in a data processing system comprising a processor and a memory, the memory comprising instructions executed by the processor to specifically configure the processor to implement a multi-atlas segmentation engine. An offline registration component performs registration of a plurality of atlases with a set of image templates to thereby generate and store, in a first registration storage device, a plurality of offline registrations. The atlases are annotated training medical images and the image templates are non-annotated medical images. The multi-atlas segmentation engine receives a target image. An image selection component selects a subset of image templates in the set of image templates based on the target image. An online registration component performs registration of the subset of image templates with the target image to generate a plurality of online registrations.
  Type: Application
  Filed: July 10, 2018
  Publication date: January 16, 2020
  Inventors: Deepika Kakrania, Tanveer F. Syeda-Mahmood, Gopalkrishna Veni, Hongzhi Wang, Rui Zhang
- Publication number: 20200020107
  Abstract: A mechanism is provided in a data processing system comprising a processor and a memory, the memory comprising instructions executed by the processor to specifically configure the processor to implement a multi-atlas segmentation engine. An offline registration component performs registration of a plurality of atlases with a set of image templates to thereby generate and store, in a first registration storage device, a plurality of offline registrations. The atlases are annotated training medical images and the image templates are non-annotated medical images. The multi-atlas segmentation engine receives a target image. An image selection component selects a subset of image templates in the set of image templates based on the target image. An online registration component performs registration of the subset of image templates with the target image to generate a plurality of online registrations.
  Type: Application
  Filed: January 23, 2019
  Publication date: January 16, 2020
  Inventors: Deepika Kakrania, Tanveer F. Syeda-Mahmood, Gopalkrishna Veni, Hongzhi Wang, Rui Zhang
- Publication number: 20190304095
  Abstract: Image segmentation based on the combination of a deep learning network and a shape-guided deformable model is provided. In various embodiments, a time sequence of images is received. The sequence of images is provided to a convolutional network to obtain a sequence of preliminary segmentations. The sequence of preliminary segmentations labels a region of interest in each of the images of the sequence. A reference and auxiliary mask are generated from the sequence of preliminary segmentations. The reference mask corresponds to the region of interest. The auxiliary mask corresponds to areas outside the region of interest. A final segmentation corresponding to the region of interest is generated for each of the sequence of images by applying a deformable model to the composite mask with reference to the auxiliary mask.
  Type: Application
  Filed: April 3, 2018
  Publication date: October 3, 2019
  Inventors: Gopalkrishna Veni, Mehdi Moradi
- Publication number: 20190244357
  Abstract: A method comprises (a) collecting (i) a set of chest computed tomography angiography (CTA) images scanned in the axial view and (ii) a manual segmentation of the images, for each one of multiple organs; (b) preprocessing the images such that they share the same field of view (FOV); (c) using both the images and their manual segmentation to train a supervised deep learning segmentation network, wherein loss is determined from a multi-dice score that is the summation of the dice scores for all the multiple organs, each dice score being computed as the similarity between the manual segmentation and the output of the network for one of the organs; (d) testing a given (input) pre-processed image on the trained network, thereby obtaining segmented output of the given image; and (e) smoothing the segmented output of the given image.
  Type: Application
  Filed: April 26, 2018
  Publication date: August 8, 2019
  Inventors: Ahmed El Harouni, Mehdi Moradi, Prasanth Prasanna, Tanveer F. Syeda-Mahmood, Hui Tang, Gopalkrishna Veni, Hongzhi Wang