Patents by Inventor Bhavna Josephine Antony
Bhavna Josephine Antony has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11847764
Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
Type: Grant
Filed: December 21, 2020
Date of Patent: December 19, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
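The loss described in this abstract — a mean-square-error measurement of a noisy image taken relative to the MSE of a corresponding clean image — can be illustrated with a small sketch. This is one hypothetical reading of the claim language (the patent record does not publish code), and all function and variable names here are illustrative, not the patent's own:

```python
import numpy as np

def relative_mse_loss(enhanced, clean, noisy, eps=1e-8):
    """One plausible reading of the abstract's loss: the MSE of the
    enhanced image w.r.t. the clean target, normalised by the MSE of
    the original noisy image w.r.t. the same target."""
    mse_enhanced = np.mean((enhanced - clean) ** 2)
    mse_noisy = np.mean((noisy - clean) ** 2)
    return mse_enhanced / (mse_noisy + eps)

# toy check: a perfectly enhanced image drives the relative loss to zero,
# while an unenhanced (still-noisy) image keeps it near one
clean = np.ones((4, 4))
noisy = clean + 0.1
print(relative_mse_loss(clean, clean, noisy))  # → 0.0
```

In a full GAN training loop this term would be combined with the adversarial (discriminator) loss; the sketch isolates only the relative-MSE component named in the abstract.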
-
Patent number: 11727534
Abstract: In an aspect for generating device-specific OCT images, one or more processors may be configured for receiving, at a unified domain generator, first image data corresponding to OCT image scans captured by one or more OCT devices; processing, by the unified domain generator, the first image data to generate second image data corresponding to a unified representation of the OCT image scans; determining, by a unified discriminator, third image data corresponding to a quality subset of the unified representation of the OCT image scans having a base resolution satisfying a first condition and a base noise type satisfying a second condition; and processing, using a conditional generator, the third image data to generate fourth image data corresponding to device-specific OCT image scans having a device-specific resolution satisfying a third condition and a device-specific noise type satisfying a fourth condition.
Type: Grant
Filed: December 8, 2020
Date of Patent: August 15, 2023
Assignee: International Business Machines Corporation
Inventors: Suman Sedai, Stefan Renard Maetschke, Bhavna Josephine Antony, Hsin-Hao Yu, Rahil Garnavi
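The three-stage pipeline this abstract claims — a unified domain generator, a quality-filtering discriminator, and a conditional generator — can be sketched as plain functions. The transforms below are trivial placeholders (normalisation, a std-based filter, per-device scaling) standing in for the learned networks; every name and threshold is an assumption for illustration, not from the patent:

```python
import numpy as np

def unified_domain_generator(scans):
    # map device-specific OCT scans into one shared representation
    # (placeholder: z-score normalisation stands in for the learned mapping)
    return [(s - s.mean()) / (s.std() + 1e-8) for s in scans]

def unified_discriminator(unified, noise_threshold=1.05):
    # keep only the "quality subset": scans whose (stand-in) noise
    # estimate satisfies the condition
    return [u for u in unified if u.std() <= noise_threshold]

def conditional_generator(unified, target_device):
    # re-style the unified scans toward a target device's resolution
    # and noise profile (placeholder: a hypothetical per-device factor)
    factors = {"device_a": 1.0, "device_b": 0.5}
    return [u * factors[target_device] for u in unified]

rng = np.random.default_rng(0)
scans = [rng.normal(size=(8, 8)) for _ in range(3)]
out = conditional_generator(
    unified_discriminator(unified_domain_generator(scans)), "device_b")
print(len(out))  # → 3
```

The point of the sketch is the data flow: all inputs are first pulled into one domain, filtered for quality, and only then re-rendered in a chosen device's style.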
-
Patent number: 11416986
Abstract: Aspects of the invention include a computer implemented method for simulating visual field test results from structural scans, the method includes processing eye image data to extract visual functioning related features. Additionally, generating a representation of a visual function of the eye that is independent of a visual field test (VFT) configuration. Then generating a simulated VFT configuration specific test result based at least in part on the representation.
Type: Grant
Filed: April 13, 2020
Date of Patent: August 16, 2022
Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK UNIVERSITY
Inventors: Hsin-Hao Yu, Stefan Renard Maetschke, Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
-
Patent number: 11386298
Abstract: Aspects of the invention include systems and methods that train a teacher neural network using labeled images to obtain a trained teacher neural network, each pixel of each of the labeled images being assigned a label that indicates one of a set of classifications. A method includes providing a set of unlabeled images to the trained teacher neural network to generate a set of soft-labeled images, each pixel of each of the soft-labeled images being assigned a soft label that indicates one of the set of classifications and an uncertainty value associated with the soft label, and training a student neural network with a subset of the labeled images and the set of soft-labeled images to obtain a trained student neural network. Student-labeled images are obtained from unlabeled images using the trained student neural network.
Type: Grant
Filed: January 9, 2020
Date of Patent: July 12, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi
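The teacher-student scheme above hinges on two pieces: per-pixel soft labels with an attached uncertainty value, and a student trained on them. The sketch below uses predictive entropy as the uncertainty measure and inverse-uncertainty weighting in the student loss; the patent record does not specify either choice, so both, and all names, are illustrative assumptions:

```python
import numpy as np

def soft_label_with_uncertainty(teacher_probs):
    """Given per-pixel class probabilities from the teacher, return the
    soft label (argmax class) plus an uncertainty value (here: entropy)."""
    labels = teacher_probs.argmax(axis=-1)
    p = np.clip(teacher_probs, 1e-8, 1.0)
    uncertainty = -(p * np.log(p)).sum(axis=-1)
    return labels, uncertainty

def weighted_student_loss(student_probs, labels, uncertainty):
    """Cross-entropy on the soft labels, down-weighted where the teacher
    was uncertain (one common way to use such uncertainty values)."""
    n = labels.size
    picked = np.clip(
        student_probs.reshape(n, -1)[np.arange(n), labels.ravel()], 1e-8, 1.0)
    weights = 1.0 / (1.0 + uncertainty.ravel())
    return float(-(weights * np.log(picked)).mean())

# toy 2x2 "image" with 3 classes; the teacher favours class 0 everywhere
teacher = np.full((2, 2, 3), [0.8, 0.1, 0.1])
labels, unc = soft_label_with_uncertainty(teacher)
print(labels)  # every pixel soft-labelled as class 0
```

In the full method, the student would be trained on a mix of ground-truth-labeled images and these teacher-generated soft-labeled images, then used to label further unlabeled images.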
-
Publication number: 20220180479
Abstract: In an aspect for generating device-specific OCT images, one or more processors may be configured for receiving, at a unified domain generator, first image data corresponding to OCT image scans captured by one or more OCT devices; processing, by the unified domain generator, the first image data to generate second image data corresponding to a unified representation of the OCT image scans; determining, by a unified discriminator, third image data corresponding to a quality subset of the unified representation of the OCT image scans having a base resolution satisfying a first condition and a base noise type satisfying a second condition; and processing, using a conditional generator, the third image data to generate fourth image data corresponding to device-specific OCT image scans having a device-specific resolution satisfying a third condition and a device-specific noise type satisfying a fourth condition.
Type: Application
Filed: December 8, 2020
Publication date: June 9, 2022
Inventors: Suman Sedai, Stefan Renard Maetschke, Bhavna Josephine Antony, Hsin-Hao Yu, Rahil Garnavi
-
Patent number: 11191492
Abstract: A retinal structure and function forecasting method, system, and computer program product include producing an enriched feature representation of clinical measurements and clinical data combined with optical coherence tomography (OCT) data, training a forecasting model with the enriched feature representation, and forecasting a retinal structure at a forecast date based on the trained forecasting model.
Type: Grant
Filed: January 18, 2019
Date of Patent: December 7, 2021
Assignees: International Business Machines Corporation, New York University
Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
-
Publication number: 20210319552
Abstract: Aspects of the invention include a computer implemented method for simulating visual field test results from structural scans, the method includes processing eye image data to extract visual functioning related features. Additionally, generating a representation of a visual function of the eye that is independent of a visual field test (VFT) configuration. Then generating a simulated VFT configuration specific test result based at least in part on the representation.
Type: Application
Filed: April 13, 2020
Publication date: October 14, 2021
Inventors: Hsin-Hao Yu, Stefan Renard Maetschke, Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
-
Publication number: 20210216825
Abstract: Aspects of the invention include systems and methods that train a teacher neural network using labeled images to obtain a trained teacher neural network, each pixel of each of the labeled images being assigned a label that indicates one of a set of classifications. A method includes providing a set of unlabeled images to the trained teacher neural network to generate a set of soft-labeled images, each pixel of each of the soft-labeled images being assigned a soft label that indicates one of the set of classifications and an uncertainty value associated with the soft label, and training a student neural network with a subset of the labeled images and the set of soft-labeled images to obtain a trained student neural network. Student-labeled images are obtained from unlabeled images using the trained student neural network.
Type: Application
Filed: January 9, 2020
Publication date: July 15, 2021
Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi
-
Patent number: 11051689
Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations based on analysis of the collected clinical information and the assessed biometric indications based on the assessed biometric indications satisfying the pre-configured threshold conditions.
Type: Grant
Filed: November 2, 2018
Date of Patent: July 6, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi
-
Patent number: 11024013
Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
Type: Grant
Filed: March 8, 2019
Date of Patent: June 1, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
-
Publication number: 20210150675
Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
Type: Application
Filed: December 21, 2020
Publication date: May 20, 2021
Applicants: International Business Machines Corporation, New York University
Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
-
Patent number: 10832074
Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
Type: Grant
Filed: March 8, 2019
Date of Patent: November 10, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
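The uncertainty-weighted compositing described above can be sketched in a few lines. The patent record does not state how weights are derived from the uncertainty maps; inverse-uncertainty weighting is one natural assumption, and all names below are illustrative:

```python
import numpy as np

def fuse_with_uncertainty(images, uncertainty_maps, eps=1e-8):
    """Combine images pixel-wise, each pixel weighted by the inverse of
    its per-pixel uncertainty, with the weights normalised to sum to one
    at every location (a hypothetical reading of the abstract)."""
    weights = [1.0 / (u + eps) for u in uncertainty_maps]
    total = np.sum(weights, axis=0)
    composite = np.sum(
        [w * img for w, img in zip(weights, images)], axis=0) / total
    return composite

# toy example: where image A has low uncertainty, its pixel dominates
a = np.array([[10.0]]); b = np.array([[0.0]])
ua = np.array([[0.1]]); ub = np.array([[10.0]])
print(fuse_with_uncertainty([a, b], [ua, ub]))  # close to a's value of 10
```

The effect is that each image contributes most where its model was most confident about the structures it detected, which is the stated purpose of the per-pixel weighted images.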
-
Publication number: 20200286208
Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
Type: Application
Filed: March 8, 2019
Publication date: September 10, 2020
Applicants: International Business Machines Corporation, New York University
Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
-
Publication number: 20200285880
Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
Type: Application
Filed: March 8, 2019
Publication date: September 10, 2020
Applicant: International Business Machines Corporation
Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
-
Publication number: 20200229770
Abstract: A retinal structure and function forecasting method, system, and computer program product include producing an enriched feature representation of clinical measurements and clinical data combined with optical coherence tomography (OCT) data, training a forecasting model with the enriched feature representation, and forecasting a retinal structure at a forecast date based on the trained forecasting model.
Type: Application
Filed: January 18, 2019
Publication date: July 23, 2020
Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
-
Publication number: 20200138285
Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations based on analysis of the collected clinical information and the assessed biometric indications based on the assessed biometric indications satisfying the pre-configured threshold conditions.
Type: Application
Filed: November 2, 2018
Publication date: May 7, 2020
Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi