Patents by Inventor Suman Sedai

Suman Sedai has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847764
    Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: December 19, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
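The loss described in the abstract above compares the error of an enhanced intensity image against the error remaining in the noisy input. The following is a minimal NumPy sketch of that relative-MSE idea only, not the patented GAN architecture; the image sizes and noise levels are illustrative assumptions.

```python
import numpy as np

def relative_mse_loss(enhanced, clean, noisy):
    """MSE of the enhanced image relative to the MSE of the noisy input.
    Values below 1.0 indicate the enhancement reduced the error."""
    mse_enhanced = np.mean((enhanced - clean) ** 2)
    mse_noisy = np.mean((noisy - clean) ** 2)
    return mse_enhanced / mse_noisy

# Toy intensity images: each pixel is a return strength in [0, 1].
rng = np.random.default_rng(0)
clean = rng.random((8, 8))
noisy = clean + rng.normal(0.0, 0.10, (8, 8))     # heavily corrupted
enhanced = clean + rng.normal(0.0, 0.02, (8, 8))  # closer to clean

loss = relative_mse_loss(enhanced, clean, noisy)
```

In a full GAN setup a term like this would be combined with the adversarial loss and minimized over the generator's parameters.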
  • Patent number: 11756567
    Abstract: In an approach to generating conversational image representations, one or more computer processors detect one or more utterances by a user, wherein utterances are either textual or acoustic. The one or more computer processors generate one or more image representations of the one or more detected utterances utilizing a generative adversarial network restricted by one or more user privacy parameters, wherein the generative adversarial network is fed with an extracted sentiment, a generated avatar, an identified topic, an extracted location, and one or more user preferences. The one or more computer processors display the generated one or more image representations on one or more devices associated with respective one or more recipients of the one or more utterances.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: September 12, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kimiko Wilson, Jorge Andres Moros Ortiz, Suman Sedai, Khoi-Nguyen Dao Tran
  • Patent number: 11727534
    Abstract: In an aspect for generating a device-specific OCT image, one or more processors may be configured for receiving, at a unified domain generator, first image data corresponding to OCT image scans captured by one or more OCT devices; processing, by the unified domain generator, the first image data to generate second image data corresponding to a unified representation of the OCT image scans; determining, by a unified discriminator, third image data corresponding to a quality subset of the unified representation of the OCT image scans having a base resolution satisfying a first condition and a base noise type satisfying a second condition; and processing, using a conditional generator, the third image data to generate fourth image data corresponding to device-specific OCT image scans having a device-specific resolution satisfying a third condition and a device-specific noise type satisfying a fourth condition.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Suman Sedai, Stefan Renard Maetschke, Bhavna Josephine Antony, Hsin-Hao Yu, Rahil Garnavi
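The abstract above describes a three-stage data flow: unify scans from different devices, filter the unified scans by quality conditions, then re-style them for a target device. The sketch below illustrates only that flow with placeholder function bodies; the normalization, noise metric, device names, and threshold values are all assumptions, not the claimed GAN components.

```python
import numpy as np

def unified_generator(scan):
    # Stand-in: map any device's scan into a shared, normalized representation.
    return (scan - scan.mean()) / (scan.std() + 1e-8)

def quality_discriminator(unified, min_resolution=64, max_noise=0.5):
    # Keep only unified scans meeting the resolution and noise conditions.
    resolution_ok = min(unified.shape) >= min_resolution
    noise_ok = np.std(np.diff(unified, axis=0)) <= max_noise
    return resolution_ok and noise_ok

def conditional_generator(unified, device="device_a"):
    # Stand-in: re-style the unified scan with a device-specific noise profile.
    noise_level = {"device_a": 0.01, "device_b": 0.05}[device]
    rng = np.random.default_rng(1)
    return unified + rng.normal(0.0, noise_level, unified.shape)

# A smooth synthetic "scan" so the quality check passes.
scan = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
unified = unified_generator(scan)
device_specific = None
if quality_discriminator(unified):
    device_specific = conditional_generator(unified, device="device_a")
```

In the patented approach each stage would be a trained network; here the pipeline shape is the only thing being illustrated.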
  • Patent number: 11665381
    Abstract: Content of entertainment media that is being consumed by a user is analyzed. An element of the content that is of a first character is identified. A preference associated with the user to consume entertainment media that contains elements of a second character is identified. An updated version of the element is generated. The updated version of the element is of the second character, such that the media is consumed by the user with the element in the updated version.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: May 30, 2023
    Assignee: Kyndryl, Inc.
    Inventors: Jorge Andres Moros Ortiz, Sree Harish Maathu, Suman Sedai, Turgay Tyler Kay
  • Publication number: 20220353258
    Abstract: In an approach to improve multi-factor authentication, embodiments generate an evaluation-mask over one or more modified items on a modified image created by a generative adversarial network (GAN). Further, embodiments create a scoring grid by comparing an original image with the modified image to identify different pixels between the original image and the modified image, and overlay the evaluation-mask over the identified different pixels on the modified image. Embodiments display the modified image as a multi-factor authentication prompt to a user and prompt the user to provide a response that identifies one or more modifications in the modified image. Additionally, embodiments compute an evaluation score based on a comparison of the response from the user with the evaluation-mask, to validate the response from the user, and authenticate and grant the user access to data or other resources if the evaluation score meets or exceeds a predetermined threshold.
    Type: Application
    Filed: July 19, 2022
    Publication date: November 3, 2022
    Inventors: Jorge Andres Moros Ortiz, Bruno de Assis Marques, Suman Sedai
  • Patent number: 11425121
    Abstract: In an approach to improve multi-factor authentication, embodiments generate an evaluation-mask over one or more modified items on a modified image created by a generative adversarial network (GAN). Further, embodiments create a scoring grid by comparing an original image with the modified image to identify different pixels between the original image and the modified image, and overlay the evaluation-mask over the identified different pixels on the modified image. Embodiments display the modified image as a multi-factor authentication prompt to a user and prompt the user to provide a response that identifies one or more modifications in the modified image. Additionally, embodiments compute an evaluation score based on a comparison of the response from the user with the evaluation-mask, to validate the response from the user, and authenticate and grant the user access to data or other resources if the evaluation score meets or exceeds a predetermined threshold.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: August 23, 2022
    Assignee: International Business Machines Corporation
    Inventors: Jorge Andres Moros Ortiz, Bruno de Assis Marques, Suman Sedai
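The multi-factor authentication scheme above reduces to three steps: diff the original and GAN-modified images to build an evaluation mask, score the user's marked pixels against that mask, and grant access when the score meets a threshold. Below is a minimal sketch of that scoring logic under assumed values (4x4 images, a 0.8 threshold); the patented method additionally uses a GAN to produce the modification.

```python
import numpy as np

def build_evaluation_mask(original, modified):
    # Mask of pixels that differ between the original and modified images.
    return original != modified

def evaluation_score(user_marks, mask):
    # Fraction of truly modified pixels the user correctly identified.
    hits = np.logical_and(user_marks, mask).sum()
    return hits / mask.sum()

THRESHOLD = 0.8  # assumed predetermined threshold

original = np.zeros((4, 4), dtype=int)
modified = original.copy()
modified[1:3, 1:3] = 1            # 4 pixels "modified" by the GAN stand-in
mask = build_evaluation_mask(original, modified)

user_marks = np.zeros((4, 4), dtype=bool)
user_marks[1:3, 1:3] = True       # user identifies all 4 modified pixels
granted = evaluation_score(user_marks, mask) >= THRESHOLD
```

A partial response (say, marking only two of the four pixels) would score 0.5 and be rejected under this threshold.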
  • Patent number: 11416986
    Abstract: Aspects of the invention include a computer-implemented method for simulating visual field test results from structural scans. The method includes processing eye image data to extract visual-function-related features, generating a representation of a visual function of the eye that is independent of a visual field test (VFT) configuration, and generating a simulated VFT-configuration-specific test result based at least in part on the representation.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: August 16, 2022
    Assignees: International Business Machines Corporation, New York University
    Inventors: Hsin-Hao Yu, Stefan Renard Maetschke, Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
  • Patent number: 11386298
    Abstract: Aspects of the invention include systems and methods that train a teacher neural network using labeled images to obtain a trained teacher neural network, each pixel of each of the labeled images being assigned a label that indicates one of a set of classifications. A method includes providing a set of unlabeled images to the trained teacher neural network to generate a set of soft-labeled images, each pixel of each of the soft-labeled images being assigned a soft label that indicates one of the set of classifications and an uncertainty value associated with the soft label, and training a student neural network with a subset of the labeled images and the set of soft-labeled images to obtain a trained student neural network. Student-labeled images are obtained from unlabeled images using the trained student neural network.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: July 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi
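The teacher-student scheme above attaches an uncertainty value to each soft label so the student can discount pixels the teacher is unsure about. The sketch below shows one common way to realize that idea (Monte-Carlo-style repeated predictions, exponential down-weighting); it is an assumed instantiation for illustration, not the specific networks or weighting the patent claims.

```python
import numpy as np

def teacher_soft_labels(images, n_passes=10, seed=0):
    """Stand-in teacher: repeated stochastic predictions give a per-pixel
    soft label (mean probability) and an uncertainty (std across passes)."""
    rng = np.random.default_rng(seed)
    preds = np.clip(
        images[None] + rng.normal(0.0, 0.1, (n_passes,) + images.shape),
        0.0, 1.0,
    )
    return preds.mean(axis=0), preds.std(axis=0)

def distillation_weights(uncertainty, scale=1.0):
    # Down-weight pixels where the teacher is uncertain.
    return np.exp(-scale * uncertainty)

images = np.random.default_rng(1).random((2, 8, 8))  # fake unlabeled batch
soft, unc = teacher_soft_labels(images)
w = distillation_weights(unc)
# A student loss would then weight the per-pixel error, e.g.:
#   loss = mean(w * (student_pred - soft) ** 2)
```

Confident teacher pixels (uncertainty near 0) get weight near 1; uncertain pixels contribute less to the student's training signal.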
  • Publication number: 20220191195
    Abstract: In an approach to improve multi-factor authentication embodiments generate an evaluation-mask over one or more modified items on a modified image created by a generative adversarial network (GAN). Further, embodiments create a scoring grid by comparing an original image with the modified image to identify different pixels between the original image and the modified image, and overlay the evaluation-mask over the identified different pixels on the modified image. Embodiments display the modified image as a multi-factor authentication prompt to a user and prompt the user to provide a response that identifies one or more modifications in the modified image. Additionally, embodiments compute an evaluation score based on a comparison of the response from the user with the evaluation-mask, to validate the response from the user, and authenticate and grant the user access to data or other resources if the evaluation score meets or exceeds a predetermined threshold.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Jorge Andres Moros Ortiz, Bruno de Assis Marques, Suman Sedai
  • Publication number: 20220180479
    Abstract: In an aspect for generating a device-specific OCT image, one or more processors may be configured for receiving, at a unified domain generator, first image data corresponding to OCT image scans captured by one or more OCT devices; processing, by the unified domain generator, the first image data to generate second image data corresponding to a unified representation of the OCT image scans; determining, by a unified discriminator, third image data corresponding to a quality subset of the unified representation of the OCT image scans having a base resolution satisfying a first condition and a base noise type satisfying a second condition; and processing, using a conditional generator, the third image data to generate fourth image data corresponding to device-specific OCT image scans having a device-specific resolution satisfying a third condition and a device-specific noise type satisfying a fourth condition.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 9, 2022
    Inventors: Suman Sedai, Stefan Renard Maetschke, Bhavna Josephine Antony, Hsin-Hao Yu, Rahil Garnavi
  • Publication number: 20220174339
    Abstract: Content of entertainment media that is being consumed by a user is analyzed. An element of the content that is of a first character is identified. A preference associated with the user to consume entertainment media that contains elements of a second character is identified. An updated version of the element is generated. The updated version of the element is of the second character, such that the media is consumed by the user with the element in the updated version.
    Type: Application
    Filed: December 2, 2020
    Publication date: June 2, 2022
    Inventors: Jorge Andres Moros Ortiz, Sree Harish Maathu, Suman Sedai, Turgay Tyler Kay
  • Publication number: 20220068296
    Abstract: In an approach to generating conversational image representations, one or more computer processors detect one or more utterances by a user, wherein utterances are either textual or acoustic. The one or more computer processors generate one or more image representations of the one or more detected utterances utilizing a generative adversarial network restricted by one or more user privacy parameters, wherein the generative adversarial network is fed with an extracted sentiment, a generated avatar, an identified topic, an extracted location, and one or more user preferences. The one or more computer processors display the generated one or more image representations on one or more devices associated with respective one or more recipients of the one or more utterances.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 3, 2022
    Inventors: Kimiko Wilson, Jorge Andres Moros Ortiz, Suman Sedai, Khoi-Nguyen Dao Tran
  • Patent number: 11191492
    Abstract: A retinal structure and function forecasting method, system, and computer program product include producing an enriched feature representation of clinical measurements and clinical data combined with optical coherence tomography (OCT) data, training a forecasting model with the enriched feature representation, and forecasting a retinal structure at a forecast date based on the trained forecasting model.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: December 7, 2021
    Assignees: International Business Machines Corporation, New York University
    Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
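The forecasting method above builds an enriched feature representation by combining clinical measurements with OCT-derived features, then trains a model to predict retinal structure at a future date. The sketch below shows that pattern with synthetic data and a plain least-squares forecaster; the feature names and the linear model are illustrative assumptions, not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
clinical = rng.random((n, 3))    # stand-ins: e.g. age, IOP, visual acuity
oct_feats = rng.random((n, 5))   # stand-ins: e.g. layer thickness summaries
months_ahead = rng.random((n, 1)) * 12  # forecast horizon

# Enriched representation: clinical + OCT features + horizon + bias term.
X = np.hstack([clinical, oct_feats, months_ahead, np.ones((n, 1))])

# Synthetic target: future structural measurement with small noise.
true_w = rng.normal(0.0, 1.0, (X.shape[1], 1))
y = X @ true_w + rng.normal(0.0, 0.01, (n, 1))

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # simple linear forecaster
pred = X @ w
```

In practice the enriched features would feed a nonlinear forecasting model trained on longitudinal patient data rather than a one-shot linear fit.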
  • Publication number: 20210319552
    Abstract: Aspects of the invention include a computer-implemented method for simulating visual field test results from structural scans. The method includes processing eye image data to extract visual-function-related features, generating a representation of a visual function of the eye that is independent of a visual field test (VFT) configuration, and generating a simulated VFT-configuration-specific test result based at least in part on the representation.
    Type: Application
    Filed: April 13, 2020
    Publication date: October 14, 2021
    Inventors: Hsin-Hao Yu, Stefan Renard Maetschke, Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
  • Patent number: 11076192
    Abstract: Aspects of the invention include obtaining data regarding a plurality of devices in a viewing environment and analyzing a content item to be displayed in the viewing environment. Aspects also include identifying an interaction between a scene of the content item and at least one of the plurality of devices based at least in part upon the analyzing, identifying a viewer in the viewing environment, and obtaining a user profile for the viewer. Based upon the interaction and the user profile, aspects include activating the at least one of the plurality of devices during playback of the scene. Aspects further include monitoring one or more characteristics of the viewer during playback of the scene and updating the user profile for the viewer based on the one or more characteristics.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jorge Andres Moros Ortiz, Suman Sedai, Noel Faux, Hidemasa Muta
  • Publication number: 20210227280
    Abstract: Aspects of the invention include obtaining data regarding a plurality of devices in a viewing environment and analyzing a content item to be displayed in the viewing environment. Aspects also include identifying an interaction between a scene of the content item and at least one of the plurality of devices based at least in part upon the analyzing, identifying a viewer in the viewing environment, and obtaining a user profile for the viewer. Based upon the interaction and the user profile, aspects include activating the at least one of the plurality of devices during playback of the scene. Aspects further include monitoring one or more characteristics of the viewer during playback of the scene and updating the user profile for the viewer based on the one or more characteristics.
    Type: Application
    Filed: January 16, 2020
    Publication date: July 22, 2021
    Inventors: Jorge Andres Moros Ortiz, Suman Sedai, Noel Faux, Hidemasa Muta
  • Publication number: 20210216825
    Abstract: Aspects of the invention include systems and methods that train a teacher neural network using labeled images to obtain a trained teacher neural network, each pixel of each of the labeled images being assigned a label that indicates one of a set of classifications. A method includes providing a set of unlabeled images to the trained teacher neural network to generate a set of soft-labeled images, each pixel of each of the soft-labeled images being assigned a soft label that indicates one of the set of classifications and an uncertainty value associated with the soft label, and training a student neural network with a subset of the labeled images and the set of soft-labeled images to obtain a trained student neural network. Student-labeled images are obtained from unlabeled images using the trained student neural network.
    Type: Application
    Filed: January 9, 2020
    Publication date: July 15, 2021
    Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi
  • Patent number: 11051689
    Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations, based on analysis of the collected clinical information and the assessed biometric indications, when the assessed biometric indications satisfy the pre-configured threshold conditions.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi
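The core control flow in the abstract above is a threshold check: biometric indications are compared against pre-configured conditions, and alerts are generated when a condition is met. The sketch below illustrates that check only; the indication names and threshold values are hypothetical placeholders, not values from the patent.

```python
# Assumed pre-configured conditions: (lower bound, upper bound) per indication,
# with None meaning "no bound on that side".
thresholds = {
    "blink_rate_per_min": (8, 25),
    "screen_distance_cm": (30, None),
}

def check_indications(readings, thresholds):
    """Return an alert string for each reading outside its configured bounds."""
    alerts = []
    for name, value in readings.items():
        low, high = thresholds.get(name, (None, None))
        if low is not None and value < low:
            alerts.append(f"{name} below threshold ({value} < {low})")
        if high is not None and value > high:
            alerts.append(f"{name} above threshold ({value} > {high})")
    return alerts

alerts = check_indications(
    {"blink_rate_per_min": 5, "screen_distance_cm": 45}, thresholds
)
```

In the described system, each alert would additionally be enriched with recommendations drawn from the collected clinical information.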
  • Patent number: 11024013
    Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: June 1, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
  • Publication number: 20210150675
    Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
    Type: Application
    Filed: December 21, 2020
    Publication date: May 20, 2021
    Applicants: International Business Machines Corporation, New York University
    Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa