Patents by Inventor Dwarikanath Mahapatra

Dwarikanath Mahapatra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11497455
    Abstract: A method and system of diagnosing a medical condition of a target area of a patient using a mobile device are provided. One or more magnetic field images of a target area of a patient are received. One or more hyperspectral images of the target area of the patient are received. For each of the one or more magnetic field images and the one or more hyperspectral images, a three-dimensional (3D) position of the mobile device is tracked with respect to the target area of the patient. A 3D image of the target area is generated based on the received one or more magnetic field images, one or more hyperspectral images, and the corresponding tracked 3D position of the mobile device with respect to each image. A medical condition of the target area is diagnosed or monitored based on the generated 3D image.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: November 15, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Julian de Hoog, Dwarikanath Mahapatra, Rahil Garnavi, Fatemeh Jalali
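    A minimal illustrative sketch (Python, not the patented method) of the reconstruction idea in the abstract above: it assumes only that each 2D capture (magnetic field or hyperspectral) arrives with a tracked depth position, and bins the captures into a toy 3D volume. The function fuse_tracked_slices and its parameters are hypothetical.

        import numpy as np

        def fuse_tracked_slices(slices, z_positions, depth):
            """Place pose-tracked 2D captures into a simple 3D stack.

            slices      : list of 2D arrays (e.g. magnetic-field or hyperspectral maps)
            z_positions : tracked device position for each capture, in [0, 1)
            depth       : number of z-bins in the reconstructed volume
            """
            h, w = slices[0].shape
            volume = np.zeros((depth, h, w))
            counts = np.zeros(depth)
            for img, z in zip(slices, z_positions):
                k = min(int(z * depth), depth - 1)   # bin the tracked position
                volume[k] += img
                counts[k] += 1
            filled = counts > 0
            volume[filled] /= counts[filled][:, None, None]  # average overlapping captures
            return volume

        # toy usage: three captures at tracked depths 0.1, 0.5 and 0.9
        captures = [np.random.rand(4, 4) for _ in range(3)]
        print(fuse_tracked_slices(captures, [0.1, 0.5, 0.9], depth=8).shape)  # (8, 4, 4)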
  • Patent number: 11051689
    Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations, based on analysis of the collected clinical information and the assessed biometric indications, when the assessed biometric indications satisfy the pre-configured threshold conditions.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: July 6, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi
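    A hedged sketch (Python) of the threshold step the abstract above describes: assessed biometric indications are compared against pre-configured threshold conditions and alerts are generated when they are not met. The indicator names and threshold values are invented for illustration and are not from the patent.

        # hypothetical (minimum, maximum) threshold conditions per indicator
        THRESHOLDS = {"blink_rate_per_min": (8, 30), "screen_distance_cm": (30, None)}

        def assess(indications):
            alerts = []
            for name, value in indications.items():
                low, high = THRESHOLDS.get(name, (None, None))
                if low is not None and value < low:
                    alerts.append(f"{name} below recommended minimum ({value} < {low})")
                if high is not None and value > high:
                    alerts.append(f"{name} above recommended maximum ({value} > {high})")
            return alerts

        print(assess({"blink_rate_per_min": 5, "screen_distance_cm": 25}))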
  • Patent number: 10984674
    Abstract: A learning sub-system models search patterns of multiple experts in analyzing an image using a recurrent neural network (RNN) architecture and creates a knowledge base that models expert knowledge. A teaching sub-system teaches the search pattern captured by the RNN model and presents to a learning user the information for analyzing an image. The teaching sub-system determines the teaching image sequence based on a difficulty level identified using image features, audio cues, expert confidence and time taken by experts. An evaluation sub-system measures the learning user's performance in terms of search strategy that is evaluated against the RNN model and provides feedback on the overall sequence followed by the learning user and the time spent by the learning user on each region in the image.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: April 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Ruwan B. Tennakoon
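    A minimal sketch (Python with NumPy) of the kind of sequence model the abstract above refers to: an untrained Elman-style RNN over a sequence of image regions, used to score how closely a learner's viewing order matches a modelled search pattern. Region labels, layer sizes and the random weights are placeholders, not the patented model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_regions, hidden = 5, 8
        Wx = rng.normal(scale=0.1, size=(hidden, n_regions))   # input weights
        Wh = rng.normal(scale=0.1, size=(hidden, hidden))      # recurrent weights
        Wo = rng.normal(scale=0.1, size=(n_regions, hidden))   # output weights

        def sequence_log_likelihood(region_sequence):
            """Score a viewing order under the RNN's next-region distribution."""
            h = np.zeros(hidden)
            log_lik = 0.0
            for prev, nxt in zip(region_sequence[:-1], region_sequence[1:]):
                x = np.eye(n_regions)[prev]        # one-hot current region
                h = np.tanh(Wx @ x + Wh @ h)       # recurrent state update
                logits = Wo @ h
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                log_lik += np.log(probs[nxt])      # likelihood of the next fixation
            return log_lik

        print(sequence_log_likelihood([0, 1, 2, 3, 4]))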
  • Publication number: 20210093257
    Abstract: A method and system of diagnosing a medical condition of a target area of a patient using a mobile device are provided. One or more magnetic field images of a target area of a patient are received. One or more hyperspectral images of the target area of the patient are received. For each of the one or more magnetic field images and the one or more hyperspectral images, a three-dimensional (3D) position of the mobile device is tracked with respect to the target area of the patient. A 3D image of the target area is generated based on the received one or more magnetic field images, one or more hyperspectral images, and the corresponding tracked 3D position of the mobile device with respect to each image. A medical condition of the target area is diagnosed or monitored based on the generated 3D image.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Julian de Hoog, Dwarikanath Mahapatra, Rahil Garnavi, Fatemeh Jalali
  • Patent number: 10832074
    Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
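    A hedged sketch (Python) of the weighting step in the abstract above: per-pixel uncertainty is turned into weights and two images are blended according to those weights. The inverse-uncertainty weighting rule is an assumption chosen for illustration, not taken from the patent.

        import numpy as np

        def uncertainty_weighted_composite(img1, unc1, img2, unc2, eps=1e-6):
            w1 = 1.0 / (unc1 + eps)          # low uncertainty -> high weight
            w2 = 1.0 / (unc2 + eps)
            return (w1 * img1 + w2 * img2) / (w1 + w2)

        img1, img2 = np.random.rand(3, 3), np.random.rand(3, 3)
        unc1, unc2 = np.full((3, 3), 0.1), np.full((3, 3), 0.4)
        # composite leans toward img1, whose uncertainty map is lower everywhere
        print(uncertainty_weighted_composite(img1, unc1, img2, unc2).round(3))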
  • Patent number: 10832094
    Abstract: Color images of food a user consumes, text information associated with the food and audio information associated with the food may be received. Color images are converted into hyperspectral images. A machine learning model classifies the hyperspectral images into features comprising at least taste, nutrient content and chemical composition. A database of the food consumption pattern associated with the user is created based on classification features associated with the hyperspectral images, the text information and the audio information. A color image of local food may be received and converted into a hyperspectral image. The machine learning model is run with the hyperspectral image as input, and outputs classification features associated with the local food. Based on whether the classification features associated with the local food satisfy the food consumption pattern associated with the user, the local food may be recommended.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwarikanath Mahapatra, Susmita Saha, Arun Vishwanath, Paul R. Bastide
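    A hedged sketch (Python) of the recommendation step in the abstract above: classification features produced for a local food are compared against the user's stored consumption pattern. The feature names, tolerances and the simple per-feature matching rule are illustrative assumptions, not the patented method.

        # hypothetical consumption pattern and per-feature tolerances for one user
        user_pattern = {"sweetness": 0.3, "protein_g": 25, "sodium_mg": 400}
        tolerance   = {"sweetness": 0.2, "protein_g": 10, "sodium_mg": 200}

        def matches_pattern(local_food_features):
            return all(abs(local_food_features[k] - user_pattern[k]) <= tolerance[k]
                       for k in user_pattern)

        local_food = {"sweetness": 0.4, "protein_g": 22, "sodium_mg": 350}
        if matches_pattern(local_food):
            print("recommend local food")   # consistent with the user's pattern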
  • Publication number: 20200285880
    Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
    Type: Application
    Filed: March 8, 2019
    Publication date: September 10, 2020
    Applicant: International Business Machines Corporation
    Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
  • Patent number: 10726555
    Abstract: A system for registering and segmenting images includes an image scanner configured to acquire an image pair including a first image at a first time and a second image at a second time that is after the first time. A joint registration and segmentation server receives the image pair from the image scanner and simultaneously performs joint registration and segmentation on the image pair using a single deep learning framework. A computer vision processor receives an output of the joint registration and segmentation server and characterizes, from that output, how a condition has progressed from the first time to the second time. A user terminal presents the characterization to a user.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: July 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rahil Garnavi, Zongyuan Ge, Dwarikanath Mahapatra, Suman Sedai
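    A minimal sketch (Python with NumPy) of the progression step the abstract above describes: once the two time points have been registered and segmented, the segmented region is compared across time. The masks and the area-change and overlap measures are illustrative stand-ins, not the patented deep learning framework.

        import numpy as np

        def progression(mask_t1, mask_t2):
            area1, area2 = mask_t1.sum(), mask_t2.sum()
            overlap = np.logical_and(mask_t1, mask_t2).sum()
            dice = 2.0 * overlap / (area1 + area2)
            return {"area_change": int(area2 - area1), "dice_overlap": float(dice)}

        m1 = np.zeros((8, 8), dtype=bool); m1[2:5, 2:5] = True   # region at time 1
        m2 = np.zeros((8, 8), dtype=bool); m2[2:6, 2:6] = True   # grown at time 2
        print(progression(m1, m2))  # {'area_change': 7, 'dice_overlap': 0.72}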
  • Patent number: 10657838
    Abstract: A learning sub-system models search patterns of multiple experts in analyzing an image using a recurrent neural network (RNN) architecture and creates a knowledge base that models expert knowledge. A teaching sub-system teaches the search pattern captured by the RNN model and presents to a learning user the information for analyzing an image. The teaching sub-system determines the teaching image sequence based on a difficulty level identified using image features, audio cues, expert confidence and time taken by experts. An evaluation sub-system measures the learning user's performance in terms of search strategy that is evaluated against the RNN model and provides feedback on the overall sequence followed by the learning user and the time spent by the learning user on each region in the image.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Ruwan B. Tennakoon
  • Patent number: 10650918
    Abstract: A computer-implemented method for determining routes includes receiving a location and health measures for each of a plurality of users, analyzing the plurality of users and their corresponding locations to determine health behaviors at a given location and time, receiving a request from a user for a route, developing a route from the user's current location to the user's destination to obtain a target health behavior within a given threshold or constraint based on the plurality of users' health behaviors, and presenting the route on a computing device.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: May 12, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul R. Bastide, Isabell Filiz Kiral-Kornek, Dwarikanath Mahapatra, Susmita Saha, Arun Vishwanath, Stefan Von Cavallar
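    A hedged sketch (Python) of the route-development step in the abstract above: candidate routes are scored by the health behavior other users exhibited along them, and the best route within a constraint is selected. The route data, the steps-based score and the distance constraint are invented for illustration.

        candidate_routes = [
            {"name": "riverside path", "distance_km": 3.2, "avg_steps_observed": 4100},
            {"name": "main road",      "distance_km": 2.1, "avg_steps_observed": 2300},
            {"name": "park loop",      "distance_km": 4.8, "avg_steps_observed": 5200},
        ]

        def pick_route(routes, max_distance_km):
            feasible = [r for r in routes if r["distance_km"] <= max_distance_km]
            return max(feasible, key=lambda r: r["avg_steps_observed"]) if feasible else None

        print(pick_route(candidate_routes, max_distance_km=3.5))  # riverside path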
  • Publication number: 20200138285
    Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations, based on analysis of the collected clinical information and the assessed biometric indications, when the assessed biometric indications satisfy the pre-configured threshold conditions.
    Type: Application
    Filed: November 2, 2018
    Publication date: May 7, 2020
    Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi
  • Patent number: 10617362
    Abstract: Providing an activity for a participant may include receiving at least location data specifying a location of the participant. An engagement level of the participant may be predicted based on the location data. Sensor data associated with the participant may be received, the sensor data comprising at least current physiological data associated with the participant. Based at least on the predicted engagement level and the sensor data, an exercise for the participant to perform may be determined. A notification signal may be transmitted to the participant to perform the exercise.
    Type: Grant
    Filed: November 2, 2016
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Paul R. Bastide, Filiz Isabell Kiral-Kornek, Dwarikanath Mahapatra, Susmita Saha, Arun Vishwanath, Stefan von Cavallar
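    A hedged sketch (Python) of the decision step in the abstract above: a predicted engagement level and current physiological data are combined to choose an exercise, and a notification is sent. The thresholds, exercise names and function names are illustrative assumptions.

        def choose_exercise(engagement, heart_rate_bpm):
            if heart_rate_bpm > 120:
                return "cool-down stretching"       # already exerting: recover first
            if engagement < 0.3:
                return "short brisk walk"           # low engagement: suggest an easy start
            return "20-minute interval session"     # engaged and rested

        def notify(participant, exercise):
            print(f"notification to {participant}: try a {exercise}")

        notify("participant-042", choose_exercise(engagement=0.2, heart_rate_bpm=78))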
  • Publication number: 20190378274
    Abstract: A system for registering and segmenting images includes an image scanner configured to acquire an image pair including a first image at a first time and a second image at a second time that is after the first time. A joint registration and segmentation server receives the image pair from the image scanner and simultaneously performs joint registration and segmentation on the image pair using a single deep learning framework. A computer vision processor receives an output of the joint registration and segmentation server and characterizes, from that output, how a condition has progressed from the first time to the second time. A user terminal presents the characterization to a user.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 12, 2019
    Inventors: Rahil Garnavi, Zongyuan Ge, Dwarikanath Mahapatra, Suman Sedai
  • Publication number: 20190311230
    Abstract: Color images of food a user consumes, text information associated with the food and audio information associated with the food may be received. Color images are converted into hyperspectral images. A machine learning model classifies the hyperspectral images into features comprising at least taste, nutrient content and chemical composition. A database of the food consumption pattern associated with the user is created based on classification features associated with the hyperspectral images, the text information and the audio information. A color image of local food may be received and converted into a hyperspectral image. The machine learning model is run with the hyperspectral image as input, and outputs classification features associated with the local food. Based on whether the classification features associated with the local food satisfy the food consumption pattern associated with the user, the local food may be recommended.
    Type: Application
    Filed: April 10, 2018
    Publication date: October 10, 2019
    Inventors: Dwarikanath Mahapatra, Susmita Saha, Arun Vishwanath, Paul R. Bastide
  • Patent number: 10229493
    Abstract: Jointly determining image segmentation and characterization. A computer-generated image of an organ may be received. Organ characteristics estimation may be performed to predict the organ characteristics considering organ segmentation. Organ segmentation may be performed to delineate the organ in the image considering the organ characteristics. A feedback loop feeds the organ characteristics estimation to determine the organ segmentation, and feeds back the organ segmentation to determine the organ characteristics estimation.
    Type: Grant
    Filed: August 11, 2016
    Date of Patent: March 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Suman Sedai
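    A minimal sketch (Python with NumPy) of the feedback loop the abstract above describes: the loop alternates between estimating an organ characteristic from the current segmentation and refining the segmentation using that estimate. The intensity-threshold rule is an illustrative stand-in for the patented estimation and segmentation steps.

        import numpy as np

        def joint_segment_characterize(image, iters=5):
            mask = image > image.mean()                  # crude initial segmentation
            for _ in range(iters):
                organ_intensity = image[mask].mean()     # characterization step
                mask = np.abs(image - organ_intensity) < 0.5 * image.std()  # fed back
            return mask, float(organ_intensity)

        rng = np.random.default_rng(1)
        image = rng.normal(size=(16, 16))
        image[4:12, 4:12] += 3.0                         # bright "organ" region
        mask, intensity = joint_segment_characterize(image)
        print(int(mask.sum()), round(intensity, 2))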
  • Patent number: 10169872
    Abstract: A computer-implemented method obtains at least one image from which severity of a given pathological condition presented in the at least one image is to be classified. The method generates a hybrid image representation of the at least one obtained image. The hybrid image representation comprises a concatenation of a discriminative pathology histogram, a generative pathology histogram, and a fully connected representation of a trained baseline convolutional neural network. The hybrid image representation is used to train a classifier to classify the severity of the given pathological condition presented in the at least one image. One non-limiting example of a pathological condition whose severity can be classified with the above method is diabetic retinopathy.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab Roy, Suman Sedai, Ruwan B. Tennakoon
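    A minimal sketch (Python with NumPy) of the hybrid image representation in the abstract above: a discriminative pathology histogram, a generative pathology histogram and a CNN's fully connected features are concatenated into one vector and fed to a classifier. The random vectors and the toy nearest-centroid classifier are placeholders; a real system would compute each part from the image.

        import numpy as np

        rng = np.random.default_rng(2)
        discriminative_hist = rng.random(16)   # placeholder discriminative pathology histogram
        generative_hist     = rng.random(16)   # placeholder generative pathology histogram
        cnn_fc_features     = rng.random(128)  # placeholder CNN fully connected representation

        hybrid = np.concatenate([discriminative_hist, generative_hist, cnn_fc_features])

        # toy nearest-centroid severity classifier over the hybrid representation
        centroids = {"mild": rng.random(hybrid.size), "severe": rng.random(hybrid.size)}
        severity = min(centroids, key=lambda c: np.linalg.norm(hybrid - centroids[c]))
        print(hybrid.shape, severity)          # (160,) plus a severity label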
  • Publication number: 20180349563
    Abstract: A computer-implemented method for determining routes includes receiving a location and health measures for each of a plurality of users, analyzing the plurality of users and their corresponding locations to determine health behaviors at a given location and time, receiving a request from a user for a route, developing a route from the user's current location to the user's destination to obtain a target health behavior within a given threshold or constraint based on the plurality of users' health behaviors, and presenting the route on a computing device.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 6, 2018
    Inventors: Paul R. Bastide, Isabell Filiz Kiral-Kornek, Dwarikanath Mahapatra, Susmita Saha, Arun Vishwanath, Stefan Von Cavallar
  • Patent number: 10098533
    Abstract: An AMD (age-related macular degeneration) prediction model utilizes an OCT (optical coherence tomography) image estimation model. The OCT image estimation model is created by segmenting an OCT image to generate an OCT projection image for each of multiple biological layers; extracting from each of the generated OCT projection images a first set of features; extracting a second set of features from an input retinal fundus image; for each respective biological layer, registering the input retinal fundus image to the respective OCT projection image by matching at least some of the second set of features with corresponding ones of the first set of features; repeating the above with changes to the input retinal fundus image; and modelling how the changes to the input retinal fundus image are manifested in the correspondingly registered projection images. Estimated OCT projection images can then be generated for the multiple biological layers from a given retinal fundus image.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: October 16, 2018
    Assignee: International Business Machines Corporation
    Inventors: Rajib Chakravorty, Rahil Garnavi, Dwarikanath Mahapatra, Pallab Roy, Suman Sedai
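    A hedged sketch (Python with NumPy) of the registration step in the abstract above: matched feature points from the fundus image and an OCT projection image are used to estimate the aligning transform. A least-squares translation fit stands in for the patent's feature-matching registration, and the point sets are made up.

        import numpy as np

        def fit_translation(fundus_pts, oct_pts):
            """Least-squares translation mapping fundus points onto OCT points."""
            return (oct_pts - fundus_pts).mean(axis=0)

        fundus_pts = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
        oct_pts = fundus_pts + np.array([3.0, -2.0])       # true shift (3, -2)
        shift = fit_translation(fundus_pts, oct_pts)
        registered = fundus_pts + shift                    # registered fundus points
        print(shift, np.allclose(registered, oct_pts))     # [ 3. -2.] True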
  • Publication number: 20180268737
    Abstract: A learning sub-system models search patterns of multiple experts in analyzing an image using a recurrent neural network (RNN) architecture and creates a knowledge base that models expert knowledge. A teaching sub-system teaches the search pattern captured by the RNN model and presents to a learning user the information for analyzing an image. The teaching sub-system determines the teaching image sequence based on a difficulty level identified using image features, audio cues, expert confidence and time taken by experts. An evaluation sub-system measures the learning user's performance in terms of search strategy that is evaluated against the RNN model and provides feedback on the overall sequence followed by the learning user and the time spent by the learning user on each region in the image.
    Type: Application
    Filed: November 16, 2017
    Publication date: September 20, 2018
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Ruwan B. Tennakoon
  • Publication number: 20180268733
    Abstract: A learning sub-system models search patterns of multiple experts in analyzing an image using a recurrent neural network (RNN) architecture and creates a knowledge base that models expert knowledge. A teaching sub-system teaches the search pattern captured by the RNN model and presents to a learning user the information for analyzing an image. The teaching sub-system determines the teaching image sequence based on a difficulty level identified using image features, audio cues, expert confidence and time taken by experts. An evaluation sub-system measures the learning user's performance in terms of search strategy that is evaluated against the RNN model and provides feedback on the overall sequence followed by the learning user and the time spent by the learning user on each region in the image.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Ruwan B. Tennakoon