Patents by Inventor SUMAN SEDAI
SUMAN SEDAI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10983032
Abstract: An apparatus for automatically separating and collecting a fluid stream into multiple portions, including a tubing to receive the fluid stream, a plurality of valves attached to the tubing in spaced relation along the length of the tubing, and a plurality of fluid collection containers attached to the valves. A controller controls the activation of the plurality of valves to collect and separate portions of the fluid stream in the plurality of fluid collection containers. A flow meter detects the flow of the fluid stream in the tubing and measures a flow rate of the fluid stream. The controller activates the valves based on the detection of the fluid, the flow rate, and the timing of the fluid stream. The controller also monitors the flow rate over time and activates the valves in sequence based on the flow rate.
Type: Grant
Filed: November 13, 2018
Date of Patent: April 20, 2021
Assignee: International Business Machines Corporation
Inventors: Stefan von Cavallar, Kerry J. Halupka, Rahil Garnavi, Rajib Chakravorty, Suman Sedai
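The abstract's claimed control logic can be sketched as follows. This is an illustrative reading only: it integrates the measured flow rate to estimate cumulative volume and switches to the next valve whenever an equal share of the total has passed; the equal-volume split and the function name are assumptions, not taken from the application.

```python
def valve_schedule(flow_readings, dt, n_valves):
    """Illustrative controller: integrate flow rate over time to estimate
    cumulative volume, then activate the next valve in sequence each time
    an equal share of the total volume has passed (equal split assumed)."""
    total_volume = sum(f * dt for f in flow_readings)
    per_valve = total_volume / n_valves
    schedule = [(0, 0)]  # first valve opens when flow is detected
    cumulative, valve = 0.0, 0
    for i, f in enumerate(flow_readings):
        cumulative += f * dt
        # advance to the next container once this one's share has passed
        while valve < n_valves - 1 and cumulative >= per_valve * (valve + 1):
            valve += 1
            schedule.append((i, valve))
    return schedule

# Constant flow of 2.0 units/s for 6 s, split across 3 containers.
events = valve_schedule([2.0, 2.0, 2.0, 2.0, 2.0, 2.0], dt=1.0, n_valves=3)
```

With this toy input the schedule activates valve 1 after a third of the volume has passed and valve 2 after two-thirds, matching the abstract's "valves in sequence based on the flow rate".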
-
Patent number: 10832074
Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
Type: Grant
Filed: March 8, 2019
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
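The uncertainty-weighted combination described in this abstract can be sketched in a few lines of NumPy. The inverse-uncertainty weighting below is one plausible mapping from uncertainty level to pixel weight; the abstract does not fix a particular formula, so treat the function and its arguments as illustrative.

```python
import numpy as np

def fuse_by_uncertainty(img_a, img_b, unc_a, unc_b, eps=1e-8):
    """Combine two images pixelwise, weighting each pixel by the inverse
    of its model-estimated uncertainty (assumed mapping: low uncertainty
    -> high weight), then normalising so the weights sum to one."""
    w_a = 1.0 / (unc_a + eps)
    w_b = 1.0 / (unc_b + eps)
    return (w_a * img_a + w_b * img_b) / (w_a + w_b)

# Toy example: where image A is uncertain, the composite follows image B.
img_a = np.array([[1.0, 1.0]])
img_b = np.array([[0.0, 0.0]])
unc_a = np.array([[0.01, 10.0]])  # confident on the left, uncertain on the right
unc_b = np.array([[10.0, 0.01]])
composite = fuse_by_uncertainty(img_a, img_b, unc_a, unc_b)
```

In the toy example the left pixel of the composite stays near 1.0 (image A dominates) and the right pixel near 0.0 (image B dominates), which is the behaviour the abstract describes.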
-
Publication number: 20200286208
Abstract: A generative adversarial network including a generator portion and a discriminator portion is constructed. The network is configured such that the network operates to enhance intensity images, wherein an intensity image is obtained by illuminating an object with an energy pulse and measuring the return strength of the energy pulse, and wherein a pixel of the intensity image corresponds to the return strength. As a part of the configuring, a loss function of the generative adversarial network is minimized, the loss function comprising a mean square error loss measurement of a noisy intensity image relative to a mean square error loss measurement of a corresponding clean intensity image. An enhanced intensity image is generated by applying the minimized loss function of the network to an original intensity image, the applying improving an image quality measurement of the enhanced intensity image relative to the original intensity image.
Type: Application
Filed: March 8, 2019
Publication date: September 10, 2020
Applicants: International Business Machines Corporation, New York University
Inventors: Kerry Halupka, Bhavna Josephine Antony, Suman Sedai, Rahil Garnavi, Hiroshi Ishikawa
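The "mean square error loss measurement of a noisy intensity image relative to a ... clean intensity image" admits more than one reading; one plausible interpretation is a relative-MSE term, sketched below. The function name and normalisation are assumptions for illustration, not the published claim language.

```python
import numpy as np

def relative_mse_loss(enhanced, noisy, clean, eps=1e-8):
    """One plausible reading of the abstract's loss term: MSE of the
    enhanced output against the clean target, normalised by the MSE of
    the raw noisy input against the same target. Values below 1 mean the
    generator improved on the noisy input; the exact formulation in the
    application may differ."""
    mse_enhanced = np.mean((enhanced - clean) ** 2)
    mse_noisy = np.mean((noisy - clean) ** 2)
    return mse_enhanced / (mse_noisy + eps)

clean = np.zeros(4)
noisy = clean + 0.5     # heavily corrupted input
enhanced = clean + 0.1  # partially denoised generator output
loss = relative_mse_loss(enhanced, noisy, clean)
```

Here the loss is well below 1, reflecting the abstract's goal of improving an image quality measurement of the enhanced image relative to the original.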
-
Publication number: 20200285880
Abstract: From a first image using a model, a first uncertainty map is generated. An uncertainty level of a location in the first uncertainty map corresponds to a detection of a known structure in a portion of the first image. A first weighted image corresponding to the first uncertainty map is generated, the generating including assigning a first weight to a pixel of the first image, the first weight corresponding to the uncertainty level of a location in the first uncertainty map corresponding to the pixel. From a second image using a model, a second uncertainty map is generated. A second weighted image corresponding to the second uncertainty map is generated. The first image and the second image are combined to form a composite image, each image participating in the composite image according to the corresponding weighted image.
Type: Application
Filed: March 8, 2019
Publication date: September 10, 2020
Applicant: International Business Machines Corporation
Inventors: Suman Sedai, Bhavna Josephine Antony, Kerry Halupka, Dwarikanath Mahapatra, Rahil Garnavi
-
Patent number: 10726555
Abstract: A system for registering and segmenting images includes an image scanner configured to acquire an image pair including a first image at a first time and a second image at a second time that is after the first time. A joint registration and segmentation server receives the image pair from the image scanner and simultaneously performs joint registration and segmentation on the image pair using a single deep learning framework. A computer vision processor receives an output of the joint registration and segmentation server and characterizes how a condition has progressed from the first time to the second time therefrom. A user terminal presents the characterization to a user.
Type: Grant
Filed: June 6, 2018
Date of Patent: July 28, 2020
Assignee: International Business Machines Corporation
Inventors: Rahil Garnavi, Zongyuan Ge, Dwarikanath Mahapatra, Suman Sedai
-
Publication number: 20200229770
Abstract: A retinal structure and function forecasting method, system, and computer program product include producing an enriched feature representation of clinical measurements and clinical data combined with optical coherence tomography (OCT) data, training a forecasting model with the enriched feature representation, and forecasting a retinal structure at a forecast date based on the trained forecasting model.
Type: Application
Filed: January 18, 2019
Publication date: July 23, 2020
Inventors: Suman Sedai, Bhavna Josephine Antony, Rahil Garnavi, Hiroshi Ishikawa
-
Patent number: 10704994
Abstract: An apparatus for the collection of multiple samples of a fluid stream. The apparatus includes a plurality of fluid collection containers arranged to receive separate portions of the fluid stream in a temporal sequence, and a sealing mechanism provided on each fluid collection container configured to close the fluid collection container when filled by a portion of the fluid stream and to cause the fluid stream to flow to the next fluid collection container in the sequence. The fluid collection containers separately collect at least a beginning portion of the fluid stream, a mid-portion of the fluid stream, and an end-portion of the fluid stream. Each of the fluid collection containers may be detachably connected to tubing held in a housing having a sloped upper surface for receiving the fluid stream and an inlet for directing the fluid stream to the tubing. The housing may be a fluid collection cup.
Type: Grant
Filed: November 13, 2018
Date of Patent: July 7, 2020
Assignee: International Business Machines Corporation
Inventors: Kerry J. Halupka, Stefan von Cavallar, Rajib Chakravorty, Suman Sedai, Rahil Garnavi
-
Publication number: 20200150000
Abstract: An apparatus for the collection of multiple samples of a fluid stream. The apparatus includes a plurality of fluid collection containers arranged to receive separate portions of the fluid stream in a temporal sequence, and a sealing mechanism provided on each fluid collection container configured to close the fluid collection container when filled by a portion of the fluid stream and to cause the fluid stream to flow to the next fluid collection container in the sequence. The fluid collection containers separately collect at least a beginning portion of the fluid stream, a mid-portion of the fluid stream, and an end-portion of the fluid stream. Each of the fluid collection containers may be detachably connected to tubing held in a housing having a sloped upper surface for receiving the fluid stream and an inlet for directing the fluid stream to the tubing. The housing may be a fluid collection cup.
Type: Application
Filed: November 13, 2018
Publication date: May 14, 2020
Inventors: Kerry J. Halupka, Stefan von Cavallar, Rajib Chakravorty, Suman Sedai, Rahil Garnavi
-
Publication number: 20200150001
Abstract: An apparatus for automatically separating and collecting a fluid stream into multiple portions, including a tubing to receive the fluid stream, a plurality of valves attached to the tubing in spaced relation along the length of the tubing, and a plurality of fluid collection containers attached to the valves. A controller controls the activation of the plurality of valves to collect and separate portions of the fluid stream in the plurality of fluid collection containers. A flow meter detects the flow of the fluid stream in the tubing and measures a flow rate of the fluid stream. The controller activates the valves based on the detection of the fluid, the flow rate, and the timing of the fluid stream. The controller also monitors the flow rate over time and activates the valves in sequence based on the flow rate.
Type: Application
Filed: November 13, 2018
Publication date: May 14, 2020
Inventors: Stefan von Cavallar, Kerry J. Halupka, Rahil Garnavi, Rajib Chakravorty, Suman Sedai
-
Publication number: 20200138285
Abstract: A method, computer system, and computer program product for real-time pediatric eye health monitoring and assessment are provided. The embodiment may include receiving a plurality of real-time data related to an individual's eye health from a user device. The embodiment may also include assessing biometric indications relating to eye health based on the plurality of real-time data. The embodiment may further include generating a report on the assessed biometric indications. The embodiment may also include collecting clinical information from one or more databases. The embodiment may further include determining whether the assessed biometric indications reach pre-configured threshold conditions. The embodiment may also include generating alerts and recommendations based on analysis of the collected clinical information and the assessed biometric indications, based on the assessed biometric indications satisfying the pre-configured threshold conditions.
Type: Application
Filed: November 2, 2018
Publication date: May 7, 2020
Inventors: Bhavna Josephine Antony, Suman Sedai, Dwarikanath Mahapatra, Rahil Garnavi
-
Patent number: 10614575
Abstract: A method of tracking a cell through a plurality of images includes selecting the cell in at least one image obtained at a first time, generating a track of the cell through a plurality of images, including the at least one image, obtained at different times using a backward tracking, and generating a cell tree lineage of the cell using the track.
Type: Grant
Filed: December 28, 2017
Date of Patent: April 7, 2020
Assignee: International Business Machines Corporation
Inventors: Seyedbehzad Bozorgtabar, Rahil Garnavi, Suman Sedai
-
Patent number: 10510150
Abstract: A method of tracking a cell through a plurality of images includes selecting the cell in at least one image obtained at a first time, generating a track of the cell through a plurality of images, including the at least one image, obtained at different times using a backward tracking, and generating a cell tree lineage of the cell using the track.
Type: Grant
Filed: June 20, 2017
Date of Patent: December 17, 2019
Assignee: International Business Machines Corporation
Inventors: Seyedbehzad Bozorgtabar, Rahil Garnavi, Suman Sedai
-
Publication number: 20190378274
Abstract: A system for registering and segmenting images includes an image scanner configured to acquire an image pair including a first image at a first time and a second image at a second time that is after the first time. A joint registration and segmentation server receives the image pair from the image scanner and simultaneously performs joint registration and segmentation on the image pair using a single deep learning framework. A computer vision processor receives an output of the joint registration and segmentation server and characterizes how a condition has progressed from the first time to the second time therefrom. A user terminal presents the characterization to a user.
Type: Application
Filed: June 6, 2018
Publication date: December 12, 2019
Inventors: Rahil Garnavi, Zongyuan Ge, Dwarikanath Mahapatra, Suman Sedai
-
Publication number: 20190328300
Abstract: A teleconferencing system includes a first terminal configured to acquire an audio signal and a video signal. A teleconferencing server in communication with the first terminal and a second terminal is configured to receive the video signal and the audio signal from the first terminal, in real-time, and transmit the video signal and the audio signal to the second terminal. A symptom recognition server in communication with the first terminal and the teleconferencing server is configured to receive the video signal and the audio signal from the first terminal, asynchronously, analyze the video signal and the audio signal to detect one or more indicia of illness, generate a diagnostic alert on detecting the one or more indicia of illness, and transmit the diagnostic alert to the teleconferencing server for display on the second terminal.
Type: Application
Filed: April 27, 2018
Publication date: October 31, 2019
Inventors: Seyedbehzad Bozorgtabar, Noel Faux, Rahil Garnavi, Suman Sedai
-
Patent number: 10307050
Abstract: An embodiment of the invention receives by an interface a retinal image from a patient, and identifies by a feature extraction device vessel fragments in the retinal image. The vessel fragments include at least a portion of a major vessel and at least a portion of a branch connected to a major vessel. A processor computes estimated blood flow velocities in the vessel fragments with a blood flow velocity estimation model and determines actual blood flow velocities in the vessel fragments. An analysis engine compares the actual blood flow velocities in the vessel fragments to the estimated blood flow velocities in the vessel fragments. The analysis engine detects a candidate plaque-affected vessel fragment when the estimated blood flow velocities in the vessel fragments differ from the actual blood flow velocities in the vessel fragments by a predetermined amount.
Type: Grant
Filed: April 11, 2017
Date of Patent: June 4, 2019
Assignee: International Business Machines Corporation
Inventors: Rahil Garnavi, Kerry J. Halupka, Stephen M. Moore, Pallab Roy, Suman Sedai
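The comparison step in this abstract reduces to flagging fragments whose actual velocity deviates from the model estimate by more than a predetermined amount. A minimal sketch, in which the fragment identifiers, velocity values, and threshold are all illustrative assumptions:

```python
def flag_candidate_fragments(estimated, actual, threshold):
    """Flag vessel fragments whose actual blood-flow velocity deviates
    from the model-estimated velocity by more than a predetermined
    amount, as the abstract describes. IDs and threshold are made up."""
    return [frag for frag in estimated
            if abs(estimated[frag] - actual[frag]) > threshold]

# Hypothetical velocities (arbitrary units) for three vessel fragments.
estimated = {"major_1": 12.0, "branch_1a": 8.0, "branch_1b": 7.5}
actual    = {"major_1": 11.8, "branch_1a": 4.0, "branch_1b": 7.3}
candidates = flag_candidate_fragments(estimated, actual, threshold=2.0)
```

In this toy data only "branch_1a" deviates by more than the threshold, so it alone is reported as a candidate plaque-affected fragment.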
-
Patent number: 10229493
Abstract: Jointly determining image segmentation and characterization. A computer-generated image of an organ may be received. Organ characteristics estimation may be performed to predict the organ characteristics considering organ segmentation. Organ segmentation may be performed to delineate the organ in the image considering the organ characteristics. A feedback loop feeds the organ characteristics estimation to determine the organ segmentation, and feeds back the organ segmentation to determine the organ characteristics estimation.
Type: Grant
Filed: August 11, 2016
Date of Patent: March 12, 2019
Assignee: International Business Machines Corporation
Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab K. Roy, Suman Sedai
-
Patent number: 10229499
Abstract: A dermoscopic lesion area is identified by: obtaining a dermoscopic image and running a convolutional neural network image classifier on the dermoscopic image to obtain pixelwise lesion prediction scores; segmenting the dermoscopic image into super-pixels, and computing for each super-pixel an average of the pixelwise prediction scores for pixels within that super-pixel; computing a mean prediction score across the plurality of super-pixels; and assigning a confidence indicator of "1" to each super-pixel with a prediction score equal to or greater than the mean prediction score, and a confidence indicator of "0" to each super-pixel with a prediction score less than the mean prediction score.
Type: Grant
Filed: December 29, 2017
Date of Patent: March 12, 2019
Assignee: International Business Machines Corporation
Inventors: Seyedbehzad Bozorgtabar, Rahil Garnavi, Pallab Roy, Suman Sedai
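The super-pixel averaging and mean-thresholding steps described in this abstract map directly onto array operations. A minimal NumPy sketch, with toy score and segmentation arrays standing in for real CNN outputs and a real super-pixel segmentation:

```python
import numpy as np

def superpixel_confidence(scores, segments):
    """Average pixelwise lesion scores within each super-pixel, then
    assign confidence 1 to super-pixels at or above the mean super-pixel
    score and 0 otherwise, following the abstract's description."""
    labels = np.unique(segments)
    sp_scores = np.array([scores[segments == l].mean() for l in labels])
    mean_score = sp_scores.mean()
    confidence = (sp_scores >= mean_score).astype(int)
    return dict(zip(labels.tolist(), confidence.tolist()))

# Toy 2x2 image: top row is one super-pixel, bottom row another.
scores = np.array([[0.9, 0.8],
                   [0.1, 0.2]])
segments = np.array([[0, 0],
                     [1, 1]])
conf = superpixel_confidence(scores, segments)
```

Here super-pixel 0 averages 0.85 and super-pixel 1 averages 0.15 against a mean of 0.5, so they receive confidence indicators 1 and 0 respectively.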
-
Patent number: 10223788
Abstract: A dermoscopic lesion area is identified by: obtaining a dermoscopic image and running a convolutional neural network image classifier on the dermoscopic image to obtain pixelwise lesion prediction scores; segmenting the dermoscopic image into super-pixels, and computing for each super-pixel an average of the pixelwise prediction scores for pixels within that super-pixel; computing a mean prediction score across the plurality of super-pixels; and assigning a confidence indicator of "1" to each super-pixel with a prediction score equal to or greater than the mean prediction score, and a confidence indicator of "0" to each super-pixel with a prediction score less than the mean prediction score.
Type: Grant
Filed: February 24, 2017
Date of Patent: March 5, 2019
Assignee: International Business Machines Corporation
Inventors: Seyedbehzad Bozorgtabar, Rahil Garnavi, Pallab Roy, Suman Sedai
-
Patent number: 10169872
Abstract: A computer-implemented method obtains at least one image from which severity of a given pathological condition presented in the at least one image is to be classified. The method generates a hybrid image representation of the at least one obtained image. The hybrid image representation comprises a concatenation of a discriminative pathology histogram, a generative pathology histogram, and a fully connected representation of a trained baseline convolutional neural network. The hybrid image representation is used to train a classifier to classify the severity of the given pathological condition presented in the at least one image. One non-limiting example of a pathological condition whose severity can be classified with the above method is diabetic retinopathy.
Type: Grant
Filed: February 7, 2017
Date of Patent: January 1, 2019
Assignee: International Business Machines Corporation
Inventors: Rahil Garnavi, Dwarikanath Mahapatra, Pallab Roy, Suman Sedai, Ruwan B. Tennakoon
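The hybrid representation in this abstract is, structurally, a concatenation of three feature vectors. A minimal sketch; the dimensionalities (32-bin histograms, a 128-dimensional fully connected layer) are arbitrary assumptions for illustration:

```python
import numpy as np

def hybrid_representation(disc_hist, gen_hist, cnn_features):
    """Concatenate the discriminative pathology histogram, the generative
    pathology histogram, and the fully connected CNN representation into
    one feature vector for the severity classifier, per the abstract."""
    return np.concatenate([disc_hist, gen_hist, cnn_features])

rng = np.random.default_rng(0)
disc_hist = rng.random(32)      # e.g. 32-bin discriminative histogram (assumed size)
gen_hist = rng.random(32)       # e.g. 32-bin generative histogram (assumed size)
cnn_features = rng.random(128)  # e.g. fc-layer activations (assumed size)
features = hybrid_representation(disc_hist, gen_hist, cnn_features)
```

The resulting 192-dimensional vector would then be fed to any standard classifier (the abstract does not name a specific one) to predict severity grades such as those used for diabetic retinopathy.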
-
Publication number: 20180365842
Abstract: A method of tracking a cell through a plurality of images includes selecting the cell in at least one image obtained at a first time, generating a track of the cell through a plurality of images, including the at least one image, obtained at different times using a backward tracking, and generating a cell tree lineage of the cell using the track.
Type: Application
Filed: December 28, 2017
Publication date: December 20, 2018
Inventors: Seyedbehzad Bozorgtabar, Rahil Garnavi, Suman Sedai