Patents by Inventor Rithesh SREENIVASAN
Rithesh SREENIVASAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12002578
Abstract: An augmented reality (AR) content generation method includes: acquiring, with a camera of an AR device, one or more images of a component of a medical imaging or medical therapy device; receiving, from a microphone of the AR device, a triggering audio segment; generating one or more query data structures from both the one or more images and the triggering audio segment; retrieving AR instructional content related to the medical imaging or medical therapy device matching the generated one or more query data structures from a database; and outputting the AR instructional content one or more of (i) displayed superimposed on video displayed by the AR device and/or (ii) displayed on a head mounted display of the AR device and/or (iii) as audio content via a loudspeaker of the AR device.
Type: Grant
Filed: December 2, 2019
Date of Patent: June 4, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Rithesh Sreenivasan, Oladimeji Feyisetan Farri, Sheikh Sadid Al Hasan, Tilak Raj Arora, Vivek Varma Datla
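The abstract above describes fusing camera images and a triggering audio segment into query data structures and matching them against a content database. The following is a minimal, illustrative Python sketch of that fuse-and-match idea only; the in-memory content store, tag-based matching, and all names here are assumptions, not the patented query data structures.

```python
from dataclasses import dataclass, field

@dataclass
class ARContent:
    title: str
    tags: set = field(default_factory=set)

# Hypothetical in-memory stand-in for the AR instructional content database.
CONTENT_DB = [
    ARContent("Replace X-ray tube filter", {"x-ray", "tube", "filter", "replace"}),
    ARContent("Calibrate MRI patient table", {"mri", "table", "calibrate"}),
]

def build_query(image_labels, transcribed_audio):
    """Fuse visual labels and audio keywords into a single query structure."""
    audio_terms = {w.strip(".,?").lower() for w in transcribed_audio.split()}
    return {"terms": set(image_labels) | audio_terms}

def retrieve(query, db, top_k=1):
    """Rank content by term overlap with the query and return the best matches."""
    scored = sorted(db, key=lambda c: len(c.tags & query["terms"]), reverse=True)
    return scored[:top_k]

if __name__ == "__main__":
    q = build_query(["x-ray", "filter"], "How do I replace this part?")
    print([c.title for c in retrieve(q, CONTENT_DB)])
```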
-
Publication number: 20240177850
Abstract: A method for establishing and/or changing a communication path between a subject and a radiation medical device (20, 30) in real time during operation of the radiation medical device (20, 30).
Type: Application
Filed: February 25, 2022
Publication date: May 30, 2024
Inventors: Gereon Vogtmeier, Nagaraju Bussa, Mark Thomas Johnson, Christoph Günther Leussler, Rithesh Sreenivasan, Jan Hendrik Wuelbern, Rajendra Singh Sisodia
-
Publication number: 20240071110
Abstract: A method (100) for generating a textual description of a medical image, comprising: receiving (130) a medical image of an anatomical region, the image comprising one or more abnormalities; segmenting (140) the anatomical region in the received medical image from a remainder of the image; identifying (150) at least one of the one or more abnormalities in the segmented anatomical region; extracting (160) one or more features from the identified abnormality; generating (170), using the extracted features and a trained text generation model, a textual description of the identified abnormality; and reporting (180), via a user interface of the system, the generated textual description of the identified abnormality.
Type: Application
Filed: November 6, 2023
Publication date: February 29, 2024
Inventors: Christine Menking SWISHER, Sheikh Sadid AL HASAN, Jonathan RUBIN, Cristhian Mauricio POTES BLANDON, Yuan LING, Oladimeji Feyisetan FARRI, Rithesh SREENIVASAN
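The abstract above walks through a segment, identify, extract, generate, report pipeline. Below is a minimal Python sketch of that sequence of steps, with toy thresholding and a template string standing in for the trained segmentation and text generation models; every function and constant here is a hypothetical placeholder, not the claimed method.

```python
import numpy as np

def segment_anatomy(image: np.ndarray) -> np.ndarray:
    """Toy segmentation: keep pixels above the mean intensity."""
    return (image > image.mean()).astype(np.uint8)

def find_abnormalities(image: np.ndarray, mask: np.ndarray) -> list:
    """Toy detection: flag unusually bright pixels inside the segmented region."""
    ys, xs = np.where((mask == 1) & (image > np.percentile(image, 99)))
    return [{"y": int(y), "x": int(x), "intensity": float(image[y, x])} for y, x in zip(ys, xs)]

def extract_features(abnormality: dict) -> dict:
    """Derive simple descriptors from one detected abnormality."""
    return {"location": (abnormality["y"], abnormality["x"]),
            "severity": "high" if abnormality["intensity"] > 200 else "moderate"}

def generate_text(features: dict) -> str:
    """Stand-in for a trained text generation model: template-based description."""
    y, x = features["location"]
    return f"A {features['severity']}-intensity finding is present near row {y}, column {x}."

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, size=(64, 64)).astype(float)
    mask = segment_anatomy(img)
    for abn in find_abnormalities(img, mask)[:2]:
        print(generate_text(extract_features(abn)))   # "reporting" step, here just printed
```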
-
Publication number: 20240055138
Abstract: In order to improve workflow efficiency and/or patient experience during a scan procedure, a system is proposed to provide, prior to arrival at the hospital, an indication of the suitability of a patient to be assigned to a specific level of autonomy in a medical scanning procedure. The system comprises a scan simulation module, a patient monitoring module, and a patient profile generation module. The scan simulation module comprises one or more sensory stimulation devices configured to apply at least one sensory stimulus to a patient to simulate a scan environment that may be experienced by the patient during a scan procedure. The patient monitoring module comprises one or more sensors configured to acquire data of the patient in the simulated scan environment. The patient profile generation module is configured to determine a state of anxiety of the patient based on the acquired data and to create a patient profile comprising the determined state of anxiety of the patient in the simulated scan environment.
Type: Application
Filed: December 27, 2021
Publication date: February 15, 2024
Inventors: Rithesh Sreenivasan, Nagaraju Bussa, Krishnamoorthy Palanisamy, Gereon Vogtmeier, Mark Thomas Johnson, Rajendra Singh Sisodia, Steffen Weiss, Christoph Günther Leussler
-
Publication number: 20240023901
Abstract: There is a need for techniques to avoid or reduce motion artifacts in medical images. According to the invention, there is provided control circuitry for a medical imaging system. The control circuitry is configured to control the medical imaging system to provide synchronized display and nudging to provide a patient with information on the progress of a scan. In this way, patient anxiety may be reduced by providing the patient with a sense of time during the scan, leading to a reduction in motion artifacts.
Type: Application
Filed: October 20, 2021
Publication date: January 25, 2024
Inventors: Rithesh Sreenivasan, Krishnamoorthy Palanisamy, Sudipta Chaudhury, Jaap Knoester, Gereon Vogtmeier, Steffen Weiss
-
Publication number: 20240008783
Abstract: The present invention relates to a system (100) for sensor-signal-dependent dialog generation during a medical imaging process, the system (100) comprising: a sensor module (10), configured to measure condition data of a patient; a processor module (20), configured to analyze the condition data of the patient to determine biometric and physical condition data of the patient; and a dialog data generation module (30), configured to generate questionnaire data for obtaining real-time feedback from the patient during the medical imaging process, wherein the questionnaire data is based on a parameter of the medical imaging process and on the determined biometric and physical condition data of the patient.
Type: Application
Filed: November 10, 2021
Publication date: January 11, 2024
Inventors: Krishnamoorthy Palanisamy, Rithesh Sreenivasan, Rajendra Singh Sisodia, Sarif Kumar Naik, Gereon Vogtmeier, Mark Thomas Johnson, Steffen Weiss, Nagaraju Bussa, Christoph Günther Leussler
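The abstract above combines an imaging-process parameter with measured patient condition data to generate feedback questions. A minimal Python sketch of that combination is given below; the sensor fields, thresholds, and template questions are all hypothetical, standing in for the dialog data generation module.

```python
# Hypothetical template-based question generation driven by a scan step and
# simple condition-data thresholds; not the patented dialog generation module.
def generate_questions(scan_parameter, condition):
    """Build real-time feedback questions from the scan step and patient condition data."""
    questions = []
    if condition.get("heart_rate", 0) > 100:
        questions.append("Your heart rate is elevated. Are you feeling anxious?")
    if condition.get("movement", 0.0) > 0.5:
        questions.append("We detected movement. Are you comfortable on the table?")
    if scan_parameter == "contrast_injection":
        questions.append("The contrast agent is about to be injected. Do you feel any warmth?")
    return questions or ["Everything looks fine. Please continue to hold still."]

if __name__ == "__main__":
    print(generate_questions("contrast_injection", {"heart_rate": 110, "movement": 0.2}))
```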
-
Patent number: 11836997
Abstract: A method (100) for generating a textual description of a medical image, comprising: receiving (130) a medical image of an anatomical region, the image comprising one or more abnormalities; segmenting (140) the anatomical region in the received medical image from a remainder of the image; identifying (150) at least one of the one or more abnormalities in the segmented anatomical region; extracting (160) one or more features from the identified abnormality; generating (170), using the extracted features and a trained text generation model, a textual description of the identified abnormality; and reporting (180), via a user interface of the system, the generated textual description of the identified abnormality.
Type: Grant
Filed: May 7, 2019
Date of Patent: December 5, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Christine Menking Swisher, Sheikh Sadid Al Hasan, Jonathan Rubin, Cristhian Mauricio Potes Blandon, Yuan Ling, Oladimeji Feyisetan Farri, Rithesh Sreenivasan
-
Publication number: 20230351252
Abstract: Some embodiments are directed to training a model, e.g., a medical model. The training uses multiple model updates received from multiple client systems. At least some of the multiple client systems train on training sets that indicate values for different features. The model updates are aggregated in an aggregated model, for which feature weights are obtained. The feature weights provide information on the relative importance of the multiple features for the aggregated model's output.
Type: Application
Filed: September 23, 2021
Publication date: November 2, 2023
Inventors: Ashul Jain, Shreya Anand, Shiva Moorthy Pookala Vittal, Aleksandr Bukharev, Richard Vdovjak, Rithesh Sreenivasan
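The abstract above describes aggregating updates from clients whose training sets cover different features, then reading feature weights off the aggregated model. The Python sketch below illustrates that idea with simple per-feature averaging and an absolute-weight ranking; the feature names, update format, and averaging rule are assumptions, not the claimed aggregation scheme.

```python
import numpy as np

# Hypothetical global feature vocabulary; each client may only cover a subset.
FEATURES = ["age", "bmi", "blood_pressure", "glucose"]

def aggregate(client_updates):
    """Average per-feature weights across clients, skipping features a client lacks.

    client_updates: list of dicts mapping feature name -> locally trained weight.
    """
    agg = {}
    for feat in FEATURES:
        values = [u[feat] for u in client_updates if feat in u]
        agg[feat] = float(np.mean(values)) if values else 0.0
    return agg

def feature_importance(agg_weights):
    """Rank features by absolute aggregated weight as a rough importance signal."""
    return sorted(agg_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    updates = [
        {"age": 0.2, "bmi": 0.9},                              # client A's features
        {"age": 0.3, "blood_pressure": -0.4, "glucose": 1.1},  # client B's features
    ]
    print(feature_importance(aggregate(updates)))
```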
-
Publication number: 20230214593
Abstract: According to an aspect, there is provided a computer-implemented method of structuring content for training an artificial intelligence model, the method comprising: receiving (S11) input content associated with medical device documentation; converting (S12) the input content to a data interchange format; extracting (S13) a plurality of key terms from the converted input content; extracting (S14) a plurality of key phrases from the converted input content; receiving (S15) validation of the key terms and the key phrases from a supervisor; and building (S16) a dialogue, for training the artificial intelligence model, based on at least some of the validated key terms and the validated key phrases, wherein the dialogue comprises a series of statements.
Type: Application
Filed: June 16, 2021
Publication date: July 6, 2023
Inventors: RISHAB PRADEEP PADUKONE, RAJENDRA SINGH SISODIA, RITHESH SREENIVASAN, SHREYA ANAND, THASNEEM MOORKAN, TILAK RAJ ARORA
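The abstract above lists a convert, extract, validate, build-dialogue sequence. Below is a minimal Python sketch of the conversion and extraction steps with a JSON interchange format and frequency-based key-term extraction; these choices, and the skipped supervisor-validation step, are illustrative assumptions rather than the claimed method.

```python
import json
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "in", "for", "with", "after"}

def to_interchange_format(raw_text: str) -> str:
    """Convert raw documentation text to a simple JSON interchange format."""
    return json.dumps({"sentences": re.split(r"(?<=[.!?])\s+", raw_text.strip())})

def extract_key_terms(doc_json: str, top_n: int = 5):
    """Pick the most frequent non-stopword tokens as candidate key terms."""
    text = " ".join(json.loads(doc_json)["sentences"]).lower()
    tokens = [t for t in re.findall(r"[a-z]+", text) if t not in STOPWORDS]
    return [term for term, _ in Counter(tokens).most_common(top_n)]

def build_dialogue(key_terms):
    """Turn (validated) key terms into a simple series of training statements."""
    return [f"What does the documentation say about '{t}'?" for t in key_terms]

if __name__ == "__main__":
    doc = to_interchange_format("Press the emergency stop to halt the gantry. The gantry resumes after a reset.")
    terms = extract_key_terms(doc)   # in practice a supervisor would validate these first
    for line in build_dialogue(terms):
        print(line)
```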
-
Patent number: 11616740
Abstract: A chatbot may be invoked in an online communication session between two or more human users to share additional content in the communication session. To determine when to invoke the chatbot, e.g., at which point during their conversation, an interaction model may be trained on past conversation data between participants to determine so-termed interaction points in the past conversation data which are predictive of a subsequent sharing of additional content by one of the participants. Having generated the interaction model, the interaction model may be applied to an online communication session to detect such interaction points in a real-time or near real-time conversation between users and to invoke the chatbot to participate in the communication session in response to a detection.
Type: Grant
Filed: September 10, 2019
Date of Patent: March 28, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Rithesh Sreenivasan, Zoran Stankovic
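The abstract above hinges on detecting interaction points in a live conversation and invoking a chatbot only at those points. The Python sketch below shows that gating logic with a hard-coded phrase list standing in for the trained interaction model; the phrases and the stub chatbot reply are hypothetical.

```python
# A minimal sketch, not the patented interaction model: score each incoming
# message against trigger phrases that, in past conversations, preceded a
# participant sharing additional content.
TRIGGER_PHRASES = {"can you send", "do you have a link", "where can i find", "share the"}

def is_interaction_point(message, threshold=1):
    """Return True when the message looks like a point where a chatbot could contribute content."""
    text = message.lower()
    hits = sum(1 for phrase in TRIGGER_PHRASES if phrase in text)
    return hits >= threshold

def maybe_invoke_chatbot(message):
    """Invoke a (stub) chatbot only at detected interaction points."""
    if is_interaction_point(message):
        return "Chatbot: here is a document that may help."
    return None

if __name__ == "__main__":
    for msg in ["How was your weekend?", "Can you send the onboarding guide?"]:
        print(msg, "->", maybe_invoke_chatbot(msg))
```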
-
Publication number: 20230066314
Abstract: The present disclosure relates to preserving context in a conversation between a user (101) and a digital assistant device (102). During training, the digital assistant device (102) is provided with a plurality of conversations having a plurality of dialogues. Each of the plurality of dialogues is assigned an ID based on a context. Further, two or more test queries having the same context are provided as input, and the two or more queries are assigned an ID based on the context. Thereafter, the digital assistant device (102) is configured to retrieve one or more dialogues from the plurality of dialogues where the ID of the one or more dialogues matches the ID of the two or more queries. In real time, one or more queries are received and, based on a context of the one or more queries, one or more dialogues are retrieved and provided to the user.
Type: Application
Filed: February 5, 2021
Publication date: March 2, 2023
Inventors: SHREYA ANAND, RITHESH SREENIVASAN, SHEIKH SADID AL HASAN, OLADIMEJI FEYISETAN FARRI
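The retrieval step in the abstract above matches the context ID of an incoming query to the context IDs of stored dialogues. The Python sketch below illustrates that matching with a keyword-based context assignment and a small in-memory dialogue store; both are hypothetical stand-ins for the trained context assignment described above.

```python
# Hypothetical dialogue store: each dialogue carries a context ID assigned during training.
DIALOGUES = [
    {"context_id": "billing", "text": "Your last invoice was sent on March 3."},
    {"context_id": "billing", "text": "Invoices can be downloaded from the portal."},
    {"context_id": "scan_prep", "text": "Please avoid caffeine before the scan."},
]

def assign_context_id(query):
    """Toy context assignment: route by keyword instead of a trained classifier."""
    return "billing" if "invoice" in query.lower() else "scan_prep"

def retrieve_dialogues(query):
    """Return all stored dialogues whose context ID matches the query's context ID."""
    ctx = assign_context_id(query)
    return [d["text"] for d in DIALOGUES if d["context_id"] == ctx]

if __name__ == "__main__":
    print(retrieve_dialogues("Where is my invoice?"))
```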
-
Patent number: 11419559
Abstract: A device, system, and method are provided for detecting pain in a cardiac-related region of the body and determining whether that pain is cardiac or non-cardiac. The device, system, and method may include calculating or determining a first feature based on a variation in activity level and a variation in the detected heart rate measurement, and a second feature based on a variation in the detected ECG features and on the first feature, and then subjecting at least the first feature and the second feature to a cardiac pain classifier to determine a cardiac classification.
Type: Grant
Filed: December 3, 2018
Date of Patent: August 23, 2022
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vikram Basawaraj Patil Okaly, Ravindra Balasaheb Patil, Rithesh Sreenivasan, Krishnamoorthy Palanisamy
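The abstract above builds two features (one from activity and heart-rate variation, one from ECG-feature variation plus the first feature) and passes them to a cardiac pain classifier. The Python sketch below mirrors that structure with standard-deviation variation measures, a toy ECG feature, and a fixed linear rule in place of the trained classifier; all coefficients and thresholds are illustrative assumptions.

```python
import numpy as np

def variation(signal):
    """Simple variation measure: standard deviation of a signal window."""
    return float(np.std(signal))

def first_feature(activity, heart_rate):
    """Feature 1: combine variation in activity level and in heart rate."""
    return variation(heart_rate) - variation(activity)

def second_feature(ecg_feature_series, feat1):
    """Feature 2: variation in ECG-derived features, adjusted by feature 1."""
    return variation(ecg_feature_series) + 0.5 * feat1

def classify_pain(feat1, feat2, threshold=1.0):
    """Stand-in for a trained cardiac pain classifier: a fixed linear rule."""
    return "cardiac" if (0.7 * feat1 + 0.3 * feat2) > threshold else "non-cardiac"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    activity = rng.normal(0, 0.2, 60)           # low activity variation
    heart_rate = rng.normal(95, 8, 60)          # elevated, variable heart rate
    st_deviation = rng.normal(0.1, 0.05, 60)    # toy ECG feature (ST deviation)
    f1 = first_feature(activity, heart_rate)
    f2 = second_feature(st_deviation, f1)
    print(classify_pain(f1, f2))
```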
-
Patent number: 11403786
Abstract: Embodiments of the present disclosure disclose a method and system for generating a medical image based on textual data in a medical report. For generation, textual data from each of one or more medical reports of the patient is retrieved. The textual data comprises one or more medical events and corresponding one or more attributes associated with each of the one or more medical reports. Further, a matching score for each of a plurality of reference images is computed based on the textual data, using a first machine learning model. Upon computing the matching score, one or more images are selected from the plurality of reference images based on the matching score associated with each of the plurality of reference images. The medical image for the patient is generated based on the one or more images and the textual data using a second machine learning model.
Type: Grant
Filed: March 15, 2019
Date of Patent: August 2, 2022
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Oladimeji Feyisetan Farri, Rithesh Sreenivasan, Vikram Basawaraj Patil Okaly, Ravindra Balasaheb Patil, Krishnamoorthy Palanisamy
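The abstract above scores reference images against report text and selects the best matches before the image-generation step. The Python sketch below shows only that score-and-select stage, with keyword overlap standing in for the first machine learning model; the image identifiers, tags, and scoring rule are all assumptions, and the second (generation) model is omitted.

```python
# A minimal sketch, not the patented models: the "first machine learning model"
# is replaced by tag overlap scoring over a tiny hypothetical reference set.
REFERENCE_IMAGES = {
    "img_001": {"chest", "x-ray", "cardiomegaly"},
    "img_002": {"chest", "x-ray", "pneumothorax"},
    "img_003": {"brain", "mri", "lesion"},
}

def matching_score(report_terms, image_tags):
    """Score a reference image by how many report terms appear in its tags."""
    return len(report_terms & image_tags)

def select_images(report_text, top_k=2):
    """Rank reference images against the report text and keep the best ones."""
    terms = set(report_text.lower().split())
    ranked = sorted(REFERENCE_IMAGES.items(),
                    key=lambda kv: matching_score(terms, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

if __name__ == "__main__":
    print(select_images("chest x-ray shows cardiomegaly"))
```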
-
Publication number: 20220078139
Abstract: A chatbot may be invoked in an online communication session between two or more human users to share additional content in the communication session. To determine when to invoke the chatbot, e.g., at which point during their conversation, an interaction model may be trained on past conversation data between participants to determine so-termed interaction points in the past conversation data which are predictive of a subsequent sharing of additional content by one of the participants. Having generated the interaction model, the interaction model may be applied to an online communication session to detect such interaction points in a real-time or near real-time conversation between users and to invoke the chatbot to participate in the communication session in response to a detection.
Type: Application
Filed: September 10, 2019
Publication date: March 10, 2022
Inventors: Rithesh Sreenivasan, Zoran Stankovic
-
Publication number: 20220020482
Abstract: A non-transitory computer-readable medium stores instructions readable and executable by at least one electronic processor (20) to perform an augmented reality (AR) content generation method (100). The method includes: acquiring, with a camera (14) of an AR device (13), one or more images of a component of a medical imaging or medical therapy device (12); receiving, from a microphone (15) of the AR device, a triggering audio segment; generating one or more query data structures from both the one or more images and the triggering audio segment; retrieving AR instructional content related to the medical imaging or medical therapy device matching the generated one or more query data structures from a database (26); and outputting the AR instructional content one or more of (i) displayed superimposed on video displayed by the AR device and/or (ii) displayed on a head mounted display of the AR device and/or (iii) as audio content via a loudspeaker (27) of the AR device.
Type: Application
Filed: December 2, 2019
Publication date: January 20, 2022
Inventors: Rithesh SREENIVASAN, Oladimeji Feyisetan FARRI, Sheikh Sadid AL HASAN, Tilak Raj ARORA, Vivek Varma DATLA
-
Patent number: 11183221
Abstract: In certain embodiments, a video file may be obtained based on one or more predetermined criteria. Information associated with a user (to whom dynamic content derived from at least a video portion of the video file is to be presented) may be obtained. The video file may be processed based on the information associated with the user to determine reference points within the video file. The dynamic content may be generated based on the reference points such that the dynamic content comprises a first video portion of the video file (that corresponds to at least one of the reference points) and additional content related to the first video portion. The dynamic content may be provided for presentation to the user.
Type: Grant
Filed: December 18, 2015
Date of Patent: November 23, 2021
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Anand Srinivasan Srinivasan Natesan, Rithesh Sreenivasan, Rajendra Singh Sisodia, Shahin Basheer
-
Publication number: 20210358638
Abstract: A method for training an adherence model, the method including: extracting data for a group of individuals (510), wherein the extracted data includes demographic data (205) and clinical data (210); training a linear regression model (520) using a set of hyperparameter pairs (L1, Alpha) (515), wherein the linear regression model produces an adherence index based upon the extracted data, further including: for each hyperparameter pair (L1, Alpha) in the set of hyperparameter pairs, training the linear regression model using a training data set to produce a linear regression model for each hyperparameter pair (L1, Alpha) and calculating a performance metric R2 for the resulting model based upon a validation data set (525), wherein the training data set is a subset of the extracted data and the validation data set is a subset of the extracted data that is different from the training data set; and identifying the linear regression model with the largest performance metric R2 (530).
Type: Application
Filed: November 14, 2019
Publication date: November 18, 2021
Inventors: Rithesh Sreenivasan, Rose Ramasamy, Shiva Moorthy Pookala Vittal Bhat
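The abstract above describes a grid search over (L1, Alpha) hyperparameter pairs for a regularized linear regression, keeping the model with the largest R2 on a held-out validation set. The Python sketch below reproduces that loop, assuming the (L1, Alpha) pairs map onto scikit-learn's ElasticNet (l1_ratio, alpha); the feature set and adherence index here are synthetic placeholders, not the data described in the publication.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # synthetic demographic + clinical features
y = X @ np.array([0.5, -0.2, 0.0, 0.8, 0.1]) + rng.normal(scale=0.1, size=200)  # synthetic adherence index

# Split the extracted data into disjoint training and validation subsets.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

best_model, best_r2 = None, -np.inf
for l1_ratio in (0.1, 0.5, 0.9):              # "L1" hyperparameter
    for alpha in (0.001, 0.01, 0.1):          # "Alpha" hyperparameter
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X_train, y_train)
        r2 = r2_score(y_val, model.predict(X_val))
        if r2 > best_r2:                      # keep the model with the largest R^2
            best_model, best_r2 = model, r2

print(f"best R^2 = {best_r2:.3f}, alpha = {best_model.alpha}, l1_ratio = {best_model.l1_ratio}")
```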
-
Publication number: 20210241884
Abstract: A method (100) for generating a textual description of a medical image, comprising: receiving (130) a medical image of an anatomical region, the image comprising one or more abnormalities; segmenting (140) the anatomical region in the received medical image from a remainder of the image; identifying (150) at least one of the one or more abnormalities in the segmented anatomical region; extracting (160) one or more features from the identified abnormality; generating (170), using the extracted features and a trained text generation model, a textual description of the identified abnormality; and reporting (180), via a user interface of the system, the generated textual description of the identified abnormality.
Type: Application
Filed: May 7, 2019
Publication date: August 5, 2021
Inventors: Christine Menking Swisher, Sheikh Sadid Al Hasan, Jonathan Rubin, Cristhian Mauricio Potes Blandon, Yuan Ling, Oladimeji Feyisetan Farri, Rithesh Sreenivasan
-
Patent number: 11074485
Abstract: A machine learning based recommendation model, including: a supervised learning classifier configured to receive input training data that includes a plurality of behavioral determinants; a supervised learning model configured to receive subject input data that includes a plurality of behavioral determinants, wherein the supervised learning model outputs a predicted behavior of a subject; and a channel selection module configured to receive the subject input data and the predicted behavior and to determine a recommended communication channel for the subject to follow to achieve the predicted behavior.
Type: Grant
Filed: June 25, 2019
Date of Patent: July 27, 2021
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Rithesh Sreenivasan, Aart Tijmen Van Halteren, Karthik Srinivasan
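The abstract above chains a behavior prediction step with a channel selection step that consumes both the subject data and the prediction. The Python sketch below shows that two-stage flow with a trivial scoring rule in place of the supervised model and a hand-written channel mapping; the determinants, weights, and channel names are all hypothetical.

```python
# A minimal sketch, assuming hypothetical behavioral determinants and a fixed
# channel mapping; the supervised learning model is replaced by a simple rule.
def predict_behavior(determinants):
    """Stand-in for the supervised learning model: predict likely adherence behavior."""
    score = 0.6 * determinants["motivation"] + 0.4 * determinants["past_adherence"]
    return "adherent" if score >= 0.5 else "at_risk"

def recommend_channel(determinants, predicted_behavior):
    """Channel selection: pick a communication channel given subject data and the prediction."""
    if predicted_behavior == "at_risk":
        return "phone_call" if determinants["age"] > 60 else "coaching_app"
    return "email_reminder"

if __name__ == "__main__":
    subject = {"age": 67, "motivation": 0.3, "past_adherence": 0.4}
    behavior = predict_behavior(subject)
    print(behavior, "->", recommend_channel(subject, behavior))
```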
-
Patent number: 11056227
Abstract: A method for generating a textual description from a medical image, comprising: receiving a medical image having a first modality to a system configured to generate a textual description of the medical image; determining, using an imaging modality classification module, that the first modality is a specific one of a plurality of different modalities; determining, using an anatomy classification module, that the medical image comprises information about a specific portion of an anatomy; identifying, by an orchestrator module based at least on the determined first modality, which of a plurality of different text generation models to utilize to generate a textual description from the medical image; generating, by a text generation module utilizing the identified text generation model, a textual description from the medical image; and reporting, via a user interface of the system, the generated textual description.
Type: Grant
Filed: May 23, 2018
Date of Patent: July 6, 2021
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Rithesh Sreenivasan, Shreya Anand, Tilak Raj Arora, Oladimeji Feyisetan Farri, Sheikh Sadid Al Hasan, Yuan Ling, Junyi Liu
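The orchestration idea in the abstract above is a routing step: classify modality and anatomy, then dispatch to the text generation model registered for that combination. The Python sketch below illustrates the routing table with metadata lookups and canned strings standing in for the trained classifiers and text generation models; all module names and outputs are hypothetical.

```python
# Hypothetical classifier stubs in place of the trained modality/anatomy classifiers.
def classify_modality(image_meta):
    return image_meta.get("modality", "unknown")      # e.g. "CT", "MRI", "X-ray"

def classify_anatomy(image_meta):
    return image_meta.get("anatomy", "unknown")       # e.g. "chest", "brain"

# Routing table: one (modality, anatomy) pair per stand-in text generation model.
TEXT_GENERATORS = {
    ("CT", "chest"): lambda meta: "CT chest: no acute abnormality identified.",
    ("MRI", "brain"): lambda meta: "MRI brain: no focal lesion detected.",
}

def orchestrate(image_meta):
    """Route the image to the text generation model matching its modality and anatomy."""
    key = (classify_modality(image_meta), classify_anatomy(image_meta))
    generator = TEXT_GENERATORS.get(key)
    if generator is None:
        return "No text generation model available for this modality/anatomy."
    return generator(image_meta)

if __name__ == "__main__":
    print(orchestrate({"modality": "CT", "anatomy": "chest"}))
```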