Patents by Inventor Jagadish Venkataraman
Jagadish Venkataraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20250094636
Abstract: This patent disclosure provides various verification techniques to ensure that anonymized surgical procedure videos are indeed free of any personally-identifiable information (PII). In a particular aspect, a process for verifying that an anonymized surgical procedure video is free of PII is disclosed. This process can begin by receiving a surgical video corresponding to a surgery. The process next removes personally-identifiable information (PII) from the surgical video to generate an anonymized surgical video. Next, the process selects a set of verification video segments from the anonymized surgical procedure video. The process subsequently determines whether each segment in the set of verification video segments is free of PII. If so, the process replaces the surgical video with the anonymized surgical video for storage. If not, the process performs additional PII removal steps on the anonymized surgical video to generate an updated anonymized surgical procedure video.
Type: Application
Filed: December 6, 2024
Publication date: March 20, 2025
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy

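A minimal Python sketch of the verify-then-replace loop this abstract describes, assuming hypothetical remove_pii, sample_segments, and segment_has_pii helpers stand in for the disclosure's PII-removal and detection steps; the bounded number of retry passes is an illustrative choice, not something stated in the abstract.

```python
from typing import Any, Callable, List

def verify_and_store(video: Any,
                     remove_pii: Callable[[Any], Any],
                     sample_segments: Callable[[Any], List[Any]],
                     segment_has_pii: Callable[[Any], bool],
                     max_passes: int = 3) -> Any:
    """Return an anonymized video only once sampled verification segments pass a PII check."""
    anonymized = remove_pii(video)                 # initial PII removal
    for _ in range(max_passes):
        segments = sample_segments(anonymized)     # select verification video segments
        if not any(segment_has_pii(s) for s in segments):
            return anonymized                      # verified: safe to replace the original video
        anonymized = remove_pii(anonymized)        # additional PII-removal pass, then re-verify
    raise RuntimeError("PII still detected after maximum removal passes")
```
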
Publication number: 20250082173
Abstract: Embodiments described herein provide examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
Type: Application
Filed: September 13, 2024
Publication date: March 13, 2025
Inventors: Jagadish Venkataraman, Denise Ann Miller

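A hedged sketch of the training-data pipeline outlined in this abstract. The helpers extract_interaction_segments, frame_features, and annotate_force_level are hypothetical stand-ins for the segment-extraction and annotation steps, and logistic regression is only a placeholder for whatever model the disclosure actually trains.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_visual_haptic_model(training_videos, extract_interaction_segments,
                              frame_features, annotate_force_level):
    """Train a frame-level force classifier from annotated tool-tissue interaction segments."""
    X, y = [], []
    for video in training_videos:
        for segment in extract_interaction_segments(video):   # segments depicting the interaction
            for frame in segment:
                X.append(frame_features(frame))        # visual features for one video image
                y.append(annotate_force_level(frame))  # predefined force-level label
    model = LogisticRegression(max_iter=1000)
    model.fit(np.asarray(X), np.asarray(y))            # visual features -> force level
    return model
```
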
Publication number: 20250062020
Abstract: In the disclosed systems and methods for characterizing a cancer condition of a tissue in a subject, a computer system inputs information into an ensemble model. The information includes, for each respective class of radiomics features in a plurality of classes of radiomics features, a corresponding value for each respective radiomic feature in a corresponding plurality of radiomics features of the respective class of radiomics features obtained from a medical imaging dataset. The ensemble model comprises a plurality of component models. The computer system obtains as output from each respective component model in the plurality of component models a corresponding component prediction for the cancer condition, thereby obtaining a plurality of component predictions for the cancer condition. The computer system combines the plurality of component predictions to obtain as output of the ensemble model a characterization of the cancer condition.
Type: Application
Filed: August 14, 2023
Publication date: February 20, 2025
Inventors: Jacob William Gordon, Nathaniel Braman, Jagadish Venkataraman

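A minimal sketch of the ensemble step only: the per-class radiomics values are flattened into one feature vector, each component model produces a prediction, and the predictions are combined. Simple averaging is an assumed combination rule; the abstract does not say how the component predictions are combined.

```python
import numpy as np

def characterize_cancer_condition(radiomics_by_class, component_predictors):
    """radiomics_by_class: {feature_class: values}; component_predictors: callables, one per model."""
    x = np.concatenate([np.asarray(v, dtype=float) for v in radiomics_by_class.values()])
    component_predictions = [predict(x) for predict in component_predictors]  # one prediction per component model
    return float(np.mean(component_predictions))  # combined characterization of the cancer condition
```
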
Publication number: 20250046065
Abstract: In the disclosed systems and methods for categorizing medical data, a computer system obtains, in electronic form, a plurality of medical records. Each medical record includes corresponding medical data from a respective medical evaluation and corresponding metadata comprising a plurality of attributes about the respective medical evaluation. Each respective attribute comprises a corresponding string of text. The computer system determines, for each respective pair of medical records consisting of a first medical record and a second medical record, a corresponding pairwise similarity between, for each respective attribute in a set of attributes, the corresponding string of text for the first medical record and the corresponding string of text for the second medical record. The computer system identifies a first subset of the plurality of medical records. Each respective medical record in the first subset is connected to each other through pairwise similarities that each satisfies a similarity threshold.
Type: Application
Filed: August 1, 2023
Publication date: February 6, 2025
Inventors: Akshay Goel, Jagadish Venkataraman, Jacob William Hunter Gordon

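A sketch of the grouping logic this abstract implies, under stated assumptions: each record is a dict of attribute strings, difflib's SequenceMatcher ratio serves as the pairwise text similarity (the abstract does not name a metric), records are linked when every compared attribute meets the threshold, and the connected subsets are recovered with a small union-find.

```python
from difflib import SequenceMatcher

def group_medical_records(records, attributes, threshold=0.9):
    """Group records connected through per-attribute string similarities above a threshold."""
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            similar = all(
                SequenceMatcher(None, records[i][a], records[j][a]).ratio() >= threshold
                for a in attributes
            )
            if similar:
                union(i, j)                 # connect the pair of records

    groups = {}
    for i in range(len(records)):
        groups.setdefault(find(i), []).append(records[i])
    return list(groups.values())            # connected subsets of the medical records
```
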
Patent number: 12189821
Abstract: This patent disclosure provides various verification techniques to ensure that anonymized surgical procedure videos are indeed free of any personally-identifiable information (PII). In a particular aspect, a process for verifying that an anonymized surgical procedure video is free of PII is disclosed. This process can begin by receiving a surgical video corresponding to a surgery. The process next removes personally-identifiable information (PII) from the surgical video to generate an anonymized surgical video. Next, the process selects a set of verification video segments from the anonymized surgical procedure video. The process subsequently determines whether each segment in the set of verification video segments is free of PII. If so, the process replaces the surgical video with the anonymized surgical video for storage. If not, the process performs additional PII removal steps on the anonymized surgical video to generate an updated anonymized surgical procedure video.
Type: Grant
Filed: May 18, 2023
Date of Patent: January 7, 2025
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy

Patent number: 12120461
Abstract: This disclosure provides techniques for synchronizing the playback of two recorded videos of the same surgical procedure. In one aspect, a process for generating a composite video from two recorded videos of a surgical procedure is disclosed. This process begins by receiving first and second surgical videos of the same surgical procedure. The process then performs phase segmentation on each of the first and second surgical videos to segment the first and second surgical videos into a first set of video segments and a second set of video segments, respectively, corresponding to a sequence of predefined phases. Next, the process time-aligns each video segment of a given predefined phase in the first video with a corresponding video segment of the given predefined phase in the second video. The process next displays the time-aligned first and second surgical videos for comparative viewing.
Type: Grant
Filed: May 4, 2023
Date of Patent: October 15, 2024
Assignee: Verb Surgical, Inc.
Inventors: Pablo Garcia Kilroy, Jagadish Venkataraman

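An illustrative sketch of the per-phase time alignment, assuming an upstream phase-segmentation model has already produced (phase, start, end) segments for each video; the playback-rate factor computed here is one simple way to time-align corresponding segments for side-by-side viewing, not necessarily the patented method.

```python
def align_by_phase(segments_a, segments_b):
    """segments_*: lists of (phase_name, start_sec, end_sec) from phase segmentation."""
    by_phase_b = {phase: (start, end) for phase, start, end in segments_b}
    alignment = []
    for phase, start_a, end_a in segments_a:
        if phase not in by_phase_b:
            continue                                               # phase missing from the second video
        start_b, end_b = by_phase_b[phase]
        rate_b = (end_b - start_b) / max(end_a - start_a, 1e-6)    # speed factor so both segments co-terminate
        alignment.append({"phase": phase,
                          "video_a": (start_a, end_a),
                          "video_b": (start_b, end_b),
                          "video_b_rate": rate_b})
    return alignment
```
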
Patent number: 12108928
Abstract: Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
Type: Grant
Filed: October 10, 2023
Date of Patent: October 8, 2024
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Denise Ann Miller

Publication number: 20240290459
Abstract: This patent disclosure provides various embodiments of combining multiple modalities of non-text surgical data in the form of videos, images, and audio in a meaningful manner so that the combined data can be used to perform comprehensive data analytics for a surgical procedure. In some embodiments, the disclosed system can begin by receiving two or more modalities of surgical data during the surgical procedure. The system then time-synchronizes the two or more modalities of surgical data to generate two or more modalities of time-synchronized surgical data. Next, the system converts each modality of the time-synchronized surgical data into a corresponding array of values of a common format. The system then combines the two or more arrays of values to generate a combined set of values. The system subsequently performs comprehensive data analytics on the combined set of values to generate a surgical decision for the surgical procedure.
Type: Application
Filed: March 5, 2024
Publication date: August 29, 2024
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy

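A minimal sketch of the fusion step described in this abstract: each modality is assumed to arrive as a (timestamps, values) pair, is resampled onto a shared clock (time synchronization), cast to a common numeric format, and stacked into one combined array. The sampling rate and the use of linear interpolation are illustrative assumptions.

```python
import numpy as np

def fuse_surgical_modalities(modalities, t_start, t_end, hz=10):
    """modalities: iterable of (timestamps, values) pairs with increasing timestamps."""
    timeline = np.arange(t_start, t_end, 1.0 / hz)         # shared clock for all modalities
    columns = []
    for timestamps, values in modalities:
        synced = np.interp(timeline, timestamps, values)   # time-synchronize onto the shared clock
        columns.append(synced.astype(np.float32))          # convert to a common format
    return np.stack(columns, axis=1)                       # combined set of values (time x modality)
```
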
Publication number: 20240260813
Abstract: An endoscope system capable of automatically turning on/off a light source during a surgical procedure is disclosed. This endoscope system includes: an endoscope module; a light source module coupled to the endoscope module; and a light-source control module. Moreover, the light-source control module controls an ON/OFF state of the light source by: (1) receiving real-time video images captured by the endoscope module; (2) processing the real-time video images to determine whether the endoscope module is inserted into a patient's body or is outside of the patient's body; and (3) in response to determining the endoscope module being outside of the patient's body while the light source is turned on, generating a control signal to immediately turn off the light source to the endoscope module, thereby ensuring safety of people in the operating room and preventing sensitive information in the operating room from being captured by the endoscope module.
Type: Application
Filed: February 15, 2024
Publication date: August 8, 2024
Inventor: Jagadish Venkataraman

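An illustrative version of the control loop only. The inside/outside-body decision here is a crude mean-brightness heuristic standing in for whatever image analysis the disclosed system performs, and light_source is a hypothetical driver object assumed to expose is_on() and turn_off().

```python
import numpy as np

def frame_is_outside_body(frame, brightness_threshold: float = 120.0) -> bool:
    # Stand-in detector: flags unusually bright scenes as "outside the body".
    return float(np.asarray(frame, dtype=float).mean()) > brightness_threshold

def control_light_source(frames, light_source):
    for frame in frames:                                       # real-time video images from the endoscope
        if light_source.is_on() and frame_is_outside_body(frame):
            light_source.turn_off()                            # immediately cut illumination outside the body
```
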
Publication number: 20240242818
Abstract: Embodiments described herein provide various examples of a surgical video analysis system for segmenting surgical videos of a given surgical procedure into shorter video segments and labeling/tagging these video segments with multiple categories of machine learning descriptors. In one aspect, a process for processing surgical videos recorded during performed surgeries of a surgical procedure includes the steps of: receiving a diverse set of surgical videos associated with the surgical procedure; receiving a set of predefined phases for the surgical procedure and a set of machine learning descriptors identified for each predefined phase in the set of predefined phases; for each received surgical video, segmenting the surgical video into a set of video segments based on the set of predefined phases; and, for each segment of the surgical video of a given predefined phase, annotating the video segment with a corresponding set of machine learning descriptors for the given predefined phase.
Type: Application
Filed: January 19, 2024
Publication date: July 18, 2024
Inventors: Jagadish Venkataraman, Pablo E. Garcia Kilroy

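A small sketch of the annotation step, assuming the phase boundaries come from an upstream phase-segmentation model and descriptors_by_phase maps each predefined phase to the machine-learning descriptors identified for it; both inputs are assumptions for illustration.

```python
def annotate_segments(phase_boundaries, descriptors_by_phase):
    """phase_boundaries: list of (phase_name, start_sec, end_sec) for one surgical video."""
    annotated = []
    for phase, start, end in phase_boundaries:
        annotated.append({
            "phase": phase,
            "start": start,
            "end": end,
            "descriptors": descriptors_by_phase.get(phase, []),  # ML descriptors for this phase
        })
    return annotated
```
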
Publication number: 20240106988
Abstract: Embodiments described herein provide various examples of monitoring adverse events in the background while displaying a higher-resolution surgical video on a lower-resolution display device. In one aspect, a process for detecting adverse events during a surgical procedure can begin by receiving a surgical video. The process then displays a first portion of the video images of the surgical video on a screen to assist a surgeon performing the surgical procedure. While displaying the first portion of the video images, the process uses a set of deep-learning models to monitor a second portion of the video images not being displayed on the screen, wherein each deep-learning model is constructed to detect a given adverse event among a set of adverse events. In response to detecting an adverse event in the second portion of the video images, the process notifies the surgeon of the detected adverse event to prompt an appropriate action.
Type: Application
Filed: October 16, 2023
Publication date: March 28, 2024
Inventors: Jagadish Venkataraman, Dave Scott, Eric Johnson

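A hypothetical sketch of the split display/monitor loop. crop_displayed and crop_off_screen select the shown and unshown regions of each high-resolution frame, and detectors maps each adverse-event name to a callable detector; all three are assumptions standing in for the deep-learning models the abstract describes.

```python
def monitor_adverse_events(frames, detectors, display, notify,
                           crop_displayed, crop_off_screen):
    for frame in frames:
        display(crop_displayed(frame))              # portion of the image shown to the surgeon
        off_screen = crop_off_screen(frame)         # portion not being displayed
        for event_name, detect in detectors.items():
            if detect(off_screen):                  # background check by one per-event model
                notify(event_name)                  # prompt the surgeon to take action
```
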
Publication number: 20240099555
Abstract: Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
Type: Application
Filed: October 10, 2023
Publication date: March 28, 2024
Inventors: Jagadish Venkataraman, Denise Ann Miller

Patent number: 11935641
Abstract: This patent disclosure provides various embodiments of combining multiple modalities of non-text surgical data in the form of videos, images, and audio in a meaningful manner so that the combined data can be used to perform comprehensive data analytics for a surgical procedure. In some embodiments, the disclosed system can begin by receiving two or more modalities of surgical data during the surgical procedure. The system then time-synchronizes the two or more modalities of surgical data to generate two or more modalities of time-synchronized surgical data. Next, the system converts each modality of the time-synchronized surgical data into a corresponding array of values of a common format. The system then combines the two or more arrays of values to generate a combined set of values. The system subsequently performs comprehensive data analytics on the combined set of values to generate a surgical decision for the surgical procedure.
Type: Grant
Filed: October 4, 2021
Date of Patent: March 19, 2024
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy

Patent number: 11918180
Abstract: An endoscope system capable of automatically turning on/off a light source during a surgical procedure is disclosed. This endoscope system includes: an endoscope module; a light source module coupled to the endoscope module; and a light-source control module. Moreover, the light-source control module controls an ON/OFF state of the light source by: (1) receiving real-time video images captured by the endoscope module; (2) processing the real-time video images to determine whether the endoscope module is inserted into a patient's body or is outside of the patient's body; and (3) in response to determining the endoscope module being outside of the patient's body while the light source is turned on, generating a control signal to immediately turn off the light source to the endoscope module, thereby ensuring safety of people in the operating room and preventing sensitive information in the operating room from being captured by the endoscope module.
Type: Grant
Filed: April 14, 2022
Date of Patent: March 5, 2024
Assignee: Verb Surgical Inc.
Inventor: Jagadish Venkataraman

Publication number: 20240058091
Abstract: An imaging system for viewing a surgical site, the imaging system including a system controller configured to: receive and process video images of the surgical site captured by an endoscopic camera coupled to an endoscope to detect at least one video signature corresponding to at least one condition that interferes with a quality of the video images; and in response to detecting the at least one video signature corresponding to the at least one condition that interferes with the quality of the video images, control a fluid system to clean a tip of the endoscope based on at least one learned preference that was learned by the system controller from user action over time.
Type: Application
Filed: July 28, 2023
Publication date: February 22, 2024
Applicant: Stryker Corporation
Inventors: Amit Mahadik, Jagadish Venkataraman, Ramanan Paramasivan, Brad Hunter, Afshin Jila, Kundan Krishna, Hannes Rau

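Sketch only: the variance of a simple Laplacian is used as a stand-in "video signature" for a fogged or soiled lens, and the trigger threshold is nudged toward the blur levels at which the user historically cleaned manually, as a crude analogue of a learned preference. The fluid_system object and its clean_tip() call are hypothetical.

```python
import numpy as np

def blur_score(gray_frame) -> float:
    g = np.asarray(gray_frame, dtype=float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)     # discrete Laplacian
    return float(lap.var())                                    # low variance suggests a degraded image

class TipCleaner:
    def __init__(self, fluid_system, threshold=50.0, learn_rate=0.1):
        self.fluid_system, self.threshold, self.learn_rate = fluid_system, threshold, learn_rate

    def on_manual_clean(self, gray_frame):
        # Learned preference: move the trigger toward the blur level the user acts on.
        self.threshold += self.learn_rate * (blur_score(gray_frame) - self.threshold)

    def on_frame(self, gray_frame):
        if blur_score(gray_frame) < self.threshold:             # signature of a fogged or soiled tip
            self.fluid_system.clean_tip()                       # have the fluid system clean the endoscope tip
```
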
Patent number: 11901065
Abstract: Embodiments described herein provide various examples of a surgical video analysis system for segmenting surgical videos of a given surgical procedure into shorter video segments and labeling/tagging these video segments with multiple categories of machine learning descriptors. In one aspect, a process for processing surgical videos recorded during performed surgeries of a surgical procedure includes the steps of: receiving a diverse set of surgical videos associated with the surgical procedure; receiving a set of predefined phases for the surgical procedure and a set of machine learning descriptors identified for each predefined phase in the set of predefined phases; for each received surgical video, segmenting the surgical video into a set of video segments based on the set of predefined phases; and, for each segment of the surgical video of a given predefined phase, annotating the video segment with a corresponding set of machine learning descriptors for the given predefined phase.
Type: Grant
Filed: November 18, 2021
Date of Patent: February 13, 2024
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Pablo E. Garcia Kilroy

Publication number: 20240006059
Abstract: Embodiments described herein provide various examples of a system for extracting an actual procedure duration composed of actual surgical tool-tissue interactions from an overall procedure duration of a surgical procedure on a patient. In one aspect, the system is configured to obtain the actual procedure duration by: obtaining an overall procedure duration of the surgical procedure; receiving a set of operating room (OR) data from a set of OR data sources collected during the surgical procedure, wherein the set of OR data includes an endoscope video captured during the surgical procedure; analyzing the set of OR data to detect a set of non-surgical events during the surgical procedure that do not involve surgical tool-tissue interactions; extracting a set of durations corresponding to the set of non-surgical events; and determining the actual procedure duration by subtracting the set of extracted durations from the overall procedure duration.
Type: Application
Filed: June 21, 2023
Publication date: January 4, 2024
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy

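The duration arithmetic in this abstract reduces to subtracting non-surgical time from the overall time. A short sketch, assuming the detected non-surgical events arrive as (start_sec, end_sec) intervals; overlapping intervals are merged first so no time is subtracted twice.

```python
def actual_procedure_duration(overall_duration_sec, non_surgical_intervals):
    """non_surgical_intervals: iterable of (start_sec, end_sec) pairs."""
    merged = []
    for start, end in sorted(non_surgical_intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)    # extend the overlapping interval
        else:
            merged.append([start, end])
    non_surgical_time = sum(end - start for start, end in merged)
    return overall_duration_sec - non_surgical_time    # time spent on actual tool-tissue interactions
```
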
Patent number: 11819188
Abstract: Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
Type: Grant
Filed: February 8, 2023
Date of Patent: November 21, 2023
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Denise Ann Miller

Patent number: 11818510
Abstract: Embodiments described herein provide various examples of monitoring adverse events in the background while displaying a higher-resolution surgical video on a lower-resolution display device. In one aspect, a process for detecting adverse events during a surgical procedure can begin by receiving a surgical video. The process then displays a first portion of the video images of the surgical video on a screen to assist a surgeon performing the surgical procedure. While displaying the first portion of the video images, the process uses a set of deep-learning models to monitor a second portion of the video images not being displayed on the screen, wherein each deep-learning model is constructed to detect a given adverse event among a set of adverse events. In response to detecting an adverse event in the second portion of the video images, the process notifies the surgeon of the detected adverse event to prompt an appropriate action.
Type: Grant
Filed: August 8, 2022
Date of Patent: November 14, 2023
Assignee: Verb Surgical Inc.
Inventors: Jagadish Venkataraman, Dave Scott, Eric Johnson

Publication number: 20230289474
Abstract: This patent disclosure provides various verification techniques to ensure that anonymized surgical procedure videos are indeed free of any personally-identifiable information (PII). In a particular aspect, a process for verifying that an anonymized surgical procedure video is free of PII is disclosed. This process can begin by receiving a surgical video corresponding to a surgery. The process next removes personally-identifiable information (PII) from the surgical video to generate an anonymized surgical video. Next, the process selects a set of verification video segments from the anonymized surgical procedure video. The process subsequently determines whether each segment in the set of verification video segments is free of PII. If so, the process replaces the surgical video with the anonymized surgical video for storage. If not, the process performs additional PII removal steps on the anonymized surgical video to generate an updated anonymized surgical procedure video.
Type: Application
Filed: May 18, 2023
Publication date: September 14, 2023
Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy