Patents by Inventor Jagadish Venkataraman

Jagadish Venkataraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10910103
    Abstract: Embodiments described herein provide various examples of a surgical procedure analysis system for extracting an actual procedure duration that involves actual surgical tool-tissue interactions from a total procedure duration of a surgical procedure. In one aspect, the process for extracting the actual procedure duration includes the steps of: obtaining the total procedure duration of the surgical procedure; receiving a set of operating room (OR) data from a set of OR data sources collected during the surgical procedure; analyzing the set of OR data to detect a set of non-surgical events during the surgical procedure that do not involve surgical tool-tissue interactions; extracting a set of durations corresponding to the set of non-surgical events; and determining the actual procedure duration by subtracting the combined set of durations from the total procedure duration.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: February 2, 2021
    Assignee: VERB SURGICAL INC.
    Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy
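    The duration arithmetic described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only; the OR-data analysis that detects the non-surgical events is assumed to have already produced their durations.

    ```python
    from datetime import timedelta

    def actual_procedure_duration(total_duration, non_surgical_durations):
        """Subtract the combined duration of the detected non-surgical events
        from the total procedure duration to recover the time spent on
        actual tool-tissue interactions."""
        combined = sum(non_surgical_durations, timedelta())
        if combined > total_duration:
            raise ValueError("non-surgical durations exceed the total duration")
        return total_duration - combined

    # A 3-hour procedure with 20 minutes of detected non-surgical events.
    actual = actual_procedure_duration(
        timedelta(hours=3),
        [timedelta(minutes=12), timedelta(minutes=8)],
    )
    ```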
  • Publication number: 20210015342
    Abstract: In this patent disclosure, a machine-learning-based system for automatically turning on/off a light source of an endoscope camera during a surgical procedure to ensure the safety of the surgical staff is disclosed. The disclosed system can receive a real-time video image captured by the endoscope camera, wherein the real-time video image is captured either inside the patient's body or outside of the patient's body. The system next processes the real-time video image using a first statistical classifier to classify the real-time video image as either being inside the patient's body or being outside of the patient's body. If the real-time video image is classified as being outside of the patient's body, the system next determines if the light source is turned on. If so, the system generates a control signal to immediately turn off the light source. Otherwise, the system continues receiving and processing real-time video images captured by the endoscope camera.
    Type: Application
    Filed: September 29, 2020
    Publication date: January 21, 2021
    Inventor: Jagadish Venkataraman
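    The control loop in the abstract can be sketched as follows. The classifier below is a hypothetical stand-in (a brightness threshold) for the statistical classifier the disclosure describes.

    ```python
    def control_light(frames, is_inside_body, light_on=True):
        """Stream frames through the classifier; the first time a frame is
        classified as outside the body while the light source is on, emit a
        'turn_off' control signal, then keep processing subsequent frames."""
        actions = []
        for frame in frames:
            if not is_inside_body(frame) and light_on:
                actions.append("turn_off")
                light_on = False
            else:
                actions.append("continue")
        return actions

    # Hypothetical stand-in classifier: treat frames with mean intensity
    # above 0.5 as captured inside the body.
    actions = control_light([0.9, 0.8, 0.2, 0.1], lambda f: f > 0.5)
    ```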
  • Publication number: 20210006752
    Abstract: Embodiments described herein provide various examples of synchronizing the playback of a recorded video of a surgical procedure with a live video feed of a user performing the surgical procedure. In one aspect, a system can simultaneously receive a recorded video of a surgical procedure and a live video feed of a user performing the surgical procedure in a training session. More specifically, the recorded video is shown to the user as a training reference, and the surgical procedure includes a set of surgical tasks. The system next simultaneously monitors the playback of a current surgical task in the set of surgical tasks in the recorded video and the live video feed depicting the user performing the current surgical task. Next, the system detects that the end of the current surgical task has been reached during the playback of the recorded video.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 7, 2021
    Inventors: Pablo Garcia Kilroy, Jagadish Venkataraman
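    One plausible way to realize the synchronization described above is to hold the recorded video at each task boundary until the trainee catches up. This sketch assumes per-task durations have already been extracted from both videos; the pause computation is illustrative, not taken from the disclosure.

    ```python
    def boundary_pauses(reference_durations, trainee_durations):
        """At the end of each surgical task, hold the recorded video until
        the trainee finishes the same task in the live feed. Returns the
        hold time per task boundary (zero when the trainee is faster)."""
        return [max(0.0, live - ref)
                for ref, live in zip(reference_durations, trainee_durations)]

    # Reference recording vs. trainee, task durations in seconds.
    pauses = boundary_pauses([60.0, 120.0, 90.0], [75.0, 110.0, 130.0])
    ```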
  • Publication number: 20200372998
    Abstract: This patent disclosure provides various embodiments of combining multiple modalities of non-text surgical data of different formats, in particular videos, images, and audio, in a meaningful manner so that the combined data from the multiple modalities are compatible with text data. In some embodiments, prior to combining the multiple modalities of surgical data, multiple segmentation engines are used to segment and convert a corresponding modality of surgical data into a corresponding set of metrics and parameters. The multiple sets of metrics and parameters corresponding to the multiple modalities are then combined to generate a combined feature set. The combined feature set can be provided to a data analytics tool for performing comprehensive data analyses on the combined feature set to generate one or more predictions for the surgical procedure.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy
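    The combination step can be sketched as a merge of per-modality metric dictionaries into one flat, text-compatible feature set. The metric names below are hypothetical examples, not taken from the disclosure.

    ```python
    def combine_modalities(metric_sets):
        """Each segmentation engine reduces one non-text modality (video,
        image, audio) to a flat dict of named metrics; prefixing each
        metric with its modality and merging yields a single combined
        feature set that can sit alongside text data."""
        combined = {}
        for modality, metrics in metric_sets.items():
            for name, value in metrics.items():
                combined[f"{modality}.{name}"] = value
        return combined

    features = combine_modalities({
        "video": {"tool_swap_count": 4, "idle_seconds": 38.5},
        "audio": {"alarm_events": 1},
    })
    ```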
  • Publication number: 20200372180
    Abstract: This patent disclosure provides various embodiments for anonymizing raw surgical procedure videos recorded by a recording device, such as an endoscope camera, during a surgical procedure performed on a patient inside an operating room (OR). In one aspect, a process for anonymizing raw surgical procedure videos recorded by a recording device within an OR is disclosed. This process can begin by receiving a set of raw surgical videos corresponding to a surgical procedure performed within the OR. The process next merges the set of raw surgical videos to generate a surgical procedure video corresponding to the surgical procedure. Next, the process detects image-based personally-identifiable information embedded in the video images of the surgical procedure video. When image-based personally-identifiable information is detected, the process automatically de-identifies the detected image-based personally-identifiable information in the surgical procedure video.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy
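    The detect-then-de-identify step can be sketched as masking flagged regions of each frame. The PII detector here is a hypothetical placeholder that flags a fixed overlay region; the real system would supply its own detector.

    ```python
    def mask_regions(frame, boxes, fill=0):
        """Black out each (row, col, height, width) box in a 2-D frame,
        the simplest form of de-identification."""
        for r, c, h, w in boxes:
            for i in range(r, r + h):
                for j in range(c, c + w):
                    frame[i][j] = fill
        return frame

    def anonymize_video(frames, detect_pii):
        """Run a PII detector over every frame of the merged surgical
        video and mask whatever regions it flags."""
        return [mask_regions(frame, detect_pii(frame)) for frame in frames]

    # Hypothetical detector: flags a fixed 1x2 overlay region per frame.
    video = [[[5, 5, 5], [5, 5, 5]] for _ in range(2)]
    clean = anonymize_video(video, lambda frame: [(0, 1, 1, 2)])
    ```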
  • Patent number: 10799090
    Abstract: In this patent disclosure, a machine-learning-based system for automatically turning on/off a light source of an endoscope camera during a surgical procedure is disclosed. The disclosed system can receive a sequence of video images captured by the endoscope camera when the light source is turned on. The system next analyzes the sequence of video images using a machine-learning classifier to classify each video image as either a first class of images captured inside the patient's body or a second class of images captured outside of the patient's body. The system next determines whether the endoscope camera is inside or outside of the patient's body based on the classified video images. When the endoscope camera is determined to be outside of the patient's body, the system generates a control signal for turning off the light source, wherein the control signal is used to immediately turn off the light source.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: October 13, 2020
    Assignee: VERB SURGICAL INC.
    Inventor: Jagadish Venkataraman
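    Unlike the single-frame variant above, this entry aggregates per-frame classifications into one camera-location decision. The abstract does not specify the aggregation rule; a simple majority vote is one plausible sketch.

    ```python
    def camera_outside_body(frame_is_outside, threshold=0.5):
        """Aggregate per-frame classifier outputs (True = outside the
        body) into a single decision; a majority vote stands in for the
        patent's unspecified aggregation step."""
        return sum(frame_is_outside) / len(frame_is_outside) > threshold

    decision = camera_outside_body([True, True, False, True, True])
    ```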
  • Publication number: 20200315707
    Abstract: Embodiments described herein provide various examples of predicting potential current paths from an active electrode of a monopolar electrosurgery tool to a return electrode of the monopolar electrosurgery tool based on analyzing electrical properties of tissues inside a patient's body, and evaluating and eliminating tissue burn risks associated with the predicted current paths. In some embodiments, a current-path-prediction technique is used to predict a set of potential current paths from the active electrode to the return electrode for any given geometrical configuration of the two electrodes on the patient's body. These predicted current paths can then be pictorially displayed on a 3D scan of the patient's body or an endoscopic view of the patient's body and in relation to the display of any existing metal implant inside the patient's body, which allows for visualizing points of tissue burn risks inside the patient's body.
    Type: Application
    Filed: April 2, 2019
    Publication date: October 8, 2020
    Inventor: Jagadish Venkataraman
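    A heavily simplified sketch of the current-path-prediction idea: rank candidate electrode-to-electrode paths by total resistance, since current concentrates along the lowest-resistance routes, which are the burn-risk paths to visualize. The path representation and resistivity values are hypothetical; the disclosure's actual prediction technique is not reproduced here.

    ```python
    def rank_current_paths(candidate_paths, resistivity):
        """Rank candidate paths from the active to the return electrode by
        total resistance. Each path is a list of (tissue_type, length_cm)
        segments; resistivity is a hypothetical ohm-cm lookup per tissue."""
        def resistance(path):
            return sum(resistivity[tissue] * length for tissue, length in path)
        return sorted(candidate_paths, key=resistance)

    paths = [
        [("muscle", 10.0)],
        [("fat", 6.0), ("muscle", 2.0)],
    ]
    ranked = rank_current_paths(paths, {"muscle": 2.0, "fat": 15.0})
    ```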
  • Patent number: 10791301
    Abstract: Embodiments described herein provide various examples of preparing two procedure videos, in particular two surgical procedure videos, for comparative learning. In some embodiments, to allow comparative learning of two recorded surgical videos, each of the two recorded surgical videos is segmented into a sequence of predefined phases/steps. Next, corresponding phases/steps of the two segmented videos are individually time-synchronized in a pair-wise manner so that a given phase/step of one recorded video and the corresponding phase/step of the other recorded video have the same or substantially the same starting time and ending time during comparative playbacks of the two recorded videos.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: September 29, 2020
    Assignee: VERB SURGICAL INC.
    Inventors: Pablo Garcia Kilroy, Jagadish Venkataraman
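    One way to achieve the pair-wise phase synchronization is to compute a per-phase playback-rate multiplier for one video relative to the other. This is an illustrative sketch under the assumption that per-phase durations have already been extracted by the segmentation step.

    ```python
    def phase_playback_rates(phase_durations_a, phase_durations_b):
        """For each pair of corresponding phases, the rate at which video B
        must be played so its phase starts and ends together with video
        A's (rate > 1 means B is played faster than real time)."""
        return [b / a for a, b in zip(phase_durations_a, phase_durations_b)]

    # Phase durations in seconds for the two recordings.
    rates = phase_playback_rates([120.0, 60.0], [180.0, 45.0])
    ```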
  • Publication number: 20200303065
    Abstract: Embodiments described herein provide various examples of automatically processing surgical videos to detect surgical tools and tool-related events, and extract surgical-tool usage information. In one aspect, a process for automatically tracking usages of robotic surgery tools is disclosed. This process can begin by receiving a surgical video captured during a robotic surgery. The process then processes the surgical video to detect a surgical tool in the surgical video. Next, the process determines whether the detected surgical tool has been engaged in the robotic surgery. If so, the process further determines whether the detected surgical tool is engaged for a first time in the robotic surgery. If the detected surgical tool is engaged for the first time, the process subsequently increments a total-engagement count of the detected surgical tool. Otherwise, the process continues monitoring the detected surgical tool in the surgical video.
    Type: Application
    Filed: June 5, 2020
    Publication date: September 24, 2020
    Inventor: Jagadish Venkataraman
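    The engagement-counting logic above can be sketched as a per-tool state machine: a tool's count is incremented only on a not-engaged-to-engaged transition, so an ongoing engagement is counted once. The observation format is a simplifying assumption.

    ```python
    def tally_engagements(observations):
        """observations: per-frame (tool, engaged) pairs from the
        detector. Increment a tool's total-engagement count only when it
        transitions from not-engaged to engaged."""
        counts, active = {}, set()
        for tool, engaged in observations:
            if engaged and tool not in active:
                counts[tool] = counts.get(tool, 0) + 1
                active.add(tool)
            elif not engaged:
                active.discard(tool)
        return counts

    counts = tally_engagements([
        ("scissors", True), ("scissors", True),
        ("scissors", False), ("scissors", True),
    ])
    ```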
  • Publication number: 20200304753
    Abstract: Embodiments described herein provide various examples of displaying video images of a surgical video captured at a first resolution on a screen of a surgical system having a second resolution lower than the first resolution. In one aspect, a process begins by receiving the surgical video and selecting a first portion of the video images having the same or substantially the same resolution as the second resolution. The process subsequently displays the first portion of the video images on the screen. While displaying the first portion of the video images, the process monitors a second portion of the video images not being displayed on the screen for a set of predetermined events, wherein the second portion is not visible to the user. When a predetermined event in the set of predetermined events is detected in the second portion, the process generates an alert to notify the user.
    Type: Application
    Filed: March 21, 2019
    Publication date: September 24, 2020
    Inventors: Jagadish Venkataraman, David D. Scott, Eric Johnson
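    The off-screen monitoring step can be sketched as a viewport test over event detections on the full-resolution frame. The coordinate convention and event names below are illustrative assumptions.

    ```python
    def offscreen_alerts(detections, viewport):
        """detections: (x, y, name) events found on the full-resolution
        frame; viewport: (x0, y0, x1, y1) region actually shown on the
        lower-resolution screen. Any predetermined event detected outside
        the viewport raises an alert for the user."""
        x0, y0, x1, y1 = viewport
        return [name for x, y, name in detections
                if not (x0 <= x < x1 and y0 <= y < y1)]

    alerts = offscreen_alerts([(100, 50, "bleeding"), (600, 40, "smoke")],
                              (0, 0, 480, 270))
    ```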
  • Publication number: 20200194111
    Abstract: Embodiments described herein provide various examples of a surgical procedure analysis system for extracting an actual procedure duration that involves actual surgical tool-tissue interactions from a total procedure duration of a surgical procedure. In one aspect, the process for extracting the actual procedure duration includes the steps of: obtaining the total procedure duration of the surgical procedure; receiving a set of operating room (OR) data from a set of OR data sources collected during the surgical procedure; analyzing the set of OR data to detect a set of non-surgical events during the surgical procedure that do not involve surgical tool-tissue interactions; extracting a set of durations corresponding to the set of non-surgical events; and determining the actual procedure duration by subtracting the combined set of durations from the total procedure duration.
    Type: Application
    Filed: December 14, 2018
    Publication date: June 18, 2020
    Inventors: Jagadish Venkataraman, Pablo Garcia Kilroy
  • Patent number: 10679743
    Abstract: Embodiments described herein provide various examples of automatically processing surgical videos to detect surgical tools and tool-related events, and extract surgical-tool usage information. In one aspect, a process for automatically detecting a new surgical tool engagement during a recorded surgical procedure is disclosed. This process can begin by receiving a surgical procedure video and then segmenting the surgical video into sequences of video frames. Next, for each sequence of video frames, the video frames are processed to detect one or more surgical tools and one or more surgical tool engagements associated with the detected surgical tools. If a surgical tool engagement is detected in the sequence of video frames, the process then determines if a detected surgical tool associated with the detected surgical tool engagement is associated with a previously identified surgical tool engagement. If not, the process identifies the detected surgical tool engagement as a new surgical tool engagement.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: June 9, 2020
    Assignee: VERB SURGICAL INC.
    Inventor: Jagadish Venkataraman
  • Publication number: 20200082934
    Abstract: Embodiments described herein provide various examples of automatically processing surgical videos to detect surgical tools and tool-related events, and extract surgical-tool usage information. In one aspect, a process for automatically detecting a new surgical tool engagement during a recorded surgical procedure is disclosed. This process can begin by receiving a surgical procedure video and then segmenting the surgical video into sequences of video frames. Next, for each sequence of video frames, the video frames are processed to detect one or more surgical tools and one or more surgical tool engagements associated with the detected surgical tools. If a surgical tool engagement is detected in the sequence of video frames, the process then determines if a detected surgical tool associated with the detected surgical tool engagement is associated with a previously identified surgical tool engagement. If not, the process identifies the detected surgical tool engagement as a new surgical tool engagement.
    Type: Application
    Filed: September 12, 2018
    Publication date: March 12, 2020
    Inventor: Jagadish Venkataraman
  • Publication number: 20200078123
    Abstract: Embodiments described herein provide various examples of a visual-haptic feedback system for generating a haptic feedback signal based on captured endoscopy images. In one aspect, the process for generating the haptic feedback signal includes the steps of: receiving an endoscopic video captured for a surgical procedure performed on a robotic surgical system; detecting a surgical task in the endoscopic video involving a given type of surgical tool-tissue interaction; selecting a machine learning model constructed for analyzing the given type of surgical tool-tissue interaction; for a video image associated with the detected surgical task depicting the given type of surgical tool-tissue interaction, applying the selected machine learning model to the video image to predict a strength level of the depicted surgical tool-tissue interaction; and then providing the predicted strength level to a surgeon performing the surgical task as a haptic feedback signal for the given type of surgical tool-tissue interaction.
    Type: Application
    Filed: July 15, 2019
    Publication date: March 12, 2020
    Inventors: Jagadish Venkataraman, Denise Ann Miller
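    The select-predict-quantize flow above can be sketched as follows. The per-interaction models and the 1-5 haptic scale are hypothetical stand-ins; the disclosure does not specify how strength maps to the feedback signal.

    ```python
    def haptic_level(models, interaction_type, frame_features, levels=5):
        """Select the model built for the detected tool-tissue interaction
        type, predict a strength in [0, 1], and quantize it to a discrete
        haptic feedback level (hypothetical 1..levels scale)."""
        strength = models[interaction_type](frame_features)
        return max(1, min(levels, round(strength * levels)))

    # Hypothetical per-interaction models returning a strength in [0, 1].
    models = {"grasp": lambda feats: 0.62, "retract": lambda feats: 0.1}
    level = haptic_level(models, "grasp", frame_features=None)
    ```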
  • Publication number: 20190362834
    Abstract: Embodiments described herein provide various examples of a surgical video analysis system for segmenting surgical videos of a given surgical procedure into shorter video segments and labeling/tagging these video segments with multiple categories of machine learning descriptors. In one aspect, a process for processing surgical videos recorded during performed surgeries of a surgical procedure includes the steps of: receiving a diverse set of surgical videos associated with the surgical procedure; receiving a set of predefined phases for the surgical procedure and a set of machine learning descriptors identified for each predefined phase in the set of predefined phases; for each received surgical video, segmenting the surgical video into a set of video segments based on the set of predefined phases; and, for each video segment of a given predefined phase, annotating the video segment with the corresponding set of machine learning descriptors for the given predefined phase.
    Type: Application
    Filed: May 23, 2018
    Publication date: November 28, 2019
    Inventors: Jagadish Venkataraman, Pablo E. Garcia Kilroy
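    The segment-and-annotate step can be sketched by grouping consecutive frames with the same phase label and attaching that phase's descriptors to each group. Phase names and descriptors below are hypothetical examples.

    ```python
    def segment_and_annotate(frame_phases, descriptors_by_phase):
        """Group consecutive frames sharing a phase label into segments
        and tag each segment with that phase's ML descriptors."""
        segments = []
        for i, phase in enumerate(frame_phases):
            if segments and segments[-1]["phase"] == phase:
                segments[-1]["end"] = i
            else:
                segments.append({"phase": phase, "start": i, "end": i,
                                 "descriptors": descriptors_by_phase[phase]})
        return segments

    segments = segment_and_annotate(
        ["access", "access", "dissection", "dissection", "closure"],
        {"access": ["trocar"], "dissection": ["vessel", "plane"],
         "closure": ["suture"]},
    )
    ```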
  • Patent number: 10383694
    Abstract: Embodiments described herein provide various examples of a visual-haptic feedback system for generating a haptic feedback signal based on captured endoscopy images. In one aspect, the process for generating the haptic feedback signal includes the steps of: receiving an endoscopic video captured for a surgical procedure performed on a robotic surgical system; detecting a surgical task in the endoscopic video involving a given type of surgical tool-tissue interaction; selecting a machine learning model constructed for analyzing the given type of surgical tool-tissue interaction; for a video image associated with the detected surgical task depicting the given type of surgical tool-tissue interaction, applying the selected machine learning model to the video image to predict a strength level of the depicted surgical tool-tissue interaction; and then providing the predicted strength level to a surgeon performing the surgical task as a haptic feedback signal for the given type of surgical tool-tissue interaction.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: August 20, 2019
    Assignees: JOHNSON & JOHNSON INNOVATION—JJDC, INC., VERILY LIFE SCIENCES LLC
    Inventors: Jagadish Venkataraman, Denise Ann Miller
  • Patent number: 10203397
    Abstract: Devices, systems, and methods for improving performance in positioning systems. Performance may be improved using disclosed signal processing methods for separating eigenvalues corresponding to noise and eigenvalues corresponding to one or more direct path signal components or multipath signal components.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: February 12, 2019
    Assignee: NextNav, LLC
    Inventors: Andrew Sendonaris, Norman F. Krasner, Jagadish Venkataraman, Chen Meng
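    The subspace-separation idea can be sketched as partitioning a correlation matrix's eigenvalues against an estimated noise floor. This is only a thresholding illustration; the patent's actual separation criterion is not reproduced here.

    ```python
    def split_eigenvalues(eigenvalues, noise_floor):
        """Partition eigenvalues of a signal correlation matrix into a
        signal subspace (direct-path and multipath components) and a
        noise subspace by thresholding against a noise-floor estimate."""
        signal = sorted((v for v in eigenvalues if v > noise_floor),
                        reverse=True)
        noise = sorted(v for v in eigenvalues if v <= noise_floor)
        return signal, noise

    signal, noise = split_eigenvalues([9.1, 0.2, 3.4, 0.1, 0.3],
                                      noise_floor=1.0)
    ```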
  • Patent number: 10194269
    Abstract: Estimating the position of a receiver using positioning signals and Doppler frequency measurements. Approaches for estimating the position of a receiver using positioning signals and Doppler frequency shift measurements determine an initial estimate of a receiver's position using ranging signals from a first system, generate Doppler frequency shift measurements using the Doppler positioning signals from a second system, and refine the initial estimate using the Doppler frequency shift measurements.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: January 29, 2019
    Assignee: NextNav, LLC
    Inventors: Jagadish Venkataraman, Chen Meng
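    The two-stage estimate-then-refine flow can be sketched in one dimension: take the range-based initial estimate and search its neighborhood for the point that minimizes the Doppler residuals. The brute-force search and the residual function are illustrative stand-ins for the patent's refinement step.

    ```python
    def refine_position(initial, doppler_residual, search_radius=50.0, step=1.0):
        """Refine an initial range-based position estimate by searching a
        small neighborhood for the point minimizing the summed squared
        Doppler frequency-shift residuals."""
        best, best_cost = initial, doppler_residual(initial)
        x = initial - search_radius
        while x <= initial + search_radius:
            cost = doppler_residual(x)
            if cost < best_cost:
                best, best_cost = x, cost
            x += step
        return best

    # Hypothetical residual whose minimum sits at x = 112.
    refined = refine_position(100.0, lambda x: (x - 112.0) ** 2)
    ```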
  • Patent number: 10175364
    Abstract: Systems and methods for estimating whether a receiver is indoors or outdoors. Certain approaches evaluate data associated with a network of beacons to determine whether the receiver is indoors or outdoors. Such evaluation may include any of determining whether azimuthal angles corresponding to the beacons meet an azimuthal angle condition, determining whether elevation angles corresponding to the beacons meet an elevation angle condition, determining whether signal strengths corresponding to the beacons meet a signal strength condition, and determining whether other measurements associated with the beacons meet other measurement conditions.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: January 8, 2019
    Assignee: NextNav, LLC
    Inventors: Jagadish Venkataraman, Ganesh Pattabiraman
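    A rule-based sketch of the indoor/outdoor decision: being indoors is suggested when most visible beacons are seen at high elevation angles (signals arriving over rooftops) and with weak signal strength. The thresholds and voting rule are illustrative, not taken from the patent.

    ```python
    def likely_indoors(beacons, elev_deg=30.0, rssi_dbm=-90.0):
        """Combine per-beacon elevation-angle and signal-strength
        conditions into an indoor/outdoor estimate via majority vote."""
        n = len(beacons)
        high_elevation = sum(b["elevation"] > elev_deg for b in beacons)
        weak_signal = sum(b["rssi"] < rssi_dbm for b in beacons)
        return high_elevation / n > 0.5 and weak_signal / n > 0.5

    beacons = [{"elevation": 55.0, "rssi": -97.0},
               {"elevation": 48.0, "rssi": -95.0},
               {"elevation": 12.0, "rssi": -70.0}]
    indoors = likely_indoors(beacons)
    ```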
  • Publication number: 20180271615
    Abstract: A surgical system for providing an improved video image of a surgical site. A system controller receives and processes video images from a video capturing device to determine a video signature corresponding to a condition that interferes with the quality of the video images, and automatically controls a video enhancer to enhance the video images. The surgical system can also monitor the video images for a trigger event and automatically begin or stop recording of the video images upon occurrence of the trigger event.
    Type: Application
    Filed: March 21, 2018
    Publication date: September 27, 2018
    Inventors: Amit Mahadik, Jagadish Venkataraman, Ramanan Paramasivan, Brad Hunter, Afshin Jila, Kundan Krishna, Hannes Rau
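    The signature-detection decision above can be sketched as a threshold test over per-frame statistics. The statistics, thresholds, and condition names here are hypothetical examples of a quality-degrading video signature.

    ```python
    def needs_enhancement(frame_stats, thresholds):
        """Flag a frame for enhancement when its statistics match a
        signature of a quality-degrading condition (hypothetical example:
        low contrast or heavy haze/smoke)."""
        return (frame_stats["contrast"] < thresholds["contrast"]
                or frame_stats["haze"] > thresholds["haze"])

    thresholds = {"contrast": 0.3, "haze": 0.6}
    flag = needs_enhancement({"contrast": 0.2, "haze": 0.1}, thresholds)
    ```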