Patents by Inventor Lav Rai

Lav Rai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160180529
    Abstract: The present invention is a method to register 3D image data with fluoroscopic images of the chest of a patient. The ribs and spine, which are visible in the fluoroscopic images, are analyzed and a rib signature or cost map is generated. The rib signature or cost map is matched to corresponding structures of the 3D image data of the patient. Registration is evaluated by computing a difference between the fluoroscopic image and a virtual fluoroscopic projected image of the 3D data. Related systems are also described.
    Type: Application
    Filed: August 7, 2014
    Publication date: June 23, 2016
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
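
    Illustration (not part of the patent record): the final evaluation step described above, comparing the fluoroscopic image against a virtual fluoroscopic projection of the 3D data, can be sketched in a few lines of Python. The ray-sum projection and normalized mean-squared-difference score below are generic stand-ins, not the claimed rib-signature method, and assume the 3D volume has already been transformed by the candidate registration.

        import numpy as np

        def virtual_fluoro_projection(ct_volume, axis=1):
            # Crude digitally reconstructed radiograph: sum attenuation along one
            # axis. A real system would ray-cast the volume under the C-arm pose.
            return ct_volume.sum(axis=axis)

        def registration_score(fluoro_img, ct_volume):
            # Lower score = better agreement between the measured fluoro image and
            # the virtual projection of the (already registered) 3D data.
            drr = virtual_fluoro_projection(ct_volume)
            # Normalize both images so the comparison ignores gain and offset;
            # assumes the DRR has already been resampled to the fluoro grid.
            f = (fluoro_img - fluoro_img.mean()) / (fluoro_img.std() + 1e-8)
            d = (drr - drr.mean()) / (drr.std() + 1e-8)
            return float(np.mean((f - d) ** 2))
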
  • Patent number: 9265468
    Abstract: A method for assisting a physician in tracking a surgical device in a body organ of a subject during a procedure includes fluoroscopic-based registration and tracking. An initial registration step includes receiving 3D image data of a subject in a first body position, receiving real-time fluoroscopy image data, and estimating a deformation model or field to match points in the real-time fluoro image with corresponding points in the 3D model. A tracking step includes computing the 3D location of the surgical device based on a reference mark present on the surgical device, and displaying the surgical device and the 3D model of the body organ in a fused arrangement.
    Type: Grant
    Filed: May 11, 2011
    Date of Patent: February 23, 2016
    Assignee: BRONCUS MEDICAL, INC.
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
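
    Illustration (not part of the patent record): a minimal sketch of fitting a 2D deformation field from point correspondences, which is one generic way to realize the "deformation model or field" step above. The landmark coordinates are hypothetical, and the thin-plate-spline interpolator from SciPy is a stand-in, not the claimed model.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Hypothetical landmarks detected in the live fluoro image and the matching
        # points of the 3D model projected into the fluoro plane (pixel units).
        fluoro_pts = np.array([[102.0, 310.0], [215.0, 288.0], [330.0, 295.0], [420.0, 410.0]])
        model_pts  = np.array([[100.0, 300.0], [210.0, 280.0], [335.0, 300.0], [415.0, 405.0]])

        # Smooth 2D deformation that carries projected model points onto the
        # observed fluoro points (thin-plate-spline kernel).
        deform = RBFInterpolator(model_pts, fluoro_pts, kernel="thin_plate_spline")

        # Warp any other projected model point into fluoro coordinates.
        print(deform(np.array([[250.0, 320.0]])))
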
  • Publication number: 20150228074
    Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration, respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
    Type: Application
    Filed: April 27, 2015
    Publication date: August 13, 2015
    Applicant: BRONCUS TECHNOLOGIES
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
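
    Illustration (not part of the patent record): a sketch of the rigid-then-deformable pipeline described in this abstract, using SimpleITK. The file names are assumptions, and the demons filter stands in for whatever deformable registration the application actually claims; the point is only that the deformable step yields a per-voxel displacement (motion vector) field whose magnitude and direction can then be reported.

        import SimpleITK as sitk
        import numpy as np

        # Hypothetical file names for the inspiration/expiration CT volumes.
        fixed  = sitk.ReadImage("ct_inspiration.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("ct_expiration.nii.gz", sitk.sitkFloat32)

        # 1) Rigid registration to roughly align the two breath states.
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial, inPlace=False)
        rigid = reg.Execute(fixed, moving)
        moving_rigid = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)

        # 2) Deformable registration (demons used here as a generic stand-in).
        demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
        demons.SetNumberOfIterations(100)
        demons.SetStandardDeviations(2.0)
        displacement = demons.Execute(fixed, moving_rigid)

        # 3) Motion vector field: per-voxel displacement magnitude and direction.
        vec = sitk.GetArrayFromImage(displacement)      # (z, y, x, 3) components
        magnitude = np.linalg.norm(vec, axis=-1)
        direction = vec / (magnitude[..., None] + 1e-8)
        print("peak local lung motion:", float(magnitude.max()))
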
  • Patent number: 9020229
    Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration, respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: April 28, 2015
    Assignee: Broncus Medical, Inc.
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
  • Publication number: 20140221824
    Abstract: A method and system are described for estimating calibration parameters of a medical fluoroscope, and more particularly for automatically determining the intrinsic and distortion correction parameters of a fluoroscopy device.
    Type: Application
    Filed: July 23, 2012
    Publication date: August 7, 2014
    Applicant: Broncus Medical Inc.
    Inventors: Lav Rai, Henky Wibowo
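
    Illustration (not part of the patent record): the publication covers automatic determination of intrinsic and distortion-correction parameters; a generic way to obtain such parameters is standard camera calibration against a planar grid phantom imaged at several C-arm orientations. The sketch below uses OpenCV's circle-grid detector and calibrateCamera; the phantom geometry, bead pitch, and file naming are assumptions.

        import glob
        import cv2
        import numpy as np

        grid_shape = (9, 7)                       # assumed bead grid on the phantom
        pitch_mm = 10.0                           # assumed bead spacing
        obj_grid = np.zeros((grid_shape[0] * grid_shape[1], 3), np.float32)
        obj_grid[:, :2] = np.mgrid[0:grid_shape[0], 0:grid_shape[1]].T.reshape(-1, 2) * pitch_mm

        obj_points, img_points, img_size = [], [], None
        for path in glob.glob("fluoro_grid_*.png"):   # assumed naming of grid shots
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, centers = cv2.findCirclesGrid(img, grid_shape)
            if found:
                obj_points.append(obj_grid)
                img_points.append(centers)
                img_size = img.shape[::-1]

        # Intrinsic matrix K and distortion coefficients (k1, k2, p1, p2, k3).
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, img_size, None, None)
        print("reprojection RMS (px):", rms)
        print("intrinsics:\n", K)
        print("distortion:", dist.ravel())
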
  • Patent number: 8675935
    Abstract: Fast and continuous registration between two imaging modalities makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize video cameras and register the two sources. A set of reference images are computed or captured within a known environment, with corresponding depth maps and image gradients defining a reference source. Given one frame from a real-time or near-real time video feed, and starting from an initial guess of viewpoint, a real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. Steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source.
    Type: Grant
    Filed: November 16, 2011
    Date of Patent: March 18, 2014
    Assignee: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
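
    Illustration (not part of the patent record): a compact sketch of the warp-and-compare loop this family of patents describes, under simplifying assumptions (a pinhole intrinsic matrix K, a single nearest reference view with its depth map, and all warped points staying in front of the camera). SciPy's least_squares stands in for the patent's iterative update; the function and variable names are hypothetical.

        import numpy as np
        from scipy.ndimage import map_coordinates
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def backproject(ref_depth, K):
            # 3D point (reference-camera frame) for every reference pixel.
            h, w = ref_depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
            return rays * ref_depth.ravel()                  # 3 x N

        def warp_video_to_reference(video, ref_depth, K, pose6):
            # Warp the live frame onto the reference grid under a 6-dof pose guess
            # (rotation vector + translation of the video camera w.r.t. the site).
            R = Rotation.from_rotvec(pose6[:3]).as_matrix()
            t = pose6[3:].reshape(3, 1)
            pts = R @ backproject(ref_depth, K) + t          # video-camera frame
            proj = K @ pts
            u, v = proj[0] / proj[2], proj[1] / proj[2]
            # Bilinear sampling of the video frame at the projected locations.
            return map_coordinates(video, [v, u], order=1,
                                   mode="nearest").reshape(ref_depth.shape)

        def register_frame(video, ref_img, ref_depth, K, pose0=np.zeros(6)):
            # Refine the viewpoint until the warped frame matches the reference
            # image; the converged pose is the relative rotation and translation.
            def residual(pose6):
                return (warp_video_to_reference(video, ref_depth, K, pose6)
                        - ref_img).ravel()
            return least_squares(residual, pose0, method="lm").x
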
  • Patent number: 8672836
    Abstract: Methods and apparatus provide continuous guidance of endoscopy during a live procedure. A data-set based on 3D image data is pre-computed including reference information representative of a predefined route through a body organ to a final destination. A plurality of live real endoscopic (RE) images are displayed as an operator maneuvers an endoscope within the body organ. A registration and tracking algorithm registers the data-set to one or more of the RE images and continuously maintains the registration as the endoscope is locally maneuvered. Additional information related to the final destination is then presented enabling the endoscope operator to decide on a final maneuver for the procedure. The reference information may include 3D organ surfaces, 3D routes through an organ system, or 3D regions of interest (ROIs), as well as a virtual endoscopic (VE) image generated from the precomputed data-set.
    Type: Grant
    Filed: January 30, 2008
    Date of Patent: March 18, 2014
    Assignee: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai, Jason D. Gibbs, Kun-Chang Yu
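
    Illustration (not part of the patent record): the guidance system above is naturally organized as a pre-computed data-set plus a live loop that keeps the registration current frame by frame. The class layout, field names, and callbacks below are purely hypothetical; they only show one way the pieces named in the abstract (route, surfaces, ROIs, VE renderer, registration, display) could fit together.

        from dataclasses import dataclass
        from typing import Callable, List, Optional
        import numpy as np

        @dataclass
        class GuidanceDataset:
            # Pre-computed from the 3D image data before the procedure.
            route_sites: List[np.ndarray]              # viewing sites along the planned route
            organ_surfaces: dict                       # 3D organ/airway surface meshes
            rois: List[np.ndarray]                     # 3D regions of interest near the destination
            ve_renderer: Optional[Callable] = None     # renders a virtual endoscopic (VE) image at a pose

        def guidance_loop(dataset, video_source, register, display):
            # Continuously register live real endoscopic (RE) frames to the
            # pre-computed data-set and keep the registration as the scope moves.
            pose = None
            for re_frame in video_source:              # live RE images
                pose = register(re_frame, dataset, initial_pose=pose)
                ve_image = dataset.ve_renderer(pose) if dataset.ve_renderer else None
                display(re_frame, ve_image, dataset.rois, pose)
                # Near the final destination, the display layer would surface the
                # additional ROI information used to decide the final maneuver.
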
  • Publication number: 20130346051
    Abstract: A system and method are described for determining candidate fiducial marker locations in the vicinity of a lesion. Imaging information and data are input or received by the system and candidate marker locations are calculated and displayed to the physician. Additionally, interactive feedback may be provided to the physician for manually selected or identified sites. The physician may thus receive automatic real time feedback for a candidate fiducial marker location and adjust or accept a constellation of fiducial marker locations. 3D renderings of the airway tree, lesion, and marker constellations may be displayed.
    Type: Application
    Filed: June 3, 2013
    Publication date: December 26, 2013
    Inventors: Jason David Gibbs, Lav Rai, Henky Wibowo
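
    Illustration (not part of the patent record): a toy version of ranking candidate fiducial sites near a lesion. The distance threshold, spacing constraint, and greedy selection below are illustrative stand-ins for the system's actual criteria, and the input data are fabricated for the example.

        import numpy as np

        def candidate_marker_sites(airway_points, lesion_centroid, max_dist=30.0,
                                   min_separation=15.0, n_markers=3):
            # Rank airway-centerline points by proximity to the lesion, then pick
            # a constellation greedily subject to a minimum spacing between markers.
            d = np.linalg.norm(airway_points - lesion_centroid, axis=1)
            chosen = []
            for idx in np.argsort(d):
                if d[idx] > max_dist:
                    break
                if all(np.linalg.norm(airway_points[idx] - airway_points[j]) >= min_separation
                       for j in chosen):
                    chosen.append(idx)
                if len(chosen) == n_markers:
                    break
            return airway_points[chosen], d[chosen]

        # Hypothetical centerline points and lesion centroid in CT coordinates (mm).
        airway = np.random.rand(500, 3) * 100.0
        lesion = np.array([55.0, 40.0, 60.0])
        sites, dists = candidate_marker_sites(airway, lesion)
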
  • Patent number: 8468003
    Abstract: A system and method are described for determining candidate fiducial marker locations in the vicinity of a lesion. Imaging information and data are input or received by the system and candidate marker locations are calculated and displayed to the physician. Additionally, interactive feedback may be provided to the physician for manually selected or identified sites. The physician may thus receive automatic real time feedback for a candidate fiducial marker location and adjust or accept a constellation of fiducial marker locations. 3D renderings of the airway tree, lesion, and marker constellations may be displayed.
    Type: Grant
    Filed: August 23, 2010
    Date of Patent: June 18, 2013
    Assignee: Broncus Medical, Inc.
    Inventors: Jason David Gibbs, Lav Rai, Henky Wibowo
  • Publication number: 20120288173
    Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration, respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
    Type: Application
    Filed: May 13, 2011
    Publication date: November 15, 2012
    Applicant: BRONCUS TECHNOLOGIES, INC.
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
  • Publication number: 20120289825
    Abstract: A method and system for assisting a physician in tracking a surgical device in a body organ of a subject during a procedure include fluoroscopic-based registration, tracking, and optimizing a fluoroscopy position. An initial registration step includes receiving 3D image data of a subject in a first body position, receiving real-time fluoroscopy image data, and estimating a deformation model or field to match points in the real-time fluoro image with corresponding points in the 3D model. A tracking step includes computing the 3D location of the surgical device and displaying the surgical device and the 3D model of the body organ in a fused arrangement. Optimizing the fluoroscope camera pose includes computing a candidate camera pose to assist the surgeon in tracking a surgical device based on features of the surgical device, the position of the patient, and mechanical properties or constraints of the fluoroscope.
    Type: Application
    Filed: May 11, 2011
    Publication date: November 15, 2012
    Applicant: BRONCUS TECHNOLOGIES, INC.
    Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
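
    Illustration (not part of the patent record): the "optimizing a fluoroscopy position" step amounts to scoring candidate C-arm poses for how well they would show the device. One simple, generic criterion is to prefer views that look perpendicular to the device axis so the device is minimally foreshortened; the angle ranges and scoring below are assumptions, not the claimed optimization.

        import numpy as np

        def pose_score(view_dir, device_axis):
            # 1.0 when the view is perpendicular to the device axis (no
            # foreshortening), 0.0 when looking straight down the device.
            v = view_dir / np.linalg.norm(view_dir)
            a = device_axis / np.linalg.norm(device_axis)
            return 1.0 - abs(float(np.dot(v, a)))

        def best_candidate_pose(device_axis, lao_rao=(-45, 45), cran_caud=(-30, 30)):
            # Grid-search C-arm angulations within hypothetical mechanical limits.
            best = None
            for lao in range(lao_rao[0], lao_rao[1] + 1, 5):
                for cran in range(cran_caud[0], cran_caud[1] + 1, 5):
                    a, b = np.deg2rad(lao), np.deg2rad(cran)
                    view_dir = np.array([np.sin(a) * np.cos(b),
                                         np.sin(b),
                                         np.cos(a) * np.cos(b)])
                    score = pose_score(view_dir, device_axis)
                    if best is None or score > best[0]:
                        best = (score, lao, cran)
            return best    # (score, LAO/RAO angle, cranial/caudal angle)

        print(best_candidate_pose(np.array([0.2, 0.1, 0.97])))
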
  • Publication number: 20120082351
    Abstract: Fast and continuous registration between two imaging modalities makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize video cameras and register the two sources. A set of reference images are computed or captured within a known environment, with corresponding depth maps and image gradients defining a reference source. Given one frame from a real-time or near-real time video feed, and starting from an initial guess of viewpoint, a real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. Steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source.
    Type: Application
    Filed: November 16, 2011
    Publication date: April 5, 2012
    Applicant: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
  • Patent number: 8064669
    Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
    Type: Grant
    Filed: February 7, 2011
    Date of Patent: November 22, 2011
    Assignee: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
  • Publication number: 20110128352
    Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
    Type: Application
    Filed: February 7, 2011
    Publication date: June 2, 2011
    Applicant: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
  • Patent number: 7889905
    Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
    Type: Grant
    Filed: May 19, 2006
    Date of Patent: February 15, 2011
    Assignee: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
  • Publication number: 20100280365
    Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points between the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
    Type: Application
    Filed: July 12, 2010
    Publication date: November 4, 2010
    Applicant: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
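
    Illustration (not part of the patent record): the abstract's "pose estimation based on linearization of the rotation matrix" can be shown in a generic small-motion form. With matched 3D feature points p (previous frame) and p' (current frame), writing R ~ I + [w]x gives p' - p ~ -[p]x w + t, a linear system in the six unknowns (w, t). The use of pre-matched 3D points is a simplifying assumption for this sketch; the patent works from 2D correspondences across video frames.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def skew(p):
            # Cross-product matrix [p]x such that [p]x q = p x q.
            return np.array([[0.0, -p[2], p[1]],
                             [p[2], 0.0, -p[0]],
                             [-p[1], p[0], 0.0]])

        def estimate_pose_linearized(pts_prev, pts_curr):
            # Coarse step only: stack one 3x6 block per correspondence and solve
            # for (w, t) in a single least-squares shot, then re-orthogonalize.
            rows, rhs = [], []
            for p, q in zip(pts_prev, pts_curr):
                rows.append(np.hstack([-skew(p), np.eye(3)]))    # unknowns [w, t]
                rhs.append(q - p)
            x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
            w, t = x[:3], x[3:]
            R = Rotation.from_rotvec(w).as_matrix()
            return R, t
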
  • Patent number: 7756563
    Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points between the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
    Type: Grant
    Filed: May 19, 2006
    Date of Patent: July 13, 2010
    Assignee: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
  • Publication number: 20080207997
    Abstract: Methods and apparatus provide continuous guidance of endoscopy during a live procedure. A data-set based on 3D image data is pre-computed including reference information representative of a predefined route through a body organ to a final destination. A plurality of live real endoscopic (RE) images are displayed as an operator maneuvers an endoscope within the body organ. A registration and tracking algorithm registers the data-set to one or more of the RE images and continuously maintains the registration as the endoscope is locally maneuvered. Additional information related to the final destination is then presented enabling the endoscope operator to decide on a final maneuver for the procedure. The reference information may include 3D organ surfaces, 3D routes through an organ system, or 3D regions of interest (ROIs), as well as a virtual endoscopic (VE) image generated from the precomputed data-set.
    Type: Application
    Filed: January 30, 2008
    Publication date: August 28, 2008
    Applicant: The Penn State Research Foundation
    Inventors: William E. Higgins, Scott A. Merritt, Lav Rai, Jason D. Gibbs, Kun-Chang Yu
  • Publication number: 20070015997
    Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points between the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
    Type: Application
    Filed: May 19, 2006
    Publication date: January 18, 2007
    Inventors: William Higgins, Scott Merritt, Lav Rai
  • Publication number: 20070013710
    Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
    Type: Application
    Filed: May 19, 2006
    Publication date: January 18, 2007
    Inventors: William Higgins, Scott Merritt, Lav Rai