Patents by Inventor Lav Rai
Lav Rai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160180529
Abstract: The present invention is a method to register 3D image data with fluoroscopic images of the chest of a patient. The ribs and spine, which are visible in the fluoroscopic images, are analyzed and a rib signature or cost map is generated. The rib signature or cost map is matched to corresponding structures of the 3D image data of the patient. Registration is evaluated by computing a difference between the fluoroscopic image and a virtual fluoroscopic projected image of the 3D data. Related systems are also described.
Type: Application
Filed: August 7, 2014
Publication date: June 23, 2016
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
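The registration check this abstract describes, comparing a real fluoroscopic image against a virtual projection of the 3D data, can be sketched as follows. This is a simplified stand-in: a parallel-beam projection and a normalized mean-absolute-difference cost, not the patent's perspective projection or rib cost map; all function names are illustrative.

```python
import numpy as np

def virtual_fluoro(volume, axis=0):
    """Project a 3D attenuation volume into a 2D virtual fluoroscopic
    image by summing along the viewing axis (a parallel-beam stand-in
    for the perspective projection a real fluoroscope performs)."""
    return volume.sum(axis=axis)

def registration_cost(fluoro_image, volume, axis=0):
    """Mean absolute difference between the (intensity-normalized) real
    fluoroscopic image and the virtual projection of the 3D data; a
    lower cost indicates a better registration."""
    proj = virtual_fluoro(volume, axis=axis)
    a = (fluoro_image - fluoro_image.mean()) / (fluoro_image.std() + 1e-9)
    b = (proj - proj.mean()) / (proj.std() + 1e-9)
    return float(np.abs(a - b).mean())
```

In practice the virtual image would be rendered from the estimated fluoroscope pose and the cost minimized over pose parameters; the sketch only shows the evaluation step.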
-
Patent number: 9265468
Abstract: A method for assisting a physician in tracking a surgical device in a body organ of a subject during a procedure includes fluoroscopic-based registration and tracking. An initial registration step includes receiving 3D image data of a subject in a first body position, receiving real-time fluoroscopy image data, and estimating a deformation model or field to match points in the real-time fluoro image with corresponding points in the 3D model. A tracking step includes computing the 3D location of the surgical device based on a reference mark present on the surgical device, and displaying the surgical device and the 3D model of the body organ in a fused arrangement.
Type: Grant
Filed: May 11, 2011
Date of Patent: February 23, 2016
Assignee: BRONCUS MEDICAL, INC.
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
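The deformation-model estimation step in this abstract can be illustrated, under strong simplifying assumptions, with an affine model fitted by least squares between matched 3D model points and their fluoroscopy-derived counterparts. The patent does not specify this model; the functions below are an assumed sketch only.

```python
import numpy as np

def fit_affine_deformation(src, dst):
    """Least-squares affine model mapping 3D model points (src) onto
    matched target points (dst): dst ~ [src | 1] @ M, with M of shape
    (4, 3) holding the linear part and translation."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return M

def apply_deformation(M, pts):
    """Apply the fitted affine model to a new set of 3D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ M
```

A real deformation field would be non-rigid (e.g. spline-based); an affine fit is the simplest model that still captures the match-points-then-map idea.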
-
Publication number: 20150228074
Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
Type: Application
Filed: April 27, 2015
Publication date: August 13, 2015
Applicant: BRONCUS TECHNOLOGIES
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
-
Patent number: 9020229
Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
Type: Grant
Filed: May 13, 2011
Date of Patent: April 28, 2015
Assignee: Broncus Medical, Inc.
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
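Once the deformable registration has matched points between the inspiration and expiration data sets, the motion vector field is simply the per-point displacement, from which magnitude and direction follow directly. A minimal sketch (illustrative names, not the patented pipeline):

```python
import numpy as np

def motion_vector_field(insp_pts, exp_pts):
    """Given lung-tissue points in the inspiration scan and their
    registered positions in the expiration scan, return the motion
    vectors plus their magnitudes and unit directions."""
    vectors = exp_pts - insp_pts
    magnitude = np.linalg.norm(vectors, axis=1)
    # unit directions, guarding against zero-length vectors
    direction = vectors / np.maximum(magnitude[:, None], 1e-12)
    return vectors, magnitude, direction
```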
-
Publication number: 20140221824
Abstract: A method and system for estimating calibration parameters of a medical fluoroscope, and more particularly a method and system which automatically determines intrinsic and distortion correction parameters of a fluoroscopy device.
Type: Application
Filed: July 23, 2012
Publication date: August 7, 2014
Applicant: Broncus Medical Inc.
Inventors: Lav Rai, Henky Wibowo
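Distortion-correction parameter estimation of the kind this application describes can be illustrated with a first-order radial model fitted in closed form; the model `p_d = p_u * (1 + k1 * r^2)` and the function name are assumptions for illustration, not the patented calibration procedure.

```python
import numpy as np

def estimate_k1(ideal, distorted):
    """Estimate the first radial distortion coefficient k1 from matched
    ideal (undistorted) and observed (distorted) normalized image points,
    using the model p_d = p_u * (1 + k1 * r^2) with r = |p_u|.
    The residual p_d - p_u is linear in k1, so one-parameter least
    squares has a closed form."""
    r2 = (ideal ** 2).sum(axis=1)            # squared radius per point
    A = (ideal * r2[:, None]).ravel()        # coefficient of k1
    b = (distorted - ideal).ravel()          # observed displacement
    return float(A @ b / (A @ A))
```

Real fluoroscope calibration also recovers intrinsics (focal length, principal point) and typically higher-order distortion terms; this shows only the smallest piece.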
-
Patent number: 8675935
Abstract: Fast and continuous registration between two imaging modalities makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame rates in order to localize video cameras and register the two sources. A set of reference images is computed or captured within a known environment, with corresponding depth maps and image gradients defining a reference source. Given one frame from a real-time or near real-time video feed, and starting from an initial guess of viewpoint, a real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. Steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source.
Type: Grant
Filed: November 16, 2011
Date of Patent: March 18, 2014
Assignee: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
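The warp-and-compare loop in this abstract can be sketched with a deliberately crude stand-in: a brute-force search over integer 2D translations of the video frame, minimizing the image difference against the reference image. The patented method warps through a full 3D viewpoint model with depth maps; everything below is illustrative.

```python
import numpy as np

def image_difference(a, b):
    """Mean absolute intensity difference between two images."""
    return float(np.abs(a - b).mean())

def register_translation(frame, reference, search=3):
    """Try every integer 2D shift of the frame within +/-search pixels
    (circularly, via np.roll) and keep the shift that minimizes the
    image difference against the reference image."""
    best, best_cost = (0, 0), image_difference(frame, reference)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            warped = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            cost = image_difference(warped, reference)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```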
-
Patent number: 8672836
Abstract: Methods and apparatus provide continuous guidance of endoscopy during a live procedure. A data-set based on 3D image data is pre-computed including reference information representative of a predefined route through a body organ to a final destination. A plurality of live real endoscopic (RE) images are displayed as an operator maneuvers an endoscope within the body organ. A registration and tracking algorithm registers the data-set to one or more of the RE images and continuously maintains the registration as the endoscope is locally maneuvered. Additional information related to the final destination is then presented enabling the endoscope operator to decide on a final maneuver for the procedure. The reference information may include 3D organ surfaces, 3D routes through an organ system, or 3D regions of interest (ROIs), as well as a virtual endoscopic (VE) image generated from the precomputed data-set.
Type: Grant
Filed: January 30, 2008
Date of Patent: March 18, 2014
Assignee: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai, Jason D. Gibbs, Kun-Chang Yu
-
Publication number: 20130346051
Abstract: A system and method are described for determining candidate fiducial marker locations in the vicinity of a lesion. Imaging information and data are input or received by the system and candidate marker locations are calculated and displayed to the physician. Additionally, interactive feedback may be provided to the physician for manually selected or identified sites. The physician may thus receive automatic real time feedback for a candidate fiducial marker location and adjust or accept a constellation of fiducial marker locations. 3D renderings of the airway tree, lesion, and marker constellations may be displayed.
Type: Application
Filed: June 3, 2013
Publication date: December 26, 2013
Inventors: Jason David Gibbs, Lav Rai, Henky Wibowo
-
Patent number: 8468003
Abstract: A system and method are described for determining candidate fiducial marker locations in the vicinity of a lesion. Imaging information and data are input or received by the system and candidate marker locations are calculated and displayed to the physician. Additionally, interactive feedback may be provided to the physician for manually selected or identified sites. The physician may thus receive automatic real time feedback for a candidate fiducial marker location and adjust or accept a constellation of fiducial marker locations. 3D renderings of the airway tree, lesion, and marker constellations may be displayed.
Type: Grant
Filed: August 23, 2010
Date of Patent: June 18, 2013
Assignee: Broncus Medical, Inc.
Inventors: Jason David Gibbs, Lav Rai, Henky Wibowo
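One way candidate fiducial sites "in the vicinity of a lesion" could be computed is by ranking airway-tree points by distance to the lesion center and keeping those within a threshold. This assumes the airway points and lesion center are already extracted from imaging; the function name and the distance cutoff are illustrative, not the patented criteria.

```python
import numpy as np

def candidate_marker_sites(airway_pts, lesion_center, max_dist=20.0):
    """Return airway-tree points within max_dist (mm) of the lesion
    center, sorted nearest-first, along with their distances."""
    d = np.linalg.norm(airway_pts - lesion_center, axis=1)
    order = np.argsort(d)
    keep = order[d[order] <= max_dist]
    return airway_pts[keep], d[keep]
```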
-
Publication number: 20120288173
Abstract: A medical analysis method for estimating a motion vector field of the magnitude and direction of local motion of lung tissue of a subject is described. In one embodiment a first 3D image data set of the lung and a second 3D image data set are obtained. The first and second 3D image data sets correspond to images obtained during inspiration and expiration respectively. A rigid registration is performed to align the 3D image data sets with one another. A deformable registration is performed to match the 3D image data sets with one another. A motion vector field of the magnitude and direction of local motion of lung tissue is estimated based on the deforming step. The motion vector field may be computed prior to treatment to assist with planning a treatment as well as subsequent to a treatment to gauge efficacy of a treatment. Results may be displayed to highlight.
Type: Application
Filed: May 13, 2011
Publication date: November 15, 2012
Applicant: BRONCUS TECHNOLOGIES, INC.
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
-
Publication number: 20120289825
Abstract: A method and system for assisting a physician in tracking a surgical device in a body organ of a subject during a procedure includes fluoroscopic-based registration, tracking, and optimizing a fluoroscopy position. An initial registration step includes receiving 3D image data of a subject in a first body position, receiving real-time fluoroscopy image data, and estimating a deformation model or field to match points in the real-time fluoro image with corresponding points in the 3D model. A tracking step includes computing the 3D location of the surgical device and displaying the surgical device and the 3D model of the body organ in a fused arrangement. Optimizing the fluoroscope camera pose includes computing a candidate camera pose to assist the surgeon in tracking a surgical device based on features of the surgical device, position of the patient, and mechanical properties or constraints of the fluoroscope.
Type: Application
Filed: May 11, 2011
Publication date: November 15, 2012
Applicant: BRONCUS TECHNOLOGIES, INC.
Inventors: Lav Rai, Jason David Gibbs, Henky Wibowo
-
Publication number: 20120082351
Abstract: Fast and continuous registration between two imaging modalities makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame rates in order to localize video cameras and register the two sources. A set of reference images is computed or captured within a known environment, with corresponding depth maps and image gradients defining a reference source. Given one frame from a real-time or near real-time video feed, and starting from an initial guess of viewpoint, a real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. Steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source.
Type: Application
Filed: November 16, 2011
Publication date: April 5, 2012
Applicant: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
-
Patent number: 8064669
Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
Type: Grant
Filed: February 7, 2011
Date of Patent: November 22, 2011
Assignee: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
-
Publication number: 20110128352
Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
Type: Application
Filed: February 7, 2011
Publication date: June 2, 2011
Applicant: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
-
Patent number: 7889905
Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
Type: Grant
Filed: May 19, 2006
Date of Patent: February 15, 2011
Assignee: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
-
Publication number: 20100280365
Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. By giving a set of corresponding points across the current bronchoscopic video image, and the CT-based virtual image as an input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
Type: Application
Filed: July 12, 2010
Publication date: November 4, 2010
Applicant: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
-
Patent number: 7756563
Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. By giving a set of corresponding points across the current bronchoscopic video image, and the CT-based virtual image as an input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
Type: Grant
Filed: May 19, 2006
Date of Patent: July 13, 2010
Assignee: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai
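The "linearization of the rotation matrix" mentioned in this abstract can be sketched for 3D point correspondences: for a small rotation, R ~ I + [w]x, so matched points satisfy Q - P ~ w x P + t, which is linear in the six unknowns (w, t). A minimal least-squares version follows; note the patented method works from image feature points rather than known 3D pairs, so this is an assumed simplification.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix [p]x such that skew(p) @ v == p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def estimate_pose_small_angle(P, Q):
    """Small-angle pose estimate from matched 3D points P -> Q.
    Each correspondence contributes the 3 linear equations
    Q_i - P_i = -[P_i]x w + t; stack them and solve for (w, t)."""
    n = len(P)
    A = np.zeros((3 * n, 6))
    b = (Q - P).ravel()
    for i, p in enumerate(P):
        A[3 * i:3 * i + 3, :3] = -skew(p)   # since w x p == -[p]x w
        A[3 * i:3 * i + 3, 3:] = np.eye(3)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                     # rotation vector w, translation t
```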
-
Publication number: 20080207997
Abstract: Methods and apparatus provide continuous guidance of endoscopy during a live procedure. A data-set based on 3D image data is pre-computed including reference information representative of a predefined route through a body organ to a final destination. A plurality of live real endoscopic (RE) images are displayed as an operator maneuvers an endoscope within the body organ. A registration and tracking algorithm registers the data-set to one or more of the RE images and continuously maintains the registration as the endoscope is locally maneuvered. Additional information related to the final destination is then presented enabling the endoscope operator to decide on a final maneuver for the procedure. The reference information may include 3D organ surfaces, 3D routes through an organ system, or 3D regions of interest (ROIs), as well as a virtual endoscopic (VE) image generated from the precomputed data-set.
Type: Application
Filed: January 30, 2008
Publication date: August 28, 2008
Applicant: The Penn State Research Foundation
Inventors: William E. Higgins, Scott A. Merritt, Lav Rai, Jason D. Gibbs, Kun-Chang Yu
-
Publication number: 20070015997
Abstract: A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. By giving a set of corresponding points across the current bronchoscopic video image, and the CT-based virtual image as an input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop.
Type: Application
Filed: May 19, 2006
Publication date: January 18, 2007
Inventors: William Higgins, Scott Merritt, Lav Rai
-
Publication number: 20070013710
Abstract: A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image.
Type: Application
Filed: May 19, 2006
Publication date: January 18, 2007
Inventors: William Higgins, Scott Merritt, Lav Rai