SYSTEM AND METHOD FOR OCT DEPTH CALIBRATION

- VOLCANO CORPORATION

The invention generally relates to methods for calibrating an OCT image by comparing a detected location of a feature within the image to an expected location of the feature, calculating a calibration value, and transforming the location of pixels within the image to provide a calibrated image.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of, and priority to, U.S. Provisional Patent Application No. 61/778,761, filed Mar. 13, 2013, which is incorporated by reference.

FIELD OF THE INVENTION

The invention relates to methods for calibrating optical coherence tomography images.

BACKGROUND

Intravascular optical coherence tomography (OCT) systems operate via a rotating catheter that is introduced into a patient's blood vessels to image the vessel walls to detect atherosclerotic plaque and other life-threatening conditions. The catheter takes a picture by shooting beams of light as it rotates and translates within the vessel and detecting light that is reflected back. The light is detected in the form of a series of lines arranged in a helix. A computer processor can digitize the information and arrange it for presentation on a screen.

One problem in OCT is calibration. The lines are treated as if they originate at the center of the fiber optic catheter, but this is based on assumptions about the path length of light through the catheter. The catheter is, in fact, subject to stresses that can distort the path length during imaging. Thus, if the catheter stretches by a millimeter during imaging, the radius to a point on each line is artificially shrunk by a millimeter. If a diameter is measured on that image, the diameter will be wrong by two millimeters.

Unfortunately, the fiber optic catheter is within a patient during imaging and the imaging proceeds rapidly, so it is not realistic for an attendant to stand by and measure stretching or distortion of the catheter. Since one measure of the severity of atherosclerosis is the cross-sectional area of open space within a blood vessel, path length changes in OCT imaging hardware interfere with accurately evaluating the severity of a patient's condition.

SUMMARY

The invention provides systems and methods for calibrating OCT images by detecting a feature within the image, where a position of the feature within the actual patient is known. For example, a surface of an imaging catheter can be detected automatically within the OCT image. If the diameter of the catheter is known a priori, the detected location can be compared to the expected location to determine a path length distortion. The image can be transformed to correct for the path length distortion, thereby repositioning all of the pixels into locations that accurately depict the bodily tissue. This provides a calibrated image in which apparent dimensions in the image display (e.g., as seen on a computer monitor) can be used to faithfully determine actual dimensions of medically significant features within a patient's body. Thus OCT imaging can be used to evaluate the severity of atherosclerosis by, for example, revealing the actual space through a blood vessel through which blood may flow.

In certain aspects, the invention provides a method of calibrating an imaging system by obtaining an image that includes a target and a reference item, detecting a location of the reference item within the image, determining a y-value of the location, comparing the determined y-value to a stored reference y-value, calculating a calibration value based on the comparison, and providing a calibrated image by shifting pixels in a y-direction according to the calibration value. The imaging system may be an optical coherence tomography system. The reference item may be an image of a catheter sheath. The detecting step may include a morphological image processing operation. The method may include digitally transforming image data to provide the calibrated image. Detecting the location may include peak searching, correlation, thresholding, or performing a pattern recognition algorithm. The method may involve obtaining an a priori range over which a path length displacement may occur. Determining the y-value may include averaging across the image. Calculating the calibration value may include taking a difference between the determined y-value and the stored reference y-value.

In some embodiments, a feature is also removed (cropped) from the image. For example, the detected reference item may be removed from the image. Removing may be by cropping out the feature across its extent in the y direction at all x positions. Cropping or removing a feature may include calculating an average value of pixels proximal to the cropped region and inserting the average values to replace the cropped pixels.

In certain embodiments, any or all of the foregoing steps are performed using a computer system comprising a tangible, non-transitory memory coupled to a processor. In related aspects, the invention provides a system for providing a calibrated image, in which the system includes a memory coupled to a processor, the system being operable to perform any of the foregoing steps.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows use of an imaging system according to certain embodiments.

FIG. 2 is a diagram of components of an OCT system.

FIG. 3 diagrams components within a patient interface module (PIM).

FIG. 4 shows the structure of a PIM according to certain embodiments.

FIG. 5 is a diagram of components in an imaging engine.

FIG. 6 is a diagram of an interferometer for use with systems of certain embodiments.

FIG. 7 illustrates a segment of a blood vessel.

FIG. 8 illustrates a section of the blood vessel.

FIG. 9 shows the motion of parts of an imaging catheter.

FIG. 10 shows an array of A scan lines of a three-dimensional imaging system.

FIG. 11 shows the positioning of A scans within a vessel.

FIG. 12 shows a B-scan.

FIG. 13 shows a tomographic view based on the B-scan of FIG. 12.

FIG. 14 illustrates a set of A scans used to compose a tomographic view.

FIG. 15 shows the set of A scans shown in FIG. 14 within a cross section of a vessel.

FIG. 16 shows a longitudinal plane through a vessel including several A scans.

FIG. 17 is a perspective view of an image longitudinal display (ILD) of FIG. 16.

FIG. 18 is a display providing an image of the vessel shown in FIGS. 7 and 8.

FIG. 19 illustrates receiving user input indicating a point within an image.

FIG. 20 shows an area around a point to be searched.

FIG. 21 shows a calibrated B-scan.

FIG. 22 illustrates components of a system according to certain embodiments.

DETAILED DESCRIPTION

The invention provides systems and methods for calibrating an imaging system. Systems and methods of the invention have application in imaging systems that require calibration to provide a scale. Exemplary systems include imaging and sensing systems based on principles of rotational imaging. In some embodiments, systems and applications contemplated for use with the invention include optical coherence tomography (OCT).

In OCT systems, a light source is used to provide a beam of coherent light. The light source can include an optical gain medium (e.g., laser or optical amplifier) to produce coherent light by stimulated emission. In some embodiments, the gain medium is provided by a semiconductor optical amplifier. A light source may further include other components, such as a tunable filter that allows a user to select a wavelength of light to be amplified. Wavelengths commonly used in medical applications include near-infrared light, for example between about 800 nm and about 1700 nm.

Generally, there are two types of OCT systems, common beam path systems and differential beam path systems, that differ from each other based upon the optical layout of the systems. A common beam path system sends all produced light through a single optical fiber to generate a reference signal and a sample signal whereas a differential beam path system splits the produced light such that a portion of the light is directed to the sample and the other portion is directed to a reference surface. Common beam path systems are further described for example in U.S. Pat. No. 7,999,938; U.S. Pat. No. 7,995,210; and U.S. Pat. No. 7,787,127, the contents of each of which are incorporated by reference herein in their entirety.

In a differential beam path system, the coherent light from the light source is input into an interferometer and split into a reference path and a sample path. The sample path is directed to the target and used to image the target. Reflections from the sample path are joined with the reference path and the combination of the reference-path light and the sample-path light produces interference patterns in the resulting light. The light, and thus the patterns, are converted to electric signals, which are then analyzed to produce depth-resolved images of the target tissue on a micron scale. Exemplary differential beam path interferometers are Mach-Zehnder interferometers and Michelson interferometers. Differential beam path interferometers are further described for example in U.S. Pat. No. 7,783,337; U.S. Pat. No. 6,134,003; and U.S. Pat. No. 6,421,164, the contents of each of which are incorporated by reference herein in their entirety.

Commercially available OCT systems are employed in diverse applications, including art conservation and diagnostic medicine, notably in ophthalmology where OCT can be used to obtain detailed images from within the retina. The detailed images of the retina allow one to identify diseases and trauma of the eye. Other applications of imaging systems of the invention include, for example, dermatology (e.g., to image subsurface structural and blood flow formations), dentistry (to image teeth and gum line), gastroenterology (e.g., to image the gastrointestinal tract to detect polyps and inflammation), and cancer diagnostics (for example, to discriminate between malignant and normal tissue).

In certain embodiments, systems and methods of the invention image within a lumen of tissue. Various lumens of biological structures may be imaged including, for example, blood vessels, including, but not limited to, vasculature of the lymphatic and nervous systems, various structures of the gastrointestinal tract including lumen of the small intestine, large intestine, stomach, esophagus, colon, pancreatic duct, bile duct, hepatic duct, lumen of the reproductive tract including the vas deferens, vagina, uterus and fallopian tubes, structures of the urinary tract including urinary collecting ducts, renal tubules, ureter, and bladder, and structures of the head and neck and pulmonary system including sinuses, parotid, trachea, bronchi, and lungs. Systems and methods of the invention have particular applicability in imaging veins and arteries such as, for example, the arteries of the heart. Since an OCT system can be calibrated to provide scale information, intravascular OCT imaging of the coronary arteries can reveal plaque build-up over time, changes in dimensions of features, and progress of thrombotic elements. The accumulation of plaque within the artery wall over decades is the setup for vulnerable plaque which, in turn, leads to heart attack and stenosis (narrowing) of the artery. OCT images, if scaled or calibrated, are useful in determining both plaque volume within the wall of the artery and/or the degree of stenosis of the artery lumen. Intravascular OCT can also be used to assess the effects of treatments of stenosis such as hydraulic angioplasty expansion of the artery, with or without stents, and the results of medical therapy over time.

FIG. 1 depicts the use of an exemplary intravascular OCT system 801. A physician controls an imaging catheter 826 through use of a handheld patient interface module (PIM) 839 to collect image data from a patient. Image data collected through catheter 826 is transmitted by PIM cable 841 to an imaging engine 859, which can be, for example, housed within a bedside unit or in a nearby computer installation or in a server rack coupled via networking technologies. As shown in FIG. 1, an OCT system can further include a workstation 433 (e.g., a monitor, keyboard, and mouse).

FIG. 2 gives a block diagram of components of OCT system 801. Imaging engine 859 is coupled to PIM 839 via PIM cable 841. Imaging catheter 826 extends from PIM 839 to the site of imaging. Engine cable 845 connects imaging engine 859 to host workstation 433. OCT is discussed in U.S. Pat. No. 8,108,030; U.S. Pub. 2011/0152771; U.S. Pub. 2010/0220334; U.S. Pub. 2009/0043191; U.S. Pub. 2008/0291463; and U.S. Pub. 2008/0180683, the contents of each of which are incorporated by reference in their entirety for all purposes. In certain embodiments, systems and methods of the invention include processing hardware configured to interact with more than one different three dimensional imaging system so that the tissue imaging devices and methods described herein can be alternatively used with OCT, IVUS, or other hardware.

As shown in FIG. 1, an operator controls imaging catheter 826 via handheld PIM 839. PIM 839 may include controls such as knobs or buttons to start or stop operation, set or vary speed or displacement, or otherwise control the imaging operation. PIM 839 further includes hardware for operating the imaging catheter.

FIG. 3 shows components of PIM 839. Catheter 826 is mounted to PIM 839 via a catheter receptacle 869. Spin motor 861 is provided to rotate catheter 826 and pullback motor 865 is provided to drive lateral translation of catheter 826. Also depicted is a keypad for input/output, a fiber-optic rotary joint (iFORj), a printed circuit board assembly (PCBA), and optional RFID components.

FIG. 4 gives a perspective view of PIM 839 with a keypad cover removed. Spin motor 861 is provided to rotate catheter 826 and pullback motor 865 causes lateral translation. Optical signals, electrical signals, or both arrive at PIM 839 via PIM cable 841. PIM cable 841 extends to imaging engine 859 as shown in FIG. 2.

FIG. 5 shows components of imaging engine 859. As shown in FIG. 5, the imaging engine 859 (e.g., a bedside unit) houses a power distribution board 849, light source 827, interferometer 831, and variable delay line 835 as well as a data acquisition (DAQ) board 855 and optical controller board (OCB) 851.

Light source 827, as discussed above, may use a laser or an optical amplifier as a source of coherent light. Coherent light is transmitted to interferometer 831.

FIG. 6 shows a path of light through interferometer 831 during OCT imaging. Coherent light for image capture originates within the light source 827. This light is split between an OCT interferometer 905 and an auxiliary, or “clock”, interferometer 911. Light directed to the OCT interferometer is further split by splitter 917 and recombined by splitter 919 with an asymmetric split ratio. The majority of the light is guided into the sample path 913 and the remainder into a reference path 915. The sample path includes optical fibers running through the PIM 839 and the imaging catheter 826 and terminating at the distal end of the imaging catheter where the image is captured.

An image is captured by introducing imaging catheter 826 into a target within a patient, such as a lumen of a blood vessel. This can be accomplished by using standard interventional techniques and tools such as a guide wire, guide catheter, or angiography system. Suitable imaging catheters and their use are discussed in U.S. Pat. No. 8,116,605 and U.S. Pat. No. 7,711,413, the contents of which are incorporated by reference in their entirety for all purposes.

FIG. 7 provides an illustration of a segment of a vessel 101 having a feature 113 of interest. FIG. 8 shows a cross-section of vessel 101 through feature 113. In certain embodiments, intravascular imaging involves positioning imaging catheter 826 within vessel 101 near feature 113 and collecting data to provide a three-dimensional image. Data can be collected in three dimensions by rotating catheter 826 around a catheter axis to collect image data in radial directions around the catheter while also translating catheter 826 along the catheter axis. As a result of combined rotation and translation, catheter 826 collects image data from a series of scan lines (each referred to as an A-scan line, or A-scan) disposed in a helical array.

FIG. 9 shows the motion of parts of an imaging catheter according to certain embodiments of the invention. Rotation of imaging catheter 826 around axis 117 is driven by spin motor 861 while translation along axis 117 is driven by pullback motor 865, as discussed above with reference to FIG. 4. An imaging tip of catheter 826 generally follows helical trace 119, resulting in a motion for image capture described by FIG. 9. Blood in the vessel is temporarily flushed with a clear solution for imaging. When operation is triggered from PIM 839 or a control console, the imaging core of catheter 826 rotates while collecting image data, which data is delivered to the imaging system.

FIG. 10 illustrates the helical array of A-scan lines A11, A12, . . . , AN captured by the imaging operation.

FIG. 11 is provided to show the positioning of A-scans A11, A12, . . . , AN within vessel 101. At each place where one of A-scans A11, A12, . . . , AN intersects a surface of a feature within vessel 101 (e.g., a vessel wall), coherent light is reflected and detected. Catheter 826 translates along axis 117, pushed or pulled by pullback motor 865.

Looking back at FIG. 6, the reflected, detected light is transmitted along sample path 913 to be recombined with the light from reference path 915 at splitter 919. Calibration of the system relates to a length of sample path 913 compared to a length of reference path 915. The difference between these lengths is referred to as the z-offset; when the paths are the same length, the z-offset is zero and the system is said to be calibrated. Calibration will be discussed in more detail below. Z-offset is discussed in U.S. Pat. No. 8,116,605, the contents of which are hereby incorporated by reference in their entirety for all purposes.

After combining light from the sample and reference paths, the combined light from splitter 919 is split into orthogonal polarization states, resulting in RF-band polarization-diverse temporal interference fringe signals. The interference fringe signals are converted to photocurrents using PIN photodiodes 929a, 929b, . . . on the OCB 851 as shown in FIG. 6. The interfering, polarization splitting, and detection steps are done by a polarization diversity module (PDM) on the OCB. Signal from the OCB is sent to the DAQ 855, shown in FIG. 5. The DAQ includes a digital signal processing (DSP) microprocessor and a field programmable gate array (FPGA) to digitize signals and communicate with the host workstation and the PIM. The FPGA converts raw optical interference signals into meaningful OCT images. The DAQ also compresses data as necessary to reduce image transfer bandwidth to 1 gigabit per second (Gbps) (e.g., compressing frames with a lossy compression JPEG encoder).

Data is collected from A-scans A11, A12, . . . , AN, as shown in FIG. 11, and stored in a tangible, non-transitory memory. A set of A-scans captured in a helical pattern during a rotation and pullback event can be collected and viewed alongside one another in a plane, in a format known as a B-scan.

FIG. 12 gives a reproduction of a B-scan collected using an OCT system. Each horizontal row of pixels corresponds to one A-scan, with the first A-scan (e.g., A11) being displayed across the top of the image. The horizontal axis labeled “Depth” represents a radial distance from imaging catheter 826. Noting—as shown in FIG. 10—that each A-scan line is progressively displaced from an adjacent A-scan in an angular direction around an axis 117 of catheter 826 (while also being displaced in a translational direction along axis 117), one set of A-scans associated with a 360° displacement around axis 117 can be collected into a view that depicts a slice of vessel 101 perpendicular to axis 117. This view is referred to as a tomographic view.

FIG. 13 shows a tomographic view based on the B-scan of FIG. 12. A tomographic view comprises a set of A-scans that defines one circumference around vessel 101. An arrow pointing straight down in FIG. 12 corresponds to the circular arrow in FIG. 13 and aids in visualization of the three-dimensional nature of the data.

FIG. 14 provides a cartoon illustration of a set of A-scans A11, A12, . . . , A18 used to compose a tomographic view. These A-scan lines are shown as would be seen looking down axis 117 (i.e., longitudinal distance between them is not shown). While eight A-scan lines are here illustrated in cartoon format in FIG. 14, typical OCT applications can include between 300 and 1,000 A-scan lines to create a B scan (e.g., about 660) or a tomographic view.

FIG. 15 provides a cartoon illustration of the tomographic view associated with the A-scans of FIG. 14. Reflections detected along each A-scan line are associated with features within the imaged tissue. Reflected light from each A-scan is combined with corresponding light that was split and sent through reference path 915 and VDL 925 and interference between these two light paths as they are recombined indicates features in the tissue. Where a tomographic view such as is depicted in FIG. 15 generally represents an image as a planar view across a vessel (i.e., normal to axis 117), an image can also be represented as a planar view along a vessel (i.e., axis 117 lies in the plane of the view).

FIG. 16 shows a longitudinal plane 127 through a vessel 101 including several A scans. Such a planar image along a vessel is sometimes referred to as an in-line digital view or image longitudinal display (ILD). As shown in FIG. 16, plane 127 generally comprises data associated with a subset of the A scans. The data of the A scan lines is processed according to systems and methods of the invention to generate images of the tissue. By processing the data appropriately (e.g., by fast Fourier transformation), a two-dimensional image can be prepared from the three dimensional data set. Systems and methods of the invention provide one or more of a tomographic view, ILD, or both.
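As a rough illustration of the Fourier-transform processing step mentioned above, the following Python sketch (using NumPy, and not taken from the patent) converts one simulated spectral interference fringe into a depth profile by taking the magnitude of its Fourier transform; the fringe model, window choice, and array sizes are illustrative assumptions only.

    import numpy as np

    def fringe_to_depth_profile(fringe):
        # Window the fringe to reduce sidelobes, then take the FFT magnitude.
        # Only half of the spectrum is kept because the input is real-valued.
        n = len(fringe)
        spectrum = np.fft.fft(fringe * np.hanning(n))
        return np.abs(spectrum[: n // 2])

    # Simulated fringe with two reflectors at different optical path differences.
    k = np.linspace(0, 2 * np.pi, 2048)          # wavenumber samples across one sweep
    fringe = 1.0 * np.cos(120 * k) + 0.4 * np.cos(300 * k)
    profile = fringe_to_depth_profile(fringe)
    print("strongest reflector at depth bin", int(np.argmax(profile)))   # about 120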

FIG. 17 is a perspective view of an idealized plane shown including an exemplary ILD in the same perspective as the longitudinal plane shown in FIG. 16. Where an OCT system captures three-dimensional image data, host workstation 433 may store the three dimensional image data in a tangible, non-transitory memory and provide a display that includes a tomographic view (e.g., FIG. 15), an ILD (e.g., FIG. 17), or both (e.g., on a screen or computer monitor). In some embodiments, a tomographic view and an ILD are displayed together, providing information that operators can intuitively visualize as representing a three-dimensional structure.

FIG. 18 is a reproduction of a display of an OCT system including a tomographic view on the left and an ILD on the right. As shown in FIG. 18, a tomographic view may include ring-like elements near the center and the ILD may include corresponding sets of vertical line-like elements. One ring in the tomographic view may correspond to one pair of lines in the ILD. These elements within the displays are often, in fact, images of part of the imaging system itself. In some embodiments, a ring in a tomographic view and lines in an ILD represent a surface of catheter 826 such as, for example, an outer surface of a catheter sheath. The portions of the images extending away from those elements are the images of the patient's tissue.

The invention provides methods and systems for detecting and using OCT image features, either intentionally generated or artifactual, for automatically adjusting the depth range in polar (“radar-like”) OCT images. Circular or cylindrical OCT scanning devices sample physical space in an inherently polar coordinate system (e.g. radius and angle rather than length and width). However, digital representations of images (i.e. arrays of pixels representing numeric values) are inherently rectangular as shown in FIG. 12.

Polar OCT images must be converted from their rectangular representation (from FIG. 12) before displaying to the viewer (e.g., transformed to FIG. 13). Additionally, if quantitative values (e.g. lumen diameters, lumen areas, circumferences, etc.) are to be measured on the polar image, then the transformation from rectangular to polar must preserve relative distances between pixels in all dimensions (radial and angular). Generally, the OCT depth scan (y axis in rectangular coordinates) maps directly to radius and the OCT circumferential scan (x axis in rectangular coordinates) maps to some increment of 2π radians (or 360°) polar angle.

For example: y=0 (the top row of the rectangular image) maps to radius=0 (the center of the polar image) and y=ymax (the bottom row of the rectangular image) maps to radius=ymax (the perimeter of the polar image). Likewise, x=0 (the left column in the rectangular image) maps to angle=0° and x=xmax/2 maps to approximately 180° and x=xmax maps to an angle of approximately 359°. (It is noted that the assignment of x or y to rows or columns is arbitrary and non-limiting. Herein, “x” and “y” could be reversed in any example, meaning only that a rectangular image would be stored in the computer memory as a transpose relative to the other.)
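The mapping described above can be sketched as follows (Python with NumPy; the inverse mapping with nearest-neighbor sampling and the array sizes are choices of this sketch, not the patent's implementation). The rectangular frame is assumed to be stored with depth (y) along rows and angle (x) along columns, so that y=0 lands at the center of the polar image and y=ymax at its perimeter.

    import numpy as np

    def rect_to_polar(rect, out_size=None):
        # rect: rows are depth samples (y), columns are A-scan angles (x).
        ymax, xmax = rect.shape
        if out_size is None:
            out_size = 2 * ymax + 1
        c = (out_size - 1) / 2.0                      # center pixel of the polar image
        rows, cols = np.indices((out_size, out_size))
        radius = np.hypot(rows - c, cols - c) * (ymax - 1) / c
        angle = np.mod(np.arctan2(rows - c, cols - c), 2 * np.pi)
        y = np.clip(np.round(radius).astype(int), 0, ymax - 1)
        x = np.round(angle / (2 * np.pi) * (xmax - 1)).astype(int)
        polar = rect[y, x]
        polar[radius > ymax - 1] = 0                  # blank outside the scanned radius
        return polar

    # Example: a frame whose pixel value equals its depth index.
    frame = np.tile(np.arange(512.0)[:, None], (1, 660))   # 512 depths x 660 A-scans
    tomo = rect_to_polar(frame)
    center = (tomo.shape[0] - 1) // 2
    print(tomo[center, center])                       # 0.0: y = 0 maps to the center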

For accurate quantitative dimensional measurement in polar images, pixels mapping to radius=0 must represent the actual physical space at the center of the axis of rotation of the imaging probe, otherwise the polar image will be artificially warped (expanded or contracted) in the radial direction. However, in an arbitrary OCT image, the pixels at y=0 do not necessarily satisfy this requirement and must be shifted in the y dimension until this is satisfied before mapping to a polar representation. Differential displacements (either controlled or uncontrolled) in the path length of the sample vs. reference arms of the interferometer will shift the pixels in the y dimension.

Displacements can occur when using cylindrical (typically actually helical) scanning fiber-optic OCT catheters. For example, when the catheter is pushed or pulled longitudinally, the fiber-optic cable can be compressed or stretched and thus a path length displacement is incurred.

The method disclosed herein automatically recognizes the uncontrolled displacement effect by searching for image features that should be stationary (but are not, due to the uncontrolled displacement), and then calibrates the OCT image data so that polar representations can be used for accurate dimensional measurements.

Additionally, a method is provided for subsequent removal of said features from the image prior to display.

Image features used by the methods and systems disclosed herein are preferably generated within the catheter itself (i.e., not within the imaged subject or surroundings) and should appear somewhat stable in depth and consistent in intensity throughout the 360° rotation of the catheter. These include but are not limited to back reflections at interfaces between optical components (aka “ghost-lines” or “echo artifacts”, these occur along the optical axis of rotating parts and thus appear as uniform circles in the polar image when no differential path length displacement occurs over the course of one catheter rotation), or reflections from the boundaries of or from within the stationary (non-rotating) catheter sheath (if it is circular in cross-sectional profile and also mechanically concentric with the rotating portion).

In some embodiments, steps in the automatic recognition and calibration method could include first averaging an OCT image frame along the x (i.e., angular) dimension. This selectively enhances the feature(s) which are rotationally stable in the y dimension (i.e., radius) instead of other image features generated by subject or surroundings. Efficacy of the method is improved if the image feature(s) used have high intensity relative to the surrounding pixels and if subject/environment features (noise) do not have strong circumferential symmetry.
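A minimal sketch of this averaging step, assuming the same NumPy layout as the preceding example (depth along rows, angle along columns); the bright stationary row and the array sizes are hypothetical.

    import numpy as np

    def angular_average(frame):
        # Average along the x (angular) dimension: one value per depth row.
        # Features at a fixed depth through the full rotation survive the
        # average, while tissue features at varying depths are suppressed.
        return frame.mean(axis=1)

    rng = np.random.default_rng(0)
    frame = rng.random((512, 660))          # 512 depth samples x 660 A-scans of noise
    frame[40, :] += 5.0                     # rotationally stable catheter feature
    profile = angular_average(frame)
    print("brightest depth row:", int(np.argmax(profile)))   # 40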

Next, features can be detected within the image. Features can be found by any suitable method or algorithm such as, for example, peak searching, correlation, thresholding, or other pattern recognition algorithms known in the art. The efficacy of this method can be improved if the range over which uncontrolled path length displacements can occur is known a priori, thus limiting the required search space.
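A peak search confined to an a priori window, in line with the search-space limitation described above, might look like the sketch below; the expected row and half-width are made-up numbers.

    import numpy as np

    def find_feature_row(profile, expected_row, halfwidth):
        # Search for the brightest depth row only within an a priori window
        # around the expected feature location.
        lo = max(expected_row - halfwidth, 0)
        hi = min(expected_row + halfwidth + 1, len(profile))
        return lo + int(np.argmax(profile[lo:hi]))

    profile = np.zeros(512)
    profile[40] = 5.0                        # feature enhanced by the averaging step
    print(find_feature_row(profile, expected_row=35, halfwidth=20))   # 40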

Then, the y-value(s) of feature(s) found in the previous step are compared to a pre-calibrated y-value which represents the actual physical location(s) of that feature(s) relative to the rotational axis, or to the location of a known “conjugate image” or “aliased image” of that feature(s) when using spectral-domain OCT.

Finally, the image is calibrated by shifting the OCT image pixels in the y dimension by the difference between the searched feature(s) and the pre-calibrated feature(s). Multiple features can be used to improve efficacy of the algorithm. After shifting the rectangular image in the y dimension, it is mapped to polar image coordinates. Radii measured to the center of the calibrated polar image will represent actual radii measured to the rotational axis in physical space.
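One way to carry out the shift step is sketched below (a NumPy illustration under the same layout assumptions as above, not the patent's implementation); rows shifted in from the edge are left blank. After this shift, the frame can be mapped to polar coordinates as in the earlier sketch.

    import numpy as np

    def calibrate_frame(frame, detected_row, reference_row):
        # Shift along the depth (y) dimension by the difference between the
        # searched and pre-calibrated feature locations; vacated rows are
        # left blank (zero).
        shift = reference_row - detected_row
        calibrated = np.zeros_like(frame)
        if shift > 0:
            calibrated[shift:, :] = frame[:-shift, :]
        elif shift < 0:
            calibrated[:shift, :] = frame[-shift:, :]
        else:
            calibrated[:] = frame
        return calibrated, shift

    frame = np.zeros((512, 660))
    frame[40, :] = 5.0                       # sheath detected at row 40
    calibrated, shift = calibrate_frame(frame, detected_row=40, reference_row=35)
    print(shift, calibrated[35, 0])          # -5 5.0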

Often image features due to the catheter are unwanted for effective and distraction-free display of the subject/environment features. For example, the catheter image features could overlap the subject/environment features.

Steps to remove (or make less noticeable) the image features could include cropping out the extent of the image feature(s) in the y, or radial, direction and in all columns, or angles. Removal (or diminishment) can further include calculating the average value of the pixels immediately inside and outside (above and below) of the cropped region for all columns, or angles, and inserting this averaged row, or circumference, in the cropped location.
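The removal step might look like the sketch below. It reads the passage as overwriting the cropped band with the average of the rows immediately above and below it (deleting the band and inserting a single averaged row is an equally plausible reading), and the row indices are hypothetical.

    import numpy as np

    def remove_feature_band(frame, row_start, row_stop):
        # Replace the feature's extent in the y (radial) direction, at all
        # columns (angles), with the average of the neighboring rows.
        out = frame.copy()
        above = frame[row_start - 1, :]
        below = frame[row_stop, :]
        out[row_start:row_stop, :] = (above + below) / 2.0
        return out

    frame = np.ones((512, 660))
    frame[34:38, :] = 9.0                    # bright sheath band at rows 34-37
    cleaned = remove_feature_band(frame, 34, 38)
    print(cleaned[35, 0])                    # 1.0, the average of the neighbors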

FIG. 18 is a cartoon illustration of a display 237 including an image of the vessel shown in FIGS. 7 and 8, as rendered by a system of the invention. The images included in display 237 in FIG. 18 are rendered in a simplified style for purposes of ease of understanding. A system of the invention may render a display as shown in FIG. 18, or in any style known in the art (e.g., with or without color).

As shown in FIG. 18, a tomographic view of vessel 101 is depicted alongside an ILD. An outer surface of a catheter sheath appears as a ring 211 in the tomographic view and as lines 217 in the ILD. The tomographic view is depicted as including calibration mark 215, while calibration mark 219 appears in the ILD.

In some embodiments, calibration involves determining the position of ring 211 (or lines 217) in display 237 so that the system can calculate a calibration value based on a known position of calibration mark 215 (see FIG. 19; calibration mark 215 need not ever be a visible mark—it represents a conceptual point). Systems of the invention can determine the position of ring 211 or any other calibration element based on an image processing operation.

FIG. 19 illustrates, in simplified fashion, a display of an imaging system showing a catheter sheath 211 and calibration mark 215.

The system can additionally use a processor to perform an image processing operation to detect a feature such as sheath 211.

In some embodiments, the system detects a feature by starting with a defined area 227 (e.g., a pre-stored range of search values, or an arbitrary area, or the entire image).

FIG. 20 depicts a defined area 227 around point 221 on a B-scan. Area 227 operates as a search window. The search window area 227 may be a rectangle, circle, ellipse, polygon, or other shape. It may have a predetermined area (e.g., a certain number of pixels). In some embodiments, a size and shape of area 227 is determined by a priori information about a likely location of a feature to be detected.

The system searches for a feature (e.g., a sheath) within area 227 by performing a processing operation on the corresponding data. The processing operation can be any suitable search algorithm known in the art.

In some embodiments, a morphological image processing operation is used. Morphological image processing includes operations such as erosion, dilation, opening, and closing, as well as combinations thereof. In some embodiments, these operations involve converting the image data to binary data, giving each pixel a binary value. With pixels within area 227 converted to binary, each pixel of catheter sheath 211 will be black, and the background pixels will predominantly be white. In erosion, every object pixel that is touching background is changed into a background pixel. In dilation, every background pixel that is adjacent to a non-background object pixel is changed into an object pixel. Opening is an erosion followed by a dilation, and closing is a dilation followed by an erosion. Morphological image processing is discussed in Smith, The Scientist and Engineer's Guide to Digital Signal Processing, 1997, California Technical Publishing, San Diego, Calif., pp. 436-442.
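The sketch below implements binary erosion, dilation, opening, and closing with a 3x3 neighborhood in plain NumPy; the neighborhood size, threshold, and frame contents are assumptions of the sketch. It treats the bright feature as the object, whereas the black/white convention in the text is the reverse; the operations are the same either way.

    import numpy as np

    def dilate(binary):
        # A pixel becomes an object pixel if any pixel in its 3x3
        # neighborhood is an object pixel.
        padded = np.pad(binary, 1)
        out = np.zeros_like(binary)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= padded[1 + dy : 1 + dy + binary.shape[0],
                              1 + dx : 1 + dx + binary.shape[1]]
        return out

    def erode(binary):
        # Object pixels touching background become background pixels.
        return ~dilate(~binary)

    def opening(binary):
        return dilate(erode(binary))     # removes isolated speckle

    def closing(binary):
        return erode(dilate(binary))     # fills small gaps in a detected line

    rng = np.random.default_rng(1)
    frame = rng.random((64, 64))
    frame[30:33, :] = 0.99                   # bright sheath band
    binary = frame > 0.95                    # hypothetical threshold
    cleaned = opening(binary)
    print(int(cleaned[31].sum()))            # the band survives; most speckle does not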

If sheath 211 is not found within area 227, area 227 can be increased and the increased area can be searched. This strategy can exploit the statistical properties of signal-to-noise ratio (SNR) by which the ability to detect an object is proportional to the square root of its area. See Smith, Ibid., pp. 432-436.

With continued reference to FIG. 20, once a portion of catheter sheath 211 is detected within area 227, the search can then be extended “upwards” and “downwards” into adjacent A-scan lines in the B-scan until the entire catheter sheath 211 is detected by the processor and its location is determined with precision. In some embodiments, image processing operations incorporate algorithms with pre-set or user-set parameters that optimize results and continuity of results. For example, if a line appears that is not contiguous across an entire 100% of the image (e.g., the entire extent of the B-scan or a full circle in a tomographic view), an accept or reject parameter can be established based on a percent contiguous factor. In some embodiments, lines that are contiguous across less than 75% (or 50% or 90%, depending on applications) are rejected while others are accepted.
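A simple reading of the percent-contiguous acceptance test is sketched below: a detected line is accepted only if it appears in at least a given fraction of the A-scans (columns). The 75% threshold comes from the example above; treating contiguity as per-column presence is a simplifying assumption of this sketch.

    import numpy as np

    def accept_detection(detected_mask, min_fraction=0.75):
        # detected_mask: True where the candidate line was found
        # (rows = depth, columns = A-scans).
        columns_hit = np.any(detected_mask, axis=0)
        return bool(columns_hit.mean() >= min_fraction)

    mask = np.zeros((512, 660), dtype=bool)
    mask[40, :600] = True                    # line found in 600 of 660 A-scans
    print(accept_detection(mask))            # True (600/660 is about 91%)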

While described above as detecting a reference item (e.g., catheter sheath 211) by receiving user input followed by using a processor to detect a location of the sheath, the steps can be performed in other orders. For example, the system can apply morphological processing operations to an entire image and detect every element, or every element that satisfies a certain quality criterion. In some embodiments, the system can receive user input that indicates a point within an image and the user can then choose the pre-detected element that is closest to that point within the image. Similarly, the steps can be performed simultaneously.

Using the methodologies herein, systems of the invention can detect an element within an image of an imaging system, such as an OCT system, with great precision, based on a search algorithm. Based on this detection, an actual location of a catheter sheath is determined, and thus a precise calibration value for the catheter sheath, and hence for the image (e.g., within a B-scan), is known. Where an expected z-coordinate Zc for the catheter sheath is known, based on information provided extrinsically, a calibration value can be determined. For example, in FIG. 20, Zs is depicted as lying to the right of Zc, thereby showing a non-zero z-offset. The calibration value is then used to provide a calibrated image, or an image at a known scale.

In some embodiments, the system calculates or uses the mean, median, or root-mean-squared distance of the sheath from the calibration mark to compute the calibration value. This may be advantageous in the event of interfering speckle noise, rough or acylindrical sheaths, non-uniform catheter rotation (NURD), angular displacement of a transducer within the sheath, off-center positioning of the transducer within the sheath, or a combination thereof. In certain embodiments, only a subset of the detected points are used, for example, for efficiency or performance optimization.
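The sketch below collapses per-A-scan sheath detections into a single calibration offset using the mean, median, or root-mean-squared distance mentioned above; attaching the sign of the mean offset to the RMS value is a design choice of this sketch, and the numbers are made up.

    import numpy as np

    def calibration_offset(detected_rows, reference_row, reducer="median"):
        # Difference, in rows, between each detected sheath depth and the
        # pre-calibrated reference depth, reduced to one calibration value.
        offsets = np.asarray(detected_rows, dtype=float) - reference_row
        if reducer == "mean":
            return float(np.mean(offsets))
        if reducer == "median":
            return float(np.median(offsets))
        if reducer == "rms":
            # RMS distance, signed by the mean so it can still act as a shift.
            return float(np.sign(np.mean(offsets)) * np.sqrt(np.mean(offsets ** 2)))
        raise ValueError("unknown reducer: " + reducer)

    rng = np.random.default_rng(2)
    detected = 40 + rng.normal(0, 1.5, size=660)   # noisy per-A-scan detections
    print(round(calibration_offset(detected, reference_row=35), 2))   # about 5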

FIG. 21 shows a calibrated image, here, a B-scan. The image is depicted having the catheter sheath aligned with the calibration mark. Bars on the left and right side of FIG. 21 show that some data may be shifted out and some blank space introduced by the calibration. In an alternative embodiment, the image can be stretched or compressed, or a combination of stretching and shifting may be performed, depending on preferences, purposes, or functions of a system.

It will be appreciated that the foregoing description is applicable in live mode or review mode. If the imaging system is operating in live mode, capturing an image of tissue, the calibration can be put into effect either by changing the length of reference path 915 so that the z-offset is zero or by transforming the dataset or on-screen image. The length of reference path 915 can be changed through the operation of the motor in the VDL. The distance Zc-Zs is converted into millimeters and a command is sent to move the VDL to a new position.

If the dataset is to be transformed, either in live mode or while the system is operating in review mode, the dataset is digitally shifted, stretched, or both.

In another aspect, the invention provides a method for calibrating an imaging system based on receipt of user input that indicates a “motion”, such as a click-and-drag operation on a computer screen.

While discussed above using a surface of a catheter sheath as a reference item which is used as a basis for calibration, other reference items are suitable. For example, any item that can be depicted such that its expected location and actual location can be compared in a display of an imaging system may be used. In some embodiments, a fiducial marker or calibration bar having a known dimension (e.g., 1 nm, 1 mm, 1 cm) is introduced into the imaging target. The system operates to display a scale or a grid based on an expected appearance of the known dimension. The user then gives input indicating a point in the display near the reference item and the system also detects a location of the reference item in an area around the indicated point. Based on the expected and actual locations or dimensions of the reference item, a calibration value is calculated and a calibrated image is provided. User input, displays, and methods of receiving user input and performing calculations may be provided by one or more computers.

In certain embodiments, display 237 is rendered within a computer operating system environment, such as Windows, Mac OS, or Linux or within a display or GUI of a specialized system. Display 237 can include any standard controls associated with a display (e.g., within a windowing environment) including minimize and close buttons, scroll bars, menus, and window resizing controls. Elements of display 237 can be provided by an operating system, windows environment, application programming interface (API), web browser, program, or combination thereof (for example, in some embodiments a computer includes an operating system in which an independent program such as a web browser runs and the independent program supplies one or more of an API to render elements of a GUI). Display 237 can further include any controls or information related to viewing images (e.g., zoom, color controls, brightness/contrast) or handling files comprising three-dimensional image data (e.g., open, save, close, select, cut, delete, etc.). Further, display 237 can include controls (e.g., buttons, sliders, tabs, switches) related to operating a three dimensional image capture system (e.g., go, stop, pause, power up, power down).

In certain embodiments, display 237 includes controls related to three dimensional imaging systems that are operable with different imaging modalities. For example, display 237 may include start, stop, zoom, save, etc., buttons, and be rendered by a computer program that interoperates with OCT or IVUS modalities. Thus display 237 can display an image derived from a three-dimensional data set with or without regard to the imaging mode of the system.

FIG. 22 diagrams an exemplary system 400. As shown in FIG. 22, imaging engine 859 communicates with host workstation 433 as well as optionally server 413 over network 409. In some embodiments, an operator uses host workstation 433, computer 449, or terminal 467 to control system 400 or to receive images. An image may be displayed using an I/O 454, which may include a monitor. Any I/O may include a monitor, keyboard, mouse, or touchscreen to communicate with processor 459, for example, to cause data to be stored in any tangible, non-transitory memory 463. Server 413 generally includes an interface module 425 to communicate over network 409 or write data to data file 417. Input from a user is received by a processor in an electronic device such as, for example, host workstation 433, server 413, or computer 449. Methods of the invention can be performed using software, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions can also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations (e.g., imaging apparatus in one room and host workstation in another, or in separate buildings, for example, with wireless or wired connections). In certain embodiments, host workstation 433 and imaging engine 859 are included in a bedside console unit to operate system 400.

A computer generally includes a processor for executing instructions and one or more memory devices for storing instructions, data, or both. Processors suitable for the execution of methods and operations described herein include, by way of example, both general and special purpose microprocessors (e.g., an Intel chip, an AMD chip, an FPGA). Generally, a processor will receive instructions or data from read-only memory, random access memory, or both. Generally, a computer will also include, or be operatively coupled to, one or more mass storage devices for storing data that represent a target such as bodily tissue. Any suitable computer-readable storage device may be used such as, for example, solid-state, magnetic, magneto-optical, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, particularly tangible, non-transitory memory, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, NAND-based flash memory, solid state drives (SSD), and other flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and optical disks (e.g., CD and DVD disks).

INCORPORATION BY REFERENCE

References and citations to other documents, such as patents, patent applications, patent publications, journals, books, papers, web contents, have been made throughout this disclosure. All such documents are hereby incorporated herein by reference in their entirety for all purposes.

EQUIVALENTS

Various modifications of the invention and many further embodiments thereof, in addition to those shown and described herein, will become apparent to those skilled in the art from the full contents of this document, including references to the scientific and patent literature cited herein. The subject matter herein contains important information, exemplification and guidance that can be adapted to the practice of this invention in its various embodiments and equivalents thereof.

Claims

1. A method of calibrating an imaging system, the method comprising:

obtaining an image that includes a target and a reference item;
detecting a location of the reference item within the image;
determining a y-value of the location;
comparing the determined y-value to a stored reference y-value;
calculating a calibration value based on the comparison; and
providing a calibrated image by shifting pixels in a y-direction according to the calibration value.

2. The method of claim 1, wherein the imaging system is an optical coherence tomography system.

3. The method of claim 1, wherein the reference item comprises an image of a catheter sheath.

4. The method of claim 1, wherein the detecting step comprises a morphological image processing operation.

5. The method of claim 1, further comprising digitally transforming image data to provide the calibrated image.

6. The method of claim 1, wherein detecting the location comprises one selected from the list consisting of peak searching, correlation, thresholding, and performing a pattern recognition algorithm.

7. The method of claim 1, further comprising obtaining an a priori range over which a path length displacement may occur.

8. The method of claim 1, wherein determining the y-value comprises averaging across the image.

9. The method of claim 1, wherein calculating the calibration value comprises taking a difference between the determined y-value and the stored reference y-value.

10. The method of claim 9, wherein the pixels are shifted by the difference.

11. The method of claim 1, further comprising removing a feature from the image.

12. The method of claim 1, further comprising removing the reference item from the image.

13. The method of claim 11, wherein removing comprises cropping out the feature across its extent in the y direction at all x positions.

14. The method of claim 13, wherein removing comprises calculating an average value of pixels proximal to the cropped region and inserting the average values to replace the cropped pixels.

15. The method of claim 1, wherein the method is performed using a computer system comprising a tangible, non-transitory memory coupled to a processor.

16. An imaging system, the system comprising:

a processor coupled to a non-transitory memory, wherein the system is operable to: obtain an image that includes a target and a reference item; detect a location of the reference item within the image; determine a y-value of the location; compare the determined y-value to a stored reference y-value; calculate a calibration value based on the comparison; and provide a calibrated image by shifting pixels in a y-direction per the calibration value.
Patent History
Publication number: 20140270445
Type: Application
Filed: Mar 12, 2014
Publication Date: Sep 18, 2014
Patent Grant number: 9301687
Applicant: VOLCANO CORPORATION (San Diego, CA)
Inventor: Nathaniel J. Kemp (Concord, MA)
Application Number: 14/206,485
Classifications
Current U.S. Class: Tomography (e.g., Cat Scanner) (382/131)
International Classification: G06T 7/00 (20060101);