Method and apparatus for correction of errors in surfaces
Methods and systems are disclosed for correcting segmentation errors in pre-existing contours and surfaces. Techniques are disclosed for receiving one or more edit contours from a user, identifying pre-existing data points that should be eliminated, and generating a new corrected surface. Embodiments disclosed herein relate to using received edit contours to generate a set of points on a pre-existing surface, and applying a proximity test to eliminate pre-existing constraint points that are undesirable.
The present invention pertains generally to the field of processing medical images, particularly to computer assisted modification of surfaces representative of anatomical structures.
BACKGROUND AND SUMMARY OF THE INVENTION

Various techniques are known in the art for automated contouring and segmentation of computer images as well as generation of three-dimensional surfaces, e.g. from two-dimensional contour data. Typical objects of interest in medical images include organs such as bladder, prostate, kidneys, and many other types of anatomical objects as is well known in the art. Objects of interest in cellular imaging include, for example, cell nucleus, organelles, etc. It will be understood that the techniques disclosed herein are equally applicable to any type of object of interest.
Exemplary techniques for generating and manipulating three-dimensional surfaces are disclosed in U.S. application Ser. No. 11/848,624, entitled “Method and Apparatus for Efficient Three-Dimensional Contouring of Medical Images”, filed Aug. 31, 2007, and published as U.S. Patent Pub. No. 2009-0060299, and U.S. application Ser. No. 12/022,929, entitled “Method and Apparatus for Efficient Automated Re-Contouring of Four-Dimensional Medical Imagery Using Surface Displacement Fields”, filed Jan. 30, 2008, and published as U.S. Patent Pub. No. 2009-0190809, the entire disclosures of each of which are incorporated herein by reference.
Software utilities for generating and manipulating 2D contours and 3D surfaces include 3D Slicer (Pieper et al., 2006; Gering et al., 1999) and ITK-SNAP (Yushkevich et al., 2006), as well as software packages such as the VTK software system available from Kitware, Inc. of Clifton Park, N.Y. (see Schroeder et al., The Visualization Toolkit, 4th Ed., Kitware, 2006), and the Insight Registration and Segmentation ToolKit (ITK, Ibanez et al., 2005), the entire disclosures of each of which are incorporated herein by reference.
Three-dimensional (3D) surfaces are typically generated based on contour data corresponding to many two-dimensional (2D) images. Generally speaking, a contour is a set of points that identifies or delineates a portion or segment of an image that corresponds to an object in the image. Each contour separates or segments an object from the remainder of the image. Contours may be generated by computer vision (e.g. by edge detection software), manually (e.g. by a person using a marker to draw edges on an image), or any combination of the two (e.g. by a person using computer-assisted segmentation or contouring software).
An exemplary system may be configured to (1) capture many images of an object of interest from many different viewpoints, (2) perform an automated segmentation process to automatically generate contours that define the object of interest, and (3) generate a 3D surface representative of the object of interest. A 3D surface may be represented by one or more radial basis functions, each centered on a constraint point. Thus, a 3D surface may be defined by a plurality of constraint points.
As an arbitrary example, a system may be configured to capture 100 2D images in each of the coronal, sagittal, and transverse planes, for a total of 300 two-dimensional images. The exemplary system could then automatically generate contours for each of the 2D images using a segmentation process (such as the exemplary segmentation processes disclosed in the cross-referenced applications), and then use the generated contours to create a 3D object representative of the anatomical structure.
Automated or computer generated contouring and segmentation of medical images frequently results in erroneous contours that do not accurately reflect the shape of the anatomical structure shown in the underlying images. Errors may be more prevalent where an original image set suffers from low contrast, noise, or nonstationary intensities.
Furthermore, manual contouring and computer-assisted contouring based on user-input may also result in contours having mistakes that would benefit from further manual editing. For example, a more experienced user may wish to modify erroneous contours created by a less experienced user.
Errors in contours fall into two general categories: under-segmentation and over-segmentation. With under-segmentation, only a first portion of an object (e.g. anatomical structure) is correctly identified by the contour, while a second portion of the object is incorrectly excluded. In the case of over-segmentation, extraneous portions of an image are incorrectly identified by the contour as part of the object (e.g. anatomical structure).
Thus, it is desirable to provide a user with the ability to manually edit contours to correct various mistakes and errors in an existing contour. With existing contour editing software, a user supplies a 2D “edit contour” for an object of interest (e.g. anatomical structure) for one or more 2D images. The user-supplied edit contour data is indicative of an edited or corrected contour for the object.
Due to the large number of underlying images and contours that may be involved, it is preferable that contour editing software not require the user to create edit contours for all of the underlying images. It is preferable to allow the user to modify only a subset of the underlying 2D contours (e.g. based on user selection of the viewpoint or viewpoints in which the error is most clearly visible), and to provide software for correcting a 3D surface shape representative of an object based on the received edit contours.
The inventor has identified various problems that arise in the process of modifying contours and surfaces. For example, one problem is the difficulty inherent in deciding which pre-existing constraint points for a pre-existing surface should be eliminated. For example, when an object of interest has been under-segmented, the pre-existing surface will be too small. The received edit contours will thus correspond to a new 3D surface that is larger than the pre-existing surface. Thus, an interface between the pre-existing 3D surface and the new 3D surface may exist, such as a concave deformity. In the case of over-segmentation, the interface may be a convex deformity. It will be understood that any combination of under-segmentation and over-segmentation may occur for a given object of interest. E.g., one portion of an object may be over-segmented, while another portion is under-segmented, as is known in the art. Accordingly, multiple edit contours may be received for a single object in a single image, each edit contour corresponding to a different segmentation error.
Embodiments disclosed herein are directed to correcting errors in one or more pre-existing contour, pre-existing surface, and/or pre-existing set of constraint points. The pre-existing contour(s), pre-existing constraint points, and pre-existing surface(s) may have been automatically or manually generated. Embodiments disclosed herein are directed to modifying a pre-existing three-dimensional surface and/or set of pre-existing constraint points based on one or more received edit contours. Embodiments disclosed herein are directed to creating a new three-dimensional surface and/or set of constraint points based on one or more received edit contours. Embodiments are disclosed for correcting both under-segmentation and over-segmentation.
Embodiments disclosed herein use data corresponding to the received edit contours to selectively eliminate pre-existing constraint points on a pre-existing 3D surface.
These and other features and advantages of the present invention are disclosed herein and will be understood by those having ordinary skill in the art upon review of the description and figures hereinafter.
Comprehensible displays of patient anatomy based on medical imaging are helpful in many areas, such as radiotherapy planning. Interactive contouring of patient anatomy can be a large part of the planning cost. While auto-segmentation programs now coming into use can produce results that approximate the true anatomy, these results must be reviewed by trained personnel and often require modification. The inventor has identified a need in the art to accurately and efficiently reconstruct 3D objects by combining 2D contours from the most informative views, and edit existing structures by reshaping the structure's surface based on user input. The inventor discloses various embodiments for providing these features. Both goals may be achieved by interpolating over new and existing structure elements to produce reconstructions that are smooth and continuous like the physical anatomy.
Surface Representations
A three-dimensional (3D) surface is one of the most compact and versatile ways to represent anatomical structures. The structure surface is sampled during contouring, and resampled when contours are edited. Recent progress in computer science has produced new methods to create and manipulate surfaces efficiently, in two broad areas depending on the surface definition. Explicit surfaces are meshes with vertices and polygon edges connecting the vertices. Implicit surfaces (or “implicit function surfaces”) are defined by spline-like basis functions (e.g. radial basis functions (RBF)) and control or constraint points.
The explicit mesh is a familiar graphics object and is the representation used by programs like 3D Slicer (Pieper et al., 2006; Gering et al., 1999) and ITK-SNAP (Yushkevich et al., 2006) to perform anatomy display, registration, and some segmentation functions. These two programs (and many others) are built in part on open source software including the Visualization Toolkit (VTK, Schroeder, et al., 2006) and the Insight Registration and Segmentation ToolKit (ITK, Ibanez et al., 2005). The Marching Cubes algorithm (Lorensen and Cline, 1987) used to create surface meshes is an essential technology for these programs.
Deformation of explicit meshes depends on tightly constrained, coordinated relocations of vertices that preserve the mesh polygon geometry. Laplacian methods (reviewed in Botsch and Sorkine) enable mesh deformations that ascend in complexity from local linear mesh translations up to general deformations that preserve local surface differential properties. Deformations responding to motions of defined “handles” on the mesh produce efficient and detailed animation (Nealen, et al., 2005; Kho and Garland, 2005). A medical application of explicit deformation for semiautomated segmentation is given in (Pekar et al., 2004; Kaus et al., 2003) in which the mesh smoothness and geometry are constrained by quadratic penalties on deviation from a model and balanced with mesh vertices' attraction to image edges. In most cases, the deforming mesh has a fixed set of vertices and fixed polygonal connectivity, and only needs to be re-rendered to observe a change.
Implicit surfaces may be defined by the locations of constraint points and weighted sums of basis functions that interpolate the surface between constraint points. The surface shape can be changed simply by relocating or replacing some of the constraint points. Turk and O'Brien (1999) popularized what they termed variational implicit surfaces by demonstrating several useful properties: the implicit surfaces are smooth and continuous, one can approach real-time surface generation and editing for small numbers of constraints (<2000), and one 3-D shape can continuously blend into another. Carr et al. (2001) proposed a computational method to accelerate implicit surface generation, and earlier (Carr et al., 1997) demonstrated a medical application of implicit surfaces to the design of cranial implants. Karpenko et al. (2002) demonstrated an interactive graphics program that created complex 3-D shapes using implicit surfaces generated and modified by user gestures. Jackowski and Goshtasby (2005) demonstrated an implicit representation of anatomy with which the user could edit a surface by moving the constraints. More recent work has concentrated on hardware acceleration of implicit rendering (Knoll et al., 2007; Singh and Narayanan, 2010) and alternative implicit surface polygonization algorithms (Gelas et al., 2009; Kalbe et al., 2009) that purport to improve on the classic polygonization method of Bloomenthal (1994).
Implicit Functions as Shape Media
Implicit functions have several important applications in imaging science, including PDE-driven (partial differential equation) image restoration, deformable registration, and segmentation (Sethian, 1999; Sapiro, 2001; Osher & Fedkiw, 2003). Leventon et al. (2000) pointed out that the average shape of multiple implicit-function objects could be obtained by averaging the registered signed distance functions (a kind of implicit function), and that distributions of shapes could be represented by the principal components of sets of implicit functions. This result was elaborated in several joint registration-segmentation medical applications (Tsai et al., 2003; Tsai et al., 2005; Pohl et al., 2006).
Implicit surfaces may be represented as the zero level sets of a function ƒ(x)=h, where ƒ is a real function taking a real value h at the general point x=(x,y,z), with h=0 at the surface, h>0 inside the surface, and h<0 outside the surface. Signed distance functions are implicit functions where h is the distance from a general point x to the nearest point x̂ for which ƒ(x̂)=0, with the sign convention above. The goal is to reconstruct a surface ƒ( ) from N points {x1, x2, . . . , xN} with corresponding values {h1, h2, . . . , hN}, where hi=ƒ(xi). However, such a problem does not have a unique solution. A constraint may be applied to convert this ill-posed problem to one with a solution. Data smoothness is the usual constraint, and regularization theory (Tikhonov and Arsenin, 1977; Girosi et al., 1993) provides such solutions by the variational minimization of functionals of the form

H[ƒ]=Σi=1 . . . N (ƒ(xi)−hi)² + λS[ƒ]

where λ≥0 is the regularization parameter that establishes the tradeoff between the error term (ƒ(xi)−hi)² and the smoothness functional S[ƒ], which penalizes functions ƒ that change direction rapidly (a smooth surface is preferred over a wrinkled one). It has been shown (Duchon, 1977; Wahba, 1990; Girosi et al., 1993) that H[ƒ] is minimized when ƒ is expressed as a weighted sum of radial basis functions (RBFs): positive, radially symmetric, real functions. Two important examples are

φ(∥r∥)=∥r∥^(2m−d) ln ∥r∥ (2m−d even)
φ(∥r∥)=∥r∥^(2m−d) (2m−d odd)
where ∥r∥=∥x−cj∥ is the Euclidean distance from x to the radial function center cj, m is the smoothness parameter, and d is the dimensionality of the object on which interpolation is done. In one embodiment, the form ∥r∥² ln ∥r∥ (m=2, d=2) is used, which corresponds to the well-known 2D thin plate spline (Bookstein, 1978). In another embodiment, the triharmonic RBF ∥r∥³ (m=3, d=3) (Turk & O'Brien, 1999) is used.
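By way of illustration, the two RBF forms above may be sketched in Python/NumPy as follows; the function names are illustrative only and do not form part of the disclosed system:

```python
import numpy as np

def thin_plate_spline(r):
    """2D thin plate spline RBF: r^2 * ln(r) (m=2, d=2), with the
    removable singularity at r = 0 handled explicitly."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def triharmonic(r):
    """3D triharmonic RBF: r^3 (m=3, d=3; Turk & O'Brien, 1999)."""
    return np.asarray(r, dtype=float) ** 3
```

Both functions are positive, radially symmetric, and increase monotonically away from the center.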
Implicit Surface Interpolation
Interpolation of general points x=(x,y,z)ᵀ across a 3D surface is computed by

s(x)=P(x)+Σj=1 . . . n wj φ(∥x−cj∥)  EQ(3)

where interpolant s(x) approximates ƒ(x) by summing the RBFs φ(∥x−cj∥)=∥x−cj∥³, weighted by the wj. The points cj=(xj,yj,zj) are constraint points on which the RBFs are centered. The RBFs, in conjunction with the scalars hj=ƒ(cj), determine the shape of the surface.
To make shape specification more reliable, two sets of constraints may be used (Turk and O'Brien, 1999; Carr et al., 2001): a first set on the implicit function at the zero level, ƒ(cj)=hj=0, and a second set, equal in number to the first and each located at the end of an inward-pointing normal of length one from a constraint in the first set, ƒ(ck≠j)=hk=1. The first term of Equation (3) is a linear polynomial P(xj)=p0+p1xj+p2yj+p3zj that locates the surface relative to an arbitrary origin. Since the surface function is linear in the φ, the wj can be determined by least squares solution of the linear system

B [w p]ᵀ = [h 0]ᵀ, with B = [[A, C], [Cᵀ, 0]]  EQ(5)

where A is an n×n matrix with Ai,j=φ(∥ci−cj∥)=∥ci−cj∥³, C is an n×4 matrix whose i-th row contains (1 xi yi zi), vector p=(p0, p1, p2, p3)ᵀ is the origin basis, vector w=(w1, . . . , wn)ᵀ contains the weights, and vector h=(h1, . . . , hn)ᵀ contains the known implicit function values at the constraint locations. Matrix B has dimensions (n+4)×(n+4). Because the φ(∥x−c∥)=∥x−c∥³ are monotonically increasing, submatrix A is dense, and the solution

[w p]ᵀ = B⁻¹ [h 0]ᵀ  EQ(6)

can in principle be obtained by factorization using LU decomposition (Golub and Van Loan, 1996). Unfortunately, the solution is ill-conditioned because matrix B has a zero diagonal. Dinh et al. (2002) suggest making the diagonal more positive by adding to submatrix A the n×n diagonal matrix Λ
where the diagonal elements Λii may be individually set for each constraint, with one value used for constraints lying on the implicit surface (ƒ(ci)=0) and another for constraints on a normal inside the surface (ƒ(ci)=+1). This greatly improves the stability of the B⁻¹ solution.
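The assembly and solution of the regularized linear system may be sketched in Python/NumPy as follows. This is a simplified illustration, not the disclosed implementation: it uses the triharmonic RBF and a single uniform regularization value on the diagonal (Λ = reg·I) rather than per-constraint values Λii, and the function names are illustrative:

```python
import numpy as np

def fit_implicit(points, h, reg=0.0):
    """Solve [[A + Lambda, C], [C^T, 0]] [w; p] = [h; 0] for the RBF
    weights w and the linear polynomial coefficients p, using the
    triharmonic RBF phi(r) = r^3."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = d ** 3 + reg * np.eye(n)          # Lambda = reg * I (uniform)
    C = np.hstack([np.ones((n, 1)), points])
    B = np.zeros((n + 4, n + 4))
    B[:n, :n] = A
    B[:n, n:] = C
    B[n:, :n] = C.T
    rhs = np.concatenate([h, np.zeros(4)])
    sol = np.linalg.solve(B, rhs)         # LU factorization internally
    return sol[:n], sol[n:]               # weights w, polynomial p

def evaluate(x, points, w, p):
    """Evaluate s(x) = P(x) + sum_j w_j * ||x - c_j||^3."""
    r = np.linalg.norm(points - x, axis=-1)
    return p[0] + p[1:] @ x + w @ (r ** 3)
```

For example, fitting surface constraints (h=0) sampled on a sphere together with inward-normal constraints (h=1) reproduces the specified hi at each constraint location.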
After solving for the wj and the pj in Equation (6), the implicit function in Equation (3) can be evaluated to determine the zero-level points of ƒ. The method of Bloomenthal (1994) may be used to enumerate mesh vertices and polygons.
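A minimal sketch of locating the zero level set on a sampled grid is shown below; it only flags the grid cells that straddle a sign change (the cells a polygonizer such as Bloomenthal's would turn into polygons), and the function name is illustrative:

```python
import numpy as np

def zero_crossing_cells(vals):
    """Flag grid points bordering a sign change of the sampled implicit
    function along any axis; these are the candidate cells for a
    polygonizer such as Bloomenthal (1994) or Marching Cubes."""
    sign = np.sign(vals)
    cross = np.zeros(vals.shape, dtype=bool)
    for axis in range(vals.ndim):
        diff = np.diff(sign, axis=axis) != 0
        lo = [slice(None)] * vals.ndim
        hi = [slice(None)] * vals.ndim
        lo[axis] = slice(0, -1)
        hi[axis] = slice(1, None)
        cross[tuple(lo)] |= diff          # mark both voxels bordering
        cross[tuple(hi)] |= diff          # the sign change
    return cross
```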
Efficient Constraint Allocation
The main performance limitation is the number of constraints n. Matrix factorization (LU decomposition using Gaussian elimination) has a complexity of about O(2n³/3), and the subsequent surface meshing requires m determinations of s(x), each requiring an n-length inner product (Equation (3)), where the number of surface search computations m depends on the area of the surface and the sampling rate.
In addition to hardware and software parallelization, the computational burden can also be reduced by minimizing the number of constraints to those that sample the contour only at the most shape-informative points, where the second and higher derivatives are changing most rapidly. De Boor (2001) described a method to optimally place points along a curve to represent its shape. The idea is to divide the total curvature into m equal parts, concentrating sample points where the curvature is greatest. A planar closed curve C(x,y,z) of length L can be parameterized by distance u along the curve such that C(x,y,z)=C(x(u),y(u),z(u))=C(u), where 0≤u≤L and C(0)=C(L). The De Boor curvature (bending energy) is the k-th root of the absolute value of the k-th derivative of the curve at point u,

κ(u)=|DᵏC(u)|^(1/k)

where Dᵏ is the k-th derivative operator. The total curvature

K=∫₀ᴸ κ(u) du

divided into m equal parts,

∫[νj−1, νj] κ(u) du = K/m, j=1, . . . , m (with ν0=0)

enables one to solve for sample points νj, j=1, . . . , m. These νj are the surface constraints derived from contour C(u). A set of m corresponding normal constraints is then created to complete the implicit shape definition, so the total number of constraints is n=2m. The number m must be specified to the program before the νj can be determined.
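A discrete version of this equal-curvature allocation may be sketched as follows for a densely sampled closed curve; derivatives are approximated with finite differences, and the function name is illustrative:

```python
import numpy as np

def allocate_constraints(curve, m, k=2):
    """Place m sample points along a closed sampled curve (shape (N, 2))
    at equal increments of cumulative De Boor curvature
    kappa(u) = |D^k C(u)|^(1/k), concentrating samples where the
    curvature is greatest."""
    d = curve
    for _ in range(k):                    # discrete k-th derivative
        d = np.gradient(d, axis=0)
    kappa = np.linalg.norm(d, axis=1) ** (1.0 / k)
    K = np.cumsum(kappa)                  # running total curvature
    targets = np.linspace(0.0, K[-1], m, endpoint=False)
    idx = np.searchsorted(K, targets)     # equal-curvature quantiles
    return curve[idx]
```

Applied to an ellipse, this places more of the m samples near the high-curvature ends of the major axis.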
Implicit Surface Reconstruction
Implicit surface reconstruction accuracy is related to the number of constraints n (Eq. (3)) and their distribution in space. That distribution can be controlled by three parameters: 1) the number of constraints used to resample each contour, 2) the number of contours arrayed in various planes spanning the volume of the structure, and 3) a distance limit that prevents constraints being placed closer than a threshold distance from one another. This distance limit confers robustness on reconstructions in situations where manually drawn contours in orthogonal planes do not exactly intersect.
The goal in reconstruction is to efficiently sample a surface by using as few constraints as necessary to obtain the desired accuracy. FIGS. 14(A)-(C) show the effects of varying these three parameters on reconstruction accuracy.
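The third parameter, the minimum-separation distance limit, may be sketched as a simple greedy filter over candidate constraint locations (an illustrative simplification, not the disclosed implementation):

```python
import numpy as np

def enforce_distance_limit(candidates, dmin):
    """Accept each candidate constraint only if it lies at least dmin
    from every constraint accepted so far. This prevents near-duplicate
    constraints where contours drawn in orthogonal planes almost, but
    not exactly, intersect."""
    kept = []
    for c in candidates:
        if all(np.linalg.norm(c - k) >= dmin for k in kept):
            kept.append(c)
    return np.array(kept)
```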
To study the behavior of implicit surface reconstruction, simulated contouring and reconstruction was performed by resampling expert-contoured prostate, bladder and rectum from 103 cases, intersecting each expert structure with transverse/sagittal/coronal (T/S/C) planes and assigning constraints to locations on these planes' intersection polygons, using the efficient allocation method described above. With these constraints, one can solve the linear problem in Equation 6 and reconstruct the surface with Equation 3. The reconstructed surface can then be compared with the expert surface by overlap measures. For example, the Dice coefficient (Dice, 1945) may be used, defined for two volumes A and B as twice their intersection divided by the sum of their volumes:

D=2|A∩B|/(|A|+|B|)
A second measure of overlap is the mean distance between nearest vertices in the two surfaces, denoted as the mean vertex distance (MVD).
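The two overlap measures may be sketched as follows for boolean voxel masks and vertex arrays; the symmetric averaging in the MVD sketch is one reasonable reading of "mean distance between nearest vertices," and the function names are illustrative:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient of two boolean volumes: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mean_vertex_distance(verts_a, verts_b):
    """Mean distance from each vertex to its nearest vertex in the other
    surface, averaged symmetrically over both surfaces."""
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Identical volumes give a Dice coefficient of 1.0, and identical vertex sets give an MVD of 0.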
Spatial detail increases with increasing density of constraints along the contours. The deep invagination on the anterior coronal face of the prostate (left side of the oblique views) appears in the lowest resolution (20 constraints/contour) reconstruction and becomes more sharply defined with increased sampling rates. A smaller indentation under the coronal contour (right side of oblique view) at 160 and 80 has disappeared at 40. The squared inferior aspect of the prostate seen in the sagittal view is most sharply defined at 160 and diminishes steadily to 20.
Implicit Surface Modification
The shape of an implicit surface may be modified by changing the locations or number of the constraint points that define the implicit surface. An original or pre-existing set of constraint points is represented by C={c1, c2, . . . } and edit contour constraints may be represented as E={e1, e2, . . . }. As described above, the inventor discloses that it is preferable to eliminate some of the C constraints. Equation 6 may be recalculated using the reduced set of C constraints and the E constraints as described in detail below.
Error Correction Techniques
Once an acceptable 3D surface is generated corresponding to an object of interest, the 3D surface may be used to re-contour the object in one or more 2D images including the object (or a portion of the object). For example, mesh 137 may be used to generate a new set of contours (e.g. contours 131), as is known in the art. A 3D surface (such as mesh 137) may be used to generate one or more contours (e.g. 2D contours) by computing the intersection of the surface (e.g. vertices of a mesh) with one or more planes.
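The intersection of a mesh with a plane may be sketched as follows for an axis-aligned plane z = z0; edges whose endpoints straddle the plane are interpolated linearly, giving the 2D points of a re-contoured slice (an illustrative simplification; edges lying exactly in the plane are ignored):

```python
import numpy as np

def slice_edges(vertices, edges, z0):
    """Intersect mesh edges with the plane z = z0 and return the 2D
    points where edges cross the plane -- raw material for a 2D contour
    of the 3D surface."""
    pts = []
    for i, j in edges:
        a, b = vertices[i], vertices[j]
        if (a[2] - z0) * (b[2] - z0) < 0:      # edge straddles the plane
            t = (z0 - a[2]) / (b[2] - a[2])    # linear interpolation
            p = a + t * (b - a)
            pts.append(p[:2])
    return np.array(pts)
```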
At step 203, the process receives data corresponding to one or more new edit contours, such as edit contour 115. The received edit contours may comprise two-dimensional contours from one, two, three, or more different planes (e.g. coronal, sagittal, transverse, oblique). The process may receive user input indicative of edit contour data at step 203. It should be noted that edit contours may be received in any plane, not limited to the standard coronal, sagittal, and transverse planes.
At step 205, each received edit contour is down-sampled. Exemplary graphical depictions of down-sampling are shown in
At step 207, the system computes new edit constraint points based on the received edit contour data. These new edit constraint points will be used later to create the new corrected surface, as described in detail below. A graphical depiction of an exemplary set of edit constraint points based on two edit contours is shown in
At step 209, the system identifies one or more erroneous pre-existing data items (e.g. constraint points) that should be eliminated. Exemplary embodiments for identification/elimination step 209 are described in detail below with reference to
At step 211, the system creates corrected surface data for the new, corrected surface. In a preferred embodiment, a new corrected set of constraint points is computed, wherein the new corrected set of constraint points comprises the set of pre-existing constraint points, plus the constraint points generated as a result of the new edit contours, minus the constraint points identified for elimination, as shown in Equation (11).
C(correct)=C(old)+C(edit)−C(eliminate) EQ(11)
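Equation (11) amounts to simple set arithmetic over constraint points, which may be sketched as follows (constraints represented as hashable tuples for illustration; the function name is not part of the disclosed system):

```python
def corrected_constraints(old, edit, eliminate):
    """Equation (11): C(correct) = C(old) + C(edit) - C(eliminate).
    Keeps the pre-existing constraints not marked for elimination and
    appends the new edit-contour constraints."""
    eliminate = set(eliminate)
    kept = [c for c in old if c not in eliminate]
    return kept + list(edit)
```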
At step 213 the system generates a new corrected surface based on the new corrected set of constraint points. Various methods for generating a surface from a set of constraint points are known in the art. See, e.g., Turk and O'Brien, Shape Transformation Using Variational Implicit Functions (1999). See also Bloomenthal, (1994).
At step 215 the system displays the new surface. For example, the system may display the new surface on a monitor in real-time as new edit contours are received from user input. Or as another example, the system may generate and display the new surface steps (211 and 213) in response to a user request to re-generate the surface based on edit contours input so far. The system may allow the user to generate a new surface based on a single edit contour, as described below.
It should be noted again that the embodiment shown in
It should also be noted that the modification process (e.g. the process of
Graphical depictions of exemplary surface projection points 425 and 435 are shown in
At step 225, the system uses the surface projection points to identify pre-existing surface constraint points for elimination. In an exemplary embodiment, the system applies a proximity threshold test to identify all pre-existing data points that are within a pre-set Euclidean threshold distance from any of the surface points 425 and 435. The pre-set threshold distance effectively defines a 3D “zone of elimination” around each surface point, such that any pre-existing constraint point within the zone will be eliminated.
It should be noted that the proximity threshold test (in the surface projection approach and/or the surface patch approach) may compute a monotonic function of the distance between the two points, and compare the output of the function to the proximity threshold. For example, the system could compute the square of the distance or the logarithm of the distance between the points.
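The proximity threshold test may be sketched as follows; squared distances are compared against the squared threshold, which is one such monotonic function of distance and avoids the square root (an illustrative sketch, not the disclosed implementation):

```python
import numpy as np

def points_to_eliminate(pre_existing, surface_points, threshold):
    """Flag every pre-existing constraint point lying within the
    threshold distance of any surface projection point. Comparing
    squared distance to threshold**2 is equivalent to comparing
    distance to threshold, since squaring is monotonic."""
    d2 = ((pre_existing[:, None, :] - surface_points[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1) <= threshold ** 2
```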
It should further be noted that the proximity threshold value may be input by a user. For example, the system may prompt the user for a proximity threshold at the beginning of the process (e.g. at step 203). In another exemplary embodiment, the system could display a newly generated surface along with a prompt requesting the user to input a proximity threshold value (e.g. at step 215), and the prompt may be operable to adjust the proximity threshold and thereby cause the system to re-generate the new surface based on the received proximity threshold value. It should be understood that various user prompts for requesting a proximity threshold value may be employed as is known in the art. For example, the system may display a “slider” tool for receiving a proximity threshold value from the user with pre-determined slider increments as is known in the art.
At step 227 the system stores a list of the pre-existing constraint points to be eliminated. It should be noted that rather than storing a list of points for elimination, the system could simply exclude the eliminated points from the new surface data.
A more detailed depiction of the re-ordering step 233 is shown in
At step 237 the system uses the surface patch to identify pre-existing surface constraint points for elimination. In an exemplary embodiment, the system applies a proximity threshold test to identify all pre-existing data points that are within a pre-set Euclidean threshold distance from any vertex of the surface patch mesh. The pre-set threshold distance effectively defines a 3D “zone of elimination” around each surface patch vertex, such that any pre-existing constraint point within the zone will be eliminated. At step 239 the system stores a list of pre-existing points to be eliminated (similar to step 227). In an exemplary embodiment wherein both the surface projection approach and surface patch approach are used, the system may use the same stored list of pre-existing constraint points to be eliminated. For example, at step 239 the system may append additional points for elimination to the list.
The system may be programmed to perform the two approaches in a parallel fashion. For example, a computer having multiple processing units (e.g. CPUs and/or CPU cores) could be programmed to use a first processing unit to generate surface projection points (e.g., steps 221-223) and a second processing unit to simultaneously generate one or more surface patches (e.g., steps 231-235). For example, in the exemplary embodiment of
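The parallel execution of the two approaches may be sketched with Python's standard concurrency utilities; this is a generic illustration of running two independent computations concurrently, not the disclosed multi-processor implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def run_both_approaches(make_projection_points, make_surface_patches):
    """Run the surface projection and surface patch computations
    concurrently; the two are independent until their results meet in
    the combined proximity test."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        proj = pool.submit(make_projection_points)
        patch = pool.submit(make_surface_patches)
        return proj.result(), patch.result()
```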
At step 245, the system applies a combined proximity threshold test using both the surface projection points from step 223 and the surface patches from step 235. For each pre-existing constraint point, the system loops through each surface projection point and surface patch vertex, and determines the Euclidean distance from the pre-existing constraint point. If the distance is less than the threshold, then flow proceeds to step 247 wherein the system identifies the pre-existing constraint point for elimination. Thus, this embodiment efficiently eliminates pre-existing constraint points based on both the surface projection points and surface patch approaches.
At step 249, the system determines whether the loop has iterated through all pre-existing constraint points. If not, flow proceeds to step 245 for the next pre-existing constraint point. If the loop is finished, then flow proceeds to step 239 where the system stores the list of pre-existing constraint points for elimination.
It should also be noted that the system could perform the two approaches separately (e.g. perform the process of
The unit normal n to the plane through the endpoints x1 and x2 and the curve point x3 may be computed as

n = (x2−x1)×(x3−x1) / ∥(x2−x1)×(x3−x1)∥  EQ(12)

where ∥ ∥ is the scalar length of the vector cross product. The midpoint between the endpoints is x0=(x1+x2)/2, and the height of the curve through the new constraints is approximated by the difference vector h=x3−x0, with scalar length H=∥h∥. New points x4 and x5 are created according to the following equations:
x4=x0+Hn  EQ(13)
x5=x0−Hn  EQ(14)
Interpolated constraint points x4 and x5 bring the total number of available end points to 4, thus allowing use of the VTK surface patch generation algorithm in the “surface patch” approach, and/or providing additional surface projection points in the “surface projection” approach.
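The construction of x4 and x5 may be sketched as follows; the unit normal n is assumed to come from the normalized cross product of Equation (12), and the function name is illustrative:

```python
import numpy as np

def interpolate_endpoints(x1, x2, x3, n):
    """Equations (13)-(14): offset the midpoint of x1 and x2 by the
    curve height H along the unit normal n, yielding x4 and x5."""
    x0 = (x1 + x2) / 2.0                  # midpoint of the endpoints
    h = x3 - x0                           # height vector of the curve
    H = np.linalg.norm(h)                 # scalar height H
    return x0 + H * n, x0 - H * n         # x4, x5
```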
The surface patch approach could also be used with multiple disjoint edit contours. To allow generation of multiple surface patches for an object, it would be preferable that the contour editing software allow the user to assign edit contours to a corresponding segmentation error. For example, the user could specify that a first set of edit contours corresponds to segmentation error A, while a second set of edit contours corresponds to segmentation error B. When generating surface patches, the first set of edit contours would be used to generate a first surface patch, and the second set of edit contours would be used to generate a second surface patch.
It should be noted that disjoint edit contours may correspond to any combination of different types of segmentation errors. For example, in the case of multiple disjoint edit contours, some edit contours may be designed to correct an over-segmentation while different edit contours are designed to correct an under-segmentation.
The processor 1304 can be configured to execute one or more software programs. These software programs can take the form of a plurality of processor-executable instructions that are resident on a non-transitory computer-readable storage medium such as memory 1302. Processor 1304 is also preferably configured to execute an operating system such as Microsoft Windows™ or Linux™, as is known in the art. Furthermore, it should be understood that processor 1304 can be implemented as a special purpose processor that is configured for high performance with respect to computationally intensive mathematics, such as a math or graphics co-processor.
Analysis of Results
FIGS. 15(A)-(C) explore the reconstruction properties of implicit surfaces.
The full disclosure of each of the following references is incorporated herein by reference:
- Bloomenthal, J, “An Implicit Surface Polygonizer,” Graphics Gems IV, P Heckbert, Ed., Academic Press, New York, 1994.
- Botsch, M, and Sorkine, O, “On linear variational surface deformation methods,” IEEE Transactions on Visualization and Computer Graphics, 14(1), 213-230, 2008.
- Carr, J C, Fright, R, and Beatson, R K, “Surface interpolation with radial basis functions for medical imaging,” IEEE Transactions on Medical Imaging, 16, 96-107, 1997.
- Carr, J C, Beatson, R K, Cherrie, J B, Mitchell, T J, Fright, W R, McCallum, B C, and Evans, T R, “Reconstruction and representation of 3D objects with radial basis functions,” Proceedings of SIGGRAPH 01, pp 67-76, 2001.
- de Berg, M, van Kreveld, M, Overmars, M, Schwarzkopf, O, Computational Geometry: Algorithms and Applications, Springer-Verlag, New York, 1997.
- DeBoor, C, A Practical Guide to Splines, Springer, New York, 2001.
- Cruz, L M V, and Velho, L, “A sketch on Sketch-based interfaces and modeling,” Graphics, Patterns and Images Tutorials, 23rd SIBGRAPI Conference, 2010, pp. 22-33.
- Dinh, H Q, Turk, G, Slabaugh, G, “Reconstructing surfaces by volumetric regularization using radial basis functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 1358-1371, 2002.
- Dinh, H Q, Yezzi, A, and Turk, G, “Texture transfer during shape transformation,” ACM Transactions on Graphics, 24, 289-310, 2005.
- Gelas, A, Valette, S, Prost, R, Nowinski, W L, “Variational implicit surface meshing,” Computers and Graphics, 33, 312-320, 2009.
- Gering D, Nabavi A, Kikinis R, Grimson W, Hata N, Everett P, Jolesz F, Wells W. An Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and Interventional Imaging. Int Conf Med Image Comput Assist Interv. 1999; 2:809-819.
- Golub G H and Van Loan, C F, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1996.
- Ho, S, Cody, H, Gerig, G, “SNAP: A Software Package for User-Guided Geodesic Snake Segmentation”, Technical Report, UNC Chapel Hill, April 2003
- Ibanez, L, Schroeder, W, Ng, L, Cates, J, The ITK Software Guide, Second Edition, 2005.
- Jackowski, J, and Goshtasby, A, “A computer-aided design system for refinement of segmentation errors,” MICCAI 2005, LNCS 3750, 717-724, 2005.
- Kalbe, T, Koch, T, and Goesele, M, “High-quality rendering of varying isosurfaces with cubic trivariate C1-continuous splines,” ISVC 1, LNCS 5875, 596-607, 2009.
- Karpenko, O, Hughes, J F, and Raskar, R, “Free-form sketching with variational implicit surfaces,” Computer Graphics Forum, 21, 585-594, 2002.
- Kaus, M R, Pekar, V, Lorenz, C, Truyen, R, Lobregt, S, and Weese, J, “Automated 3-d PDM construction from segmented images using deformable models,” IEEE Transactions on Medical Imaging, 22(8), 1005-1013, 2003.
- Kho, Y, and Garland, M, “Sketching mesh deformations,” ACM Symposium on Interactive 3D Graphics and Games, 2005.
- Knoll, A, Hijazi, Y, Kensler, A, Schott, M, Hansen, C, and Hagen, H, “Fast and robust ray tracing of general implicits on the GPU,” Scientific Computing and Imaging Institute, University of Utah, Technical Report No. UUSCI-2007-014, 2007.
- Leventon, M E, Grimson, W E L, Faugeras, O, “Statistical shape influence in geodesic active contours,” IEEE Conference on Computer Vision and Pattern Recognition, 1316-1323, 2000.
- Lorensen, W E, and Cline, H E, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” Computer Graphics; Proceedings of SIGGRAPH '87, 21, 163-169, 1987.
- Nealen, A, Sorkine, O, Alexa, M, and Cohen-Or, D, “A sketch-based interface for detail-preserving mesh editing,” Proceedings of ACM SIGGRAPH 2005, 24(3), 2005.
- Pekar, V, McNutt, T R, Kaus, M R, “Automated model-based organ delineation for radio-therapy planning in prostatic region,” Int. J. Radiation Oncology Biol. Phys., 60, (3), 973-980, 2004.
- Pieper S, Lorensen B, Schroeder W, Kikinis R. The NA-MIC Kit: ITK, VTK, Pipelines, Grids and 3D Slicer as an Open Platform for the Medical Image Computing Community. Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: From Nano to Macro 2006; 1:698-701.
- Pohl K M, Fisher, J, Grimson, W E L, Kikinis, R, Wells, W M, “A Bayesian model for joint segmentation and registration,” NeuroImage, 31, 228-239, 2006.
- Schroeder, W, Martin, K, and Lorensen, W, The Visualization Toolkit, 4th Ed., Kitware, 2006.
- Singh, J M, and Narayanan, P J, “Real-time ray-tracing of implicit surfaces on the GPU,” IEEE Transactions on Visualization and Computer Graphics, 99, 261-272, 2009.
- Tsai, A, Yezzi, A Jr., Wells, W, Tempany, C, Tucker, D, Fan, A, Grimson, W E, Willsky, A, “A shape-based approach to the segmentation of medical imagery using level sets,” IEEE Transactions on Medical Imaging, 22(2), 137-154, 2003.
- Tsai, A, Wells, W M, Warfield, S K, Willsky, A S, “An EM algorithm for shape classification based on level sets,” Medical Image Analysis, 9, 491-502, 2005.
- Turk, G and O'Brien, J F, “Shape Transformation Using Variational Implicit Functions,” in Proceedings of SIGGRAPH 99, Annual Conference Series, (Los Angeles, Calif.), pp. 335-342, August 1999.
- Wahba, G, Spline Models for Observational Data, SIAM (Society for Industrial and Applied Mathematics), Philadelphia, Pa., 1990.
- Yushkevich, P A, Piven, J, Cody Hazlett, H, Gimpel Smith, R., Ho, S, Gee, J J, Gerig, G, “User-Guided 3D Active Contour Segmentation of Anatomical Structures: Significantly Improved Efficiency and Reliability,” NeuroImage 31, 1116-1128, 2006.
It should be noted again that embodiments described above may be useful beyond the field of medical imaging, in any field where there is a need for correcting segmentation errors in images, such as cellular imaging or other fields.
While the present invention has been described above in relation to its preferred embodiments including the preferred equations discussed herein, various modifications may be made thereto that still fall within the invention's scope. Such modifications to the invention will be recognizable upon review of the teachings herein. Accordingly, the full scope of the present invention is to be defined solely by the appended claims and their legal equivalents.
Claims
1. A computer-implemented method for processing image data of an anatomical object utilizing a processor executing computer-executable instructions, the method comprising:
- receiving a first plurality of data points defining a three-dimensional surface reflecting a first contour and representing at least a portion of the anatomical object;
- receiving a second plurality of data points defining at least one edit contour for modifying the three-dimensional surface;
- generating a third plurality of data points based on the first and second plurality of data points, the third plurality of data points lying on the three-dimensional surface; and
- identifying at least a subset of the first plurality of data points for elimination by applying a proximity threshold based on proximity between each of the first plurality of data points and at least one of the third plurality of data points,
- wherein the identified subset defines a modification of at least a portion of the three-dimensional surface.
2. The computer-implemented method of claim 1, further comprising:
- generating a fourth plurality of data points based on the first, second, and third plurality of data points, the fourth plurality of data points comprising the second plurality of data points, and the first plurality of data points excluding the identified subset of the first plurality of data points.
3. The computer-implemented method of claim 2, further comprising:
- generating a corrected surface based on the fourth plurality of data points; and
- displaying the corrected surface on a display device.
4. The computer-implemented method of claim 1, wherein the identifying step further comprises:
- generating the third plurality of data points by projecting on the three-dimensional surface the second plurality of data points,
- evaluating each data point in the first plurality of data points for its proximity to the third plurality of data points, and
- identifying data points from the first plurality of data points based on a determination that the data point is within a pre-determined threshold distance from at least one of the third plurality of data points.
5. The computer-implemented method of claim 1, wherein the identifying step further comprises:
- generating a surface patch based on the first and second plurality of data points, the surface patch comprising a plurality of vertices on the three-dimensional surface, the surface patch area being bounded by the end points of the at least one edit contour,
- evaluating each data point in the first plurality of data points for its proximity to the plurality of vertices, and
- identifying data points from the first plurality of data points for elimination based on a determination that the data point is within a pre-determined threshold distance from at least one of the vertices.
6. The computer-implemented method of claim 1, wherein the at least one edit contour corrects at least one of an over-segmentation error or an under-segmentation error for the object of interest.
7. The computer-implemented method of claim 1, wherein the at least one edit contour comprises a first edit contour in a first plane, and a second edit contour in a second plane.
8. The computer-implemented method of claim 7, wherein the at least one edit contour further comprises a third edit contour in a third plane.
9. The computer-implemented method of claim 1, wherein the anatomical object comprises at least one of a bladder or a prostate.
10. The computer-implemented method of claim 1, wherein the step of applying the proximity threshold comprises computing a value based on a function of a distance between two points and comparing the computed value to a threshold.
11. The computer-implemented method of claim 10, wherein the function comprises the distance squared or a logarithm of the distance.
12. An apparatus for processing image data of an anatomical object, the apparatus comprising:
- a processor configured to:
- receive a first plurality of data points defining a three-dimensional surface reflecting a first contour and representing at least a portion of the anatomical object;
- receive a second plurality of data points defining at least one edit contour for modifying the three-dimensional surface;
- generate a third plurality of data points based on the first and second plurality of data points, the third plurality of data points lying on the three-dimensional surface; and
- identify at least a subset of the first plurality of data points for elimination by applying a proximity threshold based on proximity between each of the first plurality of data points and at least one of the third plurality of data points, wherein the identified subset defines a modification of at least a portion of the three-dimensional surface.
13. The apparatus of claim 12, wherein the processor is further configured to generate a fourth plurality of data points based on the first, second, and third plurality of data points, the fourth plurality of data points comprising the second plurality of data points plus the first plurality of data points excluding the identified data points.
14. The apparatus of claim 13, wherein the processor is further configured to:
- generate a corrected surface based on the fourth plurality of data points; and
- display the corrected surface on a display device.
15. The apparatus of claim 12, wherein the processor is further configured to:
- generate the third plurality of data points by projecting on the three-dimensional surface the second plurality of data points;
- evaluate each data point in the first plurality of data points for its proximity to the third plurality of data points; and
- identify data points from the first plurality of data points for elimination based on a determination that the data point is within a pre-determined threshold distance from at least one of the third plurality of data points.
16. The apparatus of claim 12, wherein the processor is further configured to:
- generate a surface patch based on the first and second plurality of data points, the surface patch comprising a plurality of vertices on the three-dimensional surface, the surface patch area being bounded by the end points of the at least one edit contour;
- evaluate each data point in the first plurality of data points for its proximity to the plurality of vertices; and
- identify data points from the first plurality of data points for elimination based on a determination that the data point is within a pre-determined threshold distance from at least one of the vertices.
17. The apparatus of claim 12, wherein the at least one edit contour corrects at least one of an over-segmentation error or an under-segmentation error for the object of interest.
18. The apparatus of claim 12, wherein the at least one edit contour comprises a first edit contour in a first plane, and a second edit contour in a second plane.
19. The apparatus of claim 18, wherein the at least one edit contour further comprises a third edit contour in a third plane.
20. The apparatus of claim 12, wherein the anatomical object comprises at least one of a bladder or a prostate.
21. The apparatus of claim 12, wherein the processor is configured to apply the proximity threshold by first computing a value based on a function of distance between two points and then comparing the computed value to a threshold.
22. The apparatus of claim 21, wherein the function of distance comprises at least one of a distance squared or a logarithm of the distance.
23. A computer program product for processing image data of an anatomical object, comprising:
- a non-transitory computer-readable medium for storing a plurality of instructions that are executable by a processor to:
- receive a first plurality of data points defining a three-dimensional surface reflecting a first contour and representing at least a portion of the anatomical object;
- receive a second plurality of data points defining at least one edit contour for modifying the three-dimensional surface;
- generate a third plurality of data points based on the first and second plurality of data points, the third plurality of data points lying on the three-dimensional surface; and
- identify at least a subset of the first plurality of data points for elimination by applying a proximity threshold based on proximity between each of the first plurality of data points and at least one of the third plurality of data points, wherein the identified subset defines a modification of at least a portion of the three-dimensional surface.
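The proximity-threshold elimination recited in the claims (computing a value based on a function of the distance between two points, such as the distance squared or a logarithm of the distance, and comparing it to a threshold) can be sketched as follows. The function name and the brute-force nearest-point search are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def points_to_eliminate(pre_existing, surface_points, threshold, metric="squared"):
    """Identify pre-existing constraint points for elimination.

    A point is flagged when a function of its distance to the nearest
    newly generated surface point falls below the threshold; the function
    may be the squared distance or the logarithm of the distance.
    """
    flagged = []
    for i, p in enumerate(pre_existing):
        # distance to the nearest of the newly generated surface points
        d = np.min(np.linalg.norm(surface_points - p, axis=1))
        if metric == "squared":
            value = d * d
        else:  # logarithm of the distance; guard against log(0)
            value = np.log(max(d, 1e-12))
        if value < threshold:
            flagged.append(i)
    return flagged
```

The corrected surface would then be generated from the edit-contour points plus the pre-existing points that were not flagged.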
5859891 | January 12, 1999 | Hibbard |
6075538 | June 13, 2000 | Shu et al. |
6112109 | August 29, 2000 | D'Urso |
6142019 | November 7, 2000 | Venchiarutti et al. |
6259943 | July 10, 2001 | Cosman et al. |
6262739 | July 17, 2001 | Migdal et al. |
6343936 | February 5, 2002 | Kaufman et al. |
6606091 | August 12, 2003 | Liang et al. |
6683933 | January 27, 2004 | Saito et al. |
6947584 | September 20, 2005 | Avila et al. |
7010164 | March 7, 2006 | Weese et al. |
7110583 | September 19, 2006 | Yamauchi |
7167172 | January 23, 2007 | Kaus et al. |
7333644 | February 19, 2008 | Jerebko et al. |
7428334 | September 23, 2008 | Schoisswohl et al. |
7471815 | December 30, 2008 | Hong et al. |
7620224 | November 17, 2009 | Matsumoto |
8098909 | January 17, 2012 | Hibbard et al. |
20050089214 | April 28, 2005 | Rubbert et al. |
20050168461 | August 4, 2005 | Acosta et al. |
20050231530 | October 20, 2005 | Liang et al. |
20050276455 | December 15, 2005 | Fidrich et al. |
20060147114 | July 6, 2006 | Kaus et al. |
20060149511 | July 6, 2006 | Kaus et al. |
20060159322 | July 20, 2006 | Rinck et al. |
20060159341 | July 20, 2006 | Pekar et al. |
20060177133 | August 10, 2006 | Kee |
20060204040 | September 14, 2006 | Freeman et al. |
20060256114 | November 16, 2006 | Nielsen et al. |
20070014462 | January 18, 2007 | Rousson et al. |
20070041639 | February 22, 2007 | Von Berg et al. |
20070092115 | April 26, 2007 | Usher et al. |
20070167699 | July 19, 2007 | Lathuiliere et al. |
20080225044 | September 18, 2008 | Huang et al. |
20080310716 | December 18, 2008 | Jolly et al. |
20090016612 | January 15, 2009 | Lobregt et al. |
20090060299 | March 5, 2009 | Hibbard et al. |
20090190809 | July 30, 2009 | Han et al. |
20100134517 | June 3, 2010 | Saikaly et al. |
20110200241 | August 18, 2011 | Roy et al. |
20120057768 | March 8, 2012 | Hibbard et al. |
20120057769 | March 8, 2012 | Hibbard et al. |
- Stefanescu, “Parallel Nonlinear Registration of Medical Images With a Priori Information on Anatomy and Pathology”, PhD Thesis. Sophia-Antipolis: University of Nice, 2005, 140 pages.
- Strang, “Introduction to Applied Mathematics”, 1986, Wellesley, MA: Wellesley-Cambridge Press, pp. 242-262.
- Thirion, “Image Matching as a Diffusion Process: An Analog with Maxwell's Demons”, Med. Imag. Anal., 1998, pp. 243-260, vol. 2 (3).
- Thomas, “Numerical Partial Differential Equations: Finite Difference Methods”, Springer, New York, 1995.
- Turk et al., “Shape Transformation Using Variational Implicit Functions”, Proceedings of SIGGRAPH 99, Annual Conference Series, (Los Angeles, California), pp. 335-342, Aug. 1999.
- Vemuri et al., “Joint Image Registration and Segmentation”, Geometric. Level Set Methods in Imaging, Vision, and Graphics, S. Osher and N. Paragios, Editors, 2003, Springer-Verlag, New York, pp. 251-269.
- Wahba, “Spline Models for Observational Data”, SIAM (Society for Industrial and Applied Mathematics), Philadelphia, PA, 1990.
- Wang et al., “Validation of an Accelerated ‘Demons’ Algorithm for Deformable Image Registration in Radiation Therapy”, Phys. Med. Biol., 2005, pp. 2887-2905, vol. 50.
- Wolf et al., “ROPES: a Semiautomated Segmentation Method for Accelerated Analysis of Three-Dimensional Echocardiographic Data”, IEEE Transactions on Medical Imaging, 21, 1091-1104, 2002.
- Xing et al., “Overview of Image-Guided Radiation Therapy”, Med. Dosimetry, 2006, pp. 91-112, vol. 31 (2).
- Xu et al., “Image Segmentation Using Deformable Models”, Handbook of Medical Imaging, vol. 2, M. Sonka and J. M. Fitzpatrick, Editors, 2000, SPIE Press, Chapter 3.
- Yezzi et al., “A Variational Framework for Integrating Segmentation and Registration Through Active Contours”, Med. Imag. Anal., 2003, pp. 171-185, vol. 7.
- Yoo, “Anatomic Modeling from Unstructured Samples Using Variational Implicit Surfaces”, Proceedings of Medicine Meets Virtual Reality 2001, 594-600.
- Young et al., “Registration-Based Morphing of Active Contours for Segmentation of CT Scans”, Mathematical Biosciences and Engineering, Jan. 2005, pp. 79-96, vol. 2 (1).
- Yushkevich et al., “User-Guided 3D Active Contour Segmentation of Anatomical Structures: Significantly Improved Efficiency and Reliability” , NeuroImage 31, 1116-1128, 2006.
- Zagrodsky et al., “Registration-Assisted Segmentation of Real-Time 3-D Echocardiographic Data Using Deformable Models”, IEEE Trans. Med. Imag., Sep. 2005, pp. 1089-1099, vol. 24 (9).
- Zeleznik et al., “Sketch: An Interface for Sketching 3D Scenes”, Proceedings of SIGGRAPH 96, 163-170, 1996.
- Zhong et al., “Object Tracking Using Deformable Templates”, IEEE Trans. Patt. Anal. Machine Intell., May 2000, pp. 544-549, vol. 22 (5).
- Bookstein, “Principal Warps: Thin-Plate Splines and the Decomposition of Deformations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Jun. 1989, pp. 567-585, vol. 11, No. 6.
- Botsch et al., “On Linear Variational Surface Deformation Methods”, IEEE Transactions on Visualization and Computer Graphics, 2008, pp. 213-230, vol. 14, No. 1.
- Cruz et al., “A sketch on Sketch-Based Interfaces and Modeling”, Graphics, Patterns and Images Tutorials, 23rd SIBGRAPI Conference, 2010, pp. 22-33.
- De Berg et al., “Computational Geometry: Algorithms and Applications”, 1997, Chapter 5, Springer-Verlag, New York.
- Dice's coefficient, Wikipedia, 1945.
- Dinh et al., “Reconstructing Surfaces by Volumetric Regularization Using Radial Basis Functions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct. 2002, pp. 1358-1371, vol. 24, No. 10.
- Duchon, “Splines Minimizing Rotation-Invariant Semi-Norms in Sobolev Spaces”, 1977, Universite Scientifique et Medicale Laboratoire de Mathematiques Appliques, Grenoble, France.
- Gelas et al., “Variational Implicit Surface Meshing”, Computers and Graphics, 2009, pp. 312-320, vol. 33.
- Gering et al., “An Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and Interventional Imaging”, Int Conf Med Image Comput Assist Interv, 1999, pp. 809-819, vol. 2.
- Girosi et al., “Priors, Stabilizers and Basis Functions: from regularization to radial, tensor and additive splines”, Massachusetts Institute of Technology Artificial Intelligence Laboratory, Jun. 1993, 28 pages.
- Ibanez et al., “The ITK Software Guide” Second Edition, 2005.
- Jackowski et al., “A Computer-Aided Design System for Refinement of Segmentation Errors”, MICCAI 2005, LNCS 3750, pp. 717-724.
- Kalbe et al., “High-Quality Rendering of Varying Isosurfaces with Cubic Trivariate C1-continuous Splines”, ISVC 1, LNCS 5875, 2009, pp. 596-607.
- Kaus et al., “Automated 3-D PDM Construction From Segmented Images Using Deformable Models”, IEEE Transactions on Medical Imaging, Aug. 2003, pp. 1005-1013, vol. 22, No. 8.
- Kho et al., “Sketching Mesh Deformations”, ACM Symposium on Interactive 3D Graphics and Games, 2005, 8 pages.
- Knoll et al., “Fast and Robust Ray Tracing of General Implicits on the GPU”, Scientific Computing and Imaging Institute, University of Utah, Technical Report No. UUSCI-2007-014, 2007, 8 pages.
- Leventon et al., “Statistical Shape Influence in Geodesic Active Contours”, IEEE Conference on Computer Vision and Pattern Recognition, 2000, pp. 1316-1323.
- Nealen et al., “A Sketch-Based Interface for Detail-Preserving Mesh Editing”, Proceedings of ACM SIGGRAPH 2005, 6 pages, vol. 24, No. 3.
- Notice of Allowance for U.S. Appl. No. 12/022,929 dated May 8, 2012.
- Osher et al., “Level Set Methods and Dynamic Implicit Surfaces”, Chapters 11-13, 2003, Springer-Verlag, New York, NY.
- Pieper et al., “The NA-MIC Kit: ITK, VTK, Pipelines, Grids and 3D Slicer as An Open Platform for the Medical Image Computing Community”, Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: From Nano to Macro 2006, pp. 698-701, vol. 1.
- Pohl et al., “A Bayesian model for joint segmentation and registration”, NeuroImage, 2006, pp. 228-239, vol. 31.
- Sapiro, “Geometric Partial Differential Equations and Image Analysis”, Chapter 8, 2001, Cambridge University Press.
- Schroeder et al., “The Visualization Toolkit”, 2nd edition, Chapter 5, 1998, Prentice-Hall, Inc.
- Singh et al., “Real-Time Ray-Tracing of Implicit Surfaces on the GPU”, IEEE Transactions on Visualization and Computer Graphics, 2009, pp. 261-272, vol. 99.
- Tikhonov et al., “Solutions of Ill-Posed Problems”, Introduction-Chapter 2, 1977, John Wiley & Sons.
- Tsai et al, “A Shape-Based Approach to the Segmentation of Medical Imagery Using Level Sets”, IEEE Transactions on Medical Imaging, Feb. 2003, pp. 137-154, vol. 22, No. 2.
- Tsai et al., “An EM algorithm for shape classification based on level sets”, Medical Image Analysis, 2005, pp. 491-502, vol. 9.
- Adelson et al., “Pyramid Methods in Image Processing”, RCA Engineer, Nov./Dec. 1984, pp. 33-41, vol. 29-6.
- Anderson et al., “LAPACK User's Guide”, Third Edition, SIAM—Society for Industrial and Applied Mathematics, 1999, Philadelphia.
- Barrett et al., “Interactive Live-Wire Boundary Extraction”, Medical Image Analysis, 1, 331-341, 1997.
- Bertalmio et al., “Morphing Active Contours”, IEEE Trans. Patt. Anal. Machine Intell., 2000, pp. 733-737, vol. 22.
- Bloomenthal, “An Implicit Surface Polygonizer”, Graphics Gems IV, P. Heckbert, Ed., Academic Press, New York, 1994.
- Burnett et al., “A Deformable-Model Approach to Semi-Automatic Segmentation of CT Images Demonstrated by Application to the Spinal Canal”, Med. Phys., Feb. 2004, pp. 251-263, vol. 31 (2).
- Carr et al., “Reconstruction and Representation of 3D Objects with Radial Basis Functions”, Proceedings of SIGGRAPH 01, pp. 67-76, 2001.
- Carr et al., “Surface Interpolation with Radial Basis Functions for Medical Imaging”, IEEE Transactions on Medical Imaging, 16, 96-107, 1997.
- Cover et al., “Elements of Information Theory”, Chapter 2, 1991, Wiley, New York, 33 pages.
- Cover et al., “Elements of Information Theory”, Chapter 8, 1991, Wiley, New York, 17 pages.
- Davis et al., “Automatic Segmentation of Intra-Treatment CT Images for Adaptive Radiation Therapy of the Prostate”, presented at 8th Int. Conf. MICCAI 2005, Palm Springs, CA, pp. 442-450.
- DeBoor, “A Practical Guide to Splines”, Springer, New York, 2001.
- Digital Imaging and Communications in Medicine (DICOM), http://medical.nema.org/.
- Dinh et al., “Reconstructing Surfaces by Volumetric Regularization Using Radial Basis Functions”, IEEE Trans. Patt. Anal. Mach. Intell., 24, 1358-1371, 2002.
- Dinh et al., “Texture Transfer During Shape Transformation”, ACM Transactions on Graphics, 24, 289-310, 2005.
- DoCarmo, “Differential Geometry of Curves and Surfaces” Prentice Hall, New Jersey, 1976.
- Falcao et al., “An Ultra-Fast User-Steered Image Segmentation Paradigm: Live Wire on the Fly”, IEEE Transactions on Medical Imaging, 19, 55-62, 2000.
- Freedman et al., “Active Contours for Tracking Distributions”, IEEE Trans. Imag. Proc., Apr. 2004, pp. 518-526, vol. 13 (4).
- Gao et al., “A Deformable Image Registration Method to Handle Distended Rectums in Prostate Cancer Radiotherapy”, Med. Phys., Sep. 2006, pp. 3304-3312, vol. 33 (9).
- Gering et al., “An Integrated Visualization System for Surgical Planning and Guidance Using Image Fusion and an Open MR”, Journal of Magnetic Resonance Imaging, 13, 967-975, 2001.
- Gering, “A System for Surgical Planning and Guidance Using Image Fusion and Interventional MR”, MS Thesis, MIT, 1999.
- Golub et al., “Matrix Computations”, Third Edition, The Johns Hopkins University Press, Baltimore, 1996.
- Han et al., “A Morphing Active Surface Model for Automatic Re-Contouring in 4D Radiotherapy”, Proc. of SPIE, 2007, vol. 6512, 9 pages.
- Ho et al., “SNAP: A Software Package for User-Guided Geodesic Snake Segmentation”, Technical Report, UNC Chapel Hill, Apr. 2003.
- Huang et al., “Semi-Automated CT Segmentation Using Optic Flow and Fourier Interpolation Techniques”, Computer Methods and Programs in Biomedicine, 84, 124-134, 2006.
- Igarashi et al., “Smooth Meshes for Sketch-based Freeform Modeling” In ACM Symposium on Interactive 3D Graphics, (ACM I3D'03), pp. 139-142, 2003.
- Igarashi et al., “Teddy: A Sketching Interface for 3D Freeform Design”, Proceedings of SIGGRAPH 1999, 409-416.
- Ijiri et al., “Seamless Integration of Initial Sketching and Subsequent Detail Editing in Flower Modeling”, Eurographics 2006, 25, 617-624, 2006.
- Jain et al., “Deformable Template Models: A Review”, Signal Proc., 1998, pp. 109-129, vol. 71.
- Jain, “Fundamentals of Digital Image Processing”, Prentice-Hall, New Jersey, 1989.
- Jehan-Besson et al., “Shape Gradients for Histogram Segmentation Using Active Contours”, 2003, presented at the 9th IEEE Int. Conf. Comput. Vision, Nice, France, 8 pages.
- Kalet et al., “The Use of Medical Images in Planning and Delivery of Radiation Therapy”, J. Am. Med. Inf. Assoc., Sep./Oct. 1997, pp. 327-339, vol. 4 (5).
- Karpenko et al., “Free-Form Sketching with Variational Implicit Surfaces”, Computer Graphics Forum, 21, 585-594, 2002.
- Karpenko et al., “SmoothSketch: 3D Free-Form Shapes From Complex Sketches”, Proceedings of SIGGRAPH 06, pp. 589-598.
- Leymarie et al., “Tracking Deformable Objects in the Plane Using an Active Contour Model”, IEEE Trans. Patt. Anal. Machine Intell., Jun. 1993, pp. 617-634, vol. 15 (6).
- Lipson et al., “Conceptual Design and Analysis by Sketching”, Journal of AI in Design and Manufacturing, 14, 391-401, 2000.
- Lorensen et al., “Marching Cubes: A High Resolution 3D Surface Construction Algorithm”, Computer Graphics, Jul. 1987, pp. 163-169, vol. 21 (4).
- Lu et al., “Automatic Re-Contouring in 4D Radiotherapy”, Phys. Med. Biol., 2006, pp. 1077-1099, vol. 51.
- Lu et al., “Fast Free-Form Deformable Registration Via Calculus of Variations”, Phys. Med. Biol., 2004, pp. 3067-3087, vol. 49.
- Marker et al., “Contour-Based Surface Reconstruction Using Implicit Curve Fitting, and Distance Field Filtering and Interpolation”, The Eurographics Association, 2006, 9 pages.
- Paragios et al., “Geodesic Active Contours and Level Sets for the Detection and Tracking of Moving Objects”, IEEE Trans. Patt. Anal. Machine Intell., Mar. 2000, pp. 266-280, vol. 22 (3).
- Pekar et al., “Automated Model-Based Organ Delineation for Radiotherapy Planning in Prostate Region”, Int. J. Radiation Oncology Biol. Phys., 2004, pp. 973-980, vol. 60 (3 ).
- Pentland et al., “Closed-Form Solutions for Physically Based Shape Modeling and Recognition”, IEEE Trans. Patt. Anal. Machine Intell., Jul. 1991, pp. 715-729, vol. 13 (7).
- Piegl et al., “The NURBS Book”, Second Edition, Springer, New York, 1997.
- Press et al. “Numerical Recipes in C”, Second Edition, Cambridge University Press, 1992.
- Rogelj et al., “Symmetric Image Registration”, Med. Imag. Anal., 2006, pp. 484-493, vol. 10.
- Sarrut et al., “Simulation of Four-Dimensional CT Images from Deformable Registration Between Inhale and Exhale Breath-Hold CT Scans”, Med. Phys., Mar. 2006, pp. 605-617, vol. 33 (3).
- Schmidt et al., “ShapeShop: Sketch-Based Solid Modeling with BlobTrees”, EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling, 2005.
- Schroeder et al., “The Visualization Toolkit”, 2nd Edition, Kitware, 2006, Ch. 5 & 13, 65 pp.
- Sethian, “Level Set Methods and Fast Marching Methods”, 2nd ed., 1999, Cambridge University Press, Chapters 1, 2 & 6, 39 pages.
- International Search Report and Written Opinion for PCT/US2012/048938 dated Oct. 16, 2012.
- Office Action for U.S. Appl. No. 13/295,494 dated Sep. 13, 2012.
- Office Action for U.S. Appl. No. 13/295,525 dated Nov. 28, 2012.
Type: Grant
Filed: Aug 1, 2011
Date of Patent: Oct 21, 2014
Patent Publication Number: 20130034276
Assignee: Impac Medical Systems, Inc. (Sunnyvale, CA)
Inventor: Lyndon S. Hibbard (St. Louis, MO)
Primary Examiner: Jon Chang
Application Number: 13/195,771
International Classification: G06T 7/00 (20060101); G06T 19/20 (20110101); G06T 17/30 (20060101);