METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR MAPPING REGIONS IN A MODEL OF AN OBJECT COMPRISING AN ANATOMICAL STRUCTURE FROM ONE IMAGE DATA SET TO IMAGES USED IN A DIAGNOSTIC OR THERAPEUTIC INTERVENTION


Methods, systems, and computer readable media for mapping a model of an object comprising an anatomical structure in a planning image, and an intervention target region within it, to intervention-guiding image data are disclosed. According to one method, an initial medial representation object model (m-rep) of an object comprising an anatomical structure is created based on image data of at least a first instance of the object. A patient-specific m-rep is created by deforming the initial m-rep based on planning image data of at least a second instance of the object, wherein the at least second instance of the object is associated with the patient. An intervention target region within the m-rep is identified in an image registered with the planning image. The patient-specific m-rep is correlated to the intervention-guiding image data of the at least second instance of the object, which has been deformed from its state in the planning image. The intervention target region is transferred to the intervention-guiding image according to the transformation between the m-rep in the planning image and the m-rep in the intervention-guiding image.

Description
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/999,420 filed Oct. 18, 2007 and U.S. Provisional Patent Application Ser. No. 61/087,026 filed Aug. 7, 2008; the disclosures of which are incorporated herein by reference in their entireties.

GRANT STATEMENT

Some of the work involved in creating this invention was supported by NCI and NIBIB Grant Number P01 EB002779. Thus, the U.S. Government has certain rights in this invention.

TECHNICAL FIELD

The subject matter described herein relates to medical image segmentation and analysis. More specifically, the subject matter relates to methods, systems, and computer readable media for mapping regions in a model of an object comprising an anatomical structure from one image data set to images used in a diagnostic or therapeutic intervention.

BACKGROUND

Prostate cancer (CaP) is a major health issue in the United States, with 240,000 diagnoses and 30,000 deaths in 2005. CaP accounts for 33% of all new incidences of non-skin cancer in the United States male population, by far the greatest rate among all non-skin cancers, and the annual death rate is second only to lung/bronchial cancer. Due to the prevalence and serious consequences of CaP, 1.3-1.5 million patients in the United States annually undergo transrectal ultrasound imaging (TRUS)-guided interventional prostate procedures, with biopsy being the most common. Other procedures include insertion of radioactive seeds (brachytherapy) and tissue ablation by methods that freeze or heat tissue.

During biopsy, small-gauge hollow needles are introduced into the prostate to collect core tissue samples under visual guidance of TRUS. While TRUS reveals the overall shape of the prostate well, a major disadvantage is the inability to differentiate between normal and cancerous tissues inside the prostate. Due to this limitation, tissue samples are collected based on a pre-defined grid pattern applied to all patients. Unfortunately, the false negative rate of this approach is high (~25-30%). Moreover, even when a biopsy is positive, the tumor may be sampled incompletely, leading to an erroneously low Gleason score (tumor grade), and perhaps to inappropriate treatment decisions. Consequences of false negatives include multiple repeat biopsies over a number of years, missed cancers that continue to grow and possibly metastasize, and patient anxiety. These undesired outcomes significantly affect the patient's quality of life and contribute to health care burden and costs.

TRUS-guided biopsies suffer from the problem that they do not allow doctors to accurately and preferentially take biopsy samples from tissues inside the prostate that are suspicious for cancer. To overcome this problem, image segmentation may be used to identify anatomical objects of interest and intervention target regions within them, such as potentially cancerous portions of the anatomical structure. As used herein, “image segmentation” refers to the process of partitioning a digital image into regions (i.e., sets of voxels or pixels) in order to find anatomic entities relevant to the medical procedure. Image segmentation may be used to locate objects and boundaries (lines, curves, etc.) appearing in various types of medical images including, but not limited to, magnetic resonance images (MRI) and computed tomography (CT) images.

Thus, segmenting an image divides the image into various regions such that voxels inside and outside the object region, or regions, are similar to corresponding positions in an atlas (i.e., a template image, possibly equipped with information as to its variation across cases) with respect to some characteristic or property, such as color, intensity, or texture. In addition to using appearance information in image segmentation, model-based segmentation assumes that objects of interest (i.e., organs, tumors, etc.) have a repetitive geometry and, therefore, that a model of the object can be created and used to impose probabilistic constraints on the segmentation of the image, thereby increasing its accuracy.

Two examples of images that may be used to perform image segmentation for medical purposes, including CaP screening and treatment, are a magnetic resonance image (MRI) and an ultrasound (US) image. From such an image one may extract a geometric model of an anatomical structure within a patient that may be used to help make critical treatment planning and delivery decisions, such as aiming a needle for biopsy or for inserting radioactive seeds for treatment. However, current methods for extracting an object model from ultrasound images used to guide clinical procedures, and even from images such as MRI used to plan procedures, require expert human interaction and thus are extremely time consuming and expensive. Moreover, humans, even experts with similar training, are known to exhibit inter- and intra-user variability that can adversely affect clinical decisions.

Some images used for planning an intervention, such as biopsy, contain not only information about the object of interest, such as a prostate, but also information about where an intervention target region within it is situated. Alternatively, it may be possible to acquire a separate image, perfectly registered with the image showing the object of interest, that shows the location of the intervention target region. Recent research has demonstrated that magnetic resonance spectroscopy images (MRSI) acquired simultaneously and in full registration with MR images may yield high sensitivity for detecting regions within the prostate that are suspicious for CaP, i.e., a target region for the intervention.

Unfortunately, it is impractical to routinely perform biopsy and therapeutic procedures under direct visual guidance of images showing degree of suspicion for cancer, because such guidance requires real-time imaging techniques, such as TRUS, that show the anatomic object but give no information as to degree of suspicion for disease.

Accordingly, in light of these difficulties, a need exists for improved methods, systems, and computer readable media for transferring intervention target regions from the images from which they are derived into the intervention-guiding image. This need is satisfied by mapping a model of an object comprising an anatomical structure, which was derived from a planning image, to the image data used to guide a diagnostic or therapeutic intervention.

SUMMARY

Methods, systems, and computer readable media for mapping a model of an object comprising an anatomical structure segmented from a planning image to an intervention-guiding image of a different type from that from which the object model was derived, and thereby mapping a region contained in the object to the intervention-guiding image, are disclosed. According to one method, an initial medial representation object model (m-rep) of an object comprising an anatomical structure is created based on image data of at least a first instance of the object. A patient-specific m-rep is created by deforming the initial m-rep based on planning-image data of at least a second instance of the object, wherein the at least second instance of the object is associated with the patient, and an intervention target region registered with the planning image data is identified. The patient-specific m-rep is correlated to the intervention-guiding image data of the at least second instance of the object, which has been deformed from its state in the planning image data. The intervention target region may then be identified by a mapping derived from a transformation between the m-rep in the planning image data and the m-rep in the intervention-guiding image data.

A system for mapping a model of an object comprising an anatomical structure to intervention-guiding image data is also disclosed. The system includes an object model generator for generating a medial representation object model (m-rep) of an object comprising an anatomical structure based on planning image data of an object. An object model deformation and mapping module then correlates the m-rep to intervention-guiding image data of the object.

According to one aspect, the system may include probabilistic information derived from numerous instances of image data of an object type, each from a different individual (between-patient probabilistic information), as well as probabilistic information derived from numerous instances of image data of the same individual (within-patient probabilistic information). A patient-specific medial representation object model (m-rep) is based on at least a portion of the planning image data for a patient, wherein the at least a portion of the image data is associated with the object type, such as the prostate. The probabilistic information derived from other individuals is used to derive an m-rep fitted into the planning image for the patient, i.e., a patient-specific m-rep. At the time of intervention, an intervention-guiding image of the same organ is obtained, and the patient-specific m-rep is deformed into the intervention-guiding image using the within-patient probabilistic information. This deformation is applied to the intervention target region within the object. A biopsy needle is guided to remove a tissue sample from the intervention target region within the object. The guidance is provided by a display of the intervention target region overlaid on the intervention-guiding image data.

According to another aspect, deformation of the m-rep into the intervention-guiding image data of the object includes optimizing an objective function including a component relating the m-rep to the intervention-guiding image data.

As used herein, the term “object” refers to a real-world structure, such as an anatomical structure, desired to be modeled. For example, objects may include kidneys, hearts, lungs, bladders, and prostates.

The term “intervention target region” refers to a target for the medical intervention, such as a region marked as suspicious for some disease or as a target for treatment.

The term “object image data” refers to a set of data collected by a sensor from an object and stored on a computer. Examples of object image data include CT scan data, MRI data, x-ray data, digital photographs, or any other type of data collected from the real world that can be represented by a set of pixel intensities and positions.

As used herein, the term “medial atom” refers to a position and a collection of vectors having predetermined relationships with respect to each other and with respect to one or more medial axes in a model. Medial atoms may be grouped together to form models. For example, the medial manifold may be sampled over a spatially regular lattice, wherein elements of this lattice are called medial atoms.

The term “medial axis,” as used herein, refers to a set of points equidistant from tangent points on opposite surfaces of a model and located at the intersections of orthogonal lines from the tangent points within the surfaces.

As used herein, the term “object model” refers to a mathematical representation of an object including, but not limited to, its surface, interior, and shape variability.

As used herein, the term “m-rep” refers to a medial atom representation of an object or of object image data. An m-rep is an explicit mathematical representation (i.e., a model) of an object's geometry in Riemannian space that represents a geometric object as a set of connected continuous medial manifolds. For 3D objects these medial manifolds may be formed by the centers of all spheres that are interior to the object and tangent to the object's boundary at two or more points. The medial description is defined by the centers of the inscribed spheres and by the associated vectors, called spokes, from the sphere centers to the two respective tangent points on the object boundary. It is appreciated that in addition to representing the surface boundaries of objects, m-reps may also represent object interiors in terms of local position, orientation, and size and may include a single figure or multiple figures. Also, an m-rep may be defined in an object based coordinate system, i.e., a coordinate system of the object of object image data being modeled.

As used herein, the term “figure” refers to a component or a sub-component of a model. Each continuous segment of a medial manifold represents a medial figure. For example, some models may have only a single figure. An example of an object that can be represented by a single figure is an object with a relatively simple shape, such as a kidney. An example of an object that may require multiple figures for accurate modeling is a human hand. A medial atom based model of a hand may include a main figure consisting of the palm of the hand and subfigures consisting of each finger of the hand.

As used herein, the term “voxel” refers to a volume element that represents a value on a regular grid in 3D space.

The subject matter described herein for mapping a planning image of an object comprising an anatomical structure to intervention-guiding object image data and thereby carrying an intervention target region registered with the planning image into the intervention-guiding image may be implemented using a computer program product comprising computer executable instructions embodied in a tangible computer readable medium that are executed by a computer processor. Exemplary computer readable media suitable for implementing the subject matter described herein include disk memory devices, programmable logic devices, and application specific integrated circuits. In one implementation, the computer readable medium may include a memory accessible by a processor. The memory may include instructions executable by the processor for implementing any of the methods described herein. In addition, a computer readable medium that implements the subject matter described herein may be distributed across multiple physical devices and/or computing platforms.

BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter described herein will now be explained with reference to the accompanying drawings of which:

FIG. 1 is a block diagram of a general purpose computing platform on which the portion of the methods and systems bringing the model and intervention target region to the intervention-guiding image may be implemented according to an embodiment of the subject matter described herein;

FIG. 2 is a flow chart of an exemplary process for mapping MR/MRS imaging data to a TRUS image according to an embodiment of the subject matter described herein;

FIG. 3 is a pair of segmented CT images illustrating expert-generated boundaries of a bladder and a prostate, respectively, according to an embodiment of the subject matter described herein;

FIG. 4 is a three-dimensional representation of an m-rep object model according to an embodiment of the subject matter described herein;

FIG. 5 is a triorthogonal representation of a prostate in a CT image suitable for sampling image intensity values according to an embodiment of the subject matter described herein;

FIG. 6 includes representations of exemplary m-rep object models before and after RIQF analysis is performed according to an embodiment of the subject matter described herein;

FIG. 7 is a series of plots of various statistical measures of RIQF image fit for a bladder according to an embodiment of the subject matter described herein;

FIG. 8 is a series of exemplary TRUS images illustrating prostate boundaries generated using various methods according to an embodiment of the subject matter described herein;

FIG. 9A is a surface representation of a bladder and prostate according to an embodiment of the subject matter described herein;

FIG. 9B is a midsagittal slice of a bladder and prostate in an ultrasound scan according to an embodiment of the subject matter described herein;

FIG. 10A is a surface representation of a bladder according to an embodiment of the subject matter described herein;

FIG. 10B is a representation of a bladder including human expert segmentation and computer-based segmentation according to an embodiment of the subject matter described herein;

FIG. 10C is a midaxial slice of a bladder showing computer-based segmentation according to an embodiment of the subject matter described herein;

FIG. 11A is a 3-dimensional representation of a prostate including human expert segmentation and computer-based segmentation according to an embodiment of the subject matter described herein;

FIG. 11B is a slice of a prostate in a CT scan showing computer-based segmentation according to an embodiment of the subject matter described herein;

FIG. 12 is a 3-dimensional representation of a prostate including human expert segmentation and computer-based segmentation;

FIG. 13 is a series of plots illustrating various statistical measures of image fit according to an embodiment of the subject matter described herein;

FIG. 14 is a series of samples from the first two eigenmodes computed from a plurality of m-reps fit to human-generated contours according to an embodiment of the subject matter described herein;

FIGS. 15A-15C are CT scans of a male pelvis phantom according to an embodiment of the subject matter described herein;

FIG. 16 is an exemplary display of a 3D m-rep model overlaid onto a TRUS image of a prostate for guiding a physician's needle during a biopsy procedure according to an embodiment of the subject matter described herein; and

FIG. 17 is a diagram illustrating an exemplary system for providing mapping of an object comprising an anatomical structure from planning image data to intervention-guiding image data according to an embodiment of the subject matter described herein.

DETAILED DESCRIPTION

Exemplary Operating Environment

The subject matter described herein includes methods and systems for mapping an object comprising an anatomical structure in a planning image, and an intervention target region derived from an image registered with the planning image, to intervention-guiding image data showing the object. The methods and systems of the subject matter described herein can be implemented in hardware, firmware, software, or any combination thereof. In one exemplary embodiment, the methods and systems that perform object modeling for mapping an object model comprising an anatomical structure in a planning image to intervention-guiding image data may be implemented as application software adapted to execute on a general purpose computer. FIG. 1 illustrates an exemplary operating environment for the methods and systems for mapping the object model comprising the anatomical structure in the planning image to the intervention-guiding image data according to an embodiment of the present invention. Referring to FIG. 1, computer 100 includes a microprocessor 102, network interfaces 104, I/O interfaces 105, disk controllers/drives 106, and memory 108 connected via bus 109. Additionally, computer 100 may be connected, via I/O interfaces 105, to intervention-guiding imaging device 110 for receiving image data of, for example, an anatomical object of interest.

Microprocessor 102 may be any type of general-purpose processor suitable for executing application software. An exemplary microprocessor suitable for use with embodiments of the present invention is any of the Pentium® family of processors available from Intel® Corporation.

Network interfaces 104 may include one or more network adapter cards that communicate with one or more remote computers 111 via network 112. For example, network interfaces 104 may be Ethernet or ATM cards.

I/O interfaces 105 may provide serial ports for communicating with external devices, such as display device 113, mouse 114, and keyboard 116. I/O interfaces 105 may also include interfaces for other types of input and output devices, such as microphones and speakers.

Disk controllers/drives 106 may include hardware components for reading from and writing to storage devices, such as removable disks 118, optical disks 120, and fixed disk 122.

Memory 108 may include random access memory 124 and read only memory 126 for storing instructions to be executed by microprocessor 102. According to the present invention, memory 108 may store an instance of software 128 for deforming an object model to fit a particular instance of an object and mapping the object model to one or more ultrasound images of the object and an instance of object model generator 130 for generating a medial representation object model of an object comprising an anatomical structure based on high resolution image data of an object. Software 128 and 130 may be run on a modern PC under the Windows® operating system. Software 128 and 130 may also automatically deform medial atom models into image data and then carry the intervention target region into the intervention-guiding image. Exemplary operations that may be performed by software 128 and 130 will now be described in more detail below.

Overview

According to one aspect of the subject matter described herein, object models may be automatically imbedded in a target image, rigidly registered with the corresponding target object, and then deformed to closely match the shape of the specific target object. FIG. 2 is a flow chart of an exemplary process for creating a patient-specific statistically-trainable deformable shape model of a prostate based on a planning image of the object, creating an intervention target region from an image registered with the planning image, and mapping the object model and thereby the intervention target region to a series of intervention-guiding images in order to guide a physician's biopsy needle during surgery according to an embodiment of the subject matter described herein. Referring to FIG. 2, in block 200, an initial medial representation object model (m-rep) of an object comprising an anatomical structure is created based on image data of at least a first instance of the object. The object model may initially be generic for the type of object and later customized for a specific instance of the object (i.e., patient). An m-rep may be created based on one or more pre-biopsy (e.g., MR, CT, etc.) images of an object. For example, a physician may acquire pre-biopsy MR images showing the geometric details of a prostate, including any nodules that may be present. Additionally, probability distributions of the prostate shape and the image intensity patterns in regions relative to the prostate model may be generated.

In block 202, a patient-specific m-rep is created by deforming the initial m-rep based on planning image data of at least a second instance of the object, wherein the at least second instance of the object is associated with the patient. For example, a patient-specific atlas of a prostate shape may be created by analyzing pixel intensities of the planning image, possibly in combination with image-intensity-related statistical training methods. Also, an intervention target region is extracted from the planning image, or from another image perfectly registered with it that portrays information as to the intervention target region.

In block 204, the patient-specific m-rep is correlated to the intervention-guiding image data of the at least second instance of the object, deformed from the planning image, and the intervention target region is mapped into the intervention-guiding image data using the derived m-rep deformation. For example, the boundary/outline of the object may be accurately defined in an ultrasound image, and the intervention target region within the object, such as potentially cancerous or otherwise suspicious areas, may be carried from the planning image into the intervention-guiding image by the use of the relation between the patient-specific m-rep in the planning image and the patient-specific m-rep in the intervention-guiding image. In addition to biopsy applications (e.g., allowing physicians to plan and execute needle trajectories passing through target regions), mapping non-ultrasound images of an object to ultrasound images of the object using a deformed object model may have therapeutic and/or other diagnostic uses as well. It is appreciated that the correlation may be accomplished by an optimization process driven by probability distributions and may take into account changes in prostate shape and differences between fundamental properties of planning and intervention-guiding imaging technologies, as well as probability distributions on each, in relation to an anatomic model of the anatomic object of interest.

Additionally, the object model, as well as the intervention target region may be simultaneously displayed and overlaid onto the intervention-guiding images of the object for guiding physicians during surgical procedures. For example, during a prostate biopsy, an m-rep model and the intervention target region may be mapped to and overlaid on a TRUS image of the prostate for accurate needle placement during biopsy.

According to another aspect, the subject matter described herein includes a method based on texture analysis for the image-match term that appears in the objective function for mapping a patient-specific m-rep to TRUS images.

According to another aspect, the subject matter described herein includes a method of generating statistics on within-patient deformations between MR/MRS imaging and TRUS imaging and of representing TRUS image intensities and textures relative to a prostate/suspicion volume model to enable deformable mapping between the MR/MRS and TRUS data sets. Each of these aspects will now be described in greater detail below.

Create Initial M-rep Object Model

Image Data of an Object

As mentioned above, various types of objects may be represented in different types of digital images. These images may include planning images, such as MRI, MRSI, and CT images, or may be intervention-guiding images, such as ultrasound images (e.g., TRUS). One method for accurately identifying the boundaries of different anatomical objects represented in digital images includes manual 2D image segmentation wherein a human expert determines a boundary of an object represented in each slice of the image.

In addition to identifying the boundary of an object in a 2D image by image segmentation, a 3D model of an object may be created using multiple images of the object capturing different aspects of the object's geometry. For example, images of prostates obtained from different patients, images of an individual patient's prostate obtained at different times, images of prostates obtained from different angles, and/or images of prostates obtained with different imaging technologies may all be used to build a 3D model of a prostate. Model-based methods, as will be described in greater detail below, may be divided into appearance-based models, anatomy-informed models, mechanical models, statistical models, and combinations thereof.

One example of a type of digital image suitable for creating an object model is a magnetic resonance image (MRI). An MRI is generated by using electromagnetic fields to align the nuclear magnetization of atoms in the body in order to create an image representation of an object. After the initial alignment, radio frequency fields may be used to systematically alter the alignment, causing nuclei in the object to produce a rotating magnetic field that is detectable by a scanner. Additional magnetic fields may also be used to further manipulate the alignment in order to generate sufficient information to reconstruct an image of the object. For example, when a person lies in an MRI scanner, hydrogen nuclei (i.e., protons) may align with the strong main magnetic field. A second electromagnetic field oscillating at radio frequencies perpendicular to the main field may then be pulsed in order to push some of the protons out of alignment with the main field. When these protons drift back into alignment with the main field, they emit a detectable radio frequency signal. Since protons in different tissues of the body (e.g., fat vs. muscle) realign at different speeds, different structures of the body can be revealed in the MRI.

Similar to MRI, magnetic resonance spectroscopy imaging (MRSI) may be used as the image data from which levels of suspicion for cancer may be derived. MRSI combines both spectroscopic and imaging methods to produce spatially localized spectra from within the object. For example, magnetic resonance spectroscopy may be used to measure levels of different metabolites in body tissues. Because an MR signal produces a spectrum of resonances that correspond to different molecular arrangements of an isotope being “excited”, a resonance spectrum signature may be determined and used to diagnose or plan treatment of various diseases.

A third exemplary type of digital image suitable for creating an object model according to the subject matter described herein includes a computed tomography (CT) image. Conventional CT imaging includes the use of digital geometry processing to generate a three-dimensional (3D) image of an object from a plurality of two-dimensional (2D) X-ray images taken around an axis of rotation. However, it is appreciated that other types of CT imaging may include, but are not limited to, dynamic volume CT, axial CT, cine CT, helical/spiral CT, digitally reconstructed radiograph (DRR) CT, electron beam CT, multi-slice CT, dual-source CT, inverse geometry CT, peripheral quantitative computed tomography (pQCT), synchrotron X-ray tomographic microscopy, and X-ray CT, without departing from the scope of the subject matter described herein.

FIGS. 3A and 3B are a pair of segmented CT images illustrating expert-generated boundaries of a bladder and a prostate, respectively, according to an embodiment of the subject matter described herein. Referring to FIG. 3A, boundary 300 may correspond to the boundary of a bladder in a human male as determined by a physician through, for example, a visual inspection of pixel intensities in the image. Similarly, in FIG. 3B, boundary 302 may correspond to the boundary of a prostate in a human male based on pixel intensity patterns in the image. The manual 2D segmentations illustrated in FIGS. 3A and 3B may provide a 3D region to which a model may be fit and then used in learning probability distributions on the shape of the anatomic object and on image information relative to the object, or they may represent a target or goal which, while achievable through slow and laborious manual methods, may be similarly (or even more accurately) achieved using the automated image processing and statistical object modeling methods described herein. Therefore, one measure of the accuracy of the methods described herein for segmenting planning and intervention-guiding digital images of anatomical structures includes an image fit analysis or comparison to expert-generated manual segmentations, such as those shown in FIGS. 3A and 3B.

Create Object Model

In addition to determining a 2D boundary of an object in a 2D image, multiple 2D images of the object may be used to create a 3D model of the object. Therefore, using one or more of the high resolution digital images listed above, an initial object model may be created. As will be described in greater detail below, object models may be created according to a variety of methods. However, the goal of any of these methods is to create the most accurate description of the shape (surface and volume) and possible shape changes of the object.

Good—Static Models

A static model of an object is a mathematical representation of an object in which the relationship of every portion of the model to every other portion does not change (i.e., the model does not deform).

However, because anatomical objects (like prostates) are rarely the same shape or size for a long period of time (i.e., they are not static), models of anatomical objects must also not be static if they are to represent such objects accurately.

Better—Deformable Shape Models

Deformable-shape models (DSMs) are representations of both the shape and the shape variability of objects, including anatomical structures. Within the category of DSMs, however, a specialized class of DSMs, called m-reps, may best represent the specific geometries associated with anatomical objects, such as prostates.

As stated above, an “m-rep” refers to a medial-axis representation object model that is an explicit mathematical representation (i.e., a model) of an object's geometry using a set of connected continuous medial manifolds, wherein each continuous segment of the medial manifold represents a medial figure. The medial manifold may be sampled over a spatially regular lattice, wherein elements of this lattice are called medial atoms. This lattice may be in the form of a 2D mesh, or it may be a 1D chain of atoms each consisting of a cycle of spokes. As a result, m-reps may provide an efficient parameterization of the geometric variability of anatomical objects and therefore may provide shape constraints during image segmentation.

FIG. 4 is a three dimensional representation of an m-rep object model according to an embodiment of the subject matter described herein. Referring to FIG. 4, medial geometry may describe objects in terms of centers and widths. A 3D object (e.g., a prostate) may be represented by a 2D curved sheet lying midway between opposing surfaces of the object, and spokes extending to the object boundary from both sides of the sheet. A position on the medial sheet, called a hub, and its two spokes are called a medial atom. Thus, using m-reps, the full object surface can be recreated from the atom spoke ends.

In FIG. 4, hub 400 may be connected to 2D curved surfaces 402 and 404 via spokes 406 and 408, respectively, and together represent m-rep 410. Multiple m-reps 410 may be combined in order to model the volumetric geometry of an object, such as object 412. Finally, a wire mesh 414 may be applied to m-rep model 412 for indicating a surface of the object. A more detailed discussion of m-reps and the process of creating m-reps is included in U.S. Pat. No. 7,200,251 which is incorporated by reference herein in its entirety.
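The hub-and-spokes structure just described maps naturally onto a small data structure. The following is a minimal Python sketch, assuming a single-figure model whose atoms each carry one hub and two spokes; the names (MedialAtom, MRep, boundary_points) are illustrative and not those of any published m-rep implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MedialAtom:
    hub: np.ndarray          # position on the medial sheet, shape (3,)
    spoke_plus: np.ndarray   # vector from hub to the boundary on one side
    spoke_minus: np.ndarray  # vector from hub to the boundary on the other side

@dataclass
class MRep:
    atoms: list              # lattice of MedialAtom objects (e.g., a flattened 2D mesh)

    def boundary_points(self) -> np.ndarray:
        """Implied boundary samples: the spoke ends on both sides of the sheet."""
        pts = []
        for a in self.atoms:
            pts.append(a.hub + a.spoke_plus)
            pts.append(a.hub + a.spoke_minus)
        return np.asarray(pts)
```

The spoke ends returned by boundary_points are the samples from which the full object surface is recreated, as described above.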

As mentioned above, after an initial m-rep has been created using one or more high resolution digital images of an object, the m-rep may be statistically trained in order to more accurately represent the geometry of the object and/or the ways in which the object may be deformed under various conditions. The process of training an m-rep through a series of deformations and statistical computations will be described in greater detail below.

DSMs—Appearance-Based

Appearance models describe images in locations that are taken relative to a geometric model. Appearance models used in segmentation are based on image intensity values. For example, voxel list and regional intensity quantile function (RIQF) methods are image intensity-based image segmentation methods. An RIQF component represents the inverse of the cumulative histogram of intensities in a particular region of an object. An RIQF may be formed by one or more such components. Each component may, for example, be a 1D array of quantile function values, where each value is the intensity value for a given quantile. Each component may correspond to a different intensity, for example, the single intensity in an image, such as CT or ultrasound, or multiple intensities measured in a multi-intensity image, such as certain MR images. In addition, one or more intensities derived from the measured image, such as intensities derived by applying texture filters, may be the basis for RIQF components. There will be an RIQF for some number of different, possibly overlapping object-model-relative regions. In the embodiment shown in FIG. 5 there exist an interior region 502 and an exterior region 504. In another embodiment, shown in FIG. 6, there exist an interior region and an exterior region centered on each spoke of an m-rep model (606).
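As a concrete illustration, an RIQF component amounts to sampling the intensity quantile function of one model-relative region at fixed quantiles. The sketch below is a minimal version of that computation; the choice of 64 quantiles is illustrative.

```python
import numpy as np

def riqf_component(region_intensities: np.ndarray, n_quantiles: int = 64) -> np.ndarray:
    """Inverse cumulative histogram: the intensity value at each quantile."""
    quantiles = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(region_intensities.ravel(), quantiles)

# Components from several object-model-relative regions (e.g., an interior and
# an exterior region per spoke) would be concatenated into one feature vector.
```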

DSMs—Statistically Trainable

Statistically-trainable deformable shape model (SDSM) approaches represent the underlying geometry of anatomical objects and use a statistical analysis to describe the variability of that geometry. It is appreciated that while several different geometric representations may be used to model anatomical objects, the simplest such geometry includes representing the underlying geometry in a Euclidean vector space.

Once an object is modeled, statistical shape analysis may be used to mathematically express the modes of variation of that object (i.e., describe variability of a population of geometric objects). Principal component analysis (PCA) is one example of a statistical shape analysis tool for providing an efficient parameterization of a Euclidean vector space-based object model's variability.

PCA is a vector space transform used to reduce multidimensional data sets to lower dimensions for analysis. However, PCA may only be useful for describing geometric models existing in Euclidean space (e.g., parameterized by a set of Euclidean landmarks or boundary points). As a result, PCA may not be well-suited to describing non-Euclidean representations of shape, like m-reps. Because the medial parameters used in m-reps are not elements of a Euclidean space, standard shape analysis techniques (e.g., PCA) do not apply. Therefore, a modified (i.e., generalized) form of PCA must be used for describing the variability among these types of objects.

One such modified form of PCA is called principal geodesic analysis (PGA). PGA is a generalization of PCA for curved manifolds and, like PCA, may involve the calculation of the eigenvalue decomposition of a data covariance matrix in order to reduce multidimensional data sets to lower dimensions for analysis. For example, the geometry of an object may be represented by m, the shape variability of the object may be represented by a probability density p(m), and the image knowledge may be captured by p(I relative to m). Estimation of p(m) may be accomplished by PGA, which computes object and object inter-relationship means in a way that recognizes the special non-Euclidean mathematical properties of measures of orientation and size, and which computes modes of variation on an abstract flat (Euclidean) space tangent at the mean to the abstract curved space of m-reps.
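A minimal sketch of the PGA computation follows, using the unit sphere (on which, for example, spoke directions live) as a stand-in for the full curved space of m-reps. The exp/log maps and the iterative Fréchet mean shown are the sphere's, and the analysis reduces to PCA in the flat tangent space at the mean, as described above; this is illustrative only, not the patent's implementation.

```python
import numpy as np

def log_map(p, q):
    """Tangent vector at p pointing toward q along the geodesic on the sphere."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    u = q - cos_t * p
    norm_u = np.linalg.norm(u)
    return np.zeros_like(p) if norm_u < 1e-12 else np.arccos(cos_t) * u / norm_u

def exp_map(p, v):
    """Point reached from p by following tangent vector v along the geodesic."""
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def frechet_mean(points, iters=50):
    """Iteratively re-center until the mean tangent vector vanishes."""
    mu = points[0]
    for _ in range(iters):
        mu = exp_map(mu, np.mean([log_map(mu, q) for q in points], axis=0))
    return mu

def pga(points):
    """PCA in the flat (Euclidean) tangent space at the Frechet mean."""
    mu = frechet_mean(points)
    X = np.array([log_map(mu, q) for q in points])
    evals, evecs = np.linalg.eigh(X.T @ X / len(X))
    return mu, evals[::-1], evecs[:, ::-1]   # modes sorted by decreasing variance
```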

As mentioned above, while m-reps may describe the geometric variability of an object in terms of bending, twisting, and widening, medial parameters are not elements of a Euclidean vector space. As a result, conventional methods for deforming m-rep models may also need to be modified to apply to a non-Euclidean vector space.

Statistically-Trained Appearance Models

Appearance models derived from DSMs m fit to training images I may be analyzed by principal component analysis to learn probability distributions on appearance p(I relative to m) to be used in computer-based segmentation. Such probability distributions can be learned both relative to planning images and relative to intervention-guiding images.

Best—Statistically-Trained (Pre-Biopsy) Patient-Specific M-Reps and Appearance Models

Given a set of training images, a primary goal of training an m-rep object model is to accurately model the geometric variability of the object. Therefore, m-reps may be trained against models built by expert humans to generate statistical probability distributions that define the range of object shapes (e.g., prostates, bladders, and rectums) and the range of each object's intensity patterns in medical images across the human population of interest. As a result, the quality of the fit of an m-rep model to an image may be increased by training the m-rep.

An initial m-rep model may be automatically deformed to match target image data by altering one or more of its medial atoms so as to optimize a function formed by the sum of log p(m), log p(I relative to m), and other terms providing faithfulness to initializations of the DSM. Exemplary alterations that may be performed on the medial atoms include resizing medial atoms to increase or decrease the girth of the initial model, rotating each of the spokes of the medial atoms together or separately to twist the surface of the initial model, and moving the medial atoms to bend or elongate the initial model. Because initial models may be deformed corresponding to natural operations, such as elongating, bending, rotating, twisting, and increasing or decreasing girth of the model, the amount of time and processing required to deform the m-rep to fit target image data is reduced.
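The elementary alterations named above (resizing to change girth, rotating spokes to twist, moving atoms to bend or elongate) can be sketched directly on the MedialAtom structure from the earlier sketch. The use of scipy's Rotation is an illustrative convenience, not the patent's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def resize_atom(atom, factor: float):
    """Scale both spokes to increase or decrease local girth."""
    atom.spoke_plus = factor * atom.spoke_plus
    atom.spoke_minus = factor * atom.spoke_minus

def twist_atom(atom, axis, angle: float):
    """Rotate the spokes about an axis through the hub to twist the surface."""
    R = Rotation.from_rotvec(angle * np.asarray(axis, dtype=float))
    atom.spoke_plus = R.apply(atom.spoke_plus)
    atom.spoke_minus = R.apply(atom.spoke_minus)

def translate_atom(atom, offset):
    """Move the hub to bend or elongate the model."""
    atom.hub = atom.hub + np.asarray(offset, dtype=float)
```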

In one scenario, m-rep training may include fitting m-rep models to a training population of known objects and computing a PGA, wherein PGA may be used to restrict the shape of the model to statistically feasible instances of the object.

Training procedures for an object's geometry, given by p(m), and the geometry-to-image match, given by p(I relative to m), use training cases in which an expert user has produced a careful manual segmentation of the object(s), from which binary images can be produced. In each training case, an m-rep is closely fitted to the binary image for each object, and PGA of the aligned fits yields p(m). To find p(I relative to m) the resulting fits are used to extract the prostate-related RIQFs for all model-relative regions from the corresponding training image. The method for fitting m to the binary images is a variant of the greyscale segmentation method described above: the objective function, log p(I relative to m) + log p(m), is replaced by a function of the same form, image match + geometric typicality, but where binary image match is computed from distances between the binary boundary and the boundary implied by m, and where geometric typicality rewards smoothness, non-folding, and geometric regularity of m.
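One hedged way to realize the binary image match just described is to measure, via a distance transform, how far the boundary implied by m lies from the surface of the expert's binary image. The sketch below assumes boundary points given in millimeters and a known voxel spacing; it is a simplification, not the patent's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def binary_image_match(binary: np.ndarray, boundary_pts_mm: np.ndarray,
                       spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean squared distance (mm) from the model's implied boundary points
    to the surface of the binary segmentation."""
    # Unsigned distance to the object surface at every voxel: background voxels
    # measure distance to the object, object voxels to the background.
    dist = (distance_transform_edt(binary == 0, sampling=spacing)
            + distance_transform_edt(binary != 0, sampling=spacing))
    # Sample the distance map at the model's boundary points (voxel coordinates).
    coords = (boundary_pts_mm / np.asarray(spacing)).T
    return float(np.mean(map_coordinates(dist, coords, order=1) ** 2))
```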

After an m-rep is initialized, that is, roughly positioned and oriented to correspond to the object, the m-rep may be deformed by optimizing an objective function that includes, for example, image intensity and/or non-intensity terms that compute a measure of the quality of the match between the deformed DSM and the target image.

Segmentation Step 1: Initialization

Shape is often defined as the geometry of an object that is invariant under global translation, rotation, and scaling. Therefore, in order to ensure that the variability being computed is from shape changes only, one must align the training objects to a common position, orientation, and scale. One common alignment technique includes minimizing, with respect to global translation, rotation, and scaling, the sum of squared distances between corresponding data points.

Thus, an exemplary m-rep initialization algorithm may include steps consisting of (1) translation, (2) rotation and scaling, and (3) iteration. For example, translation may include centering a plurality of m-rep models in an image by translating each model so that the average of its medial atoms' positions is located at an arbitrary origin point. A particular model may then be selected and aligned to the mean of the remaining models. This process may be repeated until the alignment metrics cannot be further minimized.
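These three steps can be sketched as a generalized Procrustes loop, assuming each model is reduced to a matched list of 3D points (e.g., its medial atoms' hub positions). The similarity fit uses the standard SVD solution and is illustrative rather than the patent's exact procedure.

```python
import numpy as np

def align_to(ref: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Similarity-transform pts (n, 3) onto ref (n, 3): center, rotate, scale."""
    ref0 = ref - ref.mean(axis=0)
    p0 = pts - pts.mean(axis=0)
    U, S, Vt = np.linalg.svd(ref0.T @ p0)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (p0 ** 2).sum()
    return s * p0 @ R.T

def generalized_procrustes(models, iters=10):
    models = [m - m.mean(axis=0) for m in models]       # (1) translate to origin
    for _ in range(iters):                              # (3) iterate
        mean = np.mean(models, axis=0)
        models = [align_to(mean, m) for m in models]    # (2) rotate and scale
    return models
```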

Another exemplary m-rep initialization algorithm may include not only translation, rotation, and scaling, but also, after that, deformation within statistically feasible instances of the object, as described in the next section, with iteration between the translation, rotation, and scaling, on the one hand, and deformations, on the other hand.

Segmentation Step 2: Deformation

Methods based on analysis of shape variation are becoming widespread in medical imaging. These methods allow a statistical modeling of prior shape knowledge in tasks where the image information itself often is not strong enough to solve the task automatically. One example of the use of deformable models in segmentation includes determining the preferred deformations by a statistical shape model. Another important task is shape analysis and classification, where a statistical shape model provides distributions of healthy and diseased organs for diagnostic methods.

The most common type of statistical shape model consists of a mean shape with deformations. The mean and the corresponding deformations are constructed through statistical analysis of shapes from a collection of training data. Each shape in the training set is represented by the chosen shape representation, and analysis of the parameters for the representation gives the mean and variations.

Another type of statistical shape model consists of an initialization from previous deformation of a mean and deformations from that initialization. An example is the deformations from an m-rep segmented from a planning image into an m-rep segmented from an intervention-guiding image.

It is appreciated that m-reps can be trained on shapes derived from different images of the same patient and/or across images of different patients. Same-patient training provides statistics that describe shape differences due to mechanical forces acting on the prostate, e.g., different degrees of rectal and/or bladder filling. Training from images from different patients provides statistics that describe natural anatomic differences between individual patients.

Deformation: Appearance-Based: Pixel Intensity

As mentioned above, PGA may be applied to m-reps, wherein the m-reps express the geometry of the object in terms of p(m). PCA may be applied to RIQFs for regions relative to m, wherein the RIQFs express the object-relative appearance in terms of p(I relative to m). Therefore, in order to make I relative to m amenable to PCA, pixel intensities for both the exterior and interior regions adjacent to the object boundary implied by m may be represented by RIQFs, wherein each RIQF represents the inverse of the cumulative histogram of intensities in its region.
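Because RIQFs of this kind live, approximately, in a Euclidean space, estimating p(I relative to m) reduces to ordinary PCA on stacked RIQF vectors, one row per training case. A minimal sketch, with an illustrative choice of three retained modes:

```python
import numpy as np

def riqf_pca(riqf_matrix: np.ndarray, n_modes: int = 3):
    """riqf_matrix: (n_cases, n_features), rows are concatenated RIQFs."""
    mean = riqf_matrix.mean(axis=0)
    X = riqf_matrix - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    variances = S ** 2 / len(X)
    return mean, variances[:n_modes], Vt[:n_modes]   # mean, mode variances, modes
```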

FIG. 5 is a triorthogonal representation of a prostate in a CT image suitable for sampling image intensity values according to an embodiment of the subject matter described herein. Referring to FIG. 5, triorthogonal CT display 500 includes region 502 for sampling image intensities inside of the prostate and region 504 for sampling image intensities outside of the prostate. Sampling image intensities in both the interior and exterior regions adjacent to the object boundary, and describing each region by its RIQF, allows the object-relative image description to be appropriate for PCA.

FIG. 6 includes representations of an exemplary m-rep object model prepared for RIQF analysis according to an embodiment of the subject matter described herein. On the right, the bladder m-rep object model 600 is shown. On the left, object 600 may be defined by a boundary and spoke-related regions 602 defined based on the m-rep. Spoke-related regions 602 may then be used to form interior and exterior regions corresponding to each spoke, and these regions may be used to provide an RIQF-based appearance model for each region. The idea is that regions of homogeneous tissue that are in the same positions relative to the target object in different patients will have comparable tissue compositions. An improved appearance model may be obtained by allowing the exterior regions on the object boundary defined by these clusters in training to slide around when they are used to define exterior regions whose RIQFs contribute to the image match.

FIG. 7 is a series of plots of RIQFs (top) and histograms derived therefrom (bottom) for a bladder and for a prostate, derived from training images, according to an embodiment of the subject matter described herein. Referring to FIG. 7, image pair 700 illustrates RIQFs for a bladder object and image pair 702 illustrates RIQFs for a prostate object, wherein the first and third columns show the RIQFs and histograms from the training images, and the dashed curves in the second and fourth columns represent ±1.5σ from the mean (represented by solid lines). Image pairs 704 and 706 represent histograms corresponding to RIQFs 700 and 702, respectively.

Deformation: Appearance-Based: Texture Features

In one example, for certain images, texture features provide important separators of an object from its background that are not possible using intensity variables alone. According to one aspect, as used herein, the terms “texture” and “texture feature” include a collection of one or more pixels in a digital image having related elements. According to another aspect, a texture can include a spatial arrangement of texture primitives arranged in a periodic manner, wherein a texture primitive is a group of pixels representing the simplest or basic subpattern. A texture may be, for example, fine, coarse, smooth, or grained, depending upon the pixel intensities and/or the spatial relationship between texture primitives.

For example, in CT, the pattern of bronchi and blood vessels within the lobes of the lung can be important for segmentation of the lobes or of the lung from the heart. Segmentation by posterior optimization of m-reps suggests that improved segmentations are obtained in some cases when RIQFs on the texture variables are concatenated with RIQFs on intensity variables.

According to one embodiment, six 2D rotationally invariant texture features in the plane perpendicular to the ultrasound probe may be used. These texture features may be computed by applying filters including, but not limited to, a Gaussian and a Laplacian filter. Additionally, the maximum response may be taken over multiple orientations of either a first-derivative or a second-derivative Gaussian. Each texture feature may also be evaluated by its ability to discriminate the interior and exterior of the prostate based on the Mahalanobis distance of the mean interior description to the mean exterior description, and vice versa.
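A hedged sketch of one such feature, the maximum response over orientations of a second-derivative-of-Gaussian filter, follows. It exploits the steerability of Gaussian derivatives; the scale and orientation count are illustrative choices, not the patent's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_2nd_deriv_max(image: np.ndarray, sigma: float = 1.5,
                           n_orient: int = 8) -> np.ndarray:
    """Per-pixel maximum over orientations of the second derivative of a
    Gaussian, steered from the three separable second derivatives."""
    gxx = gaussian_filter(image, sigma, order=(0, 2))   # d2/dx2
    gyy = gaussian_filter(image, sigma, order=(2, 0))   # d2/dy2
    gxy = gaussian_filter(image, sigma, order=(1, 1))   # d2/dx dy
    best = np.full(image.shape, -np.inf)
    for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        best = np.maximum(best, c * c * gxx + 2.0 * c * s * gxy + s * s * gyy)
    return best
```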

For example, using the experimental data described herein, the texture feature that performed the best at the global scale was based on a small-scale second-derivative Gaussian filter. It is appreciated that for global interior and exterior regions, this feature is 2.9 times more sensitive than using intensity alone. Thus, intensity RIQFs together with texture RIQFs may provide a basis for a strongly discriminating image match term for the objective function described above.

In this case, the texture feature was the maximum over orientations in the image planes of the result of convolution with the oriented 2nd derivatives of a Gaussian. This measure was chosen because it showed good separation between the RIQFs for a global region external to the prostate and a global internal region.

Deformation: Anatomy-Informed

In addition to using image intensity patterns alone, successful segmentation of an image may depend on both knowledge of the anatomy and the image intensity patterns relative to the anatomy. Therefore, segmentation may be limited to anatomically reasonable instances of the target object geometry. Knowledge of the anatomy may be captured by a probability density p(m) on the representation m of the geometry of the object(s) in question, and the image knowledge is captured by p(I relative to m), which we take to be equal to the conditional probability p(I|m).

Segmentation may then consist of:

i) limiting m to a space S covering only the primary modes corresponding to geometrically non-folding, smooth structures that reasonably represent the organ in question; and

ii) finding the most probable such m given the image intensity patterns represented by I.

That is, to segment the anatomic object(s) represented by m, arg max_{m∈S} p(m|I) = arg max_{m∈S} [log p(I|m) + log p(m)] may be computed by conjugate gradient optimization of log p(I|m) + log p(m) over the coefficients of the primary modes of variation exhibited in p(m). Alternatively, the function optimized may in addition include a term reflecting deviations from information provided at the time of initialization.
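The optimization can be sketched over the coefficients of the primary PGA modes, assuming a Gaussian log-prior with per-mode variances and a caller-supplied image-match term; scipy's conjugate-gradient method mirrors the optimizer named above. This is illustrative, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def segment(alpha0, mode_variances, log_image_match, model_from_coeffs):
    """Maximize log p(I|m) + log p(m) over the PGA mode coefficients alpha."""
    def neg_log_posterior(alpha):
        log_prior = -0.5 * np.sum(alpha ** 2 / mode_variances)  # log p(m), up to a constant
        m = model_from_coeffs(alpha)      # deform the mean m-rep along the modes
        return -(log_image_match(m) + log_prior)
    result = minimize(neg_log_posterior, alpha0, method="CG")
    return model_from_coeffs(result.x), result.x
```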

In order to allow the estimation of the functions p(m) and p(I relative to m), it is necessary that the representations of both m and of I relative to m be appropriate for statistical analysis (PGA and PCA, respectively) and that both functions richly describe the geometry and the image intensity patterns, respectively. Therefore, as described above, m may be represented using m-reps and I relative to m may be described using RIQFs according to the subject matter described herein.

Experimental Results

FIG. 8 is a series of exemplary TRUS images illustrating prostate boundaries generated using various methods according to an embodiment of the subject matter described herein. Referring to FIG. 8, TRUS image 800 is an unsegmented image of a prostate sliced through the middle of the prostate, image 802 illustrates manual segmentation boundary 808, image 804 illustrates a 2nd derivative of Gaussian based texture parameter relative to the manual segmentation boundary 810, and image 806 illustrates a segmentation boundary 812 via posterior optimization of m-reps with an intensity-and-texture-RIQF based appearance model. It is appreciated that the inaccuracy of the segmentation boundary 812 shown in image 806 is the result of not accounting for the cutoff of the prostate in the image.

FIG. 9A is a surface representation of a bladder and a prostate. FIG. 9B is a representation of a prostate and bladder including an automated segmentation of each.

FIG. 10A is a surface representation of a bladder. FIG. 10B is a representation of a bladder including human expert segmentation and computer-based segmentation. FIG. 10C is a midaxial slice of a bladder including an automated segmentation.

FIG. 11A is a 3D representation of a prostate including human expert segmentation and computer-based segmentation. Referring to FIG. 11A, wire mesh 1100 illustrates a 3D prostate boundary automatically generated by software 128 according to an embodiment of the subject matter described herein. Surface 1102 illustrates a surface boundary of the prostate manually generated by a human expert. FIG. 11B is a CT slice of a prostate, wherein prostate boundary 1104 corresponds to boundary 1100 shown in FIG. 11A. Similarly, FIG. 12 is a 3D representation of a prostate including automated segmentation boundary 1200 and human expert segmentation boundary 1202.

FIG. 13 is a series of plots of RIQFs according to an embodiment of the subject matter described herein. Referring to FIG. 13, Figure A illustrates texture RIQFs for interior and exterior global regions for a prostate. Figure B illustrates mean RIQFs for the prostate. Figures C and D illustrate mean RIQFs and ±1.5σ from the mean.

FIG. 14 is a series of samples from the first two eigenmodes computed from a plurality of m-reps fit to human expert generated contours according to an embodiment of the subject matter described herein.

Referring to FIG. 14, results of PGA are illustrated, wherein the mean m-rep is m-rep 1408. Rows (m-reps 1400-1404, 1406-1410, and 1412-1416) range from −1.5σ to +1.5σ along the first mode, and columns (m-reps 1400, 1406, 1412; 1402, 1408, 1414; and 1404, 1410, 1416) go from −1.5σ to +1.5σ along the second mode. The first mode captures global scaling across patients while the second mode captures the saddle-bag appearance at the prostate/rectum interface.

FIGS. 15A-15C are a series of CT scans of a male human pelvis. Referring to FIG. 15A, the CT slice illustrates a male pelvis phantom with a deflated rectal balloon. FIG. 15B illustrates a FEM-computed slice with an inflated rectal balloon. FIG. 15C illustrates an actual CT slice with an inflated rectal balloon.

Deformation: Mechanical Simulation

As described above, m-reps provide a framework for automatically building 3D hexahedral meshes for representing the geometry of anatomical objects such as prostates. In order to simulate a multitude of possible deformations of the m-rep, physically correct mechanical deformations may be computed, one example of which is the finite element method (FEM). As used herein, FEM is a numerical technique for finding approximate solutions to partial differential equations and integral equations, either by eliminating the differential equation completely or by approximating it with a system of ordinary differential equations, which are then solved using standard techniques. Other techniques may also be used without departing from the scope of the subject matter described herein.
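
As a minimal, self-contained illustration of the FEM pattern (a toy 1D linear-elastic bar, not the 3D hexahedral formulation described herein), the following sketch assembles the global stiffness matrix and solves K u = f under a fixed-end boundary condition.

```python
import numpy as np

def bar_fem(n_elements, length, EA, tip_force):
    # Toy 1D FEM: axial bar fixed at x=0 with a point force at the free tip.
    # Minimal sketch only; the assemble-and-solve pattern is the same in 3D.
    h = length / n_elements
    k = EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):                        # assemble global stiffness
        K[e:e + 2, e:e + 2] += k
    f = np.zeros(n_nodes)
    f[-1] = tip_force                                  # load at the free end
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])          # fix node 0 (Dirichlet BC)
    return u                                           # nodal displacements

# Exact tip displacement for this problem is F*L/(EA) = 0.01
print(bar_fem(n_elements=10, length=1.0, EA=100.0, tip_force=1.0))
```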

Each quadrilateral in the medial atom mesh of an m-rep gives rise to two quadrilaterals of spokes, one on each side of the medial sheet. Each quadrilateral of spokes forms a 6-sided entity, which can be further subdivided by spoke subdivision and interpolation, as sketched below. The resulting refined mesh can be embedded in a mesh fit to the exterior of the anatomic object, possibly including other objects. To simulate the effect of forces on this region, or to fit a deformation by mechanically reasonable deformations, the overall mesh can be used to solve the mechanical differential equations of the well-established FEM.
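
The following simplified sketch illustrates one round of midpoint spoke subdivision along a single row of spokes; real m-rep interpolation is more careful about the geometry of the medial sheet, and the hub, direction, and length arrays are hypothetical.

```python
import numpy as np

def midpoint_subdivide(hubs, dirs, lens):
    # Insert a new spoke halfway between each neighboring pair by averaging
    # hub positions and spoke lengths and renormalizing averaged directions.
    new_hubs, new_dirs, new_lens = [hubs[0]], [dirs[0]], [lens[0]]
    for i in range(1, len(hubs)):
        mid_dir = dirs[i - 1] + dirs[i]
        mid_dir = mid_dir / np.linalg.norm(mid_dir)      # keep unit length
        new_hubs.append(0.5 * (hubs[i - 1] + hubs[i]))   # midpoint hub
        new_dirs.append(mid_dir)
        new_lens.append(0.5 * (lens[i - 1] + lens[i]))   # averaged spoke length
        new_hubs.append(hubs[i]); new_dirs.append(dirs[i]); new_lens.append(lens[i])
    return np.array(new_hubs), np.array(new_dirs), np.array(new_lens)
```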

Thus, object model deformations caused by internal forces may be learned by mechanical modeling techniques such as FEM, and statistics of the resulting deformations can be derived from m-reps extracted from the FEM results.

This capability has been validated using CT scans of a tissue equivalent pelvis phantom with metallic seeds implanted in the prostate and an endorectal balloon catheter. It is appreciated that calculations were 3D, but 2D results are shown for clarity. Seeds in the un-deformed prostate were mapped to corresponding positions in the prostate deformed by the inflated balloon with an accuracy limited only by CT voxel size. This method can be applied to compute physically correct deformations on m-reps fit to training images to generate additional m-reps for geometric training, and to validate the spatial accuracy of an atlas mapped via a trained m-rep.

Carrying the Intervention Target Region to the Intervention-Guiding Image

The intervention target region that has been extracted from the planning image, or from another image registered with it, forms a subregion of the m-rep of the anatomic object segmented from the planning image. In one example the planning image is an MRI, the anatomic object is the prostate, and the intervention target region is tissue portrayed as suspicious for cancer in an MRS image. The deformation of the m-rep between its form segmented in the planning image and its form in the intervention-guiding image yields a transformation of all points in the object interior, according to the rule that corresponding positions, according to proportion of length, on corresponding spokes map into each other. This deformation can be applied to the intervention target region in the anatomic object segmented from the planning image to provide the predicted intervention target region in the intervention-guiding image.
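
A minimal numpy sketch of this proportional-length rule for a single corresponding spoke pair follows; the hub positions, unit directions, and lengths are hypothetical inputs standing in for m-rep spoke data.

```python
import numpy as np

def transfer_point(p, hub_plan, dir_plan, len_plan, hub_guide, dir_guide, len_guide):
    # Proportion of length along the planning-image spoke (0 = hub, 1 = tip),
    # assuming dir_plan and dir_guide are unit vectors.
    t = np.dot(np.asarray(p) - np.asarray(hub_plan), np.asarray(dir_plan)) / len_plan
    # Map to the same proportion along the corresponding
    # intervention-guiding spoke.
    return np.asarray(hub_guide) + t * len_guide * np.asarray(dir_guide)
```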

Training and Segmentation to Overlay the M-Rep onto Intervention-Guiding Image(s)

FIG. 16 is an exemplary TRUS image including an overlaid image of an m-rep boundary of an object of interest according to an embodiment of the subject matter described herein. Referring to FIG. 16, a 3D TRUS image of the prostate is shown where m-reps were fit to the human contours for the images, yielding a median volume overlap (i.e., Dice coefficient) of 96.6% and a median average surface separation of 0.36 mm. As used herein, the Dice similarity coefficient (DSC) defines the area/volume overlap between sets A and B as 2|A∩B|/(|A|+|B|). A shape model representation derived from a noisy TRUS image should aim for an equally good overlap. The fits shown in FIG. 16 are excellent and meet the requirements for mapping a planning image of an object comprising an anatomical structure to intervention-guiding object image data, and thereby for mapping the intervention target region within the anatomical structure from the planning image into the intervention-guiding image.
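
The DSC just quoted is straightforward to compute from two binary segmentation masks, as in the following short sketch.

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient 2|A∩B| / (|A| + |B|) for boolean masks.
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Example: two overlapping 2D masks of 36 pixels each with 25 shared pixels
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
print(dice(a, b))  # 2*25/(36+36) ≈ 0.694
```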

Potential Applications

According to one aspect, the subject matter described herein may be used to guide image-guided needle interventions. For example, given one or more images, an m-rep model may be fit to an object (e.g., a prostate) within the planning image, the m-rep model may then be fit to the object in the intervention-guiding image, and the mapping of an intervention target region between the two images may be used to guide a physician's biopsy needle during biopsy or surgery.

For example, FIG. 17 is a diagram illustrating the guiding of a biopsy needle into a prostate with the guidance of a patient-specific prostate m-rep and a target intervention region overlaid on a TRUS image according to an embodiment of the subject matter described herein. Referring to FIG. 17, transrectal ultrasound probe 1700 may be inserted into rectum 1702 of a patient. As a result, probe 1700 may generate a TRUS image 1704 of the area surrounding the patient's prostate and display the image on a display device. An m-rep model of the patient's prostate may then be overlaid on TRUS image 1704, wherein the m-rep may help define a prostate boundary 1706. Within prostate boundary 1706, target intervention region 1708 may be portrayed. For example, a portion of the prostate that is suspicious for cancer may be biopsied using needle 1710. As described in greater detail above, biopsy needle 1710 may be guided to target intervention region 1708 more accurately, and thus without disturbing other tissue, through the use of m-reps to accurately display prostate boundary 1706 on an intervention-guiding image, such as TRUS image 1704.

In another example, needles used to place radioactive seeds in the process called brachytherapy to treat cancer may be guided by techniques similar to those in the biopsy example above.

In another example, the subject matter described herein may be used for shape measurement of cortical and subcortical brain structures. The shapes of some structures in the brain are thought to be correlated with certain neurological conditions. For example, the volume and shape properties of subcortical and cortical brain structures seen in MR images have been associated with schizophrenia and schizotypal disorders in adults. As another example, the subject matter described herein may be used to investigate subcortical structures in MR images across many years, starting from childhood, in individuals who have been determined to be at risk for autism. Studies are being performed to monitor shape changes over time, to correlate these changes with clinical symptoms, and to improve disease therapy.

In another example, the subject matter described herein may be used for improving the treatment of arthritic knee cartilage, such as by developing methods for measuring the effect of arthritis drugs on the pattern of depths and curvatures of knee cartilage imaged in MRI.

In another example, the subject matter described herein may be used for creating a beating heart model to evaluate muscle damage. In one scenario, m-rep models of the beating heart of patients with ischemic disease (reduced blood flow to the heart muscle) may be used to localize regions of muscle where flow is reduced and to determine how seriously the muscle is affected by comparison with a trained m-rep model of the normal beating heart. The models, made from so-called 4D images (3D plus time) that can be generated by specialized CT, MR, and US technologies, will be investigated. If successful, this would provide a safe, non-invasive procedure to evaluate patients with ischemia and plan appropriate treatment.

In another example, the subject matter described herein may be used for studying and treating chronic obstructive pulmonary disease (COPD). M-reps may be used to study the progression of COPD in patients by automatically segmenting images of lungs using m-reps and CT images acquired at various parts of the breathing cycle. The shape measurement properties of m-reps may be used to localize and measure the volume of obstructed lung tissue.

In another example, the subject matter described herein may be used to improve treatment of tumors in the lung and in the liver that undergo motion and shape change while the patient is breathing during treatment. M-reps may be used to segment a target object and nearby organs in CT images acquired at multiple intervals over multiple respiratory cycles. A 4D model may then be built from the segmentations to predict changes across the patient's breathing cycle and to account for these changes during planning and treatment delivery.

In yet another example, the subject matter described herein may be used for improving radiation treatment planning of cancer in the head and neck. For example, m-reps may be used for planning the radiation treatment of tumors in the head and neck, a region where 15 or more objects on each side of the head and neck, together with lymphatic system sub-regions, are critically important anatomical structures.

Other potential applications include, but are not limited to, computer-aided design, simulated stress testing of critical manufacturing and construction components, and animated cartoons. It is appreciated that these applications are mentioned here only to underscore the wide range of potential product areas. The foregoing has included several ongoing or planned medical research projects where the subject matter described herein may be used as a method to extract anatomical objects in medical images, as a statistically trained shape ruler (that is, as a way to measure and discriminate based on shape statistics, e.g., to distinguish one class of shapes (normal) from other classes (diseased)), or as both.

Advantages

The subject matter described herein may overcome shortcomings of the prior art by mapping cancerous regions from non-ultrasound images to TRUS biopsy images in order to display visible targets, allowing the physician to aim his/her instrument more accurately. The net benefit to patients will be a reduction in the number of repeat biopsies; easing of the mental stress related to the uncertainty of whether a negative biopsy is false, or whether a low Gleason score is falsely low; better data on which to base treatment decisions; and overall improvement in quality of life. Moreover, health care costs associated with repeat biopsies should be reduced.

The subject matter described herein may eliminate almost all human interaction by automatically extracting the patient model, and it does so in a way that is free from unwanted user variability.

The subject matter described herein may accurately model the deformations in the prostate caused by intra-rectal probes during the imaging studies and the biopsy procedure, and may handle the different image properties between the TRUS and MR images.

Furthermore, it is appreciated that the subject matter described herein may be applicable to diagnostic, surgical, and other therapeutic interventions alike.

It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.

Claims

1. A method for mapping a model of an object comprising an anatomical structure in one image to image data of the object in another image, the method comprising:

creating an initial medial representation object model (m-rep) of an object comprising an anatomical structure based on image data of at least a first instance of the object;
creating a patient-specific m-rep by deforming the initial m-rep based on planning image data of at least a second instance of the object, wherein the at least a second instance of the object is associated with the patient;
identifying an intervention target region registered with the planning image data; and
correlating the patient-specific m-rep and the intervention target region to intervention-guiding image data of the at least a second instance of the object.

2. The method of claim 1 comprising identifying the intervention target region in the intervention-guiding image data using a mapping derived from a transformation between an m-rep in the planning image data and an m-rep in the intervention-guiding image data.

3. The method of claim 1 wherein the planning image data includes magnetic resonance imaging (MRI) data and magnetic resonance spectroscopy imaging (MRSI) data, wherein the MRSI data includes an image registered with the MRI data for determining the intervention target region.

4. The method of claim 1 wherein the intervention-guiding image data includes one or more of ultrasound (US) image data and transrectal ultrasound (TRUS) image data.

5. The method of claim 1 wherein the anatomical structure includes one of a bladder, a prostate, a heart, a brain structure, a liver, and a lung.

6. The method of claim 1 wherein creating the initial m-rep or creating the patient-specific m-rep includes limiting modes of shape variability of the object to anatomically possible variations.

7. The method of claim 1 wherein creating the initial m-rep or creating the patient-specific m-rep includes using principal geodesic analysis (PGA) to parameterize the modes of shape variability of the object.

8. The method of claim 1 wherein creating the initial m-rep or creating the patient-specific m-rep includes using a finite element method (FEM) to simulate and model possible deformations of the object.

9. The method of claim 1 wherein creating the initial m-rep includes deforming the initial m-rep based on training data for the object and wherein deforming the initial m-rep or deforming the patient specific m-rep includes deforming the m-rep based on image intensity values and wherein regional intensity quantile functions and principal component analysis are used to measure the match of the m-rep being deformed to either a planning image or an intervention-guiding image.

10. The method of claim 1 comprising pre-compiling a plurality of finite element method (FEM)-based shape variation scenarios and, during an intervention procedure wherein the object is deformed, performing a lookup for a matching shape variation scenario for the object deformation.

11. The method of claim 1 comprising computing a finite element method (FEM)-based shape deformation simulation based on the deformation of the object during a biopsy procedure.

12. The method of claim 1 comprising simultaneously displaying the patient-specific m-rep, the intervention target region, and the intervention-guiding image data, wherein the patient-specific m-rep and the intervention target region are overlaid on the intervention-guiding image data.

13. The method of claim 1 comprising simultaneously displaying and overlaying the patient specific m-rep and the intervention target region on the intervention-guiding image data and using the overlaid, displayed image data for guiding a surgical instrument during surgery.

14. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep for shape measurement of cortical and subcortical brain structures.

15. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep for creating a beating heart model to evaluate muscle damage.

16. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep to monitor or detect chronic obstructive pulmonary disease (COPD).

17. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep to perform radiation treatment for an anatomical structure under respiratory motion.

18. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep for image-guided prostate biopsy.

19. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep for image-guided brachytherapy, including image-guided prostate brachytherapy.

20. The method of claim 13 comprising using the simultaneously displayed, overlaid patient specific m-rep for radiation treatment planning of cancer in the head and neck.

21. The method of claim 1 wherein creating a patient-specific m-rep includes creating a patient-specific atlas of the object shape and internal suspicious regions.

22. A system for mapping a planning-image model of an object comprising an anatomical structure to intervention-guiding object image data, the system comprising:

an object model generator for creating a medial representation object model (m-rep) of an object comprising an anatomical structure and for identifying an intervention target region within the object based on planning image data of at least a first instance of the object; and
an object model deformation and mapping module for correlating the m-rep and the intervention target region to intervention-guiding image data of the at least a first instance of the object, deformed from the planning image.

23. The system of claim 22 wherein the object model generator is configured to generate the m-rep based on one or more of magnetic resonance imaging (MRI) data and computed tomography (CT) image data.

24. The system of claim 22 wherein the object model deformation and mapping module is configured to correlate the m-rep to one or more of ultrasound (US) image data and transrectal ultrasound (TRUS) image data.

25. The system of claim 22 wherein the object model generator is configured to create the m-rep based on one of a bladder, a prostate, a heart, a brain structure, a liver, and a lung.

26. The system of claim 22 wherein the object model generator is configured to limit the modes of shape variability of the object model to anatomically possible variations.

27. The system of claim 22 wherein the object model generator is configured to use principal geodesic analysis (PGA) to parameterize the modes of shape variability of the object.

28. The system of claim 22 wherein the object model generator is configured to use finite element method (FEM) to simulate and model possible deformations of the object.

29. The system of claim 22 wherein the object model deformation and mapping module is configured to deform the object model based on image intensity values.

30. The system of claim 22 wherein the object model deformation and mapping module is configured to pre-compile a plurality of finite element method (FEM)-based shape variation scenarios and, during a biopsy procedure wherein the object is deformed, perform a lookup for a matching shape variation scenario for the object deformation.

31. The system of claim 22 wherein the object model deformation and mapping module is configured to compute a finite element method (FEM)-based shape deformation simulation based on the deformation of the object during a biopsy procedure.

32. The system of claim 22 wherein the object model deformation and mapping module is configured to simultaneously display the patient-specific m-rep, the intervention target region, and the intervention-guiding image data, wherein the patient-specific m-rep is overlaid on the intervention-guiding image data.

33. The system of claim 22 wherein the object model deformation and mapping module is configured to simultaneously display and overlay the m-rep and an intervention target region on the intervention-guiding image data and using the overlaid, displayed image data for guiding a surgical instrument during surgery.

34. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used for shape measurement of cortical and subcortical brain structures.

35. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used to monitor knee cartilage under treatment for arthritis.

36. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used for creating a beating heart model to evaluate muscle damage.

37. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used to monitor or detect chronic obstructive pulmonary disease (COPD).

38. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used to perform radiation treatment for an anatomical structure under respiratory motion.

39. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used for radiation treatment planning of cancer in the head and neck.

40. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used for image-guided prostate biopsy.

41. The system of claim 33 wherein the simultaneously displayed, overlaid m-rep is used for image-guided prostate brachytherapy.

42. The system of claim 22 wherein the object model generator is configured to create a patient-specific m-rep and a patient-specific atlas of the object shape and to represent internal suspicious regions in object-relative coordinates within the m-rep.

43. A computer readable medium having stored thereon computer executable instructions that when executed by a processor of a computer perform steps comprising:

creating an initial medial representation object model (m-rep) of an object comprising an anatomical structure based on image data of at least a first instance of the object;
creating a patient-specific m-rep by deforming the initial m-rep based on planning image data of at least a second instance of the object, wherein the at least second instance of the object is associated with the patient;
identifying an intervention target region registered with the planning image data; and
correlating the patient-specific m-rep and the intervention target region to intervention-guiding image data of the at least a second instance of the object, deformed from the planning image.
Patent History
Publication number: 20120027278
Type: Application
Filed: Oct 20, 2008
Publication Date: Feb 2, 2012
Patent Grant number: 8666128
Applicant:
Inventors: Edward L. Chaney (Efland, NC), Stephen Pizer (Chapel Hill, NC), Lester Kwock (Chapel Hill, NC), Eric Wallen (Chapel Hill, NC), William Hyslop (Chapel Hill, NC), Robert Broadhurst (Carrboro, NC)
Application Number: 12/738,572
Classifications
Current U.S. Class: Tomography (e.g., Cat Scanner) (382/131)
International Classification: G06K 9/00 (20060101);