Proceedings of the 48th Annual ASTRO Meeting
others. In this work we investigate a strategy of using a priori knowledge of the system to reduce the dimensionality of the deformable image registration problem and to speed up the registration calculation.

Materials/Methods: The local deformation property is spatially heterogeneous and, in some special situations, the local deformation and registration can be known a priori (such as regions within bony structures). This knowledge can be incorporated to greatly facilitate the BSpline (or other model-based) deformable calculation. Our calculation consisted of two natural steps. First, a number of small cubic (0.5–1 cm in size) control volumes are placed, either manually or automatically based on image intensity information, on the locally rigid regions (LRRs) of the moving image. A control volume typically resides in or close to a bony structure. Each control volume is mapped onto the fixed image using a rigid transformation, which is computationally fast and robust. In the second step, the pre-determined correspondence serves as a priori information for the BSpline deformable registration calculation. Specifically, the control volumes are included as part of the BSpline nodal points. However, in the process of warping the moving image to optimally match the two input images, only those deformations that do not modify the pre-established associations of the control volumes are permissible. This significantly reduces the search space and improves the convergence behavior of the gradient-based iterative optimization calculation. The proposed algorithm was evaluated using digital phantoms and 4D CT images.

Results: A novel method of incorporating prior knowledge into deformable image registration has been developed. Comparison with the conventional BSpline calculation suggested that the new method improves computational efficiency. More importantly, because of the inclusion of existing information, the convergence behavior of the calculation is greatly improved. The digital phantom study, where the “ground truth” transformation exists, indicated that the computational accuracy of the proposed technique is well within 2.5 mm. The registrations of the 4D lung and liver cases indicated the same level of success.

Conclusions: With the incorporation of a priori system knowledge, the deformable registration was made much simpler and more robust in comparison with conventional “brute-force” approaches. Given the ever-increasing need for efficient deformable image registration tools in IGRT, this technique may find widespread use in the clinic.

Author Disclosure: S. Kamath, None; E. Schreibmann, None; D. Levy, None; D. Paquin, None; L. Xing, None.
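The two-step strategy above can be illustrated with a short sketch. This is a minimal, hypothetical Python/SimpleITK example, not the authors' implementation: it assumes a binary mask of the locally rigid regions is available, uses a masked rigid registration for the first step, and freezes the BSpline coefficients associated with those regions by zeroing their per-parameter optimizer weights, which is one possible way to keep the pre-established correspondences fixed. The file names, mesh size, and the rule for deciding which coefficients to freeze are placeholders.

```python
# Minimal sketch (not the authors' implementation) of the two-step idea above:
# control volumes in locally rigid regions (LRRs) are first matched with a fast
# rigid transform, and the subsequent BSpline optimization is prevented from
# altering the nodes that cover those regions.
import SimpleITK as sitk
import numpy as np

# Hypothetical inputs: the two images and a binary mask of the LRR control volumes.
fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)
lrr_mask = sitk.ReadImage("lrr_control_volumes.nii.gz", sitk.sitkUInt8)

# Step 1: rigid match of the control volumes (metric evaluated only inside the mask).
rigid = sitk.ImageRegistrationMethod()
rigid.SetMetricAsMeanSquares()
rigid.SetMetricFixedMask(lrr_mask)
rigid.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                               numberOfIterations=200)
rigid.SetInitialTransform(sitk.CenteredTransformInitializer(fixed, moving,
                                                            sitk.Euler3DTransform()))
rigid.SetInterpolator(sitk.sitkLinear)
rigid_tx = rigid.Execute(fixed, moving)

# Step 2: BSpline registration that treats the rigid result as fixed prior knowledge.
mesh_size = [10, 10, 10]                       # assumed node spacing, not from the abstract
bspline_tx = sitk.BSplineTransformInitializer(fixed, mesh_size)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetMovingInitialTransform(rigid_tx)        # pre-established LRR correspondence
reg.SetInitialTransform(bspline_tx, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)

# Freeze the BSpline coefficients whose control points lie inside the LRRs by
# zeroing their per-parameter optimizer weights; deriving 'frozen' from the mask
# is left as a placeholder here (all False keeps every node active).
n_params = len(bspline_tx.GetParameters())
frozen = np.zeros(n_params, dtype=bool)        # placeholder: mark LRR node coefficients True
reg.SetOptimizerWeights(np.where(frozen, 0.0, 1.0).tolist())

deformable_tx = reg.Execute(fixed, moving)
```

Zeroing a coefficient's weight keeps it at its initialized value while the remaining nodes are optimized, which is one way to realize the reduced search space described above; the abstract does not specify how the constraint was actually enforced.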
2781
Multiscale Image Registration
D. C. Paquin, D. Levy, L. Xing; Stanford University, Stanford, CA

Purpose/Objective(s): Often in medical image processing, images must be spatially aligned to allow practitioners to perform quantitative analyses of the images. The process of aligning images taken, for example, at different times, from different perspectives, or from different imaging devices is called image registration. Although numerous successful image registration techniques have been published, ordinary techniques are shown here to fail when one or more of the images to be registered contains significant levels of noise. The purpose of this work is to develop image registration algorithms that produce accurate registration results in the presence of noise.

Materials/Methods: Sample brain proton density slice and brain mid-sagittal slice images were obtained from the Insight Segmentation and Registration Toolkit (ITK), and known rigid and deformable transformations were applied to the images. Synthetic impulse (salt and pepper) and speckle (multiplicative) noise was added to the images, and image registration simulations were conducted to determine the precise noise levels at which image registration using ordinary techniques fails. Multiscale image registration algorithms were developed using the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese, “A multiscale image representation using hierarchical (BV, L²) decompositions,” Multiscale Modeling & Simulation, vol. 2, no. 4, pp. 554–579, 2004. Image registration simulations were conducted to demonstrate the accuracy and efficiency of the multiscale registration algorithms.

Results: The multiscale image registration algorithms produced accurate registration results at noise levels significantly greater than those at which ordinary registration techniques failed. The multiscale techniques enable both rigid and deformable registration of noisy images, and the accuracy of the multiscale techniques is independent of the registration method used. Iterative multiscale registration techniques improved the computational efficiency of the registration algorithms.

Conclusions: The multiscale image registration techniques are a significant improvement over ordinary registration techniques and enable accurate and efficient registration of noisy images.

Author Disclosure: D.C. Paquin, None; D. Levy, None; L. Xing, None.
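The hierarchical (BV, L²) decomposition cited above expresses an image as a sum of components obtained by repeatedly applying total-variation (ROF-type) minimization to the residual with a progressively weaker smoothing weight, so the partial sums run from coarse to fine. The sketch below is only an illustration of that idea in Python, with scikit-image's TV denoiser standing in for the ROF solver; the number of levels, the initial weight, and the test image are arbitrary choices, not the parameters used in this work.

```python
# Hedged sketch of a hierarchical (BV, L2)-style decomposition: each level
# extracts a TV-regularized component from the current residual with a
# progressively weaker smoothing weight, so partial sums go from coarse to fine.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.util import random_noise

def hierarchical_decomposition(image, levels=5, initial_weight=0.5):
    """Return TV components u_0..u_{levels-1}; their sum approximates the image."""
    components = []
    residual = image.copy()
    weight = initial_weight          # assumed starting value; halved at each level
    for _ in range(levels):
        u = denoise_tv_chambolle(residual, weight=weight)
        components.append(u)
        residual = residual - u      # finer detail is left for the next level
        weight /= 2.0
    return components

# Example: decompose a noisy test image and form coarse-to-fine reconstructions.
clean = img_as_float(data.camera())
noisy = random_noise(clean, mode="s&p", amount=0.2)   # impulse (salt and pepper) noise
components = hierarchical_decomposition(noisy, levels=5)
coarse = sum(components[:2])   # coarse representation for an initial registration pass
full = sum(components)         # finer representation used to refine the alignment
```

A coarse partial sum suppresses much of the impulse or speckle noise, which is why registering coarse representations first and refining on progressively finer ones can remain accurate at noise levels where direct registration of the noisy images fails.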
2782
Deformable Image Registration for Cone-Beam CT (CBCT) Images for Implementation of Image-Guided Adaptive Radiotherapy
L. Zhang, L. Dong, A. Ahamad, X. R. Zhu, A. S. Garden, K. K. Ang, R. Mohan; MD Anderson Cancer Center, Houston, TX

Purpose/Objective(s): The goal of this study was to implement a robust deformable image registration method for autosegmentation of previously delineated target volumes and normal avoidance structures in head & neck CBCT images.

Materials/Methods: We extended our previously developed intensity-based deformable image registration algorithm from conventional CT images to CBCT images. Two scenarios were tested. In the first scenario, daily treatment CBCT images without modification were used directly for deformable image registration with the conventional planning CT images. In the second scenario, we proposed a wavelet-based dynamic window/level matching technique (DWLM) to map the voxel intensities from the CBCT image to the conventional CT image of the same patient. The purpose of the DWLM method is to automatically adjust the window/level settings of the CBCT to best match the conventional CT prior to deformable image registration. The window/level matching method minimized the effect of CT number (Hounsfield number) deviations in CBCT images. We tested these two approaches on 15 CBCT images from four head & neck cancer patients who underwent weekly CBCT acquisition. Quantitative analysis was performed between the physician's manual contours and the computer-deformed contours for
clinical target volumes (CTVs) and parotid glands using the following quantities: (1) an overlapping volume index (OVI), defined as the ratio of the overlapped volume to the combined volume of two regions of interest (ROIs), and (2) the mean 3D surface distance between the two ROIs.

Results: When compared to conventional CT images, CBCT CT numbers were usually inconsistent, especially in soft-tissue regions and for cases with large body circumferences. Without making changes to the original CBCT images, the intensity-based deformable image registration algorithm performed poorly and inconsistently. The deformed contours obtained with the DWLM method matched the patient's anatomy more closely. Quantitatively, the deformable image registration using the DWLM technique significantly improved the OVIs and shortened the 3D surface distances when compared with the physician's manual contours. The OVIs in poorly calibrated CBCT images were only 52.6% for the high-risk CTV, 34.9% for the intermediate-risk CTV, and 18.3% for the parotid gland. By applying the DWLM technique, the OVIs for these structures using the same CBCT images increased to 88.9%, 87.4%, and 97.9%, respectively. Even for well-calibrated CBCT images, the DWLM-based deformable registration improved the OVIs by 10 to 20%. The mean 3D surface distances between the deformed contours and the manual contours were approximately 1 mm for the high-risk CTV, 0.5 mm for the intermediate-risk CTV, and 0.1 mm for the parotid gland. Without applying the DWLM technique, the 3D surface distances were large: 4 mm in poorly calibrated CBCT images and 1.5 mm in good-quality CBCT images.

Conclusions: We implemented an effective wavelet-based window/level matching algorithm for pre-processing CBCT images. The method allows for more robust deformable image registration between CBCT images and the conventional (planning) CT images. The algorithm can be directly used for volumetric CBCT-guided adaptive radiotherapy.

Author Disclosure: L. Zhang, None; L. Dong, None; A. Ahamad, None; X.R. Zhu, None; A.S. Garden, None; K.K. Ang, None; R. Mohan, None.
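The two evaluation quantities defined above can be computed directly from binary contour masks. The following is a minimal Python/SimpleITK sketch, not the authors' evaluation code: it assumes the manual and deformed contours are available as binary label images, reads "combined volume" as the union of the two ROIs (a Jaccard-type ratio), and uses placeholder file names.

```python
# Minimal sketch (not the authors' evaluation code): compute the overlapping
# volume index (overlap / union) and the mean 3D surface distance between a
# computer-deformed contour and a manual contour, given as binary masks.
import SimpleITK as sitk
import numpy as np

deformed = sitk.ReadImage("deformed_ctv.nii.gz") > 0   # placeholder file names
manual = sitk.ReadImage("manual_ctv.nii.gz") > 0

# OVI: ratio of the overlapped volume to the combined (union) volume of the two ROIs.
overlap = sitk.LabelOverlapMeasuresImageFilter()
overlap.Execute(manual, deformed)
ovi = overlap.GetJaccardCoefficient()

def mean_surface_distance(a, b):
    """Mean distance between the surfaces of two binary masks, in physical units
    (mm for typical CT spacing), symmetrized over both directions."""
    a_surface = sitk.LabelContour(a)
    b_surface = sitk.LabelContour(b)
    dist_to_a = sitk.Abs(sitk.SignedMaurerDistanceMap(a, squaredDistance=False,
                                                      useImageSpacing=True))
    dist_to_b = sitk.Abs(sitk.SignedMaurerDistanceMap(b, squaredDistance=False,
                                                      useImageSpacing=True))
    a_to_b = sitk.GetArrayFromImage(dist_to_b)[sitk.GetArrayFromImage(a_surface) > 0]
    b_to_a = sitk.GetArrayFromImage(dist_to_a)[sitk.GetArrayFromImage(b_surface) > 0]
    return float(np.concatenate([a_to_b, b_to_a]).mean())

msd = mean_surface_distance(manual, deformed)
print(f"OVI = {ovi:.3f}, mean 3D surface distance = {msd:.2f} mm")
```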
2783
Intra- and Inter-Modality Registration of Four-Dimensional (4D) Images
E. Schreibmann, L. Xing; Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA

Purpose/Objective(s): Newly emerged 4D imaging techniques such as 4D-CT, -MRI, and -PET afford effective tools to reveal spatial and temporal details of a patient's anatomy. To utilize 4D images acquired under different conditions or with different modalities, an algorithm for registering 4D images must be in place. Here we develop an automated 4D-4D registration method that takes advantage of the spatio-temporal information of 4D imaging.

Materials/Methods: The development was done on an open-source ITK/VTK platform. A 4D input (model or reference) consists of a number of 3D image sets, each representing the patient anatomy at a phase point. In an ideal situation where the patient's breathing pattern is repeatable, matching the two inputs both spatially and temporally reduces to finding, for each phase point in the reference, the matching 3D image set in the model input. Instead of exhaustively searching for the best match for each phase, a search algorithm was implemented that simultaneously finds the matches for all phases in the reference input while taking into account the temporal relationship between the 3D image sets of the two inputs. An interpolation scheme capable of rapidly deriving an image set from two sets of temporally adjacent 3D images was implemented to deal with the situation where the discrete temporal points of the two inputs do not coincide. A BSpline deformable model was also investigated to deal with potential anatomical changes between the two inputs caused by instability of the breathing pattern or differences in imaging conditions. The performance of the algorithm was illustrated with a digital phantom and four patients requiring inter- or intra-modality 4D-4D registrations.

Results: In the phantom study, where the optimal match of the two 4D inputs is known, the proposed technique was able to reproduce the “ground truth” with high spatial fidelity (<1.5 mm). In addition, because of the use of temporal interpolation, the technique regenerated all 3D images that had been deliberately removed (“missing”) from different phase points in one of the inputs. In a patient registration of gated MRI and 4D-CT, the rigid registration enabled us to optimally select the CT phase. We found that deformable 4D-4D registration is useful for the registration of two sets of 4D-CTs acquired at different times; in this situation, a spatial accuracy of less than 3.5 mm was achieved.

Conclusions: The automated voxel-based procedure finds the best spatio-temporal anatomical match between two 4D datasets by compensating for differences resulting from different breathing patterns or acquisition parameters.

Author Disclosure: E. Schreibmann, None; L. Xing, None.
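Two ingredients of the 4D-4D matching described above, deriving an intermediate phase from two temporally adjacent 3D image sets and finding a consistent phase correspondence between the two inputs, can be sketched as follows. This is a simplified Python/NumPy illustration, not the authors' algorithm: it assumes simple linear intensity interpolation between adjacent phases, a mean-squared-difference similarity score, and two cyclic inputs with the same number of phases, so the simultaneous match reduces to choosing a single cyclic phase offset.

```python
# Hedged sketch (not the authors' algorithm): temporal interpolation between
# adjacent 3D phases and a simultaneous, order-preserving phase match between
# two 4D inputs, modeled here as lists of 3D NumPy arrays (one per phase).
import numpy as np

def interpolate_phase(vol_a, vol_b, t):
    """Linearly interpolate an intermediate phase image set; t in [0, 1]."""
    return (1.0 - t) * vol_a + t * vol_b

def similarity(vol_a, vol_b):
    """Mean squared intensity difference (lower is better)."""
    return float(np.mean((vol_a - vol_b) ** 2))

def match_phases(reference_4d, model_4d):
    """Pick one cyclic phase offset that best aligns the two 4D inputs, so all
    phases are matched at once and their temporal order is preserved.
    Assumes both inputs sample one breathing cycle with the same number of phases."""
    n = len(model_4d)
    totals = [sum(similarity(reference_4d[i], model_4d[(i + offset) % n])
                  for i in range(n))
              for offset in range(n)]
    best = int(np.argmin(totals))
    return [(i + best) % n for i in range(n)]

# Synthetic example: ten phases of a 32^3 volume per input, with the reference
# phases falling halfway between the model's sampled phases.
rng = np.random.default_rng(0)
model = [rng.normal(size=(32, 32, 32)) for _ in range(10)]
reference = [interpolate_phase(model[i], model[(i + 1) % 10], 0.5) for i in range(10)]
print(match_phases(reference, model))          # best-matching model phase for each reference phase
mid_phase = interpolate_phase(model[3], model[4], t=0.5)  # image set between sampled phases 3 and 4
```

Searching over a single cyclic offset keeps the temporal ordering of the phases intact, which is the sense in which all matches are found simultaneously rather than phase by phase; a real implementation would operate on resampled image volumes rather than random arrays.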
2784
Quantification of Prostate Motion Based on 3D/3D Image Registration Using Daily Cone Beam CT (CBCT)
R. Hammoud1, J. Kim2, S. Li2, S. Patel2, D. Pradhan1, Q. Chen1, N. Wen1, B. Lord2, M. Ajlouni2, B. Movsas2; 1Downriver Center for Oncology, Brownstown, MI; 2Henry Ford Hospital, Detroit, MI

Purpose/Objective(s): Delivering higher doses to prostate cancer patients requires the design of tight margins around the target while minimizing the dose to surrounding structures. The purpose of this study is to determine the extent of prostatic motion in patients evaluated with daily CBCT during prostate intensity-modulated radiation therapy (IMRT) following 3D/3D image registration.

Materials/Methods: A total of 140 CBCT scans from five localized prostate cancer patients undergoing daily CBCT on an IRB-approved study were analyzed. Each patient was educated about a bowel regimen to ensure an empty rectum prior to simulation and daily treatment. A simulation CT (sim-CT) scan was performed with the patient in the supine position. A CTV consisting of the prostate and the proximal 1 cm of the seminal vesicles (SV) was created. A margin of 10 mm all around, except 6 mm posteriorly, was used to define the PTV and to generate a nine-field IMRT plan. Prior to daily treatment, a CBCT was acquired using the Varian On-Board Imaging System™ (OBI). The prostate, SV, rectum, and bladder were delineated on the daily CBCT. Intensity-based 3D/3D rigid-body image registration was performed between the sim-CT and the daily CBCT. The