Vector Analysis of Patient Setup with Image Guided Radiation Therapy (IGRT) via kV Cone Beam CT (CBCT)

Proceedings of the 50th Annual ASTRO Meeting

3049

Vector Analysis of Patient Setup with Image Guided Radiation Therapy (IGRT) via kV Cone Beam CT (CBCT)

J. R. Perks, R. L. S. Jennelle, A. Chen, S. Franklin, T. Liu, B. Palo, J. A. Purdy, UCDMC Cancer Center, Sacramento, CA

Purpose/Objective(s): Analyze CBCT images in terms of CTV vector displacement and compare shifts to the PTV margins. Determine whether correcting each patient's data set for the systematic error of the first five days reduces the number of days on which the setup-error vector was greater than the PTV margin. Ascertain whether images taken every 5th day represent the overall patient scalar and vector shifts, and determine whether patient imaging dose could be reduced by decreasing the frequency of imaging.

Materials/Methods: To date, over 400 patients have been treated with combination IMRT and IGRT at UC Davis. In this study 996 CBCT images were assessed from 30 patients. Of the patients analyzed, 15 were prostate (11 with intact gland, 4 post-prostatectomy), 8 were head & neck, and 7 were primary brain tumors. All patients were treated with IMRT and imaged daily with CBCT. Each patient was initially set to the isocenter based on marks/tattoos made at CT simulation. A CBCT was taken and shifts in the three orthogonal axes were applied to correct any displacement over 1 mm. The three scalar shifts (cranial-caudal, anterior-posterior, and left-right) were combined into a shift vector by taking the root sum of the squares. Two effects were considered: the average scalar shift of the first five images was removed from all subsequent images for a given patient, and the number of days on which the shift vector was greater than the 3D CTV-PTV margin (5 mm, created with a rolling-ball technique) was recorded. Separately, without any corrections, the scalar shifts from every fifth fraction were averaged to determine whether the shift vector would be adequately sampled if CBCT were not performed daily.

Results: Systematic and random errors were determined (van Herk method), the maximum error being 3.6 mm for the left/right shift of prostate cases.
From the initial 996 CBCT scans there were 439 cases (mean 14.6 per patient, range 1-31) where the vector shift was initially greater than 5 mm. By correcting each patient's data set by the average shift of their first five images, the total was reduced to 305 out of 996 (mean 10.2 per patient, range 1-34). However, this reduction was not statistically significant (p = 0.07). Considering only every 5th image of each patient's data set, the mean scalar shifts were almost always (97%) equivalent to those of the full data set. In spite of this, the vector shifts were grossly undersampled, as only 96 of the total 439 would have been caught by imaging every 5th day, a significant underestimation of 3.1 cases per patient (p < 0.001).

Conclusions: In order to know the 3D (vector) position of the CTV and to ensure that it is always within the PTV margin, daily imaging is required. Correcting the data set by the average shift from the first five days does reduce the overall shift and the number of outlier days, but not by a statistically significant amount.

Author Disclosure: J.R. Perks, None; R.L.S. Jennelle, None; A. Chen, None; S. Franklin, None; T. Liu, None; B. Palo, None; J.A. Purdy, None.
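The per-fraction analysis described above can be sketched as follows. The 5 mm margin, the root-sum-of-squares vector, the first-five-fraction systematic correction, and the every-5th-day sampling are from the abstract; the patient shift values below are hypothetical, for illustration only:

```python
import numpy as np

PTV_MARGIN_MM = 5.0  # 3D CTV-to-PTV margin quoted in the abstract

def vector_shift(shifts_mm):
    """Root sum of squares of the three scalar shifts
    (cranial-caudal, anterior-posterior, left-right), per fraction."""
    return np.sqrt((shifts_mm ** 2).sum(axis=1))

def correct_systematic(shifts_mm, n_first=5):
    """Remove the per-axis average shift of the first n_first fractions
    from the whole course (the systematic correction tested above)."""
    return shifts_mm - shifts_mm[:n_first].mean(axis=0)

def days_over_margin(shifts_mm, margin=PTV_MARGIN_MM):
    """Number of fractions whose shift vector exceeds the margin."""
    return int((vector_shift(shifts_mm) > margin).sum())

# One hypothetical patient: daily CC/AP/LR shifts in mm, six fractions
patient = np.array([
    [3.0, 4.0, 1.0],
    [0.0, 0.0, 0.0],
    [4.0, 3.0, 0.0],
    [2.0, 2.0, 2.0],
    [1.0, 0.0, 5.0],
    [5.0, 1.0, 1.0],
])
raw = days_over_margin(patient)                      # uncorrected course
corrected = days_over_margin(correct_systematic(patient))
sampled = days_over_margin(patient[4::5])            # imaging only every 5th day
```

Note that each scalar shift can be well under 5 mm while the vector still exceeds the margin (e.g. 3/4/1 mm gives a 5.1 mm vector), which is why sampling the scalars every 5th day can look adequate while the vector outliers are missed.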

3050

Robust and Efficient NTCP Estimation from Functional Measurements using a Bayesian Approach

A. van der Schaaf, P. van Luijk, C. Schilstra, J. A. Langendijk, A. A. van ’t Veld, Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

Purpose/Objective(s): To allow treatment plan optimization and comparison of new irradiation techniques, estimation of normal tissue complication probabilities (NTCP) based on available treatment-response data is required. Traditionally, maximum likelihood (ML) estimation is often used for this purpose. Our hypothesis is that Bayesian methods are more robust and more efficient than classical ML estimation.

Materials/Methods: We tested a Bayesian and an ML method on a dataset of breathing rates as a function of irradiation dose to the lung in rats (50% irradiated volume, dose 15-23 Gy, n = 68) and a second dataset of saliva flow as a function of dose to the parotids in rats (100% irradiated volume, dose 10-30 Gy, n = 30). The complication endpoint was defined by a threshold value derived from a nonirradiated control group. The datasets were split into smaller subsets to evaluate the effect of sample size and to test prediction accuracy. We implemented the Bayesian approach by the method of probability-weighted averaging with random sampling of model parameters. The ML method used numerical steepest-descent optimization to find the model parameter values. The predictive performance of both methods was compared using the log-likelihood measure in cross-validation, i.e., with the parameters estimated from a different subset than the test set.

Results: The Bayesian method was numerically stable for all tested subset sizes (n = 34 to n = 5), while the ML method was unstable for subset sizes below n = 11 (lung) or n = 15 (parotid). The predictive performance of the Bayesian method at subset size n = 11 was comparable to that of the ML method at n = 34, showing that the Bayesian method is more data-efficient.
The actual value of the cross-validated log-likelihood was on average close to the expectation value for the Bayesian method, but deviated significantly for the ML method (mean deviation -0.01 vs. -0.21 log-units per prediction), indicating that the Bayesian approach is unbiased and the ML method is not. These findings relate to two conceptual differences between the Bayesian and ML approaches. First, the Bayesian method takes uncertainty of the model parameters into account in the NTCP estimate, while the ML method assumes certainty of the model parameters. Second, the ML method can only be fitted to the threshold endpoints, while the Bayesian method is able to use all information in the non-threshold functional measurements.

Conclusions: In the presented examples the Bayesian approach was more stable, data-efficient, and unbiased compared with the traditional ML approach.

Author Disclosure: A. van der Schaaf, None; P. van Luijk, None; C. Schilstra, None; J.A. Langendijk, None; A.A. van ’t Veld, None.
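The probability-weighted averaging the authors describe can be illustrated with a minimal sketch. The logistic dose-response model, the flat parameter priors, and the data below are assumptions for illustration; the abstract does not specify the model, priors, or sampler actually used:

```python
import numpy as np

def ntcp(dose, d50, k):
    """Logistic dose-response curve: a common NTCP form (assumed here;
    the abstract does not name the exact model)."""
    return 1.0 / (1.0 + np.exp(-(dose - d50) / k))

def bayesian_ntcp(query_dose, doses, events, n_samples=5000, seed=42):
    """NTCP at query_dose by probability-weighted averaging: draw
    parameters from a flat (assumed) prior, weight each draw by the
    likelihood of the observed threshold endpoints, and average the
    per-draw NTCP predictions."""
    rng = np.random.default_rng(seed)
    d50 = rng.uniform(10.0, 30.0, n_samples)   # tolerance-dose prior (assumed range)
    k = rng.uniform(0.5, 5.0, n_samples)       # slope prior (assumed range)
    p = np.clip(ntcp(doses[None, :], d50[:, None], k[:, None]), 1e-12, 1 - 1e-12)
    loglik = (events * np.log(p) + (1 - events) * np.log(1 - p)).sum(axis=1)
    w = np.exp(loglik - loglik.max())          # posterior weights up to a constant
    w /= w.sum()
    return float(w @ ntcp(query_dose, d50, k))

# Hypothetical treatment-response data: dose (Gy) and thresholded endpoint (0/1)
doses = np.array([12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0, 26.0])
events = np.array([0, 0, 0, 1, 0, 1, 1, 1])
estimate = bayesian_ntcp(20.0, doses, events)
```

Because every sampled curve is increasing in dose (k > 0), the weighted average is too, and, unlike a single ML point fit, the prediction carries the parameter uncertainty through to the NTCP estimate, which is the first conceptual difference noted above.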
