Poster Viewing Abstracts S819
Volume 90 Number 1S Supplement 2014
3551 Listening to Sinogram: Detecting and Estimating Actual Discrepancy During Treatment

D. Ruan, X. Qi, D. Low, and M. Steinberg; UCLA David Geffen School of Medicine, Los Angeles, CA

Purpose/Objective(s): To investigate and develop methods to detect and estimate actual faults or significant discrepancies between clinical intention and execution during tomotherapy treatment, based on validation sinogram data. Our preliminary development has focused on identifying patient/plan mismatch, delivered/prescribed dose discrepancy, large setup errors, and significant patient geometry change, such as that caused by large weight loss.

Materials/Methods: The validation sinogram in the tomotherapy archive, recorded during the treatment procedure, is interpreted to infer the state of treatment. The detection of patient/plan mismatch and dose mismatch is formulated in a hypothesis-testing framework, and the null hypothesis is rejected when excessive deviation from normal delivery dose patterns is observed. Specifically, an erroneous dose level is detected by testing for a non-unity global scale factor on the whole sinogram, whereas patient misidentification is detected by performing a paired Student's t-test on the sinogram data pattern between fractions. The estimation of setup errors and significant patient geometry changes involves (1) forward emulation to evaluate the manifestation of each cause on the sinogram data for a specific patient and (2) likelihood estimation to assess the odds of such an incident for a specific sinogram observed on a particular day. The preliminary development has been validated with phantom experiments (for setup) and clinical findings (for weight changes).

Results: A software toolbox has been developed to process the sinogram data from the tomotherapy archive.
In an initial test on a head-and-neck case with 36 fractions, we established with high confidence that inter-patient variation is significantly larger than intra-patient variation due to tolerable shifts and normal anatomy changes. All simulated global dose level changes of more than 5% and all wrong-patient cases were detected 100% of the time. Upon training for the tolerance level, the detected geometry changes also concurred with clinical findings of severe weight loss/gain. At the current stage, patient offsets with shifts of more than 5 mm and rotations greater than 3 degrees can be detected, yet accurate quantitative estimation is still work in progress, potentially due to the existence of a nontrivial kernel for sinogram observations.

Conclusions: The preliminary principle for detecting and estimating actual discrepancies during tomotherapy treatments based on sinogram records has been developed and validated. Further investigation is being performed to improve the estimation functionality and to further validate the method with more sites and larger cohorts. The implementation is made publicly available to support community-wide use, assessment, and collaborative contribution.

Author Disclosure: D. Ruan: None. X. Qi: None. D. Low: None. M. Steinberg: None.
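The two detection tests described above (a global scale-factor test for dose-level errors, and a paired t-test between fractions for patient misidentification) can be sketched in code. This is an illustrative reconstruction, not the authors' released toolbox: the function names, the flat-array representation of the sinogram, and the least-squares scale estimate are all assumptions.

```python
# Hypothetical sketch of the two hypothesis tests from the abstract.
# Assumes each sinogram is a flat numpy array of detector/leaf values.
import numpy as np
from scipy import stats

def detect_dose_scale(observed, planned, tol=0.05):
    """Estimate a global scale factor by least squares and flag a
    deviation from unity larger than `tol` (e.g., a >5% dose error)."""
    scale = np.dot(observed, planned) / np.dot(planned, planned)
    return scale, abs(scale - 1.0) > tol

def detect_patient_mismatch(fraction_a, fraction_b, alpha=0.01):
    """Paired t-test on element-wise sinogram differences between two
    fractions; rejecting the null suggests a patient/plan mismatch."""
    t_stat, p_value = stats.ttest_rel(fraction_a, fraction_b)
    return p_value < alpha
```

The least-squares scale factor is one simple way to operationalize "non-unity global scale factor"; the actual toolbox may use a different estimator and tolerance trained per site.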
3552 Patient Treatment and Prognosis Information Extraction With Adaptive Self-Learning Medical Form Generating System

S. Zheng,1 F. Wang,1 H. Gan,2 J. Lu,1 S. Jabbour,3 N. Yue,3 and W. Zou3; 1Emory University, Atlanta, GA, 2University of Nebraska Medical Center, Omaha, NE, 3Rutgers Cancer Institute of New Jersey, New Brunswick, NJ

Purpose/Objective(s): Studies in radiation therapy usually require correlating the patient treatment history with patient prognostic data. These data are often presented in documents of various formats, and manual extraction of the needed information can be labor-intensive. An Adaptive Self-Learning Medical Form Generating System (ASLForm) was developed to automate this process and reduce the extraction time.

Materials/Methods: Thirty non-small cell lung cancer patients were retrospectively identified for the study. From the patient database, the patient treatment history and multiple follow-up documents were extracted. The system first started with the regular manual annotation process while learning the natural language processing features. The system then proposed potential values for each target attribute. This learning process was performed transparently in the background by analyzing the system-generated answers and the user's revision
decisions. The system's performance consistently improved, generating higher-precision results. After reaching acceptable precision rates, the system ran in batch mode to process the remaining reports.

Results: Information was extracted from narrative medical reports and then standardized with a controlled vocabulary. With a small number of training reports, information such as patient treatment history, toxicities and side effects, and recurrence was quickly extracted with high precision and presented in structured format. The processing time for a typical patient with 1 treatment history summary and 5 follow-up documents was usually within a few seconds. Data validation was performed by comparison with manually annotated results. The study will be extended with more patient records.

Conclusions: With adaptive learning and training, ASLForm can be developed to extract patient treatment history and prognosis information from documents and present it in the needed structured format. Such a system greatly reduces processing time and is well suited to studies with large-scale patient databases.

Author Disclosure: S. Zheng: None. F. Wang: None. H. Gan: None. J. Lu: None. S. Jabbour: None. N. Yue: None. W. Zou: None.
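The core step of mapping narrative report text to a controlled vocabulary can be illustrated with a minimal pattern-matching sketch. This is not the ASLForm implementation (which learns its features adaptively); the vocabulary entries and function name here are purely hypothetical, and real systems would also need negation handling (e.g., "no evidence of recurrence").

```python
# Hypothetical illustration of controlled-vocabulary standardization:
# surface patterns in lowercase report text map to standardized terms.
import re

VOCAB = {
    r"\besophagitis\b": "Esophagitis",
    r"\bpneumonitis\b": "Radiation pneumonitis",
    r"\b(recurren(ce|t)|relapse)\b": "Disease recurrence",
}

def extract_findings(report_text):
    """Return the sorted set of standardized terms found in a report."""
    text = report_text.lower()
    return sorted({term for pattern, term in VOCAB.items()
                   if re.search(pattern, text)})
```

In an adaptive system like the one described, the pattern set would be proposed by the learner and refined from the user's accept/revise decisions rather than hand-written as above.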
3553 Evaluation and Comparison of Segmentation Algorithms in Low Contrast FET-PET Scans for Gross Tumor Volume Delineation

J.S. Kraft,1 T. Fechter,1 I. Götz,1 T. Papke,1 R. Modzelewski,2 C. Lemarignier,2 A. Chirindel,1 I. Gardin,2 A. Grosu,1 and U. Nestle1; 1Universitätsklinik Freiburg, Freiburg, Germany, 2CHB, Rouen, France

Purpose/Objective(s): There are several semi-automatic tools for the delineation of a gross tumor volume (GTV) in PET scans for radiation therapy planning. A problem is that in low-contrast amino acid PET images, such as FET-PET, the contours produced by these algorithms are not uniform. Recent work on random walk (RW) algorithms showed promising results for low-contrast-PET GTV delineation. The aim of this work is to compare and evaluate RW algorithms against algorithms already in clinical use for low-contrast-PET GTV delineation and against delineations made by a clinically experienced physician.

Materials/Methods: Ten FET-PET scans of patients with recurrent glioblastoma after surgical and radio-oncologic therapy were used. We used three different RW-based algorithms (RW1, RW2, RW3) from different centers, with different foreground, background, and edge-weight determination methods. For comparison, the 1.6× opposite mean (OM) method and the 40%, 50%, and 60% of maximum SUV threshold methods were chosen. The reference contour was drawn by one experienced physician using an individually adapted OM method. The evaluation was done by comparing the following parameters: single contoured volume (VSC), common contoured volume (VCC), and the kappa statistic (K).

Results: According to K, RW2 provides the delineations most similar to the reference contour, with a mean K (K*) = 0.68 (95% CI 0.57 - 0.79) [substantial observer agreement (OG) according to Landis and Koch].
The 1.6× OM (K* = 0.58, 95% CI 0.41 - 0.75) [moderate OG], 40% (K* = 0.44, 95% CI 0.23 - 0.65) [moderate OG], 50% (K* = 0.53, 95% CI 0.34 - 0.72) [moderate OG], and 60% (K* = 0.57, 95% CI 0.45 - 0.69) [moderate OG] methods yielded lower K values. The VSC was lowest for the RW algorithms, meaning the lowest number of false-positive segmented voxels, which translates into higher K. The VCC varied from 56% to 87% across the algorithms but had no significant influence on the K values.

Conclusions: The presented work suggests that RW algorithms may provide clinically more useful delineations than threshold-based algorithms in low-contrast PET scans. The smaller CI of RW2 indicates that this algorithm is more stable and reliable than the other algorithms. Nonetheless, experience showed that an issue for RW algorithms is the definition of foreground and background. RW2, the best-performing RW algorithm, uses a method that takes the SUV distribution of the whole brain into account, in contrast to the local methods used by the other algorithms. This suggests that a holistic approach may outperform local methods. Further research will explore this and the applicability of the RW algorithms to bigger datasets and in clinical practice.
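The kappa statistic used above to compare a segmentation against the reference contour can be computed voxel-wise from two binary masks. The sketch below is a standard Cohen's kappa on flattened masks; it is one plausible reading of the abstract's K, not necessarily the exact formulation the authors used.

```python
# Cohen's kappa between two binary segmentation masks, computed
# voxel-wise. Assumes masks cover the same image grid.
import numpy as np

def cohen_kappa(mask_a, mask_b):
    """Chance-corrected voxel agreement between two binary masks."""
    a = np.asarray(mask_a, dtype=bool).ravel()
    b = np.asarray(mask_b, dtype=bool).ravel()
    p_observed = np.mean(a == b)              # raw agreement fraction
    p_a, p_b = a.mean(), b.mean()             # foreground prevalence
    p_expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)
```

Because most voxels in a brain scan are background, raw agreement is inflated; the chance-correction term is what makes K a meaningful delineation metric, and it is why a lower false-positive volume (lower VSC) translates into higher K.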