Abstracts
Dosimetry Review Process for Head and Neck (H&N) Cancer Trials Conducted by the Radiation Therapy Oncology Group (RTOG)
Thomas F. Pajak, Paul E. Wallner, Victor A. Marcial, Robert A. Lustig, James D. Cox
Radiation Therapy Oncology Group, Philadelphia, Pennsylvania (P-05)
The RTOG has developed a two-step centralized RT review because a 1978 retrospective analysis found a 19% (37/198) major deviation rate from the protocol prescription, which could seriously jeopardize the identification of better treatments. Prior to or during the first week of RT, the treatment plan for each patient is checked by a consulting radiotherapist to determine protocol compliance. This review is called the initial dosimetry review (IDR). For randomized studies only, the RT study chairman reviews each patient for the actual RT given relative to the protocol. This review is called the final dosimetry review (FDR). All nine RTOG H&N trials conducted between 1979 and 1987 had IDR for the 1,340 patients entered. The IDR analyses found a steep learning curve: in the first two years, 98% of the patients needed their plans modified; thereafter, fewer than 2% did. FDR analysis of 426 patients resulted in revising the criteria for unacceptable RT deviation from the protocol to include prolonged elapsed treatment time. Using the revision, the 53 (12%) patients with unacceptable RT had significantly shorter survival (p = .003) than the patients with acceptable RT, even after adjusting for other known prognostic factors. In conclusion, a sampling plan for IDR is planned for future RTOG H&N trials, and the FDR should be performed on all H&N studies.
Management of Multi-Center Trials: A Comparison of Investigator-Initiated Studies and Other Administrative Models
P. Dean Surbey, Marsha B. McDonald, Richard H. Grimm, Jr., and Ronald J. Prineas, for the TOMHS Research Group
University of Minnesota, Minneapolis, Minnesota (P-06)
The Treatment of Mild Hypertension Study (TOMHS), Phase 1, is a four-center clinical trial with a central administrative and coordinating center, funded as an R01 grant by NHLBI. This investigator-initiated project has resulted in a management structure that differs significantly from that of multi-center studies funded and administered through other mechanisms, such as contracts. Differences include (1) the process of clinical center selection, (2) the ability to reallocate funds among study components, (3) accountability for use of funds, (4) the level of commitment from principal investigators and clinical staff, (5) the integration of the administrative, statistical and data coordination, and clinical centers, (6) the scope and manner of quality assurance and the ability to provide technical assistance, (7) the nature of interactions with clinical centers, and (8) the nature of interactions with funding source(s). Benefits and weaknesses of these administrative models will be highlighted, and recommendations for the management of future collaborative efforts will be discussed.
Site Visit Methodology in a VA Cooperative Study
Carol Fye, Dolly Koontz, Domenic Reda, Barbara Lizano, Clair Haakenson, and Barry J. Materson
VA Cooperative Studies Program, Albuquerque, New Mexico (P-07)
The VA Cooperative Studies Program (CSP) consists of one pharmacy and four biostatistical coordinating centers, located at Albuquerque, NM; Hines, IL; Palo Alto, CA; Perry Point, MD; and West Haven, CT. The centers at Albuquerque and Hines are now coordinating CSP #290, a study comparing the efficacy of six medications with placebo as monotherapy for hypertension. It is being conducted at fifteen hospitals and three central laboratories, with a target sample size of 1,400 patients. Because of the size and complexity of the study, the need for extensive site visiting soon became apparent; but, as with most clinical trials, resources were limited. At many hospitals, the major problems identified included drug dispensing, as well as data collection and patient management. Because of the limited resources and the nonstatistical nature of many of the problems, we (1) developed criteria to evaluate and grade the performance of each hospital, thus identifying those hospitals needing a site visit, and (2) established a multidisciplinary site visit team composed of a pharmacist, a medical forms reviewer, and, occasionally, a study nurse. To determine problem sites, hospitals were rated on such factors as misrandomizations, missing and overdue data, drug dispensing errors, and timeliness of error rectification. At each hospital the visiting team determined the nature of the problems and recommended solutions. A follow-up site visit was done to review compliance with the team's recommendations. This unique