one-sided confidence intervals for PPV. The lowest Doppler value with a PPV confidence interval above some pre-set level was chosen as the cutpoint. In cases with sufficient data for a reliable direct estimate of PPV, the regression estimates were extremely close, with tighter confidence bounds.
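A minimal sketch of the cutpoint rule just described, assuming a normal-approximation one-sided lower confidence bound for PPV; the function names, data layout and bound are illustrative assumptions, not the authors' method.

```python
from statistics import NormalDist

def ppv_lower_bound(tp, fp, alpha=0.05):
    """One-sided lower confidence bound for PPV = TP / (TP + FP), normal approximation."""
    n = tp + fp
    if n == 0:
        return 0.0
    p = tp / n
    z = NormalDist().inv_cdf(1 - alpha)
    return max(0.0, p - z * (p * (1 - p) / n) ** 0.5)

def choose_cutpoint(results, level=0.80, alpha=0.05):
    """results: (doppler_value, disease_present 0/1) pairs.
    Return the lowest cutpoint whose PPV lower confidence bound exceeds `level`."""
    for cut in sorted({v for v, _ in results}):
        positives = [d for v, d in results if v >= cut]  # test-positive patients at this cutpoint
        tp = sum(positives)
        fp = len(positives) - tp
        if positives and ppv_lower_bound(tp, fp, alpha) >= level:
            return cut
    return None

# Example (hypothetical data):
# choose_cutpoint([(0.55, 0), (0.60, 1), (0.70, 1), (0.80, 1)], level=0.5)
```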
Interim Analysis Through an Independent Data Monitoring Board Jay Herson, Frank W. Rockhold
Applied Logic Associates, Inc., Houston, Texas (31) For ethical reasons, a pharmaceutical firm included an early termination point in the design of a double-blind, placebo-controlled clinical trial of an emergency room drug. The company wanted an independent data monitoring board (DMB) to make the decision to terminate or continue at the interim analysis. While DMBs have been used before in large federally sponsored clinical trials, the concept was relatively new for a pivotal study sponsored by a pharmaceutical firm. This paper describes the process of drafting the DMB's charter, dealing with sources of bias and with patients whose response classification is questionable, planning and rehearsing the logistics of the interim analysis, handling DMB-sponsor communications between meetings, addressing liability and regulatory issues, and submitting a final report to the sponsor. A checklist is presented of key issues to be addressed in planning a clinical trial with an independent DMB.
The Statistical Component of Pfizer Pharmaceuticals' PC-Based Computer Assisted NDA (CANDA) System William Cash, Jack Mardekian
Pfizer, Incorporated, New York City, New York (32) This paper will discuss the statistical component of Pfizer Pharmaceuticals' PC-based CANDA, which was recently submitted to the FDA. The menu-driven, SAS-based system is designed to facilitate the statistical review of Pfizer's NDA submission. The system gives the statistical reviewer easy access to all data and statistical models behind the tables presented in a statistical report. Patients excluded from analyses, as well as the data elements and manipulations in the data sets actually analyzed, are readily identified. The system provides the capability to interactively explore various hypotheses using either Pfizer- or reviewer-specified models. In addition, the reviewer can readily rerun analyses after changing the eligibility of patients excluded by Pfizer. Experience with training the statistical reviewer and the support required for the system will also be discussed.
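As an illustration of the reviewer workflow described above (rerunning an analysis after overriding the sponsor's patient exclusions), the following is a minimal sketch; the actual system is SAS-based, so the Python code, record fields, and the simple two-arm comparison here are purely hypothetical.

```python
def rerun_analysis(records, reinstate=frozenset()):
    """records: dicts with keys 'id', 'arm', 'response', 'excluded'.
    reinstate: patient ids whose sponsor exclusion the reviewer overrides."""
    included = [r for r in records if not r["excluded"] or r["id"] in reinstate]
    # Group responses by treatment arm and compare arm means.
    by_arm = {}
    for r in included:
        by_arm.setdefault(r["arm"], []).append(r["response"])
    means = {arm: sum(vals) / len(vals) for arm, vals in by_arm.items()}
    return means["drug"] - means["placebo"]  # crude treatment-effect estimate

# Sensitivity check: analysis as submitted vs. with all excluded patients reinstated.
# effect_as_submitted = rerun_analysis(records)
# effect_all_included = rerun_analysis(records, reinstate={r["id"] for r in records})
```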
Are Coding Systems to Handle Text-Data in Clinical Trials Still Necessary? M.M. Rainisio, G. Stein, G. Decoster*, M. Ranieri
Statistics for Research & *Hoffmann-LaRoche Department of Clinical Research, Basle, Switzerland (33) In the past, coding systems were created to deal with medical terminology in clinical trials because of the limited capabilities of computer programming languages in handling text-data. Such coding systems usually allow for a classification built into the codes, eliminate noise, and are relatively easy to handle in programming. However, major disadvantages of coding systems are 1) the difficulty of tracing errors back, 2) misinterpretation and/or loss of information, and 3) the need for skilled staff to code the data at or before data entry. Entering original terms into databases (DBs) and managing them using a parallel dictionary DB provides 1) no loss of information, 2) a centralized system for grouping original terms, and 3) a reduced need to look up coding lists and interpret the texts at data entry. A dictionary DB was created to handle all text-data, such as diagnoses, adverse events and treatments, by allocating a standard preferred term (SPT). The dictionary DB consists of four hierarchical levels (original term, SPT, low class, and high class). Approximately 10,000 unique original terms from the trial DBs were classified using the dictionary DB; these were reduced by 82% to approximately 1,800 SPTs categorized into 125 classes. In cases where no SPT could be allocated, special SPTs such as "UNREADABLE" or "MISPLACED" were used. In conclusion, the use of a dictionary DB to handle text-data in clinical trials provides a system to reduce the volume of text-data without any loss of information, but with consistency of interpretation, transparency, and a minimal need for human resources.
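As a rough illustration of the four-level lookup described in the abstract, the sketch below maps a verbatim text entry to an SPT with its low and high classes; the table contents, the Python form, and the choice of which special SPT is returned when no standard term can be allocated are assumptions for illustration only.

```python
# Hypothetical dictionary DB extract: original term -> (SPT, low class, high class).
DICTIONARY = {
    "headache":       ("HEADACHE", "CNS", "NERVOUS SYSTEM"),
    "cephalgia":      ("HEADACHE", "CNS", "NERVOUS SYSTEM"),
    "mild head ache": ("HEADACHE", "CNS", "NERVOUS SYSTEM"),
}

def classify(original_term):
    """Return (SPT, low class, high class) for a verbatim text entry."""
    key = original_term.strip().lower()
    if not key:
        return ("UNREADABLE", None, None)   # illegible or empty entry
    if key in DICTIONARY:
        return DICTIONARY[key]
    # No standard preferred term can be allocated: assign a special SPT and
    # leave the original term available for later addition to the dictionary DB.
    return ("MISPLACED", None, None)

# Example: classify("Cephalgia") -> ("HEADACHE", "CNS", "NERVOUS SYSTEM")
```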