
SPOTLIGHT ON RESEARCH

Basics of Designing a Clinical Trial: Part I

JoAnne D. Whitney, PhD, RN

Has the growing emphasis on evidence-based practice raised your interest in studies designed to test specific therapies? Are you intrigued by the effect of new or old therapies on clinical outcomes pertinent to wound, ostomy, and continence care? If so, then you have likely been thinking along the lines of planning a clinical trial. This column has covered several common study designs over the last few issues. In this issue, Spotlight focuses on the current “gold standard” for testing therapies, the randomized, controlled clinical trial (RCT).

RCTs are experimental designs that incorporate a number of specific features in their planning and execution. Experimental designs involve (1) at least 2 groups, one referred to as the control group and the other as the experimental or intervention group; (2) random assignment of subjects to groups; and (3) an intervention. RCTs are recognized for controlling, as far as possible, confounding variables that may influence results, so that the true effects of interventions can be determined. The term “efficacy” is used in conjunction with RCTs and is understood as the ability of the treatment being studied to produce the intended clinical effect. Efficacy is determined in the context of highly controlled, experimental research and is contrasted with effectiveness. The effectiveness of a therapy is determined by how well the treatment works when applied in the less controlled, “real” world of clinical practice, where diversity abounds and control of factors that might influence treatment success is limited. It is worth noting that following the recommended methods for designing and conducting the “perfect” RCT is not always possible.1 Available resources (time, money, personnel), the realities of clinical practice, and the diversity and complexity of human beings can limit the extent to which control is introduced and maintained during a trial.

Still, when planning a trial, researchers attempt to think through and design a study that is as near perfect as possible given all other considerations. Clinical trials comparing therapies are not a recent development. An early version relevant to wound healing is described by Lind,2 who served as physician on the ship HMS Salisbury. He used a sample of 12 patients with scurvy, as similar in characteristics as possible, and assigned them to 1 of 6 treatments, which they received for 6 days. Of course, the 2 who received the treatment of 2 oranges and 1 lemon on each day of the trial had a speedy resolution of lassitude, putrid gums, body sores, and generalized weakness. Although this study did not involve randomization of subjects, it is an example of the early use of a comparative, concurrent control design to test the effect of several therapies on a serious threat to health. RCTs are prospective, comparative, and randomized, as mentioned. Additional aspects of designing trials include specific approaches for determining sample size, eligibility, randomization, and appropriate masking.

SAMPLE SIZE

A major issue in the design of trials is establishing a sample size that is sufficient to demonstrate an effect of the therapy being tested, if one exists. Many trials produce inconclusive results because of inadequate sample size, and systematic reviews of specific therapies have identified small sample sizes as a limitation of many clinical studies. An adequate sample size is based on the expected difference between the arms (groups) of the study, referred to as the “effect size” of a treatment. A common problem is overestimation of the true effect size.3 In general, the larger the effect size of a given treatment, the smaller the sample needed. Thus, if a sample size is estimated based on a large treatment effect when in reality there is only a small to moderate effect, the study will be underpowered to detect whether the treatment is beneficial.

Determining sample size is based on prestudy calculations known as a “power analysis.” In brief, this is a calculation performed to estimate the sample size needed to adequately test the difference between two (or more) therapies and establish whether one is superior to the other. The details of power analysis calculations are beyond the scope of this article but will be covered in a future column. In the context of designing trials, power analysis is an important component of planning and can often be best accomplished by consulting a statistician. Information to have for a consultation on power and sample size includes (1) how big an effect is expected with the new treatment (eg, compared with controls, how many weeks to healing are expected for those receiving the new treatment?); (2) how much variability there is in the outcome variable; and (3) what percentage of the sample is expected to drop out. Existing studies or any pilot data you might have are useful means of identifying this information.
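To make these three inputs concrete, here is a minimal sketch of a two-group sample size calculation using the normal approximation for comparing means. The effect size, standard deviation, and dropout figures are hypothetical illustrations, not values from the column; any real trial should have the calculation confirmed by a statistician.

```python
import math
from scipy.stats import norm

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80, dropout=0.20):
    """Approximate subjects per arm for a two-sample comparison of means
    (normal approximation), inflated for expected dropout."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # desired power
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n / (1 - dropout))  # inflate for anticipated dropout

# Hypothetical inputs: a new dressing expected to shorten healing by 2 weeks,
# standard deviation of healing time 4 weeks, 20% dropout anticipated.
print(sample_size_per_group(delta=2.0, sd=4.0))   # 79 per group

# Overestimating the effect size shrinks the computed n: a trial planned
# around a 4-week effect would enroll far fewer subjects and be badly
# underpowered if the true difference were only 2 weeks.
print(sample_size_per_group(delta=4.0, sd=4.0))   # 20 per group
```

The sketch illustrates the warning above: doubling the assumed effect size cuts the required sample to roughly a quarter, so an optimistic estimate leaves the study underpowered when the true effect is smaller.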

ELIGIBILITY

Clear criteria for who is eligible to be in the trial are essential. It is important to strike a balance between carefully defining the population of interest and not making inclusion criteria too restrictive. If inclusion criteria are too stringent, it becomes very difficult to obtain the sample size needed to test the intervention. One guiding principle is to select patients based on the intended future use of the therapy.4 That is, define the desired sample based on the therapies being compared, making sure the treatments are appropriate for the types of conditions experienced by the potential subjects. For example, comparing wound treatments for stage IV pressure ulcers would require selecting therapies designed to treat this stage of ulcer. In addition, potential subjects need to be recruited from a population that is likely to benefit from the treatment. This is particularly true for interventions aimed at preventing health problems, where it is important to select subjects with characteristics that place them at risk for the outcome of interest. It would make little sense to study an intervention hypothesized to reduce wound infection if the sample is drawn from patients for whom there is a low likelihood of this outcome.

In determining eligibility criteria, the diagnostic criteria related to the condition are also important; they help establish a reasonably homogeneous sample. For example, it is not useful to include many types of chronic wounds in one study. Wounds of the same cause are appropriate for comparison, but comparing healing response in venous ulcers with healing in neuropathic ulcers is mixing apples and oranges. Informative conclusions cannot be drawn from studies in which the diagnostic inclusion criteria are not well defined.

RANDOMIZATION

Random assignment is intended to limit investigator bias and to distribute demographic and individual characteristics equally between study groups. For this to occur, group assignment must be truly random. Methods in which patients are alternately assigned to treatment or control do not achieve this objective and are inadequate. Acceptable methods for randomizing subjects include using a random number table or computer-generated random numbers; details of this type of process, known as simple randomization, can be found in most research textbooks and are easy to implement. With simple randomization, however, it is always possible to have an imbalance in the number of subjects in each group. The blocked randomization technique circumvents this problem. With this method, specific block sizes are identified, and within each block the order of group assignment is randomized.5 For example, if the block size were set at 8, then within each block a random pattern assigning 4 subjects to each group would be established. Details of blocked randomization are provided by Friedman and colleagues.5 When randomization is based on the blocked method, it is important that those enrolling subjects not be aware of the block size. Using varied block sizes, and having someone not involved in recruitment prepare the random assignments within blocks, is recommended to limit sources of bias.

During recruitment, it is critical that any persons involved in screening or enrolling subjects not know group assignment. The focus when screening should be that either the control treatment or the intervention is appropriate for any potential subject. If assignment is known, there is always a risk that subjects will be selected because the clinician or study coordinator thinks they would be a good match for the particular treatment.
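As a rough illustration of these ideas, the sketch below generates a two-arm blocked randomization list with varied block sizes. The arm labels, block sizes, and function name are hypothetical choices for illustration, not a prescribed procedure.

```python
import random

def blocked_assignments(n_subjects, block_sizes=(4, 6, 8), seed=None):
    """Build a two-arm assignment list (A = control, B = intervention)
    using randomly varied block sizes, so each completed block is balanced
    and recruiters cannot infer upcoming assignments from a fixed size."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        size = rng.choice(block_sizes)            # varied block size
        block = ["A"] * (size // 2) + ["B"] * (size // 2)
        rng.shuffle(block)                        # random order within block
        assignments.extend(block)
    return assignments[:n_subjects]

# The list would be prepared in advance by someone not involved in
# recruitment and kept concealed from those enrolling subjects.
print(blocked_assignments(20, seed=1))
```

Truncating the list at the target enrollment can leave the final block incomplete, which is generally acceptable; the essential points are balance within completed blocks and concealment of the sequence from those who screen and enroll subjects.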

MASKING

Individuals who evaluate subjects for outcome events can be most objective if they are unaware of treatment assignment. Masking also limits the introduction of bias into the evaluative components of a study. It may not be possible to mask subjects or the study personnel conducting the trial to the treatment group; however, individuals masked to group assignment can perform the outcome measures. Doing so requires planning each person’s role on the project in advance and establishing procedures for how masking will be maintained. It may necessitate additional personnel and so becomes an item to include in the study budget.

Planning of RCTs involves prospective attention to a number of study design elements. As elaborated here, it is important to conduct a trial that enrolls adequate numbers of subjects and uses methods that reduce the introduction of bias, which could cloud or call into question the results. In the next issue of the Journal, part II of “Basics of Designing Clinical Trials” will cover outcome measures, compliance, and data analysis.

REFERENCES

1. Bigby M, Gadenne A-S. Understanding and evaluating clinical trials. J Am Acad Dermatol 1996;34:550-90.
2. Lind J. A treatise of the scurvy. Edinburgh: Sands, Murray and Cochran; 1753.
3. Applegate WB, Curb JD. Designing and executing randomized clinical trials involving elderly persons. J Am Geriatr Soc 1990;38:943-50.
4. Bailey A, Crook A, Machin D. Statistical methods for clinical trials. Blood Rev 1994;8:105-12.
5. Friedman LM, Furberg CD, DeMets DL. Fundamentals of clinical trials. 3rd ed. St Louis: Mosby; 1996.
