CLINICIANS’ GUIDE TO RESEARCH METHODS AND STATISTICS

Deputy Editor: Robert J. Harmon, M.D.

Single-Factor Repeated-Measures Designs: Analysis and Interpretation

JEFFREY A. GLINER, PH.D., GEORGE A. MORGAN, PH.D., AND ROBERT J. HARMON, M.D.

The analyses discussed in this column are used in designs with one independent variable that has two or more levels or conditions, in which participants are measured under all conditions. These designs are referred to as within-subjects, dependent-samples, or repeated-measures designs, and we use these terms interchangeably here. This means that participants undergo all conditions of the study or that participants are matched on some variable(s) assumed to be related to the dependent variable. The types of research approaches used with single-factor, within-subjects/repeated-measures designs are most often randomized experimental or quasi-experimental. The comparative approach can be used with repeated-measures designs if the participants are matched (e.g., students with and without attention-deficit/hyperactivity disorder are matched on IQ and gender). To facilitate our discussion of within-subjects designs, we provide an example by Compton et al. (2001) from this Journal. These researchers assessed the therapeutic benefits, response pattern, and safety of sertraline in children with social anxiety disorder. Sertraline, a selective serotonin reuptake inhibitor, is used as a pharmacological treatment of pediatric obsessive-compulsive disorder. In their study, 14 adolescents participated in an open 8-week sertraline drug trial. After an initial fixed dosage level, the dosage during the study was adjusted by the treating psychiatrist with the intent to find an optimal dose for each client. Numerous outcome variables were recorded for this study. All outcome variables were recorded at baseline (pre) and at the end of the 8 weeks (post). Some of the outcome measures also were recorded at the second, fourth, and sixth weeks. For those outcome variables measured only at pre and post, paired-samples t tests were performed. For those outcome variables that included the five measurement periods, single-factor, repeated-measures analyses of variance (ANOVAs) were performed, accompanied by post hoc comparisons following statistically significant results.

ADVANTAGES OF WITHIN-SUBJECTS DESIGNS

An obvious advantage of using a within-subjects/repeated-measures design is that fewer participants are needed in the study. If the Compton et al. (2001) study had used a between-groups design, the investigators would have needed a separate group of adolescents for each dosage. The repeated-measures design saves time in recruitment of participants, especially, as in our example, if participants have characteristics that are not common. A more important reason for selecting a repeated-measures design is that variability among participants should be reduced. The statistical analysis of single-factor designs is usually a t test or ANOVA, which conceptually can be thought of as a ratio of the variability between groups to the variability within groups. In a repeated-measures design, each participant undergoes all of the conditions. Therefore, it is expected that any changes from condition to condition (dosages in this example) are due to the nature of a particular condition (treatment) and not to variability among participants (error), because the same participant experiences each of the conditions. This reduction in error variance increases the size of the t or F ratio and results in a greater probability of finding a statistically significant difference if one is actually there.

DISADVANTAGES OF WITHIN-SUBJECTS DESIGNS

Accepted March 11, 2002. Dr. Gliner is Professor of Occupational Therapy and Education, Colorado State University, Fort Collins. Dr. Morgan is Professor of Education and Human Development, Colorado State University, Fort Collins, and Clinical Professor of Psychiatry, University of Colorado School of Medicine, Denver. Dr. Harmon is Professor of Psychiatry and Pediatrics and Head, Division of Child and Adolescent Psychiatry, University of Colorado School of Medicine, Denver. The authors thank Linda White and Nancy Plummer for manuscript preparation. Parts of the column are adapted, with permission from the publisher and the authors, from Gliner JA and Morgan GA (2000), Research Methods in Applied Settings: An Integrated Approach to Design and Analysis. Mahwah, NJ: Erlbaum. Permission to reprint or adapt any part of this column must be obtained from Erlbaum. Reprint requests to Dr. Harmon, CPH Room 2K04, UCHSC Box C268-52, 4200 East Ninth Avenue, Denver, CO 80262.


Although within-subjects/repeated-measures designs are advantageous in reducing error variance, there are two distinct disadvantages. First, repeated-measures designs (with the exception of matching) cannot be used in situations in which a lasting effect of a treatment might take place. This problem is often referred to as carry-over effects. For example, studies of educational or psychological interventions would not use repeated-measures designs because once participants experienced the treatment condition, they could not be expected to “unlearn” the treatment. Because of carry-over effects, repeated-measures designs are not seen as frequently
as between-groups designs in the clinical literature. In studies like that of Compton et al. (2001), the effect of a specific dosage is assumed not to be long-lasting. One method of circumventing carry-over effects while still gaining the advantage of reduced error variance is to use a matching procedure. Participants are grouped into pairs (dyads) or triplets (triads) on the basis of some characteristic that should be related to the dependent variable, for example, intelligence. After participants are matched, they are assigned to groups (randomly, if possible) and then the study is carried out. Conceptually, the idea of matching is to treat each member of the pair or triad as though he or she were the same participant undergoing all conditions. Therefore, designs that use matching are considered to be within-subjects designs and use similar statistical procedures. A second disadvantage of repeated-measures designs is that the degrees of freedom in the study are reduced. If one did a repeated-measures study with two conditions, it would take half as many participants to gather the same amount of data, because each person would be measured twice. For example, consider a between-groups study that compares an intervention condition to a control condition, with 20 participants in each condition. There are 40 participants, or 38 degrees of freedom (df = [n1 – 1] + [n2 – 1] for an independent-samples t test). On the other hand, suppose that a repeated-measures design was used. Each participant would undergo both conditions, so only 20 participants would be needed. The degrees of freedom would be n – 1, or only 19; thus statistical power may be reduced. The decision of whether to use a between-groups design or a within-subjects design involves a trade-off between increased sample size (df) and reduced error variance.
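The degrees-of-freedom arithmetic above can be checked in a few lines. This is a minimal sketch using the hypothetical 20-participants-per-condition example from the text:

```python
# Degrees of freedom for the two designs described above.

def df_between(n1: int, n2: int) -> int:
    """Independent-samples t test: df = (n1 - 1) + (n2 - 1)."""
    return (n1 - 1) + (n2 - 1)

def df_within(n: int) -> int:
    """Paired-samples t test: n participants measured twice, df = n - 1."""
    return n - 1

# 40 participants split into two groups versus 20 participants measured twice:
print(df_between(20, 20))  # 38
print(df_within(20))       # 19
```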

ANALYZING SINGLE-FACTOR WITHIN-SUBJECTS DESIGNS WITH PARAMETRIC STATISTICS

Now we will discuss the t test for paired samples and the single-factor repeated-measures ANOVA, which are used with single-factor within-subjects/repeated-measures designs.

t Test for Paired or Related Samples

This t test is used when there is one independent variable with two levels, and the participants undergo both conditions or pairs of participants have been matched on a relevant variable. The dependent variable is normally distributed (often said to be interval or ratio scale), and the variances are approximately equal. Compton et al. (2001) used the t test for related samples to compare outcome variables measured only at the baseline (pre) and 8-week (post) periods. An example of one of the outcome variables used for pretest-posttest data in this study was the behavior avoidance test (BAT). Participants rated their level of
subjective distress on a scale from 0 to 100 after two tasks involving social anxiety. The t test was selected because there was one independent variable with two repeated measures (pre and post). Compton et al. (2001) concluded, with respect to the BAT, “A pre-post analysis of subjects’ average ratings of subjective distress for these two tasks revealed a significant decrease in subjective distress after the 8-week treatment of sertraline (t13 = 4.27, p < .001)” (p. 568). Notice that the subscript, 13, is the degrees of freedom for this study (N – 1). The mean and standard deviation are based on the difference scores between the pre and post measures. The notation p < .001 indicates that the probability of this outcome, or one more extreme, assuming a true null hypothesis, is less than 1 in 1,000, a very rare occurrence. The null hypothesis of no difference between conditions is rejected.
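A paired t test of this kind is straightforward to run in standard software. Below is a minimal sketch using `scipy.stats.ttest_rel` with invented pre/post distress ratings for 14 participants (illustrative values only, not the Compton et al. data):

```python
# Paired-samples t test on hypothetical pre/post distress ratings (0-100 scale).
import numpy as np
from scipy import stats

pre  = np.array([80, 75, 90, 60, 85, 70, 95, 65, 78, 88, 72, 83, 69, 91], dtype=float)
post = np.array([55, 60, 70, 45, 62, 50, 80, 40, 58, 66, 49, 61, 52, 74], dtype=float)

t, p = stats.ttest_rel(pre, post)   # the test operates on the pre-post difference scores
df = len(pre) - 1                   # N - 1 = 13, matching the t13 notation above

print(f"t({df}) = {t:.2f}, p = {p:.2e}")
```

The df of 13 matches the N – 1 subscript in the quoted result; the particular t and p values are artifacts of the invented data.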

Single-Factor ANOVA With Repeated Measures

The single-factor ANOVA with repeated measures is performed in designs with one independent variable, two or more levels, and participants who undergo all conditions or levels of the study. The dependent variable is distributed normally, and variances are similar for each condition. The Compton et al. (2001) study used this statistic to analyze outcome measures recorded for each participant at all five times: baseline (pre), second week, fourth week, sixth week, and eighth week (post). One of the dependent variables, used to measure overall clinical improvement, was the Clinical Global Impressions Scale-Improvement (CGI-I), with scores ranging from 1 (very much improved) to 7 (very much worse). Compton et al. (2001) stated, “A repeated-measures ANOVA on the psychiatrist’s mean CGI-I ratings was conducted to determine in which week a subject’s change in clinical status was significantly different from baseline. The main effect for time was significant, indicating that CGI-I ratings declined over time (F4,52 = 19.54, p < .001)” (p. 567). The subscripts following the F indicate the two different degrees of freedom used in the ANOVA: 4 is the number of time periods minus one, and 52 is the number of participants minus one multiplied by the number of time periods minus one ([14 – 1] × [5 – 1] = 52). The finding that p was < .001 indicates that the probability of this outcome, or one more extreme, assuming a true null hypothesis, is less than 1 in 1,000. Therefore, the null hypothesis that the means for the five time periods were equal was rejected. However, to determine where these differences among times took place, a post hoc test must be performed. Compton et al. (2001) stated this outcome as follows: “Using a Bonferroni adjustment, tests of within-subject comparisons across time revealed that a significant change in clinical status relative to baseline occurred by week 6 of the trial (4.00 versus 2.64, p < .001)” (p. 567).
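The F ratio in a single-factor repeated-measures ANOVA can be computed by hand, which makes the two degrees-of-freedom values concrete. The sketch below uses simulated scores for 14 participants at 5 time points (again, not the study data); it partitions the total sum of squares into time, subject, and residual components:

```python
# One-way repeated-measures ANOVA computed by hand with NumPy
# (hypothetical CGI-I-style ratings; illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 14, 5                                        # 14 participants, 5 time points
base = rng.normal(0, 0.5, size=(n, 1))              # stable subject effects
trend = np.array([5.0, 4.4, 3.8, 3.0, 2.6])         # scores decline over time
scores = base + trend + rng.normal(0, 0.3, (n, k))  # rows = participants, cols = times

grand = scores.mean()
ss_time = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_subj = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ss_error = ss_total - ss_time - ss_subj             # residual (subject-by-time) term

df_time, df_error = k - 1, (n - 1) * (k - 1)        # 4 and 52 here
F = (ss_time / df_time) / (ss_error / df_error)
p = stats.f.sf(F, df_time, df_error)

print(f"F({df_time},{df_error}) = {F:.2f}, p = {p:.2e}")
```

Note that the subject sum of squares is removed from the error term; that removal is exactly the error-variance advantage of the within-subjects design discussed earlier.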
There are other post hoc comparisons that could be performed following a statistically significant repeated-measures ANOVA, depending on the question asked.
For example, Compton et al. (2001) stated, “Trend analyses were also conducted to identify the form or shape of the relationship between the independent and dependent variable of interest” (p. 567).

The assumptions of independence, homogeneity of variance, and normality discussed in the previous column (Gliner et al., 2002) also need to be considered for the t test for paired samples and the single-factor repeated-measures ANOVA. In addition, a further assumption, sphericity, must be considered for the single-factor repeated-measures ANOVA. Conceptually, the sphericity assumption is satisfied when the correlations among the scores of the different levels are equal. In the Compton et al. (2001) study, because there were five levels of time in the single-factor repeated-measures design, the correlation between baseline and week 2 must be similar to the correlation between baseline and week 4, which must be similar to the correlation between weeks 2 and 4, and so on. Usually, however, the farther apart two time periods are, the lower the correlation coefficient. If the assumption is violated, the Type I error rate is inflated. However, most statistical computer programs provide a correction for violation of the sphericity assumption.
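One informal way to see the sphericity idea is to inspect the correlation matrix of the repeated measures; formal checks use Mauchly's test, and the corrections that packages apply (e.g., Greenhouse-Geisser) adjust the degrees of freedom downward. A sketch with simulated scores:

```python
# Eyeballing the sphericity idea: correlations among repeated measures
# (simulated data; a real check would use Mauchly's test or a correction).
import numpy as np

rng = np.random.default_rng(1)
n, k = 14, 5
subject = rng.normal(0, 1.0, size=(n, 1))           # shared subject effect
scores = subject + rng.normal(0, 0.5, size=(n, k))  # five repeated measures

r = np.corrcoef(scores, rowvar=False)   # k x k correlation matrix of the time points
print(np.round(r, 2))                   # roughly equal off-diagonal values here
```

With real longitudinal data, the off-diagonal correlations typically shrink as the time points grow farther apart, which is the pattern that violates sphericity.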

ANALYZING SINGLE-FACTOR WITHIN-SUBJECTS DESIGNS WITH NONPARAMETRIC STATISTICS

Nonparametric statistics are used with within-subjects/repeated-measures designs when one of the assumptions underlying use of parametric statistics has been violated. We will briefly discuss four nonparametric statistics.

Wilcoxon Signed Ranks Matched Pairs Test

The Wilcoxon matched pairs test is used in a design in which there is one independent variable with two levels, and the participants undergo both conditions or pairs of participants have been matched on a relevant variable. The dependent variable data are ordinal (and not normally distributed), or the assumptions of the t test for paired samples have been violated.
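A minimal sketch with `scipy.stats.wilcoxon` and invented ordinal ratings (pairs with a zero difference are dropped by the default method):

```python
# Wilcoxon matched-pairs signed-ranks test on hypothetical ordinal ratings.
from scipy import stats

pre  = [6, 5, 7, 4, 6, 5, 7, 6, 5, 6, 4, 7, 5, 6]
post = [4, 3, 5, 4, 4, 3, 6, 4, 3, 5, 2, 5, 4, 4]

# Tests whether the paired differences are symmetric about zero.
res = stats.wilcoxon(pre, post)
print(f"W = {res.statistic}, p = {res.pvalue:.4f}")
```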

Friedman Two-Way ANOVA by Ranks

The Friedman test is used in a repeated-measures design when there is one independent variable, three or more levels, and the dependent variable is ordinal (and not normally distributed) or violations of ANOVA assumptions have occurred. Because the Friedman test is carried out on data with more than two levels or conditions, a statistically significant result must be followed by some post hoc comparison to determine specific differences. The Wilcoxon test may be used as a post hoc test in this situation if the Friedman test is statistically significant. Note, however, that using the Wilcoxon test after the Friedman test is analogous to using the least significant difference test after an ANOVA; it is somewhat “liberal” (see Gliner et al., 2002).
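A minimal sketch with `scipy.stats.friedmanchisquare` and invented ratings at three time points:

```python
# Friedman test on hypothetical ordinal ratings at three time points.
from scipy import stats

week0 = [6, 5, 7, 4, 6, 5, 7, 6]
week4 = [5, 4, 5, 4, 4, 4, 6, 5]
week8 = [3, 3, 4, 2, 3, 3, 5, 4]

# Ranks each participant's three scores and tests whether the
# mean ranks differ across conditions.
res = stats.friedmanchisquare(week0, week4, week8)
print(f"chi2 = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```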

The McNemar Test

The McNemar test is used in designs similar to those for the paired t or Wilcoxon test, but the dependent variable consists of nominal or categorical data. The McNemar test is similar to the χ2 test in that frequencies are the unit of measurement and can be visualized in a cross-tabulation table. However, because each participant undergoes both conditions of the study, there are important differences from the χ2 test for independence.
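The classic (uncorrected) McNemar statistic depends only on the two discordant cells of the paired 2 × 2 table. A hand-computed sketch with hypothetical counts:

```python
# McNemar test computed from the discordant cells of a 2x2 paired table
# (hypothetical counts; b = "case" pre only, c = "case" post only).
from scipy import stats

#                  post: case   post: non-case
# pre: case           a = 3         b = 9
# pre: non-case       c = 1         d = 1
b, c = 9, 1

chi2 = (b - c) ** 2 / (b + c)   # classic McNemar statistic, 1 df
p = stats.chi2.sf(chi2, df=1)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

With small discordant counts, an exact binomial version or a continuity-corrected statistic is often preferred; the uncorrected form is shown here because it is the simplest to follow.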

Cochran Q Test

The Cochran Q test is used in designs similar to those for the repeated-measures ANOVA or Friedman test, but, again, the dependent variable is nominal or categorical and can have only two categories. A statistically significant Cochran Q test, like other analyses that compare more than two levels or conditions, must be followed by a post hoc test to determine specific differences among conditions.
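Cochran's Q has a closed form in the condition totals and participant totals, and is referred to a χ2 distribution with k – 1 degrees of freedom. A hand-computed sketch with hypothetical binary data:

```python
# Cochran Q test computed by hand for binary outcomes at three time points
# (hypothetical 0/1 data: 1 = symptom present).
import numpy as np
from scipy import stats

# rows = participants, columns = conditions/time points
x = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 0],
])
k = x.shape[1]
col = x.sum(axis=0)   # condition totals G_j
row = x.sum(axis=1)   # participant totals L_i
T = x.sum()           # grand total

Q = (k - 1) * (k * (col ** 2).sum() - T ** 2) / (k * T - (row ** 2).sum())
p = stats.chi2.sf(Q, df=k - 1)
print(f"Q = {Q:.2f}, p = {p:.4f}")
```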

SUMMARY

In this column we discussed the selection and interpretation of appropriate statistical tests for single-factor within-subjects/repeated-measures designs and provided an example from the literature. The parametric tests that we discussed were the t test for paired or correlated samples and the single-factor repeated-measures ANOVA. We also mentioned four nonparametric tests to be used with single-factor within-subjects/repeated-measures designs, but they are relatively rare in the literature. The Compton et al. (2001) article did not provide effect size measures, but they could be computed from the means and standard deviations. Remember that a statistically significant t or ANOVA (even if p < .001) does not mean that there was a large effect, especially if the sample was large. In the Compton example, the sample was quite small (N = 14), and the findings do reflect a large effect size.

REFERENCES

Compton SN, Grant PJ, Chrisman AK, Gammon PJ, Brown VL, March JS (2001), Sertraline in children and adolescents with social anxiety disorder: an open trial. J Am Acad Child Adolesc Psychiatry 40:564–571

Gliner JA, Morgan GA, Harmon RJ (2002), Single-factor between-groups designs: analysis and interpretation. J Am Acad Child Adolesc Psychiatry 41:742–744

The next article in this series appears in the October 2002 issue: Basic Associational Designs: Analysis and Interpretation, by Jeffrey A. Gliner, George A. Morgan, and Robert J. Harmon.

J Am Acad Child Adolesc Psychiatry, 41:8, August 2002