Sample size determination in case-control studies

J Clin Epidemiol Vol. 44, No. 6, pp. 609-612, 1991
0895-4356/91 $3.00 + 0.00 Copyright © 1991 Pergamon Press plc
Printed in Great Britain. All rights reserved

Letters to the Editors

COHEN'S KAPPA

Some recent issues of the Journal have included mention of Cohen's kappa, the nominal scale measure of agreement. See, for example, Donner and Li [1, p. 828]. Their reference to J. L. Fleiss's Statistical Methods for Rates and Proportions allows one to read in detail on the topic of agreement as well as on the standard error of kappa. Nonetheless, I thought your readers might appreciate the primary citations to Jacob Cohen's work on kappa [2, 3], the first of which was noted by Feinstein and Cicchetti [4] in the June 1990 issue of the Journal.

JAMES J. DIAMOND
Thomas Jefferson Medical College, Philadelphia, PA 19107-4099, U.S.A.

REFERENCES
1. Donner A, Li KYR. The relationship between chi-square statistics from matched and unmatched analyses. J Clin Epidemiol 1990; 43: 827-831.
2. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960; 20: 37-46.
3. Cohen J. Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968; 70: 213-220.
4. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol 1990; 43: 543-549.

SAMPLE SIZE DETERMINATION IN CASE-CONTROL STUDIES

In 1985, McKeown-Eyssen and Thomas provided a good introduction on how the distribution of exposure influences sample size determination in case-control studies [1]. Their sample size formulae, however, did not consider confounding variables. Although for simplicity we often ignore the influence of confounding factors in sample size determination, in practice it is important to assess how large an effect confounding factors would be expected to have on sample size determination. In this letter, to measure this effect in the presence of a confounder, we use the difference between the desired power and the expected power given by the McKeown-Eyssen and Thomas sample size formula. To help epidemiologists appreciate the concern raised here, we have included a quantitative discussion of the effect of a potential confounder in different situations. The discussion and the results presented here should be useful and interesting to practitioners. As discussed in McKeown-Eyssen and Thomas [1], we assume that the disease rates are exponentially related to X1 and X2 in the general population (i.e. r(X1, X2) = exp(a + bX1 + cX2)), where X1, the exposure variable, and X2, the confounder variable, are distributed as bivariate normal with means μ1 and μ2, variances σ1² and σ2², and covariance σ12. We use ρ to denote the simple correlation between X1 and X2.
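As a concrete illustration of this setup (our sketch, not part of the letter; all parameter values below are arbitrary placeholders), one can draw the exposure X1 and the confounder X2 from the stated bivariate normal distribution and evaluate the assumed disease-rate function in Python:

```python
import numpy as np

# Illustrative (hypothetical) parameter values for the model described above.
a, b, c = -5.0, 1.0, 1.0          # intercept, exposure effect, confounding effect
mu1, mu2 = 0.0, 0.0               # means of X1 (exposure) and X2 (confounder)
var1, var2, rho = 1.0, 1.0, 0.5   # variances and correlation

cov12 = rho * np.sqrt(var1 * var2)
cov = np.array([[var1, cov12],
                [cov12, var2]])

rng = np.random.default_rng(0)
X = rng.multivariate_normal([mu1, mu2], cov, size=100_000)
X1, X2 = X[:, 0], X[:, 1]

# Disease rate exponentially related to exposure and confounder.
rate = np.exp(a + b * X1 + c * X2)
print(f"mean disease rate: {rate.mean():.4f}")
```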


If X2 is a confounder in the above model (i.e. both c ≠ 0 and ρ ≠ 0), then the power achieved by the McKeown-Eyssen and Thomas sample size n0 = 2(Zα + Zβ)²/(b²σ1²), corresponding to a given power 1 - β at the α level (two-sided), would not equal the desired power 1 - β. To quantify the magnitude of the effect of the confounder X2 on the power for n0 under different situations, we can use arguments provided by Rao and Becker [2, 3] and a formula given by Lachin [4] to calculate the power Pn0 corresponding to the McKeown-Eyssen and Thomas sample size formula (details of the derivation of Pn0 are available upon request). In Table 1 we summarize the power Pn0 corresponding to the McKeown-Eyssen and Thomas sample size formula, calculated on the basis of a desired power of 0.90 at the 0.05 (two-sided) level, for the correlation ρ between the confounder and the exposure variables ranging from -0.50 to 0.50, the exposure and confounding effects b and c ranging from 1 to 2, and their variances σi² (i = 1, 2) ranging from 1 to 4. For example, in Table 1, when ρ = -0.50 and b = c = 1 with σ1² = σ2² = 1, the power given by the McKeown-Eyssen and Thomas sample size formula is only 0.37, which is substantially less than the desired power of 0.90. Furthermore, this power is reduced to 0.02 in the same situation when the confounding effect c increases to 2. In general, if both the exposure and the confounding variables increase the risk of the disease (i.e. b and c are positive, as in Table 1) and the correlation ρ is less than 0, the power corresponding to the McKeown-Eyssen and Thomas sample size formula is less than the desired power (Table 1). In the same situations except with ρ > 0, however, the former power is larger than the latter (Table 1). Readers should not, however, interpret the latter finding to mean that, in order to increase power, one should not adjust for the confounding effect. In fact, estimating the exposure effect without adjusting for the confounding effect can be very misleading. At a given α = 0.05 level (two-sided), for a desired power of 0.90 in the situation where the exposure effect b = 1, σ1² = 1, c = 1, σ2² = 1, and ρ = 0.1, the required sample size would increase from 21, which is obtained from the McKeown-Eyssen and Thomas sample size formula, to 28, which is calculated by taking the confounding effect into consideration in the sample size calculation [5]. The details of deriving the exact sample size formula that accounts for the confounding variable in the case-control study appear elsewhere [5].
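The derivation of Pn0 is available on request, so the short Python sketch below is only one plausible reconstruction. It assumes that ignoring X2 shifts the unadjusted slope estimate to b + cρσ2/σ1 (the usual omitted-variable result under the bivariate normal model), so that the noncentrality at n0 is scaled by k = 1 + cρσ2/(bσ1); under that assumption it reproduces the sample size of 21 quoted above and the entries of Table 1 below. The function names are ours.

```python
from scipy.stats import norm

def n_mckeown_eyssen_thomas(b, var1, alpha=0.05, power=0.90):
    """Sample size n0 = 2 (Z_alpha + Z_beta)^2 / (b^2 * sigma1^2)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 / (b ** 2 * var1)

def power_at_n0(b, c, rho, var1, var2, alpha=0.05, power=0.90):
    """Approximate power of the unadjusted test at n0 when X2 is ignored.

    Assumes the unadjusted slope tends to b + c*rho*sigma2/sigma1, so the
    noncentrality at n0 is scaled by k = 1 + c*rho*sigma2/(b*sigma1).
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    k = 1 + c * rho * (var2 ** 0.5) / (b * var1 ** 0.5)
    return norm.cdf((z_a + z_b) * k - z_a)

print(round(n_mckeown_eyssen_thomas(b=1, var1=1)))                # 21, as quoted above
print(f"{power_at_n0(b=1, c=1, rho=-0.1, var1=1, var2=1):.2f}")   # 0.83, matching Table 1
print(f"{power_at_n0(b=1, c=1, rho=-0.5, var1=1, var2=1):.2f}")   # 0.37, the example above
```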

Table 1. In the presence of a confounder, the power corresponding to the McKeown-Eyssen and Thomas sample sizes with a desired power of 0.90 at the 0.05 (two-sided) level, for the correlation ρ ranging from -0.50 to 0.50, the exposure effect b ranging from 1 to 2, the confounding effect c ranging from 1 to 2, and the variances σi² ranging from 1 to 4

                                      Correlation (ρ) = -0.1      Correlation (ρ) = 0.1
Confounding effect (c):                1     1     2     2         1     1     2     2
Confounder variance (σ2²):             1     4     1     4         1     4     1     4
Exposure effect (b), variance (σ1²):
  b = 1, σ1² = 1                      0.83  0.74  0.74  0.49      0.95  0.97  0.97  1.00
  b = 1, σ1² = 4                      0.87  0.83  0.83  0.74      0.93  0.95  0.95  0.97
  b = 2, σ1² = 1                      0.87  0.83  0.83  0.74      0.93  0.95  0.95  0.97
  b = 2, σ1² = 4                      0.89  0.87  0.87  0.83      0.91  0.93  0.93  0.95

                                      Correlation (ρ) = -0.5      Correlation (ρ) = 0.5
Confounding effect (c):                1     1     2     2         1     1     2     2
Confounder variance (σ2²):             1     4     1     4         1     4     1     4
Exposure effect (b), variance (σ1²):
  b = 1, σ1² = 1                      0.37  0.02  0.02  0.00      1.00  1.00  1.00  1.00
  b = 1, σ1² = 4                      0.68  0.37  0.37  0.02      0.98  1.00  1.00  1.00
  b = 2, σ1² = 1                      0.68  0.37  0.37  0.02      0.98  1.00  1.00  1.00
  b = 2, σ1² = 4                      0.81  0.68  0.68  0.37      0.95  0.98  0.98  1.00

Note also that for fixed ρ, b, and σ2, the expected power given by the McKeown-Eyssen and Thomas sample size formula moves further away from the desired power as the relative effect of the confounder to that of the exposure increases (Table 1 or formula (1)). This is consistent with the fact that if X2 is a strong confounder (i.e. c is large), the sample size calculation without considering X2 is certainly inaccurate. Discussions of sample sizes in the presence of confounders in other situations can also be found elsewhere [6-8].

KUNG-JONG LUI
Department of Mathematical Sciences
College of Sciences
San Diego State University
San Diego, CA 92182-0314, U.S.A.

REFERENCES

1. McKeown-Eyssen GE, Thomas DC. Sample size determination in case-control studies: the influence of the distribution of exposure. J Chron Dis 1985; 38: 559-568.
2. Rao RR. Sample size determination in case-control studies: the influence of the distribution of exposure. Letter to the Editors. J Chron Dis 1986; 39: 941-943.
3. Becker S. Sample size determination in case-control studies. Letter to the Editors. J Chron Dis 1987; 40: 1141-1143.
4. Lachin JM. Introduction to sample size determination and power analysis for clinical trials. Contr Clin Trials 1981; 2: 93-113.
5. Lui KJ. Sample size determination for case-control studies: the influence of the joint distribution of exposure and confounder. Centers for Disease Control; No. 89-0227, 1989.
6. Wilson ST, Gordon I. Calculating sample sizes in the presence of confounding variables. Appl Stat 1986; 35: 207-213.
7. Gail M. The determination of sample sizes for trials involving several independent 2 x 2 tables. J Chron Dis 1973; 26: 669-673.
8. Lubin JH, Gail MH, Ershow AG. Sample size and power for case-control studies when exposures are continuous. Stat Med 1988; 7: 363-376.

Response

Dr Lui [1] has pointed out an important and often neglected element of sample size determination for case-control studies: the need to account for confounding variables in either the design or the analysis. In order to focus our paper [2] on the sample size needed for modelling dose-response relationships for continuous exposure variables, we chose to ignore this aspect. Obviously any analysis that failed to account for a strong confounder would be wrong. Dr Lui's table indicates just how wrong one could be for various plausible combinations of the relevant factors. Interpretation of his table is difficult, however, because he presents estimates of power, not bias. Power is a meaningless concept when the size of a test (the type I error rate) is biased: failure to control for confounding means the probability of rejecting the null hypothesis will be different from the nominal level even if the null hypothesis is true. One should have no interest in a significance test with 100% power if the test size were not equal to its nominal level. To take an extreme example, an attractive significance test might be the following: "always reject the null hypothesis (whatever the data say)". This test has 100% power, but unfortunately it is not a test with a size of 5%. Use of a test that is not controlled for confounding would fall into this class.
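To make the size-distortion point concrete, here is a small Python simulation (ours, not part of the original exchange; all parameter values are arbitrary). The exposure X1 has no effect on disease (b = 0), but it is correlated with a confounder X2 that does, and a naive unadjusted comparison of exposure between diseased and non-diseased subjects rejects the null hypothesis far more often than the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def one_trial(n=400, c=1.0, rho=0.5):
    """Simulate one study in which the exposure X1 has NO effect (b = 0)
    but is correlated with a confounder X2 that does affect disease risk."""
    cov = [[1.0, rho], [rho, 1.0]]
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    x1, x2 = X[:, 0], X[:, 1]
    p = 1.0 / (1.0 + np.exp(-(-1.0 + c * x2)))   # logistic risk, exposure omitted
    y = rng.binomial(1, p)
    if y.min() == y.max():                        # degenerate sample, skip comparison
        return False
    # "Unadjusted analysis": compare exposure between diseased and non-diseased.
    return ttest_ind(x1[y == 1], x1[y == 0]).pvalue < 0.05

rejections = np.mean([one_trial() for _ in range(2000)])
print(f"empirical type I error of the unadjusted test: {rejections:.2f}")  # well above 0.05
```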

A more meaningful comparison would be to present the power of significance tests derived from case-control studies in which confounders have been controlled, either by matching on the confounder(s) or by an appropriate multivariate model. One of us [3] has done this for the matched and unmatched-but-adjusted logistic regression model with normally distributed exposure and confounding variables, and Dr Lui has apparently done this independently for other types of distributions [4], although the details of his work have not yet been published. Unfortunately, while it may be desirable to allow for confounding at the design stage of an investigation, it may prove difficult to perform appropriate sample size determinations because the necessary information on the joint distribution of exposure and confounder(s) and on the magnitude of the association between confounder(s) and disease may not be available. How then is the epidemiologist to proceed? We offer the following strategy as a rough approximation. Since the parameter of interest is the slope coefficient of the exposure effect adjusted for confounders, it is the anticipated value of this adjusted slope that should be specified in our sample size formula [2]. Likewise, instead of using the moments (variance and covariance) of the marginal distribution of exposure in the population, one should use the conditional moments of the exposure distribution within confounder strata. With these two modifications, we would expect that our formulae would provide a first approximation to the correct sample size, provided the degree
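Under the bivariate normal model of the preceding letter, the conditional variance of exposure within confounder strata is σ1²(1 - ρ²), so one rough Python rendering of this two-step strategy (our sketch; the adjusted-slope value and the exact form of the resulting formula are assumptions, not the authors' published method) is:

```python
from scipy.stats import norm

def n_adjusted(b_adj, var1, rho, alpha=0.05, power=0.90):
    """Plug the confounder-adjusted slope and the conditional exposure
    variance sigma1^2 * (1 - rho^2) into the McKeown-Eyssen and Thomas
    formula n = 2 (Z_alpha + Z_beta)^2 / (slope^2 * variance)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    cond_var = var1 * (1 - rho ** 2)
    return 2 * (z_a + z_b) ** 2 / (b_adj ** 2 * cond_var)

# Hypothetical example: adjusted slope 1, exposure variance 1, correlation 0.5.
print(round(n_adjusted(b_adj=1.0, var1=1.0, rho=0.5)))  # about 28, vs ~21 with the marginal variance
```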