Computers in Biology and Medicine 34 (2004) 113 – 125 www.elsevierhealth.com/locate/compbiomed
Sample size calculator for cluster randomized trials

Marion K. Campbell∗, Sean Thomson, Craig R. Ramsay, Graeme S. MacLennan, Jeremy M. Grimshaw

Health Services Research Unit, University of Aberdeen, Polwarth Building, Foresterhill, Aberdeen, Scotland AB25 2ZD, UK

Received 30 August 2002; received in revised form 24 March 2003; accepted 24 March 2003
Abstract

Cluster randomized trials, where individuals are randomized in groups, are increasingly being used in healthcare evaluation. The adoption of a clustered design has implications for the design, conduct and analysis of studies. In particular, standard sample sizes have to be inflated for cluster designs, as outcomes for individuals within clusters may be correlated; inflation can be achieved either by increasing the cluster size or by increasing the number of clusters in the study. A sample size calculator is presented for calculating appropriate sample sizes for cluster trials, whilst allowing the implications of both methods of inflation to be considered.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Cluster randomized trial; Sample size; Calculator; Inflation factor
1. Introduction

Traditionally the patient randomized controlled trial has been widely accepted as the method of choice for evaluating new healthcare interventions [1,2]. In many situations, however, the use of a cluster randomized trial (where groups rather than individuals are randomized) may be advantageous. For example, when evaluating interventions targeted at the health professional (such as the impact of educational training on good clinical practice), randomizing at the level of the professional rather than the individual patient may be the only feasible method of conducting a randomized trial in this field [3]. Similarly, when assessing a dietary intervention, it is common to randomize families as an intact unit, to avoid the possibility of different members of the same family being assigned to different interventions.
∗ Corresponding author. Tel.: +44-1224-554480; fax: +44-1224-663087. E-mail address: [email protected] (M.K. Campbell).
0010-4825/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/S0010-4825(03)00039-8
The adoption of a cluster design impacts on the design, conduct and analysis of the study. One particular impact is the influence of clustering on sample size requirements, as the clustering reduces the efficiency of the design. The main reason for this is that standard sample size calculations for patient-based trials only accommodate variation between individuals. In cluster studies, however, there are two separate components of variation: variation among individuals within clusters, but also variation in outcome between clusters.

This paper presents the development of a sample size calculator for the planning of cluster randomized trials. The impact of clustering on sample size calculations will be outlined, together with potential strategies to achieve the increased sample size required. The algorithms developed for use within the calculator will then be described, together with examples of the use of the calculator.

2. Impact of clustering on sample size

A fundamental assumption of the patient-randomized trial is that the outcome for an individual patient is completely unrelated to that for any other patient; they are said to be 'independent'. This assumption no longer holds when cluster randomization is adopted, because patients within any one cluster are more likely to respond in a similar manner. For example, the management of patients in a single hospital is more likely to be consistent than management across a number of hospitals. A statistical measure of this intracluster dependence is the 'intracluster correlation coefficient' (ICC), which is based on the relationship of the between- to within-cluster variance [4]. For example, in a study that randomized by hospital, the ICC would be high if the management of patients within hospitals was very consistent but there was wide variation in practice across hospitals.

Because standard (i.e. individually based) sample size formulae do not account for the between-cluster variation, their direct use for cluster trials results in sample size estimates that are too small, resulting in under-powered studies. For a completely randomized design, to achieve the equivalent power of a patient-randomized trial, standard sample size estimates require to be inflated by a factor

$1 + (\bar{n} - 1)\rho$,   (1)

to accommodate for the clustering effect, where $\bar{n}$ is the average cluster size and $\rho$ is the estimated ICC (assuming the clusters are of a similar size) [5]. This inflation factor is commonly known as the 'design effect' or the 'variance inflation factor' [3,6].

The impact of randomization by cluster on the required sample size can be substantial. For example, consider a study to evaluate the effectiveness of educational training on asthma management for general practitioners. The study is planned to detect a change in appropriate management of asthma patients from 40% to 60%. Randomization by general practice is considered the most appropriate design (and it is known that approximately 20 asthma patients would be managed in each practice). Patient-based sample size calculations would suggest that a total of 194 patients are required, assuming standard statistical levels of 80% power and a 5% significance level [7]. However, when appropriate adjustment for clustering by general practice is taken into account, even assuming a fairly small clustering effect (ICC = 0.05), the true sample size requirement is 400 (a doubling of the original estimate). From Eq. (1) above, it can be seen that both the ICC and the cluster size influence the calculation. Thus, for even small values of ICC, the design effect can be substantial when cluster sizes are large.
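As a concrete illustration of Eq. (1), the short Python sketch below (illustrative only; the calculator itself was written in Visual Basic, as described in Section 4.3) inflates the unadjusted sample size for the asthma example. Rounding the number of practices per arm up to a whole number is our assumption about how the figure of 400 is reached.

```python
# A minimal sketch of the inflation step, assuming clusters per group are
# rounded up to the next whole cluster (not taken verbatim from the calculator).
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation factor of Eq. (1): 1 + (n_bar - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

unadjusted_total = 194          # 40% vs 60%, 80% power, 5% significance [7]
cluster_size, icc = 20, 0.05
deff = design_effect(cluster_size, icc)
clusters_per_group = math.ceil(unadjusted_total / 2 * deff / cluster_size)
total_patients = 2 * clusters_per_group * cluster_size
print(round(deff, 2), clusters_per_group, total_patients)   # 1.95 10 400
```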
3. Strategies for increasing sample size

To achieve the increased sample size required, it is possible either to increase the number of clusters to be recruited to the study or to increase the number of subjects included from each cluster. Compared with increasing the number of clusters to be included within a study, however, the effect on power of increasing the cluster size is minimal [8]. Whilst increasing the number of clusters is theoretically the more effective method in redressing the loss of efficiency caused by clustering, this is not always feasible. Practical considerations, such as the cost of recruiting extra clusters and extra staffing costs, require to be taken into account [9]. It would, therefore, be helpful for researchers and clinicians to be able to explicitly consider this trade-off when planning cluster trials.

A further problem often faced by researchers is a fixed number of clusters available. This is often seen in regional studies of primary care interventions, where there is, for example, a set number of general practices available within a health region. In this scenario, the researcher is faced with a slightly different sample size problem. Rather than pre-specifying the desired detectable difference, pragmatic limitations require that the researcher ask 'What difference can I hope to detect given a fixed sample size?'.

Reflecting these issues, the sample size calculator was developed to both:
• calculate appropriate sample size requirements for cluster trials whilst allowing explicit trade-offs between number of clusters and cluster size to be considered; and
• calculate the minimum difference detectable for a given fixed cluster sample size.

4. Sample size calculator development

4.1. Scope of calculator

The sample size calculator was developed to address sample size issues for two group comparisons of either means or proportions, assuming:
• a completely randomized design;
• 1:1 randomization; and
• equal cluster sizes.

Reflecting the desire for clinicians and researchers to consider the trade-off between increasing the cluster size or increasing the number of clusters, the sample size calculator was designed to allow the trade-off to be explicitly considered, through presentation of both options in a spreadsheet-type layout.

4.2. Sample size formulae adopted

4.2.1. Comparison of means

For the comparison of means, the following standard sample size formula was used to calculate the initial unadjusted sample size requirements.
To detect a difference, $\delta$, in means:

$m = \dfrac{4[z_{1-\alpha/2} + z_{1-\beta}]^2}{d^2}$,   (2)

where

$d = \dfrac{\delta}{\sigma}$   (3)

is the standardized difference ($\sigma$ being the standard deviation of the measure), $\alpha$ is the chosen significance level, $1-\beta$ is the desired power, the $z$-values are standard Normal deviates, and $m$ is the required total number of individuals to be studied [7].
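The following Python sketch (names and structure are ours, not the calculator's) computes the unadjusted total sample size from Eqs. (2) and (3); the values used in the call are those of the blood-pressure example worked through in Section 5.1:

```python
# Minimal sketch of the unadjusted calculation for means, Eqs. (2)-(3).
from statistics import NormalDist

def unadjusted_m_means(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Total number of individuals (both groups combined) to detect a difference in means."""
    d = delta / sd                                                       # Eq. (3)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 4 * z ** 2 / d ** 2                                           # Eq. (2)

# 5 mmHg difference, 15 mmHg standard deviation, 5% significance, 80% power.
print(round(unadjusted_m_means(5, 15)))   # 283 individuals before any adjustment for clustering
```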
4.2.2. Comparison of proportions

Similarly, for the comparison of proportions, the following standard sample size formula was used to calculate the initial unadjusted sample size requirements. To detect a difference between proportions $p_A$ and $p_B$:

$m = \dfrac{2\left[z_{1-\alpha/2}\sqrt{2\bar{q}(1-\bar{q})} + z_{1-\beta}\sqrt{p_A(1-p_A) + p_B(1-p_B)}\right]^2}{(p_A - p_B)^2}$,   (4)

where

$\bar{q} = \dfrac{p_A + p_B}{2}$   (5)

and $m$, $\alpha$, $\beta$ and the $z$-values are as before [7].
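The corresponding sketch for proportions (again illustrative rather than the calculator's own code) reproduces the unadjusted figure of 194 patients quoted for the asthma-management example in Section 2:

```python
# Minimal sketch of the unadjusted calculation for proportions, Eqs. (4)-(5).
from math import sqrt
from statistics import NormalDist

def unadjusted_m_proportions(p_a: float, p_b: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Total number of individuals (both groups combined) to detect p_A versus p_B."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    q_bar = (p_a + p_b) / 2                                              # Eq. (5)
    bracket = (z_a * sqrt(2 * q_bar * (1 - q_bar))
               + z_b * sqrt(p_a * (1 - p_a) + p_b * (1 - p_b)))
    return 2 * bracket ** 2 / (p_a - p_b) ** 2                           # Eq. (4)

# Change from 40% to 60%, 5% significance, 80% power.
print(round(unadjusted_m_proportions(0.4, 0.6)))                         # 194 patients in total
```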
4.2.3. Inflation for clustering

To calculate appropriate sample sizes adjusting for clustering, standard sample size estimates for individually randomized designs were then inflated by the design effect (Eq. (1)).

4.2.4. Calculation of minimum detectable differences

To calculate minimum detectable differences, the above formulae were rearranged to return the effect size. Re-arrangement of the formula for proportions required the solving of a quadratic equation. As is the case when solving a quadratic equation, two possible roots were available. By default, the calculator was programmed to return the root that reflected a positive minimum detectable difference.

4.3. Algorithm used

The calculator was programmed to produce a table of the number of clusters required for varying estimates of ICC and cluster size. Ten default cluster sizes (5, 10, 15, 20, 30, 50, 75, 100, 150 and 200) were chosen to represent a range of typical cluster sizes seen in health studies; however, the calculator was also developed to be flexible enough to allow users to specify their own cluster sizes if desired. ICC values between 0.01 and 0.6 were presented in steps of 0.01. An upper limit of 0.6 was chosen as very few ICC estimates greater than 0.6 have ever been reported.

For the comparison of means, the user is required to specify certain aspects of the sample size problem: (a) the minimum difference to be detected; (b) the standard deviation; and (c) the desired
significance and power settings. Similarly, for the comparison of proportions, the user must specify: (a) the expected control group proportion; (b) the expected proportion in the intervention group; and (c) the desired significance and power settings. Once these initial values are established, an algorithm was developed to calculate the appropriate adjusted sample size (see Fig. 1).

For the calculation of the minimum differences detectable, slightly different factors require to be pre-specified, reflecting the reversed process. For continuous variables, the user is required to specify: (a) the number of clusters available; (b) the cluster size; and (c) an estimate of the ICC. For dichotomous variables, the control group proportion is also required.

Program code for the calculator was written in Visual Basic v6. On completion, the calculator was made freely available via the Internet and can be accessed at: http://www.abdn.ac.uk/hsru/epp/sampsize (correct as at January 2003).

5. Use of sample size calculator

5.1. Comparison of means

As detailed in Section 4.2.1 above, to calculate the sample size requirements to detect the difference between two means, the user is required to specify (a) the minimum difference to be detected; (b) the standard deviation; and (c) the desired significance and power settings.

For example, consider a study that is evaluating the implementation of a clinical guideline for the management of blood pressure. A difference of 5 mmHg is deemed to be a clinically important change to detect, and previous literature estimates suggest that an appropriate estimate of the standard deviation for blood pressure is 15 mmHg. In this case, we would have to input the following data into the sample size calculator: (a) 5 as the minimum difference to be detected; (b) 15 as the standard deviation; and (c) 5% as the desired significance, and 80% as the desired power.

The results from the sample size calculator are shown in Fig. 2. The table produced by the calculator presents the total number of clusters (assuming a two arm trial) required for different estimates of ICC and cluster size. Fig. 2 shows that, for an ICC of 0.05, equivalent power could be achieved in a number of different ways, for example by recruiting 10 patients from each of 42 clusters, by recruiting 20 patients from each of 28 clusters, or by recruiting 100 patients from each of 18 clusters.

5.2. Comparison of proportions

To calculate the sample size requirements to detect the difference between two proportions, the user is required to specify (a) the expected control group proportion; (b) the expected proportion in the intervention group; and (c) the desired significance and power settings.

Consider a cluster randomized trial where we were trying to detect a change from 40% to 60% in compliance with a clinical guideline (with 80% power and a 5% significance level) between an
[Fig. 1. Algorithm used to calculate adjusted sample size. The flowchart shows the following steps: the user inputs the values for the calculation (minimum difference and standard deviation, or expected proportions; significance level; and power); a vector of the 10 pre-set cluster sizes is set up and the unadjusted sample size for each group is calculated from the inputted significance and power; then, for each cluster size in the vector (counter c = 1 to 10) and for each ICC from 0.01 to 0.6 in steps of 0.01, the design effect is calculated, the unadjusted sample size is multiplied by the design effect to give the adjusted sample size, the adjusted sample size is divided by the cluster size to give the number of clusters required per group, this is multiplied by two to give the total number of clusters required, and the result is printed to the results table.]
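For readers who prefer code to a flowchart, the Python sketch below follows the loop structure of Fig. 1 (the calculator itself was written in Visual Basic v6). Rounding the number of clusters per group up to a whole cluster is our assumption; with that assumption the sketch reproduces the cluster numbers quoted for an ICC of 0.05 in the blood-pressure example of Section 5.1.

```python
# Python sketch of the Fig. 1 algorithm; names are illustrative, and rounding
# clusters per group up to the next whole cluster is an assumption.
from math import ceil
from statistics import NormalDist

DEFAULT_CLUSTER_SIZES = [5, 10, 15, 20, 30, 50, 75, 100, 150, 200]

def clusters_required_table(unadjusted_per_group, cluster_sizes=DEFAULT_CLUSTER_SIZES):
    """Total number of clusters required, keyed by (ICC, cluster size)."""
    icc_values = [round(0.01 * i, 2) for i in range(1, 61)]       # 0.01 to 0.60
    table = {}
    for n_bar in cluster_sizes:
        for icc in icc_values:
            deff = 1 + (n_bar - 1) * icc                          # Eq. (1)
            adjusted = unadjusted_per_group * deff
            clusters_per_group = ceil(adjusted / n_bar)
            table[(icc, n_bar)] = 2 * clusters_per_group          # two-arm trial
    return table

# Blood-pressure example of Section 5.1: unadjusted per-group size from Eqs. (2)-(3).
z = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.80)
per_group = 2 * z ** 2 / (5 / 15) ** 2
table = clusters_required_table(per_group)
print(table[(0.05, 10)], table[(0.05, 20)], table[(0.05, 100)])   # 42 28 18, as in Fig. 2
```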
[Fig. 2. Output from sample size calculator for comparison of means. The screenshot is annotated to show: the inputs (minimum difference to be detected, estimated standard deviation, desired significance level, desired power); the required sample size unadjusted for clustering; the range of cluster sizes; the range of ICC values; and the table cells giving the total number of clusters required to be recruited for a given ICC and a given cluster size.]
intervention group who received a computerized reminder and a control group who did not, we would input: (a) 0.4 as the control group proportion; (b) 0.6 as the expected proportion in the intervention group; and (c) 5% as the desired significance level, and 80% as the desired power.

The results from the sample size calculator are shown in Fig. 3. Again a number of ways of achieving the desired power are presented. For example, for an ICC of 0.05, equivalent power could be achieved by recruiting 10 patients from each of 30 clusters, by recruiting 20 patients from each of 20 clusters, or by recruiting 100 patients from each of 12 clusters.

5.2.1. Minimum difference detectable—continuous variables

To calculate the minimum difference detectable for continuous data, the user is required to specify (a) the cluster size; (b) the number of clusters available; and (c) the estimated ICC. The sample size calculator will then calculate the proportion of the standard deviation detectable (the standardized difference, $d = \delta/\sigma$) for differing levels of significance and power.

For example, consider a planned cluster randomized trial of a guideline intervention strategy where only 10 clusters of average size 25 are available. The outcome measure of interest between the intervention and the control group is patient quality of life, e.g. SF36 score. The assumed ICC for SF36 score is 0.01.
[Fig. 3. Output from sample size calculator for comparison of proportions. The screenshot is annotated to show: the inputs (expected proportion in the control group, expected proportion in the experimental group, desired significance level, desired power); the required sample size unadjusted for clustering; the range of cluster sizes; the range of ICC values; and the table cells giving the total number of clusters required to be recruited for a given ICC and a given cluster size.]
Given the constraints on patient and cluster numbers, the researcher is unsure whether the minimum difference which could be detected would be of clinical relevance, and questions whether the trial should be conducted. In this scenario, we would input: (a) 25 as the cluster size; (b) 10 as the number of clusters available; and (c) 0.01 as the estimated ICC.

The results from the sample size calculator are shown in Fig. 4. The results show the minimum difference in standard deviations detectable for given levels of statistical significance and power. For example, assuming standard statistical settings (5% significance and 80% power), the trial would be able to detect a difference in SF36 score of 0.39 standard deviations between the trial groups. The researcher could then assess whether the difference in standard deviation change that could be identified, given the practical constraints, would be worth the investment.

5.2.2. Minimum difference detectable—dichotomous variables

To calculate the minimum difference detectable for a binary outcome, recall that the user must specify (a) the cluster size; (b) the number of clusters available; (c) the estimated ICC; and (d) the control group proportion. To allow the programme to return the minimum difference detectable in the correct direction, the user is required to frame the problem in terms of looking for an increase from the control group proportion.
[Fig. 4. Minimum difference detectable—continuous outcomes. The screenshot is annotated to show: the inputs (average cluster size, number of clusters available, estimate of ICC); the significance levels; the range of power levels; and the minimum difference detectable in standard deviation units (i.e. the standardized difference detectable).]
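The continuous-outcome rearrangement can be sketched in a few lines of Python: dividing the available sample size by the design effect of Eq. (1) and solving Eq. (2) for $d$ is one way to arrive at the figures shown in Fig. 4 (the calculator's own implementation may differ in detail). The call below reproduces the 0.39 quoted for the SF36 example.

```python
# Sketch of the rearrangement described in Section 4.2.4 for continuous
# outcomes: Eq. (2) is solved for d after deflating the available sample size
# by the design effect of Eq. (1). Names are illustrative.
from math import sqrt
from statistics import NormalDist

def min_detectable_standardized_difference(n_clusters: int, cluster_size: float,
                                           icc: float, alpha: float = 0.05,
                                           power: float = 0.80) -> float:
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    deff = 1 + (cluster_size - 1) * icc                 # Eq. (1)
    effective_total = n_clusters * cluster_size / deff  # both groups combined
    return sqrt(4 * z ** 2 / effective_total)           # Eq. (2) solved for d

# SF36 example: 10 clusters of average size 25, ICC 0.01.
print(round(min_detectable_standardized_difference(10, 25, 0.01), 2))   # 0.39
```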
As such, if the study under consideration was looking to detect a decrease in a negative outcome, say death, where the control group mortality rate was 30%, for the calculator to return the difference detectable in the appropriate direction, this would have to be reframed as looking for an increase in survival from a control proportion of 70%.

Consider a planned cluster randomized trial of a guideline intervention strategy where only 10 clusters of average size 25 are available. The outcome measure of interest between the intervention and the control group on this occasion is compliance with good practice outlined in the guideline. Compliance in the control group is estimated at 30%, and the assumed ICC for that outcome is estimated at 0.1. With these parameters, the researcher wishes to explore the minimum difference in compliance which could be detected. In this scenario, we would input:

(a) 25 as the cluster size;
(b) 10 as the number of clusters available;
(c) 0.1 as the estimated ICC; and
(d) 0.3 as the control group proportion.

The results from the sample size calculator are shown in Fig. 5. On this occasion, the results show the minimum experimental group proportion detectable for given levels of statistical significance and power. For example, assuming standard statistical settings (5% significance and 80% power), the trial would be able to detect a change in compliance rate from 30% to 60% (i.e. a doubling).

Sometimes the sample size constraints inputted by the user can be too severe to allow any meaningful difference in proportions to be detected, given a particular observed control group proportion. In these situations, the sample size calculator would return a value of 1 or above, indicating that the sample size is too small to detect meaningful differences. For example, consider a trial with a control prevalence rate of 40% (i.e. a proportion of 0.4). Only a very small sample size is available, and the sample size equations suggest that a difference of 70% is the smallest difference that can be detected given the constraints.
[Fig. 5. Minimum difference detectable—dichotomous outcomes. The screenshot is annotated to show: the inputs (average cluster size, number of clusters available, estimate of ICC, control group proportion); the significance levels; and the minimum experimental group proportion detectable.]
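A rough Python analogue of this calculation is sketched below. The calculator itself solves the quadratic rearrangement described in Section 4.2.4; here Eq. (4) is instead inverted by a simple numerical search, so the value returned may differ slightly from the figure quoted above. All names are illustrative.

```python
# Sketch only: numerically invert Eq. (4) for the experimental-group proportion,
# after deflating the available sample size by the design effect of Eq. (1).
from math import sqrt
from statistics import NormalDist

def min_detectable_proportion(n_clusters, cluster_size, icc, p_control,
                              alpha=0.05, power=0.80):
    """Smallest experimental-group proportion (above control) detectable, or 1.0."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    available = n_clusters * cluster_size / (1 + (cluster_size - 1) * icc)

    def required(p_exp):                                  # total sample size, Eqs. (4)-(5)
        q_bar = (p_control + p_exp) / 2
        bracket = (z_a * sqrt(2 * q_bar * (1 - q_bar))
                   + z_b * sqrt(p_control * (1 - p_control) + p_exp * (1 - p_exp)))
        return 2 * bracket ** 2 / (p_exp - p_control) ** 2

    if required(1.0) > available:                         # mirror the calculator's
        return 1.0                                        # 'constraints too severe' flag
    lo, hi = p_control + 1e-6, 1.0                        # bisect for the threshold
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if required(mid) > available else (lo, mid)
    return hi

# Guideline-compliance example: 10 clusters of 25, ICC 0.1, control proportion 0.3;
# the result is close to the change to 60% compliance quoted in the text.
print(round(min_detectable_proportion(10, 25, 0.1, 0.3), 2))
```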
This would indicate that the experimental prevalence rate would have to be 110% (an impossible value). In this situation the sample size calculator would return a value of 1 (indicating that the sample size constraints inputted were implausibly low).

6. Discussion

To date, the quality of the design, conduct and analysis of cluster randomized trials has been relatively poor. For example, Simpson et al. [10] showed that of 21 cluster trials identified in two major public health journals, only 4 (19%) had accounted for the clustering in the planning of the trial. Similarly, a review by Divine et al. [11], which examined studies of physicians' patient-care practices, observed that 70% of the studies identified had not appropriately accounted for the clustered nature of their study data.

Until recently, software packages to aid researchers and statisticians design and conduct cluster randomized trials have not been routinely available. This is now changing, however, with the introduction of macros within standard packages, such as Stata™ [12], to undertake analysis adjusted for clustering. Software packages, such as ACluster™ [13], which have been specifically designed for the management of clustered data, are now also becoming more readily available. The provision of sample size software to help researchers decide how to trade off whether to recruit more clusters or to increase cluster size to achieve the required increased sample size for cluster trials has, however, been lacking. The development of the calculator described in this paper allows that need to be addressed.
The concept of trading off whether to recruit more clusters or a greater cluster size has been raised in the literature before, and it is one of the key decisions researchers face when planning a new cluster trial. Flynn's interviews with trial co-ordinators, mentioned earlier, outlined the practical issues which researchers should consider to aid this decision-making process [9]. He showed that cost considerations are often paramount, with staffing, data collection, travel and trial management costs all influencing the choice of strategy.

The methods adopted in the development of this sample size calculator used standard sample size formulae widely available in the literature as its basis. This is a strength of the package, as the software can, therefore, be used routinely to calculate sample size requirements for standard, non-clustered designs.

The sample size calculator does have certain limitations, however. Firstly, it is designed only to provide sample size calculations for the simplest of cluster trials (the two group completely randomized trial). The completely randomized design is deemed to be most suited to trials which plan to randomize a fairly large number of clusters, which will allow the randomization mechanism to have the greatest chance of ensuring balanced groups in terms of cluster and individual level covariates [3]. Where trials are randomizing a small number of clusters, however, one cannot be confident that randomization alone will ensure balanced groups. In such situations, use of a stratified or matched design is often advocated [3]. Further research is required to extend the package to more complex designs.

The sample size calculator also assumes a common cluster size. For most cluster trials that are examining prevalent cases, this assumption is likely to be sufficient. For cluster trials which are dependent on incident cases or which have highly variable cluster sizes, however, this assumption may not be so appropriate. In such situations, the use of the average cluster size is likely to underestimate the true sample size requirements, and a more conservative approach in such cases would be to assume the maximum cluster size for use in the sample size calculations [5].

In addition, the formula used for calculating sample size for a proportion (Eq. (4)) assumed that the correlation within a cluster was identical and known for all subunits. This is currently the most commonly used approach; however, this assumption might be invalidated in some studies with large cluster sizes. Further discussion and comparison of different methods can be found in a number of papers [5,14,15]. For example, a beta distribution could be used with an appropriate weighted procedure to estimate the standard deviation of the responses [14].

Despite these limitations, however, the development of this calculator can, for the first time, allow researchers explicitly to trade off the options of achieving the appropriate sample size through the recruitment of a greater number of clusters or through increasing the cluster size.

7. Summary

In this paper, the development of a sample size calculator for the estimation of appropriate sample sizes for cluster randomized trials is described. It allows researchers explicitly to trade off the options of achieving the appropriate sample size through the recruitment of a greater number of clusters or through increasing the cluster size. It also allows researchers to identify the minimum difference detectable for a given fixed cluster size.
It is recognized, however, that the calculator has been developed around the simplest of cluster trial designs (the two-group, completely randomized
design), and further research is required to extend the system to other common cluster designs, such as the stratified and matched-pair designs.
Acknowledgements

The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Executive Health Department. The views expressed are not necessarily those of the funding body.
References

[1] S.J. Pocock, Clinical Trials: A Practical Approach, Wiley, Chichester, 1983.
[2] A. Bowling, Research Methods in Health, Open University Press, Buckingham, 1997.
[3] A. Donner, N. Klar, Design and Analysis of Cluster Randomization Trials in Health Research, 1st Edition, Arnold, London, 2000.
[4] A. Donner, J.J. Koval, Design considerations in the estimation of intraclass correlation, Ann. Hum. Genet. 46 (1982) 271–277.
[5] A. Donner, N. Birkett, C. Buck, Randomization by cluster: sample size requirements and analysis, Am. J. Epidemiol. 114 (6) (1981) 906–914.
[6] L. Kish, Survey Sampling, Wiley, New York, 1965.
[7] M.J. Campbell, S.A. Julious, D.G. Altman, Estimating sample sizes for binary, ordered categorical, and continuous outcomes in two group comparisons, Br. Med. J. 311 (1995) 1145–1148.
[8] V.K. Diwan, B. Eriksson, G. Sterky, G. Tomson, Randomization by group in studying the effect of drug information in primary care, Int. J. Epidemiol. 21 (1) (1992) 124–130.
[9] T.N. Flynn, E. Whitley, T.J. Peters, Recruitment strategies in a cluster randomized trial: cost implications, Stat. Med. 21 (2002) 397–405.
[10] J.M. Simpson, N. Klar, A. Donner, Accounting for cluster randomization: a review of primary prevention trials, 1990 through 1993, Am. J. Public Health 85 (1995) 1378–1383.
[11] G.W. Divine, J.T. Brown, L.M. Frazier, The unit of analysis error in studies about physicians' patient care behavior, J. Gen. Internal Med. 7 (1992) 623–629.
[12] StataCorp, Stata Statistical Software: Release 6.0, Stata Corporation, College Station, TX, 1999.
[13] World Health Organisation, ACluster, Version 2.0, World Health Organisation, Geneva, 2000.
[14] E.W. Lee, N. Dubin, Estimation and sample size considerations for clustered binary responses, Stat. Med. 13 (1994) 1241–1252.
[15] S.H. Jung, S.H. Kang, C. Ahn, Sample size calculations for clustered binary data, Stat. Med. 20 (2001) 1971–1982.

Marion K. Campbell is the Director of the Health Care Assessment Programme in the Health Services Research Unit, University of Aberdeen, UK. The Health Care Assessment Programme focuses on the evaluation of primarily non-drug technologies. Marion is trained in statistics and is a Chartered Statistician of the Royal Statistical Society. She has over 10 years' experience in health services research and medical statistics. Her main interests are in the methodology of evaluative research, especially the field of cluster randomized trials. She has been involved in the design and conduct of a number of cluster trials, and has also undertaken research into methodological aspects of cluster trials including factors which influence the magnitude and stability of intracluster correlation coefficients.

Sean Thomson graduated in 1993 from Aberdeen University in Electronics and Computing and began his career in the oil industry. He joined the Health Services Research Unit in 1996 as Information Systems Officer and managed the Unit's
computer network as well as programming key systems for a number of research projects. Sean is currently in the marine electronics industry managing the information technology infrastructure of a multi-site sales and manufacturing company.

Craig R. Ramsay joined the Health Services Research Unit in January 1995 as a statistician. He took up the post of Senior Statistician within the Effective Professional Practice Programme in 2000 and has recently taken over responsibility for overseeing the Unit's statistics team. Craig is statistical editor for the Cochrane Effective Practice and Organisation of Care group. He graduated in Mathematics and Statistics at Edinburgh University in 1993. After receiving a Postgraduate Diploma in Information Systems from Napier University, he joined the Information and Statistics Division of the Common Services Agency, Edinburgh, before joining the Unit.

Graeme S. MacLennan joined the Health Services Research Unit in 1998 as a Statistician, having completed a PGCE(S) and a BSc degree in Mathematics at the University of Aberdeen. Current interests include the design and analysis of cluster randomized trials and methods for incorporating data from such trials into meta-analyses.

Jeremy M. Grimshaw is the Director of the Clinical Epidemiology Program of the Ottawa Health Research Institute and Director of the Centre for Best Practice, Institute of Population Health, University of Ottawa. He holds a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake and is a Full Professor in the Department of Epidemiology and Community Medicine, University of Ottawa. Prior to this he held a Personal Chair in Health Services Research at the University of Aberdeen, UK, and was the Programme Director of the Effective Professional Practice Programme within the Health Services Research Unit. Dr. Grimshaw received an MBChB (MD equivalent) from the University of Edinburgh, UK. He trained as a family physician prior to undertaking a Ph.D. in health services research at the University of Aberdeen. He is the Coordinating Editor of the Cochrane Effective Practice and Organisation of Care group, which aims to support systematic reviews of professional, organizational, financial and regulatory interventions to improve health care delivery and systems. He has been involved in 14 cluster randomized trials and two interrupted time series of different dissemination and implementation strategies. He has also undertaken research into statistical issues in the design, conduct and analysis of cluster randomized trials.