Journal of Clinical Epidemiology 65 (2012) 1282–1288

Calculating additive treatment effects from multiple randomized trials provides useful estimates of combination therapies

Edward J. Mills a,*, Kristian Thorlund b, John P.A. Ioannidis c,d

a Faculty of Health Sciences, University of Ottawa, 43 Templeton Street, Ottawa, Ontario, Canada
b Department of Clinical Epidemiology & Biostatistics, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada
c Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, 1070 Arastradero Road, Stanford, CA, USA
d Department of Health Research and Policy, Stanford University School of Medicine, 1070 Arastradero Road, Stanford, CA, USA

Accepted 27 July 2012; Published online 13 September 2012

Abstract

Objective: Many clinicians and decision makers want to know the combined effects of treatments that have not been evaluated in combination. It is possible to estimate such treatment effects by making assumptions about additive effects. We discuss here the prerequisites and methods of applying additivity assumptions in synthesizing the evidence from randomized trials and multiple treatment meta-analyses.

Study Design and Setting: Using statistical approaches, we demonstrate the utility of additivity for both pairwise randomized trials and multiple treatment comparison meta-analyses.

Results: We present an illustrative example on estimating the treatment effects of drug combinations for chronic obstructive pulmonary disease. We confirm the additive treatment effects by comparing them with the results of direct combination treatment trials.

Conclusion: Additive effects may be a useful tool to estimate the effectiveness of treatment combinations. © 2012 Elsevier Inc. All rights reserved.

Keywords: Additive; Multiplicative; Meta-analysis; Randomized clinical trials; Statistics; Drug therapy; Combination

1. Introduction

For many conditions, patients may use more than one treatment. Usually, the aim of combining treatments is to exploit cumulatively the benefits conferred independently by each of them, but sometimes, two treatments may be used because they interact favorably in some way. Randomized clinical trials (RCTs), or meta-analyses thereof [1], may exist for single treatments but not their combinations. How would one estimate the effect of treatments when they are both given together to the same patient?

Conflict of interest statement: None declared. E.J.M. is supported by a Canada Research Chair from the Canadian Institutes of Health Research (CIHR). K.T. receives salary support via the CIHR Drug Safety and Evaluation Network via the NETMAN grant.
Contributors: E.J.M. had the original idea, and all authors developed the concept; E.J.M. and K.T. performed the statistical analyses, and all authors interpreted the data; E.J.M. and J.P.A.I. wrote the original draft of the article, and K.T. critically reviewed it. All approved the final version. E.J.M. is the guarantor.
* Corresponding author. Tel.: 778-317-8530; fax: 604-875-5179. E-mail address: [email protected] (E.J. Mills).
0895-4356/$ - see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jclinepi.2012.07.012

To find the treatment effect of the combination, one simple approach is to assume additive effects, that is, to assume that if treatment A has an effect E_A vs. control and treatment B has an effect E_B vs. that same control, then the combination of A and B would have an effect E_A + E_B. Additive effects assume that the interventions work independently and do not interact. For example, the early proponents of the cardiovascular polypill assumed additive treatment effects of different interventions (statins, aspirin, blood pressure medications, and folic acid) to estimate the expected population-level effects of widespread treatment with a pill that could contain all these effective drugs [2,3]. The expectations were then tested against surrogate endpoint trials, and modifications of the components were made based on these results [4–6]. The polypill is now being evaluated in clinical endpoint trials [7].

Here, we discuss the prerequisites, applications, and caveats of estimating the additive effects of drugs. We show how to estimate the additive effect of a combined intervention when different types of evidence exist about each of its components, with or without concomitant direct randomized comparisons of the combination itself against control. We illustrate these approaches using the example of drug combinations for chronic obstructive pulmonary disease (COPD) exacerbations, a condition that frequently requires polydrug use.


What is new?

Key findings
- Many clinicians and decision makers want to know the combined effects of treatments that have not been evaluated in combination. Under certain conditions, new methods exist to determine what treatment effect combinations of drugs may confer.

What this adds to what was known?
- This manuscript provides the necessary assumptions to consider when combining data from single or multiple randomized trials to determine a combined treatment effect.

What is the implication and what should change now?
- This manuscript provides a strategy to determine the possible combined effects of treatments. Clinical trialists interested in evaluating the effect of combined treatments may use this strategy to estimate the treatment effects. Clinicians may use this strategy to estimate the treatment effect they provide when prescribing multiple interventions.

2. Treatment combinations and prerequisites for the additive effects

Table 1 provides an overview of the considerations necessary for determining whether two or more treatments may work in an additive way. Toews and Bylund [9] have provided a useful classification of treatment combinations. We focus on drugs, but the same framework can be extended to other types of treatments, for example, psychological or behavioral interventions that have two or more components [8]. In a Class I combination, two or more drugs target different aspects of the disease, and interactions are least likely, as in antihypertensive therapy with drugs of different types (diuretics, beta-blockers, calcium channel blockers, etc.). In a Class II combination, two or more drugs target a similar biological pathway, and this can lead to an interaction and greater toxicity; for example, cotrimoxazole is composed of two drugs (trimethoprim and sulfamethoxazole), both of which target the folate pathway. Finally, a Class III combination is one in which an effective drug is paired with another drug that is not used for its own therapeutic effectiveness but may enhance the therapeutic effectiveness of the first drug because of pharmacokinetic or pharmacodynamic properties; for example, ritonavir can boost levels of other protease inhibitors.


Table 1. Considerations for obtaining and interpreting the additive effects for medical interventions

1. The additivity assumption should be reasonable. Drugs working on similar pathways are more likely to interfere with each other, leading to interactions. For drugs working on different pathways, interaction is less likely. Both for drugs and for other types of interventions, the clinical and other evidence should also be considered when judging whether interactions may exist.
2. Potential biases in the effects of the single treatments comprising the combination should be scrutinized. Biases affecting evidence on direct and indirect comparisons of single treatments can also secondarily affect the calculations of the effect sizes for combinations of these treatments. If such biases are demonstrated or suspected, the interpretation of additive effects obtained from biased component effects should be very cautious.
3. Whenever MTC is used to obtain additive effects, the assumptions underlying the MTC should be tenable. MTC makes several assumptions that underlie the ability to combine data from different trials and comparisons performed in different populations and settings, with potentially different baseline risk and background standards of care, among others; it is important that the combined pieces are consistent and that no major statistical inconsistency is demonstrated.
4. Additive effects should be expressed along with their accompanying uncertainty (95% CIs or CrIs), and this can often be large. Large uncertainty may be common when the components of the evidence that feed into the additivity calculations have large uncertainty themselves. In the presence of large uncertainty, the interpretation should be extra cautious.
5. Wherever possible, the additive effects should be compared with direct combination effects. We need more empirical evidence on whether direct and additive effects tend to give similar or dissimilar results.

Abbreviations: MTC, multiple treatment comparison; CI, confidence interval; CrI, credible interval.

Additive effects are most relevant for Class I combinations. Additivity is violated for Class II and Class III combinations, as well as in any other situation in which a positive (synergistic) or negative (antagonistic) interaction is expected based on clinical or other considerations. Additive effects are also more relevant to calculate when there is sufficient evidence about the treatment effect of each component of a combination; otherwise, the combined uncertainty may be too large to allow useful inferences. Where possible, estimates derived from additivity assumptions should be compared against direct evidence about the combination, if such evidence is available. If no such direct evidence exists, the assumption of no interaction will have to be informed by clinical expertise and accompanied by extra caution in the interpretation of the results it produces.

Finally, one needs to decide what effect metric to use. Assuming additivity on one metric scale means that the results are not additive on other metrics. Some metrics should preferably be avoided because they lack symmetry. For example, with the relative risk (RR), the conclusions may differ depending on whether we focus on the risk of having an exacerbation or the risk of having no exacerbation.



In the presence of large effects of a single treatment, metrics such as the risk difference (RD) and the incidence rate difference can yield unbounded, impossible results (e.g., if treatment A cuts exacerbations from 90% to 30% [RD, 60%] and treatment B decreases exacerbations from 90% to 40% [RD, 50%], the combined effect for A + B is estimated to decrease exacerbations from 90% to −20%, an impossible value). Lack of symmetry and impossible results do not occur with metrics such as the odds ratio (OR), incidence rate ratio (IRR), mean difference, and standardized mean difference.

3. Modeling additive treatment effects

We will consider three illustrative scenarios for the type of direct and/or indirect evidence that might be available when one is interested in estimating the treatment effect of a drug combination.

Scenario 1: One RCT comparing A vs. a no-treatment control (e.g., placebo) and one RCT comparing B vs. control; no data exist on A + B.

Scenario 2: A meta-analysis of RCTs comparing A vs. control and a meta-analysis of RCTs comparing B vs. control; no data exist on A + B.

Scenario 3: A network of RCTs comparing some no-treatment control (e.g., placebo), various treatments (A, B, C, D, etc.), and also various combinations of these treatments in subsets of trials; data on A + B may or may not exist.

Metrics of relative effects (RE) (e.g., OR and IRR) are analyzed in their normalized (i.e., log-transformed) form. Under the additivity assumption, the normalized RE of A + B (RE_{A+B}) is equal to the sum of the normalized REs of A (RE_A) and B (RE_B), that is,

log(RE_{A+B}) = log(RE_A) + log(RE_B).

The 95% confidence interval (CI) for log(RE_{A+B}) is calculated as log(RE_{A+B}) ± 1.96 × sqrt(Var(log(RE_{A+B}))), where the variance is given by

Var(log(RE_{A+B})) = Var(log(RE_A)) + Var(log(RE_B)).

On the non-normalized scale, this means that the combined effect of A + B is simply the product of the two REs of A and B.

The very same considerations apply when drugs A and B have separately been compared with a mutual control (e.g., placebo) in more than one RCT (scenario 2). Estimates of the individual effects of A and B can be obtained with a meta-analysis [10]. These summary effects can then be used to calculate the effect of A + B, and the associated CI is calculated using the same formula for the variance of A + B as shown previously. When some RCTs include all three arms (A, B, and control), one also has to incorporate a covariance estimate in the calculations, as described in standard textbooks [11].

In the more general case (scenario 3), many treatments are available for treating one condition, and these treatments may have been compared with a control or head-to-head in RCTs. Then, multiple treatment comparison (MTC) meta-analysis (also known as network meta-analysis) allows for a simultaneous analysis of all the data (Fig. 1). MTCs make use of both direct and indirect evidence on the premise that the effect estimates from the direct and the indirect evidence do not differ beyond the play of chance, that is, that there is no inconsistency (incoherence) [12]. Assuming that no inconsistency is documented in the network, one can calculate the effects of A vs. control and B vs. control from the MTC and then calculate from them the effect of A + B vs. control (RE_{A+B}) using the additivity assumption. A + B itself may already have been tested against control and/or other comparators in one or more RCTs. Then, RE_{A+B} can be calculated using the evidence from the trials that have evaluated the A + B combination, from the evidence on A vs. control and B vs. control using the additivity assumption, or from both. MTCs are typically conducted in a Bayesian framework, using the WinBUGS software. It is beyond the scope of this article to provide an introduction to Bayesian MTC modeling, which is available in detail elsewhere [13–15]. See Appendix A, available on the journal's website at www.jclinepi.com, for worked examples of the three scenarios.
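As an illustrative sketch of the calculation in scenarios 1 and 2 (ours, not part of the original article; the helper function name additive_effect is hypothetical), the Python snippet below sums two log relative effects and their variances, back-transforms the result, and, as a check, approximately reproduces the additive estimate for LABA + LAMA vs. placebo reported in Table 2 from the direct summary IRRs for LABA vs. placebo and LAMA vs. placebo.

```python
import math

def additive_effect(re_a, ci_a, re_b, ci_b, z=1.96):
    """Combine two relative effects (e.g., IRRs or ORs) estimated against a
    common control under the additivity (no-interaction) assumption."""
    # Work on the log scale, where additivity holds:
    # log(RE_{A+B}) = log(RE_A) + log(RE_B)
    log_ab = math.log(re_a) + math.log(re_b)
    # Recover standard errors from the reported 95% CIs on the log scale
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * z)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * z)
    # Var(log(RE_{A+B})) = Var(log(RE_A)) + Var(log(RE_B))
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
    return math.exp(log_ab), (math.exp(log_ab - z * se_ab),
                              math.exp(log_ab + z * se_ab))

# Direct summary IRRs from Table 2:
# LABA vs. placebo 0.87 (0.79, 0.96); LAMA vs. placebo 0.74 (0.64, 0.84)
irr, (lo, hi) = additive_effect(0.87, (0.79, 0.96), 0.74, (0.64, 0.84))
print(f"LABA + LAMA vs. placebo: IRR {irr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
# Prints approximately 0.64 (0.54, 0.76), matching the additive column of Table 2.
```

On the non-normalized scale the point estimate is simply the product 0.87 × 0.74 ≈ 0.64; the log scale and the summed variances are needed only for the confidence interval.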

Fig. 1. Diagram displaying the network of 10 treatments involved in the MTC analysis of the COPD data. Each treatment is a node in the network. The links between nodes indicate a direct comparison between pairs of treatments. There are 10 treatments in total, with some trials being multi-arm trials (for multi-arm trials, all the possible pairwise comparisons of treatments in those trials are accounted for). Light blue denotes the single-agent trials, darker blue the combinations, and purple the placebo. The size of the nodes is proportional to the amount of randomized evidence. MTC, multiple treatment comparison; COPD, chronic obstructive pulmonary disease; ICS, inhaled corticosteroid; LABA, long-acting beta-agonist; PDE4-i, phosphodiesterase-4 inhibitor; LAMA, long-acting muscarinic agent (please refer to the online version of the figure for colors).


4. Example of additive effects of drugs used in COPD

To demonstrate a clinically useful example of additivity, we discuss an example of RCTs for the prevention of COPD-associated exacerbations. COPD management frequently uses combinations of treatments, some of which have been evaluated as single agents and in combinations within similar populations of patients. Details on the search, data extraction, and characteristics of individual trials have been reported elsewhere [16]. Below we take the reader through the considerations necessary before conducting an analysis of additive effects.

4.1. Is the additivity assumption reasonable?

There are four main classes of drugs used for the reduction of COPD-related exacerbations: inhaled corticosteroids (ICSs, e.g., fluticasone or budesonide), phosphodiesterase-4 inhibitors (PDE4-i, e.g., roflumilast), long-acting beta-agonists (LABAs, e.g., formoterol or salmeterol), and long-acting muscarinic agents (LAMAs, e.g., tiotropium) [17]. ICSs are believed to exert a therapeutic effect through inflammatory pathways. PDE4-i target phosphodiesterase-4 in immune cells. LABAs are selective beta-2 adrenoreceptor agonists that work via a receptor that increases cyclic adenosine monophosphate, which in turn decreases intracellular calcium and also increases potassium conduction at the cell membrane of muscle cells so that they are less likely to depolarize. LAMAs antagonize the muscarinic receptor in smooth muscle cells and some submucosal glands and inhibit muscle contraction by decreasing intracellular calcium. Although both LABAs and LAMAs exert a therapeutic effect by targeting smooth muscle, they work through different pathways, and the evidence from RCTs indicates that they offer different therapeutic advantages; thus, an interaction is unlikely [18,19]. In addition, LABA and LAMA are used in combination clinically [17]. Therefore, the assumption of additivity seems reasonable.

4.2. Are there potential biases in the effects of the single treatments?

Biases at the level of individual trials could affect any comparison. Trial-level limitations (such as failing to report methodological details, small sample sizes, and large loss to follow-up) may bias the results of RCTs [20]. One may consult, for example, the Cochrane Handbook for detailed advice on dealing with trial-level biases in meta-analysis [21]. Evidence from trials with small sample sizes and from those that measure subjective outcomes may be more susceptible to bias, and this may lead to incorrect effect estimates and inconsistency between indirect and direct comparisons [22]. Multiple well-designed and well-conducted trials of the same agents would increase our confidence in effect estimates.

Fig. 1 displays the geometry of the network of interest [23]. There are 49 comparisons involving the 10 different regimens used across the 26 eligible trials (n = 36,312 randomized participants).


The cumulative evidence is very large, much larger than the average randomized evidence typically available on most medical questions, and there are several large randomized trials involved. As detailed by Borenstein et al. [11], there are no major concerns about the quality of these trials, and the outcomes are generally well defined. However, the evidence is not evenly spread across all comparisons. Five of the nine active regimens are combinations of two or three treatments, but only one of these combinations (ICS + LABA) has been directly compared against placebo in RCTs. All four single agents have been directly compared against placebo in RCTs. In six of the direct comparisons involving combinations, there is only one RCT contributing data. For the effectiveness of four combinations against placebo, we have to resort to indirect evidence (making an additivity assumption to combine evidence from the comparisons of each of the combination's constituents against placebo) and/or the combined (MTC) additive evidence.

4.3. Are the assumptions underlying an MTC appropriate?

MTCs require several important considerations [12]. First, is there sufficient within-agent homogeneity to combine RCTs of similar comparisons (homogeneity)? Second, are the populations, methods used, and outcomes assessed across agents sufficiently similar to compare (similarity)? Third, is there consistency (coherence) between estimates derived from direct evidence and those obtained from indirect evidence? In our clinical example, the populations included for each agent, those with moderate to severe COPD, are considered similar enough to combine in meta-analyses, and none of the direct comparisons have very high estimates of statistical heterogeneity [16,24]. The outcomes, exacerbation events, are similar enough to compare across interventions, as participants were generally recruited from populations with moderate to severe COPD and definitions of exacerbation are compatible, if not identical, across trials, despite some lack of consensus on whether only the first event (binary endpoint) should count or the rate of exacerbations during follow-up should be the preferred measure of outcome [25]. We used a sensitivity analysis to confirm that the findings were similar regardless of which type of endpoint was chosen. Finally, there was no statistical evidence of incoherence between the effect sizes of treatments according to direct and indirect evidence.

4.4. Are the treatment findings presented with accompanying uncertainty?

In this COPD example, we present results using the IRR of exacerbations (the ratio of the number of exacerbations per patient-year in the two compared arms) with accompanying CIs. In Appendix B at www.jclinepi.com, we also display the results using the OR (the ratio of the odds of having at least one exacerbation in the two compared arms) with accompanying intervals. Treatment effects in the MTC are presented with 95% credible intervals (CrIs).


Table 2. Comparison of direct-evidence IRRs with indirect or MTC meta-analysis additive-model IRRs

| Comparison | Direct: no. of trials | Direct: no. of patients | Direct: IRR (95% CI) | Additive: comparisons used | Additive: no. of trials | Additive: no. of patients | Additive: IRR (95% CI) | MTC: IRR (95% CrI) |
|---|---|---|---|---|---|---|---|---|
| PDE4-i vs. placebo | 3 | 6,015 | 0.85 (0.78, 0.93) | – | – | – | – | 0.83 (0.74, 0.95) |
| LABA vs. placebo | 6 | 6,134 | 0.87 (0.79, 0.96) | – | – | – | – | 0.86 (0.80, 0.93) |
| LAMA vs. placebo | 6 | 10,689 | 0.74 (0.64, 0.84) | – | – | – | – | 0.74 (0.67, 0.81) |
| ICS vs. placebo | 6 | 5,732 | 0.81 (0.74, 0.90) | – | – | – | – | 0.81 (0.76, 0.88) |
| PDE4-i + LABA vs. LABA | 1 | 931 | 0.79 (0.70, 0.91) | PDE4-i vs. placebo* | 3 | 6,015 | 0.85 (0.78, 0.93) | 0.84 (0.74, 0.95) |
| ICS + LABA vs. LABA | 7 | 6,860 | 0.81 (0.75, 0.86) | ICS vs. placebo* | 6 | 5,732 | 0.81 (0.74, 0.90) | 0.82 (0.76, 0.88) |
| PDE4-i + LAMA vs. LAMA | 1 | 743 | 0.83 (0.72, 0.97) | PDE4-i vs. placebo* | 3 | 6,015 | 0.85 (0.78, 0.93) | 0.84 (0.74, 0.95) |
| LABA + LAMA vs. LAMA | 1 | 304 | 1.07 (0.94, 1.22) | LABA vs. placebo* | 6 | 6,134 | 0.87 (0.79, 0.96) | 0.86 (0.80, 0.94) |
| ICS + LABA + LAMA vs. LABA + LAMA | 1 | 293 | 0.85 (0.74, 0.97) | ICS vs. placebo* | 6 | 5,732 | 0.81 (0.74, 0.90) | 0.82 (0.76, 0.88) |
| ICS + LABA vs. placebo | 4 | 4,509 | 0.72 (0.66, 0.79) | ICS vs. placebo + LABA vs. placebo | 11 | 11,866 | 0.70 (0.61, 0.81) | 0.71 (0.64, 0.78) |
| ICS + LABA + LAMA vs. LAMA | 1 | 301 | 0.96 (0.80, 1.14) | ICS vs. placebo + LABA vs. placebo* | 11 | 11,866 | 0.70 (0.61, 0.81) | 0.71 (0.64, 0.78) |
| ICS + LABA vs. LAMA | 1 | 1,323 | 0.97 (0.93, 1.02) | ICS + LABA vs. placebo and LAMA vs. placebo | 10 | 15,159 | 0.95 (0.78, 1.16) | 0.96 (0.84, 1.08) |
| PDE4-i + LABA vs. placebo | – | – | – | PDE4-i vs. placebo + LABA vs. placebo | 9 | 12,149 | 0.74 (0.65, 0.84) | 0.72 (0.62, 0.83) |
| PDE4-i + LAMA vs. placebo | – | – | – | PDE4-i vs. placebo + LAMA vs. placebo | 9 | 16,704 | 0.63 (0.53, 0.74) | 0.62 (0.53, 0.72) |
| LABA + LAMA vs. placebo | – | – | – | LABA vs. placebo + LAMA vs. placebo | 12 | 16,823 | 0.64 (0.54, 0.76) | 0.68 (0.59, 0.79) |
| ICS + LABA + LAMA vs. placebo | – | – | – | ICS + LABA vs. placebo + LAMA vs. placebo | 10 | 15,198 | 0.52 (0.44, 0.61) | 0.53 (0.44, 0.64) |

"Direct" denotes effects in direct comparisons; "Additive" denotes additive effects from combining direct evidence on each component of a combination; "MTC" denotes effects from the MTC.
Abbreviations: IRR, incidence rate ratio; MTC, multiple treatment comparison; RCT, randomized clinical trial; CI, confidence interval; CrI, credible interval; PDE4-i, phosphodiesterase-4 inhibitor; LABA, long-acting beta-agonist; LAMA, long-acting muscarinic agent; ICS, inhaled corticosteroid. Data come from the 26 RCTs for which the IRR could be calculated.
* Under the additivity assumption, treatments that are given in both arms cancel out, that is, A + B vs. A is identical to B vs. placebo.
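As a quick arithmetic check of the additive column (an illustration of ours, using numbers from Table 2): on the IRR scale, additivity corresponds to multiplication, so combining the direct estimates for ICS vs. placebo (0.81) and LABA vs. placebo (0.87) gives 0.81 × 0.87 ≈ 0.70, in line with the additive estimate of 0.70 (95% CI: 0.61, 0.81) reported for ICS + LABA vs. placebo.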

The additive MTC model that we use here is similar to the additive main-effects models considered for combinations of psychological treatments by Welton et al. [8]. The model structure for this analysis is available in Appendix C at www.jclinepi.com.

4.5. Are the additive effects consistent with the direct combination effects?

Table 2 shows the results for the estimated treatment effect of each of the single agents and combinations according to the three different approaches: direct comparisons, additive effects from combining direct evidence on each component of a combination, and MTC with an additive main-effects model. The three approaches give quite similar point estimates for the treatment effects of the examined regimens and comparisons, with two exceptions. For LABA + LAMA vs. LAMA, the direct estimate suggests no benefit of adding LABA to LAMA (IRR, 1.07; 95% CI: 0.94, 1.22), whereas the indirect and MTC estimates suggest a clear benefit (IRR, 0.87; 95% CI: 0.79, 0.96, and IRR, 0.86; 95% CrI: 0.80, 0.94, respectively). For ICS + LABA + LAMA vs. LAMA, the three-drug combination shows no incremental benefit over LAMA alone in the single small direct trial (IRR, 0.96; 95% CI: 0.80, 1.14) but shows a substantial benefit in the indirect and MTC calculations (IRR, 0.70; 95% CI: 0.61, 0.81, and IRR, 0.71; 95% CrI: 0.64, 0.78, respectively). The uncertainty around the point estimates also tends to be smaller (narrower CIs or CrIs) with the indirect and MTC methods than with the direct comparisons, especially when there is only a single direct trial. In these cases, the indirect and MTC additive-effects methods incorporate far more evidence in the calculations, and this reduces the uncertainty about the effect sizes.

5. Interpretation

The additive effects assumption permits the calculation of a combined effect of more than one intervention when evidence from direct evaluations is unavailable. As illustrated in Table 1 and our example, important assumptions are made in such analyses, and one should think carefully about their validity. Many biases may affect the results of single trials, meta-analyses, and MTCs, and thus also additive effects. Comparison of the results obtained with additivity assumptions and large MTC analyses against direct randomized evidence would be useful. When the results of different approaches disagree, it is not always certain which estimate is more reliable [26–28]. Clinical and methodological considerations and the potential for biases in each comparison involved need to be considered on a case-by-case basis.


Even if one wants to put more trust in direct RCTs, such data may often be very limited, whereas the additive approach and an MTC may use information from many more trials and participants. Evidence from indirect estimates of additive effects may sometimes be strong enough to guide clinical practice. When this evidence is more limited or tenuous, it may at a minimum inform the design of a large, well-designed RCT to evaluate a combination of agents.

The proposed methods may be useful both to Health Technology Assessment agencies and to entities planning clinical trials. Given the large number of possible combination treatments available, this approach may provide the best estimate of the treatment effect a combined intervention is likely to exhibit. Although RCT designs such as factorial trials permit the evaluation of individual and combination treatments, our proposed approach provides a best estimate in the absence of direct evidence. Clinicians who use combination treatments that have not been evaluated in direct RCTs may find this approach helpful in estimating the effect of a combined treatment. However, we recognize that in clinical practice, combined treatments may be used to increase effectiveness once first-line treatment has been exhausted, which may violate the considerations listed in Table 1. Acknowledging these caveats, additive effects may be a useful tool to estimate the effectiveness of treatment combinations.

Appendix Supplementary material Supplementary data related to this article can be found online at doi: 10.1016/j.jclinepi.2012.07.012

References

[1] Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med 2009;151:W65–94.
[2] Yusuf S. Two decades of progress in preventing vascular disease. Lancet 2002;360:2–3.
[3] Wald NJ, Law MR. A strategy to reduce cardiovascular disease by more than 80%. BMJ 2003;326:1419.
[4] Rodgers A, Patel A, Berwanger O, Bots M, Grimm R, Grobbee DE, et al. An international randomised placebo-controlled trial of a four-component combination pill ("polypill") in people with raised cardiovascular risk. PLoS One 2011;6:e19857.
[5] Yusuf S, Pais P, Afzal R, Xavier D, Teo K, Eikelboom J, et al. Effects of a polypill (Polycap) on risk factors in middle-aged individuals without cardiovascular disease (TIPS): a phase II, double-blind, randomised trial. Lancet 2009;373:1341–51.
[6] Soliman EZ, Mendis S, Dissanayake WP, Somasundaram NP, Gunaratne PS, Jayasingne IK, et al. A polypill for primary prevention of cardiovascular disease: a feasibility study of the World Health Organization. Trials 2011;12:3.



[7] Lonn E, Bosch J, Teo KK, Pais P, Xavier D, Yusuf S. The polypill in the prevention of cardiovascular diseases: key concepts, current status, challenges, and future directions. Circulation 2010;122:2078–88.
[8] Welton NJ, Caldwell DM, Adamopoulos E, Vedhara K. Mixed treatment comparison meta-analysis of complex interventions: psychological interventions in coronary heart disease. Am J Epidemiol 2009;169:1158–65.
[9] Toews ML, Bylund DB. Pharmacologic principles for combination therapy. Proc Am Thorac Soc 2005;2:282–9; discussion 90–1.
[10] Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ 2004;328:1490.
[11] Borenstein M, Hedges L, Higgins JP, Rothstein H. Introduction to meta-analysis. Chichester, West Sussex: John Wiley and Sons; 2009 [Chapter 25].
[12] Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ 2009;181:488–93.
[13] Salanti G, Higgins JP, Ades AE, Ioannidis JP. Evaluation of networks of randomized trials. Stat Methods Med Res 2008;17:279–301.
[14] Sutton AJ, Higgins JP. Recent developments in meta-analysis. Stat Med 2008;27:625–50.
[15] Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med 2004;23:3105–24.
[16] Mills EJ, Druyts E, Ghement I, Puhan MA. Pharmacotherapies for chronic obstructive pulmonary disease: a multiple treatment comparison meta-analysis. Clin Epidemiol 2011;3:107–29.
[17] American Thoracic Society, European Respiratory Society. Standards for the diagnosis and management of patients with COPD. Available at http://www.thoracic.org/clinical/copd-guidelines/resources/copddoc.pdf. Accessed February 24, 2012.
[18] Vogelmeier C, Hederer B, Glaab T, Schmidt H, Rutten-van Molken MP, Beeh KM, et al. Tiotropium versus salmeterol for the prevention of exacerbations of COPD. N Engl J Med 2011;364:1093–103.
[19] Vogelmeier C, Kardos P, Harari S, Gans SJ, Stenglein S, Thirlwell J. Formoterol mono- and combination therapy with tiotropium in patients with COPD: a 6-month study. Respir Med 2008;102:1511–20.
[20] Devereaux PJ, Choi PT, El-Dika S, Bhandari M, Montori VM, Schunemann HJ, et al. An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol 2004;57:1232–6.
[21] Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. Oxford, UK: Wiley & Sons; 2008 [Chapter 9].
[22] Song F, Xiong T, Parekh-Bhurke S, Loke YK, Sutton AJ, Eastwood AJ, et al. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study. BMJ 2011;343:d4909.
[23] Salanti G, Kavvoura FK, Ioannidis JP. Exploring the geometry of treatment networks. Ann Intern Med 2008;148:544–53.

[24] Puhan MA, Vollenweider D, Steurer J, Bossuyt PM, Ter Riet G. Where is the supporting evidence for treating mild to moderate chronic obstructive pulmonary disease exacerbations with antibiotics? A systematic review. BMC Med 2008;6:28.
[25] Rodriguez-Roisin R. Toward a consensus definition for COPD exacerbations. Chest 2000;117:398S–401S.
[26] Madan J, Stevenson MD, Cooper KL, Ades AE, Whyte S, Akehurst R. Consistency between direct and indirect trial evidence: is direct evidence always more reliable? Value Health 2011;14:953–60.
[27] Ioannidis JP. Indirect comparisons: the mesh and mess of clinical trials. Lancet 2006;368:1470–2.
[28] Song F, Harvey I, Lilford R. Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. J Clin Epidemiol 2008;61:455–63.

Glossary

Linear scale: A scale in which a change between two values is perceived on the basis of the difference between the values. Visually, a linear scale is a scale on which divisions of equal size are uniformly spaced. For example, when measuring systolic blood pressure, a change from 95 to 90 would be perceived as the same amount of decrease as a change from 105 to 100.

Log-linear scale: A scale in which a change between two values is perceived on the basis of the ratio between the values. For example, in older patients undergoing noncardiac surgery, a change from 1% to 2% strokes would be perceived as the same amount of relative increase as a change from 3% to 6% strokes, a doubling of the risk. If the logarithm is taken of values on a log-linear scale (i.e., they are log transformed), these values will be on a linear scale. This is exploited in additivity models to work with RR metrics.

Additivity: Additivity refers to the situation where the effect of two treatments given in combination equals the sum of their individual effects. That is, if treatment A has an effect E_A vs. control and treatment B has an effect E_B vs. that same control, then the combination of A and B (vs. control) would have an effect E_A + E_B. This requires that the employed effect metric is on a linear scale.

Interaction: Interaction describes the extent to which the effect of two (or more) treatments given in combination deviates from the sum of their established individual effects.

Antagonism: Antagonism refers to the situation where the effect of two treatments given in combination is less than the sum of their individual effects (provided the employed effect metric is on a linear scale).

Synergism: Synergism (or synergistic effect) refers to the situation where the effect of two treatments given in combination is larger than the sum of their individual effects (provided the employed effect metric is on a linear scale).