Causal Reasoning in Epidemiology: Philosophy and Logic

Journal Pre-proof

George Maldonado, Louis Anthony Cox

PII: S2590-1133(20)30004-3
DOI: https://doi.org/10.1016/j.gloepi.2020.100020
Reference: GLOEPI 100020
To appear in: Global Epidemiology
Received date: 11 February 2020
Accepted date: 11 February 2020

Please cite this article as: G. Maldonado and L.A. Cox, Causal Reasoning in Epidemiology: Philosophy and Logic, Global Epidemiology (2020), https://doi.org/10.1016/j.gloepi.2020.100020

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2020 Published by Elsevier.


Causal Reasoning in Epidemiology: Philosophy and Logic

George Maldonado¹, Louis Anthony Cox²,³

1. University of Minnesota, School of Public Health, Division of Environmental Health Sciences, 420 Delaware St. SE, Minneapolis, MN 55455. [email protected]. 612-626-2104. (Corresponding author)
2. Cox Associates, 503 Franklin Street, Denver, CO 80218.
3. University of Colorado.

TYPE OF MANUSCRIPT: Commentary

ACKNOWLEDGMENTS

Special thanks to Karen Goodman, Andrew Ward and Anne Jurek for helpful comments on several versions of this manuscript. Thanks also to Sander Greenland, Pamela Mink, Carl Phillips, Sharon Schwartz and the students of Epidemiology Methods III (SPH 766, School of Public Health, University of Alberta) for helpful comments.

SOURCE OF FUNDING AND CONFLICTS OF INTEREST

We are grateful to the American Petroleum Institute (API) for grant number 2016-110225, which funded the writing of an early draft manuscript on this topic. The API was not involved in the writing of that draft manuscript or this manuscript, and it did not review either manuscript.


ABSTRACT


This commentary adds to a lively discussion of causal modeling, reasoning and inference in the recent epidemiologic literature. We focus on fundamental philosophical and logical principles of causal reasoning in epidemiology, raising important points not emphasized in the recent discussion. To inform public health decisions that require answers to causal questions, studies should be approached as exercises in causal reasoning. They should ask well-specified causal questions, and use estimators that approximate, given practical constraints, a "perfect" study, based on a clear definition of causation and a clear (and preferably explicit) understanding of the philosophical basis for that definition. They should examine how the estimator falls short of approximating the "perfect" study design, conduct and analysis; adjust the study results for these shortcomings; and, in the publication of study results, clearly state the assumptions that were made in the design, conduct and analysis of the study, and discuss their plausibility for the topic under study. We argue that the explicit philosophical foundation for causal reasoning need not be counterfactual reasoning (currently in vogue in epidemiology), but it should lead to a well-defined ideal study design for answering causal questions and a mathematical expression for a measure of causal effect. We argue that the perspective of causal reasoning is an indispensable aid in producing study results that are useful for answering causal questions. It is also an indispensable aid in developing and refining epidemiologic methods for answering causal questions, and in understanding the attributes required of a method that is truly causal.


Keywords: Causal reasoning; causation; bias; counterfactuals


INTRODUCTION

This commentary adds to a lively discussion of causal modeling, reasoning and inference in the recent epidemiologic literature (1-16). We focus on fundamental philosophical and logical principles of causal reasoning in epidemiology, raising important points not emphasized in the recent discussion. Although many of the fundamental principles here have been previously described, we cover them in some detail to show where the arguments we make apply in the overall process of causal reasoning; they bear repeating, as some are often missing, especially in highly mathematized discussions.

FUNDAMENTALS OF CAUSAL REASONING IN EPIDEMIOLOGY

Public health decisions often require answers to causal questions. Studies designed to inform these decisions should be approached as exercises in causal reasoning, and should do the following, as discussed in the following sections:

• Ask well-specified causal questions.

• For each causal question, use an estimator¹ that approximates, given practical constraints, a "perfect" study, based on a clear definition of causal effect and a clear understanding of the philosophical basis for that definition.

• Examine how the estimator falls short of approximating the "perfect" study design, conduct and analysis.

• Adjust the study results for these shortcomings.

• In the publication of study results, state the assumptions that were made in the design, conduct and analysis of the study, and discuss why they are believed to be plausible for the topic under study.

¹ Here we use the term "estimator" to mean "…a rule that, when applied to a data set, produces an estimate of the parameter of interest" ((17), page 395).

Ask well-specified causal questions

Causal reasoning begins with a causal question, which for health studies has the following components (18):

• An outcome (e.g., the presence or absence of a disease or an injury).

• An exposure contrast (i.e., the contrast in exposure whose causal effect we ask about; e.g., a contrast of ever exposed versus never exposed). (Note that by "exposure" we mean "exposed when" as well as "exposed to how much".)

• A target time period (i.e., the time period over which the outcome of interest is monitored to see whether and when it occurs).

• A target population (i.e., the population about which we ask our causal question).


• A measure of causal effect (i.e., the form that the answer to our causal question will take; e.g., a ratio or difference of outcome frequencies such as a causal odds ratio, incidence-proportion ratio or person-time incidence rate ratio).

If any component is unclear or missing, the causal question is incompletely specified and hence unclear. Unclear questions may have answers of limited usefulness for informing public health decisions.²

² Consider the question, "Does smoking cigarettes cause lung cancer?" This question is not clearly specified, with the following important components omitted: the target population, the target time period, and the exposure contrast. Consequently, the answers to this unclearly specified question are yes and no (e.g., yes for some heavy, long-term smokers (versus never smoking); and no for people who have smoked only one cigarette late in their lifetime (versus never smoking)).
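To make these components concrete, the following minimal Python sketch (our illustrative addition; the example values are hypothetical, not from the paper) represents a causal question as a simple data structure and checks that no component is missing:

    # A minimal sketch (hypothetical example values) of the components of a
    # well-specified causal question, represented as a data structure.
    from dataclasses import dataclass

    @dataclass
    class CausalQuestion:
        outcome: str              # e.g., first diagnosis of lung cancer
        exposure_contrast: str    # e.g., ever smoked versus never smoked
        target_time_period: str   # e.g., 2000-2020
        target_population: str    # e.g., adults aged 40 or older in 2000
        effect_measure: str       # e.g., incidence-proportion ratio

        def is_well_specified(self) -> bool:
            # A question is incompletely specified if any component is missing.
            return all(vars(self).values())

    q = CausalQuestion(
        outcome="first diagnosis of lung cancer",
        exposure_contrast="ever smoked versus never smoked",
        target_time_period="2000-2020",
        target_population="adults aged 40 or older in 2000",
        effect_measure="incidence-proportion ratio",
    )
    print(q.is_well_specified())  # True; False if any component were empty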


Clearly define causal effect, and understand the philosophical basis for this definition

It is difficult to estimate validly and precisely a quantity that is ill defined.³ A clear definition of causal effect will likely require a clear philosophical basis. Many (but not all) epidemiologists currently favor counterfactual theory as the basis for defining causal effect (7, 9, 10, 15, 18, 20-22). The methodological development throughout the rest of this manuscript assumes counterfactual theory as the basis for defining causal effect.⁴ (Note, however, that with an alternative philosophical foundation the methodological development below would likely be different.)

³ As Yogi Berra said, "If you don't know where you're going, you might not get there" ((19), page 53).

⁴ Specifically, counterfactual theory as outlined in Maldonado and Greenland (21).

Approximate the "perfect" study design, conduct and analysis


To answer a causal question, use an estimator of the desired effect that, within practical constraints, and given a philosophical basis for defining causal effect, approximates as closely as possible a “perfect” study design, study conduct and analysis.


For example, under the philosophical basis of counterfactual theory, conduct a study that, given practical constraints, approximates a causal-contrast thought experiment in the target population of interest (7). The following section illustrates how counterfactual reasoning leads to a clear definition of causal effect, as well as to a clear mathematical description of a “perfect” study design for estimating it.


“Perfect” study for causal questions under counterfactual theory: causal-contrast thought experiment


Let i denote an exposure pattern. For example, i = 1 might denote “ever exposed” to an exposure of interest and i = 0 might denote “never exposed”. In the target population during the target time period, let A denote the number of new cases of a health outcome of interest, and let B denote the denominator of an outcome-frequency measure.


A causal-contrast thought experiment (table 1) compares Ai/Bi under different exposure patterns. For example, A1/B1 is the outcome frequency that would occur if exposure pattern i = 1 had occurred in the target population, and A0/B0 is the outcome frequency that would occur in the same target population during the same target time period if instead exposure pattern i = 0 had occurred. (In practice, one (or both) of these outcome frequencies will be counterfactual.) A comparison of A1/B1 and A0/B0 is a causal contrast that measures the causal effect of the difference in exposure patterns 1 and 0.⁵,⁶

⁵ The comparison of A1/B1 and A0/B0 measures a causal effect at the level of the target population. In other words, this comparison will yield an average over all individuals in the target population of the individual causal effects of the difference in exposure patterns 1 and 0.

⁶ Contrary to Vandenbroucke ((10), p. 2, "…we do not have a precise—let alone quantitative definition of causation…"), we in fact do have a precise and quantitative definition of causation. The causal contrast described here and in other papers (e.g., (21)) is that quantitative definition (under the counterfactual philosophy of causation).


For example, (A1/B1) ÷ (A0/B0) is a ratio causal contrast (measure of relative causal effect), and (A1/B1) – (A0/B0) is a difference causal contrast (measure of absolute causal effect).

Table 1. Causal-contrast thought experiment: In the target population during the target time period, what outcome frequency would occur if exposure pattern i = 1 had occurred? If instead exposure pattern i = 0 had occurred?

                                             …If exposure      …If exposure
                                             pattern i = 1     pattern i = 0
Number of new cases of a health outcome      A1                A0
Denominator of outcome-frequency measure     B1                B0
Outcome-frequency measure                    A1/B1             A0/B0
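To illustrate, here is a minimal Python sketch (our addition; the counts are hypothetical, not from the paper) that computes the two causal contrasts just defined from the quantities in table 1:

    # Ratio and difference causal contrasts from table 1 (hypothetical counts).

    def ratio_causal_contrast(a1, b1, a0, b0):
        # Relative causal effect: (A1/B1) / (A0/B0).
        return (a1 / b1) / (a0 / b0)

    def difference_causal_contrast(a1, b1, a0, b0):
        # Absolute causal effect: (A1/B1) - (A0/B0).
        return (a1 / b1) - (a0 / b0)

    # Hypothetical thought-experiment values: 30 cases among 1,000 persons under
    # exposure pattern i = 1; 10 cases among the same 1,000 persons under i = 0.
    print(ratio_causal_contrast(30, 1000, 10, 1000))       # 3.0
    print(difference_causal_contrast(30, 1000, 10, 1000))  # 0.02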


A key element of a causal-contrast thought experiment is that it compares the outcome frequency associated with a specific exposure pattern in a target population during a target time period to the outcome frequency associated with a different exposure pattern in the same target population during the same target time period. This contrast isolates the causal effect of the difference in the exposure patterns from other factors that affect outcome frequency; the only possible reason for a difference in outcome frequencies⁷ is the difference in exposure patterns.

⁷ Or, if the outcome follows a stochastic outcome-occurrence model, a difference in the probability distributions of the outcome frequencies. In this manuscript, in our discussion of a causal-contrast thought experiment, we refer to exposure patterns and outcome frequencies that are observed without systematic or random error.


For example, a sufficiently large, perfectly executed and perfectly analyzed randomized experiment is, in theory, an estimator that closely approximates a causal-contrast thought experiment; see (7) for details. Note, however, that contrary to some recent writings (e.g., (10), p. 3), some aspects of a randomized trial are not required in a causal-contrast thought experiment (and therefore not required for causal inference under counterfactual theory), e.g., an intervention that is humanly manipulable, and a positive probability of each level of the exposure contrast for all people in the study. We therefore consider a causal-contrast thought experiment, not a randomized trial, to be the gold-standard study design for epidemiologic studies that ask causal questions (i.e., under the philosophical basis of counterfactual reasoning).


The philosophical basis for causation need not, however, be restricted to counterfactual reasoning. In fact, given the limitations of counterfactual reasoning for estimating causal effects (e.g., the fundamental problem of counterfactuals), one might hope for a viable alternative, and some alternatives have been suggested (e.g., (22)). If you reject counterfactual reasoning and instead use a different philosophical basis for defining causation, then conduct a study that approximates the "perfect" study dictated by that philosophical basis. However, if your chosen philosophical basis does not lead to an ideal study design and a mathematical expression for a measure of causal effect, then it may not be sufficient for causal reasoning in epidemiology.



Examine how the estimator falls short of approximating the "perfect" study

Few, if any, studies are perfectly designed, perfectly conducted and perfectly analyzed. It is therefore important to understand a study's imperfections and how they might affect study results. Under the counterfactual theory of causation, for example, one should understand how a study falls short of a causal-contrast thought experiment. In particular, one should examine a study for the following potential study imperfections: confounding, non-random subject selection, errors in measuring the study variables, and model-specification error.

Confounding


Under the counterfactual definition of causation, "...confounding in an estimate of a causal contrast occurs when we use an actual (observable) outcome frequency as a substitute for a counterfactual outcome frequency (unobservable because it is a hypothetical alternative to what actually occurred), and the substitute is imperfect" ((18), page 745).


For example (table 2), in this manuscript let us assume that the target population experiences exposure pattern i = 1. Then outcome frequency A1/B1 occurs, and exposure pattern i = 0 is counterfactual, as is its corresponding outcome frequency A0/B0; we therefore cannot directly observe a causal contrast that compares A1/B1 and A0/B0. If a different population does experience exposure pattern i = 0, then E0/F0 does occur in that different population, and we can use E0/F0 in that different population as a substitute for the unobservable A0/B0. Our comparison of A1/B1 and E0/F0 is confounded for the causal-contrast comparison of A1/B1 and A0/B0 if E0/F0 is not equal to the counterfactual A0/B0. (Note that here we refer to exposure patterns and outcome frequencies that are observed without error.)⁸

⁸ For more details on the counterfactual definition of confounding, see (7, 18, 20, 21, 23, 24). See (18) for a discussion of confounding without "confounders", "confounders" without confounding, and shortcomings of common confounder-identification strategies.


We can, in principle, write a mathematical expression that quantifies how much error confounding causes in our estimate of causal effect. For example, suppose we would like to estimate a ratio measure of causal effect: (A1/B1) ÷ (A0/B0). Because of the problem of counterfactuals, this quantity cannot be observed. Suppose we can, however, observe (A1/B1) ÷ (E0/F0). The degree of confounding in our observed ratio estimate of causal effect (A1/B1) ÷ (E0/F0) is equal to (A0/B0) ÷ (E0/F0), which is simply the ratio of the counterfactual outcome frequency divided by the substitute outcome frequency (24).⁹ Of course, because of the problem of counterfactuals, in practice we cannot directly calculate the magnitude of confounding.¹⁰

⁹ (A1/B1) ÷ (E0/F0) = [(A1/B1) ÷ (A0/B0)] × [(A0/B0) ÷ (E0/F0)].

¹⁰ Except in special situations in which the counterfactual is easy to understand. For example, when skydiving from 10,000 feet, would the skydiver have survived his jump if instead his parachute had failed to open?
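The factorization in footnote 9 can be illustrated with a minimal Python sketch (our addition; the counterfactual counts are hypothetical, since in practice they are unobservable):

    # Degree of confounding as the ratio of the counterfactual outcome
    # frequency to the substitute outcome frequency (hypothetical numbers).

    a1, b1 = 30, 1000   # target population under its actual exposure pattern i = 1
    a0, b0 = 10, 1000   # counterfactual: unobservable in practice, assumed here
    e0, f0 = 15, 1000   # substitute population under exposure pattern i = 0

    observed_ratio = (a1 / b1) / (e0 / f0)   # what a study can estimate: 2.0
    true_ratio = (a1 / b1) / (a0 / b0)       # the causal contrast: 3.0
    confounding = (a0 / b0) / (e0 / f0)      # degree of confounding: about 0.67

    # Footnote 9's factorization: observed = true x confounding.
    assert abs(observed_ratio - true_ratio * confounding) < 1e-12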


Table 2. Potential for confounding in a causal-contrast estimate: Outcome frequency in a substitute population is used as a substitute for a counterfactual outcome frequency in the target population.

                                  Target population               Substitute population   Substitute population
                                  …If exposure    …If exposure    that experiences        that experiences
                                  pattern i = 1   pattern i = 0   exposure pattern i = 1  exposure pattern i = 0
Number of new cases
of a health outcome               A1              A0              C1                      E0
Denominator of outcome-
frequency measure                 B1              B0              D1                      F0
Outcome-frequency measure         A1/B1           A0/B0           C1/D1                   E0/F0

Non-random subject selection


Typically, only a subset of the subjects in the target and substitute populations are included in the dataset that is analyzed (e.g., a subset of subjects may be intentionally sampled into the study, or subjects may be lost to follow-up, refuse to participate, or be excluded from the analysis due to missing information). If the subject-selection process results in a non-random subset of subjects, selection error may occur.


Under counterfactual theory, if the target population experiences exposure pattern i = 1, before the selection process (outer 2x2 table in figure 1) the outcome frequencies under exposure patterns i = 1 and i = 0 would be A1/B1 and E0/F0,¹¹ respectively. After the selection process (inner 2x2 table in figure 1), the observed outcome frequencies would be a1/b1 and e0/f0, respectively. In this example, selection error in a relative-risk estimate occurs if (a1/b1) ÷ (e0/f0) is not equal to (A1/B1) ÷ (E0/F0) (24-26). (Again, here we refer to exposure patterns and outcome frequencies that are observed without error.)

¹¹ Because of the problem of counterfactuals, if the target population experiences exposure pattern i = 1 (which we assume in this manuscript), then the outcome frequency of the target population under exposure pattern i = 0 (i.e., A0/B0) is counterfactual, and we use outcome frequency E0/F0 as a substitute for the counterfactual A0/B0. Therefore, in this situation subject sampling for exposure pattern i = 0 is from the actual experience of the substitute population, not from the counterfactual experience of the target population.

Figure 1. Subject-selection process. [Figure: nested 2x2 tables. The outer table shows, before selection, the number of new cases (A1, E0) and the denominators (B1, F0) for the target population experiencing exposure pattern i = 1 and the substitute population experiencing exposure pattern i = 0; the inner table shows the corresponding selected subsets (a1, e0; b1, f0).]
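The following minimal Python sketch (our addition; the selection probabilities are hypothetical, not from the paper) illustrates the definition of selection error given above:

    # Selection error: the relative-risk estimate after selection differs from
    # the relative risk before selection (hypothetical numbers).

    A1, B1 = 30, 1000   # before selection: target population, exposure pattern i = 1
    E0, F0 = 15, 1000   # before selection: substitute population, pattern i = 0

    # Suppose selection is non-random: 90% of exposed cases but only 60% of
    # unexposed cases are selected, versus 80% of both denominators.
    a1, b1 = 27, 800
    e0, f0 = 9, 800

    before = (A1 / B1) / (E0 / F0)   # 2.0
    after = (a1 / b1) / (e0 / f0)    # 3.0
    print(before, after)             # after != before, so selection error occurs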

Measurement error

Error can occur in measuring the exposure under study, the outcome of interest, or adjustment variables. We epidemiologists have been admonished to give more attention to the effect of measurement error on our study results (27). Nevertheless, too often it is ignored. In a 2006 publication, Jurek et al. (28) reported the results of a random sample of studies published in three major epidemiology journals. They concluded the following for exposure-measurement error (EME): "Overall, the potential impact of EME on error in epidemiologic study results appears to be ignored frequently in practice" (page 871). Brakenhoff et al. (29) reported similar results in a survey of studies that was published in 2016. Measurement error is a potentially serious concern in epidemiologic studies for several reasons. First, small amounts of measurement error can cause large amounts of error in study results (30). Second, measurement error can inflate or deflate an effect estimate, but unfortunately in most real situations there is no simple rule for judging whether the total error (e.g., the combined impact of measurement error, confounding, subject-selection error, random error, etc.) results in an over- or under-estimate of the true effect size.


We can judge the contribution of measurement error to the total error in special situations, however. For example, in the simplest case, we can judge that exposure-measurement error has "pulled" an observed effect estimate toward a value of no effect if the exposure variable meets the following criteria: (a) it is naturally dichotomous (i.e., not a categorical or continuous variable that is collapsed into two levels),¹² (b) exposure misclassification is exactly the same in diseased and non-diseased (i.e., exactly nondifferential (33, 34)), and (c) exposure misclassification is independent of disease misclassification. Other study errors, however, might have pulled the effect estimate away from a value of no effect, and consequently the direction of the total error might be away from the null, and even greater than the true (expected) value of the point estimate.¹³

¹² Strictly speaking, this criterion is not absolutely necessary, as long as the dichotomized exposure is exactly nondifferential. We add this criterion, however, to remind us that dichotomizing a categorical or continuous variable can lead to differential misclassification (31, 32).

¹³ The often-cited heuristic about the impact on study results of nondifferential misclassification of a dichotomous exposure variable, although interesting theoretically in the special situations in which it is correct, has less practical utility than we typically ascribe to it; it implicitly ignores other study errors (e.g., confounding, disease misclassification, subject-selection error, random error, etc.). How often will a study have no uncontrolled and potentially important errors other than exposure misclassification?
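Here is a minimal simulation sketch (our addition; the sensitivity, specificity and counts are hypothetical) of the simplest case just described, showing exactly nondifferential misclassification of a dichotomous exposure pulling the expected risk ratio toward the null:

    # Exactly nondifferential exposure misclassification of a dichotomous
    # exposure: the same sensitivity/specificity in diseased and non-diseased.

    def classified_counts(n_exposed, n_unexposed, sensitivity, specificity):
        # Expected counts classified as exposed and as unexposed.
        as_exposed = n_exposed * sensitivity + n_unexposed * (1 - specificity)
        as_unexposed = n_exposed * (1 - sensitivity) + n_unexposed * specificity
        return as_exposed, as_unexposed

    # True 2x2 table: risk 0.30 in 1,000 exposed, 0.10 in 1,000 unexposed
    # (true risk ratio = 3.0).
    case_e, case_u = classified_counts(300, 100, 0.8, 0.9)        # cases
    noncase_e, noncase_u = classified_counts(700, 900, 0.8, 0.9)  # non-cases

    observed_rr = (case_e / (case_e + noncase_e)) / (case_u / (case_u + noncase_u))
    print(observed_rr)  # about 2.0, pulled toward the null from the true 3.0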

In more complicated situations the contribution of measurement error to the total error is more complicated to assess. For example, in a three-level exposure variable with exact nondifferentiality and independence from disease misclassification, the "pull" of exposure misclassification will not necessarily be toward the null (35). (Again, however, the direction of the total error depends on the combined impact of all study errors.)

Model-specification error


Statistical modeling combines data with assumptions to yield a quantitative result that is a function of both (7, 17, 24, 36-39). As eloquently stated by Robins and Greenland (17), "…a statistical model is a mathematical expression for a set of assumed restrictions on the possible states of nature" (page 393). If the model assumptions ("assumed restrictions on the possible states of nature") are correct, the statistical modeling analysis yields a better answer to the study question than would be obtained from the data alone.¹⁴ If model assumptions are incorrect, however, model-specification error can cause error in study results. Models used in epidemiology to estimate exposure-disease relationships typically include the following assumptions (7, 17, 23, 24, 36, 37, 39):¹⁵

• No confounding.

• Random error is correctly modeled (i.e., correct specification of the "error distribution" or "sampling model").

• The model correctly describes the dependence of disease occurrence on the study exposure and covariates (i.e., correct specification of the "structural model form"). For example, the model correctly describes the dose-response relationship between the study outcome and the study exposure (e.g., exponential versus linear), the dose-response relationship between the study outcome and each adjustment variable, and the mathematical relationship between the model variables (e.g., multiplicative versus additive); see the sketch below.

• Any study imperfections not accounted for by the analysis cause no important error in the study results.

¹⁴ Of course, the combined impact of other study imperfections could cause study results to be misleading even if statistical modeling assumptions are correct.

¹⁵ Given all these assumptions, it might seem perfectly natural to wonder if modeling has a built-in "catch-22": if we had as much information about our study topic as modeling requires, perhaps there would be no need to do the study. Given this potential "catch-22", it might be wise to give serious attention to Vandenbroucke's (40) question, "Should we abandon statistical modeling altogether?" and Greenland's (37) question, "Are conventional statistics anything other than misleading?".
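As a small illustration of structural-model misspecification (our addition; the dose-response form and numbers are hypothetical), the sketch below fits a straight line to an exponential dose-response, so the fitted model over- and under-estimates risk at different doses:

    # Structural-model misspecification: the true dose-response is exponential,
    # but an additive straight-line model is fit (ordinary least squares).

    doses = [0, 1, 2, 3, 4]
    true_risk = [0.02 * (1.5 ** d) for d in doses]   # exponential dose-response

    n = len(doses)
    mean_x = sum(doses) / n
    mean_y = sum(true_risk) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, true_risk))
             / sum((x - mean_x) ** 2 for x in doses))
    intercept = mean_y - slope * mean_x

    for d, r in zip(doses, true_risk):
        fitted = intercept + slope * d
        print(f"dose {d}: true risk {r:.4f}, straight-line fit {fitted:.4f}")
    # The straight-line model misdescribes the dose-response at every dose.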


Adjust study results to account for how far the estimator falls short of the "perfect" study


Use understanding of how a study departs from the "perfect" study to adjust observed results for the study's imperfections (e.g., with multiple-bias modeling or bias (uncertainty) analysis (24, 41-44)).
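For instance, here is a minimal Monte Carlo sketch (our addition; the observed estimate and the prior for the confounding factor are hypothetical, in the spirit of (24, 41-44)) of a simple bias (uncertainty) analysis that divides an observed ratio estimate by confounding factors drawn from a prior distribution:

    # Simple Monte Carlo bias (uncertainty) analysis: adjust an observed ratio
    # estimate for an uncertain confounding factor (hypothetical inputs).
    import math
    import random

    random.seed(1)
    observed_rr = 2.0   # hypothetical observed (A1/B1) / (E0/F0)

    adjusted = []
    for _ in range(10000):
        # Prior for the confounding factor (A0/B0) / (E0/F0): lognormal,
        # centered near 1.1 (modest upward confounding suspected).
        confounding = 1.1 * math.exp(random.gauss(0.0, 0.1))
        adjusted.append(observed_rr / confounding)

    adjusted.sort()
    print(adjusted[5000],                 # median bias-adjusted RR, about 1.8
          adjusted[250], adjusted[9750])  # approximate 95% simulation interval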

Assumptions, assumptions, assumptions

Study results depend on design and modeling assumptions as well as on the study data. When reporting and interpreting study results, clearly state these assumptions, and discuss why they are believed to be plausible for the topic under study.

DISCUSSION

It is our impression that, in the epidemiologic literature on causal inference, fundamental philosophical and logical principles of causal reasoning are rarely emphasized, especially in highly mathematized discussions. Our goals in this commentary were (a) to outline these principles, and (b) in the process to draw attention to some important points not emphasized in, or absent from, this literature. We argue that causal reasoning requires a clear and explicit definition of causal effect, which we believe requires a clear and explicit philosophical foundation. We showed how the philosophy of counterfactual reasoning (currently favored by many epidemiologists) leads to a gold-standard study design (i.e., a causal-contrast thought experiment¹⁶) and to a gold-standard mathematical definition of causal effect (i.e., a causal contrast). We note, however, that a different philosophical foundation for causal reasoning would likely lead to a different gold-standard study design and definition of causal effect. We welcome a viable alternative philosophical foundation, given the fundamental challenge of causal reasoning under the philosophy of counterfactual reasoning (i.e., the problem of counterfactuals); it is likely, however, that any alternative would have its own fundamental challenges. We caution that an alternative philosophical foundation may not be sufficient for answering causal questions in epidemiology if it does not lead to a gold-standard study design and a clear, mathematical definition of causal effect.

¹⁶ Not a randomized trial.


The practical limitations of the real world, of course, will force us to a less-than-perfect study. One of the primary goals of data analysis should be to account for these study imperfections. From the perspective of causal reasoning, data analysis should account for the combined impact of all of the potentially important ones; after all, it is the combined impact of study errors that pulls our effect estimate away from the value we would have observed if we had been able to do the "perfect" study. How well does a typical data analysis do this? We often ignore the impact of measurement error on our study results (28, 29). When we do acknowledge it, we often invoke the heuristic about the impact of nondifferential exposure misclassification on study results, even when nondifferentiality and error toward the null are not guaranteed (31-35, 45), and in the process we often ignore the impact of other potentially important uncontrolled study errors. Rarely do we include enough information about our statistical modeling methods, and the assumptions that are often hidden in them, in the methods sections of our published reports to allow a reader to judge the possibility of model-specification error. We consider "analysis" as separate from "bias analysis", when from the perspective of causal reasoning a standard analysis should include both.


From the perspective of causal reasoning, it appears to us that causal questions are best answered by studies that (1) ask well-specified causal questions, (2) are designed and conducted with a clear and explicit philosophical foundation in mind, (3) are analyzed with analysis techniques that employ assumptions that are believed to be plausible for the topic under study, and (4) account for the impact on study results of all important study imperfections. In addition, it appears to us that the perspective of causal reasoning is an indispensable aid in epidemiologic methodology, guiding us both (1) in the development and refinement of design and analysis methods, and (2) in the understanding of the attributes required of a method that is truly causal.



REFERENCES

1. Dominici F, Greenstone M, Sunstein CR. Science and regulation. Particulate matter matters. Science 2014;344(6181):257-9.

2. Zigler CM, Dominici F. Point: clarifying policy evidence with potential-outcomes thinking--beyond exposure-response estimation in air pollution epidemiology. Am J Epidemiol 2014;180(12):1133-40.

3. Broadbent A, Vandenbroucke JP, Pearce N. Response: Formalism or pluralism? A reply to commentaries on 'Causality and causal inference in epidemiology'. Int J Epidemiol 2016;45(6):1841-51.

4. Broadbent A, Vandenbroucke J, Pearce N. Authors' reply to: VanderWeele et al., Chiolero, and Schooling et al. Int J Epidemiol 2016;45(6):2203-5.

5. Chiolero A. Counterfactual and interventionist approach to cure risk factor epidemiology. Int J Epidemiol 2016;45(6):2202-3.

6. Krieger N, Davey Smith G. The tale wagged by the DAG: broadening the scope of causal inference and explanation for epidemiology. Int J Epidemiol 2016;45(6):1787-808.

7. Maldonado G. Re: "Estimating causal associations of fine particles with daily deaths in Boston". Am J Epidemiol 2016;183(6):594.

8. Robins JM, Weissman MB. Commentary: Counterfactual causation and streetlamps: what is to be done? Int J Epidemiol 2016;45(6):1830-5.

9. Schwartz S, Gatto NM, Campbell UB. Causal identification: a charge of epidemiology in danger of marginalization. Ann Epidemiol 2016;26(10):669-73.

10. Vandenbroucke JP, Broadbent A, Pearce N. Causality and causal inference in epidemiology: the need for a pluralistic approach. Int J Epidemiol 2016.

11. VanderWeele TJ. Commentary: On causes, causal inference, and potential outcomes. Int J Epidemiol 2016;45(6):1809-16.

12. VanderWeele TJ, Hernan MA, Tchetgen Tchetgen EJ, et al. Re: Causality and causal inference in epidemiology: the need for a pluralistic approach. Int J Epidemiol 2016;45(6):2199-200.

13. Dominici F, Zigler C. Best practices for gauging evidence of causality in air pollution epidemiology. Am J Epidemiol 2017;186(12):1303-9.

14. Greenland S. For and against methodologies: some perspectives on recent causal and statistical inference debates. Eur J Epidemiol 2017;32(1):3-20.

15. Schwartz S, Gatto NM, Campbell UB. Heeding the call for less casual causal inferences: the utility of realized (quantitative) causal effects. Ann Epidemiol 2017;27(6):402-5.

16. Maldonado G. The role of counterfactual theory in causal reasoning. Ann Epidemiol 2016;26(10):681-2.

17. Robins JM, Greenland S. The role of model selection in causal inference from nonexperimental data. Am J Epidemiol 1986;123(3):392-402.

18. Maldonado G. Toward a clearer understanding of causal concepts in epidemiology. Ann Epidemiol 2013;23(12):743-9.

19. Berra Y. When You Come to a Fork in the Road, Take It!: Inspiration and Wisdom From One of Baseball's Greatest Heroes. Hyperion; 2002.

20. Greenland S, Robins JM, Pearl J. Confounding and collapsibility in causal inference. Stat Sci 1999;14(1):29-46.

21. Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol 2002;31(2):422-9.

22. Dawid AP. Causal inference without counterfactuals (with discussion). J Am Stat Assoc 2000;95:407-48.

23. Greenland S, Robins JM. Identifiability, exchangeability, and epidemiological confounding. Int J Epidemiol 1986;15(3):413-9.

24. Maldonado G. Adjusting a relative-risk estimate for study imperfections. J Epidemiol Community Health 2008;62(7):655-63.

25. Greenland S, Criqui MH. Are case-control studies more vulnerable to response bias? Am J Epidemiol 1981;114(2):175-7.

26. Kleinbaum DG, Kupper LL, Morgenstern H. Epidemiologic Research: Principles and Quantitative Methods. Belmont, CA: Lifetime Learning Publications; 1982.

27. Michels KB. A renaissance for measurement error. Int J Epidemiol 2001;30(3):421-2.

28. Jurek AM, Maldonado G, Greenland S, et al. Exposure-measurement error is frequently ignored when interpreting epidemiologic study results. Eur J Epidemiol 2006;21(12):871-6.

29. Brakenhoff TB, Mitroiu M, Keogh RH, et al. Measurement error is often neglected in medical literature: a systematic review. J Clin Epidemiol 2018;98:89-97.

30. Ritchey M, West S, Maldonado G. Chapter 37: Validity of drug and diagnosis data in pharmacoepidemiology. In: Strom B, Kimmel S, Hennessy S, eds. Pharmacoepidemiology. West Sussex: John Wiley & Sons; 2018.

31. Wacholder S, Dosemeci M, Lubin JH. Blind assignment of exposure does not always prevent differential misclassification. Am J Epidemiol 1991;134(4):433-7.

32. Flegal KM, Keyl PM, Nieto FJ. Differential misclassification arising from nondifferential errors in exposure measurement. Am J Epidemiol 1991;134(10):1233-44.

33. Maldonado G, Greenland S, Phillips C. Approximately nondifferential exposure misclassification does not ensure bias toward the null. Am J Epidemiol 2000;151(11):S39.

34. Jurek AM, Greenland S, Maldonado G. How far from non-differential does exposure or disease misclassification have to be to bias measures of association away from the null? Int J Epidemiol 2008;37(2):382-5.

35. Dosemeci M, Wacholder S, Lubin JH. Does nondifferential misclassification of exposure always bias a true effect toward the null value? Am J Epidemiol 1990;132(4):746-8.

36. Greenland S. Randomization, statistics, and causal inference. Epidemiology 1990;1(6):421-9.

37. Greenland S. Interval estimation by simulation as an alternative to and extension of confidence intervals. Int J Epidemiol 2004;33(6):1389-97.

38. Leamer EE. Specification Searches. New York: Wiley; 1978.

39. Maldonado G, Greenland S. Interpreting model coefficients when the true model form is unknown. Epidemiology 1993;4(4):310-8.

40. Vandenbroucke JP. Should we abandon statistical modeling altogether? Am J Epidemiol 1987;126(1):10-3.

41. Greenland S. Multiple-bias modeling for analysis of observational data. J R Statist Soc A 2005;168:267-306.

42. Lash TL, Fox MP, MacLehose RF, et al. Good practices for quantitative bias analysis. Int J Epidemiol 2014;43(6):1969-85.

43. Phillips CV, Maldonado G. Using Monte Carlo methods to quantify the multiple sources of error in studies [abstract]. Am J Epidemiol 1999;149:S17.

44. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology 2003;14(4):459-66.

45. Jurek AM, Greenland S, Maldonado G, et al. Proper interpretation of non-differential misclassification effects: expectations vs observations. Int J Epidemiol 2005;34(3):680-7.