J Clin Epidemiol Vol. 42, No. 9, pp. 825-826, 1989
0895-4356/89 $3.00 + 0.00
Copyright © 1989 Pergamon Press plc. Printed in Great Britain. All rights reserved.
“DIRECTIONALITY” IN EPIDEMIOLOGIC RESEARCH
OLLI S. MIETTINEN
McGill University, Faculty of Medicine, 1020 Pine Avenue West, Montreal, Quebec, Canada H3A 1A2
In two recent critiques [1,2] of a previous paper on the fundamental choices in epidemiologic study design [3], all of them in this Journal, the key issue was whether “directionality” of study is one of the dimensions in which choices are to be made. The critics held that it is not. In an Editor’s Note attached to the first critique [1] it was asserted that “the dispute involves a fundamental paradigm in epidemiologic research”; and in an accompanying Editorial [4] it was explained that one thinks of “directionality of research” as either “crucially important” or “unimportant” according as one does or doesn’t hold “the randomized, double-blind controlled experimental trial” as “an extrinsic ‘gold standard’” for nonexperimental epidemiologic research.

As the author of one of the critiques, I now need to point out that in the Editorial the burden of the critiques was missed, totally. In neither critique was it said, or even implied, that directionality is “unimportant”. What was said, in sharp contrast to this, is that “directionality” is not one of the dimensions, aspects or topics on which choices are to be made in the design of epidemiologic studies, because, though indeed “crucially important” to appreciate, it involves no choices.

The adherence to the illusion of options in this context is, as I have argued in other contexts already [4-6], founded on what I have termed the “trohoc fallacy” [5], blinded by which the “case-controller” fails to think of, and through his mind’s eye see, the true study population in its ineluctable forward-directional motion in time. In his blindness, he suffers the delusion that the study population consists of “cases” and “controls”. While in the Editorial there is a laudable emphasis on the need to think of the true study population and its forward time-course in all nonexperimental studies (with
a longitudinal base), there is a touch of the “trohoc fallacy” in its references to “groups ... assembled at the end rather than beginning of the causal pathway” and to “posteffect persons collected for the research”. In strict, proper terms, one does not ever “assemble post-effective groups”. Instead, in all nonexperimental occurrence research, one selects the study population (by selective “assembling” of study subcohorts within a source cohort, or by mere conceptual restrictions within a selected dynamic source population); one seeks to detect all cases occurring in a defined segment of the study population’s forward course (i.e. in the study base), that is, one assesses outcome on a census basis; and one obtains either a census or a sample of the study base (a sample in the context of a case-referent, case-base, or census-sample strategy, not a “case-control” one). The cases do not really represent a “group” of “persons” but a series of events; and the sample of the study base is not really a sample of a population but of population-time [7], and thus it again is not really a “group of persons”.

The Editorial reflects the “trohoc fallacy” explicitly in its distinctions between “cohort” and “case-control” studies, with no allusion to the fallacy in this. (Properly, the alternative to a cohort is a dynamic population, and that to the case-referent strategy is a simple census [4,5,8].)

In a somewhat different vein, the point in the Editorial that “in many epidemiologic studies, the research does not begin until after the agents have been received (or not received) and after the outcome effects have already occurred”, properly understood, is not relevant in this context; it only serves to obfuscate the issue. It must refer to those situations in which the study base is retrospective, i.e. temporally antecedent
to the research work. The issue here (the design choice between a retrospective and a prospective base) does, of course, represent a dimension of design decisions in nonexperimental contexts, though not in experimental ones, as in an experiment the study base is inherently prospective. With reference to nonexperimental research, however, the design topic at issue here is the timing of the study base [8; pp. 22, 56], specifically the relation of this timing to that of the research work, and not its “directionality”.

I should also wish to point out that my objection to the notion of directionality is not a consequence of a failure to entertain the clinical trial as a paradigm for nonexperimental epidemiologic research: I hold the clinical trial as a paradigm for all it is worth in this context [6], though not for what it is not worth [6,9]. Moreover, the clinical trial is expressly illustrative of the point that I, and Greenland and Morgenstern [2], were making; and what is said in the Editorial about clinical trials, and about the way “believers” in this paradigm look upon nonexperimental epidemiologic studies, is in accord with this.

Returning to the main point: whereas the Editorial expresses concern about “scientific rigor and pragmatic ardor in epidemiologic research”, I wish to underscore that in any examination of a theoretical proposition about epidemiologic research, rigor and ardor should first be applied toward secure comprehension of what the proposition is. To repeat, the proposition at issue here (nothing very new [4]) is that under “directionality” there are no choices to be made in epidemiologic study design; i.e. that it is a vacuous pseudodimension of design decisions, a nontopic in study design.
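To make concrete the earlier point that the cases constitute a series of events and that a sample of the study base represents population-time rather than persons, here is a minimal sketch in the spirit of the case-referent strategy [7]; the symbols are illustrative only, not drawn from the letter itself. Suppose the study base comprises exposed and unexposed population-time $T_1$ and $T_0$, and that the census of cases yields $c_1$ and $c_0$ cases in these two domains. The incidence-density ratio is then

$$\mathrm{IDR} \;=\; \frac{c_1/T_1}{c_0/T_0} \;=\; \frac{c_1/c_0}{T_1/T_0},$$

and a referent series drawn as a fair sample of the study base, with $s_1$ and $s_0$ of its units of population-time classified as exposed and unexposed, supplies $s_1/s_0$ as an estimate of $T_1/T_0$, so that

$$\widehat{\mathrm{IDR}} \;=\; \frac{c_1/c_0}{s_1/s_0}.$$

Nothing in this estimation requires the referent series to be a “group” of “control” persons: it is, conceptually, a sample of the population-time that constitutes the study base.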
REFERENCES

1. Miettinen OS. Striving to deconfound the fundamentals of epidemiologic study design. J Clin Epidemiol 1988; 41: 709-713.
2. Greenland S, Morgenstern H. Classification schemes for epidemiologic research designs. J Clin Epidemiol 1988; 41: 715-716.
3. Feinstein AR. Editorial. Directionality in epidemiologic research. J Clin Epidemiol 1988; 41: 705-707.
4. Miettinen OS. Design options in epidemiologic research. An update. Scand J Work Environ Health 1982; 8 (Suppl. 1): 7-14.
5. Miettinen OS. The “case-control” study: valid selection of subjects. J Chron Dis 1985; 38: 543-549.
6. Miettinen OS. The clinical trial as a paradigm for epidemiologic research. J Clin Epidemiol 1989; 42: 491-496.
7. Miettinen OS. Estimability and estimation in case-referent studies. Am J Epidemiol 1976; 103: 226-235.
8. Miettinen OS. Theoretical Epidemiology: Principles of Occurrence Research in Medicine. New York: John Wiley; 1985.
9. Miettinen OS. Unlearned lessons from clinical trials: a duality of outlooks. J Clin Epidemiol 1989; 42: 499-502.