EDITORIAL
Research: What Are We Looking For?
Timothy Rowe, MB BS, FRCSC, Editor-in-Chief
“The truth is rarely pure, and never simple” (Oscar Wilde)

The field of reproduction (or obstetrics and gynaecology, or women’s health, if you prefer) is ripe with pressing topics for research. Journalists and news editors know that new medical information, presented in a way that provokes anxiety on the part of readers, will reliably sell. But news that has to do with women and reproduction is a potential goldmine, and if it is potentially bad news, so much the better. Medical information that creates a stir is welcomed not just by the journalists and news editors who disseminate it, but also by the individuals who performed the study, their institutional superiors, funding agencies, and—dare I say it—the medical journal in which the information initially appears.
Unfortunately, this collective urge to create a stir may lead to careless science (or careless reporting, but that is another issue). Equally, different approaches to a research question may lead to opposing conclusions, in which case lay observers may be excused for dismissing the research establishment as confused and untrustworthy. Some senior epidemiologists have been just as scathing about the direction of clinical epidemiology over the last couple of decades, accusing their colleagues of being more obsessed with methods and statistical processes than with health. More than two decades ago, one of them described “a continuing concern for methods, and especially the dissection of risk assessment, that would do credit to a Talmudic scholar.”1 Applying sophisticated methods of analysis to sets of data that are skewed or flawed will still give you the wrong answer.

From the perspective of professionals providing health care to women, what we want to know is this: what factors adversely affect the health of our patients? What are the origins of the diseases we see in women? And what options do we have for reducing, reversing, or eliminating these? Until quite recently, many of our responses to these questions were shaped by dogma learned during medical training and left unchanged during subsequent years of practice. Such dogma began as the observations of the most persuasive scientists and clinicians, but it was subsequently revised or bolstered by observational data as epidemiology became a respectable part of medicine. Observational data, however, we are increasingly reminded, can only demonstrate associations; they cannot reliably ascribe cause and effect.
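The gap between association and causation can be made concrete with a small worked example. The sketch below is not from the editorial: it is a minimal simulation with invented probabilities, in which an exposure has, by construction, no causal effect on disease, yet appears strongly associated with it because a confounder drives both.

```python
# Illustrative only: a toy simulation (invented probabilities) in which an
# exposure has NO causal effect on disease, yet a confounder that raises
# both the chance of exposure and the chance of disease produces a strong
# crude association between the two.
import random

random.seed(1)
N = 100_000

exposed_cases = exposed_total = unexposed_cases = unexposed_total = 0
for _ in range(N):
    confounder = random.random() < 0.5                         # hypothetical confounder
    exposure = random.random() < (0.7 if confounder else 0.2)  # confounder favours exposure
    disease = random.random() < (0.3 if confounder else 0.05)  # disease risk depends only
                                                               # on the confounder
    if exposure:
        exposed_total += 1
        exposed_cases += disease
    else:
        unexposed_total += 1
        unexposed_cases += disease

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
print(f"Crude relative risk: {risk_exposed / risk_unexposed:.2f}")
# Prints a relative risk of roughly 2, even though the exposure has no
# causal effect at all: association is not causation.
```

Stratifying on the confounder in this toy example would return a relative risk near 1 in each stratum; judging when real data permit such an unmasking is exactly the kind of reasoning the discussion below concerns.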
Bradford Hill2 described a systematic approach to attempts to infer cause and effect from observational data, listing nine “aspects” of any association that he felt should be considered before concluding that the association actually represents cause and effect. He did not, however, suggest that the list was either complete or foolproof; common sense still had to be applied. Bradford Hill (who, incidentally, had no qualification in either medicine or statistics) is also credited by Doll3 with being the first proponent of the randomized controlled trial (RCT). The first RCT to be reported was a trial of streptomycin treatment for tuberculosis,4 and it lacked many of the features of a contemporary RCT: no placebo was given (since this would have required multiple daily injections over four months), and no ethical approval from an external body was sought or obtained. Nevertheless, the study design was deemed a success, and it was subsequently applied to an ever-widening range of questions.

In critical areas such as cancer, a robust study design like the RCT was a boon; but with widening application came concerns that the results were not infrequently inconclusive or even contradictory, leading to calls to revert to the use of historical controls for comparisons.5 In retrospect, many RCTs performed poorly simply because they were too small. This realization, arriving slowly, led to the development of the meta-analysis and the mega-trial,6 each of which has its strengths and weaknesses. These continuing developments in the approach to answering research questions bolstered the reputation of clinical epidemiology as a methodical basis for clinical practice and later led to the re-branding of clinical epidemiology as evidence-based medicine.7 Clinical questions today are answered primarily by referring to published experience, as evidenced by the continuous publication of Clinical Practice Guidelines in this Journal.
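The arithmetic by which pooling rescues small trials can be illustrated with a brief sketch. It is not drawn from any study cited here: the four small trials and their event counts are invented, and the method shown is standard fixed-effect, inverse-variance pooling of log odds ratios, the core of many meta-analyses.

```python
# Illustrative only: fixed-effect (inverse-variance) pooling of log odds
# ratios from four small, invented trials. Each trial is recorded as
# (events_treated, n_treated, events_control, n_control).
from math import log, sqrt, exp

trials = [(12, 100, 20, 100),   # hypothetical trial 1
          (8,  80,  14, 80),    # hypothetical trial 2
          (15, 120, 22, 120),   # hypothetical trial 3
          (6,  60,  11, 60)]    # hypothetical trial 4

sum_w = sum_w_logor = 0.0
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                  # non-events in each arm
    log_or = log((a * d) / (b * c))        # log odds ratio for this trial
    var = 1 / a + 1 / b + 1 / c + 1 / d    # its approximate variance
    w = 1 / var                            # inverse-variance weight
    sum_w += w
    sum_w_logor += w * log_or

pooled = sum_w_logor / sum_w               # weighted mean of log odds ratios
se = 1 / sqrt(sum_w)                       # standard error of the pooled estimate
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled OR {exp(pooled):.2f} (95% CI {exp(lo):.2f} to {exp(hi):.2f})")
# Individually, each invented trial's confidence interval crosses an odds
# ratio of 1 (inconclusive); pooled, the interval excludes 1, which is the
# sense in which many early RCTs were simply too small.
```

A random-effects model, which allows for heterogeneity between trials, is the usual alternative when the pooled studies differ materially in design or population.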
Although many components of reproductive health care have benefited greatly from research, many have not. As an example, the association between postmenopausal hormone therapy and cardiovascular disease has had at least two reversals of direction in the past five years.8 Some aspects of care, such as the long-term consequences of using oral contraceptives, cannot be assessed in RCTs or even in prospective cohort studies, allowing critics to make repeated public statements questioning their long-term safety. The fetal origins of adult disease have been diligently studied in settings such as the Hertfordshire cohort,9 but many major questions persist.

In this issue of the Journal, Steven Koenen and colleagues draw our attention to the possible long-term effects of antenatal exposure to corticosteroids. As they describe, this is a large-scale global issue, and how it can best be studied remains unclear. Similarly, the systematic review by Aleksey Kazmin and colleagues in this issue acknowledges continuing uncertainty about the safety of statin use during pregnancy. Therein lies the concern with much research in reproductive issues: many study designs, including the RCT, are unsuited to resolving many of the burning questions. Even Bradford Hill acknowledged the importance of using analogy rather than direct intervention in research, citing specifically the effects of rubella infection and of thalidomide during pregnancy as reasons to accept more readily evidence of adverse effects of other viral infections or related drugs.

So we will continue to consider case reports, images, new observations, cohort studies, case-control studies, RCTs, meta-analyses, and mega-trials for publication in this Journal. The truth is indeed never simple.

REFERENCES

1. Stallones RA. To advance epidemiology. Ann Rev Public Health 1980;1:69–82.
2. Hill AB. The environment and disease: association or causation? Proc R Soc Med 1965;58:295–300.
3. Doll R. Sir Austin Bradford Hill and the progress of medical science. BMJ 1992;305:1521–6.
4. Medical Research Council Streptomycin in Tuberculosis Trials Committee. Streptomycin treatment for pulmonary tuberculosis. BMJ 1948;ii:769–82.
5. Gehan EA, Freireich EJ. Non-randomized controls in cancer clinical trials. N Engl J Med 1974;290:198–203.
6. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med 1984;3:409–22.
7. Davey Smith G, Ebrahim S. Epidemiology—is it time to call it a day? Int J Epidemiol 2001;30:1–11.
8. Manson JE, Allison MA, Rossouw JE, Carr JJ, Langer RD, Hsia J, et al. Estrogen therapy and coronary-artery calcification. N Engl J Med 2007;356:2591–602.
9. Syddall HE, Aihie Sayer A, Dennison EM, Martin HJ, Barker DJP, Cooper C. Cohort Profile: the Hertfordshire Cohort Study. Int J Epidemiol 2005;34:1234–42.