
Radiography (2001) 7, 7–10 doi:10.1053/radi.2000.0296, available online at http://www.idealibrary.com on

GUEST EDITORIAL

What is evidence based medicine? An emerging science not fashionable rhetoric

This editorial responds to the editorial published in the February edition of Radiography [1]. Its aim is to allay some of the anxieties expressed about the robustness of the evidence used to underpin the recommendations of the recently established National Institute for Clinical Excellence (NICE). Systematic reviews and meta-analyses will be the methods NICE uses to assimilate valid evidence from the research literature and so provide a scientific basis for rational decision making. Before addressing the concerns these methods have raised, it is important to clarify what evidence based medicine (EBM) is, and its philosophical origins.

The definition of EBM in the editorial was ‘medicine based on the rigorous, formal searching for and collecting of data largely from randomized controlled trials, and their subsequent meta-analysis’ [1]. Although this is the sort of evidence that NICE is likely to use, EBM is not restricted to these methods, so the definition is misleading. EBM is the conscientious, explicit and judicious use of robust evidence in making decisions about the care of individual patients. The practice of EBM means integrating individual clinical expertise with the current best external evidence available from systematic search [2]. EBM thus reflects both the judgement that individual clinicians acquire through clinical experience and practice underpinned by clinically relevant research: external evidence informs clinicians’ expertise, which may then be incorporated into a clinical decision.

The philosophy of EBM extends back to mid-19th century Paris and earlier [2]. However, meta-analysis as a technique is less than 30 years old [3], and the term itself is little more than 20 years old [4]. It is therefore not surprising that meta-analysis in particular, and systematic reviews in general, have been criticized for technical errors [5].
Nevertheless, the systematic and reproducible principles that govern meta-analysis are designed to minimize the biases inherent in the haphazard and subjective approach of the conventional literature review [6]. Similarly, although the randomized controlled trial is now some 50 years old [7], it would still be easy to find published trials that contravene basic principles. However, their quality is continuing to rise, and the proportion published that violate basic principles is continuing to fall. The main threat to meta-analysis and its young parent discipline of systematic reviewing lies in the inexperience of its exponents and the need for methodological development [8]. The goal of both systematic reviews and meta-analysis, however, is to limit bias and improve the reliability and accuracy of recommendations, and thus of clinical decision making [9]. Although EBM may be invoked within the Government’s recent commitment to deliver high quality care, it is neither political rhetoric nor a panacea for the NHS. EBM is an emerging science with real potential to inform NHS policy and clinical decision making.

Goodman’s first worry was that EBM involves the collection of epidemiological data from populations, which is then applied to individual patients. His second worry was that too much emphasis is placed on evidence from randomized controlled trials, the conditions of which do not adequately reflect clinical practice [1]. Both of these issues can be addressed simultaneously. The basic principles of EBM are to ensure that clinical and other healthcare decisions are based on the best evidence from populations and patients, as well as the laboratory.

The first recommendation produced by NICE was that zanamivir (Relenza: Glaxo Wellcome) should not be used to treat influenza. The first trials undertaken to evaluate such a drug should be primarily concerned with its safety and are usually performed on human volunteers. Then explanatory trials, which are small-scale investigations, should investigate the efficacy and safety of the drug by monitoring patients under ideal conditions.
© 2001 The College of Radiographers

Rigorously designed pragmatic trials should then estimate the net value, in terms of costs and benefits, of the new drug in comparison with current treatment for the same condition. This design reflects the variation between patients that occurs in practice and aims to inform choices between treatments. The results of such a trial, based on a representative sample of individual patients in clinical practice, may be generalized to a similar population of patients. Finally, large-scale and long-term surveys are undertaken to monitor morbidity (and mortality) after marketing. This categorization of drug trials serves to emphasize that there are important differences in the conduct of a trial depending on the purpose of the evaluation [10]. The strength of a systematic review is that the results of trials are not accepted uncritically, but are appraised to assess their validity and graded accordingly, so that they can be given appropriate weight, where possible, during analysis [11].

It is not always necessary to use a trial to answer a clinical question. To establish the accuracy of a diagnostic test, a cross-sectional study of patients suspected of having the relevant disease may suffice. To answer a question about prognosis, one can follow up patients identified early in the course of their disease [2]. The advantages of randomization, however, include the removal of the biases inherent in judgemental methods of allocating patients to ‘treatments’ for the purposes of evaluation, and the provision of a probabilistic basis for statistical inference [12].

Goodman’s third anxiety concerned disagreements between experts about different methods of meta-analysis. Unlike a systematic review, which qualitatively synthesizes the evidence from scientific studies, a meta-analysis is a statistical analysis of the results of two or more scientific studies to synthesize their findings [13].
Thus, an advantage of meta-analysis is increased power [9]: when data are pooled from studies, the total sample size increases, together with confidence in the estimated mean effect. However, meta-analyses have been criticized both in principle and in practice. It is important to use valid criticisms to improve their conduct, as the uncritical use of any technique can be misleading. One common problem is the failure to investigate appropriately the sources of heterogeneity between the studies included [14]. Indeed, the process of discerning whether the conditions for meta-analysis have been met is arguably more important than any ensuing meta-analysis [8]. If, following careful exploration, quantitative synthesis is recommended, the choice of statistical model depends on the type of data [14]. A prudent approach is therefore more likely to lead to the correct choice of statistical model and a reliable answer to the clinical question.

Goodman’s fourth worry, that EBM is itself untested, will diminish as EBM develops. For example, two meta-analyses of treatments for acute myocardial infarction have been tested both by funnel plots and by subsequent mega trials. A funnel plot is a graphical display that plots the magnitude of the treatment effect against the number of patients in the different trials, and is used to investigate publication bias. When all the studies have been located, the distribution of the points should resemble a funnel; gaps in the funnel shape indicate that some studies may not have been published or located [15]. The funnel plot for the meta-analysis of streptokinase is symmetrical and confirms its validity, as did two mega trials. In contrast, the funnel plot for the meta-analysis of intravenous magnesium is asymmetrical and provides evidence of publication bias. A subsequent mega trial refuted the meta-analysis finding that this was an effective form of therapy [16]. The traditional method of reviewing literature, with its subjective selection of studies, is even more likely to result in publication bias and erroneous conclusions. Therefore, although meta-analyses can mislead, this example demonstrates how investigating the possibility of missing data can protect against publication bias, a safeguard not available to unsystematic reviewers. There is also the potential to expedite the introduction of new and effective therapy: a meta-analysis could have prevented a 20-year delay in the adoption of intravenous streptokinase [17].

Fifth, Goodman was concerned about basing treatment solely on clinical outcomes, such as physiology, pharmacology and pathology.
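As an aside for readers unfamiliar with the mechanics, the pooling that underlies the meta-analyses discussed above can be sketched in a few lines. This is a minimal, illustrative fixed-effect (inverse-variance) synthesis; the trial figures are simulated and are not taken from the streptokinase or magnesium trials cited in the text.

```python
# Minimal fixed-effect meta-analysis by inverse-variance weighting.
# The five (effect, standard error) pairs are hypothetical log odds
# ratios, invented purely for illustration; 0 means "no effect".
import math

trials = [(-0.40, 0.25), (-0.30, 0.20), (-0.55, 0.30),
          (-0.25, 0.15), (-0.35, 0.10)]

def pooled_effect(trials):
    """Combine trial effects, weighting each by 1/SE^2 so that more
    precise (larger) trials contribute more to the pooled estimate."""
    weights = [1.0 / se ** 2 for _, se in trials]
    est = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # pooled SE is smaller than any single trial's
    return est, se

est, se = pooled_effect(trials)
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"pooled log odds ratio {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The pooled standard error is smaller than that of any individual trial, which is the "increased power" noted above. A funnel plot would simply chart each trial's effect against its size or precision (1/SE); marked asymmetry in that scatter is the visual signal of possible publication bias described in the text.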
The evolution of a conceptual framework for defining and assessing the quality of health care has generated a substantial literature over the last 30 years [18]. In 1966, Donabedian proposed three criteria by which to evaluate medical care: structure (the resources available, e.g. equipment and skills), process (the activities performed as part of patient care) and outcome (the resulting changes in patient health) [19]. He argued that measures of process were more relevant to whether medicine was properly practised, but was rightly suspicious of whether improvements in structure resulted in improvements in outcome, or even in process. Soon afterwards, Cochrane pointed out that little of current medical practice had been established as effective by rigorous scientific methods [7]. Outcome measures therefore began to be included in evaluations to validate the process measures [20].

Patient outcomes are not measured only in natural units, such as points of blood pressure reduced. The effectiveness, or outcome, of a treatment can also be expressed as its value to patients, sometimes referred to as utility. This involves measuring the preferences of individuals or society, for example by calculating the benefit of reducing blood pressure in terms of patient quality of life. Therefore, EBM is not only practised on the evidence of changes in clinical outcome data, but also measures health improvement in quality-adjusted life-years (QALYs) gained, which permits broad comparisons across widely differing programmes [21].

Goodman also went on to say that ‘cost-effectiveness is a mirage: either a treatment is effective, or it’s not’ [1]. However, if the consequences of two treatments for hypertension are identical, then the less costly alternative should be chosen. Doctors’ decisions do have economic consequences. They are under constant pressure to use the scarce resource of time effectively and to weigh up the costs and benefits of the treatments they use. Cost-effectiveness is not an illusion, as clinical decision making is implicitly a crucial element of resource allocation.

Goodman’s last worry was that important criticisms of EBM remain unanswered by the enthusiasts. This editorial has served to diminish concerns that EBM cannot provide robust evidence. The following addresses two further issues raised. First, Goodman complains of the inequity of rationing and the need for pragmatism and utilitarianism [1]. The government has set clear targets for improvement in the following areas of health care: heart disease and stroke, accidents, cancer and mental health [22].
What could be more beneficial than a long-term commitment to reducing the diseases that most burden society? Although uniformity is required to tackle these priorities, the heterogeneity of local circumstances may influence the rationing of healthcare. Indeed, the aim of local authorities, Primary Care Groups and NHS Trusts is to work together in shaping services to reflect the needs of the community [23]. The Commission for Health Improvement will monitor the implementation of the national guidelines to ensure that unacceptable differences in patient care do not occur [24]. Thus, variation in healthcare does not necessarily mean inequality in health.

Goodman’s second complaint was that NICE and other recent changes in the NHS are a political response to media-driven concern about a persisting crisis. This raises the question of who is responsible. Are politicians at fault, with motives other than improving the health of the nation? Are EBM enthusiasts to blame, with their own vested interests? Is it the incompetence of doctors? Is it the media and their scandalous reporting? Alternatively, is the public culpable, voting for the politicians, believing the media, trusting the clinicians, yet remaining ignorant of research findings? We can all legitimately claim to act conscientiously; we simply vary in the weight we attach to our different objectives [25]. Ultimately, decisions have to be made either by politicians at a national level or by clinicians at the patient-practitioner interface. If those decisions are supported by evidence-based research that seeks to incorporate patient preferences, actions will respond to currently available knowledge.

In conclusion, the evaluation of possible improvements in the treatment of disease, and the subsequent process of reviewing the evidence from research, has historically been a haphazard process. Only in recent years has it become widely recognized that properly conducted trials, which follow the principles of scientific experimentation, provide the most reliable basis for evaluating the effectiveness of treatments. Systematic reviews and meta-analyses are needed to combine the evidence from trials, and will similarly become established techniques. The positive effect of EBM is beginning to emerge [26]. It will grow with the increase in undergraduate, postgraduate and continuing education programmes. Future policy recommendations will increasingly be based on the best available evidence.
The challenge to the NHS is to practise healthcare using this evidence, to ensure the efficient use of inevitably scarce resources. In particular, the profession of radiography should invoke the principles of EBM. The use of techniques such as randomized controlled trials and systematic reviews should be encouraged if radiographic practice is to be underpinned by evidence-based radiography.

Acknowledgement

The author would like to acknowledge the invaluable advice of Ian Russell, Founding Professor of Health Sciences at the University of York.


Stephen Brealey, BSc (Diagnostic Radiography)
Research Fellow in Health Sciences, Department of Health Sciences & Clinical Evaluation, University of York, York YO1 5DD, U.K.

References

1. Goodman NW. NICE and the new command structure: some thoughts on evidence, competence and authority. Radiography 2000; 6: 3–7.
2. Sackett DL, Rosenberg WMC, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996; 312: 71–2.
3. Light RJ, Smith PV. Accumulating evidence: procedures for resolving contradictions among different research studies. Harvard Educ Rev 1971; 41: 429–71.
4. Glass GV. Primary, secondary and meta-analysis of research. Educ Res 1976; 5: 3–8.
5. Eysenck HJ. Problems with meta-analysis. In: Systematic Reviews. London: British Medical Journal, 1995: 64–74.
6. Knipschild P. Some examples of systematic reviews. In: Systematic Reviews. London: British Medical Journal, 1995: 9–16.
7. Cochrane AL. Effectiveness and Efficiency: Random Reflections on Health Services. London: British Medical Journal for Nuffield Provincial Hospitals Trust, 1972 (reprinted 1989).
8. Russell I, Di Blasi Z, Lambert M, Russell D. Systematic reviews and meta-analyses: opportunities and threats. In: Evidence-based Fertility Treatment. London: RCOG Press, 1998: 15–64.
9. Mulrow CD. Rationale for systematic reviews. In: Systematic Reviews. London: British Medical Journal, 1995: 1–8.
10. Pocock SJ. Introduction: the rationale of clinical trials. In: Clinical Trials: A Practical Approach. Chichester: Wiley, 1983: 1–13.
11. Brealey S, Glenny AM. A framework for radiographers planning to undertake a systematic review. Radiography 1999; 5: 131–46.
12. Russell IT. The evaluation of computerised tomography: a review of research methods. In: Economic and Medical Evaluation of Health Care Technologies. Berlin: Springer-Verlag, 1983: 298–316.


13. Greenhalgh T. Papers that summarise other papers (systematic reviews and meta-analyses). In: How to Read a Paper: The Basics of Evidence Based Medicine. London: British Medical Journal, 1997.
14. Thompson SG. Why sources of heterogeneity in meta-analysis should be investigated. In: Systematic Reviews. London: British Medical Journal, 1995: 48–53.
15. NHS Centre for Reviews and Dissemination. Undertaking Systematic Reviews of Research on Effectiveness. University of York: NHS CRD, 1996.
16. Egger M, Davey Smith G. Misleading meta-analysis. BMJ 1995; 310: 752–4.
17. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 1992; 327: 248–54.
18. Irvine D, Donaldson L. Quality and standards in health care. Proceedings of the Royal Society of Edinburgh 1993; 101B: 1–30.
19. Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 1966; 44: 166–206.
20. Hulka BS, Romm FJ, Parkerson GR, Russell IT, Clapp NE, Johnson FS. Peer review in ambulatory care: use of implicit criteria and explicit judgements. Medical Care 1979; 17 (S): 1–73.
21. Drummond MF, O’Brien BJ, Stoddart GL, Torrance GW. Cost-utility analysis. In: Methods for the Economic Evaluation of Health Care Programmes, 2nd edn. Oxford: Oxford University Press, 1987: 139–204.
22. Department of Health. The New NHS: Modern, Dependable. London: Department of Health, 1998.
23. Secretary of State for Health. Saving Lives: Our Healthier Nation. London: Department of Health, 1999.
24. Bayley G. The new NHS: current health policy and practice. In: Radicalism and Reality in the National Health Service: Fifty Years and More. York: University of York, 1998.
25. Williams A. Medicine, economics, ethics and the NHS: a clash of cultures? In: Radicalism and Reality in the National Health Service: Fifty Years and More. York: University of York, 1998.
26. Shin JH, Haynes RB, Johnston ME. Effect of problem-based, self-directed undergraduate education on life-long learning. Can Med Assoc J 1993; 148: 969–76.