CHAPTER 8

Evidence-Based Practice of Neurosurgery: Implications for Quality and Safety

Stephen J. Haines
Department of Neurosurgery, University of Minnesota School of Medicine, Minneapolis, MN, United States
Corresponding author e-mail: [email protected]
OUTLINE

Introduction
Definitions
Understanding Variation: Noise and Bias
The Role of Process Improvement, Quality, and Safety Projects in Evidence and Evidence-Based Practice
Decisions, Decisions
Problems in an Evidence-Based Practice
Summary
Readings on Evidence-Based Practice
References
The deepest sin against the human mind is to believe things without evidence. Thomas Huxley, English Biologist (1825–95)
INTRODUCTION

The fundamental assumption of evidence-based practice is that basing practice decisions on the best available evidence maximizes the likelihood of correct diagnosis, effective treatment, and minimal complications. Evidence-based practice should therefore result in high-quality, safe neurosurgical practice, and it is sensible to review the evidence-based practice of neurosurgery as a way to support efforts to improve quality and safety. In this brief chapter, we review the principles of evidence-based practice in neurosurgery and the role that formal quality and safety efforts play in continuously improving it.
DEFINITIONS

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. Sherlock Holmes, A Scandal in Bohemia
The definition of evidence-based neurosurgery with which we have worked since 2006¹ is “a paradigm of practicing neurosurgery in which: (a) the best available evidence is consistently consulted first to establish principles of diagnosis and treatment that are, (b) artfully applied in light of the neurosurgeons’ training and experience informed by the patient’s individual circumstances and preferences, (c) to regularly produce the best possible health outcomes.” This definition encapsulates the idea that the evidence-based practitioner begins with generalizable knowledge about the condition to be treated, interprets that knowledge in light of the individual circumstances of the patient being treated, and is consistent in the application of these principles. It also recognizes that generalizable evidence cannot answer every question or address every individual circumstance the practitioner may confront, and it therefore allows for the artful application of principles to individual circumstances.
UNDERSTANDING VARIATION: NOISE AND BIAS

“On est aisément dupé par ce qu’on aime.” (One is easily fooled by that which one loves.) Molière, Le Tartuffe, IV.3
This definition raises the question, “What is evidence?” Evidence comes in many forms. A single observation can make the difference between knowing that something can happen and believing that it is impossible. A series of observations in apparently similar circumstances can lead to a tentative conclusion (or hypothesis) about a predictable phenomenon.

Moving beyond simple sets of observations, it was not until the 1800s that physicians began to analyze collected series of observations in more rigorous ways. This began with counting (e.g., numbers of patients at risk and numbers of patients actually becoming ill) and tabulating (number of patients at risk and number of patients actually becoming ill, cross-tabulated with gender), which allowed a more sophisticated analysis of collected observations. By the early 1900s, it became clear that such collections of observations were subject to some natural forms of variation. When an individual physician published his tabulated results, it was with the hope that other physicians could use those results to predict outcomes. It was recognized then that the physician who collected the results from his practice considered his patients a “sample” of the larger population of people susceptible to the disease. The study of sampling, and of the variation in tabulated results that could arise from observing only a subset of the patients to whom the results might be applied, led to the development of biostatistics. Statistical methods were developed to allow analysis and understanding of the natural variation, or “noise,” in the data. This was a major intellectual advance in understanding the observational evidence of medicine: it allowed quantification of the accuracy of estimates obtained from samples of larger populations, and estimation of the likelihood that differences found between sample populations were simply the result of chance variation (“noise”) rather than real biological phenomena.

The next major intellectual advance in understanding collected clinical evidence was the realization that the methods of data collection themselves could lead to important distortions in results, or “bias.” For example, collecting data from patients who came to see a famous consultant physician in a large city might produce results very different from those of patients with similar complaints who came to see a general practitioner in a small town. The simple recognition of this possibility led to more careful interpretation of results, and methods for controlling bias then began to be developed. The discipline of experimental design arose from the need to assure unbiased collection of data. By the middle of the 20th century, additional methods, such as blinding of observers to the applied experimental techniques and randomization of subjects, were developed to minimize the likelihood that bias would lead to incorrect results. While the fields of clinical epidemiology, clinical biostatistics, and trial design continue to evolve, these techniques for controlling noise and bias in the collection, analysis, and interpretation of clinical data provide much greater assurance that results accurately reflect reality.
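To make the notion of sampling “noise” concrete, the short sketch below simulates several hypothetical case series drawn from the same underlying population and reports a 95% confidence interval for each observed complication rate. Every number here (the “true” rate, the series size, the number of series) is an invented assumption for illustration only.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_RATE = 0.10   # hypothetical "true" complication rate in the population
N_PATIENTS = 50    # size of each hypothetical case series
N_SERIES = 5       # number of surgeons sampling the same population

for i in range(N_SERIES):
    # each series observes the same underlying process, yet counts differ by chance
    complications = sum(random.random() < TRUE_RATE for _ in range(N_PATIENTS))
    p_hat = complications / N_PATIENTS
    # normal-approximation 95% confidence interval for the observed proportion
    se = (p_hat * (1 - p_hat) / N_PATIENTS) ** 0.5
    lo, hi = max(0.0, p_hat - 1.96 * se), min(1.0, p_hat + 1.96 * se)
    print(f"series {i + 1}: observed rate {p_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Although every series samples the same population, the observed rates differ; the confidence intervals quantify how much of that spread chance alone can explain.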
THE ROLE OF PROCESS IMPROVEMENT, QUALITY, AND SAFETY PROJECTS IN EVIDENCE AND EVIDENCE-BASED PRACTICE

Faith must have adequate evidence, else it is mere superstition. A. Alexander Hodge, Presbyterian Minister (1823–86)
The paradigm of evidence-based neurosurgical practice described above should produce a practice of neurosurgery that continuously improves its outcomes. In the early 21st century, we have recognized that this paradigm lacks a very important component: a feedback loop that measures the effectiveness of the paradigm of practice and leads to its continuous improvement. Formal process improvement, the fundamental discipline of quality and safety practice, provides that feedback loop.

One of the consistent criticisms of evidence-based practice is that it is theoretical. Since much of the theory of evidence-based practice developed in an academic setting, the natural academic response to this criticism was to develop another form of research to provide that feedback loop; comparative effectiveness research was the result. As a method for providing feedback to evidence-based practice paradigms, however, it has the disadvantage of being slow because of its scientific rigor and institutional oversight. Process improvement, by contrast, is intrinsically a local effort: it does not aim to produce generalizable evidence, is therefore less susceptible to the issues of bias that would affect generalizability, and requires less institutional oversight. It thus provides a rapid-feedback alternative for assuring that the local implementation of evidence-based principles of practice is producing the required results, and it provides an engine for continuous improvement of that local implementation.

In our hierarchy of evidence, rigorously designed and conducted process improvement projects are essentially carefully designed and analyzed case series, with minimal bias control and no blinding or randomization required. Such projects can generally be designed and implemented with much greater speed than formal clinical research projects and give relatively rapid feedback. Since such projects frequently involve the simultaneous implementation of a relatively large number of related interventions, it is generally not possible to determine which individual intervention is responsible for the desired outcome. The continuous improvement process, however, allows the bundle of interventions to be adjusted continuously so that outcomes continuously improve. Formal process improvement is therefore achieving an appropriate role in the evidence-based practice of neurosurgery by providing feedback and promoting continuous improvement of the combination of evidence-based principles and their wise application to neurosurgical patients. The discipline of formal process improvement is described elsewhere in this text; the remainder of this chapter briefly reviews fundamental principles of evidence-based practice in neurosurgery.
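As one minimal illustration of the kind of rapid local feedback such projects can generate, the sketch below computes p-chart control limits, a common statistical process control tool not specific to this chapter, for hypothetical monthly complication proportions. The months, case counts, and three-sigma rule are all illustrative assumptions, not a prescription from this text.

```python
# Hypothetical monthly complication data for one service; all counts invented.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
cases = [40, 35, 42, 38, 41, 37]      # procedures performed each month
complications = [5, 3, 6, 4, 2, 1]    # complications observed each month

# center line: the overall complication proportion across all months
p_bar = sum(complications) / sum(cases)

for month, n, c in zip(months, cases, complications):
    p = c / n
    # three-sigma control limits for a proportion observed in n cases (p-chart)
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    flag = " <-- investigate" if (p > ucl or p < lcl) else ""
    print(f"{month}: rate {p:.2f}, limits ({lcl:.2f}, {ucl:.2f}){flag}")
```

A month outside the limits signals special-cause variation worth investigating; months within the limits are consistent with ordinary noise, which is exactly the noise-versus-signal distinction discussed above applied locally.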
DECISIONS, DECISIONS

A wise man…proportions his belief to the evidence. David Hume (1748), An Enquiry Concerning Human Understanding, Section 10
We can classify the decisions that a neurosurgeon needs to make in the process of assessing a patient, making a diagnosis, deciding on an intervention, and carrying it out into several types of questions. This classification matters because the type of question determines the type of study that needs to be done to answer it. Knowing what kind of evidence to look for to answer a specific question in clinical practice speeds the process of sorting through the mountains of available evidence and makes it more likely that the neurosurgeon will find an appropriate answer.

There are questions of assessment. These generally involve determining the presence or absence of certain facts about the patient that are important to the decision-making process; physical findings are an excellent example. An assessment should be accurate and reliable. Accuracy refers to how well an instrument measures the phenomenon it is intended to measure: one uses a thermometer, not a ruler, to measure body temperature. Determining accuracy may simply be a matter of choosing the right measuring instrument, but it may be necessary to test the instrument under various circumstances to confirm that it is actually measuring the desired phenomenon. Reliability refers to the ability of the measuring instrument to reproduce its findings. Intraobserver reliability measures how reproducible a measurement is when the same phenomenon is measured with the same instrument by the same person. Interobserver reliability measures how likely it is that different people using the same instrument will obtain the same measurement. A number of statistics have been developed to measure intra- and interobserver reliability. For the purposes of this chapter, it is sufficient to know that the accuracy and reliability of measurement instruments, be they physical instruments, questionnaires, or measurement scales, should be known if reliable measurements are to be obtained.

There are questions of diagnosis. Here a diagnostic method is applied, and the outcome is an estimate of the likelihood that the patient has the diagnosis in question. The proper methods for answering this question involve analyzing the accuracy of diagnostic estimates under a variety of circumstances in a variety of patients. This produces measures such as sensitivity, specificity, likelihood ratios, and receiver operating characteristic curves. The evidence-based practitioner will know the characteristics of the diagnostic tests that he or she uses in practice.

Understanding the natural history or prognosis of a patient with a known diagnosis is essential to assessing the impact of interventions intended to improve outcome. Studies of prognosis should identify the stage of disease at the time patients enter the study, use valid measurements of outcome, allow sufficient time for the observed outcome to occur, and include a sufficient number of patients followed for a sufficient period of time to provide accurate estimates of outcome.

The choice of intervention depends on an accurate assessment of the patient, an accurate diagnosis, and a well-characterized prognosis. Studies of interventions directly compare patients treated with the intervention under study to patients who receive the best available treatment of proven value at the time of the evaluation. Where there is no treatment of proven value, the comparison may be a placebo. Where it is proposed that the intervention is superior in some way to the current therapy, the comparison group should be patients treated with the best-proven therapy. Intervention trials are particularly susceptible to bias, as there is always a desire on the part of the investigators to identify improvements in intervention, and it is not unusual for the investigators who developed the intervention to be involved in its testing. Therefore, intervention trials are subject to some of the most rigorous oversight and safeguards in clinical research. The randomized clinical trial attempts to realize all of the strongest known safeguards against bias, and randomization adds the unique protection of equalizing the likelihood that unknown factors influencing outcome are present in the treatment and control groups. Otherwise, practicalities often lead to the use of less secure designs for intervention evaluation. Understanding the quality of the evidence produced by these various designs is an important part of interpreting evidence and incorporating it into evidence-based practice.
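As a concrete illustration of the measures named above, the sketch below computes Cohen’s kappa for interobserver reliability and then sensitivity, specificity, and likelihood ratios from a 2×2 diagnostic table. Every count is invented for the example and carries no clinical meaning.

```python
# Hypothetical worked example: two readers grade 100 scans as abnormal/normal,
# then a hypothetical test is compared against a reference-standard diagnosis.

# --- Interobserver reliability: Cohen's kappa (all counts invented) ---
both_abnormal, both_normal = 40, 45   # scans on which the two readers agree
r1_only, r2_only = 9, 6               # scans on which the readers disagree
n = both_abnormal + both_normal + r1_only + r2_only

observed_agreement = (both_abnormal + both_normal) / n
# chance agreement expected from each reader's marginal "abnormal" rate
p1_abn = (both_abnormal + r1_only) / n
p2_abn = (both_abnormal + r2_only) / n
chance_agreement = p1_abn * p2_abn + (1 - p1_abn) * (1 - p2_abn)
kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.70 with these invented counts

# --- Diagnostic accuracy from a hypothetical 2x2 table ---
tp, fn = 45, 5    # diseased patients: test positive / test negative
fp, tn = 10, 40   # disease-free patients: test positive / test negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")
```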
Questions of safety and harm also arise. For example, we know that full anticoagulation with warfarin reduces the risk of pulmonary emboli in patients who are at bed rest. We also know that patients with recent central nervous system surgery are at increased risk of postoperative hemorrhage when fully anticoagulated. Fortunately, both events are relatively rare. The paradigms for making an unbiased assessment of the level of risk, and of the relative risk, of both the prevention and the occurrence of harmful events are complex and require study designs different from those used for assessment, diagnosis, prognosis, and therapeutic intervention. Using this information as part of the artful application of evidence is essential to good evidence-based practice.

Finally, questions of economic value are increasingly raised in the practice of neurosurgery. The choice of therapeutic interventions may be influenced by their costs. It is increasingly common to compare two interventions not looking for one to be superior to the other, but simply to assess their equivalence; a less expensive intervention with equivalent outcomes may well be chosen over the more expensive one. Definitions of “equivalence” and the methods for assessing it are complex and involve many important design decisions in addition to the technical issues of data collection and analysis. Economic analysis requires an expertise that is not commonly taught in medical school or residency training, and the evidence-based physician will almost certainly need economic guidance to help make these decisions.

The point of identifying these different types of questions is to emphasize that they require different rigorous methods of design, data collection, and data analysis. The evidence-based practitioner needs enough knowledge of these different techniques to identify studies that use appropriate methods to answer the types of questions for which they need information.
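One standard way to weigh a benefit against a harm of the kind just described is with absolute and relative risk measures. The sketch below works through relative risk, number needed to treat, and number needed to harm for a hypothetical prophylaxis decision; all counts are fabricated purely to show the arithmetic and do not describe warfarin or any real study.

```python
# Invented counts weighing a prophylactic treatment's benefit against its harm.
treated_n, control_n = 200, 200
pe_treated, pe_control = 4, 12         # pulmonary emboli (benefit: fewer on treatment)
bleed_treated, bleed_control = 6, 2    # postoperative hemorrhages (harm: more on treatment)

def risk(events: int, n: int) -> float:
    """Proportion of patients in a group who experience the event."""
    return events / n

rr_benefit = risk(pe_treated, treated_n) / risk(pe_control, control_n)
arr = risk(pe_control, control_n) - risk(pe_treated, treated_n)   # absolute risk reduction
nnt = 1 / arr                                                     # number needed to treat

ari = risk(bleed_treated, treated_n) - risk(bleed_control, control_n)  # absolute risk increase
nnh = 1 / ari                                                          # number needed to harm

print(f"relative risk of PE on treatment: {rr_benefit:.2f}")
print(f"NNT to prevent one PE: {nnt:.0f}; NNH for one hemorrhage: {nnh:.0f}")
```

With these invented counts, treating 25 patients prevents one embolus while treating 50 causes one hemorrhage, the kind of trade-off the “artful application of evidence” must resolve for the individual patient.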
PROBLEMS IN AN EVIDENCE-BASED PRACTICE

Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence. John Adams, American President (1797–1801)
Perhaps the biggest problem facing the evidence-based practitioner is the overwhelming volume of evidence that is available. The evidence is of variable quality, found in a wide variety of resources, and it is simply impossible for any single individual to access, analyze, and synthesize all of it. Fortunately, there are a number of ways in which evidence can be summarized to make this process easier for the practicing neurosurgeon and, indeed, make evidence-based practice feasible.
The simplest form of evidence summary is the critically appraised topic. In this situation, the practitioner has a clear, very specific question to answer, identifies the appropriate type of study, locates several of the most recently published studies of that type, analyzes the quality of evidence they provide and the outcomes they claim, and makes a wise professional summary. So long as the topic is well circumscribed and evidence is in fact available, this can fairly rapidly resolve a tightly specified issue.

For broader topics, formal systematic review, often employing meta-analysis as its statistical technique, is preferred. This type of review follows strict rules for inclusion and exclusion of studies, and rigorous statistical techniques are used to pool the data from those studies; the conclusions derived therefore carry a relatively high level of clinical certainty if the input data are of high quality. The conduct of a systematic review is time and labor intensive, and the typical evidence-based neurosurgical practitioner hopes to find that such a review has already been accomplished and is available in the published literature.

Another type of evidence synthesis is the evidence-based practice guideline. Such guidelines typically take on very broad topics, such as “the treatment of low-grade glioma” or “management of cervical spine injury.” These guidelines, when truly evidence-based, have generally been produced through a rigorous process, often supervised by a professional society that vouches for the validity of that process. Ideally, they are subjected to regular updating so that the user of the guideline can have reasonable confidence that the results are up to date. The United States Government maintains a list of clinical practice guidelines at www.guidelines.gov, and a number of European governments also maintain guideline registries. For purposes of efficiency, the evidence-based practitioner is advised to start with evidence repositories such as guideline collections and systematic reviews to see whether the question being confronted has already been analyzed in detail.

The second major problem facing evidence-based practice is that all evidence has limitations. For certain rare conditions, and in many specific clinical instances, the precise situation being confronted will not have been studied; in other cases it will have been studied, but there are significant quality issues with the available evidence. This is more common in neurosurgical practice than in specialties that deal with more common diseases. In these circumstances, the neurosurgeon must apply good judgment based on experience, training, logical application of principles of pathophysiology, and careful inferences from evidence of modest quality to make the best decision he or she can for the patient. In some circumstances, registries or other large data repositories, such as those maintained by the federal government or insurance companies, can provide some guidance. The magnitude of the risks imposed by unintended biases in the collection of data in these registries has not yet been clarified, so they must be used with some caution; but when they provide the best available evidence, it is the responsibility of the evidence-based practitioner to use them.
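To show what the pooling step of a meta-analysis involves in its simplest form, the sketch below applies fixed-effect inverse-variance weighting to three invented studies, each summarized by a log relative risk and its standard error. The study names, effect sizes, and standard errors are assumptions for illustration; a real systematic review involves far more than this arithmetic (heterogeneity assessment, random-effects models, bias appraisal).

```python
import math

# Three invented studies, each summarized by a log relative risk and its
# standard error; none of these numbers comes from the literature.
studies = [
    ("Study A", math.log(0.80), 0.20),
    ("Study B", math.log(0.70), 0.30),
    ("Study C", math.log(0.95), 0.15),
]

# fixed-effect model: weight each study by the inverse of its variance
weights = [1 / se ** 2 for _, _, se in studies]
pooled_log_rr = sum(w * lrr for (_, lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR {math.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The inverse-variance weighting is why large, precise studies dominate a pooled estimate, and why the conclusions of a systematic review can be no better than the quality of its input data.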
SUMMARY

Don’t accept your dog’s admiration as conclusive evidence that you are wonderful. Ann Landers (1918–2002)
The necessity of making judgments and decisions based on incomplete and inadequate evidence in neurosurgical practice cannot be denied. The fact that our patients often benefit under these circumstances is a wonderful thing, but it should not be interpreted as evidence that we know more than we do. The great contribution that formal process improvement exercises can make to the quality and safety of neurosurgery is to provide rapid feedback on our own local practice outcomes, allowing us to confront problems, make changes, and document whether or not they lead to improvements in outcome, quality, and safety for our patients. The obligations to use the best available evidence in our practice and to continuously monitor and improve that practice are equal and complementary. Neurosurgical practice is incomplete if either is neglected; we are fortunate that robust methodologies exist for both disciplines and can be readily applied to our practices.
READINGS ON EVIDENCE-BASED PRACTICE

The following list provides some general readings in the area of evidence-based practice. References 1–5 provide overviews of evidence-based neurosurgery.1–5 References 6–8 discuss the impact of evidence on practice.6–8 References 9 and 10 discuss educating neurosurgeons in evidence-based practice.9,10
References

1. Haines SJ, Walters BC, eds. Evidence-Based Neurosurgery: An Introduction. New York: Thieme Medical Publishing; 2006.
2. Haines SJ. Evidence-based neurosurgery. Neurosurgery. 2003;52:36–47.
3. Esene IN, El-Shehaby AM, Baeesa SS. Essentials of research methods in neurosurgery and allied sciences for research, appraisal and application of scientific information to patient care (part I). Neurosciences. 2016;21:97–107.
4. Esene IN, Baeesa SS, Ammar A. Evidence-based neurosurgery: basic concepts for the appraisal and application of scientific information to patient care (part II). Neurosciences. 2016;21:197–206.
5. Bandopadhayay P, Goldschlager T, Rosenfeld JV. The role of evidence-based medicine in neurosurgery. J Clin Neurosci. 2008;15:373–378.
6. Farah JO, Varma TRK. Evidence-based neurosurgery—is it possible? Br J Neurosurg. 2008;22:461–462.
7. Honeybul S, Ho KM. The influence of clinical evidence on surgical practice. J Eval Clin Pract. 2012;19:825–828.
8. Weber C, Jakola AS, Gulati S, Nygaard OP, Solheim O. Evidence-based clinical management and utilization of new technology in European neurosurgery. Acta Neurochir. 2013;155:747–754.
9. Burneo JG, Jenkins ME, The UWO Evidence-Based Neurology Group. Teaching evidence-based clinical practice to neurology and neurosurgery residents. Clin Neurol Neurosurg. 2007;109:418–421.
10. Haines SJ, Nicholas JS. Teaching evidence-based medicine to surgical subspecialty residents. J Am Coll Surg. 2003;197:285–289.