Clinical research in trauma and orthopaedic surgery—Call for action




Injury, Int. J. Care Injured (2008) 39, 627—630

www.elsevier.com/locate/injury

EDITORIAL

In 1964, the first hand-held, whole-body, multi-species diagnostic tool was presented to an amazed world. It is unknown how many years of development and preclinical testing were required before the device was approved for clinical use. Its sensitivity and specificity were unclear, and its effectiveness was apparently never proven in a clinical experiment. No wonder: Dr McCoy's marvellous 'medi-scanner', used throughout the episodes of Star Trek and its successors, is a fictitious apparatus. It is hard to imagine what would become of the thousands of Medline citations and reports from the FDA and EUDRA if a 'one-fits-all' device could replace all diagnostic tests currently available. However, the biomedical sciences (together with information technology) have outstripped any forecasts made in the early 1960s; keeping up with the rapidity of developments is difficult, even for experts in a particular field. Organ transplantation, minimally invasive surgery and navigation, bedside ultrasound, computed tomography, magnetic resonance imaging, decoding of the human genome and thousands of other milestones have significantly changed our understanding of surgery both as an art and as a science.4–7,12

Continuing medical progress is socially desirable, but we are at the edge of recognising the limits of spiralling medical progress. We cannot do more than biology allows, even if the foreshadowing of nanotechnology, micro-machines and molecular engineering suggests otherwise. In 2005, we were painfully reminded that enthusiasm, pressure to succeed and criminal energy form a dangerous cocktail, when Woo Suk Hwang admitted his fraudulent stem-cell experiments. He dragged down faith in an entire research area, and it may take years to repair the damage.9 The question remains: should we do it, if we can? And if we can, can we afford it? Despite obvious

parallels with other domains of the natural sciences, the eyes of the biomedical researcher must turn not only to the objective results of a laboratory or clinical experiment, but also to their potential consequences for society and the individual patient. Is it likely that this diagnostic test or treatment will significantly alter clinical practice? Is it worth its toxicity, invasiveness, risks and costs? Economic considerations may feel overwhelming and oppressive, but they are an integral part of our daily practice. There is still a tendency among healthcare professionals and the public to turn a blind eye to financial constraints in health care, but denying reality makes things worse, not better. However, inputs and outputs cannot be expressed solely in monetary units. All equations must account for intangible costs and benefits, such as discomfort, adverse events and improved quality of life. Porzsolt and Kaplan coined the term 'clinical economics' to cope with this delicate trade-off.11

Defining the value of health technology requires profound knowledge of research methodology.3 This knowledge remains fragmentary among clinicians of all disciplines and is still hazardously poor in the surgical community. Nobody should be blamed for this deficit. However, if we as surgeons do not immediately set out to remedy it, we will lose credibility and professionalism, and also our capacity to influence political and socio-economic decision-making on the future of health care. How is it that a skilled surgeon who can replace a hip joint, fix a multi-segmental spine fracture or replant a hand cannot explain a relative risk or the meaning of a p-value? We can interpret a radiograph standing on our heads, so why do we fail so often in designing, conducting and interpreting the results of a clinical trial? We may amuse ourselves about this obvious deficit. We may

0020–1383/$ — see front matter © 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.injury.2008.01.052

even feel that innumeracy is a distinct, immutable feature of orthopaedic and trauma surgeons; but, honestly, is it not a shame?

It is an interesting but alarming phenomenon that the basic idea of evidence-based medicine has been so poorly understood in the surgical community and has raised so many objections.2 A recent debate published in the Annals of Thoracic Surgery illustrates the biased view of clinical research in surgery, mainly triggered by ungrounded aversions to evidence-based medicine.10 Both practitioners and researchers monotonously repeat the mantra that surgical procedures are different from pharmacological interventions, that we cannot randomise because of this or that, that informed consent is difficult to obtain under pressure of time, during night shifts or from critically ill patients, that it is sometimes unethical to randomise, and that we cannot perform double-blind trials (thank God we are still allowed to see what we are doing!). Even if all these statements are real and indisputable, do they prevent us from performing valid and meaningful clinical studies?

Research is the motor of medical advancement. We are ethically committed to taking an active part in this progress, as contributors or as critical consumers and distributors. Offering the best medical care to everybody who needs it, regardless of gender, ethnicity and religion, is the duty of doctors around the globe, anchored in the ancient Hippocratic oath and the modern Declaration of Geneva. We are also obliged to do no harm. Acquiring the competence to identify the best, and to separate the good from the questionable or from quackery, is not an add-on or a luxury: it is as important as placing a secure suture and knot. Surgeons will be best prepared for the forthcoming changes and challenges in health care by systematic training in the key issues of research. It may be helpful to start with the simple cascade of efficacy, effectiveness and efficiency.
Efficacy describes how a certain intervention works under controlled trial conditions and is closely related to causation: can the observed effect reliably be assigned to the intervention of interest, or was it merely produced by bias and confounding? For therapeutic methods, the randomised controlled trial (RCT) is a common design for determining efficacy, followed by cohort studies and case series. Effectiveness describes whether interventions with proven efficacy still work under the conditions of daily practice, in a largely unselected patient population. Effectiveness studies need more pragmatic designs than the classic RCT, with broader inclusion criteria, fewer exclusion criteria and pragmatic follow-up schemes. The largest proportion of external

evidence available for decision making in clinical practice is derived from efficacy and effectiveness studies, although the latter are still rare. Finally, efficiency describes the impact of an intervention with proven efficacy and effectiveness on the use and distribution of resources and on the health state of a population.

The lack of RCTs in trauma and orthopaedic surgery is not the urgent problem. Those who obsessively call for more and more RCTs to compete with other specialties may find it sobering that the difference in the prevalence of RCTs between branches of medicine and surgery is much smaller than generally supposed. Gnanalingham et al. identified 899 RCTs among 11132 medical and 261 among 4959 surgical investigations.8 This translates to an absolute difference of 2.8% (95% confidence interval 2.0–3.6%), or a relative risk of 0.97 (95% confidence interval 0.96–0.98). Obviously, flooding the world with RCTs is neither a universal remedy to improve the quality of research, nor will it automatically improve clinical practice.

The RCT is probably the best method for determining the relative efficacy of a certain intervention compared with another. Randomisation is the only way to balance known and unknown confounders equally between treatment arms. If we are interested in comparing two or more therapeutic interventions, nothing is easier than analysing and interpreting the results of a diligently designed and conducted RCT: with a defined degree of uncertainty, the observed treatment effect is caused by the interventions of interest, not by confounding. That is it: no magic and no need for hysteria. If we can randomise, if there is therapeutic uncertainty and if patients are willing to participate in a clinical experiment, let us do it and take advantage of the play of chance. If we cannot randomise, or if there are other obstacles to an RCT, we can consider an alternative design option.
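The arithmetic behind these figures is easy to reproduce. A short sketch follows; note that the editorial does not state which comparison yields the quoted relative risk of 0.97, so the ratio of non-RCT proportions computed here is our assumption, chosen because it is consistent with that figure.

```python
# Reproducing the Gnanalingham et al. prevalence arithmetic.
rct_med, total_med = 899, 11132    # RCTs among medical investigations
rct_surg, total_surg = 261, 4959   # RCTs among surgical investigations

p_med = rct_med / total_med        # proportion of RCTs, medical journals
p_surg = rct_surg / total_surg     # proportion of RCTs, surgical journals

abs_diff = p_med - p_surg          # absolute difference in RCT prevalence

# Assumption: the quoted 0.97 is read here as the relative risk of a
# study NOT being an RCT, medical versus surgical.
rr_non_rct = (1 - p_med) / (1 - p_surg)

print(f"absolute difference: {abs_diff:.1%}")        # about 2.8%
print(f"relative risk (non-RCT): {rr_non_rct:.2f}")  # about 0.97
```

The point survives the arithmetic: roughly 8% of medical and 5% of surgical investigations were RCTs, a far smaller gap than the folklore suggests.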
Insisting on the term 'randomisation' is counterproductive if there is no understanding of how randomisation works. In a systematic review of navigated total knee replacement, we noted that 11 of 18 trials indexed as RCTs did not use proper methods of randomisation.1 Assigning patients alternately, or according to their admission date or the availability of equipment, is not randomisation. Readers and peer reviewers are strongly advised not to take a trial's designation as an RCT at face value until clear statements are made about concealed, computer-generated random codes.

Most problems arising in daily practice go far beyond the comparative performance of therapeutic methods, and the RCT is irrelevant for investigating the accuracy of a diagnostic test or the prognostic

implication of a laboratory value. We are often interested in the outcomes achieved with a certain procedure without having any alternative to compare it with. Again, improving the scientific basis of trauma and orthopaedic surgery is not simply a matter of increasing the number of RCTs. The supreme problem is the widespread ignorance of the delicate architecture and distinct chronology of a clinical research project. Without respecting these issues, even the most laudable study is doomed to fail. (At this point, we do not address financing.) There is no cookbook that guarantees the success of a scientific project. However, adherence to the following steps will at least increase the likelihood of generating meaningful results.

1. Transform your clinical problem or idea into a precise, simple and thus answerable primary research question. The fate of your study depends directly on how well this question is defined. Remember 'Deep Thought', the supercomputer in Douglas Adams' novel The Hitchhiker's Guide to the Galaxy, built to come up with the ultimate answer to life, the universe and everything. After seven and a half million years of calculating, the answer was '42'. This silly response illustrates the danger of posing vague questions: you may waste your resources.

2. State only a few secondary hypotheses. There is an understandable tendency to address all thinkable problems in one study, but this will lead you down the wrong track. With a type I error of 5%, 1 in 20 experiments (or 1 in 20 hypotheses) may yield a false-positive result. The more hypotheses and the more subgroups you attempt to study, the greater the chance of spurious findings.

3. Thoroughly search the available literature. Do not limit yourself to Medline, to abstracts or to English-language papers. Expand your search to the Cochrane Library and the Cochrane Central Register of Controlled Trials. Seek help from a trained librarian. You may already find an answer to your problem, raising doubts about the need for another study.
Do not be disappointed if this happens; since the invention of vaccines, antibiotics and intramedullary nailing, very few interventions have dramatically altered clinical practice. Be modest in your expectations. Literature data will help in narrowing down the likely effect size and in specifying your research goals.

4. Choose the appropriate patient sample (in both qualitative and quantitative terms). What is your target population? To whom will the results of the study apply? Very broad inclusion criteria accelerate recruitment, but too heterogeneous a sample of patients dilutes the inferences

that can be drawn from your data. On the other hand, excessively tight inclusion criteria or too many exclusion criteria will restrict your findings to an unrealistic sample of patients.

5. Select valid and clinically important outcome measures. Are you, your colleagues, the clinical and scientific communities who will read the final manuscript, and the public really interested in surrogate measures such as intraoperative blood loss or time to bone union? Think of validated outcome assessment tools, such as functional scores (e.g. the Harris hip score) and generic measures of quality of life (e.g. the Short Form 36).

6. Draft a study protocol and investigation plan. Do not even think of conducting a clinical study without a detailed protocol; it is the blueprint of your study and will be your best and most reliable friend during its different phases. Even if it takes months to draft the protocol, you will save time later. Be specific. Do not state 'we will allocate patients': patients do not enrol themselves into a clinical trial. Who will do it, when and where? Who will be responsible for explaining the research background and aims, for obtaining informed consent, and for baseline documentation and follow-up examinations?

7. Consult methodological and statistical advisors early, particularly if you are uncertain about type I and type II errors, sample size calculations, confirmatory testing and so on. Take care that the statistician is the midwife of your trial, not its pathologist.

Trauma and orthopaedic surgery has a long and successful history of basic and disease-oriented research. There are few other disciplines in which standardised animal models have provided similarly groundbreaking insights into complex pathophysiological mechanisms such as post-traumatic whole-body inflammation, the biology of bone healing and biomechanics. Trauma laboratories worldwide are at the forefront of expanding the available knowledge to the subcellular and molecular levels.
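Two of the numbered steps above lend themselves to a short numerical sketch: the warning about multiple hypotheses (step 2) and the sample size calculation (step 7). The code below is illustrative only; it assumes independent tests and the simple two-proportion normal-approximation formula, and the example proportions are hypothetical. It is no substitute for consulting a statistician early.

```python
from math import ceil
from statistics import NormalDist

def family_wise_error(k: int, alpha: float = 0.05) -> float:
    """Chance of at least one false-positive result among k independent
    hypothesis tests when every null hypothesis is in fact true."""
    return 1 - (1 - alpha) ** k

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for comparing two proportions
    (normal approximation; a planning sketch only)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_b = NormalDist().inv_cdf(power)          # power (1 - type II error)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Step 2: with 20 hypotheses each tested at the 5% level, the chance of
# at least one spurious 'significant' finding is about 64%.
print(f"FWER for 20 tests: {family_wise_error(20):.2f}")

# Step 7: detecting an improvement from 40% to 60% good outcomes
# (hypothetical figures) needs roughly 95 patients per arm.
print(f"n per group: {n_per_group(0.60, 0.40)}")
```

The first function makes the editorial's "1 in 20" warning concrete; the second shows why vague expectations about effect size (step 3) translate directly into recruitment targets.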
Thus, there is much potential and a good chance of making up for the time during which the importance of clinical research in trauma and orthopaedic surgery was neglected. However, this process demands not only close collaboration between academic and non-academic institutions, but also concerted action across Europe. Ideas are generated independently of local peculiarities, and the same problems occur in different surroundings. Solving them by scientifically sound methods is no longer the task of a few committed individuals but of all members of the surgical community.

Clinical research is still evolving in orthopaedic and trauma surgery. It is envisaged that in the very near future many exciting discoveries and novel treatment methods will be brought from the laboratory into the clinical setting, providing potential cures for pathological conditions that were once thought untreatable. Clinical research will be at the centre of many developments in the foreseeable future. For this reason, it is the focus of this special issue of Injury. The manuscripts included constitute a detailed review of the important issues discussed above. We would like to thank the authors for their contributions and their commitment to the dissemination of knowledge.

Conflict of interest

None.

References

1. Bauwens K, Matthes G, Wich M, et al. Navigated total knee replacement. A meta-analysis. J Bone Joint Surg Am 2007;89:261–9.
2. Bhandari M, Giannoudis PV. Evidence-based medicine: what it is and what it is not. Injury 2006;37:302–6.
3. Bhandari M, Pape HC, Giannoudis PV. Issues in the planning and conduct of randomised trials. Injury 2006;37:349–54.
4. Giannoudis PV, Dinopoulos H, Chalidis B, Hall GM. Surgical stress response. Injury 2006;37:S3–9.
5. Giannoudis PV, Pountos I, Pape HC, Patel JV. Safety and efficacy of vena cava filters in trauma patients. Injury 2007;38:7–18.
6. Giannoudis P, Psarakis S, Kontakis G. Can we accelerate fracture healing? A critical analysis of the literature. Injury 2007;38:S81–9.
7. Giannoudis P, Tzioupis C, Almalki T, Buckley R. Fracture healing in osteoporotic fractures: is it really different? A basic science perspective. Injury 2007;38:S90–9.
8. Gnanalingham MG, Robinson SG, Hawley DP, Gnanalingham KK. A 30-year perspective of the quality of evidence published in 25 clinical journals: signs of change? Postgrad Med J 2006;82:397–9.
9. Kennedy D. Editorial retraction of: Hwang WS, Roh SI, Lee BC, et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 2005;308:1777–83. Science 2006;311:335.
10. Morreim H, Mack MJ, Sade RM. Surgical innovation: too risky to remain unregulated? Ann Thorac Surg 2006;82:1957–65.
11. Porzsolt F, Kaplan RM, editors. Optimizing health: improving the value of healthcare delivery. New York: Springer; 2007.
12. Stylios G, Wan T, Giannoudis P. Present status and future potential of enhancing bone healing using nanotechnology. Injury 2007;38:S63–74.

Peter V. Giannoudis
Academic Department of Trauma and Orthopaedic Surgery, School of Medicine, University of Leeds, UK

Dirk Stengel*
Centre for Clinical Research, Department of Orthopaedic and Trauma Surgery, Unfallkrankenhaus Berlin, Germany
University of Greifswald, Germany

*Corresponding author at: Centre for Clinical Research, Department of Orthopaedic and Trauma Surgery, Unfallkrankenhaus Berlin, Warener Str. 7, 12683 Berlin, Germany. Tel.: +49 30 5681 3030; fax: +49 30 5681 3034.
E-mail address: [email protected] (D. Stengel)