When bad evidence happens to good treatments

Acute Pain (2008) 10, 161—162

ABSTRACT

Dan Carr a,b

a Saltonstall Professor of Pain Research, Departments of Anesthesia and Medicine at Tufts Medical Center, Boston, USA
b President and CMO of Javelin Pharmaceuticals, Inc., Cambridge, Massachusetts, USA

Available online 16 September 2008

The idea that medical practice should be based upon observation of outcomes is at least as old as Plato. However, only recently has the collection and synthesis of clinical evidence on a large scale, and in close to real time, become feasible. Meta-analysis of randomised controlled trials, although a valuable and powerful tool, is easily misused for at least two fundamental reasons. The first is technical: it arises from the poor quality of the primary trials (underpowering; heterogeneity of patients, treatments and assessment tools; too-short follow-up, etc.) and extends to suboptimal application of the methods for their synthesis. The second is deeper and conceptual: it arises from the innate satisfaction of understanding apparently disorganised data and phenomena by discerning concise, timeless unifying principles ("compression"). This tendency assigns greater validity to a quantitative, reductionist approach than to a non-reductionist, qualitative approach (e.g., narrative).

Unfortunately, among the recent developments in the field of evidence-based medicine (EBM) is its application to deny payment for treatments that may actually benefit subgroups of patients whose responses are not captured when group mean outcomes are compared. This novel application is particularly damaging to the development and ongoing provision of high-tech, invasive therapies for patients whose appropriate, meaningful outcomes may not be captured in simple numerical scales, but it affects other treatments as well (e.g., opioids for chronic pain). This talk will survey technical shortcomings of the evidence currently available for or against certain pain treatments, and will question whether simplistic numerical methods may justly be applied to deny payment for treatments that do not meet reductionist standards.
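The central statistical point, that comparing group means can misrepresent a treatment whose benefit is concentrated in a responder subgroup (see Witter et al. in the Further Reading), can be illustrated with a small simulation. The Python sketch below is purely illustrative and is not taken from the talk; the pain scale, sample size, responder fraction and 20-point responder threshold are all hypothetical choices.

    # Illustrative sketch (hypothetical numbers): how a modest difference in
    # group means can conceal a subgroup of strong responders.
    import random
    import statistics

    random.seed(0)
    N = 200  # hypothetical patients per arm

    # Placebo arm: small, fairly homogeneous reduction in pain score (0-100 scale).
    placebo = [max(0.0, random.gauss(5, 8)) for _ in range(N)]

    # Active arm: 30% strong responders (about a 35-point reduction),
    # 70% essentially indistinguishable from placebo.
    active = [
        max(0.0, random.gauss(35, 8)) if random.random() < 0.30
        else max(0.0, random.gauss(5, 8))
        for _ in range(N)
    ]

    # The mean comparison suggests a modest average benefit spread across everyone...
    mean_diff = statistics.mean(active) - statistics.mean(placebo)
    print(f"Difference in group mean pain reduction: {mean_diff:.1f} points")

    # ...whereas a responder analysis (proportion achieving at least a 20-point
    # reduction) shows that a minority benefits substantially and most not at all.
    resp_active = sum(x >= 20 for x in active) / N
    resp_placebo = sum(x >= 20 for x in placebo) / N
    print(f"Responders (>=20-point reduction): active {resp_active:.0%}, "
          f"placebo {resp_placebo:.0%}")

Under these assumed numbers the mean difference is on the order of 9 points, while roughly a third of the active arm obtains substantial relief that a means-only comparison does not reveal.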

Further Reading

[1] Carr DB. Pain therapy. Evidence, explanation—or 'the power of myth'? Curr Opin Anesthesiol 1996;9:415—20.
[2] Carr DB. Memoir of a meta-analyst: on the silent "L" in "Quantitative". In: Carr DB, Loeser JD, Morris DB, editors. Narrative, pain, and suffering. Progress in pain research and management, vol. 34. Seattle: IASP Press; 2005. p. 325—54.
[3] Carr DB. When bad evidence happens to good treatments. Reg Anesth Pain Med 2008;33:229—40.



[4] Chaitin G. On the intelligibility of the universe and the notions of simplicity, complexity and irreducibility. German Philosophical Congress, 2004. Available at: www.cs.auckland.ac.nz/CDMTCS/chaitin/bonn.html.
[5] Jadad AR. Meta-analysis in pain relief: a valuable but easily misused tool. Curr Opin Anesthesiol 1996;9:426—9.
[6] Witter L, Simon L, Dionne RA. Are means meaningless? The application of individual responder analysis to analgesic drug development. APS Bull 2003;13:1—8.

[7] Wittink H, Wiffen P, Carr DB. Evidence-based medicine in pain management. In: Berman S, editor. Approaches to pain management: an essential guide for clinical leaders. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations; 2003. p. 21—33 [Chapter 2].
