The Joint Commission Journal on Quality and Patient Safety
June 2008, Volume 34, Number 6

Performance Measures

The Era of Big Performance Measurement: Here at Last?

Peter K. Lindenauer, M.D., M.Sc.; Kaveh G. Shojania, M.D.

After a number of false starts, spanning roughly 150 years,1 there seems little doubt that the era of "big" performance measurement in health care in the United States is here to stay. The vast majority of acute care hospitals now participate in the Joint Commission ORYX® core measures program and the Hospital Quality Alliance (HQA); the number of measures that providers are being asked to report on is growing almost daily; and experiments with "pay-for-performance" are taking place with increasing frequency. Although such market-driven efforts have yielded only modest benefits,2 hospital executives increasingly show the same familiarity with concepts such as hospital standardized mortality rates and composite process scores, the lingua franca of the chief quality officer, as they have shown with terms such as days of cash on hand and margin.

This long-awaited arrival of sustained interest in performance measurement reflects several related developments: the gradual accumulation of evidence that the quality and safety of health care in the United States are not as good as they should be3–5; an emboldened payor community, led by the Centers for Medicare & Medicaid Services, that has taken on a more active role than was previously imaginable6; and opportunities to share information about health care providers on the Web.

Performance measurement benefits the constituents of the health care system in different ways. It allows patients to make informed choices about where to seek care and provides payors with information necessary to purchase health care on the basis of value, not just cost. For providers, although evidence of concrete improvements is limited, publicly reported performance measurement appears to stimulate hospitals to invest in quality and safety programs.7,8 Each of the studies9–11 appearing in this issue of The Joint Commission Journal on Quality and Patient Safety can be viewed from the perspective of these stakeholders.
Given that a large percentage of hospitals in the United States now belong to health care systems, patients and payors have a shared interest in identifying characteristics that can be used to select the best ones. Toward this end, Hines and Joshi9 combined data on quality performance from the HQA with information contained in the American Hospital Association annual survey. They found that not-for-profit hospital systems performed consistently better than for-profit ones and that
health systems with centralized approaches to quality management were superior to those with decentralized models. The authors did not ask a perhaps larger question: whether hospitals that belong to health care systems perform better than those that operate independently. One might envision that health systems create value by enhancing the communication of best practices and fostering standardization among member institutions. Whether this is actually the case remains to be seen.

In another study that can benefit patients and payors seeking to make more informed health care choices, Jha et al.10 investigated whether hospitals that participated in and reported progress toward safety goals established by the Leapfrog Group also tended to perform well on HQA process and outcome measures. The not-so-surprising answer was yes. As the authors acknowledge, the causal link between the presence of intensivists or computerized physician order entry systems and mortality in acute myocardial infarction, heart failure, and pneumonia remains fuzzy. The identified associations may in fact reflect confounding: better hospitals have superior performance and also choose to implement the Leapfrog standards. The findings are nevertheless encouraging, if only because two entirely different approaches to rating the quality and safety of hospitals produced reasonably concordant results. Given that the conditions represented in the HQA account for only a fraction of hospital admissions, the availability of more general approaches to quality assessment is welcome.

Finally, and perhaps most interesting, is the role that performance measurement can play as a tool for guiding quality improvement efforts.11 Although programs such as the ORYX core measures and the HQA provide potentially valuable information to patients and payors, merely reporting the relative performance of hospitals in a city or state provides little practical benefit to those seeking concrete strategies for improvement.
Such knowledge is often best gained through site visits that allow for a detailed review of the structures and processes of top-performing institutions, an approach that has an enviable track record in both surgical and cardiovascular care.12,13 To facilitate this process, Allison and colleagues at the University of Alabama developed an algorithm to identify the best performers within the HQA. Their scoring system assigned points on
the basis of relative and absolute performance, sustained performance over time, and success on difficult measures. The 45 hospitals identified through this approach, representing the top 1% nationally, would be an obvious choice for further study.

The organizations at the helm of the measurement and reporting movement in the United States have much to celebrate. In little more than a decade, substantial progress has been made in identifying a starter set of performance measures, establishing an infrastructure and incentive system to enable and foster reporting, and developing nuanced approaches to the use of financial incentives. At the same time, even the most ardent proponents of public reporting acknowledge that the field is in its infancy. To some extent, performance measurement has to date proceeded much like the old joke about the drunk looking for his keys under a lamppost: the drunk didn't lose them there, but it was too dark to look anywhere else. Similarly, many performance measures, including processes of care for acute myocardial infarction, pneumonia, and heart failure, as well as basic structural features such as computerized physician order entry and intensivist staffing, have simply been developed where the light happens to be bright. In this context, there remains a glaring need for performance measures that not only rest on sound evidence but also have strong effects on outcomes that patients care about.14,15 The relatively simple measures in use today must give way to more sophisticated ones capable of encouraging adherence to desired care processes while simultaneously discouraging overuse errors.
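As a rough illustration of the kind of composite scoring described for the Allison et al. approach, the sketch below assigns points for relative standing, absolute performance, sustained performance over time, and success on a difficult measure. This is not the published algorithm: the weights, thresholds, and field names are invented assumptions for exposition only.

```python
# Hypothetical sketch of a composite hospital-scoring scheme in the spirit of
# the approach described above. Weights, thresholds, and field names are
# illustrative assumptions, not the published Allison et al. algorithm.

def composite_score(hospital, peer_rates):
    """Score one hospital's measure-adherence rates (0-1) against peers.

    hospital: dict with "rates_by_year" (list of yearly adherence rates,
              most recent last) and "difficult_measure_rate" (adherence on
              a designated low-national-average measure).
    peer_rates: list of current-year rates for all hospitals in the peer group.
    """
    current = hospital["rates_by_year"][-1]
    points = 0

    # Relative performance: top quartile of the peer distribution earns points.
    percentile = sum(1 for r in peer_rates if r < current) / len(peer_rates)
    if percentile >= 0.75:
        points += 2

    # Absolute performance: near-perfect adherence earns points regardless of peers.
    if current >= 0.95:
        points += 2

    # Sustained performance: no reporting year below a floor.
    if min(hospital["rates_by_year"]) >= 0.90:
        points += 1

    # Success on a difficult measure.
    if hospital["difficult_measure_rate"] >= 0.85:
        points += 1

    return points
```

In a scheme like this, hospitals would be ranked by total points and the top 1% flagged for further study, analogous to the 45 hospitals identified in the demonstration project.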
Furthermore, more robust methods of risk adjustment are needed to make rational decisions on the basis of comparative outcomes, whether driven by improvements in the way administrative data are created, such as the recent requirement to distinguish conditions present on admission from those arising during the course of hospitalization,16 or by greater reliance on review of paper and electronic medical records. Finally, we need better methods of synthesizing and communicating performance information in ways that patients find useful.17

Similar to the conundrum faced by quality improvement more generally, the field of performance measurement must balance the need for more measures against the requirement that such measures have their intended effect.18 Recent history teaches us that even measures which at first seem reasonable can turn out to waste precious resources and the goodwill of practicing clinicians, and may occasionally result in harm to patients.19 Given the high stakes associated with public reporting and pay-for-performance programs, the urgent need to improve3 now applies not only to health care quality but to the quality measures themselves.


References

1. Nightingale F.: Notes on Hospitals, 3rd ed. London: Longmans, Green and Co., 1863.
2. Lindenauer P.K., et al.: Public reporting and pay for performance in hospital quality improvement. N Engl J Med 356:486–496, Feb. 1, 2007. Epub Jan. 26, 2007.
3. Chassin M.R., Galvin R.W.: The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA 280:1000–1005, Sep. 16, 1998.
4. Jencks S.F., et al.: Quality of medical care delivered to Medicare beneficiaries: A profile at state and national levels. JAMA 284:1670–1676, Oct. 4, 2000.
5. McGlynn E.A., et al.: The quality of health care delivered to adults in the United States. N Engl J Med 348:2635–2645, Jun. 26, 2003.
6. Pear R.: Medicare says it won't cover hospital errors. The New York Times, Aug. 19, 2007.
7. Fung C.H., et al.: Systematic review: The evidence that publishing patient care performance data improves quality of care. Ann Intern Med 148:111–123, Jan. 15, 2008.
8. Hibbard J.H., Stockard J., Tusler M.: Hospital performance reports: Impact on quality, market share, and reputation. Health Aff (Millwood) 24:1150–1160, Jul.–Aug. 2005.
9. Hines S., Joshi M.S.: Variation in quality of care within health systems. Jt Comm J Qual Patient Saf 34:326–332, Jun. 2008.
10. Jha A.K., et al.: Does the Leapfrog Program help identify high-quality hospitals? Jt Comm J Qual Patient Saf 34:318–325, Jun. 2008.
11. Allison J.J., et al.: Identifying top-performing hospitals by algorithm: Results from a demonstration project. Jt Comm J Qual Patient Saf 34:309–317, Jun. 2008.
12. Khuri S.F., et al.: The Department of Veterans Affairs' NSQIP: The first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. Ann Surg 228:491–507, Oct. 1998.
13. O'Connor G.T., et al.: A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. JAMA 275:841–846, Mar. 20, 1996.
14. Bradley E.H., et al.: Hospital quality for acute myocardial infarction: Correlation among process measures and relationship with short-term mortality. JAMA 296:72–78, Jul. 5, 2006.
15. Werner R.M., Bradlow E.T.: Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA 296:2694–2702, Dec. 13, 2006. Erratum in: JAMA 297:700, Feb. 21, 2007.
16. Pine M., et al.: Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA 297:71–76, Jan. 3, 2007.
17. Peters E., et al.: Less is more in presenting quality information to consumers. Med Care Res Rev 64:169–190, Apr. 2007.
18. Auerbach A., Landefeld C.S., Shojania K.G.: The tension between needing to improve care and knowing how to do it. N Engl J Med 357:608–613, Aug. 9, 2007.
19. Mitka M.: JCAHO tweaks emergency departments' pneumonia treatment standards. JAMA 297:1758–1759, Apr. 25, 2007. Erratum in: JAMA 298:518, Aug. 1, 2007.

Peter K. Lindenauer, M.D., M.Sc., is Medical Director, Clinical and Quality Informatics, Baystate Health, and Associate Professor of Medicine, Tufts University School of Medicine, Springfield, Massachusetts. Kaveh G. Shojania, M.D., is Medical Director of Performance Improvement and Associate Professor of Medicine, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, and a member of the Editorial Advisory Board of The Joint Commission Journal on Quality and Patient Safety. Please address correspondence to Peter K. Lindenauer, [email protected].


Copyright 2008 Joint Commission on Accreditation of Healthcare Organizations