Editorial
The use of quality metrics in health care: primum non nocere and the law of unintended consequences
Todd R. Jenkins, MD, MSHA
Quality and the law of unintended consequences

"Healthcare in the United States is not as safe as it should be."1 With these words, the Institute of Medicine launched its wake-up call to the US health care industry regarding the need to improve the quality of the care that we provide to our patients. Since that publication, government organizations, professional organizations, hospitals, and health care providers have worked diligently to improve the quality of the care that we provide to our patients on a daily basis. In an effort to accelerate the pace of quality improvement, the Affordable Care Act included 3 value-based purchasing programs: the hospital value-based purchasing program, the hospital readmissions reduction program (HRRP), and the hospital-acquired condition (HAC) reduction program.2 Each of these programs uses financial incentives and penalties based on the performance of health care organizations on predetermined quality metrics. As a further stimulus to improve, hospital performance on these quality metrics is publicly reported on the Hospital Compare website (http://www.medicare.gov/hospitalcompare).2 In fiscal year 2013, as a result of data obtained through these 3 value-based purchasing programs, Centers for Medicare and Medicaid Services (CMS) changes to inpatient prospective payment system payments to hospitals resulted in a "redistribution of almost 1 billion dollars among hospitals."3 These payments are a redistribution of money because most of these programs are budget neutral: money is removed from payments to poorly performing hospitals and added to payments to high performers. With this amount of money involved, it is incumbent on each hospital and physician to understand the basics of these programs.
From the Department of Obstetrics and Gynecology, University of Alabama at Birmingham, Birmingham, AL. Received Sept. 14, 2015; accepted Oct. 19, 2015. The author reports no conflict of interest. Corresponding author: Todd R. Jenkins, MD, MSHA. trjenkins@uabmc.edu
0002-9378/free © 2016 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ajog.2015.10.019
Related article, Page 259
Hospital value-based purchasing program: Hospital performance is assessed on a set of quality measures separated into 5 domains: process of care, patient safety and outcomes, mortality, patient experience of care, and efficiency. "Hospital performance on each measure is scored taking into account achievement relative to a predetermined standard as well as a hospital's improvement compared with a prior period."4

HRRP: Hospitals determined to have an "excessive rate of preventable readmissions" can be penalized up to 3% of their Medicare payments.4 Currently, the program includes readmissions for heart attack, heart failure, pneumonia, hip arthroplasty, knee arthroplasty, and chronic obstructive pulmonary disease. In fiscal year 2015, "about three-fourths of the 3478 hospitals for which an HRRP adjustment was reported by the CMS were penalized."4

HAC reduction: This program measures hospital outcomes on multiple metrics, mainly related to infection rates and patient safety. The unique component of this program is that the "law requires that hospitals with scores in the worst performing quartile receive a 1% point reduction on their total inpatient prospective payment system payments."4

These programs and initiatives have been successful in "improving hospital performance on the various program metrics."4 However, many authors are beginning to question the validity, accuracy, and value of the performance metrics used in these programs and other quality initiatives.3-7 Given the significant effect that these programs and their metrics have on health care organizations and the patients whom they serve, it is imperative that the quality metrics and measures used be both appropriate and accurate.

In this issue of the Journal, Morgan et al5 report on their review of surgical site infections (SSI) in patients undergoing abdominal hysterectomy in the Michigan Surgical Quality Collaborative. The SSI rate after abdominal hysterectomy will be a metric in the HAC reduction program. Their study identified 2 significant flaws with this metric. First, hospitals in the bottom quartile, which would receive a 1% penalty under the HAC program, did not have SSI rates that were statistically significantly different from those of hospitals above the bottom quartile. These hospitals could therefore be financially penalized even though their difference in infection rates could simply reflect chance. Second, after risk adjustment based on evidence-based risk factors for SSI, such as body mass index >30 and cancer, 20% of programs changed quartiles, which would alter which hospitals are penalized under this program.5 Without risk adjustment, these hospitals, and potentially their patients, would be harmed by this program.
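To make the quartile arithmetic concrete, the following minimal sketch ranks hypothetical hospitals by their crude SSI rate and again by an observed-to-expected ratio from a toy risk model, then counts how many land in a different quartile once risk adjustment is applied. The hospital counts, risk-factor shares, and expected-rate model are assumptions for illustration only, not the methodology of Morgan et al or of CMS.

```python
import random

random.seed(1)

# Illustrative sketch only: 20 hypothetical hospitals and a toy expected-SSI model.
hospitals = []
for i in range(20):
    cases = random.randint(100, 400)              # abdominal hysterectomies performed
    high_risk = random.uniform(0.2, 0.6)          # share of patients with BMI >30 or cancer
    expected = cases * (0.02 + 0.04 * high_risk)  # expected SSIs under the toy risk model
    observed = max(0, round(random.gauss(expected, expected ** 0.5)))
    hospitals.append({"id": i, "cases": cases, "observed": observed, "expected": expected})

def quartile(values, v):
    """Return 1 (best) through 4 (worst) for value v within the list of values."""
    ranked = sorted(values)
    return 1 + (4 * ranked.index(v)) // len(values)

crude = {h["id"]: h["observed"] / h["cases"] for h in hospitals}       # unadjusted SSI rate
adjusted = {h["id"]: h["observed"] / h["expected"] for h in hospitals} # observed/expected ratio

movers = sum(
    1
    for h in hospitals
    if quartile(list(crude.values()), crude[h["id"]])
    != quartile(list(adjusted.values()), adjusted[h["id"]])
)
print(f"{movers} of {len(hospitals)} hospitals change quartile after risk adjustment")
```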
Health policy experts have previously warned that health outcome metrics may be inaccurate because of a failure to risk adjust appropriately. Gilman et al6 wrote, "using health outcomes as a metric of value is...potentially problematic because severity of illness and social challenges that affect health management might not be fully captured in risk adjustment models." As a result of this potential bias, their research found that "safety net hospitals were at greater risk of receiving reduced payments than other hospitals" and "were also less likely than other hospitals to be receiving bonus payments."6 The potential negative financial effects of inaccurate measurement resulting from a failure to adjust for social and demographic characteristics could result in a hospital being penalized twice, because these factors can also play a significant role in hospital readmission rates (HRRP).

Concerns about current quality measures are not limited to inadequate risk adjustment; some measures may not accurately capture the problem they are intended to assess. Calderon et al7 describe concerns about the accuracy of the catheter-associated urinary tract infection (CAUTI) measure. Currently, almost one third of the CMS HAC reduction program penalty is based on the Centers for Disease Control and Prevention (CDC) CAUTI metric. This metric is based on self-reported data and is standardized by dividing the total number of infections by total catheter days divided by 1000 (ie, infections per 1000 catheter days). In contrast, the Agency for Healthcare Research and Quality (AHRQ) CAUTI metric uses nurse reporting of randomly selected cases with a standardized reporting instrument, and the resulting data are standardized by dividing the total number of infections by hospital discharges divided by 1000 (ie, infections per 1000 discharges). From 2009 through 2013, the CDC metric demonstrated a 5% increase in the rate of CAUTI, whereas the AHRQ metric found a 28% decrease in the incidence of CAUTI. The authors question the validity of this metric given the wide variability in the results. Furthermore, the authors expressed concern that using catheter days as the denominator provides an incentive to use catheters longer than necessary, which runs counter to the primary quality goal for which the metric was designed.7
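The denominator issue is easy to see with a worked example. The following sketch uses hypothetical counts, not CDC or AHRQ surveillance data, to show that a hospital that removes catheters sooner can have fewer total infections and a lower rate per 1000 discharges even as its rate per 1000 catheter days rises, which is the incentive problem the authors describe.

```python
# Illustrative arithmetic only: hypothetical counts, not CDC or AHRQ surveillance data.
def cauti_rates(infections, catheter_days, discharges):
    per_1000_catheter_days = infections / (catheter_days / 1000)  # CDC-style standardization
    per_1000_discharges = infections / (discharges / 1000)        # AHRQ-style standardization
    return per_1000_catheter_days, per_1000_discharges

# Baseline year: liberal catheter use.
before = cauti_rates(infections=60, catheter_days=30_000, discharges=20_000)
# Following year: catheters removed sooner, so fewer catheter days and fewer infections.
after = cauti_rates(infections=45, catheter_days=18_000, discharges=20_000)

print(f"per 1000 catheter days: {before[0]:.2f} -> {after[0]:.2f}")  # 2.00 -> 2.50 (appears worse)
print(f"per 1000 discharges:    {before[1]:.2f} -> {after[1]:.2f}")  # 3.00 -> 2.25 (appears better)
```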
An additional limitation of current quality measures is that they may capture events that are, to some degree, outside the control of the health care organization or provider. For example, rates of third- and fourth-degree perineal lacerations have been used as a patient safety indicator by the AHRQ, and the Joint Commission includes the rate of these complications in its Pregnancy and Related Conditions Core Measures. However, the National Quality Forum withdrew its support for this measure, "citing concerns around unreliable data...a majority of risk factors not being amenable to prevention, and no interval change in laceration rates after 2003, when laceration rates were adopted as a quality measure."8 Furthermore, in a review of data from the Nationwide Inpatient Sample, Friedman et al8 found that the "large majority of hospitals in our analysis had adjusted laceration rates that were statistically indistinguishable when including 95% confidence intervals, precluding meaningful comparisons between different institutions."

The goal of this editorial is to familiarize the reader with the 3 current CMS programs aimed at improving the quality and efficiency of US health care and to describe some of the concerns surrounding 3 of the quality metrics currently used in these programs. The goal is not to criticize the tremendous efforts under way to improve the quality of the health care that we provide, nor to criticize the use of quality metrics. Goals are not truly achievable or tangible unless we define objective measures for meeting them. However, it is imperative that, in our zeal to improve, we choose metrics that are accurate, are fair, and achieve their intended goal without significant unintended consequences. The law of unintended consequences holds that the actions of people always have effects that are unanticipated or unintended. Economist Rob Norton wrote, "Economists and social scientists have heeded its [the law of unintended consequences] power for centuries; for just as long, politicians and popular opinion have largely ignored it."9

Defining and measuring quality is an extremely challenging proposition and critically important for the future of health care. Physicians, rather than politicians or public opinion, must be at the forefront in selecting, validating, and implementing future quality metrics. Physician training is steeped in the concepts of scientific validity and primum non nocere: first, do no harm. Therefore, we must create, study, and advocate for the use of appropriate quality metrics. Improving the quality of the care that we provide to our patients is too important.

REFERENCES
1. National Research Council. To err is human: building a safer health system. Washington (DC): National Academies Press; 2000.
2. Patient Protection and Affordable Care Act, 42 USC §18001 (2010).
3. Rau J. Medicare discloses hospitals' bonuses, penalties based on quality. Kaiser Health News. Available at: http://www.kaiserhealthnews.org/stories/2012/december/21/medicare-hospitals-value-based-purchasing.aspx. Accessed Jan. 26, 2015.
4. Kahn CN, Ault T, Potetz L, Walke T, Chambers JH, Burch S. Assessing Medicare's hospital pay-for-performance programs and whether they are achieving their goals. Health Affairs 2015;34:1281-8.
5. Morgan DM, Streifel KM, Kamdar NS, et al. Surgical site infection following hysterectomy: adjusted rankings in a regional collaborative. Am J Obstet Gynecol 2016;214:259.e1-8.
6. Gilman M, Adams K, Hockenberry JM, Milstein AS, Wilson IB, Becker ER. Safety-net hospitals more likely than other hospitals to fare poorly under Medicare's value-based purchasing. Health Affairs 2015;34:398-405.
7. Calderon LE, Kavanagh KT, Rice MK. Questionable validity of the catheter-associated urinary tract infection metric used for value-based purchasing. Am J Infect Control 2015;43:1050-2.
8. Friedman AM, Ananth CV, Prendergast E, D'Alton ME, Wright JD. Evaluation of third-degree and fourth-degree laceration rates as quality indicators. Obstet Gynecol 2015;125:927-37.
9. Norton R. The concise encyclopedia of economics: unintended consequences. Available at: http://www.econlib.org. Accessed August 13, 2015.