Tweaking the Tools of “Quality” Measurement

Patrick Twomey, MD, Oakland, Calif

From the Department of Surgery, University of California, San Francisco-East Bay, Oakland, Calif

Accepted for publication July 17, 2005. Reprint requests: Patrick Twomey, MD, Department of Surgery, University of California, San Francisco-East Bay, 1411 East 31st St, Oakland, CA 94602. E-mail: [email protected]. Surgery 2005;138:508-9. doi:10.1016/j.surg.2005.07.002

The “volume/quality” controversy smolders on. Are there surgical procedures whose complexity, resource demands, or other factors require their concentration in certain high-volume hospitals? Does restricting some operations to a few “centers of excellence” lead to better outcomes? Today’s centralized data collection by government and other agencies generates plenty of numbers, but how should we interpret them?[1]

This report by Stukenborg and colleagues is one in a series from this group comparing different scoring systems retrospectively applied to administrative data. This time the target procedure is resective surgery for lung cancer, and the sole outcome variable is mortality. In their sample of more than 14,000 patients, there were 519 deaths scattered among 330 hospitals over a 3-year period, an average well below 1 death per hospital per year. The distribution of hospital deaths for this 3.6% of patients, adjusted for risk, is proposed as a measure of “quality of care” for the hospitals, which is then correlated with their lung cancer operative volume.
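A back-of-the-envelope calculation, using only the figures quoted above, shows how sparse these outcome events are at the hospital level:

\[
\frac{519\ \text{deaths}}{330\ \text{hospitals} \times 3\ \text{years}} \approx 0.52\ \text{deaths per hospital per year}.
\]

With roughly half a death per hospital per year, a single additional death can shift a low-volume hospital’s apparent mortality rate substantially, so hospital-level comparisons rest on very few events.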

The authors’ extensive statistical analysis compares three models for measuring risk-adjusted, in-hospital mortality after lung cancer operations. They show that adjusting estimates of risk using only comorbidities present at admission (and not adding to the prediction model comorbidities that developed or were detected later) reduces the apparent differences sometimes found between low-volume and high-volume hospitals. With the authors’ favored analytic technique, the estimated difference edges toward the possibility that “volume matters little,” although the authors never say so explicitly. But by any method of analysis considered here, hospital volume is a minor contributor to variation in mortality.
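To make the mechanics concrete, the following is a minimal sketch of how such a risk adjustment might be set up. It is not the authors’ model or code; the column names, volume cut points, and the choice of an ordinary logistic regression are assumptions made purely for illustration. The idea is simply to fit a mortality risk model on present-at-admission comorbidity flags alone, then compare observed with expected deaths across hospital volume strata.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression


def observed_vs_expected(df: pd.DataFrame, comorbidity_cols: list) -> pd.DataFrame:
    """Compare observed and expected deaths across hospital volume strata.

    Assumed (hypothetical) columns: 'died' (0/1), 'hospital_id', and 0/1
    flags in `comorbidity_cols` for comorbidities present at admission.
    """
    # 1. Fit the risk model on present-at-admission comorbidities only.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[comorbidity_cols], df["died"])
    df = df.assign(expected=model.predict_proba(df[comorbidity_cols])[:, 1])

    # 2. Stratify hospitals by case volume over the study period
    #    (the cut points below are illustrative, not the authors').
    cases_per_hospital = df.groupby("hospital_id")["died"].size().rename("cases")
    df = df.join(cases_per_hospital, on="hospital_id")
    df["volume_stratum"] = pd.cut(
        df["cases"], bins=[0, 30, 100, np.inf], labels=["low", "medium", "high"]
    )

    # 3. Observed-to-expected mortality per stratum: ratios near 1 suggest
    #    the risk model, rather than volume, accounts for the deaths.
    summary = df.groupby("volume_stratum", observed=True).agg(
        observed_deaths=("died", "sum"),
        expected_deaths=("expected", "sum"),
    )
    summary["o_to_e_ratio"] = summary["observed_deaths"] / summary["expected_deaths"]
    return summary
```

In such a sketch, observed-to-expected ratios near 1 in every stratum would be consistent with the finding that, once admission risk is accounted for, volume contributes little to mortality variation.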

Apart from the small size of any volume-mortality effect, there is another problem with using studies such as this to guide policy: administrative data are a poor basis for measuring clinical outcomes.[2] It is said that “the King’s census is only as good as the village watchman’s count.” The primary data analyzed here were gathered by coders chiefly for billing purposes and assembled by the state of California for measuring “financial performance, bed availability, productivity, …, and county health planning.”[3] Little or none of this coding was intended for exercises in clinical quality assessment years later, and no amount of statistical tweaking can replace missing or misclassified data. Far more useful measures come from “purpose-gathered” data on risk and outcome, collected by a dedicated health professional, often a trained registered nurse. Such data are already being gathered, and the American College of Surgeons has endorsed such a program, adapting the National Surgical Quality Improvement Program system developed by the Department of Veterans Affairs.

Practicing surgeons also may feel that an overall operative mortality rate of 3.6% for extirpation of a disease as deadly as lung cancer does not sound excessive, and that analyzing the sources of its variability is a blunt tool for ferreting out quality differences among hospitals.[4] Indeed, focusing entirely on the relatively rare deaths in this population takes no account of outcomes in the 96% of patients who did not die. It ignores complications, duration of hospital stay, patient satisfaction, and cost as quality measures. It also says nothing about success in palliation or cure of the lung cancers. Would not many patients, and their surgeons, be willing to trade a few tenths of a percentage point in operative risk for any measurable increase in the chance of cure?

Finally, audits of hospital outcome alone have an additional drawback: they give no guide to action. Even if they identify an “outlier hospital,” they provide no specifics to inform efforts to improve. Should that hospital recruit more intensivists, computerize the pharmacy, buy better monitors, or change nurse staffing ratios? Process audits, which study “what we are doing” rather than “how it all came out,” are more useful in addressing these questions. Such efforts are now springing up across the United States, prodded by the Leapfrog Group and by Medicare’s Pay for Performance initiatives, among others.

So we join our voices with those proposing the abandonment of solely retrospective analyses of administrative data for quality assessment and call instead for using purpose-gathered, prospective clinical data on both process and outcome.[5] These data need to be audited for quality, processed in “real time,” and extended to end points beyond 30-day mortality. As Stukenborg and colleagues note, gathering such data can be costly. We share the hope that identifying areas for improvement, particularly systems changes, and the better outcomes that follow will more than offset this expense.

REFERENCES

1. Stukenborg GJ, Kilbridge KL, Wagner DP, Harrell FE Jr, Oliver MN, Lyman JA, et al. Present-at-admission diagnoses improve mortality risk adjustment and allow more accurate assessment of the relationship between volume of lung cancer operations and mortality risk. Surgery 2005;138:498-507.
2. Atherly A, Fink AS, Campbell DC, Mentzer RM, Henderson W, Khuri S, et al. Evaluating alternative risk-adjustment strategies for surgery. Am J Surg 2004;188:566-70.
3. The Office of Statewide Health Planning and Development. Healthcare information available through OSHPD, Nov 2004. Available at: http://www.oshpd.ca.gov/hqad/customerservice/hirc1104.ppt. Accessed May 2005.
4. Zalkind DL, Eastaugh SR. Mortality rates as an indicator of hospital quality. Hosp Health Adm 1997;42:3-15.
5. Bass BL. Invited commentary: Measurement of quality in surgery: That's our job. Surgery 2004;135:575-9.