The Impact of Quality Variations on Patients Undergoing Surgery for Renal Cell Carcinoma: A National Cancer Database Study



Keith A. Lawson a, Olli Saarela b, Robert Abouassaly c, Simon P. Kim c, Rodney H. Breau d, Antonio Finelli a,*

a Division of Urology, Departments of Surgery and Surgical Oncology, Princess Margaret Cancer Centre, Toronto, ON, Canada; b Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada; c Urology Institute, University Hospitals Case Medical Center, Cleveland, OH, USA; d Division of Urology, Department of Surgery, University of Ottawa, Ottawa, ON, Canada

Article info

Abstract

Article history: Accepted April 27, 2017

Background: Despite efforts to define metrics assessing hospital-level quality for renal cell carcinoma (RCC) surgical care, there remains a paucity of real-world data validating their ability to benchmark performance. Consequently, whether poor performance on hospital-level quality indicators is associated with inferior patient outcomes remains unknown.
Objectives: To determine hospital-level variations in RCC surgical quality after adjusting for differences in patient- and tumor-specific factors. Further, to determine associations between hospital-level quality performance and surgical volume, academic affiliation, and patient mortality.
Design, setting, and participants: RCC patients undergoing surgery in the USA and Puerto Rico (2004–2014) were identified from the National Cancer Database.
Outcome measures and statistical analysis: Hospital-level quality of care was assessed according to disease-specific process and outcome quality indicators. Case-mix adjusted hospital benchmarking was performed using indirect standardization methodology and multivariable regression models. A composite measure of quality, the Renal Cancer Quality Score (RC-QS), was subsequently derived, and associations between the RC-QS and surgical volume, academic affiliation, and patient mortality were determined.
Results and limitations: Over 1100 hospitals were benchmarked for quality, with 10–31% identified as providing poor care for a given quality indicator. Lower RC-QS hospitals had smaller referral volumes and were less academic compared with higher RC-QS hospitals (p < 0.001). Higher RC-QS was independently associated with lower 30-d, 90-d, and overall mortality (adjusted odds ratio [confidence interval]: 0.92 [0.90–0.95], odds ratio: 0.94 [0.91–0.96], hazard ratio: 0.97 [0.96–0.98] per unit increase, respectively). These data are retrospective and it is unknown whether improvement in the RC-QS improves outcomes.
Conclusions: Widespread hospital-level variations in RCC surgical quality exist, as captured by the RC-QS. Superior quality is associated with improved patient outcomes, including a mortality benefit. The RC-QS serves as a benchmarking tool for RCC quality that can provide audit-level feedback to hospitals and policymakers for quality improvement.
Patient summary: We benchmarked hospital performance across quality indicators for kidney cancer surgical care. Overall, large variations in quality exist, with high-volume academic hospitals demonstrating superior performance and improved patient survival. These data can inform hospitals and policymakers for quality improvement initiatives.
© 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

Associate Editor: Giacomo Novara

Keywords: Renal cell carcinoma; Quality; Quality indicators; Performance measures; Benchmarking

* Corresponding author. Division of Urology, Department of Surgery, Princess Margaret Hospital, 3-130, 610 University Avenue, Toronto, ON M5G 2M9, Canada. Tel. +1-416-946-2851; Fax: +1-416-946-6590. E-mail addresses: antonio.fi[email protected], a.fi[email protected] (A. Finelli).
http://dx.doi.org/10.1016/j.eururo.2017.04.033
0302-2838/© 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.


1. Introduction

Significant effort is being directed at understanding variations in the quality of surgical care, as evidenced by the growth of pay-for-performance programs and national quality initiatives [1,2]. Efforts to measure quality have expanded to most areas of surgical oncology, with specialty-specific societies endorsing expert-opinion-generated quality indicators (QIs) that measure various structural, process, and outcome elements of health care delivery [3,4]. Despite this, a paucity of real-world data exists validating these indicators as benchmarking tools that can accurately identify hospitals providing poor care [5]. Similarly, the impact of hospital-level quality variation on patient outcomes is significantly underreported for most QIs [5]. Consequently, concerns regarding the appropriate use and choice of indicators have arisen, given their impact on financial and administrative resource consumption and allocation, as well as their effect on patient autonomy and physician reputation [6,7]. Hence, a robust, data-driven approach to validating putative QIs is urgently needed in order to prioritize the most valuable measures.

To date, efforts to measure the quality of renal cell carcinoma (RCC) surgical care remain in their infancy. While putative expert-generated RCC-specific QIs have been proposed, no real-world data exist to validate these metrics as benchmarking tools that can discriminate provider performance in a manner that captures disparate patient outcomes [8]. This is in part due to the challenge of adjusting for the complex case-mix variation between hospitals, which is required to benchmark performance rigorously [9]. Comprehensive cancer-specific data initiatives, such as the National Cancer Database (NCDB) in the USA, which capture granular patient- and tumor-specific variables across large numbers of hospitals, have greatly facilitated case-mix adjusted quality benchmarking in surgical oncology and, as such, provide a platform to validate putative QIs in urologic oncology [10].

Given the paucity of real-world data benchmarking provider performance in RCC surgical care, our primary objective was to determine whether nationwide variations in quality exist at the hospital level after adjusting for differences in case-mix factors captured within the NCDB. Further, we investigated structural elements (ie, hospital type, location, surgical volume) associated with hospital-level quality. Lastly, we sought to determine whether benchmarking hospital performance using our case-mix adjusted QIs could discriminate provider performance in a manner that captures disparate patient outcomes, by assessing associations between hospital-level quality and patient mortality. We hypothesized that variations in RCC surgical quality exist, with poor quality being associated with adverse patient outcomes.

2. Materials and methods

2.1. Data

This cohort study utilized the NCDB, which prospectively collects hospital-level data from Commission on Cancer-accredited facilities in the USA and Puerto Rico. The NCDB captures approximately 70% of newly diagnosed cancer cases, with over 30 million individual records accumulated since its inception across more than 1500 hospitals. Approval was obtained from the University Health Network (Toronto, ON, Canada) Research Ethics Board.

2.2. QIs

Hospital-level performance was benchmarked according to five QIs. Three process QIs were identified from a previously published modified Delphi study [8]: the proportion of patients with (1) T1a tumors undergoing partial nephrectomy (PN), (2) T1-2 tumors receiving a minimally invasive (laparoscopic or robotic) approach for radical nephrectomy (MIS), and (3) a positive surgical margin following PN for T1 tumors (PM). Two outcome QIs, (1) length of hospital stay after radical nephrectomy for T1-4 tumors (LOS) and (2) the 30-d unplanned readmission proportion after radical nephrectomy for T1-4 tumors (RP), were additionally chosen given their utility as quality benchmarking tools in other realms of surgical oncology [4,11].
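To fix ideas, the sketch below shows how the observed (unadjusted) value of each indicator might be tabulated per hospital from patient-level records. It is illustrative only: the column names (t_stage, partial_nephrectomy, minimally_invasive, etc.) are hypothetical placeholders rather than NCDB variables, and the cohort restrictions described in Section 2.3 are omitted for brevity.

```python
import pandas as pd


def observed_indicator_values(df: pd.DataFrame) -> pd.DataFrame:
    """Per-hospital observed values for the five QIs (before case-mix adjustment)."""
    rows = {}
    for hosp, h in df.groupby("hospital_id"):
        rn = h[h["radical_nephrectomy"]]                       # radical nephrectomy, T1-4
        t1a = h[h["t_stage"] == "T1a"]                         # T1a tumors
        pn_t1 = h[h["partial_nephrectomy"] & h["t_stage"].str.match(r"T1")]
        rows[hosp] = {
            "PN": t1a["partial_nephrectomy"].mean(),           # T1a managed with partial nephrectomy
            "MIS": rn.loc[rn["t_stage"].str.match(r"T[12]"), "minimally_invasive"].mean(),
            "PM": pn_t1["positive_margin"].mean(),             # positive margin after PN for T1
            "LOS": rn["length_of_stay"].mean(),                # mean length of stay
            "RP": rn["readmit_30d_unplanned"].mean(),          # 30-d unplanned readmission
        }
    return pd.DataFrame.from_dict(rows, orient="index")
```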

2.3. Study cohort

International Classification of Diseases for Oncology (third edition) histology and site-specific surgery codes were employed to identify RCC patients who underwent PN or radical nephrectomy between 2004 and 2014. Data from before 2004 were excluded due to incomplete patient comorbidity information. For the MIS indicator, analysis was restricted to 2010 onward, as information on laparoscopic approach was not available prior to that year. For the LOS and RP QIs, patients with localized and metastatic disease were included, whereas metastatic patients were excluded from analysis of the MIS, PN, and PM QIs. Summaries of all inclusion and exclusion criteria, including International Classification of Diseases and histology codes, are available in Supplementary Table 1.

2.4. Statistical analysis

Our statistical approaches closely followed those used in previous analyses of NCDB data for quality comparisons in surgical oncology [10,12]. Interhospital variability in the QIs was investigated using random intercept generalized linear models; the PN, MIS, PM, and RP QIs were modeled through logistic regression, while LOS, transformed as the natural logarithm of 1 + LOS because of its skewed distribution, was modeled through linear regression. We estimated an intraclass correlation (between-hospital variance proportion) using the latent variable method and calculated p-values for tests of the null hypothesis of no between-hospital variance component [13]. QIs were adjusted for case-mix using indirect standardization, in which for each hospital we calculated a standardized ratio of observed to expected outcomes (analogous to a standardized mortality ratio) [14]. The expected quality outcomes were calculated from multivariable regression models (logistic for PN, MIS, PM, and RP; linear for the transformed LOS) fitted to the entire patient population without the hospital-level random intercepts, given all relevant patient-level demographic, comorbidity, disease progression, and tumor characteristics recorded in the NCDB, as listed in Supplementary Figures 1A–E. To identify outlier hospitals, we used z-test statistics of the form Z = (O - E)/S,


where S stands for the standard error of the observed outcome O, taking the model-based expected outcome E to be a fixed quantity, and calculated p-values from the standard normal distribution.

For each QI, we classified outlier hospitals performing worse than expected as poor outliers and those performing better than expected as superior outliers. We experimented with adjusting for multiple testing through Bonferroni correction and control of the false discovery rate [15], but chose to use the fixed p-value threshold of 0.05 for the purpose of classifying outlier hospitals for further analyses, as previously reported [10,12]. Additionally, for data representation, we multiplied the observed to expected outcomes ratio for each hospital by the national average to determine the case-mix adjusted QI [14]. Internal consistency of the resulting outlier classifications was investigated via Venn diagrams, pairwise Kendall's tau correlations, and the Cronbach α statistic.

Patients treated in outlier versus nonoutlier hospitals were compared by 30-d mortality, 90-d mortality (logistic regression), and overall mortality (Cox regression). Mortality models were fitted using generalized estimating equations allowing for within-hospital correlation, both with and without case-mix adjustment.

We calculated a composite quality measure encompassing hospital performance across the five QIs, hereafter referred to as the Renal Cancer Quality Score (RC-QS). Briefly, for each QI a hospital identified as a superior outlier received one point, for being a poor outlier one point was deducted, and for being a nonoutlier zero points were awarded. The final RC-QS was a summation of the points received across each indicator for an individual hospital. We investigated the association of the RC-QS to outcomes using similar regression models as described above, as well as to hospital location, volume, and type of institution. Statistical p-values for the hospital characteristics associations were calculated from chi-square (location) and two-sample Wilcoxon tests (volume and type), comparing hospitals with positive sum scores to hospitals with negative sum scores, omitting zero scores. The statistical analyses were performed in SAS software version 9.3 (SAS Institute, NC, USA) and the R statistical environment version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria).
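To make the benchmarking procedure concrete, the sketch below illustrates the general approach for a single binary QI: expected outcomes from a patient-level logistic model, an observed-to-expected ratio per hospital rescaled to the national average, a z-test of the form Z = (O - E)/S, and the RC-QS composite. This is an illustrative reconstruction rather than the authors' code; the column names (hospital_id, the outcome flags, the case-mix covariates) are hypothetical placeholders, and the binomial standard error used for S is one reasonable choice consistent with treating E as fixed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats


def benchmark_binary_qi(df, outcome, casemix_formula, higher_is_better=True):
    """Indirectly standardized benchmarking of one binary QI across hospitals."""
    # Expected outcome probability per patient from a case-mix-only logistic model
    # (no hospital-level random intercepts).
    fit = smf.logit(f"{outcome} ~ {casemix_formula}", data=df).fit(disp=0)
    df = df.assign(_expected=fit.predict(df))

    # Observed (O) and expected (E) event counts per hospital.
    g = df.groupby("hospital_id").agg(
        O=(outcome, "sum"), E=("_expected", "sum"), n=(outcome, "size")
    )

    # Case-mix adjusted QI: O/E ratio rescaled by the national average rate.
    g["adjusted_qi"] = (g["O"] / g["E"]) * df[outcome].mean()

    # Z = (O - E) / S with E treated as fixed; here S is the binomial standard
    # error of O under the model-based expected event probability.
    p_exp = g["E"] / g["n"]
    S = np.sqrt(g["n"] * p_exp * (1 - p_exp))
    g["z"] = (g["O"] - g["E"]) / S
    g["p"] = 2 * stats.norm.sf(np.abs(g["z"]))

    # Classify outliers at the fixed 0.05 threshold; which sign counts as
    # "superior" depends on whether a higher rate is desirable (PN, MIS) or not (PM, RP).
    sig = g["p"] < 0.05
    better = g["z"] > 0 if higher_is_better else g["z"] < 0
    g["status"] = np.where(sig & better, "superior",
                           np.where(sig & ~better, "poor", "nonoutlier"))
    return g


def renal_cancer_quality_score(outlier_tables):
    """RC-QS: +1 per superior-outlier QI, -1 per poor-outlier QI, 0 otherwise."""
    points = {"superior": 1, "poor": -1, "nonoutlier": 0}
    scores = pd.concat(
        [t["status"].map(points) for t in outlier_tables.values()], axis=1
    )
    return scores.sum(axis=1)  # one integer score per hospital


# Hypothetical usage for two of the five indicators:
# qis = {
#     "PN": benchmark_binary_qi(t1a_df, "partial_nephrectomy", "age + charlson + tumor_size"),
#     "PM": benchmark_binary_qi(pn_df, "positive_margin", "age + charlson + tumor_size",
#                               higher_is_better=False),
# }
# rc_qs = renal_cancer_quality_score(qis)
```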

3. Results

Variations in the quality of RCC surgical care were captured across five QIs: MIS, PN, PM, LOS, and RP. The number of patients analyzed for each QI and their corresponding comorbidity, tumor-, and treatment-related characteristics are summarized in Table 1. Statistically significant interhospital variation was observed for all QIs (p < 0.001), with random effects models indicating between-hospital variance proportions of 31%, 17%, 12%, 15%, and 20% for the MIS, PN, PM, LOS, and RP indicators, respectively. All of the random effects remained significant when adjusting for case-mix (p < 0.001). For each QI, more than 1100 hospitals were benchmarked for quality performance against the national average utilizing our case-mix adjusted QIs.
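For context, the between-hospital variance proportions quoted above correspond to the intraclass correlation from the random intercept logistic models. Under the latent variable method cited in the methods [13], this is commonly computed as shown below; this is the standard textbook form, given here for illustration on the assumption that it matches the authors' calculation.

```latex
\rho = \frac{\sigma_{u}^{2}}{\sigma_{u}^{2} + \pi^{2}/3}
```

where \(\sigma_{u}^{2}\) is the estimated between-hospital (random intercept) variance and \(\pi^{2}/3 \approx 3.29\) is the residual variance of the latent logistic distribution.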

Table 1 – Study cohort characteristics

| Characteristic | MIS, total (%) | PN, total (%) | PM, total (%) | LOS, total (%) | RP, total (%) |
|---|---|---|---|---|---|
| N a | 34 150 | 87 408 | 71 422 | 126 289 | 131 319 |
| Median age, yr (IQR) | 62 (53–70) | 60 (51–69) | 59 (50–68) | 62 (53–71) | 62 (53–71) |
| Sex, male | 20 740 (61) | 51 833 (59) | 43 639 (61) | 78 579 (62) | 81 804 (62) |
| Charlson/Deyo score 0 | 23 357 (68) | 61 512 (70) | 51 153 (72) | 87 665 (69) | 91 613 (70) |
| Charlson/Deyo score 1 | 7834 (23) | 19 487 (22) | 15 748 (22) | 28 657 (23) | 29 520 (22) |
| Charlson/Deyo score 2 | 2959 (9) | 6409 (8) | 4521 (6) | 9967 (8) | 10 186 (8) |
| T stage T1 | 26 050 (76) | 87 408 (T1a: 100) | 71 422 (100) | 69 134 (55) | 71 517 (54) |
| T stage T2 | 8100 (24) | NA | NA | 20 994 (17) | 21 803 (17) |
| T stage T3 | NA | NA | NA | 34 253 (27) | 35 962 (27) |
| T stage T4 | NA | NA | NA | 1908 (1) | 2037 (2) |
| Lymph node status NX | 29 951 (88) | 83 341 (95) | 68 976 (96) | 102 245 (81) | 106 061 (81) |
| Lymph node status N0 | 3878 (11) | 3935 (4) | 2389 (3) | 19 149 (15) | 20 036 (15) |
| Lymph node status N1 | 321 (1) | 132 (1) | 57 (1) | 4895 (4) | 5222 (4) |
| Metastases M0 | NA | NA | NA | 117 121 (93) | 122 240 (92) |
| Metastases M1 | NA | NA | NA | 9168 (7) | 9990 (8) |
| Histology, clear cell | 27 010 (79) | 68 277 (78) | 54 022 (76) | 103 320 (82) | 107 555 (82) |
| Histology, papillary | 4548 (13) | 13 419 (15) | 12 326 (17) | 13 104 (10) | 13 549 (10) |
| Histology, chromophobe | 2195 (6) | 4618 (5) | 4214 (6) | 6467 (5) | 6643 (5) |
| Histology, other | 397 (2) | 1094 (1) | 860 (1) | 3398 (3) | 3572 (3) |
| Median tumor size, cm (IQR) | 5.0 (3.5–7.0) | 2.6 (2.0–3.4) | 2.6 (2.0–3.6) | 5.5 (3.7–8.0) | 5.5 (3.8–8.1) |
| Tumor grade G1/2 | 18 562 (54) | 59 073 (67) | 47 090 (66) | 65 746 (52) | 67 846 (52) |
| Tumor grade G3/4 | 8807 (26) | 14 604 (17) | 12 367 (17) | 43 125 (34) | 44 855 (34) |
| Tumor grade GX | 6855 (20) | 13 731 (16) | 11 965 (17) | 17 418 (14) | 18 618 (14) |
| Median yr of diagnosis (IQR) | 2011 (2010–2012) | 2009 (2007–2012) | 2010 (2007–2012) | 2008 (2006–2011) | 2008 (2006–2011) |
| No. hospitals assessed | 1155 | 1207 | 1131 | 1245 | 1254 |

a Denotes number of patients included in the analysis for each quality indicator.
IQR = interquartile range; LOS = length of stay; MIS = T1-2 tumors receiving a minimally invasive (laparoscopic or robotic) approach for radical nephrectomy; NA = not applicable; PM = positive surgical margin following PN for T1 tumors; PN = T1a tumors undergoing partial nephrectomy.


Fig. 2 – Concordance in identifying outlier hospitals between quality indicators. Venn diagram demonstrating the overlap in the number of outlier hospitals identified by the five quality indicators. LOS = length of stay. MIS = T1-2 tumors receiving a minimally invasive (laparoscopic or robotic) approach for radical nephrectomy.

A total of 27%, 31%, 12%, 27%, and 10% of hospitals were identified as delivering lower than expected care (poor outliers) per the MIS, PN, PM, LOS, and RP indicators, respectively, with funnel plots [16] summarizing the results displayed in Figure 1.

Concordance among the QIs for identifying outlier hospitals (superior and poor) is displayed in Figure 2. While certain individual QIs demonstrated significant concordance (eg, MIS and LOS; Supplementary Table 2), the overall concordance observed collectively was small (Cronbach α = 0.25), with many QIs identifying unique outliers.

To better understand possible structural elements driving quality variations, we evaluated associations between hospital outlier status and hospital volume, facility type (ie, academic status), and geographical location. A Renal Cancer Quality Score (RC-QS), representative of the overall performance of a hospital across the five QIs, was determined to facilitate this analysis; the hospital distribution by RC-QS is displayed in Supplementary Figure 2. Overall, hospitals with a positive RC-QS (ie, superior performance) had higher volume and were more often academically affiliated relative to hospitals with a negative sum score (p < 0.001; Fig. 3). This was confirmed when the QIs were analyzed independently, with superior-outlier hospitals demonstrating higher volume and academic affiliation relative to poor-outlier hospitals (Supplementary Figs. 3 and 4). Minimal variation in quality was observed across geographical locations (Fig. 3).

We then assessed the impact of quality variation on patient outcomes. Odds ratios (OR) and hazard ratios (HR) summarizing associations between the RC-QS and 30-d, 90-d, and overall mortality are displayed in Fig. 4. Overall, a higher RC-QS portended a decrease in 30-d, 90-d, and overall mortality (unadjusted OR [95% confidence interval]: 0.91 [0.88–0.94], OR: 0.93 [0.91–0.96], HR: 0.96 [0.95–0.97] per unit increase, respectively). This association remained after multivariable modeling of patient and tumor case-mix factors, with each point increase in RC-QS portending an 8% and 6% lower odds of 30-d and 90-d mortality, and a 3% lower overall mortality rate, respectively (adjusted OR [95% confidence interval]: 0.92 [0.90–0.95], OR: 0.94 [0.91–0.96], HR: 0.97 [0.96–0.98] per unit increase, respectively). When assessed individually, significant quality-mortality associations were observed for the MIS, PN, RP, and PM indicators (Supplementary Fig. 5).
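As an illustration of the mortality models described in the methods (not the authors' code), the sketch below fits a GEE logistic regression of 30-d mortality on the RC-QS with an exchangeable within-hospital working correlation. All column names are hypothetical placeholders, and only a token set of case-mix covariates is shown.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf


def rcqs_mortality_odds_ratio(df):
    """OR for 30-d mortality per 1-unit increase in RC-QS, allowing for
    within-hospital correlation via generalized estimating equations."""
    model = smf.gee(
        "death_30d ~ rc_qs + age + charlson + tumor_size",  # extend with full case-mix
        groups="hospital_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    res = model.fit()
    return np.exp(res.params["rc_qs"])  # eg, 0.92 corresponds to ~8% lower odds per point
```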

4. Discussion

Healthcare continues to evolve towards increasing provider accountability, with the ultimate goal of maximizing the quality of care delivered. Consequently, significant efforts have been made towards developing hospital-level performance metrics, or QIs. Despite this, a lack of real-world data exists to validate many of the proposed QIs, bringing into question their value in shaping health policy decision-making, resource allocation, and educational initiatives.

Fig. 1 – Benchmarking hospital performance reveals widespread variation in renal cancer surgical quality. Case-mix adjusted performance for individual hospitals (circles, size proportional to hospital volume) benchmarked for quality according to T1-2 tumors receiving a minimally invasive (laparoscopic or robotic) approach for radical nephrectomy, T1a tumors undergoing partial nephrectomy, positive surgical margin following PN for T1 tumors, mean length of stay and readmission proportion for T1-4 tumors undergoing radical nephrectomy. Vertical dashed line represents the average nationwide hospital performance. The y-axis represents the inverse standard error of the case-mix adjusted performance measure. The dashed funnel gives the 95% nonrejection region for the null of equivalence between observed and expected performance. SE = standard error.


Fig. 3 – Structural features associated with hospital quality. Associations between hospital quality, measured by the Renal Cancer Quality Score (RC-QS; sum score), and hospital volume (left), type (middle) and geographical location (right).

Fig. 4 – Impact of hospital quality on patient mortality. Unadjusted and case-mix adjusted associations between hospital quality, measured by the Renal Cancer Quality Score (sum score), and 30-d, 90-d, and overall mortality. Note, values displayed reflect OR (odds ratio) and HR (hazard ratio) per 1 unit change in Renal Cancer Quality Score. CI = confidence interval.

Moreover, without validated QIs it is difficult to ascertain the true degree of quality variation and, importantly, the impact this variation has on patient outcomes. Herein, we developed case-mix adjusted QIs to benchmark RCC surgical quality of care at the hospital level. Utilizing these tools, we reveal the wide variations in care that patients experience, and further demonstrate the negative consequences of this variation for important patient outcomes.

To the best of our knowledge, these results are the first to demonstrate widespread variability in the care that patients receive on a national level when undergoing RCC surgery while adequately adjusting for case-mix variation. While Gore et al [17] previously demonstrated hospital-level variation in quality following radical nephrectomy in the state of Washington, that analysis suffered from case-mix bias because tumor-specific factors were not adjusted for. Notably, our analysis revealed significant variations across all QIs, whilst confirming that hospital-level effects contributed significantly.


For process-based QIs such as PN and MIS, this may reflect that surgeon preference and access to resources, rather than patient or tumor characteristics, drive treatment decisions. Furthermore, a volume-quality relationship may explain the variation observed, as supported by the finding that superior-outlier hospitals were associated with higher surgical volumes for all QIs aside from the LOS indicator. This is consistent with previous reports demonstrating that higher surgical volume and academic affiliation are associated with the uptake of PN [18,19]. Hence, while the underlying mechanisms driving quality variation could not be addressed directly, the aforementioned factors warrant further investigation as causal factors. Such data have important implications for decisions surrounding the centralization of RCC surgery to high-volume or academic centers.

According to the Donabedian model, ideal QIs not only capture variations in care, but also demonstrate construct validity through association with known structural, process, or outcome elements of health care delivery [20]. For all QIs analyzed, construct validity could be demonstrated to varying degrees, with superior performance being associated with greater hospital volume, academic status, or lower mortality (30-d, 90-d, overall). We argue that a systematic, data-driven approach as reported here, which determines the construct validity of putative QIs, should be implemented by policymakers to prioritize QIs for inclusion in benchmarking strategies.

Composite measures encompassing multiple validated QIs will likely be required to comprehensively capture quality-outcome relationships [21]. This is consistent with the minimal concordance observed between the QIs studied here (Fig. 2), highlighting that each captures a unique aspect of RCC quality of care. To address this, we developed a composite measure of RCC surgical quality incorporating all QIs analyzed in this study, the RC-QS. We included all individual QIs because each displayed interhospital discrimination and construct validity through association with a structural or outcome quality element. Importantly, construct validity was also determined for the RC-QS, as a positive score was associated with both structural and outcome measures, including improved patient survival. Notably, further improvements to the RC-QS could be adopted through a similar analysis as more QIs are validated. As such, we envision this work as an initial step requiring ongoing and dynamic improvement towards the creation of a quality-benchmarking program for RCC surgery, with population-level databases such as the NCDB serving as practical vehicles to disseminate the RC-QS.

This study has several limitations. First, NCDB data are retrospective, with certain case-mix factors not captured, including tumor complexity and renal function. These factors may influence QI performance and were not controlled for [18,19,22]. Retrospective data are also subject to reporting bias, which may particularly affect the PM indicator. Further, individual surgeon characteristics, referral patterns, and care networks could not be assessed given their absence from the NCDB.


This is particularly relevant for the PN and MIS indicators, as surgeons may refer patients because of a lack of experience or resources, contributing to a poor quality score. Moreover, while quality-mortality associations were captured, additional important patient outcomes (eg, in-hospital complications, cancer-specific survival) could not be assessed given their absence from the NCDB. Lastly, similar quality assessments utilizing population-level databases outside the USA are required to confirm the external validity of our results.

5. Conclusions

In conclusion, nationwide variations in RCC surgical quality exist at the hospital level. These variations are captured by the RC-QS, a validated RCC-specific composite measure of quality readily determined from the NCDB. Although assessments of quality variation between hospitals can engender controversy, ongoing improvement in healthcare delivery cannot occur without accurate measurement and feedback. These results support the use of the RC-QS as a quality benchmarking tool for RCC surgery that provides audit-level feedback to hospitals and policymakers for quality improvement.

This work was accepted for presentation at the 2017 American Urological Association and Canadian Urological Association Annual Meetings.

Author contributions: Antonio Finelli had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Lawson, Saarela, Abouassaly, Finelli.
Acquisition of data: Lawson, Saarela, Abouassaly, Kim, Finelli.
Analysis and interpretation of data: Lawson, Saarela, Abouassaly, Kim, Breau, Finelli.
Drafting of the manuscript: Lawson, Saarela, Finelli.
Critical revision of the manuscript for important intellectual content: Lawson, Saarela, Abouassaly, Kim, Breau, Finelli.
Statistical analysis: Lawson, Saarela, Abouassaly, Finelli.
Obtaining funding: Abouassaly, Finelli.
Administrative, technical, or material support: None.
Supervision: Saarela, Abouassaly, Finelli.
Other: None.

Financial disclosures: Antonio Finelli certifies that all conflicts of interest, including specific financial interests and relationships and affiliations relevant to the subject matter or materials discussed in the manuscript (eg, employment/affiliation, grants or funding, consultancies, honoraria, stock ownership or options, expert testimony, royalties, or patents filed, received, or pending), are the following: None.

Funding/Support and role of the sponsor: None.

Acknowledgments: This work was supported by funds from the Princess Margaret Cancer Centre Foundation.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.eururo.2017.04.033.


References

[1] Song Z, Colla CH. Specialty-based global payment. JAMA 2016;315:2271.
[2] Khuri SF, Daley J, Henderson W, et al. The Department of Veterans Affairs' NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. Ann Surg 1998;228:491–507.
[3] Spencer BA, Miller DC, Litwin MS, et al. Variations in quality of care for men with early-stage prostate cancer. J Clin Oncol 2008;26:3735–42.
[4] McGory ML, Shekelle PG, Ko CY. Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst 2006;98:1623–33.
[5] Schroeck FR, Kaufman SR, Jacobs BL, et al. Adherence to performance measures and outcomes among men treated for prostate cancer. J Urol 2014;192:743–8.
[6] Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Affairs 2005;24:1150–60.
[7] Birkmeyer JD, Dimick JB, Birkmeyer NJO. Measuring the quality of surgical care: structure, process, or outcomes? J Am Coll Surg 2004;198:626–32.
[8] Wood L, Bjarnason GA, Black PC, et al. Using the Delphi technique to improve clinical outcomes through the development of quality indicators in renal cell carcinoma. J Oncol Pract 2013;9:e262–7.
[9] Henneman D, van Bommel AC, Snijders A, et al. Ranking and rankability of hospital postoperative mortality rates in colorectal cancer surgery. Ann Surg 2014;259:844–9.
[10] Massarweh NN, Hu CY, You YN, et al. Risk-adjusted pathologic margin positivity rate as a quality indicator in rectal cancer surgery. J Clin Oncol 2014;32:2967–74.
[11] Tsai TC, Joynt KE, Orav EJ, et al. Variation in surgical-readmission rates and quality of hospital care. N Engl J Med 2013;369:1134–42.
[12] Russell MC, You YN, Hu CY, et al. A novel risk-adjusted nomogram for rectal cancer surgery outcomes. JAMA Surg 2013;148:769–77.
[13] Merlo J. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Community Health 2006;60:290–7.
[14] Shahian DM, Normand SLT. Comparison of "risk-adjusted" hospital outcomes. Circulation 2008;117:1955–63.
[15] Jones HE, Ohlssen DI, Spiegelhalter DJ. Use of the false discovery rate when comparing multiple health care providers. J Clin Epidemiol 2008;61:232–40.
[16] Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med 2005;24:1185–202.
[17] Gore JL, Wright JL, Daratha KB, et al. Hospital-level variation in the quality of urologic cancer surgery. Cancer 2012;118:987–96.
[18] Kiechle JE, Abouassaly R, Gross CP, et al. Racial disparities in partial nephrectomy persist across hospital types: results from a population-based cohort. Urology 2016;90:69–74.
[19] Kim SP, Shah ND, Weight CJ, et al. Contemporary trends in nephrectomy for renal cell carcinoma in the United States: results from a population based cohort. J Urol 2011;186:1779–85.
[20] Donabedian A. The quality of medical care. Science 1978;200:865–74.
[21] Dimick JB, Staiger DO, Osborne NH, et al. Composite measures for rating hospital quality with major surgery. Health Serv Res 2012;47:1861–79.
[22] Maurice MJ, Zhu H, Kiechle JE, et al. Increasing biopsy utilization for renal cell carcinoma is closely associated with treatment. Urology 2015;86:906–13.
