A Direct Comparison of the Clinical Practice Patterns of Advanced Practice Providers and Doctors


David Johnson, PA-C,d Othman Ouenes, BS,a Douglas Letson, MD,d Enrico de Belen, MD,a Tim Kubal, MD, MBA,d Catherine Czarnecki, BS,e Larry Weems, MD,f Brent Box, MD,g David Paculdo, MPH,a John Peabody, MD, PhD a,b,c,*

a QURE Healthcare, San Francisco, Calif; b University of California, Los Angeles; c Institute for Global Health Sciences, University of California, San Francisco; d Moffitt Cancer Center, Tampa, Fla; e Aetna, Hartford, Conn; f Novant Health, Winston-Salem, NC; g AdventHealth, Altamonte Springs, Fla.

ABSTRACT

BACKGROUND: Rising health care costs, physician shortages, and an aging patient population have increased the demand for and utilization of advanced practice providers (APPs). Despite their expanding role in care delivery, little research has evaluated the care delivered by APPs compared with physicians.
METHODS: We used clinical patient simulations to measure and compare the clinical care offered by APPs and physicians, collecting data from 4 distinct health care systems/hospitals in the United States between 2013 and 2017. Specialties ranged from primary care to hospital medicine and oncology. Primary study outcomes were to (a) measure any differences in practice patterns between APPs and physicians and (b) determine whether the use of serial measurement and feedback could mitigate any such differences.
RESULTS: At baseline, we found no major differences in the overall performance of APPs compared with physicians (P = .337). APPs performed 3.2% better in history taking (P = .013) and made 10.5% fewer unnecessary referrals (P = .025), whereas physicians ordered 17.6% fewer low-value tests per case (P = .042). Regardless of specialty or site, after 4 rounds of serial measurement and provider-specific feedback, APPs and physicians had similar increases in average overall scores: 7.4% and 7.6%, respectively (P < .001 for both). Not only did both groups improve, but practice differences between the groups disappeared, leading to a 9.1% decrease in overall practice variation.
CONCLUSIONS: We found only modest differences in the quality of care provided by APPs and physicians. Importantly, both groups improved their performance with serial measurement and feedback so that after 4 rounds, the original differences were mitigated entirely and overall variation was significantly reduced. Our data suggest that APPs can provide high-quality care in multiple clinical settings.
© 2019 Elsevier Inc. All rights reserved. • The American Journal of Medicine (2019) xxx:xxx-xxx
KEYWORDS: Advanced practice providers; Care standardization; Clinical practice; Hospital medicine; Oncology; Primary care; Quality of care

Funding: None. Conflict of Interest: The authors declare no conflict of interest. Authorship: All authors had access to the data and a role in writing the manuscript. Requests for reprints should be addressed to: John Peabody, MD, PhD, QURE Healthcare, 450 Pacific, Suite 200, San Francisco, CA 94133. E-mail address: [email protected]

INTRODUCTION

By 2030, the projected shortfall of practicing physicians in the United States is expected to be 42,600 to 121,300.1 In oncology, this gap, exacerbated by the increasing prevalence of cancer and an aging demographic, is projected to reach a shortfall of 1521 oncologists and 737 radiation oncologists.2 In primary care, the projected deficiency is more substantial: between 14,800 and 49,300 physicians by 2030.1 One strategy to fill these gaps is to add advanced practice providers (APPs) to the workforce.3 APPs, that is, nurse practitioners and physician assistants (PAs), are qualified to provide care and treatment working under the supervision of a


doctor, with the number of APPs in the workforce estimated to grow from 294,000 to over 400,000 by 2024.4,5 The substitution of APPs for medical doctors, however, depends upon understanding the quality of care they provide and the roles they can fill.6 A recent report revealed that 41% of surveyed physicians across primary and specialty care thought that the use of APPs in their practice had a negative impact on their ability to care for patients.7 Other studies have shown that APPs can be as effective as physicians and even exceed physician performance, providing lower cost of care with fewer complications and a shorter length of stay.8-11 In studies of primary care, APPs were found to have a salubrious effect on patients' health status, health utilization, and satisfaction.12,13

Comparative studies, however, have been lacking and are limited because they do not directly compare the quality of care delivered by physicians and APPs. Challenges stem from heterogeneity of the patient population, lack of rigorous study designs, and inconsistency in the measures at different sites.14-16 Questions remain about APP care quality and whether they can or should function in place of physicians.

The objective of this multisite, multispecialty study was to determine whether APPs provided the same quality care as physicians in a variety of settings, ranging from outpatient to inpatient and primary to specialty care. To make these direct comparisons, these sites utilized a standardized measurement tool, the Clinical Performance and Value (CPV) vignettes.

CLINICAL SIGNIFICANCE
• Overall, using online simulated cases, the baseline capabilities of physicians and advanced practice providers (APPs) were similar, with small but important differences in low-value tests ordered (APPs ordered more) and unneeded referrals (physicians ordered more).
• After 4 rounds of measurement and feedback, both groups improved, standardizing the quality of care provided regardless of provider type.
• Despite differences in governing practice, APPs are capable of providing care similar to physicians.

METHODS

Study Sites
Between 2013 and 2017, we compared the clinical care of APPs with medically licensed providers at 4 distinct health care systems/hospitals in the United States, as part of other studies designed to measure and reduce clinical practice variation.

The motivations for reducing care variation differed by site. A large primary care accountable care organization in the Northeast wanted to reduce variation to meet risk-based performance targets, while a National Cancer Institute-designated Comprehensive Cancer Center in the Southeast had developed oncology pathways and wanted to increase adherence to those pathways. A large regional, mid-Atlantic health care system wanted their hospitalists to go at financial risk for pneumonia and heart failure admissions. A large health care system across the Southeast and Midwest was challenged to standardize their hospitalist practice in 5 different states.

Clinical Performance and Value Vignettes
Each site worked with QURE Healthcare using CPV vignettes to develop a set of 12 CPV cases and use these cases to serially measure clinical practice and provide individual feedback to reduce unnecessary clinical practice variation. CPVs are online simulated patient vignettes in which every provider cares for the same patients. CPVs have been designed and validated to replicate the domains (eg, history taking, physical examination, diagnostic work-up, making a diagnosis, and ordering a treatment plan) of a typical clinical encounter17,18 (see the Supplementary Material, available online, for details). Other studies confirm that decisions made in the CPV environment are consistent with actual clinical decisions and that improvement in CPV care translates into improvements in the care of real patients.19-22

Study Overview
Over the course of 4 rounds in each project, providers evaluated 2 of the 12 simulated patients online every 4 months. No provider saw the same vignette more than once, and patients were randomly assigned. When caring for the CPV patients, providers responded to open-ended questions as they went through a typical patient visit. Individual, personalized feedback highlighted areas of need for each provider after each completed case.

Participants
Within each site, we followed a participant cohort of physician and APP clinicians. Every provider in this analysis completed a minimum of 3, and typically 4, rounds of cases. In total, there were 69 APPs (between 10% and 31% of the providers in each of the 4 sites) and 288 physicians who met this criterion across the 4 sites.

Ethics
The data gathered from these studies contained no patient information and were obtained as part of standard monitoring for clinical quality and safety. Per the Office of Research Integrity of the US Department of Health and Human Services, under US Code of Federal Regulation 45 CFR 46, the study is exempt from Institutional Review Board review.


Table 1  Baseline Characteristics Across All 4 Sites Combined

                                     APP              Physician        Total            P Value
n                                    69               288              357              –
Male, %                              16%              68%              58%              <.001
Age                                  42.5 ± 10.8      44.6 ± 9.6       44.2 ± 10.8      .058
Years of experience                  9.1 ± 8.1        12.4 ± 9.7       11.7 ± 9.5       .005
Patients seen per week               45.8 ± 37.9      83.8 ± 37.9      77.2 ± 38.5      <.001
Time teaching, %                     13.5% ± 19.4%    10.1% ± 15.1%    10.7% ± 16.0%    .112
Facility
  Variability of care                                                                   .283
    None                             3.8%             0.8%             1.4%
    Low                              22.6%            23.4%            23.2%
    Moderate                         43.4%            52.5%            50.8%
    High                             20.8%            14.3%            15.5%
    Very high                        9.4%             9.0%             9.1%
  Focus on QI and cost initiatives                                                      .301
    Poor                             0.0%             1.6%             1.3%
    Fair                             9.6%             6.1%             6.7%
    Average                          13.5%            19.0%            18.1%
    Good                             36.5%            44.9%            43.5%
    Excellent                        40.4%            28.3%            30.4%

APP = advanced practice provider; QI = quality improvement.

Analysis
We analyzed the distributions of CPV vignette scores and compared the performance of APPs and physicians over time, examining adherence overall and in each clinical domain. For continuous outcome comparisons (eg, domain scores), we used Student's t test and linear regressions. For binary outcome comparisons, we used Fisher's exact test and logistic regression. All analyses were performed using Stata 14.2 (StataCorp LLC, College Station, Texas).
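As an illustrative sketch only (the article states the analyses were run in Stata 14.2; this is not the authors' code), the two kinds of comparisons can be expressed in Python with SciPy. The variable names and input numbers below are hypothetical placeholders patterned on the reported group sizes and rates.

```python
# Illustrative sketch of the comparison strategy described above (not the
# authors' Stata code). Continuous outcomes (eg, domain scores) are compared
# with Student's t test; binary outcomes with Fisher's exact test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical CPV overall scores (%) for 69 APPs and 288 physicians.
app_scores = rng.normal(60.3, 10.8, size=69)
md_scores = rng.normal(60.2, 10.4, size=288)
t_stat, p_cont = stats.ttest_ind(app_scores, md_scores)
print(f"Overall score difference: t = {t_stat:.2f}, P = {p_cont:.3f}")

# Hypothetical 2x2 table for a binary outcome (unnecessary referral made vs not),
# with counts approximating the reported 22.8% (APP) and 33.3% (physician) rates.
referral_table = [[16, 53],    # APPs: referral, no referral
                  [96, 192]]   # physicians: referral, no referral
odds_ratio, p_bin = stats.fisher_exact(referral_table)
print(f"Unnecessary referrals: odds ratio = {odds_ratio:.2f}, P = {p_bin:.3f}")
```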

RESULTS

Overall, there were 357 providers (69 APPs and 288 MDs) across 4 sites caring for primary care, hospital, and oncology CPV patients. APPs were more likely to be women (P < .001), were 2.1 years younger (P = .058), and had an average of 3.3 years less experience (P = .005) compared with physicians (Table 1). Physicians reported seeing 1.8 times more patients per week (83.8 vs 45.8, P < .001) than APPs. In half the sites, APPs spent significantly more time teaching than their physician counterparts (P < .001); there was no difference in the other 2 sites or overall. When asked about perceptions of care variation and the focus on quality, 24.6% of providers felt variation was high or very high, whereas 73.9% thought that their facility's focus on quality improvement and cost was good or excellent (P = .301).

Figure. Clinical Performance and Value performance at baseline for advanced practice providers (APPs) and physicians (with means and standard deviations).


At baseline, we found no significant difference in overall CPV performance (P = .898; Figure) or by domain between APPs and physicians, except that APPs performed 3.2% better in history taking (P = .013) and made 10.5% fewer unnecessary referrals (22.8% vs 33.3%, P = .025), whereas physicians ordered significantly fewer low-value tests (1.3 vs 1.5, P = .042), defined as diagnostic tests that would not change the patient's treatment.

By subgroup across sites, we found no significant difference in overall score for inpatient (APP: 59.9% ± 9.7% vs MD: 59.9% ± 9.8%; P = .986) vs outpatient (APP: 60.9% ± 12.4% vs MD: 61.0% ± 13.0%; P = .987) or primary care (APP: 60.3% ± 10.8% vs MD: 59.4% ± 10.4%; P = .459) vs specialty care (APP: 60.5% ± 11.5% vs MD: 64.9% ± 11.5%; P = .119). This similarity in baseline scores between APPs and physicians extended across all domains and the ordering of low-value work-up for these subgroups (P > .05 for all), demonstrating similar expertise in clinical practice.

After 4 rounds of measurement and feedback, both APPs and physicians had increased their average overall quality scores, with APPs improving by 7.4% and physicians by 7.6% (Table 2). This difference in improvement was nonsignificant (P = .863). By care group, APPs and physicians improved their overall CPV scores at roughly the same rate (within 1%) whether it was inpatient, outpatient, or specialty care (P > .05 for all). Only in primary care did we see a clinically, but not statistically, significant difference, where APPs improved 5.9% more than physicians (P = .218). Both APPs and physicians improved statistically and clinically in all clinical areas (P < .01). Notably, after 4 rounds of serial measurement and feedback, physician scores improved at the same rate as APP scores in History Taking (Table 3). Similarly, low-value testing by APPs decreased so that, by the end, both groups ordered the same number of low-value tests (1.2 per case; P = .571).

In the Diagnosis and Treatment domains, where physicians might be expected to perform better, APPs performed the same as physicians at baseline (Figure).

Table 2  Change in Performance from Baseline to Round 4

                        APP                                Physician
                        Baseline   Final    P Value       Baseline   Final    P Value
Overall CPV scores      60.3       67.7     <.001         60.2       67.8     <.001
Domain scores
  History               71.1       78.8     <.001         67.9       77.1     <.001
  Physical              79.9       88.6     <.001         81.3       88.1     <.001
  Work-up               69.2       69.5     .454          70.7       70.5     .539
  Diagnosis             54.8       67.1     <.001         53.6       66.8     <.001
  Treatment             47.9       55.1     .001          48.0       57.1     <.001
Unnecessary work-up     1.5        1.2      .085          1.3        1.2      .090

APP = advanced practice provider; CPV = Clinical Performance and Value.

By the end, APPs had improved by 12.3% in Diagnosis and 7.2% in Treatment. In comparison, physicians improved 13.2% in Diagnosis and 9.1% in Treatment, although the differences in improvement between the 2 groups were not statistically significant (P > .05 for both). In Diagnosis, APPs improved their ability to make the correct primary diagnosis by 14%, nearly matching physicians (75.9% vs 76.2%, P = .759).

Further examination of specific details of care between APPs and physicians (Table 3) showed that although the 2 groups changed practice at different rates, the difference-in-difference changes were not significant. For example, in oncology treatment, physicians ordered chemotherapy (adjuvant or neoadjuvant) at a nonsignificantly higher rate at baseline (77.4% vs 76.5%, P > .05). By study end, APPs had improved their chemotherapy ordering by 15%, compared with a 6% improvement for physicians, a difference that did not reach statistical significance (P = .417). With respect to admission-level decisions in the hospital (inpatient only), neither group improved from baseline to end (P = .928). We found one important improvement among both provider groups over time: physicians made unnecessary referrals 33.3% of the time at baseline, 10.5% more frequently than APPs (P = .025), but by the end, physicians had decreased their unnecessary referrals to 7.4% (P < .001), nearly the same rate as APPs (5.0%; P = .460).

We compared Diagnosis and Treatment capabilities between primary care and specialty care and between inpatient and outpatient care over time, while controlling for basic provider characteristics (provider sex and number of years of experience). Primary care providers performed significantly better than specialists (+9.15%; P < .001), and inpatient providers (hospitalists) performed slightly better than outpatient providers (+2.11%; P = .087) (Table 4). Of interest, providers with more years of experience scored 0.08% worse per year of experience (P = .103), suggesting some degradation in adherence to the evidence base occurred over time. In this same multivariate regression analysis, we found no significant difference in physician vs APP performance either at baseline (+0.29%; P = .859) or in improvement after 4 rounds of measurement (+1.61%; P = .463).

Every participating site wanted, as an objective, to decrease practice variation. To measure this, we normalized the baseline data for each study and performed Levene's test to determine whether care had standardized across the 4 studies over time. By study end, there was a 9.1% relative reduction in the standard deviation (P = .014). By physician and APP, we found similar reductions for both groups, with physicians reducing their variation by 8.3% (P = .037) and APPs reducing theirs by 11.5% (P = .083). By site, we saw a significant decrease in variation in the primary care site (−30.4%; P = .001) and one hospitalist site (−16.2%; P < .001).
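For readers who want to see the shape of the model summarized in Table 4, below is a minimal sketch of a difference-in-differences regression of the combined Diagnosis and Treatment score on provider and round characteristics. It is not the authors' Stata code; the column names and synthetic data are assumptions for illustration only.

```python
# Minimal sketch of a difference-in-differences model of the kind summarized in
# Table 4 (not the authors' Stata code). Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 714  # eg, one baseline and one round-4 observation for each of 357 providers

df = pd.DataFrame({
    "dx_tx_score": rng.normal(55, 12, n),    # combined Diagnosis + Treatment score (%)
    "male": rng.integers(0, 2, n),
    "years_exp": rng.integers(1, 35, n),
    "inpatient": rng.integers(0, 2, n),
    "primary_care": rng.integers(0, 2, n),
    "physician": rng.integers(0, 2, n),       # 1 = physician, 0 = APP
    "round_final": np.tile([0, 1], n // 2),   # 0 = baseline, 1 = round 4
})

# The physician:round_final interaction is the difference-in-differences term:
# does the baseline-to-final change differ between physicians and APPs?
model = smf.ols(
    "dx_tx_score ~ male + years_exp + inpatient + primary_care"
    " + round_final + physician + physician:round_final",
    data=df,
).fit()
print(model.summary())
```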
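The variation analysis can be sketched the same way: Levene's test compares the spread of scores at baseline and at round 4. This is a simplified stand-in for the authors' procedure (they normalized baseline data across the 4 studies before testing); the data below are synthetic.

```python
# Simplified sketch of the variation check: Levene's test compares the spread of
# scores at baseline and at round 4. The authors normalized baseline data across
# the 4 studies before testing; these synthetic data skip that step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline_scores = rng.normal(60, 11.0, 357)  # hypothetical baseline CPV scores (%)
final_scores = rng.normal(68, 10.0, 357)     # hypothetical round-4 scores, tighter spread

stat, p_value = stats.levene(baseline_scores, final_scores)
sd_change = (final_scores.std(ddof=1) / baseline_scores.std(ddof=1) - 1) * 100
print(f"Relative change in SD: {sd_change:.1f}% (Levene P = {p_value:.3f})")
```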

Table 3  Specific Areas of Performance from Baseline to Round 4

                                                        APP                            Physician
                                                        Baseline  Final   % Change     Baseline  Final   % Change    P Value
History (%)
  Current complaint                                     90.8      89.0    −2%          86.3      89.5    +3%         .550
  Medical history                                       96.9      100.0   +3%          98.9      99.0    0%          .620
  Social history                                        74.5      82.6    +6%          71.4      82.4    +10%        .814
  Family history                                        92.5      97.8    +5%          91.5      95.3    +4%         .509
Primary diagnosis (%)                                   61.9      75.9    +14%         71.4      76.2    +5%         .759
Treatment (%)
  Ordering chemotherapy (site 1 only)                   76.5      88.2    +15%         77.4      82.4    +6%         .417
  Ordering preferred Abx regimen (sites 3 and 4 only)   43.5      30.6    −13%         44.5      41.7    −3%         .433
  Correct admissions level (sites 3 and 4 only)         75.0      70.0    −5%          81.9      77.1    −5%         .928
Unwarranted referrals                                   22.8      5.0     −51%         33.3      7.4     −31%        .460

Table 4  Multivariate Regression Model, Combined Diagnosis and Treatment

Variable                                Coefficient   95% CI             P Value
Male                                    −1.60         −3.53 to 0.34      .106
Years of experience                     −0.08         −0.18 to 0.02      .103
Inpatient providers                     2.11          −0.31 to 4.53      .087
Primary care providers                  9.15          5.86 to 12.45      <.001
Round                                   7.55          3.67 to 11.44      <.001
Physician (baseline difference)         0.29          −2.90 to 3.48      .859
Physician × round (diff-in-diff)        1.61          −2.69 to 5.92      .463
Constant                                47.67         44.29 to 51.05     <.001

CI = confidence interval.

DISCUSSION

The looming shortage of primary care and specialist physicians has led to growing reliance on APPs in the United States and elsewhere.23,24 Clinicians and patients alike have expressed concerns that there may be differences in the quality of care provided by these 2 distinct professional groups. Previous direct comparisons of care between APPs and medically licensed physicians have been hampered by differences in case mix and the availability of comparable data. We performed a direct comparison using a standardized measurement tool, CPVs, to measure quality of care between APPs and licensed physicians who cared for the same simulated patients within each site. The validation of the CPVs against actor patients and their linkage to better outcomes provide assurance that we were measuring actual practice.17,19,25

Among these 4 disparate sites, we found that APPs generally provided the same quality of care as their physician counterparts in 3 specialty areas and in inpatient/outpatient care. At baseline, we found some interesting differences in history taking (APPs were initially better), low-value testing (APPs were worse), and unwarranted referrals (physicians were worse). Even in domains where APP practice has been restricted (diagnosis and treatment), we found, perhaps surprisingly, that APPs performed comparably with physicians at baseline (54.8% vs 53.6% for diagnosis; 47.9% vs 48.0% for treatment) and at follow-up (+12.3% vs +13.2% for diagnosis and +7.2% vs +9.1% for treatment).

An equally important finding is that these disparities disappeared with iterative measurement and feedback, and that this heterogeneous group of providers, working in disparate sites and different specialties, were all able to improve their performance. This result demonstrates that CPVs are a tool that engages providers while being a sensitive enough measure to detect meaningful (or the lack of any meaningful) differences between provider types.

The other takeaway from this study is that, like provider groups everywhere, every site in this study aspired to decrease its overall practice variation. With serial measurement and feedback, there was a substantial decrease in overall variation that penetrated down to the level of both the APPs and the physicians. Serially engaging providers with a validated clinical practice measurement tool, based upon a principle of active provider learning, has been shown to be effective in changing clinical practice in multiple settings using a variety of approaches.20,26 CPV simulations have been extensively validated to reflect actual clinical care and shown to accurately measure actual practice, not just clinician knowledge.17,18 The CPV measurement and feedback method has been applied in a wide variety of care settings to measure and reduce variation, improve quality and outcomes, and lower costs by standardizing practice.27 This report adds to a growing literature on the use of simulations to elevate practice, underscores the importance of detailed clinical measurement, and reaffirms hopes of standardizing practice by engaging all providers.

With the shift toward high-value care, our results indicate that APPs can play an important role in providing high-quality care at lower cost. These results have clear policy implications for integrating APPs into the clinical workforce at the hospital and national level. Current perceptions, potentially dispelled by these findings, are that APPs are not capable of diagnosing and treating. This long-standing assumption is codified by law in many states.

Only 21 states and the District of Columbia currently allow Advanced Practice Registered Nurses to carry out a full practice, allowing them to evaluate patients, make diagnoses, order and interpret diagnostic tests, and initiate treatments. The other states offer either reduced or restricted practice capabilities for APPs.28 PAs also experience variability in practice authority among states, with full prescriptive authority in 42 states, adaptable supervision requirements in 28, and co-sign requirements in 26. For nurse practitioners and PAs caring for the types of cases in these 4 projects across 7 states, these data suggest that such restrictions may not be warranted, particularly if they limit access to care services when physicians are unavailable.

The cost-of-care findings from our study deserve a note. APPs are paid, on average, one-third of what a specialist physician makes, representing more cost-effective care.29,30 If we assume, based upon this study, that complications (and their costs) and facility costs are the same, this represents a significant savings. At baseline, however, we found that the savings in labor costs were offset by APPs ordering more low-value tests, which has been reported by others.31 After measurement and educational feedback, APP practice changed, and APPs ordered no more low-value tests than physicians. With standardization of test ordering, which was achieved here, the cost of care goes down if providers are producing the same care product. We add one caveat: the APPs herein self-reported seeing fewer patients per week than their physician counterparts, suggesting an important offset to the overall savings that will need to be quantified.

There are a number of limitations to our study. Different states have different policies dictating the extent to which APPs are involved in patient care, likely affecting some of the conclusions above. We do not know if the outcomes presented here match patient-level data in these groups, although this link has been reliably reported in other CPV studies.19,20 APPs averaged 9 years of experience, and although we had a sample size of 69, the sample was inadequate to determine if APPs with less experience would demonstrate the same capabilities as more experienced providers. Finally, future studies should determine if our findings are replicable in other sites and geographies, among different sets of providers, and in different disease areas. Beyond this, future research could explore whether improvements resulting from serial measurement and feedback are sustained, a result observed in a 5-year follow-up study of a large National Institutes of Health-funded randomized trial using the same methodology.32

In some circles there may be concerns or bias that APPs are unable to provide the same level of care as physicians. This direct comparative study dispels this in a couple of important ways: for common inpatient and outpatient cases in primary or specialty care, clinical practice was indistinguishable, and the rates of improvement in care were virtually identical. In a 2012 survey, half of patient respondents endorsed the idea of seeing an APP.33 Further work, including real-world outcome studies of APP outpatient and inpatient practice, is needed to confirm that this endorsement by patients is warranted.


References
1. Association of American Medical Colleges. The Complexities of Physician Supply and Demand: Projections from 2016 to 2030. 2018 Update. Washington, DC: IHS Markit, Ltd; 2018.
2. Yang W, Williams JH, Hogan PF, et al. Projected supply of and demand for oncologists and radiation oncologists through 2025: an aging, better-insured population will result in shortage. J Oncol Pract 2014;10(1):39-41.
3. Levy W, Gagnet S, Stewart FM. Appreciating the role of advanced practice providers in oncology. J Natl Compr Cancer Netw 2013;11(5):508-11.
4. Association of American Medical Colleges. Forecasting the Supply of and Demand for Oncologists: A Report to the American Society of Clinical Oncology (ASCO). Washington, DC: Center for Workforce Studies; 2007.
5. National Commission on Certification of Physician Assistants (NCCPA). 2015 Statistical Profile of Certified Physician Assistants. Johns Creek, GA: NCCPA; 2016.
6. Coombs LA, Hunt L, Cataldo J. A scoping review of the nurse practitioner workforce in oncology. Cancer Med 2016;5(8):1908-16.
7. Ryan J, Doty M, Hamel L, Norton M, Abrams M, Brodie M. Primary care providers' views of recent trends in health care delivery and payment: findings from the Commonwealth Fund/Kaiser Family Foundation 2015 National Survey of Primary Care Providers. The Commonwealth Fund. August 5, 2015. Available at: https://www.commonwealthfund.org/publications/issuebriefs/2015/aug/primary-care-providers-views-recent-trends-health-caredelivery. Accessed January 30, 2019.
8. Newhouse RP, Stanik-Hutt J, White KM, et al. Advanced practice nurse outcomes 1990-2008: a systematic review. Nurs Econ 2011;29(5):1-21.
9. Swan M, Ferguson S, Chang A, Larson E, Smaldone A. Quality of primary care by advanced practice nurses: a systematic review. Int J Qual Health Care 2015;27(5):396-404.
10. Martínez-González NA, Tandjung R, Djalali S, et al. Effects of physician-nurse substitution on clinical parameters: a systematic review and meta-analysis. PLoS One 2014;9(2):e89181.
11. Casey M, O'Connor L, Cashin A, et al. An overview of the outcomes and impact of specialist and advanced nursing and midwifery practice, on quality of care, cost and access to services: a narrative review. Nurse Educ Today 2017;56:35-40.
12. Kurtzman ET, Barnow BS. A comparison of nurse practitioners, physician assistants, and primary care physicians' patterns of practice and quality of care in health centers. Med Care 2017;55(6):615-22.
13. Horrocks S, Anderson E, Salisbury C. Systematic review of whether nurse practitioners working in primary care can provide equivalent care to doctors. BMJ 2002;324:819-23.
14. Woo BFY, Lee JXY, Tam WWS. The impact of the advanced practice nursing role on quality of care, clinical outcomes, patient satisfaction, and cost in the emergency and critical care settings: a systematic review. Hum Resour Health 2017;15:63. https://doi.org/10.1186/s12960-0170237-9.
15. Edkins RE, Cairns BA, Hultman CS. A systematic review of advance practice providers in acute care: options for a new model in a burn intensive care unit. Ann Plast Surg 2014;72(3):285-8.
16. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence-based review. Crit Care Med 2008;36(10):2888-97.
17. Peabody JW, Luck J, Glassman P, Jain S, Spell M, Hansen J. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med 2004;141(10):771-80.
18. Peabody J, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA 2000;283(13):1715-22.
19. Weems L, Strong J, Plummer D, et al. A quality collaboration in heart failure and pneumonia inpatient care at Novant Health: standardizing hospitalist practices to improve patient care and system performance. Jt Comm J Qual Patient Saf 2019;45(3):199-206.
20. Burgon TB, Cox-Chapman J, Czarnecki C, et al. Engaging primary care providers to reduce unwanted clinical variation and support ACO cost and quality goals: a unique provider-payer collaboration [e-pub ahead of print]. Popul Health Manag 2018 Oct 17. https://doi.org/10.1089/pop.2018.0111.
21. Colonna S, Sweetenham J, Burgon TB, et al. A better pathway? Building consensus and engaging providers with feedback to improve and standardize cancer care. Clin Breast Cancer 2019;19(2):e376-84.
22. Oravetz P, White CJ, Carmouche D, et al. Standardising practice in cardiology: reducing clinical variation and cost at Ochsner Health System. Open Heart 2019;6(1):e00094.
23. Maier CB, Barnes H, Aiken LH, Busse R. Descriptive, cross-country analysis of the nurse practitioner workforce in six countries: size, growth, physician substitution potential. BMJ Open 2016;6(9):e011901.
24. Maier T, Afentakis A. Forecasting supply and demand in nursing professions: impacts of occupational flexibility and employment structure in Germany. Hum Resour Health 2013;11:24.
25. Peabody JW, Shimkhada R, Quimbo S, Solon O, Javier X, McCulloch C. The impact of performance incentives on child health outcomes: results from a clustered randomized controlled trial in the Philippines. Health Policy Plan 2014;29(5):615-21.
26. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Effect of practice on standardised learning outcomes in simulation-based medical education. Med Educ 2006;40:792-7.
27. Quimbo S, Peabody JW, Javier X, Shimkhada R, Solon O. Pushing on a string: how policy might encourage private doctors to compete with the public sector on the basis of quality. Econ Lett 2011;110(2):101-3.
28. Dillon D, Gary F. Full practice authority for nurse practitioners. Nurs Adm Q 2017;41(1):86-93.
29. Kane L. Medscape Physician Compensation Report, 2018. April 11, 2018. Available at: https://www.medscape.com/slideshow/2018-compensation-overview6009667. Accessed January 14, 2019.
30. Bureau of Labor Statistics. Occupational Outlook Handbook, 2017. Available at: https://www.bls.gov/ooh/healthcare/nurse-anesthetistsnurse-midwives-and-nurse-practitioners.htm. Accessed January 20, 2019.
31. Venning P, Durie A, Roland M, et al. Randomised controlled trial comparing cost effectiveness of general practitioners and nurse practitioners in primary care. BMJ 2000;320(7241):1048-53.
32. Quimbo S, Wagner N, Florentino J, Solon O, Peabody J. Do health reforms to improve quality have long-term effects? Results of a follow-up on a randomized policy experiment in the Philippines. Health Econ 2016;25(2):165-77.
33. Deloitte. Issue Brief: Deloitte 2012 Survey of U.S. Health Care Consumers: The performance of the health care system and health care reform. Available at: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/life-sciences-health-care/us-lshc-2012-survey-ofus-consumers-health-care.pdf. Accessed January 14, 2019.

APPENDIX A. SUPPLEMENTARY DATA

Supplementary data to this article can be found online at https://doi.org/10.1016/j.amjmed.2019.05.004.