The Joint Commission Journal on Quality and Patient Safety Methods, Tools, and Strategies
A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data Ahmed Al-Kuwaiti, PhD; Karen Homa, PhD; Thennarasu Maruthamuthu, PhD
In measuring health care performance in the improvement of quality and patient safety, clinicians can review their performance, make adjustments in the existing care process as necessary, monitor progress, and share success with various stakeholders. Performance measures, to which we refer here as performance indicators (PIs), can thus help clinicians and other staff not only to track the progress of improvement initiatives but also to identify and implement best care practices. As Benneyan et al. have stated, the major components of health care performance measures are as follows: (1) identifying the PI, (2) collecting an appropriate and required quantity of data, (3) analyzing and interpreting the data, and (4) driving an action plan for improvement.1 We developed a model that focuses on the third component: the analysis and interpretation of the data using statistical process control (SPC). SPC is a branch of statistics that combines rigorous time series analysis methods with graphical presentation of data to make the data more understandable to lay decision makers.1 The basic theory of SPC was developed in the late 1920s by Walter Shewhart,2 and, starting in the 1950s, W. Edwards Deming expanded its application in the industrialized world.3,4 Our model also incorporates benchmarking, an important tool that health care administrators can use to help engage clinicians in improvement work and, in particular, to help the process owner of a specific care practice understand how his or her performance compares with that of others.
Benchmarking theory is built on performance comparison, gap identification, and changes in the management process.5 In this way, benchmarking can stimulate healthy competition, help clinicians reflect on their own performance, and, at the very least, show that better results are possible.6 Some PIs may not be suitable for comparison with benchmarks, so the essential first step is to verify that the data fall within the statistically accepted limit, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. Thus, if the data from a process are stable, their variation
October 2016
Article-at-a-Glance

Background: A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked.

Methods: The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4, otherwise control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark.

Results: The steps are illustrated through the use of health care–associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia.

Conclusion: Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted an action plan to control variation. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
Volume 42 Number 10
Copyright 2016 The Joint Commission
will be predictable and can be described by standard statistical distributions.7,8 On the other hand, if special-cause variation is observed over the period of time, it would be necessary to control the variation (that is, optimize the process) before proceeding to comparison with the benchmark. Additional detail on control charts is provided in Sidebar 1. In this article, we present the rationale for and illustrate the application of a proposed new model for the analysis of PI data.

Sidebar 1. A Brief Primer on Control Charts

Any measure collected over time can be plotted in a time series chart, but such a chart in and of itself does not provide information about the variation of the process. A control chart has a center line, as well as control-limit lines (upper and lower control limits), which are used to evaluate the type and amount of variation within a process. Thus, the control chart is a tool for distinguishing between common causes of variation (that is, variation due to the system itself) and special causes of variation (variation due to factors external to the system) for a critical-to-quality (CTQ) element.1 Process variation is the range of natural variability in a process, and health care managers employ control charts to monitor the measurements. If the natural variability or the presence of random variation exceeds the limits set by control charts, then the process is not meeting the design specifications.2,3 Thus, control charts are the most important statistical process control (SPC) technique and are valuable for several purposes throughout the process improvement cycle4–7: testing for and establishing a state of statistical control; monitoring an in-control process for changes in process and outcome quality; and identifying, testing, and verifying process improvement opportunities. The use of control charts is increasingly being suggested for a variety of applications in health care in an effort to improve the quality of health care delivery. The components of variability exhibited by health care data make them suitable for control charting techniques.8–10 Several common types of control charts exist, each appropriate for a different type of process and each constructed using different formulas11–16:
• Either an np- or a p-chart should be used for discrete binomial data.
• Either a c- or a u-chart should be used for count data generated by a Poisson distribution.
• An Xbar chart and an S-chart should be used, always together, for normally distributed continuous data.

Sidebar 1 References
1. Levine DM. Statistics for Six Sigma Green Belts with Minitab and JMP. Upper Saddle River, NJ: Pearson Education, 2006.
2. The Quality Web. Lesson #9—Tool #6—Xbar & R Control Charts. Accessed Sep 12, 2016. http://thequalityweb.com/control.htm.
3. Moen RD, Nolan TW. Process improvement. Quality Progress. 1987;20(9):62–68.
4. Shewhart WA. Economic Control of Quality of Manufactured Product. New York City: D. Van Nostrand Company, 1931.
5. Deming WE. Quality, Productivity, and Competitive Position. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1982.
6. Montgomery DC. Introduction to Statistical Quality Control, 3rd ed. New York City: Wiley, 1996.
7. Benneyan JC. Statistical quality control methods in infection control and hospital epidemiology. Infect Control Hosp Epidemiol. 1998;19:194–214.
8. Matthes M, et al. Statistical process control for hospitals: Methodology, user education, and challenges. Qual Manag Health Care. 2007;16:205–214.
9. Woodall WH. The use of control charts in health-care and public-health surveillance. Journal of Quality Technology. 2006;38:89–104.
10. Sonesson C, Bock D. A review and discussion of prospective statistical surveillance in public health. J R Stat Soc Ser A. 2003;166:5–21.
11. Duncan AJ. Quality Control and Industrial Statistics, 5th ed. Homewood, IL: Irwin, 1986.
12. Finison LJ, Finison KS, Bliersback CM. The use of control charts to improve healthcare quality. J Healthc Qual. 1993;15(1):9–23.
13. Inhorn SL; Vooijs GP (moderator). Quality assurance and quality control in clinical cytology: Symposium by correspondence. In Wied GL, et al., editors: Compendium on Quality Assurance, Proficiency Testing and Workload Limitations in Clinical Cytology. Chicago: Tutorials of Cytology, 1995, 319–389.
14. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA. 1989 Nov 4;262:2869–2873.
15. Mango LJ; Vooijs GP (moderator). Quality assurance and quality control in clinical cytology: Symposium by correspondence. In Wied GL, et al., editors: Compendium on Quality Assurance, Proficiency Testing and Workload Limitations in Clinical Cytology. Chicago: Tutorials of Cytology, 1995, 319–389.
16. Walton M. The Deming Management Method. New York City: Dodd, Mead, 1986.

The Define, Measure, Control, Internal Threshold, and Benchmark (DMCIB) Model

The Define, Measure, Analyze, Improve, Control (DMAIC) model6 is applicable to the entire process of quality improvement. Our proposed DMCIB model includes benchmarking in analyzing PI data.

Steps in the Model
The model consists of the following five steps:
■■ Step 1. Define the process (identify the PI).
■■ Step 2. Monitor and measure the variation over the period of time (at least 12 data points).
■■ Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4. Otherwise, control variation with the help of an action plan.
■■ Step 4. Develop an internal threshold and compare the process with it.
■■ Step 5.1. Compare the process with an internal benchmark (for example, compare with another hospital or cluster of hospitals in the same health system).
■■ Step 5.2. Compare the process with an external benchmark.
We illustrate these steps through the use of data collected from the Infection Control Unit, King Fahd Hospital of the University (KFHU), University of Dammam, Saudi Arabia. Health care–associated infection (HAI) data were collected for 2013 and 2014. These were attribute data, in which a patient either has an acquired infection or does not. We used a c-chart (Sidebar 1), as we report the number of infections per 1,000 days of hospitalization.9
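For readers who wish to reproduce the chart mechanics behind Steps 2 and 3, the textbook c-chart places a center line at the mean count per subgroup, c̄, with control limits at c̄ ± 3√c̄ (the lower limit floored at zero). The following is a minimal sketch in Python using invented monthly infection counts, not the hospital's actual data, and the plotted values in the article's figures are rates per 1,000 patient-days rather than raw counts:

```python
import math

def c_chart_limits(counts):
    """Center line and 3-sigma control limits for a c-chart.

    counts: number of events (e.g., HAIs) observed in each
    equal-sized subgroup (e.g., per month).
    """
    c_bar = sum(counts) / len(counts)      # center line (mean count)
    sigma = math.sqrt(c_bar)               # Poisson: variance equals mean
    ucl = c_bar + 3 * sigma                # upper control limit
    lcl = max(0.0, c_bar - 3 * sigma)      # lower limit floored at zero
    return c_bar, ucl, lcl

# Illustrative monthly HAI counts (invented, not the article's data)
counts = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 3]
c_bar, ucl, lcl = c_chart_limits(counts)

# Points outside the limits would indicate special-cause variation
out_of_control = [c for c in counts if c > ucl or c < lcl]
```

With at least 12 data points (as Step 2 requires), a month falling outside these limits would prompt the Step 3 investigation before any benchmarking.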
Figure 1. Health Care–Associated Infections (HAIs), 2013: Before Action Plan. The process depicted in this figure (center line X̄ = 4.253 HAIs per 1,000 patient-days; UCL = 7.064; LCL = 1.441) was considered to have too wide a range of random variation of HAIs for this hospital, and an action plan was developed to reduce the amount of variation. UCL, upper control limit; LCL, lower control limit.

Figure 2. Health Care–Associated Infections (HAIs): 2013 (Before) and 2014 (After) Action Plan. For the hospital's 2014 data (X̄ = 3.37; UCL = 5.561; LCL = 1.179), the new process average for HAIs was 3.4, almost 1 fewer HAI per 1,000 patient-days than in 2013. In addition, the variation was reduced by 1.2 HAIs per 1,000 patient-days. UCL, upper control limit; LCL, lower control limit.
Step 1. Define
The PI was defined as follows:
Name of the PI: HAI
PI Description: HAI is a localized or systemic condition resulting from an adverse reaction to the presence of an infectious agent(s) or its toxin(s); there must be no evidence that the infection was present or incubating at the time of admission to the health care setting.
Dimension of Performance: Safety
Frequency of Data Collection: Monthly
Rationale for Choice of PI: Patient safety
Goal or Anticipated Outcome: ≤ 5.7/1,000 patient-days
Numerator: HAI
Denominator: Patient-days (each day represents a unit of time in which health care services were provided to patients; thus, 100 patient-days could represent 50 patients, each with a 2-day stay)

Step 2. Monitor Variation
Figure 1 shows HAIs per 1,000 patient-days during 2013, in which the average was 4.3 HAIs per 1,000 patient-days. Because all the points lie between the upper and lower control limits (UCL, LCL) and there is no special cause of variation, the process is under statistical control. The current process could have as many as 7 HAIs per 1,000 patient-days or as few as 1.5 HAIs per 1,000 patient-days, which was deemed too wide a range of random variation, and an action plan was developed to reduce the amount of variation.

Step 3. Control Variation
In an action plan developed to reduce the amount of variation, nurses and paramedics were trained in the proper usage of an infection prevention bundle for central line–associated bloodstream infection (CLABSI) and catheter-associated urinary tract infection (CAUTI). Also, a culture of deep cleaning was instilled, sufficient supplies for isolation precautions (such as alcohol rub) were maintained, and hand hygiene compliance was promoted. Figure 2 shows the 2013 data depicted in Figure 1, along with the 2014 data. The 2014 HAI results were below the historical average of 4.3 HAIs per 1,000 patient-days, representing a significant shift in the data and a special-cause signal of eight or more consecutive points below the historical average. In the 2014 data, displayed with its average (center line) and control limits, the new process average for HAIs was 3.4, almost 1 fewer HAI per 1,000 patient-days compared to 2013. In addition, the random variation was reduced from 5.6 (UCL, 7.06; LCL, 1.44) in 2013 to 4.4 (UCL, 5.56; LCL, 1.17) in 2014, so that the variation was reduced by 1.2 HAIs per 1,000 patient-days.

Step 4. Internal Threshold
As shown in the 2014 data (Figure 2), there was only random variation, so, using the UCL of the current process, an internal threshold could be set at 5.56. An HAI rate of > 5.56, then, should signal to the hospital administration that something new entered the process and made HAI worse, with action to be taken to learn the cause.
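The two checks just described, the shift signal of eight or more consecutive points below the historical center line and the internal-threshold comparison against the UCL, can be sketched as follows. This is an illustrative Python fragment; the monthly rates are invented, not KFHU's actual 2014 values, though the center line (4.253) and threshold (5.56) are taken from the example:

```python
def shift_below_center(rates, center, run_length=8):
    """True if `run_length` or more consecutive points fall below
    the historical center line (a standard Shewhart run rule)."""
    run = 0
    for r in rates:
        run = run + 1 if r < center else 0
        if run >= run_length:
            return True
    return False

def threshold_alerts(rates, internal_threshold):
    """Indices of periods whose rate exceeds the internal
    threshold (here, the UCL of the stable process)."""
    return [i for i, r in enumerate(rates) if r > internal_threshold]

# Invented 2014-style monthly rates (HAIs per 1,000 patient-days)
rates_2014 = [3.9, 3.5, 3.2, 3.8, 3.1, 2.9, 3.4, 3.6, 3.0, 3.3, 3.5, 3.2]
shifted = shift_below_center(rates_2014, center=4.253)       # 2013 average
alerts = threshold_alerts(rates_2014, internal_threshold=5.56)
```

With every invented rate below the 2013 average, the run rule fires (a shift has occurred), and no month breaches the internal threshold, mirroring the article's 2014 result.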
Figure 3. Comparison of Health Care–Associated Infection (HAI) 2014 Data with Internal and External Benchmarks. The hospital's 2014 HAI control chart (X̄ = 3.37; UCL = 5.561; LCL = 1.179) is shown, along with the internal benchmark (4.2) and the external benchmark (5.7). In a stable process, all future points will fall between the control limits, which, in this case, would mean that the hospital would consistently outperform the external benchmark. UCL, upper control limit; LCL, lower control limit.
Step 5. Benchmark Comparison
In the last step, the actual KFHU HAI rate was compared with that of another Saudi Arabian university hospital (internal benchmark), 4.2 per 1,000 patient-days. The 2014 KFHU average, 3.37 (UCL, 5.56), was slightly lower. The international standard (external benchmark) was 5.7 HAIs per 1,000 patient-days, above 5.56, the UCL of KFHU for 2014. According to SPC theory, in a stable process, all future points will fall between the control limits,1 which, in this case, would mean that KFHU would consistently outperform the benchmark. Figure 3 (above) shows the 2014 KFHU HAI control chart, along with the internal and external benchmarks.
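The logic of this step can be stated compactly: if even the UCL of a stable process beats a benchmark, future points should consistently beat it; if only the center line beats it, the process is better on average but not every period. A minimal sketch (the function name and verdict strings are ours; the numeric values are from the example):

```python
def benchmark_verdict(center, ucl, benchmark):
    """Classify a stable process against a benchmark
    (lower is better for infection rates).

    'consistently better' -> even the UCL beats the benchmark,
                             so future in-control points should too.
    'better on average'   -> the center line beats the benchmark,
                             but individual points may not.
    'not better'          -> the center line is at or above it.
    """
    if ucl < benchmark:
        return "consistently better"
    if center < benchmark:
        return "better on average"
    return "not better"

# 2014 KFHU values from the example
center, ucl = 3.37, 5.561
internal = benchmark_verdict(center, ucl, 4.2)   # internal benchmark
external = benchmark_verdict(center, ucl, 5.7)   # external benchmark
```

Applied to the example, the external benchmark (5.7) sits above the UCL, so KFHU outperforms it consistently, while the internal benchmark (4.2) falls between the center line and the UCL, so KFHU beats it only most of the time.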
Discussion
Monitoring variation is an important strategy in understanding and learning about a process. We have proposed the DMCIB model for the analysis of PIs. Execution of the model's steps (Define, Monitor Variation, Control Variation, Establish an Internal Threshold, and Compare with Internal and External Benchmarks) can, we suggest, help to expedite analysis of performance data. In the provided example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted an action plan to control variation. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. Because the 2014 data showed only random variation, the UCL can be considered an internal threshold for HAI. That is, KFHU should be notified if the HAI rate goes above this level, as this would mean that something new entered the system to make the process worse. Benchmarks, both internal and external, were obtained, which showed that KFHU's result would be consistently better than the external benchmark, and better than the internal benchmark most of the time.

The model is subject to limitations: (1) It is not applicable to the entire process of performance measures, such as the collection of reliable and valid data; (2) it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations; (3) it focuses only on the "Analyze" part of the DMAIC model; and (4) its use requires the SPC technique. Nonetheless, the model can help managers, clinicians, quality and safety coordinators, and researchers to use data and statistical thinking to make appropriate decisions and comparisons.

Ahmed Al-Kuwaiti, PhD, is Associate Professor and Supervisor General, Deanship of Quality and Academic Accreditation, University of Dammam, Saudi Arabia. Karen Homa, PhD, formerly Improvement Specialist, Leadership Preventive Medicine Residency Program, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, is Healthcare Consultant, Orford, New Hampshire, USA. Thennarasu Maruthamuthu, PhD, is Statistician, Deanship of Quality and Academic Accreditation, University of Dammam. Please address correspondence to Ahmed Al-Kuwaiti,
[email protected].
References
1. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458–464.
2. Shewhart WA. Economic Control of Quality of Manufactured Product. New York City: D. Van Nostrand Company, 1931.
3. Deming WE. Quality, Productivity, and Competitive Position. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1982.
4. Best M, Neuhauser D. Walter A Shewhart, 1924, and the Hawthorne factory. Qual Saf Health Care. 2006;15:142–143.
5. Kay JFL. Health care benchmarking. Medical Bulletin. 2007;12(2):22–27. Accessed Sep 12, 2016. http://www.fmshk.org/database/articles/06mbdrflkay.pdf.
6. Agency for Healthcare Research and Quality. Practice Facilitation Handbook: Module 7. Measuring and Benchmarking Clinical Performance. May 2013. Accessed Sep 12, 2016. http://www.ahrq.gov/professionals/prevention-chronic-care/improve/system/pfhandbook/mod7.html.
7. Levine DM. Statistics for Six Sigma Green Belts with Minitab and JMP. Upper Saddle River, NJ: Pearson Education, 2006.
8. The Quality Web. Lesson #9—Tool #6—Xbar & R Control Charts. Accessed Sep 12, 2016. http://thequalityweb.com/control.htm.
9. Ozcan YA. Quantitative Methods in Healthcare Management: Techniques and Applications, 2nd ed. San Francisco: Jossey-Bass, 2009.