HOSPITAL-ACQUIRED INFECTIONS IN THE UNITED STATES*
The Importance of Interhospital Comparisons

Lennox K. Archibald, MD, MRCP, and Robert P. Gaynes, MD

Nosocomial infections are estimated to involve more than 2 million patients annually, and the financial burden in 1992 was estimated at more than 4.5 billion dollars.4 Surveillance and control of nosocomial infections have therefore become a priority within acute care hospitals throughout the United States. Moreover, in the current era of managed care and downsizing of hospitals, there has been an emphasis on improving the quality of medical care while simultaneously controlling costs. Approaches to monitoring the effect of nosocomial infections and the quality of care are strikingly similar to the principles developed by Deming for quality improvement in manufacturing.9 To use infection rates as a basis for measuring quality of care, however, these rates must be meaningful for comparison, either from one hospital to another or within a hospital over time. Published reports have addressed the importance of adjusting for risk factors (e.g., severity of illness) when comparing mortality rates among hospitals.19,26 Similar approaches are necessary for nosocomial infection rates.33 This article reviews the aspects hospital epidemiologists must consider before attempting to make interhospital comparisons of infection rates.

All material in this article, with the exception of borrowed figures, tables, or text, is in the public domain.

From the Hospital Infections Program, Centers for Disease Control and Prevention (LKA, RPG), and Emory University School of Medicine (RPG), Atlanta, Georgia

INFECTIOUS DISEASE CLINICS OF NORTH AMERICA • VOLUME 11 • NUMBER 2 • JUNE 1997


CONCEPT OF COMPARABLE RATES

A comparable rate is a rate that controls for variations in the distribution of major risk factors associated with an event, so that the rate can be meaningfully compared within the hospital or to an external standard. There are two types of risk factors: intrinsic and extrinsic. Intrinsic risk factors include diseases, underlying conditions such as immunosuppression, age and sex, and severity of illness; extrinsic risk factors include various forms of therapy, treatments and procedures (including surgical), and exposure to devices such as intravascular catheters, ventilators, or urinary catheters. Such risk adjustment is elaborated on later in this article.

There are two types of rate comparisons: intrahospital and interhospital. The primary goals of intrahospital comparison are to identify areas within the hospital that may need evaluation and to measure the efficacy of interventional efforts. Quantification of baseline nosocomial infection rates enables hospitals to objectively analyze and follow the trends of their hospital-acquired infection rates over time. Intrahospital comparison has the advantage of better control of observer variation, particularly for nosocomial case-finding, culturing frequency and technique, and case-mix. Unfortunately, sample size can be a major problem in a single hospital, especially when monitoring surgical procedures. This is a major reason for turning to multicenter studies.

Interhospital comparison (or comparison to an external standard) entails comparing rates with those of other hospitals participating in a multicenter surveillance system. As with intrahospital comparison, one of the main purposes of comparing infection rates of one hospital with those of other hospitals is to identify areas (or rates) that need evaluation. The approach is different, however.
Because more than 90% of nosocomial infections do not occur in recognized epidemics, the endemic infection rate may be very consistent, and the variation that signals an outbreak may be absent. Without external comparisons, a hospital may not know whether its endemic rate is high or, at least, where to direct the limited resources of the infection control program.

External comparisons, although very appealing, come at a higher price than intrahospital comparisons. Interhospital comparison implies that a large number of hospitals are collecting data in the same manner. Differences in rates among hospitals are assumed by many to represent differences in health care worker or institutional practices in preventing nosocomial infections. A low nosocomial infection rate may be interpreted as an indication that the hospital's infection control program is effective at preventing nosocomial infections, but a low rate may equally reflect very poor nosocomial infection case-finding. Conversely, an infection rate found to be relatively high compared with that of other hospitals may suggest a potential problem in the hospital; it does not, however, establish that the problem is one of infection control, since case-finding may be overzealous or inaccurate, or the denominator data may be inaccurate. Comparisons should be used only as an initial guide for setting priorities for further investigation.

THE VALUE OF SURVEILLANCE OF NOSOCOMIAL INFECTIONS

Surveillance is defined as the ongoing, systematic collection, analysis, and interpretation of health care data essential to the planning, implementation, and evaluation of public health practice, closely integrated with the timely dissemination of these data to those who need to know.3 The question arises:


why perform surveillance of nosocomial infections? The primary purpose of such surveillance is to reduce nosocomial infections; surveillance is, therefore, a tool for prevention. There is strong scientific evidence that surveillance reduces nosocomial infection rates. From 1974 through 1983, the Centers for Disease Control and Prevention (CDC) carried out the seminal Study on the Efficacy of Nosocomial Infection Control, more commonly known as the SENIC project.23 The SENIC project provided the scientific basis for the assertion that surveillance is an essential element of an infection control program. With 338 hospitals participating and more than one third of a million patient charts examined, one of the key findings of the SENIC project was that hospitals with the lowest nosocomial infection rates had both strong surveillance and strong prevention-control programs. Other published studies provide further evidence that surveillance reduces nosocomial infections. For example, the collection and dissemination of surgeon-specific surgical site infection rates to surgeons has been shown to lower the rate of nosocomial surgical site infections.5,7,35

THE NATIONAL NOSOCOMIAL INFECTIONS SURVEILLANCE (NNIS) SYSTEM

The National Nosocomial Infections Surveillance (NNIS) system, the only source of national data on the epidemiology of nosocomial infections, their associated pathogens, and the antimicrobial susceptibility patterns of these pathogens in the United States, was established in 1970 by the CDC to help create a national database of nosocomial infections and to improve surveillance methods in hospitals.11 The NNIS system has evolved over the years to its present state, in which participating hospitals voluntarily and routinely collect nosocomial infection data on medical and surgical inpatients requiring acute care and report them to the CDC using one of four standardized surveillance components: hospital-wide, adult and pediatric intensive care unit (ICU), high-risk nursery, and surgical inpatient.11 Standardized protocols and uniform definitions of nosocomial infections are used to monitor these groups of patients.16 More than 235 US hospitals participate in the NNIS system, and the identities of all these hospitals remain confidential under section 308 of the United States Public Health Service Act. At least eight "aggregating institutions" collect information for nosocomial infection rates (Table 1).

There are a number of factors to consider when contemplating participation in an aggregated national database for purposes of interhospital comparison. We have developed a series of questions for hospital administrations to pose before participating in such an endeavor. The questions were devised from the scientific principles of epidemiology and 20 years of experience with the analysis of hospital-acquired infection rates from hospitals participating in the NNIS system. We cite examples from the NNIS system for illustration.

Are the Data Collected Across Institutions with Attention to the Principles of Epidemiology?

Epidemiology is defined as the study of factors determining the occurrence of diseases in human populations.13 The definitions of nosocomial infections, along with other data fields, and the populations monitored must be standardized and practical to be useful to the hospital and the aggregating institutions.


Table 1. SELECTED INSTITUTIONS COLLECTING DATA FOR NOSOCOMIAL INFECTION RATES FOR INTERHOSPITAL COMPARISON

Overall nosocomial infection rate and surgical site infection rate by class:
    Maryland Hospital Association*
    Greater Cleveland Health Quality Choice**
    South Carolina Hospital Association
    North Carolina Hospital Association
    Indiana Hospital Association
    Georgia Hospital Association

ICU nosocomial infection rates and surgical site infection rates by risk index:
    Joint Commission on Accreditation of Healthcare Organizations
    CDC/National Nosocomial Infections Surveillance (NNIS) system

*Added option in 1989: overall pneumonia and bloodstream infection rate.
**Does not collect surgical site infection rate.

Definitions of nosocomial infections usually involve clinical and laboratory parameters. If they involve only laboratory parameters, there may be no clinical relevance to the event: one may not know whether the patient really had the disease, because nearly all laboratory tests have false-negatives and false-positives. Alternatively, there may be no single laboratory test for the event. If only clinical parameters are used, however (e.g., a doctor's note or diagnosis), there may be too much subjective variation for the event to be useful to examine across hospitals.

Finding events in hospitals, termed case-finding, can occasionally be straightforward, such as for mortality. Finding nosocomial infections, however, requires considerable training before a health care worker can reliably and accurately determine whether a patient's record indicates a particular infection. Medical record abstractors have consistently performed poorly on case-finding for nosocomial infections when compared with infection control practitioners.32 The lack of sufficient personnel resources makes it nearly impossible to intensively monitor all hospitalized patients. Therefore, each hospital must know what group of patients (e.g., those patients in an intensive care unit) to monitor. Just as important, the length of time that the hospital monitors the group must be defined and standardized.

Experience in the NNIS system has confirmed that targeted surveillance is better than hospital-wide surveillance for three main reasons. First, case-finding is more accurate if targeted in a specific area, for example, a surgical intensive care unit or other specialized unit. Second, in practical terms, targeting a specialized unit is more efficient for the infection control practitioner and for allocation of limited resources. Third, risk adjustment is much more feasible for targeted units.
The intensive care unit, high-risk nursery, and surgical patient components were developed in the NNIS system primarily to address these limitations of the hospital-wide component, so that surveillance efforts could be targeted using standardized surveillance methods and more specific denominators could be acquired. Since 1988, NNIS hospitals have been surveyed approximately every 2 years regarding their number of licensed beds and their number of ICU beds. Analysis of the data shows that between 1988 and 1995 at NNIS hospitals, the mean total number of beds decreased slightly, whereas the mean total number of ICU beds in these hospitals increased significantly.2 These data suggest that at many acute care hospitals, the changing health care environment is resulting in smaller


non-ICU populations and larger ICU populations. The accompanying increase in device use may result in increased nosocomial infection rates. Thus, ICUs remain the paramount areas for targeted surveillance and comparison of nosocomial infection rates.

How Are Numerator and Denominator Data Chosen for the Determination of Rates?

To make effective use of surveillance data, infection rates need to be calculated. A rate is an expression of the probability of occurrence of an event during a certain time interval. The numerator of an infection rate is always the number of infections of a particular type that have occurred within a particular group of patients over a period of time. The group of patients chosen and the choice of the denominator used in calculating the infection rate are what separate comparable rates from those that are not.

The crude overall nosocomial infection rate is the total number of nosocomial infections at all sites (e.g., urinary tract infections, pneumonia, surgical wound infections, bloodstream infections, and others) divided by a measure of the population at risk (e.g., the number of admissions, discharges, or patient-days). Using a crude nosocomial infection rate to characterize a hospital's nosocomial infection problem has been seriously questioned or rejected.15,21,31 Many investigators and organizations, including the Task Force on Infection Control for the Joint Commission on Accreditation of Healthcare Organizations (JCAHO),27 have rejected this rate as a valid indicator of the quality of care. JCAHO's reasons for doing so were stated by Dr. Robert Haley, the task force chair: "A hospital's crude overall nosocomial infection rate was considered to be too time-consuming to collect because of the need to do continuous, comprehensive surveillance, unlikely to be accurate, and thus misleading to interpret, and unusable for interhospital comparison because of the lack of a suitable risk index of infection of all types."20

Before nosocomial infection rates are used for interhospital comparison or as indicators of quality of care, they require risk adjustment. The accuracy of nosocomial infection rates would be enhanced if they were better adjusted with, for example, direct measurement of severity of illness. A crude overall nosocomial infection rate provides no means of adjustment for patients' intrinsic or extrinsic risks. Thus, the CDC has stated that such a rate should not be used for interhospital comparison.33

Are the Populations Monitored Adjusted for Their Level of Risk?

We have previously shown the importance of risk adjustment in the use of device-associated infection rates in ICUs. In these units, there is one dominant risk factor: exposure to invasive devices. Unlike ICU infections, in which one risk factor predominates, for patients who have undergone surgical procedures the risk of surgical site infection (SSI) is related to a number of factors, including the operative procedure performed, the degree of microbiologic contamination of the operative field, the duration of the operation, and the intrinsic risk of the patient.7,10,22,24 Because infection control practices cannot ordinarily alter or eliminate these risks, SSI rates must be adjusted for them before the rates can be used for comparative purposes. SSI rates traditionally have been categorized by operative procedure, surgeon,


and wound class, in an attempt to account for some of these factors. These categories fail, however, to account for variations in patients' intrinsic susceptibility to infection. An SSI risk index that effectively adjusts SSI rates for most operations has been developed by the CDC for the NNIS system.8 This risk index ranges from 0 to 3 and is obtained by scoring each operation for the number of the following risk factors present: (1) a patient with an American Society of Anesthesiologists (ASA) preoperative assessment score of 3, 4, or 5; (2) an operation classified as contaminated or dirty-infected; and (3) an operation lasting over T hours, where T is the approximate 75th percentile of the duration of surgery for the particular operative procedure as reported to the NNIS database.

The NNIS risk index is a better predictor of SSI risk than the traditional wound classification system and performs well across a broad range of operative procedures. The NNIS risk index also predicts varying SSI risks within a wound class, suggesting, for example, that not all clean operations carry the same risk of wound infection. Surgical site infection rates should be stratified by risk category before comparisons are made between institutions or surgeons or across time. Exceptions are spinal fusion, craniotomy, ventricular shunt, and cesarean section operations, for which SSI risk is not predicted by the risk index. Work is currently under way to devise an appropriate risk index scoring system for these operations.
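The scoring just described can be sketched in a few lines of Python. This is an illustrative reimplementation, not CDC software, and the procedure-specific T cut points below are hypothetical placeholders, not values from the NNIS database:

```python
# Illustrative sketch of the NNIS SSI risk index (0-3): one point each
# for ASA score >= 3, a contaminated or dirty-infected wound, and an
# operation lasting longer than T hours, where T approximates the 75th
# percentile of duration for that procedure.  T values are hypothetical.

DURATION_CUTPOINT_HOURS = {   # hypothetical T per procedure
    "herniorrhaphy": 2.0,
    "colon_surgery": 3.0,
}

def nnis_risk_index(asa_score: int,
                    wound_class: str,
                    duration_hours: float,
                    procedure: str) -> int:
    """Return the NNIS risk index (0-3) for one operation."""
    score = 0
    if asa_score >= 3:                                  # ASA 3, 4, or 5
        score += 1
    if wound_class in ("contaminated", "dirty-infected"):
        score += 1
    if duration_hours > DURATION_CUTPOINT_HOURS[procedure]:
        score += 1
    return score

# Example: ASA 4 patient, clean wound, 2.5-hour herniorrhaphy
print(nnis_risk_index(4, "clean", 2.5, "herniorrhaphy"))  # → 2
```

SSI rates would then be calculated and compared separately within each of the four resulting strata (index 0, 1, 2, 3), rather than pooled across all operations.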

Do Analysis and Dissemination of Aggregated Data Occur in a Simple and Timely Fashion?

The analysis, interpretation, and dissemination of data are essential characteristics of a surveillance system. The feedback must also be in a form that personnel in an individual hospital can easily understand.

To include an individual hospital in the aggregated data, the sample size (e.g., the number of patients undergoing an operative procedure or the number of device-days in an ICU) must be sufficient so that the calculated rate for the hospital (or surgeon, unit, and so on) adequately estimates the true rate. This is determined by the size of the denominator of the rate. The number of hospitals (or surgeons, units, and so on) that comprise the aggregated data must also be sufficient to adequately estimate the distribution of the rate. In the NNIS system, we do not report distributions of rates unless at least 20 hospitals report sufficient data to calculate their device-associated infection rates.18,25

The value of the rate, and indeed of the data collection process, must be clear to patient care personnel. If patient care personnel perceive the value of surveillance information, they will rely on the data for decisions and will alter behavior. This requires dissemination of rates, in a simple and routine manner, to those who need to know. The NNIS system achieves this goal by publishing a semiannual report.17 Moreover, surveillance can demonstrate whether or not infection rates are reduced within a hospital. Several hospitals have reported the value within their hospital of NNIS comparative aggregated data as a tool for reducing nosocomial infection rates.1,28,36 The use of risk-adjusted infection rates and feedback of the distributions of these rates to the participating NNIS hospitals have helped refine outcome measures that provide more meaningful rates for interhospital comparison.33
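As a rough illustration of how a device-associated, device-day rate is computed and then placed against a pooled distribution, consider the following sketch. The comparison values are invented for illustration; real comparisons use published NNIS percentile tables:

```python
# Device-associated infection rate: infections per 1000 device-days.
# The pooled distribution below is hypothetical, not NNIS data.

def device_associated_rate(infections: int, device_days: int) -> float:
    """Infections per 1000 device-days."""
    return 1000.0 * infections / device_days

# Example: 6 central line-associated bloodstream infections over
# 1200 central line-days in one ICU
rate = device_associated_rate(6, 1200)
print(round(rate, 1))  # → 5.0

# Hypothetical pooled rates from >= 20 reporting hospitals
pooled_rates = [2.1, 3.4, 4.0, 4.8, 5.6, 6.2, 7.9]
pct_below = sum(r < rate for r in pooled_rates) / len(pooled_rates)
print(f"{pct_below:.0%} of comparison ICUs have a lower rate")
```

Using device-days rather than patient-days or admissions as the denominator is what makes the rate comparable across units, since it controls for differing intensities of device exposure.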


What Approach Exists to Examine the Data for Inaccuracies?

The hospital and the aggregating institution share the responsibility for ensuring that the data are as accurate as possible. The aggregating institution must have an organized, systematic approach to examining the data for inaccuracies. This includes estimates of the sensitivity and specificity of the system, edit checks in software to prevent inaccuracy on data entry, and training of data collectors. For example, in one participating NNIS hospital, 97% of 150 coronary artery bypass graft operations reported an ASA score of 1 (normal, healthy patient); of all other coronary artery bypass graft procedures reported to NNIS, only 4% had an ASA score of 1. Following an investigation, the hospital corrected the inaccuracies in its ASA scores and sent more consistent coronary artery bypass graft data. Examining data for inaccuracies also includes making the estimates of the sensitivity and specificity of the system available to prospective hospitals.

Independent determination of data accuracy, or validation, is an essential activity of any group aggregating data from multiple collectors. At issue is the accuracy of nosocomial infection case-finding, assessed by determining two measures: sensitivity and specificity. Ascertaining the sensitivity and specificity of nosocomial infection case-finding by an independent, trained observer adds to the credibility of the surveillance system, helps determine ways to adjust rates for hospitals that vary in size and case-mix, and offers ways to improve surveillance activities. Sensitivity is the number of events (e.g., patients with nosocomial infections) reported divided by the number of events that actually occurred.30 Specificity is the reported number of patients without nosocomial infections divided by the actual number without nosocomial infections.30 Predictive value positive is the proportion of reported infections that are indeed true infections.
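These three measures can be made concrete with a small worked example, computed here from a hypothetical validation study in which an independent reviewer's determinations are treated as the true infection status:

```python
# Case-finding accuracy measures as defined above, from the four cells
# of a 2x2 table (reported vs. true infection status).  The counts in
# the example are hypothetical.

def case_finding_accuracy(true_pos, false_pos, false_neg, true_neg):
    """Return (sensitivity, specificity, predictive value positive)."""
    sensitivity = true_pos / (true_pos + false_neg)  # reported / actual infections
    specificity = true_neg / (true_neg + false_pos)  # reported / actual non-infections
    ppv = true_pos / (true_pos + false_pos)          # reported infections that are real
    return sensitivity, specificity, ppv

# Hypothetical review of 1000 records: 80 true infections reported,
# 20 infections missed, 5 false reports, 895 true negatives
sens, spec, ppv = case_finding_accuracy(80, 5, 20, 895)
print(f"sensitivity={sens:.2f} specificity={spec:.3f} PPV={ppv:.2f}")
# → sensitivity=0.80 specificity=0.994 PPV=0.94
```

Note that because most patients do not acquire a nosocomial infection, specificity tends to be high even when a meaningful fraction of infections is missed, which is why low sensitivity is the more common problem.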
Low sensitivity (i.e., missed infections) in a surveillance system is usually more common than low specificity (i.e., patients reported to have infections who did not actually have them). An evaluation of the sensitivity, specificity, and predictive value positive of NNIS definitions was recently concluded.12 During 1994 and 1995, the CDC completed a pilot study to determine whether a chart review methodology could be used to evaluate the accuracy of nosocomial infection data reported to the NNIS system. In this study, patient records from NNIS hospitals were reviewed by an independent contractor whose infection control practitioners were trained in the application of NNIS criteria. The numbers of infections reported by NNIS hospitals were compared with the numbers detected by the contractors; discrepancies and CDC confirmation also were analyzed. Contractors detected 77% of reported infections and detected twice as many infections at the four major sites (bloodstream, respiratory, urinary tract, and surgical site) as the hospitals. Estimates of sensitivity, specificity, and predictive value positive for reported infections varied by site; urinary tract infections and other sites were underreported by NNIS hospitals, and pneumonia was overdetected by contractors. The sensitivities for reported bloodstream infections, pneumonia, and surgical site infections were 86%, 69%, and 68%, respectively. The specificities for these three infections ranged from 97.9% to 98.7%, suggesting that NNIS hospitals are not overreporting these infections. Generally, there was misclassification of bloodstream infections as primary versus secondary. Areas for improvement included the definition of nosocomial pneumonia and clarification of the reporting instructions to hospitals on what constitutes a primary versus a secondary


bloodstream infection. In conclusion, NNIS ICU data were found to be sufficiently accurate and reliable to be used for interhospital comparison.

In the NNIS system, we also assess data accuracy by reporting data back to the hospital in an effort to detect errors in data transmission. Training of data collectors is needed as well: in 1995, 87% of NNIS hospitals had a currently employed infection control practitioner who had visited the CDC for at least 2.5 days of training.

Are the Data a Hospital Provides Confidential?

Data with hospital identifiers that can be made available to the media or the courts may encourage inaccurate reporting of events such as nosocomial infections and may invite the misuse of such information. The sensitivity and specificity of infection case-finding in the surveillance system could, therefore, be adversely affected. This makes implementation of an ongoing method to obtain regular estimates of the sensitivity, specificity, and predictive value positive of a surveillance system even more critical.

External comparison also may be linked to the marketing of a particular hospital's services, an endeavor that may compromise the confidentiality of the participating hospital. If rates are used for marketing, there is an incentive to obtain lower rates. This bias raises the obvious question: do the rates reflect the truth? The dilemma remains despite good intentions. This use of external comparison should be strongly discouraged, since too many uncertainties exist in data collected by often hundreds of different data collectors.

Does Collection and Dissemination of Data on Nosocomial Infection Rates Actually Influence a Reduction in the Rates?

NNIS data indicate changes in the entire distribution of nosocomial infection rates following dissemination of comparative rates. Since 1987, when the NNIS system began reporting device-associated, device-day rates to member hospitals, there has been a 7% to 10% annual reduction in mean rates for device-associated infections among ICUs in NNIS hospitals.17 Moreover, there were no increases in any rate for any type of ICU. That dissemination of information does make a difference is demonstrated by previous reports showing statistically significant falls, for example, in the pooled ventilator-associated pneumonia rates among NNIS hospitals following dissemination of a NNIS report on comparative infection rates in 1991.36

LIMITATIONS OF RATES FOR INTERHOSPITAL COMPARISON

Although a hospital’s surveillance system might aggregate accurate data and generate appropriate risk-adjusted nosocomial infection rates for both internal and external comparison, the comparison may be misleading for several reasons. First, the rates may not adjust for patients’ unmeasured intrinsic risks for infection, which vary from hospital to hospital. For example, a hospital with a large proportion of immunocompromised patients would be expected to have a population at higher intrinsic risk for infection than one without such a population of patients. Second, if surveillance techniques are not uniform among hospitals or are used inconsistently over time, variations will occur in sensitivity


and specificity for nosocomial infection case-finding. Third, the sample size (e.g., the number of patients, admissions and discharges, patient-days, or operations) must be sufficient so that the calculated rate adequately estimates the true rate for the hospital. This issue is of particular concern for hospitals with fewer than 100 beds; hospitals participating in the NNIS system have 100 or more beds. No adequate system exists for comparison of rates from these very small hospitals, which represent only about 10% of hospital admissions in the United States. In all of the NNIS analyses, rates from hospitals with small denominators have been excluded,18,25 thus rendering comparison of rates from small hospitals with NNIS rates invalid. On the other hand, a hospital with fewer than 100 beds but with large enough denominators may validly compare its nosocomial infection rates with those of NNIS hospitals if the data were collected and analyzed in a manner similar to that of hospitals within the NNIS system.

FURTHER IMPROVEMENTS TO INTERHOSPITAL RISK COMPARISON

In the NNIS system, the validity of nosocomial infection rates from ICUs, adjusted for extrinsic risk factors, would be enhanced if the rates were better adjusted with a direct measurement of patients' severity of illness. Otherwise, hospitals providing care for patients with a greater severity of illness may have higher rates of nosocomial infections. The properties of a severity of illness score should include specificity for a particular nosocomial infection and site of infection.

Recently, researchers in the Hospital Infections Program at the CDC searched the medical literature from 1991 through 1996 to identify a severity of illness scoring system (SISS) that would be useful for further adjusting ICU nosocomial infection rates.29 Factors assessed in existing scoring systems included objectivity, simplicity, discriminating power, and wide availability. Eleven studies reported use of an SISS; four correlated SISS with all sites of nosocomial infection but did not meet with success, and six showed some predictive value of SISS for nosocomial pneumonia. The Acute Physiology and Chronic Health Evaluation (APACHE II) score was the most commonly used SISS but was performed inconsistently and may not be available in many ICUs. Thus, although existing scores predict mortality and resource use, none is presently available for the prediction of nosocomial infection rates. Such a severity of illness score therefore needs to be developed to adjust nosocomial infection rates. Until such measures are available, comparative nosocomial infection rates will be limited in their use as definitive indicators of the quality of care.

At the present time, many institutions around the country aggregate surveillance data on nosocomial infection rates for interhospital comparison. The question often arises when deciding whether to participate in a surveillance system: are some data better than none? The answer is: it depends.
The comparison of nosocomial infection rates is complex, and the value of the aggregated data must be balanced against the burden of their collection. Hospital administrators and medical personnel should give careful thought and consideration before committing their hospital to participate in one of the various multihospital surveillance systems. The decision to participate should be discussed at all levels within the hospital, from the hospital administrator to the persons involved in collecting the data. If a hospital does not devote sufficient resources to data collection, the data will be of limited value because they will be replete with inaccuracies. No national database has successfully dealt with all the problems in collecting data on nosocomial infections, and each varies in its ability to address these problems.


Hospitals must be aware of these biases when assessing their participation. Although comparative data can be useful as a tool for the prevention of nosocomial infections, bad data may not be better than none.

References

1. Adams A, Mullaney K, McLaughlin P, et al: Using the National Nosocomial Infections Surveillance (NNIS) System to achieve quality improvement [abstract M25]. In Proceedings of the 2nd Annual Meeting of the Society for Hospital Epidemiology of America, Baltimore, 1992, p 68
2. Archibald LK, Phillips L, Monnet D, et al: Antimicrobial resistance in inpatients and outpatients in the United States: The increasing importance of the intensive care unit. Clin Infect Dis 24:211-215, 1997
3. Centers for Disease Control and Prevention: CDC Surveillance Update. Atlanta, Centers for Disease Control and Prevention, 1988
4. Centers for Disease Control and Prevention: Public health focus: Surveillance, prevention and control of nosocomial infections. MMWR Morb Mortal Wkly Rep 41:783-787, 1992
5. Condon RE, Schulte WJ, Malangoni MA, et al: Effectiveness of a surgical wound surveillance program. Arch Surg 118:303-307, 1983
6. Cruse P: Wound infection surveillance. Rev Infect Dis 3:734-737, 1981
7. Cruse PJ, Foord R: The epidemiology of wound infection: A 10-year prospective study of 62,939 wounds. Surg Clin North Am 60:27-40, 1980
8. Culver DH, Horan TC, Gaynes RP, et al: Surgical wound infection rates by wound class, operative procedure, and patient risk index. Am J Med 91(suppl 3B):152-157, 1991
9. Deming WE: Out of the Crisis. Cambridge, MA, Center for Advanced Engineering Study, 1986
10. Ehrenkranz NJ: Surgical wound infection occurrence in clean operations: Risk stratification for interhospital comparisons. Am J Med 70:909-914, 1981
11. Emori TG, Culver DH, Horan TC, et al: National Nosocomial Infections Surveillance System (NNIS): Description of surveillance methodology. Am J Infect Control 19:19-35, 1991
12. Emori TG, Edwards JD, Culver DH, et al: The quality of surveillance data: A report on the NNIS evaluation study [latebreaker abstract]. In Program of the 6th Annual Meeting of the Society for Healthcare Epidemiology of America, Washington, DC, 1996
13. Fox JP, Hall CE, Elveback LR: Epidemiology: Man and Disease. New York, MacMillan Publishing, 1970
14. Freeman J, McGowan JE Jr: Methodologic issues in hospital epidemiology: Investigating the modifying effects of time and severity of underlying illness on estimates of cost of nosocomial infection. Rev Infect Dis 6:285-300, 1984
15. Fuchs PC: Will the real infection rate please stand up? Infect Control 8:235-236, 1987
16. Garner JS, Jarvis WR, Emori TG, et al: CDC definitions for nosocomial infections, 1988. Am J Infect Control 16:128-140, 1988
17. Gaynes RP, Solomon S: Improving hospital-acquired infection rates: The CDC experience. Journal on Quality Improvement 22:457-467, 1996
18. Gaynes RP, Culver DH, Martone WJ, and the National Nosocomial Infections Surveillance System: Comparison of rates of nosocomial infections in neonatal intensive care units in the United States. Am J Med 91(suppl 3B):192-196, 1991
19. Green J, Wintfeld N, Sharkey P, et al: The importance of severity of illness in assessing hospital mortality. JAMA 263:241-246, 1990
20. Haley RW: JCAHO infection control indicators, part I. Infect Control Hosp Epidemiol 11:545-546, 1990
21. Haley RW: Surveillance by objective: A new priority-directed approach to the control of nosocomial infections. Am J Infect Control 13:78-89, 1985
22. Haley RW, Culver DH, Morgan WM, et al: Identifying patients at high risk of surgical wound infection: A simple multivariate index of patient susceptibility and wound contamination. Am J Epidemiol 121:206-215, 1985
23. Haley RW, Culver DH, White JW, et al: The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 121:182-205, 1985
24. Hooton TM, Haley RW, Culver DH, et al: The joint associations of multiple risk factors with the occurrence of nosocomial infection. Am J Med 70:960-970, 1981
25. Jarvis WR, Edwards JR, Culver DH, et al: Nosocomial infection rates in adult and pediatric intensive care units in the United States. Am J Med 91(suppl 3B):185-191, 1991
26. Jencks SF, Daley J, Draper D, et al: Interpreting hospital mortality data: The role for clinical risk adjustment. JAMA 260:3611-3616, 1988
27. Joint Commission on Accreditation of Hospitals: The Joint Commission's Agenda for Change. Chicago, November 1986
28. Karanfil L, Josephson A, Alonzo H: An infection control quality improvement approach to nosocomial bacteremia in neonates [abstract]. In Proceedings of the 18th Annual Educational Conference of the Association for Practitioners of Infection Control, Nashville, 1991
29. Keita-Perse O, Gaynes RP: Severity of illness scoring systems to adjust nosocomial infection rates: A review and commentary. Am J Infect Control 24:429-434, 1996
30. Kleinbaum DG, Kupper LL, Morgenstern H: Epidemiologic Research: Principles and Quantitative Methods. Belmont, CA, Wadsworth, 1982, p 221
31. Larson E: A comparison of methods for surveillance of nosocomial infections. Infect Control 1:377-380, 1980
32. Massanari RM, Wilkerson K, Streed SA, et al: Reliability of reporting nosocomial infections in the discharge abstract and implications for receipt of revenues under prospective reimbursement. Am J Public Health 77:561-564, 1987
33. National Nosocomial Infections Surveillance System: Nosocomial infection rates for interhospital comparison: Limitations and possible solutions. Infect Control Hosp Epidemiol 12:609-621, 1991
34. National Nosocomial Infections Surveillance System, Centers for Disease Control and Prevention: NNIS Semiannual Report. Am J Infect Control 6:377-385, 1995
35. Olson MM, Lee JT Jr: Continuous 10-year wound infection surveillance: Results, advantages, and unanswered questions. Arch Surg 125:794-803, 1990
36. Selva J, Toledo A, Maroney A, et al: The value of participation in the CDC-National Nosocomial Infections Surveillance System (NNIS) in a large teaching hospital [abstract]. In Proceedings of the 16th Annual Educational Conference of the Association for Practitioners of Infection Control, Reno. Am J Infect Control 17:117, 1989
37. Stamm WE, Weinstein RA, Dixon RE: Comparison of endemic and epidemic nosocomial infections. Am J Med 70:393-397, 1981

Address reprint requests to

Robert P. Gaynes, MD
Hospital Infections Program
Centers for Disease Control and Prevention
Mailstop E-55
1600 Clifton Road
Atlanta, GA 30333