J Clin Epidemiol Vol. 43, Suppl., pp. 5S-9S, 1990
Printed in Great Britain. All rights reserved
0895-4356/90 $3.00 + 0.00
Copyright © 1990 Pergamon Press plc
CHAPTER 2

DATA COLLECTION STRATEGIES IN SUPPORT

BARBARA KRELING,¹ DETRA K. ROBINSON¹ and MARILYN BERGNER²

¹ICU Research Unit, George Washington University Medical Center, 2300 K Street, NW, Washington, DC 20037 and ²Health Services Research and Development Center, Department of Health Policy and Management, The Johns Hopkins School of Hygiene and Public Health, Baltimore, MD 21205, U.S.A.

I. CHART DATA COLLECTION

(A) Identifying Eligible Patients

Every unit in the participating sites is screened except for the psychiatric, OB/GYN, pediatric, and burn units. The chart abstractors review the charts of all patients admitted to general medicine, oncology, and surgery floors and to intensive care units (ICUs) to identify patients who may have become eligible within the preceding 24 hours. Generally, the approach is to review the registration records of all admissions listed in the previous day's census. Sites equipped with computerized registration records use the recorded admission diagnosis as a helpful first-cut screen to identify potential study patients. The chart abstractor then carefully reviews the charts of patients identified by admission/emergency room data to determine whether specific criteria for the study disease categories are present. Charts of patients who have completed a 24-hour stay and whose relevant data are accessible to the chart abstractor at the time of screening are reviewed first. Patients who were admitted to the hospital or transferred to the ICU in the late evening hours before the close of census, and whose relevant data are not yet in the charts, are screened at the first opportunity on the next day. Using documentation from the previous 24 hours, the chart abstractor completes the disease category screens (Chapter 3).

For patients who are eligible upon hospital admission, the study admission time is defined as the earliest time recorded by staff in the emergency room, admissions office, or ICU. The study admission date is the day the patient is admitted to the medical/surgical floor or the ICU. For patients who are not eligible upon hospital admission, but who later become eligible and require intensive care, the study admission time is defined as the earliest time recorded by the ICU staff.

Patients who are immediately excluded from study entry are those who at hospital admission are known to be: (1) pregnant; (2) non-English speaking; (3) transferred from another hospital and not to an intensive care unit; (4) less than 18 years old; (5) diagnosed as having AIDS; (6) hospitalized with a planned length of stay of less than 72 hours; (7) foreign nationals whose purpose for entry into the U.S. is medical treatment, or who became seriously ill while visiting the U.S.; (8) admitted with burn injuries; or (9) admitted with head trauma. An otherwise eligible study patient who is discharged or dies within 48 hours of study entry is excluded from further study. The two major components of data collection, sorted by source, are medical chart abstraction and interviews.
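The exclusion rules above are mechanical enough to be expressed as a simple screening routine. The sketch below is illustrative only and is not the study's abstraction software; the record fields and the wording of the returned reasons are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdmissionRecord:
    # Hypothetical fields; the real screen works from chart and registration data.
    age: int
    pregnant: bool
    speaks_english: bool
    transferred_from_other_hospital: bool
    transferred_to_icu: bool
    has_aids: bool
    planned_stay_hours: Optional[int]      # None if no planned length of stay is recorded
    foreign_national_for_treatment: bool   # covers criterion (7), including visitors who fall ill
    burn_injury: bool
    head_trauma: bool

def exclusion_reason(rec: AdmissionRecord) -> Optional[str]:
    """Return the first exclusion criterion met at hospital admission, or None if eligible.

    The 48-hour rule (discharge or death within 48 hours of study entry) is applied
    after entry and is therefore not checked here.
    """
    if rec.pregnant:
        return "pregnant"
    if not rec.speaks_english:
        return "non-English speaking"
    if rec.transferred_from_other_hospital and not rec.transferred_to_icu:
        return "transferred from another hospital and not to an ICU"
    if rec.age < 18:
        return "less than 18 years old"
    if rec.has_aids:
        return "diagnosed as having AIDS"
    if rec.planned_stay_hours is not None and rec.planned_stay_hours < 72:
        return "planned length of stay less than 72 hours"
    if rec.foreign_national_for_treatment:
        return "foreign national admitted for medical treatment"
    if rec.burn_injury:
        return "admitted with burn injuries"
    if rec.head_trauma:
        return "admitted with head trauma"
    return None
```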

(B) Components of Chart Abstraction

After completing the category screen and determining the study admission time, the chart abstractor documents patient physiologic measures, therapeutic interventions, chronic health status, demographics, the attending physician's name, and the surrogate's name.


Table 1. Timeframe for collection of chart data

Category eligibility: Day 2
APS and additional physiologic measures: Days 2, 4, 8, 15, and 26
TISS: Days 2, 4, 8, 15, and 26
Sentinel decisions: Discharge and subsequent hospitalization

APS = Acute Physiology Score [1]. TISS = Therapeutic Intervention Scoring System [2].

Several instruments are used to collect these data: (1) the Acute Physiology Score (APS), using the APACHE II scoring form [1] (Appendix 1); (2) the Modified Therapeutic Intervention Scoring System (TISS) [2] (Appendix 2); (3) Co-morbid Conditions (Appendix 3); (4) the ICU Diagnosis List (Appendix 4); and (5) Sentinel Decisions (Appendices 5-14). After patient discharge, Sentinel Decisions (Chapter 13) and the total hospital charges are collected. Chart data are collected up to six times during the hospital stay (days 2, 4, 8, 15, 26, and discharge) and after each subsequent hospitalization to the same hospital within 6 months (see Table 1). All data collected during the initial hospitalization (index stay) are from the preceding day (i.e. for days 1, 3, 7, 14, and 25). Sentinel decisions are also collected on subsequent readmissions of 2 days or more.
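Because the chart-review calendar is fixed relative to study admission, the abstraction visits and the study day each one covers can be generated mechanically. The following sketch only illustrates that arithmetic, assuming day 1 is the study admission date; the discharge and readmission abstractions are event-driven and omitted.

```python
from datetime import date, timedelta

# Chart abstraction days during the index stay; each visit abstracts the
# preceding day's documentation (the day 2 visit covers day 1, and so on).
COLLECTION_DAYS = [2, 4, 8, 15, 26]

def chart_abstraction_schedule(study_admission: date):
    """Yield (visit_date, study_day_covered) pairs for the index stay."""
    for day in COLLECTION_DAYS:
        visit_date = study_admission + timedelta(days=day - 1)  # day 1 = admission date
        yield visit_date, day - 1

# Example: a patient entering the study on 1 March 1990 would be abstracted on
# 2, 4, 8, 15, and 26 March, covering study days 1, 3, 7, 14, and 25.
for visit, covered in chart_abstraction_schedule(date(1990, 3, 1)):
    print(visit.isoformat(), "covers study day", covered)
```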

The chart abstractors at each site are persons experienced in interpreting complex notations from medical records (e.g. a nurse or medical records technician). Training sessions explain the variables contained in the data collection instruments and how to compute them. Bi-monthly conference calls with supervisors are conducted to clarify ambiguities and/or discuss difficult cases. In addition, site visits are made every 6-9 months by NCC project staff to provide technical assistance and ensure uniformity in data collection across the five sites. To test the reliability of chart data collection, the NCC assigns study ID numbers for reabstraction. A 10% sample of each site's total study population has the day 1 abstraction repeated by a different chart abstractor, who makes no reference to the original data.
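The chapter does not specify how the original and repeated day 1 abstractions are compared; one simple possibility, sketched below with hypothetical field names, is a field-by-field percent agreement.

```python
def abstraction_agreement(original: dict, reabstraction: dict) -> float:
    """Proportion of shared fields on which two abstractors recorded the same value.

    Both arguments map field names (hypothetical here) to the values each
    abstractor recorded from the same day 1 chart.
    """
    shared = set(original) & set(reabstraction)
    if not shared:
        return float("nan")
    matches = sum(original[f] == reabstraction[f] for f in shared)
    return matches / len(shared)

# Example with two invented abstractions of one reliability case:
first = {"heart_rate": 112, "creatinine": 2.1, "icu_diagnosis": "sepsis"}
second = {"heart_rate": 112, "creatinine": 2.3, "icu_diagnosis": "sepsis"}
print(abstraction_agreement(first, second))  # 2 of 3 fields agree
```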

(C) Data Management and Quality Control

The chart data are recorded on intake sheets that are stored in hard copy form at the sites. Data are entered onto computer files using a custom-designed software package that is equipped with internal logic checks. Chart data are sent weekly to the NCC, where they are reviewed for accuracy and internal consistency. The NCC then transfers the data to the study's National Statistical Center at Duke University for further processing and analysis. Numerous efforts are made to ensure the completeness and accuracy of the data. Thorough examination of admission records and scheduled repeat screening of the hospital population and of enrolled patients are designed to minimize selection bias arising from either excluding eligible patients or including ineligible ones. Information bias is minimized by extensive training and supervision of chart abstractors, standardized and thorough quality control mechanisms for data processing, and reliability testing.

II. INTERVIEW DATA COLLECTION

(A) Overall Time Frame

As seen in Table 2, data collection is time-driven as opposed to event-driven. The exceptions are data collections triggered by hospital discharge or by patient death. Depending upon the length of hospital stay, there are several data collection points, each with a specific time/date window. All data collection is scheduled from study admission and continues for a period of 6 months or until the patient's death, whichever occurs first.

(B) Overview of Questionnaire Content

The conceptual components of each interview schedule are as follows (Appendices 15-25): the first questionnaires (day 3) measure prior functioning, preferences for treatment, perceptions of prognosis, and decision making; the day 8 instruments (and the discharge instruments) measure symptoms and satisfaction with hospital experiences.


Table 2. Interview schedule
Time from entry into the study (days during which instruments must be completed)

Demographic (no windows)
    Patient: Demographic (or surrogate, in person)
    Surrogate: Demographic (or patient, in person or by telephone)
Day 3 (2nd-6th day)
    Patient: Day 3P (in person)
    Surrogate: Day 3S (in person or by telephone)
    Physician: MD Questionnaire (in person or self-administered)
Day 8 (8th-12th day)
    Patient: Day 8P (in person)
    Surrogate: Day 8S (in person or by telephone)
Day 14 (13th-16th day)
    Patient: Day 14P (in person)
    Surrogate: Day 14S (in person or by telephone)
Day 25 (24th-27th day)
    Patient: Day 25P (in person)
    Surrogate: Day 25S (in person or by telephone)
    Physician: Day 25MD (in person or by telephone)
Discharge (1 day before to 4 days after discharge)
    Patient: Day 8P (in person or by telephone)
    Surrogate: Day 8S (in person or by telephone)
Month 2 (50th-70th day)
    Patient: Month 2P, and Month 2P-SIP within 1 week (telephone, or in person if in the hospital)
    Surrogate: Month 2S (telephone, or in person if in the hospital)
Month 6 (160th-200th day)
    Patient: Month 6P (telephone, or in person if in the hospital)
    Surrogate: Month 6S (telephone, or in person if in the hospital)
Death (6-8 weeks after death)
    Physician: After death (telephone)
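Because the windows above are fixed offsets from study entry, the calendar dates during which each instrument may be completed follow directly from the study admission date. The sketch below illustrates that calculation; the offsets are transcribed from Table 2, but the code itself is only an illustration, not the study's survey-management software.

```python
from datetime import date, timedelta

# (first eligible study day, last eligible study day); study admission = day 1.
# Discharge, death, and demographic questionnaires are scheduled from events
# rather than from study entry and are omitted here.
INTERVIEW_WINDOWS = {
    "Day 3":   (2, 6),
    "Day 8":   (8, 12),
    "Day 14":  (13, 16),
    "Day 25":  (24, 27),
    "Month 2": (50, 70),
    "Month 6": (160, 200),
}

def interview_window(study_admission: date, interview: str):
    """Return the first and last calendar dates on which an interview may be completed."""
    first_day, last_day = INTERVIEW_WINDOWS[interview]
    start = study_admission + timedelta(days=first_day - 1)
    end = study_admission + timedelta(days=last_day - 1)
    return start, end

# Example: for study entry on 1 March 1990, the Day 8 interviews must be
# completed between 8 March and 12 March (study days 8 through 12).
print(interview_window(date(1990, 3, 1), "Day 8"))
```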

Questionnaires on days 14 and 25 repeat the day 3 questions (except for prior function questions, which are asked only once) and are administered only to patients still in the hospital on those days. The follow-up questionnaires measure functioning, symptoms, satisfaction with outcomes, family impact, and subsequent utilization of health care resources. A demographic questionnaire may be administered at any time during the study. Physicians' interviews are administered at day 3 and repeated on day 25 and include questions concerning prognosis, patient decision making, and perceptions of patient preferences for cardiopulmonary resuscitation and other treatment. A physician demographic questionnaire is collected once for each physician in the study and includes sociodemographics, training, attitudes, and perceptions about the hospital practice environment.

(C) Survey System Procedures

After the patient is enrolled in the study and the physician's permission is obtained, the interviewer supervisor assigns an interviewer to the patient.


Whenever possible, the same interviewer follows the patient over time and interviews all respondents for the case (with the exception of the day 3 and day 25 physician interviews, which are frequently self-administered). For most questionnaires in the hospital, interviewers have 5 days to complete the interview; no attempt is made to conduct an interview after this period of eligibility has passed. All efforts to contact respondents are recorded on separate "contact sheets," which are used to code the final disposition status of each interview. To minimize refusal rates, principal investigators speak with physicians who do not give permission to have a patient interviewed, in order to explain the study and encourage participation. Interviewers keep track of patients for follow-up interviews with monthly calls to the surrogate or contact person identified at discharge. Dates of death or rehospitalization are often discovered through this follow-up procedure.


This information is then entered into the software system to update study records and schedule subsequent interviews. Finally, the software system generates forms to help the supervisor manage the day-to-day operations of the survey and to report to the NCC on operating details of the system (e.g. response rates). The printouts also show response rates by interview. Completed questionnaires are edited by both the interviewer and the supervisor.

(D) Data Processing

The data entry system contains range and skip checks as well as a double-entry option to verify keypunching and minimize keypunch errors. File transfer to the NCC is done automatically over telephone lines using a communications software package.
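The range, skip, and double-entry checks are described only in general terms; the sketch below shows, with invented field names, limits, and routing rules, what checks of this kind might look like.

```python
def range_check(record: dict, limits: dict) -> list:
    """Flag numeric values that fall outside the permissible range for their field."""
    problems = []
    for field, (low, high) in limits.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            problems.append(f"{field}={value} outside {low}-{high}")
    return problems

def skip_check(record: dict) -> list:
    """Routing check: items should be answered only when the lead-in question allows it."""
    problems = []
    # Hypothetical rule: pain-severity items apply only if pain was reported.
    if record.get("had_pain") == "no" and record.get("pain_severity") is not None:
        problems.append("pain_severity answered although had_pain is 'no'")
    return problems

def double_entry_check(first_pass: dict, second_pass: dict) -> list:
    """List fields on which two independent keyings of the same questionnaire disagree."""
    return [f for f in first_pass if first_pass[f] != second_pass.get(f)]

# Example with an invented record:
rec = {"age": 212, "had_pain": "no", "pain_severity": 4}
print(range_check(rec, {"age": (18, 110)}))  # age keyed out of range
print(skip_check(rec))                       # routing violation
print(double_entry_check(rec, {"age": 21, "had_pain": "no", "pain_severity": 4}))
```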

(E) Interview Data Quality Control

(1) Overview

The potential for selection bias is diminished by efforts to minimize non-response and loss to follow-up. If a physician or patient refuses involvement in the interview process, chart data and National Death Index mortality data are collected to evaluate the effect of non-response. Information bias is minimized by extensive training and supervision of interviewers, several levels of field edit, standardized and thorough quality control mechanisms for data processing, reliability testing of all instruments, and validation of new measures. In addition, extensive computer editing is incorporated into the software for data entry.

(2) Assessment and minimization of selection bias

Selection bias results from the systematic exclusion of particular types of eligible patients from the study population. In this study, the three greatest potential sources of selection bias are: (1) incomplete or inaccurate case assessment, (2) non-response, and (3) loss to follow-up [(2) and (3) are referred to as "transfer bias" in the overview of Phase II, Chapter 19]. Efforts to minimize non-response include an information pamphlet for respondents, intensive training of interviewers in appropriate ways to encourage participation, and attempts by the principal investigators to convert physician refusals.

Efforts to minimize loss to follow-up include: (i) documenting where the patient plans to go after discharge, (ii) identifying an individual who will always know the patient's whereabouts (in addition to the surrogate), (iii) monthly tracking of the patient's whereabouts with the surrogate or contact person, (iv) stressing the importance of follow-up interviews with the patient and surrogate at the end of each interview, (v) training interviewers about the importance of maintaining a good relationship during hospitalization, (vi) using the same interviewer whenever possible, (vii) sending the patient and surrogate a follow-up letter of appreciation after discharge, (viii) contacting the physician's office if the patient's whereabouts is not known from other sources, and (ix) using the National Center for Health Statistics' National Death Index.

(3) Assessment and minimization of information bias

The data collection instruments have undergone extensive refinement during the design of this study. During implementation of the study at the sites, efforts were made to minimize information bias by training interviewers to be objective and reproducible in their interviewing. Mechanisms include standardized quality control for data processing, reliability testing of instruments, and validation of new measures.

(a) Interviewer training and supervision. All data are kept strictly confidential. All project staff have signed confidentiality pledges and have been briefed during training about the importance of the confidentiality of project data. Violations of the pledge of confidentiality are cause for dismissal.

Three texts are used for interviewer training and for guidance after training. The "Question-by-Question" manual gives specifications for administering the questionnaires. The "Field" manuals for interviewers and for supervisors contain study procedures, solutions to problems, and administrative procedures. These manuals contain most of the information interviewers will need to conduct the interviews and will be updated and used for reference throughout the study.

Three main aspects of interviewer performance are supervised: rate of response (both initial and follow-up response are considered important); quality of the data collected (legible recording of responses, skip instructions appropriately followed, completeness of answers, etc.); and quality of the interviewing process (appropriate introduction of the study, asking questions exactly as written, and appropriate handling of the interpersonal aspects of the interview).

(b) Field edit. The first quality control check of the survey data is the edit that each interviewer performs. This interviewer edit includes checks for clarity of recording, circling of codes, and calculations. It also includes a check of skip patterns in the questionnaires and logic checks. The next quality control check is done by the interviewer supervisor. Although the primary purpose of this edit is to check the quality of the data, its secondary purpose is to identify areas in which individual interviewers need additional training. The review and retraining process may continue for a period of time after data collection begins and provides a valuable quality assurance measure.

(c) Computer edit checks. Edit functions in the data entry software include range checks (outliers that should be verified), permissible code checks, routing checks (to verify that particular items were answered only when they should have been), consistency checks (on the logical relationships between two data elements), and procedures for correcting the identified problems.

(d) Site monitoring and data quality control procedures. The NCC produces reports which show response rates for each questionnaire, each respondent type, and each site. These reports help each site to identify response problems. The NCC also produces reports of data completion. Data sent to the NCC are subjected to a series of editing checks for accuracy and internal consistency. All data collection problems detected are addressed with the site, and measures are taken to correct the problem. Site visits by NCC project staff are made every 6-9 months, and when needed, to assess the adequacy of the data collection, to provide technical assistance, and to ensure uniformity across the five sites. These visits involve monitoring interviews and reviewing case outcomes and other procedures.

(e) Reliability testing. Reliability will be assessed for two reasons: to assure that this study attains reported levels of reliability for measures that are well tested and have a known reliability, and to determine the reliability of measures that have been developed for this study or were developed by others with no or only weak assessments of reliability. Two types of reliability will be examined: internal consistency and reproducibility. The former will address whether the components of a measure contribute to the assessment of a single construct. The latter will address whether the measure produces the same results when an attribute is stable and whether interviewer or respondent characteristics affect the results. Reliability in terms of reproducibility is not available for most measures. Two of the measures being used have established levels of reproducibility: the Activities of Daily Living [3] and the Sickness Impact Profile [4]. For the others, reproducibility will have to be established.
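As a concrete illustration of these two types of reliability, the sketch below computes Cronbach's alpha for internal consistency and a test-retest correlation for reproducibility; the chapter does not state which coefficients the study uses, and the data shown are invented.

```python
from statistics import mean, pvariance

def cronbach_alpha(item_scores):
    """Internal consistency of a multi-item scale.

    item_scores: one list per item, each holding that item's score for every respondent.
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # total score per respondent
    item_variance = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_variance / pvariance(totals))

def test_retest_r(first, second):
    """Pearson correlation between two administrations of the same measure."""
    mx, my = mean(first), mean(second)
    cov = sum((x - mx) * (y - my) for x, y in zip(first, second))
    sx = sum((x - mx) ** 2 for x in first) ** 0.5
    sy = sum((y - my) ** 2 for y in second) ** 0.5
    return cov / (sx * sy)

# Invented example: a three-item scale answered by five respondents, and total
# scores from two administrations while the respondents' status is stable.
items = [[3, 4, 2, 5, 4], [3, 5, 2, 4, 4], [2, 4, 3, 5, 5]]
print(cronbach_alpha(items))
print(test_retest_r([8, 13, 7, 14, 13], [9, 13, 7, 14, 12]))
```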


(f) Validity testing. The validity of the measures used in this study will be assessed by examining statistical validity and construct validity. The former involves examination of the factor structure of the measure. The latter involves examination of the relationship between the score obtained on the measure of interest and other data with which it should be directly related (convergent validity), as well as other data with which the measure of interest should not be related (discriminant validity). The factor structure can be assessed by statistical techniques such as factor analysis or principal components analysis. These techniques provide information about the dimensionality of the measure and whether a measure is composed of a single scale or several related subscales. Construct validity can be assessed using a multitrait-multimethod matrix [5], which provides information concerning convergent validity, discriminant validity, and reliability in a single display. These assessments of validity will be made for measures especially developed for this study and also for those developed and tested by others that we use in their original form. The rationale for assessing already developed measures is to determine whether they are as valid for seriously ill patients as they are for the less ill patients reported in the literature.
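A minimal illustration of the convergent/discriminant logic is to correlate a new measure with an established measure it should track and with an unrelated measure it should not; the factor-analytic work and the full multitrait-multimethod matrix described above go well beyond this sketch, and the scores used here are invented.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented scores for six patients: a new functioning measure, an established
# functioning measure (convergent validity: correlation expected to be high),
# and an unrelated measure (discriminant validity: correlation expected to be low).
new_measure = [10, 14, 9, 20, 16, 12]
established = [11, 15, 8, 19, 17, 13]
unrelated   = [4, 2, 2, 3, 3, 4]

print("convergent r:", round(pearson_r(new_measure, established), 2))
print("discriminant r:", round(pearson_r(new_measure, unrelated), 2))
```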

REFERENCES

1. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit Care Med 1985; 13: 818-829.
2. Keene AR, Cullen DJ. Therapeutic intervention scoring system: update 1983. Crit Care Med 1983; 11: 1-5.
3. Katz S, Downs TD, Cash HR, Grotz RC. Progress in development of the index of ADL. Gerontologist 1970; 10: 20-30.
4. Bergner M, Bobbitt RA, Carter WB, et al. The Sickness Impact Profile: development and final revision of a health status measure. Med Care 1981; 19: 787-805.
5. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull 1959; 56: 81-105.