Public Health (2002) 116, 257–262 © R.I.P.H. 2002 www.nature.com/ph

Measure for measure: the quest for valid indicators of non-fatal injury incidence

C Cryer1,*, JD Langley2, SCR Stephenson2, SN Jarvis3 and P Edwards4

1 Centre for Health Services Studies, University of Kent, Tunbridge Wells, UK; 2 Injury Prevention Research Unit, University of Otago, New Zealand; 3 Community Child Health, University of Newcastle, Newcastle-upon-Tyne, UK; and 4 Public Health Intervention Research Unit, London School of Hygiene and Tropical Medicine, London, UK

*Correspondence: C Cryer, CHSS at Tunbridge Wells, University of Kent, Oak Lodge, David Salomons' Estate, Broomhill Road, Tunbridge Wells, Kent TN3 0TG. E-mail: [email protected]

Accepted 14 June 2002

In this edition of Public Health, McClure and colleagues report on research that considered the criterion validity of indicators based on serious long bone fracture and length of stay in hospital. They found that neither was a sensitive nor a specific indicator for serious injury as defined by an Injury Severity Score (ISS) of 16 or more. They contend that their study findings '...strongly support a return to a measure similar in intent to that encapsulated in the original UK Green Paper...'. We contend that their analysis does not provide any empirical evidence to support their view that there should be a return to the Green Paper: Our Healthier Nation indicator. Furthermore, we consider that the analyses they carried out to validate both the Saving Lives: Our Healthier Nation and the serious long bone fracture indicators are flawed. We agree that national (or state) indicators are very influential: they encourage preventive action and resource use aimed at producing favourable changes in those indicators. However, each of the four non-fatal indicators considered in their analysis has problems. Formal validation of existing indicators is necessary, and the following aspects of validity should be addressed: face validity; criterion validity; consistency; and completeness and accuracy of the source data. Taking into account the current national data systems in England, possible options for one or more national non-fatal unintentional injury indicators are proposed in this paper. Furthermore, the International Collaborative Effort on Injury Statistics (ICE) Injury Indicators Group is about to embark on the development of a strategic framework for the development of valid indicators of non-fatal injury occurrence. Public Health (2002) 116, 257–262. doi:10.1038/sj.ph.1900878

Keywords: wounds-and-injuries; public health; indicators; validation

Introduction

In 1998, the UK Government proposed a draft health strategy for England in the Green Paper: Our Healthier Nation,1 which included a non-fatal injury target: '...to reduce the rate of accidents — here defined as those which involve a hospital visit or consultation with a family doctor — by at least a fifth...'. We applauded the identification of unintentional injury as a priority, and the shift of focus from fatal to all injury, but we also expressed concern that the particular target proposed would focus attention on minor injury, and so not reflect the main burden of injury for individuals, the population or the NHS.2 We also argued that unintentional injury that results in a medical consultation would be influenced by social factors, as well as by service provision and access factors, and so would not necessarily reflect trends in the occurrence of non-fatal injury. We postulated validation criteria that could be used to assess indicators, but were unable to identify an all-cause
unintentional injury indicator that satisfied these criteria. As an interim solution, we found that an indicator based on serious long bone fractures exhibited favourable characteristics when judged against these criteria.2

Following a consultation period on the Green Paper, the Government published its health strategy in the White Paper: Saving Lives: Our Healthier Nation.3 It changed the non-fatal unintentional injury target to one based on the reduction of unintentional injury admissions resulting in four or more days' stay in hospital. Although we thought this an improvement on the previously proposed indicator, we argued that indicators based on length-of-stay thresholds and cumulative days' stay in hospital have the disadvantage that they are sensitive to changes due to service factors.4 For example, financial pressure on service provision can drive down average lengths of stay.

In this edition of Public Health, McClure and colleagues5 considered the criterion validity of indicators based on serious long bone fracture and length of stay in hospital and found that neither was a sensitive nor a specific indicator for serious injury as defined by an Injury Severity Score6 (ISS) of 16 or more. They contend that their study supports the need to include all hospital admissions in any population-based measure of injury and suggest that the findings from their study '...strongly support a return to a measure
similar in intent to that encapsulated in the original UK Green Paper...'.

Our response

We consider that:
1 McClure and colleagues do not provide any empirical evidence to support their view that there should be a return to the Our Healthier Nation Green Paper indicator; and
2 the analyses that they carried out to validate both the Saving Lives and the serious long bone fracture indicators were flawed.

We will address each of these issues in turn, and then identify a number of important points related to the development or identification of indicators. These provide the motivation and a direction for the development of sound non-fatal injury indicators.

The Green Paper indicator

McClure and colleagues claim that the findings of their study strongly support a return to a measure similar in intent to that encapsulated in the original UK Green Paper, which defines an important injury as one sufficiently serious to trigger a visit to a medical practitioner. Their claim is unsupported by the evidence that they present. Firstly, their paper focuses solely on a validation of the Saving Lives White Paper indicator and our proposed alternative of serious long bone fracture; it does not provide any empirical examination of the validity of the Green Paper indicator. Secondly, their brief consideration of the Green Paper indicator leaves the original concerns about that indicator unaddressed. Both of these points are considered further below.

The empirical work that they present is restricted solely to inpatient data.5 In contrast, the Our Healthier Nation Green Paper indicator is based on injury sufficiently serious to trigger a visit to a medical practitioner. This definition includes general practitioner (GP) and Accident and Emergency Department (AED) attendance as well as admissions to hospital. The number of patients attending for GP or AED consultations is an order of magnitude greater than the number admitted to hospital,7 and these injuries are, on average, much less severe. Given that McClure and colleagues have restricted their work to an analysis of inpatient data, their findings are irrelevant to an assessment of the original Green Paper indicator.

Furthermore, our previous paper4 systematically assessed the Green Paper indicator against four validation criteria. The main theoretical problems that were identified with that indicator were as follows:
1 It will not consistently count cases satisfying some case definition of anatomical or physiological damage, and hence may not reflect trends in injury incidence;
2 Many injuries that result in a visit to a GP or AED are of minor severity (ie they result in minuscule threat-to-life, no disablement or loss of quality of life, and are low cost), and so the incidence of these injuries will not necessarily reflect the main burden of injury;
3 There is evidence that attendance at a GP or AED is influenced, independently of incidence, by socio-demographic and service factors, ie by distance,8,9 probably by age,10 as well as by social deprivation.4 A change over time in the Green Paper indicator could therefore result from changes in the accessibility of the service, with no change in the incidence rate.

McClure and colleagues do not address any of these shortcomings in their paper.

Sensitivity and specificity of the two indicators

We congratulate McClure and colleagues for providing the only published evidence that we are aware of that examines the sensitivity and specificity of any of the four indicators (Box 1) discussed in their paper.5 The work focuses solely on indicators 3 and 4, both of which have been described as indicators of serious injury.

Box 1: Indicators discussed by McClure and colleagues
1 Injury that results in a hospital visit or consultation with a family doctor
2 Injury that results in hospitalisation
3 Injury that results in four or more days' stay in hospital
4 Serious long bone fracture admitted to hospital.
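Read as case definitions, indicators 2 to 4 can be applied directly to inpatient records, whereas indicator 1 also requires GP and AED attendance data. The following sketch is purely illustrative: the field names and the AIS threshold of 3 used for a serious long bone fracture are assumptions made for this example, not definitions taken from any national dataset.

```python
# Illustrative sketch only: Box 1 indicators 2-4 expressed as case definitions applied
# to a single hospital admission record. Indicator 1 is omitted because it also needs
# GP and AED attendance data, which inpatient datasets do not hold. Field names and
# the AIS >= 3 threshold for a serious long bone fracture are assumptions for this example.

def box1_indicators(admission):
    """admission: dict with 'length_of_stay_days' and 'long_bone_fracture_max_ais'
    (highest AIS among any long bone fractures, or 0 if there are none)."""
    return {
        "2 hospitalised injury": True,  # every admission record counts by construction
        "3 four or more days' stay": admission["length_of_stay_days"] >= 4,
        "4 serious long bone fracture": admission["long_bone_fracture_max_ais"] >= 3,
    }

# A fractured femur discharged after two days satisfies indicator 4 but not indicator 3.
print(box1_indicators({"length_of_stay_days": 2, "long_bone_fracture_max_ais": 3}))
```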

The analysis that is presented, however, is flawed. The definition of 'serious' injury used by McClure and colleagues as their gold standard, to assess the sensitivity and specificity of the indicators, is too extreme. Injury with an ISS of 16 and over is described as 'serious', and injury with an ISS of less than 16 as 'minor'. There are difficulties with this use of the word 'minor' to describe any injury with an ISS below 16. For example, a lower extremity fracture such as a fractured femur (which has an Abbreviated Injury Scale severity score (AIS) of 3, equivalent to an ISS of 9 if accompanied by no further injuries to other body regions) must be regarded as serious, since it is associated with a substantial probability of death, significant disablement, and high cost of treatment.12–17 Their own analysis shows that the injuries they describe as 'minor' account for 50% of the deaths and 32% of the ICU admissions, and the average length of stay in hospital for these so-called 'minor' injuries is stated to be 9 days.5 It seems to us that no sensible definition would result
in these injuries being labelled 'minor'. Both of the indicators that they consider in their sensitivity and specificity analysis (indicators 3 and 4, Box 1) are consistent with the definition that a 'serious' injury is one with an AIS of 3 or more. The evidence for this is much more explicit for serious long bone fractures, both from the construction and from the description of the indicator.2 This is recognised by McClure and colleagues in their paper,5 as they select only long bone fractures with AIS = 3 when ascertaining cases for indicator 4. Given all of the above, it is inappropriate to use, as they did, a definition of 'serious' as an ISS of 16 or more when assessing the sensitivity and specificity of these indicators. It is not surprising, therefore, that both indicators 3 and 4 showed poor sensitivity and specificity when validated against this 'gold standard'.
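To make the severity scales concrete: the ISS is derived from AIS scores by summing the squares of the highest AIS in the three most severely injured body regions.6 The sketch below, with illustrative body-region labels and example patients of our own devising, shows why an isolated AIS 3 femoral fracture scores only ISS 9 and so falls below the ISS 16 'gold standard' used by McClure and colleagues.

```python
# A minimal sketch of the Injury Severity Score calculation (Baker et al., 1974).
# Body-region labels and the example patients are illustrative only.

def injury_severity_score(region_max_ais):
    """region_max_ais maps each ISS body region to the highest AIS score recorded there."""
    if any(score == 6 for score in region_max_ais.values()):
        return 75  # any unsurvivable (AIS 6) injury is assigned the maximum ISS by convention
    top_three = sorted(region_max_ais.values(), reverse=True)[:3]
    return sum(score ** 2 for score in top_three)

# Isolated serious long bone fracture: AIS 3 in the extremities region only.
print(injury_severity_score({"extremities": 3}))  # 9, i.e. 'minor' under an ISS >= 16 cut-off
# Multiply injured patient: AIS 4 head injury, AIS 3 chest injury, AIS 2 limb injury.
print(injury_severity_score({"head_neck": 4, "chest": 3, "extremities": 2}))  # 29, 'serious'
```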

Ways forward — motivation and direction

There are a number of important points made by McClure and colleagues that add to the case for continuing the search for national indicators of non-fatal injury occurrence on which to base targets. These are:
1 Indicators are highly influential;
2 All of the four indicators considered in their analysis have problems;
3 Indicators should be valid;
4 Indicators should focus attention on important problems.

Consideration of these has led to proposed ways forward.

Indicators are highly influential

We agree that national (or state) indicators are very influential. These indicators can encourage preventive action and resource use aimed at producing favourable changes in them. If such indicators are inappropriate or misleading, therefore, they can have the effect of steering scarce resources towards inappropriate activities. Perhaps worse still, if an invalid indicator is adopted by a country or a state, it may convey the illusion that current prevention strategies are proving effective when in fact the reverse may be the case. This may result in precious resources being moved away, inappropriately, from injury prevention activity.

All of the four indicators considered in their analysis have problems

Previous work has identified problems with utilisation measures, and the bias that can occur when trends in occurrence are based solely on these types of
measures.2,10,18 This work suggests that indicators 1–3 are likely to be biased, and indicator 4 less so (Box 1). In summary, the problems with each of these indicators are as follows:

- Green Paper indicator: This has many problems, as described earlier in this paper. It will primarily count minor injury, and the patterns of injury occurrence depend on severity, as illustrated by Figure 1, which shows the pattern of occurrence of New Zealand hospital admissions for motor vehicle traffic crash injury for various severity thresholds (a short calculation sketch follows the figure caption below). These differences in the trends in rates matter. For example, if the New Zealand target had been a 10% reduction in injury by 1996, then using indicators based on minor (AIS 1+) or moderately severe (AIS 2+) injury the target would appear to have been met. Using indicators based on serious injury (AIS 3+ and 4+), the reduction was less than 10%, and so these trends would lead to the conclusion that the target had not been met.

- Hospital admissions: Indicators based on hospital admission of any severity or diagnosis are biased and unstable. Admissions of children with minor injury are influenced by age and social factors.19 Admission to hospital for minor injury is sensitive to a change in the threshold for admission.10,18 Changes in service provision can affect the likelihood of admission for a particular injury.20

- Injury that results in four or more days' stay in hospital: Average lengths of stay in hospital have fallen over the recent past, and this is likely to destabilise length-of-stay based indicators. We have carried out some work that illustrates this for indicator 3, the results of which are shown in Figure 2. In the north east of England, the rates of injury amongst children that resulted in four or more days' stay in hospital have been falling. On the other hand, serious long bone fractures (SLBFs), most of which are admitted to hospital, show an increasing trend over the same period. These contradictory trends are surprising, since SLBFs represent the majority of serious injury. For SLBFs, however, lengths of stay in hospital have also fallen: about 80% were kept in hospital for longer than three days in the 1980s, whereas by the late 1990s this had dropped to less than 30%. As a result, rates of SLBFs resulting in four or more days' stay in hospital fell over this period despite increases in admissions for these serious injuries. A plausible explanation for the apparent decline in the frequency of injury resulting in four or more days' stay is therefore not a decline in the incidence of serious injury, but rather a reduction in lengths of stay in hospital over this period.

- Serious long bone fracture: This is only reasonable as an indicator of the occurrence of serious blunt trauma injury. Even in this context, it fails to count some important injuries that result from blunt trauma, including head, neck and spinal injury.
Figure 1 Percentage deviation from 1988 base in age-adjusted rate (per 100,000 population) of MVTC hospitalisations by maximum AIS score, 1988–1999.
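The consistency check plotted in Figure 1 amounts to expressing each year's age-adjusted admission rate as a percentage deviation from the 1988 base, separately for each AIS threshold. The sketch below illustrates the arithmetic with invented rates; it does not reproduce the New Zealand data.

```python
# Illustrative only: percentage deviation from a base-year rate, computed separately for
# each severity threshold. The rates below are invented to show how different thresholds
# can give opposite verdicts on the same hypothetical 10% reduction target.

rates_per_100k = {                          # hypothetical age-adjusted rates by AIS threshold
    "AIS 1+": {1988: 400.0, 1996: 352.0},   # minor injury and above: a 12% fall
    "AIS 3+": {1988: 60.0, 1996: 57.0},     # serious injury and above: only a 5% fall
}
base_year, target_year = 1988, 1996

for threshold, rates in rates_per_100k.items():
    deviation = 100.0 * (rates[target_year] - rates[base_year]) / rates[base_year]
    verdict = "target met" if deviation <= -10.0 else "target not met"
    print(f"{threshold}: {deviation:+.1f}% relative to {base_year} ({verdict})")
```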

Indicators should be valid

McClure and colleagues argue that a key validity criterion for any indicator is '...that the chosen indicators do in fact validly measure what they purport to measure'.5 The work that both we and McClure and colleagues have so far completed convinces us that formal validation of existing indicators is necessary. Additionally, before newly proposed indicators are promulgated, they too should be subjected to formal validation. At the very least, we believe that the following aspects of validity need to be addressed:

- Face validity: through consideration of the indicator against formal validation criteria. We have published a set of validation criteria2,21 and have developed them further since those publications.22
- Criterion validity: estimates of sensitivity, specificity, and positive and negative predictive values against appropriate 'gold standards' (a minimal calculation sketch follows this list).
- Consistency: investigation of the historical trends across a range of indicators based on differing severity thresholds, including the use of a 'gold standard' indicator if available. If these show contradictory trends, then this is a potential cause for concern.
- Completeness and accuracy of the source data: whatever other properties the indicator might have, confidence in its validity will be undermined if the data on which it is based are incomplete or inaccurate.
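As a minimal illustration of the criterion-validity calculations listed above, the sketch below computes sensitivity, specificity and predictive values for an indicator (for example, four or more days' stay) against a chosen 'gold standard' (for example, an ISS of 16 or more), assuming each admission has already been coded as indicator-positive or not and gold-standard-positive or not. The counts are invented for the example.

```python
# Criterion validity of an indicator against a 'gold standard': a sketch with invented counts.

def criterion_validity(records):
    """records: iterable of (indicator_positive, gold_standard_positive) pairs, one per case."""
    tp = sum(1 for ind, gold in records if ind and gold)          # true positives
    fp = sum(1 for ind, gold in records if ind and not gold)      # false positives
    fn = sum(1 for ind, gold in records if not ind and gold)      # false negatives
    tn = sum(1 for ind, gold in records if not ind and not gold)  # true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive value": tp / (tp + fp),
        "negative predictive value": tn / (tn + fn),
    }

# Illustrative 2 x 2 table: 80 true positives, 120 false positives, 20 false negatives,
# 780 true negatives.
example = ([(True, True)] * 80 + [(True, False)] * 120 +
           [(False, True)] * 20 + [(False, False)] * 780)
print(criterion_validity(example))
```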

Indicators that focus on important problems

Consistent with public health goals, we want to identify indicators that influence the use of resources to maximise the health of the public. Like McClure and colleagues,5 we think that indicators should focus our attention on the 'real nature of the injury problem', and, like them, we believe that the injuries that result in the greatest burden constitute that problem. How does one measure burden? We agree that this should be done in terms of mortality, disablement, reduced quality of life, and cost; this is inherent in the validation criteria that we proposed in our original paper.2 Throughout their paper, McClure and colleagues implicitly invoke these as the measures by which the real nature of the injury problem should be judged.5

Figure 2 Tyne & Wear hospital admissions for 'serious' injury to 0 to 19-year-olds.

Preference should be given to indicators that can be derived from existing large-scale databases. Many of these databases seem more amenable to the development of indicators whose definitions are based on severity thresholds that reflect threat-to-life (for example, AIS), including our previous work on serious long bone fractures.2 It should also be our aim to develop additional indicators that reflect disablement, quality of life, and cost of injury; ideally, this work should progress in parallel. For example, there is potential to extend some of the existing methods to develop indicators that reflect these other injury outcomes. One severity scale that is being considered as the basis for the development of sound indicators is ICISS.23–25 In its current formulation, this is an International Classification of Diseases26 (ICD)-based threat-to-life scale. For a given ICD.9.CM27 diagnosis, the ICISS score is the probability of survival, estimated from the proportion surviving amongst those injuries assigned that ICD diagnosis. Using the same methodology, it may be possible to estimate the probability of (or average) disablement, average loss of quality of life, or average cost for each ICD nature-of-injury code, and hence create ICD-based measures that reflect other (than threat-to-life) aspects of the burden of injury.
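The ICISS construction just described lends itself to a simple empirical estimate: for each ICD nature-of-injury code, the survival risk ratio is the proportion of patients with that code who survive, and a patient's score combines the ratios for all of their codes (multiplication across codes follows Osler et al23). The sketch below uses invented records and illustrative ICD-9-CM-style codes.

```python
# Illustrative sketch of ICISS estimation: survival risk ratios (SRRs) per ICD code,
# combined by multiplication across a patient's codes (after Osler et al.).
# The training records and codes below are invented for the example.
from collections import defaultdict

def estimate_srrs(admissions):
    """admissions: iterable of (icd_codes, survived) pairs, one per injured patient."""
    totals, survivors = defaultdict(int), defaultdict(int)
    for codes, survived in admissions:
        for code in codes:
            totals[code] += 1
            if survived:
                survivors[code] += 1
    return {code: survivors[code] / totals[code] for code in totals}

def iciss(icd_codes, srrs):
    score = 1.0
    for code in icd_codes:
        score *= srrs.get(code, 1.0)  # codes unseen in the training data contribute no risk here
    return score

training = [(["820.8"], True), (["820.8"], True), (["820.8"], False),   # femoral fracture codes
            (["851.8"], False), (["851.8"], True),                       # intracranial injury codes
            (["820.8", "851.8"], False)]
srrs = estimate_srrs(training)
print(round(iciss(["820.8", "851.8"], srrs), 3))  # combined probability-of-survival score
```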

Ways forward — developing indicators

As elsewhere in the world, there are difficulties in England in developing sound national indicators based on the current routinely collected data. Any new indicator that is developed should be based on an explicit definition of an injury, from which it should be clear which events will be captured by the indicator. Up until now, that definition has tended to be based on the use of services, on reports (for example, of road traffic crashes to the police), or on service call-outs (for example, to the fire service). This paper and our previous work argue against the use of these unstable case definitions. For indicators of non-fatal injury occurrence, a case definition of injury based on some severity threshold is sensible. Although the Abbreviated Injury Scale is the most commonly used severity coding system,28 its routine use in national data systems in England is currently impractical. Taking into account the current national data systems in England, the following options for one or more national non-fatal unintentional injury indicators have been proposed:

- Critical injury: case ascertainment based on an extension of the Trauma Audit and Research Network29 to the whole country could be the basis for the development of indicators of critical injury, ie injury with an Injury Severity Score of 16 and over.
- Serious injury: ICISS could be the focus of development work to produce a severity measure based on ICD-10.30
- Slight injury: if one were looking to produce a minor injury indicator, then a potential data source/case definition that shows promise is the Health Survey for England31 minor injury definition, ie injury in the previous four weeks that caused pain or discomfort for over 24 hours.

Globally, some of the authors (CC, JL, SS) are part of the International Collaborative Effort on Injury Statistics (ICE; http://www.cdc.gov/nchs/advice.htm). One of the authors (CC) facilitates the ICE Injury Indicators Group. That group is about to embark on the development of a strategic framework for the development of valid indicators of non-fatal injury occurrence. We would like to hear from anyone who has an interest in this area and who is engaged in the assessment or development of international, national or state injury indicators.

Acknowledgements

Our thanks to Philip Lowe (Community Child Health, University of Newcastle) for his assistance with data analysis. The New Zealand hospital data were sourced from the New Zealand Health Information Service. CHSS
is funded by the Department of Health as an R&D Support Unit. The New Zealand Injury Prevention Research Unit is funded by the Health Research Council of New Zealand and the Accident Compensation Corporation. The views and/or conclusions in this article are those of the authors and do not necessarily reflect those of the funders.

References

1 Secretary of State for Health. Our Healthier Nation: A contract for health. The Stationery Office: London, 1998.
2 Cryer PC, Jarvis SN, Edwards P, Langley JD. How can we reliably measure the occurrence of non-fatal injury? Int J Cons Prod Safety 1999; 6(4): 183–191.
3 Secretary of State for Health. Saving Lives: Our Healthier Nation. The Stationery Office: London, 1999.
4 Cryer PC, Jarvis SN, Edwards P, Langley JD. Why the Government was right to change the 'Our Healthier Nation' accidental injury target. Public Health 2000; 114: 232–237.
5 McClure RJ, Peel N, Kassulke D, Neale R. Appropriate indicators for injury control? Public Health 2002; 116: 252–256.
6 Baker SP et al. The injury severity score: a method for describing patients with multiple injuries and evaluating emergency care. J Trauma 1974; 14: 187–196.
7 Board of Science and Education of the British Medical Association. Injury Prevention. British Medical Association: London, 2001.
8 Lyons RA, Lo SV, Heaven M, Littlepage BNC. Injury surveillance in children — usefulness of a centralized database of accident and emergency attendances. Inj Prev 1995; 1: 173–176.
9 McKee CM, Gleadhill DNS, Watson JD. Accident and emergency attendance rates: variation among patients from different general practices. Br J Gen Pract 1990; 40: 150–153.
10 Walsh SS, Jarvis SN, Towner EML, Aynsley-Green A. Annual incidence of unintentional injury among 54,000 children. Inj Prev 1996; 2: 16–20.
11 Association for the Advancement of Automotive Medicine. The Abbreviated Injury Scale, 1990 revision. AAAM: Des Plaines, IL, 1990.
12 Bandolier. Outcome after hip fracture. Bandolier Evidence Based Healthcare 1998; http://www.jr2.ox.ac.uk/bandolier/band49/649-5.html.
13 Salkeld G et al. Quality of life related to fear of falling and hip fracture in older women: a time trade off study. Br Med J 2000; 320: 341–346.
14 Butcher JL et al. Long-term outcomes after lower extremity trauma. J Trauma 1996; 41: 4–9.
15 McCarthy ML, MacKenzie EJ, Bosse MJ, Copeland CE, Hash CS, Burgess AR. Functional status following orthopaedic trauma in young women. J Trauma 1995; 39: 828–837.


16 MacKenzie EJ. The public health impact of lower extremity trauma. SAE Technical Report Series Paper. Society of Automotive Engineers Inc.: Warrendale, Pennsylvania, USA, 1986.
17 Dolan P, Torgerson DJ. The cost of treating osteoporotic fractures in the United Kingdom female population. Osteoporos Int 1998; 8: 611–617.
18 Marganitt B, MacKenzie EJ, Deshpande JK, Ramzy AI, Haller JA. Hospitalizations for traumatic injuries amongst children in Maryland: trends in incidence and severity: 1979 through 1988. Pediatrics 1992; 89: 608–613.
19 Walsh SS, Jarvis SN. Measuring frequency of 'severe' accidental injury in childhood. J Epidemiol Commun Health 1992; 46: 26–32.
20 Beattie TF, Currie CE, Williams JM, Wright P. Measures of injury severity in childhood: a critical overview. Inj Prev 1998; 4: 228–231.
21 Langley J, Cryer C. Indicators for injury surveillance. Australasian Epidemiologist 2000; 7: 5–9.
22 Cryer C, Langley J, Jarvis S, Mackenzie S. Injury indicators: a validation tool. 6th World Conference on Injury Prevention and Control. University of Montreal: Montreal, Canada, 2002, pp 1107–1109.
23 Osler T, Rutledge R, Deis J, Bedrick E. ICISS: an International Classification of Disease-9 based injury severity score. J Trauma 1996; 41: 380–387.
24 Rutledge R et al. Comparison of the Injury Severity Score and ICD-9 diagnosis codes as predictors of outcome of injury: analysis of 44,032 patients. J Trauma 1997; 42: 477–487.
25 Stephenson SCR, Langley JD, Civil ID. Comparing measures of injury severity for use with large databases. J Trauma 2002; (in press).
26 World Health Organization. Manual of the international statistical classification of diseases, injuries and causes of death. 1975 revision. WHO: Geneva, 1979.
27 United States National Center for Health Statistics. The International Classification of Diseases, ICD.9.CM Clinical Modification, Volume 1. Commission on Professional and Hospital Activities: Ann Arbor, Michigan, USA, 1979.
28 Stevenson M, Segui-Gomez M, Lescohier I, Di Scala C, McDonald-Smith G. An overview of the injury severity score and the new injury severity score. Inj Prev 2001; 7: 10–13.
29 Trauma Audit and Research Network. Developing effective care for injured patients through process and outcome analysis and dissemination. The first decade 1990–2000. The Hope Hospital: Salford, 2001.
30 World Health Organization. International statistical classification of diseases and related health problems. 10th revision. WHO: Geneva, 1992.
31 Prescott-Clarke P, Primatesta P. Health Survey for England. The Stationery Office: London, 1997.