
Accident Analysis and Prevention 38 (2006) 636–643

Misconceptions regarding case-control studies of bicycle helmets and head injury

Peter Cummings a,∗, Frederick P. Rivara a, Diane C. Thompson a,1, Robert S. Thompson b


a Harborview Injury Prevention and Research Center, University of Washington, Seattle, WA, USA
b Group Health Center for Health Studies, Department of Preventive Care, Group Health Cooperative of Puget Sound, Seattle, WA, USA

Received 24 June 2005; received in revised form 8 December 2005; accepted 12 December 2005

Abstract

A number of published case-control studies have reported that bicycle helmets are associated with a reduced risk of head injury and brain injury among bicyclists who crashed. A paper in this journal offered several criticisms of these studies and of a systematic review of these studies. Many of those criticisms stem from misconceptions about the studies that have been done and about case-control studies in general. In this manuscript we review case-control study design, particularly as it applies to bicycle helmet studies, and review some aspects of the analysis of case-control data.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Bicycle helmets; Case-control studies

∗ Corresponding author at: 250 Grandview Drive, Bishop, CA 93514, USA. Tel.: +1 760 873 3058. E-mail address: [email protected] (P. Cummings).
1 Retired.

1. Introduction

Bicycles are an important means of transportation in many parts of the world, as well as an important source of recreation and physical activity for many people. The majority of serious and fatal injuries related to bicycling involve the head. Bicycle helmets have been promoted as one means of preventing these injuries.

An article in this journal (Curnow, 2005) critiqued a systematic review (Thompson et al., 1999) done by three of us regarding case-control studies (Maimaris et al., 1994; McDermott et al., 1993; Thompson et al., 1989, 1996a,b) that have reported evidence that wearing a helmet in a bicycle crash was associated with a reduced risk of head and facial injuries. Curnow raised a number of points that are important to examine here. These can be categorized into questions about the importance of injuries to the head but not to the brain, the mechanism by which helmets may prevent brain injury in a bicycle crash, potential biases in the case-control studies of helmet use and head injury, and the analysis of case-control data. (Other issues regarding bicycle helmets and the systematic review have been previously addressed (Thompson et al., 2004) and will not be considered in this manuscript.)


2. Importance of head injuries

Head injury is a term used to describe injuries to the scalp, skull, and brain, while brain injury more specifically refers to injuries that cause some degree of brain dysfunction, including concussion, intracranial hemorrhage, and diffuse axonal injury. While injuries to the head that result in death nearly always involve brain injury, injuries to the scalp and skull can cause significant disability and should be prevented if possible. Injuries to the scalp can include large lacerations, which in children sometimes require general anesthesia to repair adequately. Fractures to the vault of the skull usually heal without long-term consequences; however, skull fractures with depression more than the width of the cortex may cause seizures and usually require operative repair with plates (Smith and Grady, 2005). Fractures to the base of the skull (basilar skull fractures) often cause intracranial bleeding and can injure the eighth cranial nerve or the ossicles, resulting in hearing impairment. They can also cause cerebrospinal fluid leaks with consequent risk of meningitis and ventriculitis.

The Cochrane review regarding bicycle helmets also addressed the issue of facial injury (Thompson et al., 1999), and reported that helmets appeared to decrease the risk of injury to the upper and middle face (Thompson et al., 1996a).


Facial injuries are common in cyclists and can require operative repair, involve cranial nerve injury, and result in cerebrospinal fluid leaks.

3. Theoretical basis for protection by bicycle helmets

Helmets for bicyclists did not evolve from helmets for soldiers. For the latter, the mechanism of injury is penetrating trauma from bullets, not energy transfer from blunt impact. Injuries to the brain in blunt trauma, such as those incurred in a bicycle crash, occur from energy transfer to the scalp, skull, and underlying brain. Bicycle helmets were designed with a liner to absorb the energy transfer, not to prevent penetrating injury. Linear as well as angular acceleration are important in head injuries from motor vehicle crashes (Ryan et al., 1994). Energy absorbing padding is a key element in reducing risk of brain injury (Nirula et al., 2003). Studies indicate that, unlike the military situation, penetrating injury to the head was rare among bicyclists (Cameron et al., 1994; Ching et al., 1997). Australia deleted the penetration test from bicycle helmet standards in 1990 (Ching et al., 1997).

4. Study question

Systematic reviews, such as those of the Cochrane Collaboration, the U.S. Preventive Services Task Force, and the Canadian Task Force on Preventive Health Care, are studies and must begin with a testable question. The Cochrane review of bicycle helmets did have such a question (Thompson et al., 1999): do "bicycle helmets reduce head, brain and facial injury for bicyclists of all ages involved in a bicycle crash or fall?" In order to avoid bias in the selection of studies to be included, systematic reviews start with clear inclusion and exclusion criteria, designed to include studies relevant to the question (Egger et al., 2001). There were clear criteria for inclusion of studies in the helmet review, including prospective identification of cases, validation of all injuries by means of medical record review, equivalent determination of exposure (helmet use) for cases and controls, appropriate selection of the control group, and some attempt to control for possible confounding factors.

5. Case-control design

Case-control studies of injury outcomes date back at least to 1938 when Holcomb (Holcomb, 1938) published a study that compared the prevalence of alcohol in the blood of drivers hospitalized after a traffic crash (47%) with the prevalence of alcohol in the breath of drivers sampled on roads in Evanston, Illinois (12%). The term case-control study had not yet been coined, but Holcomb appreciated that if alcohol were a cause of crashes, the prevalence of alcohol use would be greater among drivers who crashed compared with similar drivers who did not crash. Statistical methods to estimate crude (Cornfield, 1951) and adjusted (Mantel and Haenszel, 1959) risk ratios from case-control data appeared in the 1950's. Formal case-control studies of driver or pedestrian alcohol use and traffic crash death were published by Haddon and colleagues in the early 1960's (Haddon et al., 1961; McCarroll and Haddon, 1962).


Since then many injury-related case-control studies have been published. Textbooks (Breslow and Day, 1980; Cummings et al., 2001; Kelsey et al., 1996; Koepsell and Weiss, 2003; MacMahon and Trichopoulos, 1996; Rothman and Greenland, 1998; Rothman, 2002; Schlesselman, 1982) and articles (Armenian, 1994; Roberts, 1995) have described the design of case-control studies. We will briefly review the method here.

Imagine that we wished to know if wearing a bicycle helmet was associated with the risk of head injury in a bicycle crash. We might first consider a randomized controlled trial, as random allocation of bicyclists to helmet wearing or not is an excellent way of forming two comparison groups that are similar with regard to other factors that might influence the risk of head injury in a crash. We would probably have to reject this design for several reasons: (1) bicycle-related head injuries are uncommon, so a randomized trial would be large and expensive; (2) getting bicyclists to consent to random allocation of helmet use may be difficult; (3) since there is evidence (Thompson et al., 1999) that helmets protect against head injuries, a human subjects committee would likely not agree that random allocation of helmet wearing is ethical.

If we cannot obtain approval or funding for a randomized trial, we might conduct a cohort study of helmet wearing and head injury. We could enroll 4 million members of cycling clubs and ask them for information about a single 100-mile ride during a year of follow-up. Let us assume that 50% of the cyclists wear a helmet, 1% (40,000) crashed during the ride, the risk of a crash was unrelated to helmet use, and among those who crashed 1% sustained a head injury if they were not helmeted. For this hypothetical example, we assume that helmets reduced the risk of head injury by 50% among those who crashed, i.e. 0.5% of helmeted riders who crashed sustained a head injury. If we collected information about crashes, helmet use, and head injuries from the cohort of 4 million cyclists, on average the data would look like Table 1.

Since we wish to estimate the association between helmet wearing and head injury in a crash, we would only need data from the 40,000 cyclists who crashed. In that group we could estimate that the risk of head injury was less among helmeted cyclists who crashed compared with unhelmeted cyclists who crashed: risk ratio = [A/(A + B)]/[C/(C + D)] = (100/20,000)/(200/20,000) = 0.5 (95% confidence interval [CI] 0.39–0.64). Obtaining data from 4 million cyclists, or from 40,000 cyclists who crashed, would be difficult, expensive, and time-consuming.

Table 1
Results from a hypothetical cohort study of helmet use and head injury among bicyclists who crashed

Helmet use        Crashed: head injury    Crashed: no head injury    Did not crash    Total
Helmeted          [A] 100                 [B] 19900                  1980000          2000000
Not helmeted      [C] 200                 [D] 19800                  1980000          2000000
Total             300                     39700                      3960000          4000000
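The risk ratio arithmetic described above can be reproduced with a few lines of code. The sketch below is our illustration rather than part of the original analyses; it assumes Python with scipy available and uses the standard large-sample (Wald) formula for the confidence interval of a log risk ratio.

    import math
    from scipy.stats import norm

    # Cell counts for cyclists who crashed, from Table 1
    a, b = 100, 19900   # helmeted: head injury, no head injury
    c, d = 200, 19800   # not helmeted: head injury, no head injury

    risk_ratio = (a / (a + b)) / (c / (c + d))

    # Wald 95% CI on the log risk ratio scale
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    z = norm.ppf(0.975)
    lo = math.exp(math.log(risk_ratio) - z * se_log_rr)
    hi = math.exp(math.log(risk_ratio) + z * se_log_rr)

    print(f"risk ratio = {risk_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    # prints: risk ratio = 0.50, 95% CI 0.39-0.64, matching the values quoted in the text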


Table 2
Results from a hypothetical case-control study of helmet use and head injury among bicyclists who crashed

Helmet use        Head injury    No head injury    Total
Helmeted          [A] 100        [B] 199           299
Not helmeted      [C] 200        [D] 198           398
Total             300            397               697
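As a companion to the cohort calculation above, the short sketch below computes the odds ratio and a Wald confidence interval from the Table 2 counts; the paragraph that follows walks through the same arithmetic. This is again our own illustration, assuming Python with scipy available.

    import math
    from scipy.stats import norm

    # Cell counts from Table 2 (cases = head injury; controls = 1% sample of non-cases)
    a, b = 100, 199   # helmeted: cases, controls
    c, d = 200, 198   # not helmeted: cases, controls

    odds_ratio = (a / b) / (c / d)

    # Wald 95% CI on the log odds ratio scale
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    z = norm.ppf(0.975)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)

    print(f"odds ratio = {odds_ratio:.3f}, 95% CI {lo:.2f}-{hi:.2f}")
    # prints: odds ratio = 0.497, 95% CI 0.36-0.68
    # (the text reports 0.36-0.69; small differences can arise from the CI method used)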

When a study outcome is rare, a case-control design is often easier to conduct, cheaper, and can obtain results faster, compared with a cohort study of the same question. If we could obtain information about all the 300 cyclists who had head injuries (the cases in cells A and C of Table 1) and just 1% of the 39,700 cyclists who suffered no head injury (the controls in cells B and D), on average we would have data like those in Table 2. From these case-control data we can estimate that among riders who crashed, the odds ratio for a head injury was less for a helmeted rider compared with an unhelmeted rider: odds ratio = (A/B)/(C/D) = (100/199)/(200/198) = 0.497 (95% CI 0.36–0.69). The estimated odds ratio closely approximates the risk ratio from the cohort data. The confidence intervals are only modestly wider in the case-control study, which used information about only 697 riders, compared with the cohort study, which required information from at least the 40,000 riders who crashed. The case-control design can estimate risk ratios more efficiently than a cohort study of the same association when the outcome is rare.

6. Control selection in case-control studies

In the hypothetical data in Table 2, the controls were a random sample of the non-cases. In a case-control study, the controls are used to estimate the prevalence of the study exposure (helmet use in this case) in the population from which the cases arose. Many case-control studies are not able to select a random sample of non-cases; for example, a registry of cycling crashes does not exist. Instead, investigators try to pick a control group that is likely to represent the exposure prevalence of the non-cases. The textbooks about case-control studies that we cited earlier discuss control selection strategies and further details can be found in a series of articles (Wacholder et al., 1992, 1992a,b).

Most case-control studies of bicycle helmets and head injury estimated the prevalence of helmet wearing among all cyclists who crashed by interviewing cyclists who crashed and who subsequently came to an emergency department or were admitted to a hospital for injuries that did not involve the head. Thus cyclists who suffered a fractured wrist or a lacerated knee, but no head injury, were used to represent the non-cases. So long as presentation to a hospital by non-cases is unrelated to helmet wearing, this group of injured cyclists should fairly represent the helmet-wearing prevalence of all cyclists who crashed (Cummings et al., 1998, 2001). There is some evidence that this choice of controls may be reasonable in the case-control studies of bicycle helmets.

In one case-control study of bicycle helmets (Thompson et al., 1989), the authors gathered information from two control groups. Among non-case (no head injury) bicyclists who came to an emergency department the prevalence of helmet use was 23.8%. The investigators used a second control group that was randomly sampled from among members of a large health maintenance organization, frequency matched to the cases on age and zip code; the prevalence of helmet wearing in this group of cyclists who crashed was 23.3%. Thus both control groups had a similar prevalence of helmet use in a crash and both prevalences were greater than the helmet-wearing prevalence of the head-injured cases, 7.2%.

7. Choice of outcome in studies of helmet effectiveness

Instead of estimating the association between helmet wearing and the risk of head injury in a crash, we might want to study the association between helmet wearing and the risk of brain injury in a crash. If brain injury was the outcome of interest, then the cases would be cyclists who crashed and suffered a brain injury. Who should be the controls? To help answer this question, let's return to the cohort data of Table 1, but this time assume that 20% of the 300 people with a head injury had a brain injury (i.e. 60 cyclists) and that helmets reduced the risk of both brain injury and other head injuries by 50%. The revised data for the cyclists who crashed would look like those in Table 3. From these cohort data we can estimate that the risk ratio for brain injury among helmeted riders who crashed, compared with unhelmeted riders who crashed, was [A1/(A1 + A2 + B)]/[C1/(C1 + C2 + D)] = (20/20,000)/(40/20,000) = 0.5 (95% CI 0.29–0.85).

Now let us try to use Table 3 data for a case-control study. The brain-injured people are the cases and we assume that we can locate all of them at a hospital or the morgue. The people in the no-head-injury group are all eligible for selection as controls; let us assume we sample 1% of that group for a total of 397 cyclists. What about the 240 people in the other-head-injury column? They are non-cases for this outcome. Since our goal is to estimate the helmet-wearing prevalence in all of the non-cases (those with no head injury and those with head injury but no brain injury), we should also pick a 1% sample of the other-head-injury cyclists; about one cyclist from cell A2 and two from cell C2. Doing this will result in the data in Table 4. The approximate risk ratio for brain injury among helmeted riders compared with unhelmeted riders can be estimated from the odds ratio [A1/(A2 + B)]/[C1/(C2 + D)] = (20/200)/(40/200) = 0.5 (95% CI 0.27–0.91).

Table 3
Results from a hypothetical cohort study of helmet use and brain injury among bicyclists who crashed

Helmet use        Brain injury    Other head injury    No head injury    Total
Helmeted          [A1] 20         [A2] 80              [B] 19900         20000
Not helmeted      [C1] 40         [C2] 160             [D] 19800         20000
Total             60              240                  39700             40000
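The 1% sampling of non-cases described above is easy to simulate. The short sketch below is our own illustration (assuming Python with numpy available): it keeps every case, draws a 1% binomial sample from each non-case cell of Table 3, and computes the resulting odds ratio, which on average is close to the cohort risk ratio of 0.5.

    import numpy as np

    rng = np.random.default_rng(2006)

    # Table 3 counts for cyclists who crashed
    helmeted = {"brain": 20, "other_head": 80, "no_head": 19900}
    unhelmeted = {"brain": 40, "other_head": 160, "no_head": 19800}

    def one_case_control_sample():
        # Keep every case; sample each non-case with probability 0.01
        a1, c1 = helmeted["brain"], unhelmeted["brain"]
        a2 = rng.binomial(helmeted["other_head"], 0.01)
        b = rng.binomial(helmeted["no_head"], 0.01)
        c2 = rng.binomial(unhelmeted["other_head"], 0.01)
        d = rng.binomial(unhelmeted["no_head"], 0.01)
        return (a1 / (a2 + b)) / (c1 / (c2 + d))

    odds_ratios = [one_case_control_sample() for _ in range(10000)]
    print(f"mean odds ratio over repeated samples: {np.mean(odds_ratios):.2f}")
    # typically prints a value close to 0.5, the cohort risk ratio,
    # illustrating that this sampling scheme recovers the association on average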

Table 4
Results from a hypothetical case-control study of helmet use and brain injury among bicyclists who crashed

Helmet use        Brain injury    Other head injury    No head injury    Total
Helmeted          [A1] 20         [A2] 1               [B] 199           220
Not helmeted      [C1] 40         [C2] 2               [D] 198           240
Total             60              3                    397               460

But what if we sampled the data in Table 3 using cyclists who crashed and came to emergency departments? Assume that we found all the people with brain injuries in emergency departments (60 cyclists) and all the people with other head injuries (240 cyclists). Most other cyclists who crashed would have no important injury and so they would not come to an emergency department; assume only 1% of those with injuries not to the brain or head (397 cyclists) would come to an emergency department. Given these assumptions, the data would look, on average, like those in Table 5.

Now if we included all 240 of the other-head-injury patients in the control group, the estimated risk ratio becomes [A1/(A2 + B)]/[C1/(C2 + D)] = (20/279)/(40/358) = 0.64 (95% CI 0.35–1.15). This estimate is wrong. Our mistake was to ignore a principle of control selection: the probability of selecting each non-case should not be related to the exposure (helmet wearing in this instance) of interest. In our hypothetical data, helmet wearing reduced the risk of any head injury; thus the prevalence of helmet wearing was less among the other-head-injury controls (33% of bicyclists in the other head injury column in Table 5) than among all controls (50% among all the persons without brain injury in Table 3). The emergency department bicyclists without brain injury included 100% of bicyclists with other head injuries, but only 1% of bicyclists without head injuries. Including all the other-head-injury bicyclists in Table 5 in the control group is a form of selection bias. It would be correct to include these other head-injured persons in the control group if we could select them with the same probability as the controls without any head injury. But we would not know this probability in an actual study of bicyclists seen in emergency departments.

The appropriate risk ratio can be approximated with the least bias from Table 5 data by ignoring the other-head-injury bicyclists: (A1/B)/(C1/D) = (20/199)/(40/198) = 0.50 (95% CI 0.27–0.91). Omitting the other-head-injury bicyclists from the comparison will result in only trivial bias in an actual bicycle helmet case-control study, because this group will represent only a small proportion of all non-case cyclists (Cummings et al., 1998).

Table 5
Results from a hypothetical case-control study of helmet use and brain injury among bicyclists who crashed, based on information from injured bicyclists seen in emergency departments

Helmet use        Brain injury    Other head injury    No head injury    Total
Helmeted          [A1] 20         [A2] 80              [B] 199           299
Not helmeted      [C1] 40         [C2] 160             [D] 198           398
Total             60              240                  397               697
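The selection bias described above, and the multinomial (polytomous) regression approach mentioned below, can both be illustrated with a short script. This is our sketch, not the original authors' analysis; it assumes Python with numpy and statsmodels available, and it expands the Table 5 counts into one record per cyclist before fitting the model.

    import numpy as np
    import statsmodels.api as sm

    # Table 5 counts: (helmeted, outcome) -> n
    # outcome codes: 0 = no head injury, 1 = other head injury, 2 = brain injury
    counts = {(1, 2): 20, (1, 1): 80, (1, 0): 199,
              (0, 2): 40, (0, 1): 160, (0, 0): 198}

    # Biased analysis: treat everyone without a brain injury as a control
    biased = (20 / (80 + 199)) / (40 / (160 + 198))
    # Less biased analysis: ignore the other-head-injury cyclists
    correct = (20 / 199) / (40 / 198)
    print(f"biased estimate = {biased:.2f}, estimate ignoring other head injuries = {correct:.2f}")
    # prints: biased estimate = 0.64, estimate ignoring other head injuries = 0.50

    # Multinomial logistic regression estimates both outcome-specific odds ratios at once
    helmet, outcome = [], []
    for (h, y), n in counts.items():
        helmet += [h] * n
        outcome += [y] * n
    X = sm.add_constant(np.array(helmet, dtype=float))
    fit = sm.MNLogit(np.array(outcome), X).fit(disp=False)
    print(np.exp(fit.params))
    # the exponentiated helmet coefficients (one per non-reference outcome) are both
    # approximately 0.50, as stated in the text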


We can also estimate the risk ratio for helmet wearing and other head injuries by ignoring the few subjects who had a brain injury. We can estimate both risk ratios simultaneously using multinomial (polytomous) logistic regression: both are 0.50 using this method (Hosmer and Lemeshow, 2000; Greenland and Finkle, 1996). The potential benefits and shortcomings of using emergency department patients as controls have been reviewed in more detail elsewhere (Cummings et al., 1998, 2001).

8. Misconceptions about case-control studies

Having reviewed case-control design, we now discuss some of the misconceptions related to case-control studies in the paper by Curnow (Curnow, 2005).

8.1. Misconception 1: studies should be rejected if they may be biased

Curnow suggested that the studies included in the Cochrane review "are vulnerable to bias" (Curnow, 2005). All studies, even randomized controlled trials, are vulnerable to bias. In evaluating any study, we need to consider what may be the size and direction of any bias. One of the reasons to conduct a systematic review or meta-analysis is to examine how the estimates of association vary between studies that will usually differ in many details of their execution. As we noted earlier, cohort and case-control studies are more vulnerable to bias due to confounding compared with randomized controlled trials, but we cannot rely on randomized trials to answer all questions. Case-control studies have made important contributions to public health on topics as diverse as the association between alcohol use and death in a traffic crash (Borkenstein et al., 1964), the relationship between aspirin and Reye's syndrome (Hurwitz et al., 1987), and the association between prone sleep position and sudden infant death syndrome (Guntheroth and Spiers, 1992).

8.2. Misconception 2: controls must be a random sample

Curnow suggested that the only valid control group would be a random sample of all potential controls (Curnow, 2005). While random sampling of controls is one method of avoiding selection bias, it is not the only method. One bike helmet study did use randomly sampled controls (conditional on frequency matching to the cases) from a health maintenance organization and found a prevalence of helmet use similar to that of emergency department controls (Thompson et al., 1989). Compared with a random sample of all bicyclists who crashed, emergency department controls may actually reduce bias from two sources. First, confounding by crash severity may be reduced because the controls, like the cases, were hurt sufficiently to seek medical care. Second, bias in recall of helmet use may be reduced, as both cases and controls have a prominent recent event (the emergency visit) to help their memory regarding helmet use and both were interviewed within a similar time interval after the crash.


8.3. Misconception 3: confusion regarding the study question

Curnow expressed concern that helmet-wearing cyclists may be more careful or less careful than other cyclists (Curnow, 2005). This may be true, but it has little relevance to the bicycle helmet studies that were summarized in the systematic review (Thompson et al., 1999). Those studies sought to assess whether helmet wearing was associated with head injury among cyclists who crashed. All cyclists crashed in the five studies summarized and the use of injured controls helped ensure that most crashes were not trivial. It is possible that crash severity differed by helmet use, but one study reported that adjusting for hitting a motor vehicle, estimated bicycle speed, type of surface hit, and damage to the bicycle made little difference to the odds ratio estimate (Thompson et al., 1996b); this implies either that the case and control groups differed little in regard to these factors, or these factors were not associated with helmet use, or both.

8.4. Misconception 4: misunderstanding the study outcome

Curnow stated that the bicycle helmet studies assumed that cyclists could only injure their brain by hitting their heads (Curnow, 2005). To the contrary, all the studies selected head-injured persons (including those with brain injury) as cases, regardless of how the injury occurred in the crash.

8.5. Misconception 5: failure to understand confounding in case-control studies

Curnow criticized (Curnow, 2005) one study (Thompson et al., 1996b) because the authors failed to show that the cases and controls had equal probabilities of hitting their heads. We suspect that no study could show this. It would be hard to measure the outcome of striking the head in any study. For cyclists with skull fractures, scalp lacerations, or contusions of the scalp, we could reasonably infer that they struck their heads. But some cyclists who hit their heads with little force may be unaware that this happened. If helmets prevent head injury, some helmeted cyclists may not know if they struck their heads. Some patients with brain injuries may have no memory of the event and, as Curnow points out, brain injury can occur without a blow to the head.

We suspect that Curnow's concern is that in an ideal study the helmeted and unhelmeted riders (not cases and controls) would have equal probability of having a head or brain injury, aside from any effect of helmet use. To put this a little differently, in a case-control study the ideal controls would have: (1) the same distribution of the main study exposure (in this instance the use of helmets) as the population from which the cases arose, (2) the same distribution of factors that influenced the likelihood of the exposure (helmet use), and (3) also be like the cases in regard to other factors that would influence the risk of the study outcome (aside from any causal relationship those factors might have with the exposure) (Koepsell and Weiss, 2003). In practice, few case-control studies will have a control group that satisfies all three criteria perfectly.

For this reason, investigators who conduct case-control studies often devote considerable effort to prevent or control for bias due to confounding. Confounding arises when the true association between an exposure (helmet use) and an outcome (brain injury) is distorted by the presence of some other factor which is distributed differently among the exposed and unexposed, and is also related to the occurrence of the outcome. Textbooks (Breslow and Day, 1980; Cummings et al., 2001; Kelsey et al., 1996; Koepsell and Weiss, 2003; MacMahon and Trichopoulos, 1996; Rothman and Greenland, 1998; Rothman, 2002; Schlesselman, 1982) devote considerable attention to confounding and additional information can be found in several articles (Greenland and Morgenstern, 2001; Maldonado and Greenland, 1993; Mickey and Greenland, 1989). For example, in studies of bicycle helmets and brain injuries, imagine that helmeted riders crashed at a slower speed than unhelmeted riders and that crashing at a slower speed reduced the risk of brain injury; if this were so, then crash speed could confound (distort or bias) the estimated risk ratio for the effects of helmet wearing on the outcome of brain injury. In this example, failure to account for confounding by crash speed would result in a risk ratio that would exaggerate any protective effect of helmets.

In case-control studies there are three basic strategies used to control for possible confounding by crash speed. One method is restriction; cases and controls could all be restricted to high-speed (or low-speed) crashes only. If the restricted speed range were sufficiently narrow, this would eliminate crash speed as a confounder. The second method is matching: one or more controls could be selected to match each case in regard to crash speed (using a sufficiently narrow range) and this matching would be accounted for in the statistical analysis. The third method is statistical adjustment, usually in regression.

Statistical adjustment of the risk ratio of interest is an excellent way of examining the data to see if there are important differences between the cases and controls that bias the estimated association. Authors sometimes assess whether a variable is a confounder by examining p-values for the association between the potential confounding variable and the outcome. But a large (i.e. not significant) p-value may miss important confounding. The p-value might be large if the variable is very common or rare in the data, or if the outcome is uncommon, even when the variable is a confounder (Lang et al., 1998; Mickey and Greenland, 1989). The p-value may also be large because a variable's relationship with the outcome is also confounded by other variables; the addition of other variables may unmask the confounding nature of a potential confounder. A small p-value may not indicate confounding by a variable if the variable is not related to the exposure. Instead of examining p-values, many analysts examine what happens to the risk ratio estimate when adjustment is made for a potential confounding variable. By adjusting we directly examine whether the variable actually confounds the association of interest; if the estimated risk ratio changes little with adjustment, then the variable does not distort the association of interest. If the estimated risk ratio changes to some degree, then there is at least some confounding present and, depending on the amount of change, we may adjust for this variable. This approach, with many details, is well described on pages 255–259 in a textbook (Rothman and Greenland, 1998).
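To make the crash-speed example concrete, the sketch below uses invented counts (our own, purely illustrative) in which helmets have the same association with injury within low-speed and high-speed crashes, but helmeted riders are overrepresented in low-speed crashes. It assumes plain Python only. The crude odds ratio exaggerates the protective effect, while a Mantel-Haenszel estimate that adjusts for crash speed recovers the within-stratum value, which is how comparing crude and adjusted estimates reveals confounding.

    # Hypothetical 2x2 tables (a, b, c, d) = (helmeted injured, helmeted uninjured,
    # unhelmeted injured, unhelmeted uninjured), stratified by crash speed.
    strata = {
        "low speed":  (20, 800, 10, 200),   # within-stratum odds ratio = 0.5
        "high speed": (20, 200, 80, 400),   # within-stratum odds ratio = 0.5
    }

    # Crude odds ratio, ignoring crash speed
    A = sum(s[0] for s in strata.values())
    B = sum(s[1] for s in strata.values())
    C = sum(s[2] for s in strata.values())
    D = sum(s[3] for s in strata.values())
    crude = (A * D) / (B * C)

    # Mantel-Haenszel odds ratio, adjusted for crash speed
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
    adjusted = num / den

    print(f"crude odds ratio = {crude:.2f}, speed-adjusted odds ratio = {adjusted:.2f}")
    # prints: crude odds ratio = 0.27, speed-adjusted odds ratio = 0.50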


In one study, three of us (Thompson et al., 1996b) used statistical adjustment with logistic regression to assess confounding. The published manuscript presented crude (unadjusted) risk ratio estimates as well as risk ratios adjusted for both age and whether the crash was with a motor vehicle. This adjustment revealed little evidence of confounding by these two variables; the crude risk ratio estimate was 0.32 for head injury and the adjusted risk ratio was 0.31. The publication reported that there was no important further change in the risk ratio when it was adjusted for sex, education, income, riding experience, hospital attended, estimated bicycle speed, type of surface hit, or damage to the bicycle. Thus the study presented evidence that the helmeted and unhelmeted riders were either similar in regard to several variables or the outcomes were unrelated to these variables, or both were true.

No study can be shown to be perfectly free of possible confounding. No matter how many variables were measured and examined, there is always the possibility of some residual confounding because a variable was not measured and used in the analysis, or because a confounding variable was measured with error, or because a measured variable was not adjusted for in a manner that removed all of its confounding influence. In interpreting estimates of association from any study, even a randomized trial, investigators and readers have to assess how likely it is that the results may be confounded, and what might be the direction and size of any confounding bias.

8.6. Misconception 6: interpreting differences in risk ratio estimates without considering the role of chance

The reported odds ratio for brain injury among helmet wearers compared with those not wearing helmets was 0.12 (95% CI 0.04–0.40) in a study published in 1989 (Thompson et al., 1989) and was 0.35 (95% CI 0.25–0.48) in a study published in 1996 (Thompson et al., 1996b). Curnow (Curnow, 2005) suggested this was evidence that helmets have become less effective over time due to less use of hard shell helmets. This interpretation is possible, but it is also possible that these two odds ratios are simply two different estimates of the same true association in the study populations; i.e., they differ by chance because studies have finite sample sizes. When a study is repeated, the estimate of association from the first study may not be exactly replicated in the second study. We can formally test whether two or more odds ratios are similar using statistical tests (Altman and Matthews, 1996; Altman and Bland, 2003; Egger et al., 2001; Matthews and Altman, 1996a, 1996b); the null hypothesis is that the observed odds ratios arose from populations in which the true (but unobserved) odds ratios were the same. The alternative hypothesis is that the observed odds ratios arose from populations in which the true odds ratios differed. A p-value less than 0.05 for a test of the similarity of two (or more) odds ratios is commonly interpreted as statistical evidence rejecting the hypothesis of no difference in the odds ratios; i.e., the tested odds ratios differed by an amount greater than expected by chance only.


Table 6
Results from an actual case-control study of helmet use, brain injury, and other head injury, among bicyclists who crashed, based on information from injured bicyclists seen in emergency departments

Helmet use        Brain injury    Other head injury    No head injury    Total
Hard shell        23              79                   741               843
Soft shell        22              40                   433               495
No shell/foam     14              37                   281               332
No helmet         141             394                  1137              1672
Total             200             550                  2592              3342

A p-value of 0.05 or greater suggests that the observed difference could reasonably be attributed to chance alone; i.e., the difference arose solely due to the finite sample sizes of the studies. A test that the odds ratios 0.12 and 0.35 for bicycle helmet effects were the same yielded a p-value of 0.08; by this standard the difference in these odds ratios might be due to chance. These results do not lend support to Curnow's suggestion that helmet protectiveness has decreased over time.

In the 1996 study (Thompson et al., 1996b), three of us reported that the adjusted odds ratio for a brain injury was 0.17 for hard shell helmets, 0.30 for soft shell helmets, and 0.36 for no shell or foam helmets. A p-value for a test that these odds ratios were from populations with the same true odds ratios was not statistically significant (p = 0.5); this means there was little statistical evidence that the three odds ratios differed from each other for reasons other than chance. To put this differently, if the three helmet types were equally protective, the observed differences might have easily arisen by chance in the study sample. Any protective effects that helmets offer against brain injury may vary by helmet type, but the evidence for this is currently weak; further studies of differences by helmet type would be a useful addition to our current knowledge.
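The comparison of two odds ratios described above can be carried out from published estimates and their confidence intervals alone, in the manner of Altman and Bland (2003). The sketch below is our illustration (assuming Python with scipy available): it recovers each standard error from the reported 95% CI and then tests the difference of the log odds ratios.

    import math
    from scipy.stats import norm

    def log_or_and_se(or_est, ci_low, ci_high):
        # Recover the standard error of the log odds ratio from a reported 95% CI
        se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
        return math.log(or_est), se

    # Reported estimates: 0.12 (0.04-0.40) in 1989 and 0.35 (0.25-0.48) in 1996
    log1, se1 = log_or_and_se(0.12, 0.04, 0.40)
    log2, se2 = log_or_and_se(0.35, 0.25, 0.48)

    z = (log1 - log2) / math.sqrt(se1**2 + se2**2)
    p = 2 * norm.sf(abs(z))
    print(f"z = {z:.2f}, p = {p:.2f}")
    # prints: z = -1.75, p = 0.08, matching the p-value quoted in the text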

8.7. Misconception 7: treating an injury outcome group as controls

The data regarding brain injuries, other head injuries, and controls from Thompson's paper (Thompson et al., 1996b) are in Table 6. The other-head-injury column excludes persons with brain injuries. Using odds ratios from multinomial logistic regression, we generated crude risk ratio estimates for both brain injury and for other head injuries, associated with wearing each helmet type compared with not wearing a helmet (Table 7).

Table 7
Crude risk ratios and 95% confidence intervals for brain injury or for head injury other than brain injury, by helmet type, from data in Table 6

Helmet type       Brain injury        Other head injury
Hard shell        0.25 (0.16–0.39)    0.31 (0.24–0.40)
Soft shell        0.41 (0.26–0.65)    0.27 (0.19–0.38)
No shell/foam     0.40 (0.23–0.71)    0.38 (0.26–0.55)
Any helmet        0.33 (0.24–0.45)    0.31 (0.25–0.38)

These are risk ratios for each outcome among bicyclists wearing each type of helmet compared with no helmet.


Table 8
Crude odds ratios and 95% confidence intervals for brain injury generated by Curnow's method

Helmet type       Brain injury
Hard shell        0.81 (0.49–1.34)
Soft shell        1.54 (0.88–2.68)
No shell/foam     1.06 (0.56–2.01)
Any helmet        1.06 (0.74–1.51)

The risk ratio estimates in Table 7 show some variation, but formal tests of any difference in the risk ratios by helmet type (p = 0.2) or by outcome type (p = 0.8) suggest that the differences found could easily be due to chance.

Curnow reported different ratios from the same data in Table 6 (see Table 1 of Curnow's paper (Curnow, 2005)). We show some of these, with confidence intervals that we have added, in Table 8. Curnow interpreted these odds ratios as representing the effects of helmet wearing on the outcome of brain injury (Curnow, 2005). They were produced from Table 6 data by ignoring the controls who had no head injury. Instead, Curnow divided the odds of each helmet type versus no helmet among brain-injured cyclists by the same odds among cyclists with other head injuries. In essence, the cyclists with other head injuries were treated as controls.

The odds ratios in Table 8 do not estimate the effect of helmet use on brain injury. They estimate whether helmets are more or less effective against brain injury than against head injury. The odds ratio of 0.81 for hard shell helmets estimates that a cyclist who crashed with a hard shell helmet was less likely to sustain a brain injury, compared with another type of head injury, than a cyclist who crashed without a helmet. Without other information this odds ratio cannot tell us if hard shell helmets protect against brain injury, are unrelated to the risk of brain injury, or actually increase the risk of brain injury; the odds ratio only suggests that users of helmets have less risk of brain injury compared with their risk of other head injuries. This design has no useful application to the bicycle helmet data, as we can easily estimate the association of helmet wearing with each type of head injury outcome, as we have done in Table 7. The type of case-control study design Curnow employed has occasionally been used and its merits (or lack of merit) have been described on pages 389–391 of a textbook (Koepsell and Weiss, 2003). The additional analysis in Table 8 adds no new information; the odds ratio of 0.81 for hard shell helmets in Table 8 can be derived by dividing the brain injury risk ratio for hard shell helmets in Table 7 by the other head injury risk ratio for hard shell helmets in Table 7: 0.25/0.31 = 0.81. The odds ratios in Table 8, with their wide confidence intervals, are simply an inelegant way of showing that we cannot distinguish with available data whether the apparent protection offered by bicycle helmets is greater for brain injuries or other head injuries.
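The relationship between Tables 7 and 8 can be checked directly from the Table 6 counts. The sketch below is our own verification (plain Python, point estimates only): each outcome-specific crude risk ratio is an odds ratio against the no-head-injury controls, and Curnow's ratio for a helmet type equals the ratio of its two outcome-specific estimates.

    # Table 6 counts: helmet type -> (brain injury, other head injury, no head injury)
    table6 = {
        "hard shell": (23, 79, 741),
        "no helmet": (141, 394, 1137),
    }

    brain_h, other_h, none_h = table6["hard shell"]
    brain_u, other_u, none_u = table6["no helmet"]

    # Crude outcome-specific estimates (as in Table 7), using no-head-injury cyclists as controls
    rr_brain = (brain_h / none_h) / (brain_u / none_u)    # 0.25
    rr_other = (other_h / none_h) / (other_u / none_u)    # 0.31

    # Curnow's ratio (as in Table 8): brain-injured cyclists versus other-head-injury cyclists
    curnow = (brain_h / other_h) / (brain_u / other_u)    # 0.81

    print(f"brain injury: {rr_brain:.2f}, other head injury: {rr_other:.2f}, "
          f"Curnow's ratio: {curnow:.2f}, ratio of the two estimates: {rr_brain / rr_other:.2f}")
    # prints 0.25, 0.31, 0.81, 0.81 - Curnow's figure is simply the ratio of the two estimates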

9. Conclusions

Due to the practical difficulties involved in conducting large randomized controlled trials or cohort studies to estimate the effectiveness of bicycle helmets in preventing head and brain injuries among riders who crash, case-control studies are a scientifically valid alternative design. The case-control studies conducted to date, summarized in the Cochrane systematic review (Thompson et al., 1999), provide evidence for a protective effect of helmets in preventing injuries to the brain, head, and face among bicycle riders who crash. This review of the proper application of case-control methodology indicates that Curnow's conclusions (Curnow, 2005) are based on a number of misconceptions that we have attempted to clarify.

References

Altman, D.G., Bland, J.M., 2003. Interaction revisited: the difference between two estimates. BMJ 326, 219.
Altman, D.G., Matthews, J.N., 1996. Statistics notes. Interaction 1: heterogeneity of effects. BMJ 313, 486.
Armenian, H.K. (Ed.), 1994. Applications of the case-control method. Epidemiol. Rev. 16 (1).
Borkenstein, R.F., Crowther, R.F., Shumate, R.P., Ziel, W.B., Zylman, R., 1964. The Role of the Drinking Driver in Traffic Accidents. Indiana University, Department of Police Administration, Bloomington, Indiana, pp. 1–245.
Breslow, N.E., Day, N.E., 1980. Statistical Methods in Cancer Research, vol. 1. The Analysis of Case-Control Studies. International Agency for Research on Cancer, Lyon, France.
Cameron, M.C., Finch, C., Vulcan, P., 1994. The protective performance of bicycle helmets introduced at the same time as the bicycle helmet wearing law in Victoria. Australian Road Research Board Ltd., Victoria, Australia.
Ching, R.P., Thompson, D.C., Thompson, R.S., Thomas, D.J., Chilcott, W.C., Rivara, F.P., 1997. Damage to bicycle helmets involved with crashes. Accident Anal. Prev. 29, 555–562.
Cornfield, J., 1951. A method of estimating comparative rates from clinical data. Applications to cancer of the lung, breast, and cervix. J. Natl. Cancer Inst. 11, 1269–1275.
Cummings, P., Koepsell, T.D., Roberts, I., 2001. Case-control studies in injury research. In: Rivara, F.P., Cummings, P., Koepsell, T.D., Grossman, D.C., Maier, R.V. (Eds.), Injury Control: A Guide to Research and Program Evaluation. Cambridge University Press, New York, NY, pp. 139–156.
Cummings, P., Koepsell, T.D., Weiss, N.S., 1998. Studying injuries with case-control methods in the emergency department. Ann. Emerg. Med. 31, 99–105.
Curnow, W.J., 2005. The Cochrane Collaboration and bicycle helmets. Accident Anal. Prev. 37, 569–573.
Egger, M., Smith, G.D., Altman, D.G., 2001. Systematic reviews in health care: meta-analysis in context. BMJ Publishing Group, London.
Greenland, S., Finkle, W.D., 1996. A case-control study of prosthetic implants and selected chronic diseases. Ann. Epidemiol. 6, 530–540.
Greenland, S., Morgenstern, H., 2001. Confounding in health research. Annu. Rev. Public Health 22, 189–212.
Guntheroth, W.G., Spiers, P.S., 1992. Sleeping prone and the risk of sudden infant death syndrome. JAMA 267, 2359–2362.
Haddon Jr., W., Valien, P., McCarroll, J.R., Umberger, C.J., 1961. A controlled investigation of the characteristics of adult pedestrians fatally injured by motor vehicles in Manhattan. J. Chron. Dis. 14, 655–678.
Holcomb, R.L., 1938. Alcohol in relation to traffic accidents. J. Am. Med. Assoc. 111, 1076–1085.
Hosmer, D.W., Lemeshow, S., 2000. Applied Logistic Regression, 2nd ed. John Wiley & Sons, New York, pp. 260–273.
Hurwitz, E.S., Barrett, M.J., Bregman, D., Gunn, W.J., Pinsky, P., Schonberger, L.B., Drage, J.S., Kaslow, R.A., Burlington, D.B., Quinnan, G.V., et al., 1987. Public health service study of Reye's syndrome and medications. Report of the main study. JAMA 257, 1905–1911.

Kelsey, J.L., Whittemore, A.S., Evans, A.S., Thompson, W.D., 1996. Methods in Observational Epidemiology, 2nd ed. Oxford University Press, New York, pp. 188–243.
Koepsell, T.D., Weiss, N.S., 2003. Epidemiologic Methods: Studying the Occurrence of Illness. Oxford University Press, New York, pp. 105–108, 247–280, 374–402.
Lang, J.M., Rothman, K.J., Cann, C.I., 1998. That confounded P-value [editorial]. Epidemiology 9, 7–8.
MacMahon, B., Trichopoulos, D., 1996. Epidemiology: Principles and Methods, 2nd ed. Little, Brown, Boston, pp. 229–302.
Maimaris, C., Summers, C.L., Browning, C., Palmer, C.R., 1994. Injury patterns in cyclists attending an accident and emergency department: a comparison of helmet wearers and non-wearers. BMJ 308, 1537–1540.
Maldonado, G., Greenland, S., 1993. Simulation study of confounder selection strategies. Am. J. Epidemiol. 138, 923–936.
Mantel, N., Haenszel, W., 1959. Statistical aspects of the analysis of data from retrospective studies. J. Natl. Cancer Inst. 22, 719–748.
Matthews, J.N., Altman, D.G., 1996a. Statistics notes. Interaction 2: compare effect sizes not P values. BMJ 313, 808.
Matthews, J.N., Altman, D.G., 1996b. Interaction 3: how to examine heterogeneity. BMJ 313, 862.
McCarroll, J.R., Haddon Jr., W., 1962. A controlled study of fatal automobile accidents in New York City. J. Chron. Dis. 15, 811–826.
McDermott, F.T., Lane, J.C., Brazenor, G.A., 1993. The effectiveness of bicyclist helmets: a study of 1710 casualties. J. Trauma 34, 834–845.
Mickey, R.M., Greenland, S., 1989. The impact of confounder selection criteria on effect estimation. Am. J. Epidemiol. 129, 125–137.
Nirula, R., Kaufman, R., Tencer, A., 2003. Traumatic brain injury and automotive design: making motor vehicles safer. J. Trauma 55, 844–848.
Roberts, I., 1995. Methodologic issues in injury case-control studies. Injury Prev. 1, 45–48.
Rothman, K.J., 2002. Epidemiology: An Introduction. Oxford University Press, New York.


Rothman, K.J., Greenland, S., 1998. Modern Epidemiology, 2nd ed. Lippincott-Raven, Philadelphia, pp. 62, 93–161, 255–259.
Ryan, G.A., McLean, A.J., Vilenius, A.T., Kloeden, C.N., Simpson, D.A., Blumbergs, P.C., Scott, G., 1994. Brain injury patterns in fatally injured pedestrians. J. Trauma 36, 469–476.
Schlesselman, J.A., 1982. Case-Control Studies: Design, Conduct, Analysis. Oxford University Press, New York.
Smith, M.L., Grady, M.S., 2005. Neurosurgery. In: Pollock, R.E. (Ed.), Schwartz's Principles of Surgery. McGraw-Hill, New York.
Thompson, D.C., Nunn, M.E., Thompson, R.S., Rivara, F.P., 1996a. Effectiveness of bicycle safety helmets in preventing serious facial injury. JAMA 276, 1974–1975.
Thompson, D.C., Rivara, F.P., Thompson, R., 1999. Helmets for preventing head and facial injuries in bicyclists. Cochrane Database Syst. Rev. (4) (Art. No.: CD001855, DOI: 10.1002/14651858.CD001855).
Thompson, D.C., Rivara, F.P., Thompson, R., 2004. Helmets for preventing head and facial injuries in bicyclists. Available at: http://www.cochranefeedback.com/cf/cda/citation.do?id=9316#931612/7/2005.
Thompson, D.C., Rivara, F.P., Thompson, R.S., 1996b. Effectiveness of bicycle helmets in preventing head injuries: a case-control study. JAMA 276, 1968–1973.
Thompson, R.S., Rivara, F.P., Thompson, D.C., 1989. A case-control study of the effectiveness of bicycle safety helmets. N. Engl. J. Med. 320, 1361–1367.
Wacholder, S., McLaughlin, J.K., Silverman, D.T., Mandel, J.S., 1992. Selection of controls in case-control studies. Part I. Principles. Am. J. Epidemiol. 135, 1019–1028.
Wacholder, S., Silverman, D.T., McLaughlin, J.K., Mandel, J.S., 1992a. Selection of controls in case-control studies. Part II. Types of controls. Am. J. Epidemiol. 135, 1029–1041.
Wacholder, S., Silverman, D.T., McLaughlin, J.K., Mandel, J.S., 1992b. Selection of controls in case-control studies. Part III. Design options. Am. J. Epidemiol. 135, 1042–1050.