Vaccine 29 (2011) 1935–1940
Influenza vaccination status is not associated with influenza testing among children: Implications for observational studies of vaccine effectiveness

Jill M. Ferdinands a,∗, Edward A. Belongia b, Chinyelu Nwasike c, David K. Shay a

a Influenza Division, U.S. Centers for Disease Control and Prevention, Atlanta, GA 30333, USA
b Marshfield Clinic Research Foundation and Marshfield Clinic, Marshfield, WI, USA
c School of Public Health, University of Massachusetts, Amherst, MA, USA

Article history: Received 3 May 2010; received in revised form 15 December 2010; accepted 22 December 2010; available online 8 January 2011.
Keywords: influenza; influenza vaccines; bias (epidemiology)

Abstract

Estimates of influenza vaccine effectiveness from observational studies that rely on physician-ordered influenza tests may be biased if physician testing behavior is influenced by patient vaccination status. To assess the potential for differential diagnostic testing of children by vaccine status, we examined the association between receipt of a commercial influenza diagnostic test and influenza vaccination among children aged 6–59 months who sought care at the Marshfield Clinic for acute respiratory or febrile illnesses during the 2004–05 through 2007–08 influenza seasons. There was no significant association between prior influenza vaccination and receipt of a diagnostic test for influenza. These findings suggest that estimates of vaccine effectiveness derived from observational studies among children are unlikely to be biased due to differential diagnostic testing.

© 2011 Published by Elsevier Ltd.
1. Introduction

Influenza is an important cause of morbidity among young children, with rates of influenza-associated hospitalization in the United States estimated at 0.6–2.7 per thousand in children under five years of age [1–3]. The primary prevention strategy for influenza is vaccination, and annual influenza vaccination has been recommended for all U.S. children aged 6–59 months since early 2006 [4], for all children aged 6 months to 18 years since 2008 [5], and for everyone over the age of 6 months since 2010 [6]. Because it is difficult to justify randomized controlled trials among individuals for whom influenza vaccination is already recommended, studies of influenza vaccine effects among U.S. children must rely on observational studies, with the case–control study being the most commonly used design.

Biased ascertainment of cases may occur in case–control studies of vaccine effectiveness [7]. Although laboratory-confirmed influenza outcomes are generally preferred over syndromic outcomes for assessing vaccine effectiveness because of their greater specificity, the use of laboratory confirmation to determine case eligibility can introduce bias if testing is influenced by patient vaccination status. For example, if case eligibility in a study of influenza vaccine effectiveness depends on laboratory confirmation of influenza infection, and laboratory confirmation depends on a physician's likelihood of ordering a test for influenza, then bias will be introduced if a physician is less likely to order diagnostic testing for a subject known to be vaccinated. If cases consequently have lower rates of vaccination, the association between influenza and lack of vaccination will appear stronger and estimates of vaccine effectiveness will be inflated. Although bias due to differential diagnostic testing has been recognized as a potential problem affecting estimates of vaccine effectiveness [8,9], to our knowledge there has been no empirical confirmation of its existence or estimate of its potential magnitude.

The objective of this study was to investigate the presence and magnitude of differential diagnostic testing for influenza among children with acute respiratory infection or febrile illness. To assess this possible bias, we examined the association between receipt of a clinical diagnostic test for influenza and influenza vaccination status among children 6–59 months old who sought care at the Marshfield Clinic for acute respiratory or febrile illness during the 2004–05 through 2007–08 influenza seasons. A clinical diagnostic test for influenza was defined as a clinician-ordered immunofluorescence test (i.e., DFA) or a clinician-ordered commercial rapid influenza diagnostic test; hereafter, “clinical diagnostic tests” refers collectively to both types of tests.

∗ Corresponding author at: Influenza Division, U.S. Centers for Disease Control and Prevention, 1600 Clifton Road, NE, Mailstop A-32, Atlanta, GA 30333, USA. Tel.: +1 404 639 2814; fax: +1 404 639 3866. E-mail address: [email protected] (J.M. Ferdinands).
doi:10.1016/j.vaccine.2010.12.098
2. Methods

2.1. Source population

The source population included residents of the Marshfield Epidemiologic Study Area (MESA), a population-based cohort of residents of Marshfield, Wisconsin, and surrounding areas. The MESA population has been described in detail elsewhere [10]. Briefly, residents of this area receive almost all health care from Marshfield Clinic facilities, including ≥90% of outpatient visits. During the 2004–05 through 2007–08 influenza seasons, residents for whom vaccination was recommended and who had a clinical encounter for acute respiratory or febrile illness were recruited to participate in a study of trivalent inactivated influenza vaccine effectiveness [11]. Patients who reported feverishness, chills, or cough with less than eight days between symptom onset and clinical encounter were eligible for enrollment.

Three indicators were used to determine the beginning of influenza season in the study area: Wisconsin state influenza virus laboratory data, local data on influenza-like illness, and results of testing done by local clinicians, primarily using commercial rapid diagnostic tests. Enrollment of patients began when two or more of these indicators suggested that the influenza season had started, with the goal of initiating RT-PCR testing as soon as there was an indication of widespread influenza circulation. This study was approved by the Marshfield Clinic institutional review board.

The analysis reported here is limited to children 6–59 months of age who were enrolled in the study of vaccine effectiveness. A nasal swab sample was taken from all participating children, usually immediately following their clinical encounter. Viral culture and real-time reverse transcriptase polymerase chain reaction (rRT-PCR) were used to test for influenza infection for all study participants.
Treating physicians received a courtesy email notifying them of positive rRT-PCR results, but research culture and rRT-PCR test results were not included in patient records. Study recruitment, consenting, and swab collection were performed by trained research coordinators, and clinicians were not involved. All clinicians were encouraged to maintain their routine testing practices throughout the duration of the vaccine effectiveness study. Thus, some participating children were independently tested for influenza during the medical care visit by order of the clinician. The “patient dashboard,” a component of the electronic medical record that provides the patient’s influenza vaccination status, was routinely viewed by primary care clinicians during or prior to each patient’s clinical encounter.

All clinician-ordered clinical diagnostic tests were performed by clinical laboratory personnel within 2–4 h of receipt of the specimen by the laboratory. During normal working hours, Monday through Friday, only direct fluorescent antibody (DFA) staining was performed; results were available within 4 h of receipt of the specimen. During evening and weekend hours, only commercial rapid diagnostic tests were performed; results were available within 2 h of receipt of the specimen. Commercial rapid diagnostic tests used included the Binax NOW Flu A and Flu B (Binax, Portland, ME) and Directigen Flu A+B (Becton Dickinson, Sparks, MD) tests.

2.2. Variable definitions

For this analysis, children for whom 2 doses of vaccine were recommended were classified as fully vaccinated if they had received both doses, as partially vaccinated if they had received one dose, and as unvaccinated if they had received no vaccination. Influenza vaccination status was determined by a real-time,
internet-based vaccination registry used by all public and private vaccination providers serving the study population. A validation of this registry found that it correctly identified 96–98% of influenza vaccinations delivered to this study population [12]. Individuals were classified as having a medical condition that placed them at high risk for complications of influenza infection if they had two or more clinical encounters during the preceding 12 months involving an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code for cardiac disease, pulmonary disease, renal disease, liver disease, diabetes mellitus, immunosuppressive disorders, malignancies, neurological/musculoskeletal disease, metabolic disease, cerebrovascular disease, or circulatory system disease, based on ACIP recommendations [13]. Individuals were classified as having public insurance, private insurance, or a combination of public and private insurance. The number of health care visits in the preceding 12 months, defined as all interactions with the health care system (including laboratory, physical therapy, radiography, etc.), was used as a proxy for propensity to seek health care and was categorized by quartile for use in regression modeling. When analysing the setting in which the clinical encounter occurred, the pediatric outpatient clinic (including combined medical and pediatric clinics) was used as the reference group.
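The high-risk classification rule described above (two or more encounters in the preceding 12 months carrying a qualifying ICD-9-CM code) can be sketched as follows; this is our own illustration, and the code prefixes shown are a hypothetical subset, not the study's actual code list:

```python
from datetime import date, timedelta

# Hypothetical ICD-9-CM prefixes for qualifying conditions (illustrative subset
# only -- e.g., diabetes, heart failure, asthma -- NOT the study's code list).
HIGH_RISK_PREFIXES = ("250", "428", "493")

def is_high_risk(encounters, index_date):
    """Apply the rule described above: two or more clinical encounters in the
    12 months before index_date involving a qualifying ICD-9-CM code.

    encounters: list of (encounter_date, icd9_code) tuples.
    """
    window_start = index_date - timedelta(days=365)
    qualifying = [
        (d, code) for d, code in encounters
        if window_start <= d < index_date and code.startswith(HIGH_RISK_PREFIXES)
    ]
    return len(qualifying) >= 2
```

For example, a child with diabetes and asthma encounters within the window would qualify, while a single qualifying encounter would not.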
2.3. Data analysis

Bivariate analyses and multivariate logistic regression were used to examine the association between receipt of a clinician-ordered clinical diagnostic influenza test and influenza vaccination status, controlling for age, presence of fever, duration of symptoms prior to clinical encounter, history of one or more high-risk conditions, medical care resource utilization, type of department in which the clinical encounter occurred, insurance status, week of influenza season, and year of enrollment. Analyses were conducted with SAS version 9.2 (SAS Institute, Cary, NC).
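As a rough illustration of the crude (unadjusted) association examined here, the 2×2 odds ratio of testing by full vaccination status can be computed directly from the counts reported in Table 1; this Python sketch is our own illustration and omits the covariate adjustment used in the published model (which yielded an aOR of 0.9, 95% CI 0.6–1.2):

```python
import math

# Counts from Table 1: fully vaccinated vs. unvaccinated children,
# by receipt of a clinician-ordered clinical diagnostic influenza test.
tested_vacc, untested_vacc = 89, 608
tested_unvacc, untested_unvacc = 81, 457

# Crude odds ratio: odds of being tested among fully vaccinated children
# divided by the odds among unvaccinated children.
odds_ratio = (tested_vacc / untested_vacc) / (tested_unvacc / untested_unvacc)

# Approximate 95% CI on the log-odds scale (Woolf method).
se = math.sqrt(1 / tested_vacc + 1 / untested_vacc
               + 1 / tested_unvacc + 1 / untested_unvacc)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
```

The crude odds ratio comes out near 0.83 with a confidence interval spanning 1, consistent with the adjusted estimate reported in the Results.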
2.4. Sensitivity analysis

To examine the magnitude of potential bias from differential diagnostic evaluation more fully, we developed a decision tree model that mimics the structure of a pediatric influenza vaccine effectiveness case–control study and performed a sensitivity analysis to examine the impact of differential diagnostic evaluation on simulated estimates of vaccine effectiveness. In the simulated case–control study, children with acute respiratory infection who received a rapid diagnostic test for influenza at clinician discretion and tested positive were defined as cases; controls were age-matched children without influenza or a respiratory condition. We assumed a true VE of 70%. By varying the likelihood of receiving a diagnostic test for influenza based on vaccine status, we were able to examine the impact of this variation on the observed VE. Because we were interested in the influence of differential diagnostic testing, we assumed perfect test sensitivity and specificity. Values of other factors known to influence VE, including vaccination coverage and the probability that a child with symptoms of acute respiratory infection truly has influenza, were based on information from observational studies of VE and the clinical literature. We simulated the total number of cases, the number of vaccinated cases, and the ratio of vaccinated to unvaccinated cases. This ratio multiplied by the ratio of unvaccinated to vaccinated controls yielded the observed odds ratio; 1 minus the observed odds ratio gave the observed VE. Bias was measured as the difference between the true VE (70%) and the estimate of observed VE generated by the simulation model.
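The mechanics of this sensitivity analysis can be sketched in a few lines. The function below is our own simplified reimplementation, not the authors' decision tree model: it assumes perfect test sensitivity and specificity, and the coverage and influenza-prevalence values are illustrative placeholders (under these assumptions they cancel out of the odds ratio, so only the testing differential drives the bias):

```python
def observed_ve(true_ve, test_ratio_unvacc_to_vacc,
                coverage=0.5, p_flu_given_ari=0.3, n_ari=10_000,
                p_test_unvacc=0.4):
    """Expected observed VE from a simulated case-control study in which
    cases are children testing positive on a clinician-ordered test.
    Assumes a perfectly sensitive and specific test, and controls whose
    vaccination odds mirror the source population (coverage)."""
    true_or = 1 - true_ve
    p_test_vacc = p_test_unvacc / test_ratio_unvacc_to_vacc
    # Expected true influenza cases among ARI presenters, by vaccine status.
    flu_unvacc = n_ari * (1 - coverage) * p_flu_given_ari
    flu_vacc = n_ari * coverage * p_flu_given_ari * true_or
    # Detected (enrolled) cases depend on the probability of being tested.
    cases_vacc = flu_vacc * p_test_vacc
    cases_unvacc = flu_unvacc * p_test_unvacc
    # Observed odds ratio = (vacc/unvacc cases) x (unvacc/vacc controls).
    observed_or = (cases_vacc / cases_unvacc) * ((1 - coverage) / coverage)
    return 1 - observed_or
```

With a true VE of 70%, this sketch reproduces the scenarios reported in the Discussion: a twofold testing differential yields an observed VE of 85%, while a 10% differential yields about 73%.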
Table 1. Study sample characteristics.

| Characteristic | Tested, Yes (n = 205) | Tested, No (n = 1205) | All subjects (n = 1410) |
| --- | --- | --- | --- |
| Age group (mo), N (%)* | | | |
| — 6–23 | 137 (67) | 692 (57) | 829 (59) |
| — 24–59 | 68 (33) | 513 (43) | 581 (41) |
| Male, N (%) | 110 (54) | 639 (53) | 749 (53) |
| High-risk condition, N (%)** | 38 (19) | 132 (11) | 170 (12) |
| Influenza vaccination status, N (%) | | | |
| — Full | 89 (43) | 608 (50) | 697 (49) |
| — Partial | 35 (17) | 140 (12) | 175 (12) |
| — None | 81 (40) | 457 (38) | 538 (38) |
| Fever or feverishness, N (%)*** | 194 (95) | 943 (78) | 1137 (81) |
| Symptoms for less than two days at time of visit, N (%)*** | 122 (60) | 466 (39) | 588 (42) |
| Insurance, N (%) | | | |
| — Private | 114 (56) | 659 (55) | 773 (55) |
| — Public | 59 (29) | 325 (27) | 384 (27) |
| — Both | 31 (15) | 214 (18) | 245 (17) |
| — None or missing | 1 (<1) | 7 (<1) | 8 (<1) |
| Influenza season, N (%)*** | | | |
| — 04–05 | 70 (34) | 246 (20) | 316 (22) |
| — 05–06 | 10 (5) | 84 (7) | 94 (7) |
| — 06–07 | 50 (24) | 479 (40) | 529 (38) |
| — 07–08 | 75 (37) | 396 (33) | 471 (33) |
| Department type, N (%)*** | | | |
| — Pediatrics or medical/pediatrics | 101 (49) | 667 (55) | 768 (54) |
| — Family practice | 45 (22) | 247 (21) | 292 (21) |
| — Urgent care | 30 (15) | 233 (19) | 263 (19) |
| — Emergency department | 18 (9) | 39 (3) | 57 (4) |
| — Inpatient | 10 (5) | 3 (<1) | 13 (<1) |
| — Other | 1 (<1) | 16 (1) | 17 (1) |
| Medical care visits in past 12 months, median (IQR)*** | 13 (8–22) | 11 (7–19) | 12 (7–19) |
| Week of influenza season, median (IQR) | 5 (3–7) | 4 (2–7) | 4 (2–7) |

"Tested" indicates receipt of a clinician-ordered clinical diagnostic influenza test. * p < 0.05, ** p < 0.01, *** p < 0.001 for test of difference using chi-square test of heterogeneity for R×2 contingency table (categorical variables) or ANOVA F test (continuous variables, transformed as needed); IQR = interquartile range.
3. Results

The study sample included 1410 children, of whom 697 (49%) and 175 (12%) were fully or partially vaccinated for influenza, respectively. Two hundred and five (15%) children received a clinician-ordered clinical diagnostic influenza test during their medical visit. Sample characteristics are summarized in Table 1.

In bivariate analyses, children receiving a clinical diagnostic test for influenza were more likely to be 6–23 months old (p < 0.05), to have a high-risk condition (p < 0.01), and to have greater use of medical care within the preceding 12 months (p < 0.01). In addition, children who presented with fever or feverishness, those presenting with symptoms of less than two days' duration, and children seen in inpatient or emergency departments were more likely to receive a clinical diagnostic test for influenza (p < 0.001 for all).

In multivariate analysis (n = 1400), receipt of a clinical diagnostic test for influenza was not significantly associated with full vaccination (adjusted odds ratio [aOR] = 0.9, 95% confidence interval [CI] 0.6–1.2; Fig. 1) or partial vaccination (aOR = 1.3, 95% CI 0.8–2.2; p = 0.23) when controlling for age, presence of fever/feverishness, duration of symptoms prior to clinical encounter, history of one or more high-risk conditions, and type of department in which the clinical encounter occurred. Insurance status and week of influenza season were eliminated from the final multivariate model because these factors were neither independent predictors of clinician-ordered diagnostic influenza testing nor confounders of the main association. Year of enrollment and medical resource utilization were associated with influenza testing and vaccination. However, the magnitude and statistical significance of the association between receipt of a clinical diagnostic test for influenza and vaccination status were virtually unchanged whether these factors were included in the model or not.
Because these factors were not predictors of interest and their exclusion did not change
the results of the model, they were eliminated from the final model. Presence of fever (aOR = 4.9; 95% CI 2.5–9.4), duration between symptom onset and clinical encounter of ≤2 days (aOR = 2.1; 95% CI 1.5–2.9), history of a high-risk condition (aOR = 1.9; 95% CI 1.2–2.9), and age 6–23 months (aOR = 1.6; 95% CI 1.1–2.2) were positively associated with receipt of a clinician-ordered clinical diagnostic test for influenza. Compared with being seen in a pediatric outpatient clinic, children seen as inpatients (aOR = 14.6; 95% CI 3.8–56.0) or in an emergency department (aOR = 2.5; 95% CI 1.4–4.8) were more likely to be tested for influenza. We found no interactions between vaccination status and age, history of a high-risk condition, or year of enrollment.

Fig. 1. Odds of receiving a clinician-ordered clinical diagnostic test for influenza during a medical visit for acute respiratory or febrile illness for children 6–59 months old in a population cohort in Marshfield, Wisconsin, 2004–05 to 2007–08 (n = 1400). Shown are the adjusted odds ratios with bars indicating 95% confidence intervals.

4. Discussion

We found no significant association between influenza vaccination status and receipt of a clinician-ordered clinical diagnostic test for influenza among children 6–59 months old who presented to medical care with acute respiratory infection or febrile illness. Although previous investigators have acknowledged the potential for bias in estimates of vaccine effectiveness due to differential diagnostic testing [8,9,14], this study is, to our knowledge, the first to quantitatively evaluate the possible magnitude of any differential diagnostic testing for influenza by vaccine status. Our results address a theoretical concern with the use of clinician-ordered influenza tests in studies of the effectiveness of influenza vaccines.

Differential diagnostic testing based on clinician knowledge of influenza vaccine status could result in biased estimates of vaccine effectiveness. For example, if physicians are less likely to order a diagnostic test for a vaccinated child (which we believe is the more likely direction of bias), then differential diagnostic testing would lead to an overestimation of the true vaccine effectiveness; the greater the magnitude of differential testing between vaccinated and unvaccinated patients, the greater the degree of this overestimation.

Using a decision tree model that mimics the structure of a pediatric influenza vaccine effectiveness case–control study and an assumed true influenza vaccine effectiveness of 70%, we performed a sensitivity analysis to examine the impact of differential diagnostic evaluation on simulated estimates of vaccine effectiveness. Our results suggest that if unvaccinated patients are twice as likely to receive a clinician-ordered influenza test as vaccinated patients, then a case–control study of vaccine effectiveness that relies on clinician-ordered diagnostic tests will overestimate vaccine effectiveness by about 15 percentage points (the study will generate an estimated VE of 85% compared with a true VE of 70%). If the degree of testing differential is much smaller, for instance, if unvaccinated patients are only 10% more likely to be tested, then the degree of overestimation is minimal (an estimated VE of 73% compared with a true VE of 70%). Although these simulations represent a simplistic scenario in which influenza diagnostic tests are assumed to have perfect sensitivity and specificity, they provide insight into the potential amount and direction of bias that could result from differential diagnostic testing. Our simulation results suggest that although small differences in rates of diagnostic testing could influence estimates of influenza vaccine effectiveness, the magnitude of any potential bias is likely less than that created by misclassification of influenza case status because of the use of tests with imperfect sensitivity and specificity [9].

Existing evidence on the possible influence of vaccination status on physician utilization of influenza diagnostic tests is equivocal.
Surveys of provider practices and attitudes have identified factors that influence physicians’ use and choice of diagnostic tests for influenza, including duration of illness, desire to elucidate disease etiology, need to determine appropriate antiviral treatment options, and clinical practice setting [15,16]. These surveys did not identify vaccination status as a factor affecting testing decisions; however, they did not explicitly ask about this possibility. On the other hand, a web-based survey of 123 primary care providers (PCPs) in the Marshfield Clinic in 2009 found that 43% of providers indicated that patient vaccination status would influence their decision to order a diagnostic test for influenza, with 11% reporting that receipt of influenza vaccination by the patient would increase the likelihood of ordering a test and 32% reporting that vaccination would decrease the likelihood of ordering a test. The remaining 57%
of surveyed PCPs said that patient vaccination status would have no effect on their decision to order a test [unpublished data]. However, the survey response rate was less than 40%, and it is unclear if these results reflect the attitudes and practices of nonrespondents.

The only previously published report describing an association between influenza vaccination status and diagnostic testing for influenza in a clinical setting used viral culture. Scharfstein et al. [17] found that among healthy children presenting with fever or respiratory symptoms during the 2000–01 influenza season, vaccinated children were significantly less likely to have been cultured for influenza (5.6%) than unvaccinated children (7.7%; p < 0.05). Whether vaccination status directly influenced diagnostic evaluation decisions or if this pattern was driven by the fact that unvaccinated children may have had more severe disease is unknown.

A strength of our study is that it was conducted in a medical setting with access to both an electronic medical record and a regional electronic, real-time vaccine registry. Thus, we were able to link physician test utilization data with patient vaccination status, thereby allowing us to examine differential diagnostic testing patterns using empirical data rather than physician self-report of testing behavior. Furthermore, for differential diagnostic testing to occur, providers need to be aware of vaccination status, which is arguably more likely where an electronic medical record is available and vaccination is promoted via a centralized care delivery system. All Marshfield Clinic primary care providers were provided with influenza vaccination status as part of the electronic medical record for each patient, but we did not confirm that providers were aware of the information.
However, in our web-based survey of 123 Marshfield Clinic primary care providers, 61 (49%) responded that they routinely checked vaccination status in patients presenting with acute respiratory infection [unpublished data], suggesting that vaccination status was likely to have been known by the treating clinicians for a substantial proportion of patients in our sample.

A limitation of the study is that the research RT-PCR testing may have changed routine physician testing behavior, since physicians knew they would be notified of positive RT-PCR results for influenza. However, because physicians typically received positive RT-PCR results one to two days after the clinical encounter, it seems unlikely that physicians would have foregone use of clinician-ordered diagnostic influenza testing in favor of waiting for RT-PCR results, especially if they intended to treat these patients
for influenza with antiviral medication. Unfortunately, we did not have information about antiviral treatment with which to explore the association of clinician-ordered testing with use of antiviral therapy. Because results of the clinician-ordered diagnostic test for influenza were available within 4 h at most, clinicians were able to use test results to guide treatment choices (although this may have required follow-up telephone contact with some patients). We did not have data to analyse separately the use of immunofluorescence tests versus commercial rapid antigen tests; however, because the two types of tests provided results in roughly the same amount of time, this distinction has little bearing on our overall findings. Because the Marshfield Clinic system has many unique features, the results observed in this population need to be validated in other health care systems before being considered generalizable.

We did not detect a difference by influenza vaccine status in clinicians' utilization of clinical diagnostic influenza tests among children presenting for care with acute respiratory symptoms or febrile illness. These results may provide some reassurance that this potential source of bias did not affect the results of published observational studies that have documented a substantial influenza vaccine effect. Although our findings suggest that vaccine effectiveness estimates derived from case–control studies are unlikely to be strongly biased because of differential diagnostic testing, these estimates may be biased by other factors, including the imperfect specificity of many commercial rapid diagnostic influenza tests [9,18–23]. In clinical practice, the greater concern with commercial rapid diagnostic tests is typically inadequate sensitivity, which leads to false-negative test results and missed opportunities for timely antiviral treatment.
However, for the assessment of vaccine effectiveness, it is imperfect test specificity that has greater potential to bias VE estimates. Even a slight reduction in test specificity can lead to a substantial number of influenza-negative patients being erroneously enrolled as cases in a case–control study of VE, which dilutes the case set with non-diseased individuals and biases the estimated odds ratio toward the null (and the VE estimate toward zero).

Selection bias is a fundamental consideration in observational studies of vaccine effectiveness. Coleman et al. found that VE based on clinician-ordered testing substantially underestimated VE when compared with a gold-standard VE derived from independent enrollment in a research study with RT-PCR testing [24]. At least part of this difference was attributed to selection bias arising from preferential diagnostic testing of patients with more severe illness or other factors such as insurance status. In contrast, active surveillance and testing by research staff is more resource intensive, but it eliminates the selection bias that may occur with clinical diagnostic test requests.

Sentinel influenza surveillance networks may represent a reasonable alternative context for estimating VE if selection bias can be minimized through the use of uniform testing and case identification procedures [25]. Sentinel surveillance using clinician-ordered testing is currently used for generating annual VE estimates in Canada [26], Australia [27], and Europe [25]. Although sentinel physicians are instructed to limit testing to those patients who meet pre-specified symptom criteria, compliance with the screening guidelines is not routinely monitored or reported. In these studies, the relative lack of data validation and quality monitoring at the physician level limits comparisons with protocol-based VE studies that rely on systematic screening and testing by research staff rather than clinical staff.
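The dilution effect of imperfect test specificity discussed above can be illustrated with a short calculation. This is our own hypothetical sketch with assumed coverage and prevalence values, not an analysis from this study:

```python
def ve_with_imperfect_specificity(true_ve, specificity, coverage=0.5,
                                  p_flu=0.3, n=10_000, sensitivity=1.0):
    """Expected observed VE when test-positive patients are enrolled as
    cases but the test has imperfect specificity: false positives (whose
    vaccination mirrors the source population, since the vaccine has no
    effect on non-influenza illness) dilute the case set toward the null.
    Coverage and p_flu are illustrative assumed values."""
    true_or = 1 - true_ve
    # True influenza among presenters, by vaccination status.
    flu_v = n * coverage * p_flu * true_or
    flu_u = n * (1 - coverage) * p_flu
    # Non-influenza presenters, by vaccination status.
    non_v = n * coverage - flu_v
    non_u = n * (1 - coverage) - flu_u
    # Test-positive "cases" = true positives + false positives.
    cases_v = flu_v * sensitivity + non_v * (1 - specificity)
    cases_u = flu_u * sensitivity + non_u * (1 - specificity)
    observed_or = (cases_v / cases_u) * ((1 - coverage) / coverage)
    return 1 - observed_or
```

Under these assumed values, a perfectly specific test recovers the true VE of 70%, while dropping specificity to 95% pulls the observed VE down to roughly 60%, illustrating the bias toward the null.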
Assessment of different types of bias in observational studies of vaccine effectiveness is a difficult task that deserves additional exploration, particularly with the use of datasets that include both prospectively collected research data and clinical data elements from electronic medical records. The recommendation by the U.S. CDC's Advisory Committee on Immunization Practices in February 2010 [6] that all U.S. residents should receive an annual influenza
vaccination likely means that future comparisons of influenza outcomes in vaccinated and unvaccinated subjects in the United States will be made in observational studies rather than clinical trials. Efforts to better understand the biases that may affect observational studies of influenza vaccine effects deserve high priority, particularly as new types and formulations of influenza vaccines are introduced into widespread use.

Acknowledgements

We thank Mary Vandermause of the Marshfield Clinic and Paul Gargiullo and Mark Thompson of the Centers for Disease Control and Prevention for their comments on the manuscript. We are also grateful to the research coordinators, interviewers, data analysts, programmers, and Core Laboratory staff at the Marshfield Clinic Research Foundation for their support of this study.

References

[1] Thompson WW, Shay DK, Weintraub E, Brammer L, Bridges CB, Cox NJ, et al. Influenza-associated hospitalizations in the United States. JAMA 2004;292(11):1333–40.
[2] Poehling KA, Edwards KM, Weinberg GA, Szilagyi P, Staat MA, Iwane MK, et al. The underrecognized burden of influenza in young children. N Engl J Med 2006;355(1):31–40.
[3] Grijalva CG, Craig AS, Dupont WD, Bridges CB, Schrag SJ, Iwane MK, et al. Estimating influenza hospitalizations among children. Emerg Infect Dis 2006;12(1):103–9.
[4] Smith N, Bresee J, Shay D, Uyeki T, Cox N, Strikas R. Prevention and control of influenza: general recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep 2006;55(RR-10).
[5] Fiore AE, Shay DK, Broder K, Iskander JK, Uyeki TM, Mootrey G, et al. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP), 2008. MMWR Recomm Rep 2008;57(RR-7):1–60.
[6] Advisory Committee on Immunization Practices (ACIP). ACIP provisional recommendations on the uses of influenza vaccines. 2010 Feb 24.
[7] Comstock GW. Evaluating vaccination effectiveness and vaccine efficacy by means of case–control studies. Epidemiol Rev 1994;16(1):77–89.
[8] Simonsen L. Commentary: observational studies and the art of accurately measuring influenza vaccine benefits. Int J Epidemiol 2007;36(3):631–2.
[9] Orenstein EW, De Serres G, Haber MJ, Shay DK, Bridges CB, Gargiullo P, et al. Methodologic issues regarding the use of three observational study designs to assess influenza vaccine effectiveness. Int J Epidemiol 2007;36(3):623–31.
[10] DeStefano F, Eaker ED, Broste SK, Nordstrom DL, Peissig PL, Vierkant RA, et al. Epidemiologic research in an integrated regional medical care system: the Marshfield Epidemiologic Study Area. J Clin Epidemiol 1996;49(6):643–52.
[11] Belongia EA, Kieke BA, Donahue JG, Greenlee RT, Balish A, Foust A, et al. Effectiveness of inactivated influenza vaccines varied substantially with antigenic match from the 2004–2005 season to the 2006–2007 season. J Infect Dis 2009;199(2):159–67.
[12] Irving SA, Donahue JG, Shay DK, Ellis-Coyle TL, Belongia EA. Evaluation of self-reported and registry-based influenza vaccination status in a Wisconsin cohort. Vaccine 2009;27(47):6546–9.
[13] Harper S, Fukuda K, Uyeki T, Cox NJ, Bridges CB. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep 2005;54(RR-8):1–40.
[14] Ozasa K. The effect of misclassification on evaluating the effectiveness of influenza vaccines. Vaccine 2008;26(50):6462–5.
[15] Fazio D, Laufer A, Meek J, Palumbo J, Lynfield R. Influenza-testing and antiviral-agent prescribing practices – Connecticut, Minnesota, New Mexico, and New York, 2006–07 influenza season. MMWR Morb Mortal Wkly Rep 2008;57(3):61–5.
[16] Rothberg MB, Bonner AB, Rajab MH, Stechenberg BW, Rose DN. Do pediatricians manage influenza differently than internists? BMC Pediatr 2008;8:15.
[17] Scharfstein DO, Halloran ME, Chu H, Daniels MJ. On estimation of vaccine efficacy using validation samples with selection bias. Biostatistics 2006;7(4):615–29.
[18] Charles PG. Early diagnosis of lower respiratory tract infections (point-of-care tests). Curr Opin Pulm Med 2008;14(3):176–82.
[19] Grijalva CG, Poehling KA, Edwards KM, Weinberg GA, Staat MA, Iwane MK, et al. Accuracy and interpretation of rapid influenza tests in children. Pediatrics 2007;119(1):e6–11.
[20] Petric M, Comanor L, Petti CA. Role of the laboratory in diagnosis of influenza during seasonal epidemics and potential pandemics. J Infect Dis 2006;194(Suppl. 2):S98–110.
[21] Storch GA. Rapid diagnostic tests for influenza. Curr Opin Pediatr 2003;15(1):77–84.
[22] Uyeki TM. Influenza diagnosis and treatment in children: a review of studies on clinically useful tests and antiviral treatment for influenza. Pediatr Infect Dis J 2003;22(2):164–77.
[23] Uyeki TM, Prasad R, Vukotich C, Stebbins S, Rinaldo CR, Ferng YH, et al. Low sensitivity of rapid diagnostic test for influenza. Clin Infect Dis 2009;48(9):e89–92.
[24] Coleman LA, Kieke B, Irving S, Shay DK, Vandermause M, Lindstrom S, et al. Comparison of influenza vaccine effectiveness using different methods of case detection: clinician-ordered rapid antigen tests vs. active surveillance and testing with real-time reverse transcriptase polymerase chain reaction (rRT-PCR). Vaccine 2011;29(3):387–90.
[25] Valenciano M, Kissling E, Ciancio BC, Moren A. Study designs for timely estimation of influenza vaccine effectiveness using European sentinel practitioner networks. Vaccine 2010;28(46):7381–8.
[26] Skowronski DM, Masaro C, Kwindt TL, Mak A, Petric M, Li Y, et al. Estimating vaccine effectiveness against laboratory-confirmed influenza using a sentinel physician network: results from the 2005–2006 season of dual A and B vaccine mismatch in Canada. Vaccine 2007;25(15):2842–51.
[27] Kelly H, Carville K, Grant K, Jacoby P, Tran T, Barr I. Estimation of influenza vaccine effectiveness from routine surveillance data. PLoS One 2009;4(3):e5079.