Measuring the Impact of Prevention Research on Public Health Practice

Ross C. Brownson, PhD, Eduardo J. Simoes, MD, MSc, MPH

Context:
Prevention research involves the translation of established and promising methods of disease prevention and health promotion to communities. Despite its importance, relatively little attention has been paid to systematic approaches to determining the impact of prevention research on public health practice. Evaluation of these effects is challenging, particularly in light of multi-factor causation, long time periods between exposure and disease occurrence, and difficulties in determining costs and benefits.
Objective:
To develop a framework that allows the prospective or retrospective evaluation of the effects and effectiveness of prevention research.
Results:
The proposed framework allows assessment of prevention research in five areas of public health practice: surveillance and disease investigation, program delivery, policies and regulations, recommendations to the public, and public health education and training. A brief case study of environmental tobacco smoke illustrates the public health impact of prevention research.
Conclusions: Greater translation of prevention research findings is needed to accomplish public health goals; such efforts are enhanced by academic–practice partnerships. The relevance and utility of the current framework need additional testing with a variety of public health issues. Medical Subject Headings (MeSH): cost, effectiveness, evaluation, prevention, public health practice, surveillance. (Am J Prev Med 1999;16(3S):72–79) © 1999 American Journal of Preventive Medicine
Introduction
Prevention research involves focused efforts to determine the underlying causes of death, injury, and disability and to apply research discoveries at the community level. It includes studies focused on etiology, intervention, and methods.1 Prevention research involves the "direct and immediate application of effective strategies to benefit the public's health,"2 representing a continuum from basic research, to hypothesis testing in controlled settings, to application of interventions in large populations. Often, the most important research issue is not only the efficacy of the technology itself, but the effectiveness of the application of the intervention to the general population and the adaptation of the research to population subgroups at highest risk.3 Although prevention research can be defined in several contexts,1 this discussion emphasizes research applications at the community level. This focus on the community has been reflected in national health plans4,5 and is recognized as perhaps having the greatest potential for affecting overall health status.6,7 The community refers to a group of persons with one or more common characteristics and may involve a geographically coherent place such as a worksite.8 Wallack and Dorfman define the community as "...not just the sum of its citizens, but rather the web of relationships between people and institutions that hold communities together."9 Luepker describes seven criteria for determining when to utilize a community intervention approach: (1) common condition, (2) established risk factors, (3) sociocultural determinants, (4) reasonable interventions available, (5) benefits of intervention demonstrated, (6) safety, and (7) acceptable to the community.10 Particularly in relation to the final criterion, researchers and practitioners have increasingly recognized the importance of community involvement in the design and conduct of public health research—sometimes called participatory research.11 It helps make research questions more relevant to the community, methods more acceptable, and results more meaningful.12 In a variety of current public health interventions, participation of the community through coalitions or advisory boards is mandated.13 Despite the recognized importance of measuring the impact of prevention research on community health and public health practice,2 there has been relatively little focus on developing useful approaches for such assessments. This article presents a framework for determining the impact of prevention research on public health practice. It then briefly illustrates the retrospective application of the framework to a contemporary public health issue. Although coverage of several important issues is brief, this paper is intended to provide entry points into a large and dispersed body of literature.

Author affiliations: From the Department of Community Health and Prevention Research Center, School of Public Health, Saint Louis University, St. Louis, MO; and the Division of Chronic Disease Prevention and Health Promotion, Missouri Department of Health, Columbia, MO. Address correspondence and reprint requests to: Dr. Brownson, Department of Community Health and Prevention Research Center, School of Public Health, Saint Louis University, 3663 Lindell Boulevard, St. Louis, MO 63108-3342.
Framework for Assessing Research Translation

We propose a framework that should assist practitioners in developing comprehensive approaches to evaluating the effects of prevention research advances. In a practice setting, it may be impractical to analyze all relevant inputs and outputs; however, we argue that the more systematically one applies such a framework, the more likely it is that public health improvements can be documented. In some cases, this may involve a retrospective review of a particular discovery or intervention (as in the case study presented); in other instances, the framework may be used prospectively to help in planning the evaluation of an intervention.
Evaluation Approaches to Measuring Process and Effectiveness

Measuring prevention effectiveness can rely on three levels of evaluation: process, impact, and outcome. This section briefly discusses potential contributions of each type of evaluation, with a more comprehensive discussion available elsewhere.14,15

Process measures. Initially, one should seek to determine
which (if any) changes have occurred as the result of a preventive technology (i.e., prevention effects). This often involves process evaluation—the analysis of inputs and implementation experiences to track changes as a result of a program or policy.14 In temporal sequence, process evaluation occurs at the early stages of a public health intervention and is often helpful in assessing the fidelity of implementation for making "mid-course corrections." At the organizational level, much of the work in measuring core functions of public health is useful in determination of prevention effects on local public health practice.16 Researchers have recently proposed clear and scientific protocols to appraise performance of these core functions in public health agencies.17,18 These protocols measure inputs such as workforce information and organizational relationships, and outputs or services such as disease screenings or environmental inspections. The Assessment Protocol for Excellence in Public Health (APEX-PH) may also be useful for local public health practitioners attempting to measure the effects of organizational changes.19

Effectiveness. Evaluation of prevention effectiveness involves impact and outcome evaluation. Impact evaluation can be considered a subset of outcome evaluation that assesses whether intermediate objectives have been achieved. Indicators may include changes in knowledge, attitudes, or risk factor prevalence.14 The ultimate measures of prevention effectiveness rely on outcome evaluation such as changes in morbidity, mortality, and quality of life. Measuring prevention effectiveness is complex and challenging. Once the efficacy of a preventive technology has been demonstrated (i.e., the effect obtained with a specific technique in expert hands under ideal circumstances), it is necessary to evaluate effectiveness—i.e., the impact of the preventive activity in the "real world."20 Determination of effectiveness takes into account not only the efficacy of the intervention, but also the practical aspects of delivering the intervention in the community. An effective intervention may be available, but if public acceptance of the intervention is low, little prevention benefit is accrued.21 Effectiveness thus describes the relationship between the level of input and the level of output. In community-based research, a variety of study designs is used to assess effectiveness. Commonly, these are not "true" experiments but rather observational studies using quasi-experimental designs, case-control methods, or time-series analyses. The strength of the evidence for effectiveness is highly dependent on study quality. In ongoing work of the U.S. 
Public Health Service,22,23 study quality is assessed according to seven criteria: (1) definition and selection of the study (and comparison) population; (2) definition and measurement of the exposure or intervention; (3) assessment of outcomes; (4) follow-up and/or completion rates; (5) presence of bias; (6) confounding; and (7) appropriate data analysis. Analytic techniques such as meta-analysis24 are often helpful in determining prevention effectiveness. The most useful community-based interventions show high internal validity (i.e., can the observed results be attributed to the program or intervention?). Further, external validity relates to whether the observed results can be generalized to other settings and populations. The determination of when evidence is sufficient for public health action is complex and dependent on multiple factors including the magnitude, severity, and preventability of a condition.25

Cost effectiveness. Economic evaluation, commonly through
cost-effectiveness studies, can be an important component of prevention research at the community level.
Figure 1. Examples of indicators of prevention effectiveness according to category of public health practice. *Core Public Health Functions Steering Committee, Office of the Assistant Secretary for Health, US DHHS.
These methods assess the relative appropriateness of public health programs and policies. Cost effectiveness compares the net monetary costs of an intervention with some measure of health impact or outcome (e.g., years of life saved).26,27 Cost effectiveness is operationally defined as:

    cost effectiveness = net cost / adverse outcomes averted

In practice, it can be difficult to measure cost effectiveness for community-based interventions because cost data are often not reported and indirect costs (e.g., lost work productivity) are difficult to measure.

Potential hazards. Many preventive technologies are associated with potential hazards.20 These may be the direct result of the technology (e.g., surgical complications from biopsies of lesions originally detected with mammography).20 Indirect hazards may also occur. For example, anxiety may be created by the finding of a suspicious lesion by mammography.
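The cost-effectiveness ratio defined above (net cost divided by adverse outcomes averted) can be made concrete with a small numeric sketch; all dollar amounts and counts below are invented for illustration:

```python
# Hypothetical illustration of the cost-effectiveness ratio
#   CE = net cost / adverse outcomes averted
# All figures are invented for illustration only.

program_cost = 2_500_000.0     # cost of delivering the intervention ($)
treatment_savings = 900_000.0  # averted treatment costs ($)
outcomes_averted = 40          # e.g., premature deaths averted

net_cost = program_cost - treatment_savings        # 1,600,000
ce_ratio = net_cost / outcomes_averted             # 40,000 per outcome averted

print(f"Net cost: ${net_cost:,.0f}")
print(f"Cost per adverse outcome averted: ${ce_ratio:,.0f}")
```

In practice, as noted above, the hard part is not the arithmetic but obtaining credible cost data, particularly for indirect costs such as lost work productivity.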
Role of qualitative evaluation. The preceding discussion has focused on the role of quantitative evaluation in determining effectiveness. Qualitative evaluation can be a useful complement to a quantitative approach,28 seeking to answer questions on “how” and “why” intervention results were obtained. Qualitative approaches often rely on interviews with program stakeholders, detailed observations of intervention activities, and review of program documents.28,29
Major Public Health Areas Affected by Prevention Research

Five main categories of public health practice can be affected by prevention research (Figure 1). It is important to note that the following categories are not mutually exclusive, with considerable overlap among areas. For example, the goal of a comprehensive public health program will often involve surveillance, policy development, public information, and training. Although it is not named as a separate category, public health intervention begins with etiologic research in which health risks are identified and quantified through epidemiologic and clinical research. Several of the key steps in moving from epidemiologic research to intervention include: (1) examining the population-attributable risk to determine the potential benefits of public health intervention; (2) determining whether intervention options exist to address the health issue of concern; (3) assessing the most vulnerable or highest-risk populations; and (4) selecting a behavioral science model that appropriately addresses the issue.

Surveillance and disease investigation. Public health surveillance (in some literature called epidemiologic surveillance) is a core function of public health that involves the ongoing systematic collection, analysis, and interpretation of outcome-specific health data, closely integrated with the timely dissemination of these data to those responsible for preventing and controlling disease or injury.30 The evaluation of the usefulness of public health surveillance systems is necessary for making rational decisions in the allocation of limited resources. An important related public health function is the study of the departure of the observed pattern of disease incidence from the expected pattern—i.e., outbreak and cluster investigations. The goal of these disease investigations is to quickly determine etiology so that control measures can be taken to alleviate the health concern. An additional benefit of surveillance systems is the use of their data for community health assessments or "report cards."31

Program delivery. A public health program can be defined as a structured intervention with the intent of improving the health of the total population or a subpopulation at particularly high risk. Public health programs depend on a variety of inputs and outputs, and often have short-term, intermediate, and long-term objectives. For example, many public health programs now rely on coalitions to accomplish prevention objectives. A short-term objective of these programs is to develop viable coalitions. An intermediate objective is to enact effective policies, and longer-term objectives relate to decreasing premature death from a given risk factor or health condition.

Policies and regulations. It is increasingly recognized that health policies can have profound impacts on the health of the public.32,33 Policies are "those laws, regulations, formal and informal rules and understandings that are adopted on a collective basis to guide individual and collective behavior."32 Policy interventions tend to alter or control the legal, social, economic, and physical environment, and are supported by the notion that individuals are strongly influenced by the sociopolitical and cultural environment in which they act. Four broad areas of health policy can be strongly influenced by prevention research: (1) control of environmental and occupational hazards; (2) reduction of behavioral risk factors such as smoking or the lack of cancer screening; (3) regulation of drugs and medical devices; and (4) improvement in the delivery and quality of health care.34

Recommendations to the public. Federal, state, and local health agencies are important sources of information on health issues. Challenges in translating prevention research findings into meaningful information for the general public are sizable. Most of the information that Americans receive about health, science, and technology comes from television news programs and newspapers.35 This information can be communicated through numerous vehicles, including health-related stories based on public health research and health risks, public health surveillance information disseminated by public health agencies, and social marketing techniques that segment the target audience.36 There are numerous relevant theories and models that form the foundation for effective public health communication.37 The basis for public health recommendations often derives from expert panels and consensus conferences. The main goal of expert panels is to provide peer review—i.e., using scientific experts to review the quality of the science and scientific interpretations that underlie public health recommendations, regulations, and policy decisions. Expert panels commonly rely on multiple public health disciplines (e.g., epidemiology, behavioral sciences, medicine, biostatistics, economics, ethics), consist of 8–15 members, and submit draft findings for public review and comment prior to final recommendations. An important contribution of expert panels has been evidence-based guidelines such as the Guide to Clinical Preventive Services38 and a similar set of guidelines under development for community-based interventions.22,23 These guidelines translate the findings of research and demonstration projects into accessible and useable information for public health practice. Consensus conferences are a related mechanism that is commonly used to review evidence on a particular health issue. The National Institutes of Health (NIH) has used consensus conferences since 1977 to resolve controversial issues in medicine and public health. To date, the NIH has conducted 120 such conferences.39

Public health education and training. In order for prevention research innovations to be put into practice, a well-informed public health workforce is needed that strengthens linkages between academics, practitioners, and community advocates.13 The interests of private organizations in community health and the burgeoning information technologies available to health professionals make public health a rapidly changing discipline.8,40 Public health training programs can take numerous forms. They can include formal graduate training in schools of public health and departments of preventive medicine, summer programs to enhance skills, and distance learning networks for working professionals. The CDC is currently developing a national Public Health Training Network8 that will be valuable in moving prevention research closer to public health practice.
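The surveillance and disease investigation function described above includes studying departures of the observed pattern of disease incidence from the expected pattern. As a minimal sketch of that idea (not a method from this article), the following computes the probability of seeing at least the observed case count under a Poisson model of the expected count; the counts are hypothetical:

```python
import math

def poisson_upper_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected), via the complement."""
    p_below = sum(
        math.exp(-expected) * expected**k / math.factorial(k)
        for k in range(observed)
    )
    return 1.0 - p_below

# Hypothetical cluster: 9 cases observed where historical data predict 3.2.
p = poisson_upper_tail(9, 3.2)
print(f"P(>= 9 cases | 3.2 expected) = {p:.4f}")
```

A small tail probability flags a departure worth investigating; real outbreak investigation, as the text notes, then turns to determining etiology so that control measures can be taken.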
Effects of Time and Multiple Causation

Time and multiple causation can confound the assessment of the impacts of prevention research on the health of the public. Many of the "modern epidemics" such as heart disease, cancer, and HIV/AIDS develop over a period of many years and have complex etiologies. Such considerations make it nearly impossible to ascribe a change in an outcome measure to any single program or policy. In practical terms, it may be possible to measure impacts only over the life of many public health interventions. Increasingly sophisticated analytic methods have been developed to assist in the evaluation of complex community interventions.41,42
Importance of a Causal Model

Development of a program theory (e.g., "causal" or "logic" models) can lead to well-designed interventions and to selection of appropriate public health indicators. A range of program staff and policy makers should generally be involved in model development. Frameworks for these models are available in the literature.43,44 Causal models should be developed well in advance of program formation and implementation.29
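As an illustration of the kind of program theory discussed above, a logic model for a hypothetical ETS-reduction program might be sketched as a simple mapping from inputs and activities to short-term, intermediate, and long-term outcomes (all entries invented for illustration):

```python
# A hypothetical logic model, sketched as a data structure. The stage
# categories follow the short-term/intermediate/long-term objectives
# described in the text; the specific entries are invented.
logic_model = {
    "inputs": ["program funding", "staff", "coalition partners"],
    "activities": ["build coalition", "media campaign", "policy advocacy"],
    "short_term_outcomes": ["viable coalition formed",
                            "increased awareness of ETS risks"],
    "intermediate_outcomes": ["clean indoor air ordinances enacted",
                              "reduced ETS exposure among nonsmokers"],
    "long_term_outcomes": ["reduced ETS-attributable morbidity and mortality"],
}

# A logic model reads left to right: each stage is assumed to influence
# the next, and each stage suggests indicators for evaluation.
for stage, entries in logic_model.items():
    print(f"{stage}: {', '.join(entries)}")
```

Writing the model down this explicitly, before implementation, is what lets evaluators attach process, impact, and outcome indicators to each stage.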
Case Study Illustration

A brief case study of environmental tobacco smoke (ETS) helps to illustrate the public health impact of prevention research.
Research Findings

The first studies linking ETS with adverse health effects were epidemiologic studies of lung cancer published in the early 1980s.45,46 In 1986, two landmark reviews were published by the U.S. Surgeon General47 and the National Academy of Sciences.48 These reports, the results of expert consensus, deemed ETS a cause of lung cancer in healthy adult nonsmokers. In addition, the U.S. Environmental Protection Agency published a comprehensive review of the health effects of ETS in 1992.49 It reviewed 31 studies of ETS and lung cancer and concluded that ETS was a human lung carcinogen in adults, accounting for approximately 3000 U.S. lung cancer deaths in adult nonsmokers annually. In addition to health effects among adults, 12 studies
that were reviewed by the Surgeon General47 and the National Academy of Sciences48 and 14 additional studies reviewed by the U.S. EPA49 showed strong evidence that children who are exposed to ETS in their home environment are at considerably higher risk for acute lower respiratory tract illnesses. The available data also are supportive of a causal relationship between ETS exposure and middle-ear disease,49,50 including acute otitis media and persistent middle-ear effusion. Figure 2 depicts a causal model for ETS and health.
Process Measures and Effectiveness by Category of Public Health Practice

Surveillance and disease investigation. There have been consistent efforts at the national level to collect surveillance information on ETS attitudes, knowledge, and exposure. Ongoing surveillance systems such as the National Health Interview Survey contribute such information.51 Efforts to collect, analyze, and disseminate ETS-related surveillance information at state and local levels in the United States have been more sporadic. Although some states and counties52,53 collect such information, many areas lack temporal data on ETS exposure needed to measure the effectiveness of interventions.

Program delivery. Public health programs to control tobacco use, including ETS exposure, have been implemented on a widespread basis in recent years. Presently, all state public health agencies have tobacco control programs, including the American Stop Smoking Study for Cancer Prevention (ASSIST) and Initiatives to Mobilize for the Prevention and Control of Tobacco Use (IMPACT).54,55 In addition, some states like Alaska, Arizona, California, Massachusetts, Oregon, and Utah have enacted dedicated tobacco taxes to support tobacco control programs aimed at prevention and cessation of smoking and reduction of ETS exposure. For example, a program in Colorado promotes tobacco-free schools and antitobacco education among youth.56

Policies and regulations. Momentum to regulate public
smoking increased in 1986 when ETS was deemed a cause of lung cancer in nonsmokers.47,48 Since then, governmental and private business policies that limit smoking in public places have become increasingly common and restrictive.57 The designation of ETS as a group A (known human) carcinogen by the U.S. EPA in 199249 has stimulated further restrictions on smoking in public places. The progressive increase in ETS regulations at the local level (process effects) is shown in Figure 3. In addition, studies from California52 and Missouri53 have shown the intermediate impacts of clean indoor air laws—namely, decreased exposure to ETS among nonsmokers following enactment of new ordinances and laws. Policy changes related to ETS can also affect short-term smoking behavior, such as reduced smoking following worksite smoking bans.58

Figure 2. Causal model of the relationships between predisposing factors, environmental tobacco smoke (ETS), public health interventions, and health outcomes.

Recommendations to the public. Following the etiologic studies linking ETS with lung cancer and childhood respiratory disorders, public health practitioners have made numerous recommendations to reduce ETS exposure. Among these efforts, the CDC's national campaign on ETS risks (i.e., "Secondhand Smoke: We're All At Risk") included television advertisements, print and radio advertisements, and an action guide for the
public.59,60 It was the first national campaign to target nonsmokers who could be affected by ETS. The campaign resulted in 17,840 PSA airings and 32,593 calls to an 800 number.60

Public health education and training. Tobacco control, including reduction of ETS, is a relatively new endeavor for public health practice. Therefore, specific education programs have been developed only recently. One example is the Tobacco Use Prevention Summer Institute, designed for practitioners in public health agencies, voluntary health agencies, and other community-based organizations.61 The course focuses on improving skills to enact clean indoor air policies at state and local levels.
Economic Evaluation
Figure 3. Cumulative number of local clean indoor air laws and amendments enacted, United States, 1975-1995.
Measurement of the cost-effectiveness of ETS interventions can present difficulties. There is often a paucity of data on the effectiveness of an intervention. In many cases, it is difficult, if not impossible, to adequately weigh costs (or benefits) that are not easily quantified.62 For example, in regulating smoking, how does one quantify the value of a nonsmoking employee's desire to work in a smoke-free environment or of a smoker's loss of the ability to smoke anywhere in a worksite? To date, the only systematic cost-effectiveness
evaluation has been that of the U.S. EPA, which estimated the savings associated with a nationwide, comprehensive clean indoor air policy at $4 billion to $8 billion per year in operational and maintenance costs of buildings.63
Challenges

The ETS example illustrates the difficulty and complexity in determining inputs and outcomes of public health applications of prevention research. Because the health effects of ETS exposure have become well-established only over the past decade, and health consequences such as lung cancer take many years to develop, it is impossible to precisely quantify the outcomes of public health interventions to eliminate ETS.
Summary

It has been noted that without translation of prevention research, accomplishments remain "on the shelf" and health benefits are not conferred to the community and nation.2 Assessing the effects of prevention research on public health practice presents an irony. As research discoveries move from the laboratory to the community, there is an increasingly large potential for population-wide benefits. Yet due to the wide array of physical, mental, genetic, economic, and social factors that influence community health,64 it is challenging to show the effectiveness of community-based interventions. Due to this complexity, any framework for evaluating the impact of prevention research faces a tension between being too simplistic to represent all relevant inputs and outputs and being too complex to be useful in practice settings. While the framework proposed here is intended for research translation in traditional, governmental public health practice, with minor modifications it may also be relevant for assessing the impact of preventive technologies in the health care environment, particularly among well-established health maintenance organizations. Successful translation of research findings into practice is enhanced by academic–practice partnerships.8,65-67 In general, public health agencies have greater access to populations at risk and more experience working at the community level. University researchers can add evaluation expertise and information on relevant theories and promising interventions. With the increasing focus on the accountability of public health,68,69 there is a growing need for new approaches to measuring the impact of prevention research on public health practice, and the current framework aims to contribute to this assessment. While the relevance of the model to ETS and health is illustrated here, this is a retrospective application. In future work, the framework needs to be tested prospectively with a variety of public health issues.
This study was funded in part through the Centers for Disease Control and Prevention contract U48/CCU710806 (Centers for Research and Demonstration of Health Promotion and Disease Prevention), including support from the Community Prevention Study of the NIH Women’s Health Initiative. The authors appreciate the helpful suggestions of Mr. Garland Land and Drs. Jonathan Fielding, David Fleming, Michael McGinnis, and Stephen Teutsch.
References 1. McGinnis JM. Prevention research and its interface with policy: defining the terms and challenges. Prev Med 1994;23:618 – 621. 2. Institute of Medicine. Stoto MA, Green LW, Bailey LA, eds. Linking Research to Public Health Practice. A Review of the CDC’s Program of Centers for Research and Demonstration of Health Promotion and Disease Prevention. Washington, DC: National Academy Press; 1997. 3. McKenna MT, Taylor WR, Marks JS, Koplan JR. Current issues and challenges in chronic disease control. In: Brownson RC, Remington PL, Davis JR, eds. Chronic Disease Epidemiology and Control. 2nd Edition. Washington, DC: American Public Health Association; 1998:1–26. 4. US Dept of Health and Human Services. Healthy People 2000: National Health Promotion and Disease Prevention Objectives. Washington, DC: US Govt Printing Office, publication no. 017– 001-00473-1;1990. 5. Pederson A, O’Neill M, Rootman I, eds. Health Promotion in Canada. Toronto: W.B. Saunders; 1994. 6. Green LW, Raeburn J. Contemporary developments in health promotion. Definitions and challenges. In: Bracht N. Health Promotion at the Community Level. Newbury Park, CA: Sage Publications, Inc; 1990:29 – 44. 7. Fawcett SB, Paine AL, Francisco VT, Vliet M. Promoting health through community development. In: Glenwick DS, Jason LA, eds. Promoting Health and Mental Health in Children, Youth, and Families. New York: Springer Publishing Company; 1993:233–255. 8. Baker EL, Melton RJ, Stange PV, et al. Health reform and the health of the public: forging community health partnerships. JAMA 1994;272:1276–1282. 9. Wallack L, Dorfman L. Media advocacy: a strategy for advancing policy and promoting health. Health Educ Q 1996;23:293–317. 10. Luepker RV. Community trials. Prev Med 1994;23:602– 605. 11. Whyte WF, ed. Participatory Action Research. Newbury Park, CA: Sage; 1991. 12. Institute of Health Promotion Research The University of British Columbia and the B.C. Consortium for Health Promotion Research. 
Study of Participatory Research in Health Promotion. Review and Recommendations for the Development of Participatory Research in Health Promotion in Canada. Vancouver, British Columbia: The Royal Society of Canada; 1995. 13. Institute of Medicine. Committee on Public Health. Stoto MA, Abel C, Dievler A, eds. Healthy Communities: New Partnerships for the Future of Public Health. Washington, DC: National Academy Press; 1996. 14. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Environmental Approach, 2nd Edition. Mountain View, CA: Mayfield; 1991. 15. Rossi PH, Freeman HE. Evaluation. A Systematic Approach. Newbury Park, NJ: Sage Publications; 1993. 16. Turnock BJ, Handler AS. From measuring to improving public health practice. Annu Rev Public Health 1997;18:261–282. 17. Miller CA, Moore KS, Richards TB, Monk JD. A proposed method for assessing the performance of local public health functions and practices. Am J Public Health 1994;84:1743–1749. 18. Turnock BJ, Handler A, Dyal WW, et al. Implementing and assessing organizational practices in local health departments. Public Health Rep 1994;109:478 – 484. 19. National Association of County Health Officials. Assessment Protocol for Excellence in Public Health (APEX-PH). Washington, DC: NACHO; 1991. 20. Teutsch SM. A framework for assessing the effectiveness of disease and injury prevention. MMWR. 1992;41 (RR-3):1–13. 21. Fielding JE. Successes of prevention. Milbank Mem Fund Q 1978;56:274–302. 22. Pappaioanou M, Evans C. Developing a guide to community preventive services: a U.S. Public Health Service initiative. J Public Health Manag Pract 1998;4:48 –54. 23. Website: http://web.health.gov/communityguide/ 24. Petitti DB. Meta–Analysis, Decision Analysis, and Cost–Effectiveness Analysis: Methods for Quantitative Synthesis in Medicine. New York: Oxford University Press, 1994.
25. Brownson RC, Gurney JG, Land G. Evidence-based decision making in public health. J Public Health Manag Pract 1999 (in press).
26. Weinstein MC, Stason WB. Foundations of cost-effectiveness analysis for health and medical practices. N Engl J Med 1977;296:716–721.
27. Tengs TO, Adams ME, Pliskin JS, et al. Five-hundred life-saving interventions and their cost-effectiveness. Risk Anal 1995;15:369–390.
28. Steckler A, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: an introduction. Health Educ Q 1992;19:1–8.
29. Goodman RM. Principles and tools for evaluating community-based prevention and health promotion programs. J Public Health Manag Pract 1998;4:37–47.
30. Thacker SB, Berkelman RL. Public health surveillance in the United States. Epidemiol Rev 1988;10:164–190.
31. Studnicki J, Steverson B, Myers B, Hevner AT, Berndt DJ. A community health report card: Comprehensive Assessment for Tracking Community Health (CATCH). Best Pract Benchmarking Healthcare 1997;2:196–207.
32. Schmid TL, Pratt M, Howze E. Policy as intervention: environmental and policy approaches to the prevention of cardiovascular disease. Am J Public Health 1995;85:1207–1211.
33. Brownson RC, Newschaffer CJ, Ali-Aborghoui F. Policy research for disease prevention: challenges and practical recommendations. Am J Public Health 1997;87:735–739.
34. Brownson RC. Epidemiology and health policy. In: Brownson RC, Petitti DB, eds. Applied Epidemiology: Theory to Practice. New York: Oxford University Press; 1998:349–387.
35. Nelkin D. Selling science: how the press covers science and technology. In: Communicating Science to the Public. Ciba Foundation Conference. Chichester, NY: John Wiley & Sons; 1987.
36. Remington PL. Communicating epidemiologic information. In: Brownson RC, Petitti DB, eds. Applied Epidemiology: Theory to Practice. New York: Oxford University Press; 1998:323–348.
37. US Dept of Health and Human Services. Making Health Communication Programs Work. A Planner's Guide. Bethesda, MD: US Dept of Health and Human Services, National Cancer Institute, Office of Cancer Communications; NIH publication 92-1493; 1992.
38. US Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd ed. Baltimore: Williams & Wilkins; 1996.
39. Nelson NJ. The mammography consensus jury speaks out. J Natl Cancer Inst 1997;89:344–347.
40. Brownson RC, Kreuter MW. Future trends affecting public health: challenges and opportunities. J Public Health Manag Pract 1997;3:49–60.
41. Koepsell TD, Martin DC, Diehr PH, et al. Data analysis and sample size issues in evaluations of community-based health promotion and disease prevention programs: a mixed-model analysis of variance approach. J Clin Epidemiol 1991;44:701–713.
42. Murray DM. Design and Analysis of Group-Randomized Trials. New York: Oxford University Press; 1998.
43. Lipsey MW. Theory as method: small theories of treatment. In: Sechrest L, Perrin E, Bunker J, eds. AHCPR Conference Proceedings. Research Methodology: Strengthening Causal Interpretations of Nonexperimental Data. DHHS Pub No (PHS) 90-3454; 1990:33–51.
44. Goodman RM, Wandersman A. FORECAST: a formative approach to evaluating community coalitions and community-based interventions. In: Kaftarian SJ, Hansen WB, eds. Monograph Series CSAP Special Issue. J Commun Psych 1994:6–25.
45. Correa P, Pickle LW, Fontham E, Lin Y, Haenszel W. Passive smoking and lung cancer. Lancet 1983;2:595–597.
46. Hirayama T. Cancer mortality in nonsmoking women with smoking husbands on a large-scale cohort study in Japan. Prev Med 1984;13:680–690.
47. US Dept of Health and Human Services. The Health Consequences of Involuntary Smoking. A Report of the Surgeon General. Washington, DC: US Govt Printing Office; US Dept of Health and Human Services publication 87-8398; 1986.
48. National Research Council, Board on Environmental Studies and Toxicology, Committee on Passive Smoking. Environmental Tobacco Smoke: Measuring Exposures and Assessing Health Effects. Washington, DC: National Academy Press; 1986.
49. US Environmental Protection Agency. Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders. Washington, DC: US Environmental Protection Agency; EPA/600/6-90/006F; 1992.
50. Samet JM, Lewit EM, Warner KE. Involuntary smoking and children's health. Critical Issues for Children and Youth 1994;4:94–114.
51. Pirkle JL, Flegal KM, Bernert JT, et al. Exposure of the US population to environmental tobacco smoke. The Third National Health and Nutrition Examination Survey, 1988 to 1991. JAMA 1996;275:1233–1240.
52. Pierce JP, Shanks TG, Pertschuk M, et al. Do smoking ordinances protect non-smokers from environmental tobacco smoke? Tobacco Control 1994;3:15–20.
53. Brownson RC, Davis JR, Jackson-Thompson J, Wilkerson JC. Environmental tobacco smoke awareness and exposure: the impact of a statewide clean indoor air law and the report of the US EPA. Tobacco Control 1995;4:132–138.
54. Siegfried J. Largest tobacco-control program begins. J Natl Cancer Inst 1991;83:1446–1447.
55. Centers for Disease Control and Prevention. CDC's Tobacco Use Prevention Program: Working Toward a Healthier Future. At-A-Glance. Atlanta, GA: CDC; 1996.
56. Peck DD, Acott C, Richard P, Hill S, Schuster C. Colorado Tobacco-Free Schools and Communities Project. J School Health 1993;63:214–217.
57. Brownson RC, Eriksen MP, Davis RM, Warner KE. Environmental tobacco smoke: health effects and policies to reduce exposure. Annu Rev Public Health 1997;18:163–185.
58. Longo DR, Brownson RC, Johnson JC, et al. Hospital smoking bans and employee smoking behavior. Results of a national survey. JAMA 1996;275:1252–1257.
59. US Dept of Health and Human Services. HHS news [press release]. Washington, DC: US Dept of Health and Human Services; January 7, 1993.
60. Environmental tobacco smoke (ETS) campaign analysis. Atlanta, GA: Office on Smoking and Health, Centers for Disease Control and Prevention; January 15, 1996.
61. University of North Carolina-Chapel Hill, Center for Health Promotion and Disease Prevention. 3rd Annual Tobacco Use Prevention Summer Institute. Albuquerque, NM: University of New Mexico Health Sciences Center, Center for Health Promotion and Disease Prevention; June 15-20, 1997.
62. Warner KE. Public policy issues. In: Greenwald P, Kramer BS, Weed DL, eds. Cancer Prevention and Control. New York: Marcel Dekker, Inc.; 1995:451–472.
63. US Environmental Protection Agency. The Costs and Benefits of Smoking Restrictions. An Assessment of the Smoke-Free Environment Act of 1993 (H.R. 3434). Washington, DC: US Environmental Protection Agency, Office of Air and Radiation, Indoor Air Division; April 1994.
64. Evans RG, Stoddart GL. Producing health, consuming health care. Soc Sci Med 1990;31:1347–1363.
65. Lancaster B. Closing the gap between research and practice. Health Educ Q 1992;19:408–411.
66. Schwartz R, Smith C, Speers MA, et al. Capacity building and resource needs of state health agencies to implement community-based cardiovascular disease programs. J Public Health Policy 1993;14:480–494.
67. Schwartz R, Capwell E. Advancing the link between health promotion researchers and practitioners: a commentary. Health Educ Res 1995:i–v.
68. US Dept of Health and Human Services. Government Performance and Results Act (GPRA) of 1993. Washington, DC: US Dept of Health and Human Services; May 1997.
69. Institute of Medicine. Durch JS, Bailey LA, Stoto MA, eds. Improving Health in the Community. A Role for Performance Monitoring. Washington, DC: National Academy Press; 1997.