Emerging communication responsibilities of epidemiologists

J Clin Epidemiol Vol. 44, Suppl. I, pp. 41S-50S, 1991

0895-4356/91 $3.00 + 0.00

Copyright © 1991 Pergamon Press plc

Printed in Great Britain. All rights reserved

Session III: Ethical Considerations and Responsibilities when Communicating Health Risk Information

EMERGING COMMUNICATION RESPONSIBILITIES OF EPIDEMIOLOGISTS

PETER M. SANDMAN

Environmental Communication Research Program, Cook College, Rutgers University, New Brunswick, NJ 08903, U.S.A.

Abstract-Epidemiologists are increasingly called upon to communicate with affected publics when designing, interpreting, and reporting their work. The author offers eight guidelines for public communication: (1) Tell the people who are most affected what you have found-and tell them first. (2) Make sure people understand what you are telling them, and what you think its implications are. (3) Develop mechanisms to bolster the credibility of your study and your findings. (4) Acknowledge uncertainty promptly and thoroughly. (5) Apply epidemiological expertise where it is called for, and do not misapply it where it is unlikely to help. (6) Show respect for public concerns even when they are not "scientific." (7) Involve people in the design, implementation, and interpretation of the study. (8) Decide that communication is part of your job, and learn the rudiments-it's easier than epidemiology.

Communication    Ethics    Public involvement    Community concern

INTRODUCTION

My background is risk communication, not epidemiology. In this exploratory essay I will ignore some important but well-established communication responsibilities of epidemiologists-not to distort the data, to honor confidentiality commitments, etc. I will focus instead on some more debatable responsibilities emerging from recent work in risk communication. The eight guidelines that follow are not eight of the Ten Commandments-or if they are they're "draft" commandments aimed at provoking discussion, not pre-empting it.

Epidemiology is a varied and complex field. Randomized trials, community studies, field interventions, and case-control studies each raise somewhat different communication issues. Epidemiologists who work within industry may have different communication responsibilities than those who work for government agencies; academic researchers under contract may face different responsibilities than those doing more basic grant-funded research. (This essay was originally prepared for a conference on ethics of the Industrial Epidemiology Forum, and tends to emphasize industry-related issues, especially the epidemiology of environmental contamination.) Epidemiologists studying huge risks that the public is inclined to underestimate may feel very different communication obligations from those studying modest risks that the public is inclined to exaggerate. These and other important distinctions are largely ignored in this preliminary discussion.

The recommendations that follow are grounded in several assumptions:

(1) That epidemiologists spend much of their time trying to guide policy by determining the public health risks of particular social and technological arrangements. (2) That whether these particular social and technological arrangements ought to be encouraged or discouraged, maintained, regulated, or abandoned is appropriately a public controversy, resolved through political processes. (3) That potentially affected publics are stakeholders who ought to play a significant role in the resolution of risk controversies, and therefore ought to have access to epidemiological assistance they can seldom commission on their own. (4) That epidemiologists are likely to work for clients or employers who are also stakeholders, and are not satisfactory proxies for affected publics. (5) That epidemiologists are therefore ethically obliged to share their plans and their findings with affected publics.

It is possible to dispute any of these assumptions-to argue that epidemiology aims at advancing science rather than guiding policy, or that policy issues should be resolved technocratically rather than democratically, or that laypeople have no use for epidemiological information, or that government and industry adequately represent the interests of affected publics, or that epidemiologists owe allegiance only to the interests of their client or employer. Readers who reject one or more of my assumptions will probably find much to dispute in my recommendations.

Even readers who accept my assumptions may find the recommendations more onerous than they can tolerate. Many epidemiologists lack not only the knack for popularization and consensus-building, but also the appetite for these activities. They chose a career in science, not public relations. So did their colleagues, who tend to look askance at researchers who seek the spotlight and become controversial. Clients, funders, and employers are also typically less than supportive of an epidemiologist who suddenly announces that he or she plans to take matters up with the workers or the community. Proposal budgets that call for communication outlays are likely to be trimmed; unbudgeted communication efforts are likely to be half-hearted. Even the academic journals have registered their distaste for scholars who share their conclusions with the public before they have been peer-reviewed.

Much of the advice that follows, in other words, means trouble. A lot of effort will be needed to figure out how to adjust the advice-or the world in which epidemiologists do their work-to make it truly feasible.

1. TELL THE PEOPLE WHO ARE MOST AFFECTED WHAT YOU HAVE FOUND-AND TELL THEM FIRST

Most epidemiologists today recognize their ethical responsibility to publish their findings, especially findings that a health problem exists. And most sponsors of epidemiological studies, corporate and otherwise, are learning that it is wiser (as well as more honest) to take their licks than to incur the incredible liability burdens of a secret health study. Often there are laws requiring public notification of study results. But that still leaves room for much disagreement over whom to tell, how to tell them, and when to tell them (not to mention what to say).

To an academic epidemiologist, publication means a journal. Unfortunately, the community's right to know is not satisfactorily served by professional publication. Nor is it satisfactorily served by waiting to inform the community until after a paper is accepted by a refereed journal-a wait that is often measured in years. Especially if the news is bad, but even if it is good or inconclusive, people are entitled to the news as soon as you have it. Excessive delays while you are "analyzing the data" are also ethically suspect, even when they are not mere corporate excuses for postponing the day of reckoning. Ideally, part of epidemiological data collection procedures should be determining and announcing when the results will be available-and then adhering strictly to the timetable.

Of course when a short wait can clarify a muddy finding, it makes sense to wait. If half a dozen studies are underway about a particular hazard, for example, and all will be completed within a matter of months, there is little to be gained from releasing the first study prematurely-especially if it is inconclusive. Similarly, few would argue that you are obliged to announce a finding you don't yet understand when you are about to meet with colleagues to help you make sense of it. The difference between these situations and a long delay for "peer review" in the journals is a matter of judgment. Only occasionally do epidemiologists err on the side of precipitous public announcement; the usual temptation is to wait too long to tell the public.

Who deserves to be told? Apart from the people you studied (if they are still alive), the specification of affected groups is a matter of degree.

An occupational mortality study based on death certificates, for example, is of immediate concern to retirees of the facility studied; to the extent that conditions may not have changed, it is also important to current workers; if there are environmental emissions, it may well be relevant to plant neighbors. Your responsibility to inform workers at other plants with similar exposure patterns is weaker, but worth considering, especially if you have found a health effect. When you know something that can save lives, you have a real responsibility to try to inform the people whose lives are at stake.

Where possible, people should be told directly, not via the mass media. Even a prompt news conference falls short on two grounds. If it is not well-covered, the people most directly affected may not get the news. And if they do get the news, they get it second-hand, filtered, possibly exaggerated, possibly garbled, and certainly missing some detail. People especially deserve the courtesy of hearing distressing news firsthand; it is a rude shock to find out about a threat to one's health from the six o'clock news. In routine situations, some mix of community meetings, direct mail, contact via physicians, and later a media announcement is usually appropriate. Where the risk or the public concern is sizeable, door-to-door canvassing may be wise.

What if a warning provokes panic, with attendant health and economic impacts, and then turns out to be technically unjustified? I am not qualified to assess the possible liability implications of inaccurately warning people that their health is at risk, although I suspect these implications are exaggerated by those who do not want to see people warned. (Activists deliver urgent health warnings-justified and unjustified-without visible legal repercussions.) Certainly the ethics of frightening people inappropriately or prematurely deserve to be discussed-but the issue ought not to be used as an excuse for leaving people in the dark.

Ultimately, when to inform the public is an issue of technocracy vs democracy. Good science never considers a question closed; new evidence is always welcome. But good policy can't wait forever. The more evidence you accumulate before deciding that x is a health problem worth solving, the less likely you are to be wrong-but the more damage people have endured in the meantime. There are real costs also if you act too quickly and turn out wrong: not just unnecessary anxiety and expense, but also the health risks of options chosen to replace the one erroneously thought to be hazardous.

But all this is true whether the public or the experts make the call. Single, unreplicated studies can be misleading. But in a society where policy is set by political processes, the job of science is to tell us what is known so far, tell us how much confidence attaches to what is known, and go find out more. Public consideration cannot replace peer review-but neither can peer review replace public consideration. The tendency to appropriate political decisions as if they were scientific ones has backfired, threatening rather than consolidating the public's trust in science. It is not the epidemiologist's job to decide when you know enough to let the rest of us in on what you know.

2. MAKE SURE PEOPLE UNDERSTAND WHAT YOU ARE TELLING THEM, AND WHAT YOU THINK ITS IMPLICATIONS ARE

When you take on an epidemiological study, you take on an educational responsibility as well. Telling people what you found is only the start. The tough part is making sure that they understand, that they are neither inappropriately panicked nor inappropriately reassured, that they have a real comprehension of the size of the risk and the nature of the key risk factors. This means the communication effort has to be a planned, budgeted component of the study, not a hurried afterthought. If the study is substantial, it may mean hiring a community relations specialist or borrowing the community relations staff of the company or agency that funded the study. And ideally it should mean some kind of communication evaluation, an explicit effort to find out if people understand what you are telling them and to follow up with more information on the points they have not mastered.

The two most difficult aspects of explaining technical findings, I think, are simplifying and interpreting the data. For good reasons as well as bad ones, epidemiologists are reluctant to do either.

Simplification

Simplifying a technical explanation is not just a matter of pruning the jargon out of your language-though getting rid of excess jargon is certainly important (and feasible; experts typically use more jargon talking to laypeople than talking to each other, unconsciously building barriers of professionalism). You have to simplify content too.

The risk when you simplify epidemiological information is of course distortion. But you cannot make everyone into an epidemiologist. Since only some of the truth can be successfully communicated, it is part of your job to make the choice. List two or three key points that you want people to understand. Add to the list anything they will be demanding to know (whether you think it's important or not), and any background items they need to make sense of the points already on your list. Then resist the temptation to talk about anything else-unless, of course, you are asked.

There is one crucial exception to the principle of simplification. Suppose your findings are reassuring except for one or two items that might seem alarming at first glance-for example, you found a non-significant excess of a kind of cancer you are confident is unrelated to the exposure you are studying. Considerations of simplification might lead you to ignore the potentially misleading result. Considerations of credibility, however, strongly suggest that you discuss it. Explaining why it does not mean much will be difficult enough if you have been forthright, but impossible if you have not and a skeptical activist has dug the finding out of a footnote.

Simplification should never be an excuse for failing to provide information that is "too complicated," and completeness should never be an excuse for bewildering people with more than they can master. Nearly every well-designed technical communication effort comes in layers: some items short and simple, others covering the same ground with more technical detail, still others reporting on peripheral issues. Offered a menu that varies in length and complexity, people have no trouble deciding what they want.

Interpretation

The trouble with interpretation is that epidemiologists, like all scientists, are taught not to go beyond their data. But the questions people most want answered force you to do exactly that. What does it all mean? Am I personally likely to get sick? What should I do to protect myself and my family? What can the company or the government do to reduce or eliminate the problem? Would you personally let your children live here, work here, drink this water? These are difficult questions to answer, especially if your results are equivocal. They call for expertise in fields other than epidemiology, and they call for a personal response as well as an expert response.

They are nonetheless predictable and appropriate questions for people to ask when being told the results of an epi study. Be prepared to answer them. Just how you answer them depends on your views and the views of your client or employer as much as it depends on your findings. Certainly when you go beyond the data you must be careful to make this clear. If you are part of a research team, it may help to offer different responses from different researchers, thus driving home the distinction between epidemiological findings and policy judgments. Even one epidemiologist can offer a range of responses by citing the views of colleagues and even family members ("I would drink the water, but my husband wouldn't"). The least acceptable answer is a flat refusal to go beyond the data. Why did you do the study if you do not want people to draw meaning from it? And why, then, won't you tell us what meaning you think we should draw?

3. DEVELOP MECHANISMS TO BOLSTER THE CREDIBILITY OF YOUR STUDY AND YOUR FINDINGS

An epidemiological study that is not considered credible by the public does real damage. At best it is a waste of money and a lost opportunity. At worst it can fuel the futile furor and policy deadlock it was intended to help resolve, inappropriately raise the anxiety level of the community, and damage the stature of epidemiology as a discipline. The time to think about how to prevent these problems is at the start of the study-not at the stormy public hearing over the results.

Epidemiological studies that find serious health problems seldom suffer from lack of credibility in the public arena. The public's assumption is that the sponsoring agency or company always hopes to find that the problem is minimal. A finding that it is serious is thus what lawyers call an "admission against interest," and is almost universally believed. The skepticism comes when the finding is reassuring. Mechanisms to bolster credibility are thus especially important when community distress and distrust are high and serious health effects are unlikely. In such a situation, "doing good science" is of course essential; those who distrust the findings are sure to question the methodology, often with the help of knowledgeable technical advisors.

It is irresponsible to conduct a study so flawed that the findings are not useful guides to action. But while good science is essential, unassailable science just is not possible, especially when budget constraints, time pressures, and the scantiness of the data are figured into the equation. So acknowledging the methodological weaknesses of a study is as critical as trying not to have too many of them. It is often said that technical people overqualify their statements, while the non-technical public wants just the bottom line dichotomy: safe or dangerous. There is some truth to this claim, but the opposite pattern is just as common. Especially when they anticipate controversy, technical people are likely to leave out their reservations about methodology, data quality, and the like. When they are challenged-and by a layperson!-then they may get hypertechnical and polysyllabic in defense. The result is a loss of credibility, best avoided by acknowledging the study's weaknesses in advance. Of course it is important not to go overboard and seem to be trying to discredit your own research.

Insulation from possible conflicts of interest is another key to credibility. The days are long gone when a health study was above reproach simply because it was conducted by an academic consultant. But an outside consultant still has more credibility than a company or agency epidemiologist-especially if the consultant is provided with some insulation from the client. Among the options: (1) A letter of agreement, made public at the outset, providing that the consultant is to report the results directly to the community regardless of what they say. (2) A review board of reputable, independent outsiders who assess the study methodology before it is carried out and review the findings before they are announced. (3) Participation-on the review board if not the study team itself-of an expert known to be not only neutral but actually inclined toward the "alarming" rather than the "reassuring" side of technological risk controversies.

Even more important to credibility than insulation from the client is involvement of the community. The greater the involvement of concerned citizens in the design, conduct and interpretation of the study, the more credible the study results are likely to be-both because the study has considered the concerns of the community and because the community has considered the concerns of the researchers (see Section 7).

Again there are many models, ranging from direct citizen participation in the research itself to the use of technical assistance grants to fund a community advisor who can monitor the study, raise objections where appropriate, and certify the integrity of the process. In most cases a community involvement effort need not require you to figure out "who represents the community." An open process that invites all affected constituencies to participate quickly shakes down to an appropriate mix of skeptics and supporters.

If you are a corporate or government epidemiologist, you may have read the last few paragraphs with mounting irritation, justly offended by the assumption that your integrity and your employer's integrity cannot be trusted. The asymmetry of these recommendations is especially off-putting: insulate the work from companies and agencies while involving the public as much as possible, get an alarmist onto the study review board, worry about credibility with the community (potentially even at the expense of credibility with the profession), etc. It is best to view this, if you can, as a conservative bias, a way of bending over backwards to make sure public health is protected. If need be, view it as a realistic accommodation to political reality. I am sure that most epidemiologists are honorable, and I have no idea whether the public statements of epidemiologists working for chemical companies are more or less accurate than those of epidemiologists working for environmental advocacy groups (to pick two convenient extremes). I do know that their public statements are different, that both sides have clear motives to hedge in different directions, that chemical companies fund more epidemiology than advocacy groups, and that the public trusts the advocacy groups more than the chemical companies. It follows that a chemical company that intends an honest study and wants to be believed would be well-advised to commission a neutral researcher, and to involve the public and even an advocacy group in the design of the study and the interpretation of the findings.

4. ACKNOWLEDGE UNCERTAINTY PROMPTLY AND THOROUGHLY

Experts readily acknowledge uncertainty when communicating with their peers, qualifying and even overqualifying each statement with the appropriate methodological hedges.

Some experts carry this tendency into their communications with laypeople, and the result can be incomprehensible. But others adopt a peremptory, almost godlike tone with the public, sounding much more certain than they know themselves to be. This may be overcompensation in an effort to be clear. It may be relief from the burden of professional nitpicking. It may be professional defensiveness and a largely unconscious effort to ward off methodological attack from the citizenry. Whatever its sources, the impulse to pontificate at audiences outside one's field is strong. (It is not confined to epidemiologists, either. This essay is arguably an example of the malady.)

Acknowledging uncertainty is essential on four grounds. First, given budget constraints, data constraints, and methodological constraints, most epidemiological findings are far from certain. It is an ethical obligation to acknowledge that this is so; to state clearly which confounders you could control for, which you could not, and how confident you are of the findings that emerged. Second, science as a process is tentative. Our society is unlikely to develop sensible responses to technical controversies until we abandon the false dichotomy by which the public too often assesses scientific findings as either certain or worthless. Third, claiming more confidence than you can justify sets you up for attack. Grudging and belated acknowledgement of problems that should have been front-and-center at the outset justifiably diminishes the credibility of your work. And fourth, uncertainty is the all-important context for the ultimate decision, individual or collective, on what to do about the problem. Without understanding how certain or uncertain your study is, people cannot draw from it the guidance they are seeking.

Acknowledging uncertainty is not the same thing as claiming that your results are valueless. There is a scale here, with a range of positions between "you bet your life" and "who knows?" Since the public's capacity for error bars and confidence limits is modest, the job is to calibrate your tone and bottom-line conclusions to match the quality of your data, to replicate in your audience the degree of uncertainty you believe the data justify. Ideally, you will not wait until the results are in to begin the process of calibrating uncertainty. It is unfair and unwise to let a community expect a definitive answer from a study you know from the start will not be definitive.

When the qualifiers come only after the results are in, people feel betrayed and tend to suspect that the researchers are covering their tracks. Part of the communication job is to build realistic expectations.

5. APPLY EPIDEMIOLOGICAL EXPERTISE WHERE IT IS CALLED FOR, AND DO NOT MISAPPLY IT WHERE IT IS UNLIKELY TO HELP

This is not a communication recommendation, but its communication implications are so central that the issue cannot be avoided in a discussion of the ethics of epidemiological communication. The problem in a nutshell is this. Health studies are demanded by communities more often than there is a budget to handle the job, requiring a kind of triage. Political pressure is usually the main factor that determines which health studies are done and which are not. Too often the outcome is both bad epidemiology and bad communication.

At a typical Superfund site, for example, the community is concerned chiefly about health, but the agencies managing the clean-up are concerned chiefly about containment. Especially in the first few years of site management, the community wants to talk about cancers in the neighborhood and the agency wants to talk about engineering. The result is often a build-up of community frustration and outrage, culminating in the emergence of a powerful political movement that eventually succeeds in demanding a health study. In the politicized environment of the site, such a study may become a practical necessity even if the results are extremely unlikely to prove useful, and even though the community is extremely unlikely to believe them.

People are entitled to serious, prompt attention to their health concerns-before the outrage builds. But are they entitled to a full-fledged epidemiological study? Only if the study will prove useful. The highest priority for epidemiological work is obviously when the exposure data are solid, the outcome data are available, and the problem seems likely to be serious-i.e. when you expect to demonstrate that Situation X probably caused Problem Y. The second priority, I think, should be when the data are good and the problem seems unlikely-i.e. when you expect to demonstrate that there is no Problem Y or, if there is, Situation X probably did not cause it.

Third priority, perhaps, is when you cannot tell much about whether Situation X is responsible, but at least you can determine rigorously whether Problem Y is a genuine cluster or a statistical artifact. But when the data are scanty and the population is tiny, when you know before you start that your study can neither show the effect nor show its absence, when your findings will not decrease your uncertainty or even prove what you already surmise, when the study (in short) will be useless-don't do it.

That does not mean don't do anything. In the absence of a retrospective or cross-sectional study, it may still be appropriate to launch a prospective study that will eventually shed light on the problem. Toxicological studies, exposure pathway analyses, and other risk assessment work may well be worthwhile even where an epidemiological study is not. And whether or not a research study can help them, people are entitled to informed medical advice. They are also entitled to an explanation, in considerable detail, of why a health study won't help-being extra-careful to make clear that this does not necessarily mean they have not got a health problem. One way to do this is with dummy tables, showing that there is no study design with enough power to demonstrate a health effect convincingly even if it is there. If you and the community can agree on an interpretation of your dummy tables-this outcome would be alarming and that one would be reassuring-then the study is probably worth doing.

Above all, it is profoundly unethical to use the limitations of epidemiology as a rationale for asking people to accept unnecessary risks. When a factory is accused of exposing a few hundred people to a 1-in-1000 cancer risk, there is simply no way of documenting the effect or its absence. An epidemiological study would be worthless. And a claim that there is "no proof" of a health risk, though technically accurate, would be irresponsible. Often when the feared health effects can be neither demonstrated nor ruled out, they can be significantly reduced-sometimes for less than the cost of the proposed health study. When you tell people there is no point in doing a health study, be sure you do not seem to be saying there is no point in taking action to protect their health.
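The power argument behind a dummy table can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the original essay; it is a minimal illustration, assuming a one-sided exact Poisson test at the 5% level and purely hypothetical case counts, of why a study whose exposed group contributes well under one expected case can neither show a doubled rate nor show its absence, while the same relative excess against a large expected count is easy to detect.

# Hypothetical sketch (not from the paper): power of a one-sided exact
# Poisson test to detect an excess of cases over the expected background.
from scipy.stats import poisson

def poisson_power(expected_cases, relative_risk, alpha=0.05):
    """Probability that the observed count reaches statistical significance
    if the true rate is expected_cases * relative_risk."""
    # Smallest observed count that would be "significant" under the null.
    k = 0
    while poisson.sf(k - 1, expected_cases) >= alpha:
        k += 1
    # Chance of reaching that count when the risk really is elevated.
    return poisson.sf(k - 1, expected_cases * relative_risk)

# A tiny exposed population with well under one expected case: even a
# doubling of the rate is almost never detectable.
print(poisson_power(expected_cases=0.3, relative_risk=2.0))   # ~0.12
# The same doubling against 50 expected cases is detected almost surely.
print(poisson_power(expected_cases=50, relative_risk=2.0))    # ~1.0

Under these assumptions the first study has roughly a one-in-eight chance of "finding" a true doubling of risk; the second finds it essentially every time.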

6. SHOW RESPECT FOR PUBLIC CONCERNS EVEN WHEN THEY ARE NOT "SCIENTIFIC"

One of the most offensive things technical people sometimes do in their communications with laypeople is to suggest-or seem to suggest-that non-technical concerns and non-technical approaches to technical concerns are intrinsically without merit. There are three issues here that commonly come up in community health studies: rigorous data collection vs anecdotal evidence; biologic significance vs statistical significance; and hazard vs outrage. Let's consider them in turn.

Anecdotal evidence

The scientist's disdain for anecdotal data is an acquired trait; medical practitioners regularly base their diagnoses in large part on the patient's somewhat haphazard recollections. More to the point, scientists are statistically unusual-several standard deviations from the mean-in the extent to which they trust the abstract over the concrete, the data array over the anecdote. Laypeople have good reasons for the opposite preference. Not only is anecdotal evidence easier to understand; it is also less dependent on trust in the experts. My great aunt's cancer may not be the proof of industry misbehavior I think it is-but at least I know my great aunt really had cancer; how can I independently assess the accuracy and relevance of your tables and graphs?

Of course it is part of your expertise (and part of your job) to avoid jumping to conclusions based on unreliable evidence. But it is also part of your expertise (and part of your job) to know how to use anecdotal data wisely, certainly as a clue to where to look and what to look for, and often as a guide to how urgently the search should be pursued or whether it should be pursued at all. The unwillingness of some epidemiologists to consider citizen complaints and even citizen health surveys smacks of professional defensiveness or professional hubris. And if they are in fact using anecdotal evidence as a starting point but are unwilling to admit it, that is certainly defensiveness or hubris. What is especially galling, of course, is when experts simultaneously dismiss anecdotal evidence as insufficiently reliable and decline to collect more reliable evidence of their own.

Even when you are obliged to point out the pitfalls in interpreting anecdotal evidence, you can still show a decent respect for the human tragedies that underlie the stories and the pain that motivates their collection. My great aunt is not just an outlier in your data array. If you cannot show me that you realize that, I will have great trouble trusting your counsel on my family's health.

Statistical significance

Epidemiology is a science largely by virtue of its ability to calculate the probability that a particular "health effect" in the data could have occurred by chance-and so epidemiologists are justly proud of their reliance on confidence limits and related statistical concepts. It is nonetheless worth remembering that statistical significance is a function of two variables: the size of the effect and the power of the study (which depends in turn on sample size and data quality). Biologic significance-and therefore social significance-is a function only of the size of the effect. What this means, of course, is that a well-designed study with a large sample and a strong database can achieve findings that are statistically significant without being very important in health terms. And, conversely, a small study based on limited data can fall short of statistical significance even though the health effects it does not quite demonstrate may well be not only genuine but serious.

It is of course impossible to prove statistically that a relationship does not exist. Nonetheless, negative findings may be a convincing indicator that a suspected problem is not a problem-the exposed population might be healthier than the control, for example. Or they may represent chiefly the limitations of the research itself. When explaining negative findings to concerned citizens, it is critically important to go beyond generalizations about "no evidence of harm" to make clear whether you think what you found is reassuring or not. Especially if your failure to find a "significant effect" says more about statistics than about community health, it is wise not to use the term "significance" at all. Laypeople are understandably offended by the notion that their neighbors' fatal cancers do not strike you as significant.
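A small, hypothetical calculation (not from the original essay) illustrates the distinction. It uses the common log-normal approximation for the confidence interval of a standardized mortality ratio; the case counts below are invented purely for illustration.

# Hypothetical sketch (not from the paper): statistical vs biologic significance.
# Approximate 95% confidence interval for a standardized mortality ratio
# (observed / expected deaths), using the usual log-normal approximation.
import math

def smr_with_ci(observed, expected):
    smr = observed / expected
    se_log = 1.0 / math.sqrt(observed)      # SE of log(SMR), Poisson approximation
    lower = smr * math.exp(-1.96 * se_log)
    upper = smr * math.exp(+1.96 * se_log)
    return smr, lower, upper

# Large study: a 5% excess (2100 observed vs 2000 expected deaths) is
# statistically significant (the interval excludes 1.0) yet modest in health terms.
print(smr_with_ci(2100, 2000))    # ~ (1.05, 1.01, 1.10)
# Small study: a doubling of risk (6 observed vs 3 expected) is not
# statistically significant (the interval includes 1.0) yet would be serious if real.
print(smr_with_ci(6, 3))          # ~ (2.0, 0.9, 4.5)

The first result would barely matter to any individual's risk; the second, if real, would matter a great deal.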

Outrage

Epidemiologists and other health and risk professionals define "risk" in terms of expected mortality and morbidity, period. Laypeople, on the other hand, have a much broader definition, including in the concept such factors as voluntariness, control, familiarity, dread, fairness, trust, and process. This is often seen by the professionals as a problem of public misperception. A much better way to frame the issue, I think, is as a definitional dispute.

For example, risk perception research shows that the public reliably considers a coerced risk more serious than a voluntary risk with the same associated mortality. (Imagine the furor if people were required to slide down snow-covered mountains on slippery sticks.) A technical expert is likely to assert that coercion "distorts" the public's perception of the risk. But even people who know the mortality and morbidity data-risk assessment experts, for example-accept voluntary risks much more readily than coerced ones, and at least in their non-professional lives consider the voluntary risks "less risky." It is a deeply held social value that people should have a lot more leeway to endanger themselves than to endanger each other. The public wisely packs such factors as voluntariness into its definition of risk-not because people are misperceiving the data, but because they want to live in a society that defines risk more broadly than the data, that views voluntary risks differently from coerced risks.

To clarify the conflict, partition "risk" into two components. Call the technical portion "hazard" and the non-technical portion "outrage." Quantitative risk assessors typically focus on hazard; distressed communities typically focus on outrage. A coerced risk, in these terms, is no more hazardous than a voluntary one, but it is more outrageous, and therefore riskier. The same is true of risks that are controlled by the individual vs those that are controlled by others (e.g. driving vs flying), risks that are fair vs those that are unfair, and a score of other non-technical "outrage" distinctions. There is room for dispute over how "risk" ought to be defined and how much of risk policy ought to be determined by outrage as opposed to hazard-but there is no disputing that the public defines risk largely in terms of outrage and insists on the pre-eminence of outrage in risk policy-making.

A factory has been pouring dimethylmeatloaf (DMML), a "suspected animal carcinogen," into the community's air and water for decades. Until SARA Title III forced it to do so, management did not even bother to measure its DMML emissions, much less tell people about them. With a recent DMML spill in an adjacent state still making headlines, a local environmental group is demanding that the plant reduce emissions by 80%.

The company's response is that such reductions would be an expensive waste of stockholders' money, that the plant's neighbors are being hysterical, that the DMML emissions are legal and the neighborhood should mind its own business and trust the company to do the right thing. Enter the epidemiologist-you-with a study that fails to show any excess cancer as a result of DMML exposures. You have reason to believe the risk-the hazard, in my terms-is small. But it is coerced, unfair, unfamiliar, memorable, and dreaded; it is also eminently reducible, but the company has decided to stonewall. The outrage level, in short, is astronomical.

A wise company, of course, avoids this sort of battle, and a wise epidemiologist avoids working with unwise companies. But in less extreme form this conflict between outrage and hazard is an everyday experience. And explaining a genuinely small hazard to an appropriately outraged community is an everyday challenge for epidemiologists. In coping with that challenge, the key starting point is to understand that the outrage is justified, that to the community this means the risk is unacceptable, and that this in turn makes it very difficult for the community to listen as you explain why you believe the hazard is trivial.

7. INVOLVE PEOPLE IN THE DESIGN, IMPLEMENTATION, AND INTERPRETATION OF THE STUDY

The first six recommendations have focused on the one-way communication responsibilities of epidemiologists, which are daunting enough. But effective communication is two-way. It is virtually impossible to do a good job of talking to people (especially angry or frightened people) without listening to them as well. Everything discussed so far in this essay is enormously easier to achieve in an on-going dialogue than in a last-minute monologue.

Consider the communication advantages of working collaboratively with concerned citizens: (1) People will have an easier time knowing and understanding what you found because they will have helped you find it. (2) People will see your results as more credible because they will have had sufficient involvement to trust the findings without relying blindly on your integrity, and sufficient contact to build some trust in your integrity. (3) People will know about the methodological weaknesses in your study, the reasons why those weaknesses were unavoidable, and the extent to which those weaknesses justify reduced confidence in your conclusions.

(4) People will have considered in advance what an alarming outcome and a reassuring outcome might look like in data terms, and will thus have a clear assessment of the extent to which the study can help resolve the questions that concern them. (5) People will have had a chance to absorb technical concepts like case-control research, statistical significance, and quantitative risk assessment, making your reliance on these concepts less alienating.

That is what the community learns from dialogue. What the epidemiologist learns is perhaps harder to predict, but not impossible. At a minimum, the neighborhood's detailed historical knowledge and anecdotal health data provide priceless clues about where or when the exposure might have been greatest and what outcomes are worth looking for. Knowledge of the community's health concerns can suggest new hypotheses, and discussion with the community about what is doable and what is not can suggest new methodologies. Advance interpretations of dummy data tables can help you design a study that will be of real value, or at least help you decide not to conduct one that will not. Collaboration with the community can also unearth new resources-data archives or funding sources you did not know about, for example. And community labor is itself a valuable resource. The use of volunteers to collect and record health data can conserve the budget and make a stronger study financially feasible. And when it comes time to discuss your findings, positive or negative, conclusive or equivocal, it can only help that you are already conversant with the anger and fear that underlie community concern.

There are of course disadvantages to community involvement. It may add to the cost of the study; it may lengthen the schedule; it may leave you open to political posturing, methodological bias, or at least the appearance of them. Citizens will inevitably raise a mix of questions: some relevant ones that you are glad you heard about; some misguided ones that you can convince them to let go of; and some you consider irrelevant, implausible, or untestable that you may nonetheless feel enormous pressure to investigate. Even if you forge a sound working relationship with the community, colleagues may wonder if you have forsaken your scientific independence and credibility. While some epidemiologists have adopted "barefoot epidemiology" with enthusiasm, others continue to resist, deterred by concerns such as these.

In my own judgment, the advantages of dialogue outweigh the problems even when trust is high and the community is calm. When distrust and distress are substantial, or threaten to become substantial, it is essentially impossible to meet your communication obligations with a monologue. Patterns of community involvement can range from a formal advisory committee to regularly scheduled open meetings to a real working task force that accepts volunteers. The prospect of such uncontrolled openness to citizen input is alarming to many experts, but the usual experience is surprisingly (even disappointingly) calm. Just the opportunity to participate in the study is visibly reassuring, and many who were pounding on the door when it was shut feel little need to walk in once it is open.

8. DECIDE THAT COMMUNICATION IS PART OF YOUR JOB, AND LEARN THE RUDIMENTS-IT'S EASIER THAN EPIDEMIOLOGY

To meet their communication responsibilities, epidemiologists may have to pick up some communication skills. Every once in a while a communication problem may come along that is thorny enough or explosive enough that you will want to bring in a specialist. But the fundamentals of communication are a great deal less complex than the fundamentals of epidemiology. Two manuals that may help are Improving Dialogue with Communities: A Risk Communication Manual for Government, by Billie Jo Hance, Caron Chess and Peter M. Sandman; and The Environmental News Source, by Peter M. Sandman, David B. Sachsman, and Michael R. Greenberg. Both are available from the Environmental Communication Research Program, 122 Ryders Lane, Rutgers University, New Brunswick, NJ 08903, U.S.A. The biggest challenge is not learning how to do communication. It is deciding that communication is part of your job after all.

Can you delegate your communication responsibilities? Certainly . . . in principle. I can imagine an agency or a corporation that has a protocol for communicating with the public as thorough and professional as your protocol for the health study itself. Such a protocol will doubtless have a role for you as the epidemiologist in charge of the study, during the design and data analysis as well as when the report is done. You can fulfill your assigned role, secure in the knowledge that relevant publics are being consulted and informed.

But in many situations communication will not be a component of the epidemiological study unless you make it one yourself. Your client, employer, or funder may see communication as a catch-as-catch-can afterthought; it may never have been considered one way or the other; it may even be steadfastly opposed as "looking for trouble." In these circumstances you must determine for yourself what your communication responsibilities to the public are and how you can meet them. If you cannot meet them, and if obviously no one else is going to meet them for you, you must determine whether you can continue nonetheless to participate in something akin to "secret epidemiology." Many factors will go into this decision, including the integrity of the study itself, the seriousness of the hazards being uncovered, and the conditions of your employment. I do not argue that epidemiologists should always quit if their work is not being properly communicated. I do argue that poor communication compromises even the best epidemiology, and that epidemiologists therefore have communication responsibilities that cannot be ignored.

Acknowledgements-An earlier version of this paper was presented to the Conference on Ethics in Epidemiology, Industrial Epidemiology Forum, Birmingham, Alabama, 12 June 1989. The work was funded by E. I. du Pont de Nemours & Co., which is not, however, responsible for the views expressed. The author is grateful to Daniel Wartenberg, Helen Spiro, Lorraine J. Lucas and Jody S. Lanard for their comments.