Public Health (2003) 117, 301–304
INVITED CONTRIBUTION
How do we make health impact assessment fit for purpose?

M. Joffe*

Department of Epidemiology and Public Health, Imperial College London, St Mary’s Campus, Norfolk Place, London W2 1PG, UK

Received 17 March 2003; received in revised form 17 March 2003; accepted 27 March 2003
KEYWORDS: Health impact assessment; Evidence base; Quantitative analysis; Qualitative analysis
Summary Progress has been made in recent years in the process of health impact assessment (HIA), including community involvement. The technical side is less well developed. A minimum requirement is some consistency or robustness, so that the outcome of an HIA does not depend simply on who happens to carry it out, is not easily swayed by the vested interests that typically surround any project, and can withstand legal challenge. Validity is an important criterion as well as repeatability, since repeatability alone can be achieved merely by propagating errors. All types of evidence should be considered legitimate, including evidence from both qualitative and quantitative methods. The quality of evidence, and its generalisability, need to be carefully assessed; we should leave behind the divisive discourse around ‘positivism’. Typically, there is less information on the links from interventions (policies or projects) to changes in determinants of health than there is on the immediate precursors of health and ill health. A practical question is how the best existing knowledge can be made available to HIA practitioners. Other issues are more tractable than is often thought, e.g. the beliefs that an HIA has to trade off positive and negative impacts on different groups of people, and that the complexity of social causation prevents clear analysis of cause and effect.
© 2003 The Royal Institute of Public Health. Published by Elsevier Ltd. All rights reserved.
Milner et al. have set out a broad and pragmatic approach to health impact assessment (HIA), which is a useful contribution to the growing literature.1 I agree with the authors’ declared aim of challenging narrow thinking in all its guises, not only ‘anti-positivism’ but ‘positivism’ too. This is a dispute that we should try to put behind us, but more of this later.

*Tel.: +44-20-7594-3338; fax: +44-20-7402-5253. E-mail address: [email protected]
Consultation and political context

The way in which HIA has developed has emphasized community involvement and consultation with stakeholders. This is a positive feature of the HIA process, and Milner et al. put it nicely when they refer to enhancing the placebo or Hawthorne effect. To an extent, I agree with Hubley’s view, which they cite, that this is good in itself quite apart from any tangible effects on health. But one must also
remember that members of ‘the community’ devote time and energy to an HIA in the expectation of an improvement in their lives, and it is extremely important not to let them down.

The other side of the coin is that the technical side of HIA is often relatively undeveloped. It is rare that an HIA is able to specify the positive and negative effects of a proposal in any depth, because insufficient time and resources are devoted to it, and/or because the required evidence does not exist. People outside the HIA field, including those who are putting forward the proposal and making the decisions on how (or whether) to implement the HIA, and other key collaborators, may expect the process to deliver what the term literally suggests: an assessment of impacts. The term ‘HIA’ promises more than it can usually deliver. It is therefore important to manage expectations.

The term ‘fit for purpose’ makes explicit the importance of context. It also has the merit of suggesting two inter-related questions, which are not explicitly considered by Milner et al. They stem from the sociopolitical context of the proposal (policy, programme or project), and the possibility that it is subject to vested interests and contestability. The questions are: (a) who decides the purpose of the HIA; and (b) how is it decided what approach leads to the ‘best fit’, by whom and on what criteria?

I sympathize with the implied desire of Milner et al. to analyse ‘the societal dynamics of health (the why)’ as well as ‘the modelling of complex relationships between supposed risk factors (the what)’. It is an open question how far HIA should go in the direction of policy analysis. For example, it is possible to map out the various ways in which car use damages health, especially the health of the non-car-user, as an analysis of ‘the what’. But a complete analysis would also consider why so few authorities are prepared to take on the roads-and-cars lobby, as well as individuals’ own attitudes to car use (which are likely to depend on whether or not they have access to a car).

Awareness of the sociopolitical context is important for a number of reasons, the most significant of which are probably the ability to anticipate the moves of the participants, and to maximize the likelihood of the HIA being effective. A key issue is whether an HIA is able to withstand a legal challenge, which raises questions about reliability.
Consistency and reliability of outcome

HIAs cannot be cloned. As stated previously, the purpose of an HIA and key decisions about its conduct are subject to sociopolitical influences, and possibly to manipulation by vested interests. HIA practitioners also differ, as do the time and resources that are available. To adapt Milner et al.’s phrase, how can we ensure that a particular HIA fits its purpose well, rather than being shaped by forces other than its purpose?

This connects with an important issue: would an HIA in any particular context result in substantially the same analysis and recommendations, irrespective of who did it, and of these other factors? Or better, rather than posing this as a dichotomy, what is the range of variability in the outcome of an HIA that is likely to occur as a result of differences of this kind? And are we comfortable with that?

This is raised by Milner et al. but left unclear. On one hand, they emphasize managing expectations: we need ‘to guard against unrealistic expectations and illusions of total objectivity and precision in the HIA process’. On the other, they say that it is important for predictions to be ‘quantitatively and qualitatively credible, transferable, dependable and confirmable’. Maybe the second is less demanding than the first, although the suggested need for transferability is exacting in itself. In the HIA world, views vary on how consistent and reliable HIAs can be expected to be, but I think all would agree on the need for some degree of repeatability.

A more difficult question is how to achieve this. While HIA practitioners are increasingly able to draw on the experience of previous HIAs on related topics, it is a huge task to draw together a large quantity of disparate types of evidence, of variable quality, and to come out the other end with a well-reasoned and balanced assessment. It is even more difficult to do this in the all-too-frequent circumstances of insufficient time (because an HIA has to fit between the point at which a proposal is clear enough to assess and the point at which it is too set in stone to be influenced) and insufficient resources.

Repeatability is not validity. Especially where people are collaborating or borrowing from one another, it is quite possible for all the HIAs on a particular topic to be closely similar but wrong, e.g. because they have used a particular source of ‘evidence’ that has been discredited among experts in that field of enquiry. The propagation of errors can happen in any area of knowledge, but in doing HIAs, we have to be
especially careful because of the broad range of expertise required, which is beyond the capacity of any individual and of most feasible groups.

At present, it is hard to see a way around these difficulties, and HIA practitioners have to manage as best they can, aided by resources such as the HDA’s website.2 It would be far preferable if a sustained effort could be made to collate and make available the evidence base, at least for those HIA topics that recur frequently. This would need to include all the relevant types of information, and to provide a guide to the degree of confidence that can be placed in each of the items.
Evidence vs experience?

An opposition is often set up between the importance of objective evidence on one side and subjective experience on the other. I think that this is a pity, as both are important, and an HIA is an opportunity for the two to be considered in partnership. For example, the experience of living in a poor, run-down area with problems of drug abuse, and having to deal with the fear of crime, is no trivial matter. There is objective evidence that parallels this and, in some cases, quantifies the relationships, and this can be useful in agenda setting and for other purposes too. This does not mean that the aspects of lived experience for which scientific-standard evidence is unavailable become unimportant, still less that only the quantifiable features are legitimate.

One also encounters criticisms of a so-called ‘positivist’ approach. I was disappointed to see this in the paper by Milner et al., and was not convinced by their argument. They equate positivism with “the belief that there is an ‘objective truth’”, and then criticize it by saying (a) that funders, consumers and researchers may be interested in different questions, and (b) that experts have their biases too, which can affect their interpretation of the evidence. Neither of these is a valid criticism of the belief that there is objective truth. On (a), each of the different questions will require different evidence to answer it, but to anyone interested in a particular question, the answer will in principle still be the same. For example, if the funder of a proposed light railway is interested in the likely revenue that it may generate, whereas consumers are more interested in its route, frequency, convenience, etc., each set of questions needs its particular source of information (the questions are, of course, related).
Similarly, on (b), all scientists know that the interpretation of data is a matter of judgement, and views will tend to differ unless and until the evidence is overwhelming (e.g. cigarettes and lung cancer, but such examples are rare); this does not mean that the data themselves are flawed.

A more serious criticism of the belief in the existence of objective truth, or at least in its attainability, is that particular topics (or particular views of topics) may be less likely to form the subject of research, because they are less fashionable or less readily funded. This means that the ‘existing evidence’ should always be approached with a healthy degree of scepticism if it is likely that other perspectives have been systematically excluded. This obviously applies to all types of research, not only quantitative or ‘positivist’ research.

The main point is that all perspectives have their appropriate validity. It is useful to have epidemiological and other quantitative data, narratives and qualitative research. They are complementary. An important cross-cutting issue is generalizability, as this affects the confidence that one has in using information from one setting in a different one; this can apply to lay expertise just as much as to quantitative findings. It depends on the degree of similarity in the conditions of the two cases, and on the manner in which the information was collected.

Milner et al. place some emphasis on the importance to health of ‘psychosocial factors, for example bereavement, loss of employment, lack of control and support, lack of a confiding relationship and depression’. Unfortunately, they place this in the context of a criticism of epidemiology for being mechanistic and unsuited to the complexity of the issues faced in HIA. However, it is clear that they do not really mean that implicit criticism, as they back up their statements on the importance of psychosocial factors by referring to epidemiological evidence (see Milner et al.’s references [9–12,14,15]). So again, it is a question of complementarity.

My problem with the role of epidemiology in HIA is different: it is that when people have the epidemiological evidence, they may think they have all the evidence that is needed. But epidemiology will only tell you about the link between the health outcome and one or more determinants; it is equally important to know how the proposal will alter those determinants. That is just as much part of ‘the evidence base’ as the epidemiological data, but many people, not just epidemiologists, fail to see it. Also, it is often the part of the evidence base that is most lacking. This is not surprising, as an issue is ‘on the health map’
generally because of epidemiological (or toxicological) knowledge, but experts in, say, transport or regeneration or food/nutrition do not necessarily have a body of knowledge that addresses the particular questions that interest us from the health viewpoint. For example, a lot is known about the benefits of physical activity, and it is at least plausible that many types of transport policy initiative can facilitate physical activity, but there has been little research tracing those connections, research that would need to be broken down both by sociodemographic category and by degree of prior activity.
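To make the two-link structure of the evidence base concrete, here is a worked calculation; the numbers are invented purely for illustration, not drawn from any study, and it assumes (unrealistically) that risk reverses immediately when someone becomes active. Suppose the intervention-side evidence suggests that a transport scheme would move a proportion $\Delta p$ of a population of size $N$ from ‘inactive’ to ‘active’, and the epidemiological evidence gives a relative risk $RR$ of some disease among the inactive relative to the active, whose disease rate is $r_0$. The expected annual reduction in cases is then

\[
\Delta\text{cases} = N \,\Delta p \,(RR - 1)\, r_0 .
\]

With illustrative values $N = 100\,000$, $\Delta p = 0.05$, $RR = 1.5$ and $r_0 = 2$ per 1000 per year, this gives $100\,000 \times 0.05 \times 0.5 \times 0.002 = 5$ fewer cases per year. The point is that the calculation needs both links: the epidemiological quantities $RR$ and $r_0$ are of no use for prediction without $\Delta p$, and it is precisely this intervention-to-determinant link that is most often missing from the evidence base.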
Sources of evidence

I think Milner et al. are being realistic when they say that HIA is a process that aims to make a judgement that is, by default, subjective. Another way of saying this is that HIA has to rely to a large extent on what is already in the mind of the practitioner, or rather, in the minds of the people to whom s/he has access. It is largely about what is already regarded as common sense. For example, the existence of strong social inequalities in health is now accepted as common sense, because there is a lot of good evidence, and this has been widely disseminated (at least within the UK).

I agree that it is important not to confuse an HIA, which is about directly informing policy in a particular situation, with a randomized controlled trial or other study design that is intended to generate new knowledge. There may be circumstances when the two aims can be combined, for example if it is possible to establish a pre- and post-implementation monitoring system; but if so, their distinct goals need to be borne in mind constantly. This is a further argument for a division of labour between carrying out an HIA and assembling the evidence base, to make HIAs quicker and easier to do, as well as more reliable and valid.
Some ways of simplifying things

Some issues are, I think, less problematic for HIA practitioners than Milner et al. suggest. One is that, because there are potential ‘gainers’ and ‘losers’ from a proposal, it is hard to make an overall judgement; account must also be taken of the non-health implications of the proposal. My
view on this one is simple: it is not our job, when carrying out an HIA, to do this overall balancing. We should be able to say who stands to gain, in what ways (changes in health and its determinants), and who stands to lose. We should then convey this message clearly to the decision makers and to the other stakeholders. The weighing up and overall judging need to be done by those who have lines of democratic accountability (even if we sometimes have reservations about how this is actually done), as in principle it is a political judgement, not a technical one. So this indicates the need for a second type of division of labour.3

A second issue is more technical. The web of causation, and the other ways that have been used to conceptualize the broad model of health, such as the well-known diagram of Dahlgren and Whitehead,4 at best tell us how things are and why they are like that. They do not tell us how a change in one part of the system will bring about changes further on. Moreover, such a schema is so complex as to defy comprehensive analysis. One way around these problems is to use difference models, which focus on change rather than on the static picture.3 This greatly simplifies the analysis: unless there is reason to suspect effect modification, everything that does not change can be held constant. More importantly, it relates directly to what we want to know in practice: how a proposal is likely to bring about specific changes in the determinants of health, and eventually in health outcomes.
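The logic of a difference model can be stated compactly; the notation here is mine, introduced only to sketch the idea. Write the health outcome $H$ as some function $f$ of the determinant $D$ that the proposal would change, and of the many contextual factors $C$ that make up the rest of the causal web. The quantity an HIA needs is then

\[
\Delta H = f(D + \Delta D,\; C) - f(D,\; C),
\]

and provided there is no effect modification (on the chosen scale), the contribution of $C$, however complex, cancels out of the difference: only the change $\Delta D$ and the baseline need to be estimated. The physical-activity calculation sketched earlier is a special case, with $f$ linear in the proportion of the population that is inactive.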
Acknowledgements

I would like to thank Dr Jenny Mindell for her comments.
References

1. Milner SJ, Bailey C, Deans J. Fit for purpose health impact assessment: a realistic way forward. Public Health 2003; in press (doi:10.1016/S0033-3506(03)00127-6).
2. http://www.hiagateway.org.uk, accessed 17 March 2003.
3. Joffe M, Mindell J. A framework for the evidence base to support health impact assessment. J Epidemiol Community Health 2002;56:132–8.
4. Dahlgren G, Whitehead M. Policies and strategies to promote equity in health. Copenhagen: WHO Regional Office for Europe; 1992 (EUR/ICP/RPD 414(2) 9866n).