Reliability Engineering and System Safety 59 (1998) 79-89
PII: S0951-8320(97)00122-I
© 1998 Elsevier Science Limited. All rights reserved. Printed in Northern Ireland. 0951-8320/98/$19.00

Meaning and contextualisation in risk assessment

Tom Horlick-Jones*

Risk Research Group, Centre for Environmental Strategy, University of Surrey, Guildford, UK

This paper presents an analysis of the construction of risk as a social process. It provides a critique of Jasanoff's 'two cultures' model of the risk assessment community, arguing that the corresponding rhetorics serve to obscure the hybrid nature of risk. It is argued that a new perspective is needed, based on the contextualisation of expert scientific knowledge, which transcends reductionist tendencies that view risk assessment as simply about either material artefacts or social constructions. Such approaches have the potential, it is suggested, to address not only the complexity but also the moral and political dilemmas associated with a wide range of risk management problems. © 1998 Elsevier Science Limited.

* Tel.: +44 1483 259074; fax: +44 1483 259394; e-mail: t.horlick-jones@surrey.ac.uk

1 INTRODUCTION: THE BATTLE OF CARLTON HOUSE TERRACE†

In 1992, Britain's Royal Society, one of the world's most prestigious scientific institutions, published a report entitled Risk: Analysis, Perception and Management [1]. In his carefully worded preface to the report, Sir Francis Graham-Smith, Vice-President of the Society, reflected upon its lineage and composition, and went on to say that:

Chapters 5 and 6 differ somewhat, in style and in content, from the earlier chapters. In particular, chapter 6 sets up, as an expository device, a series of referenced points of view as opposed positions in the debate. Some of the contending positions will undoubtedly strike many practitioners as extreme...

The Report, intended as an 'updated study' to replace the Society's influential report on risk assessment that had been published some ten years before [2], failed to receive the same famous imprimatur. 'The views expressed in the report...', asserted Graham-Smith, '...are those of the authors alone, or of those quoted by them.' Although ostensibly the reasons for this decision included a wish to avoid pre-empting debates the Society wished to encourage, for two of the offending chapters' authors it was best understood in Orwellian terms [3] as a simple case of 'four chapters good, two chapters bad'. Why did the apparently straightforward process of revising the Royal Society's reference work on risk assessment result in this fiasco? What was so strange or problematic about the new chapters 5 and 6?

Although much of the report had been written by distinguished engineers and natural scientists, the Society had felt the need to invite a group of social scientists to address questions concerning risk perception and 'related matters in the social context'. This decision resulted in the production of two chapters, one on risk perception [4] and the other on risk management [5], and these were the controversial ones.‡ At face value, what seems to have taken place is the rejection of a 'soft', judgemental and value-laden view of risk by an institution dedicated to the promotion of orthodox 'hard', quantitative scientific activity. Here, perhaps, was a stark example of the fundamental incompatibility between what Sheila Jasanoff [6] terms the 'two cultures of risk analysis', with engineering and the physical sciences on one side, and the social sciences on the other, of a chasm that she argues needs 'bridging'. Whatever the exact nature of the micropolitics that resulted in these particular events, they seemed to have signalled a crisis in UK establishment thinking about risk. Some three years later, however, prior to the publication of an official examination of risk assessment by British Government departments [7], the chairman of the investigating committee [8] made clear that in his view:

A particular characteristic of risk assessment as a subject in the scientific domain is that it is multidisciplinary and hence poses the challenge of integrating the contributions of different disciplines...the subject requires the bridging of the gulf between the physical and natural sciences on the one hand and sociology and economics on the other.

† This is the central London address of the Royal Society.

‡ This group included Nick Pidgeon (Social Psychology), Christopher Hood (Political Science), David K. C. Jones (Physical Geography) and the late Barry Turner (Organisational Sociology), with Pidgeon and Hood being the lead authors on Chapters 5 and 6 respectively. Perhaps I should also put on record my own bit-part involvement in the production of Chapter 6.

Note that the analogy of bridging a gap is used once again. The report in question, whilst including some discussion of risk perception, concentrated exclusively on psychometric and social amplification approaches to understanding perception and communication issues associated with given risks [9]. These approaches share the characteristic that they regard the risks in question as objective pieces of the world, and thus, in themselves, not affected by social factors. A perspective is therefore imposed upon the discussion that tends to exclude the more problematic recognition that risk assessment itself is influenced and framed by a range of contextual factors. Rather than an integration of disciplines, 'bridging the gap' in this case amounts to a selection of those social science perspectives that do not appear to challenge the dominant philosophical perspective of natural science. Thus, fundamental questions about the nature of scientific knowledge, the relationship between physical artefacts and social structures such as organisations, and the existence of social relations that determine the way in which associated discourse about science is framed are excluded. This approach is fundamentally at variance with the insights that sociological studies of risk [10,11], of organisationally based accidents [12,13], and of science and technology [14,15] have provided in recent years. Moreover, I suggest, it is inconsistent with the actual practices of science and technology.

Paradoxically, although the Royal Society Report's Chapter 5, on risk perception, does provide a balanced review that includes some consideration of the social framing of risk assessment, the same cannot be said, in my view, of Chapter 6. This chapter's discussion of risk management adheres predominantly to a form of instrumental rationality consistent with some of the institutional analysis literature. Therefore, although Chapter 6 was written by a group predominantly formed of social scientists, in practice it side-steps the fundamental epistemological and ontological distinction between an artefact-based view of risk and one that recognises that the 'risk object' (to borrow Hilgartner's expression) is, at least in part, socially constituted.

In this paper I first examine critically this distinction between risk assessment as the uncovering of objective reality and as a process of constructing socially constituted knowledge. In particular, I identify a number of distinct mechanisms by which the social is built into the risk object, and show how these result in a blurring of the formal, and in my view rather contrived, distinction between risk assessment and risk management. I then argue that the conception of 'two cultures', corresponding to natural science and social science views of risk, which according to some needs to be bridged, is a rather unhelpful one. Rather, I suggest, what is needed is a new perspective that transcends reductionist tendencies that view risk assessment as simply about either material artefacts or social constructions.

During the course of this discussion, I recognise the central role of the contextualisation of scientific knowledge, and of the integration of different forms of expert knowledge, in enhancing risk assessment. Such approaches have the potential to address not only the complexity but also the moral and political dilemmas associated with a wide range of risk management problems. These include risk controversies in public arenas, which recent experience in North America and Europe, backed up by contemporary developments in social theory, suggests may become increasingly problematic. However, dangers exist in this enterprise, and I warn against throwing out the epistemological baby with the positivist bath water in the rush to resolve political difficulties.

2 THE CONSTRUCTION OF RISK

Above all, the identification and assessment of risk is both a human and a social activity and, as such, is concerned with the production of meaning and a shared understanding of reality [16]. The role of the social in the construction of meaning needs to be invoked in order to appreciate how different societies, and indeed subcultures, sometimes have radically different beliefs and senses of what is real and true. Moreover, our understanding of the world is fundamentally ambiguous by virtue of the existence of symbolism, resulting in any given entity assuming a multitude of different meanings according to context [17]. This multiplicity of meanings lies at the heart of why a given risk is sometimes perceived by different social groups as posing very different degrees of threat. Of particular importance here, of course, are the contrasting perceptions corresponding to an expert scientific assessment and that of a non-expert audience. Similarly, contrasting meanings associated with risks also contribute in pivotal ways to complicating the processes involved in their organisational and inter-organisational management.

Even within a purely positivist perspective that views risk as a measure of objective characteristics of physical artefacts, the social inevitably intrudes. Any assessment of risk involves, by necessity, a summation over a variety of potential harms posed by a given hazard. Such a summation involves a weighting process that either implicitly or explicitly introduces valuations of the relative importance of each harm [18,19]. The process of risk assessment itself, as conducted by scientific or technical experts, introduces further valuations and potential biases. These emerge by virtue of the need to use imperfect and uncertain knowledge, and to make professional judgements that call upon experience, assessments of the quality of data and other subjective elements [20,21]. Hence the politics of risk management cannot be excluded from the assessment process.
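
The weighting point can be made concrete with a minimal sketch (the notation here is mine, not drawn from the paper or from refs [18,19]): a conventional aggregate risk figure for a hazard with potential harms h_1, ..., h_n might be written as

```latex
% Hypothetical aggregate risk measure for a single hazard:
%   p_i : assessed probability that harm h_i is realised
%   c_i : assessed severity of harm h_i (deaths, injuries, losses, ...)
%   w_i : weight expressing the relative importance attached to h_i
R \;=\; \sum_{i=1}^{n} w_i \, p_i \, c_i
```

The p_i and c_i may be estimated scientifically, but the w_i, and indeed the decision to aggregate incommensurable harms into a single number at all, are value judgements: two assessors working from identical data but different weightings will report different risks.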

Of course, risk assessment may be a highly technical process, as, for example, in regulatory standard-setting, in investigating the toxicity of a given chemical, or in the design work for a new aircraft or power station. In many contexts, however, managers and technicians within industry routinely need to make decisions on the basis of risk assessments that are considerably more informal in nature. There is therefore a proportionately greater potential for subjective and context-dependent influences on the outcomes.

Perhaps, one might argue, the presence of human subjectivity introduces such ambiguity, but surely scientific knowledge itself is objective, and so provides some source of certainty in these matters? Not so; it is important to recognise that there is now a considerable body of evidence that brings even this proposition into doubt. Such insights seem to demonstrate that scientific knowledge is underdetermined by data, leading to ambiguity of interpretation. The conclusion of these findings has been that commitments and pre-conceptions held by scientists, and the processes of negotiation involved in scientific activity, can play important roles in the construction of scientific facts [14,22-24].

There is much empirical evidence of the lack of 'scientific objectivity' in risk assessment; for example, a number of contrasting findings about the carcinogenicity or toxicology of chemicals in Britain and the United States [25,26]. Supporting evidence can also be found in practical engineering, for example in the very different findings of a number of national teams involved in 'benchmark' experiments to assess the risk of failure of a nuclear reactor [27] and the diffusion of plumes of toxic gas [28].

For some, especially practitioners of applied natural science, the suggestion of the contingent nature of scientific knowledge is particularly unsettling. An implied relativism, in which there is no special epistemological status for scientific knowledge, set against other forms of knowledge, appears absurd or dangerous to many. Resulting disagreements have sometimes become quite acrimonious, with the sociology of science recently coming under sustained polemical attack by some of its critics [29,30]. It should be noted that relativism has been adopted as a methodological tool by some sociologists of science [23], so creating an important source of potential misunderstanding; indeed, as Potter [31] notes, 'exactly what is being suggested about knowledge and truth by workers in this perspective is not always clear'. Despite the details of these sometimes arcane debates, it is difficult to disagree with Collins's [23] view that:

Neither anarchy nor nihilism follows from the recognition of the human basis of expertise; instead comes the recognition that there is no magical escape from the pangs of uncertainty that underlie our decisions.

Turning now to a third mechanism by which the social is incorporated into the risk object, one needs to consider the organisational and social setting in which a given substance or technology finds application. The resulting hybrid systems, termed sociotechnical, display emergent properties as the social and material each form a context for, and condition the behaviour of, the other [15].


This perspective is very important in order to appreciate the complexity of risks associated with many disasters [13], and it has been used powerfully by Vaughan [12] to demonstrate how the failures associated with the Challenger space shuttle disaster were rooted in the apparent normality of day-to-day operations:

...its origins were in routine and taken for granted aspects of organizational life that created a way of seeing that was simultaneously a way of not seeing.

There exist a number of useful case studies showing how official scientific assessments of risk proved wildly wrong because the characteristics of the context in question had not been properly taken into account [32-34]. The practical circumstances in which the herbicide 2,4,5-T was applied were one of these; perhaps the best known example concerns the movement of radioisotopes, from the Chernobyl nuclear accident fallout, through the Cumbrian hill top environment. In this case, official estimates of the necessary length of bans on the sale of sheep meat proved inaccurate as ministry experts' knowledge, first of the local geology and then of hill farming practices, proved inadequate.

It is important to recognise that when conducting a risk assessment, the characteristics of the technology in question and the human, managerial and material circumstances of its application cannot be considered in isolation. Rather, these factors interact in a contingent, open-ended process that may actually preclude deterministic analysis [35]. Moreover, these sociotechnical systems interact with their operating environment, including socio-economic, regulatory and other factors, in ways that may introduce indeterminacies in behaviour, and possibly even impair their safety characteristics [35,36].

Clearly, the social is built into risk in a number of distinct ways. What then are we to make of claims that risk is, as the social science jargon puts it, 'socially constructed'? When Hilgartner [10] discusses the construction of his risk objects he is rather uncomplimentary about most risk analysts, asserting that, in his view, they employ 'an antiquated, artefact-centred view of technology'. In contrast, his own perspective is 'thoroughly constructivist'.§ In practice, Hilgartner's subsequent analysis focuses largely on the construction of sociotechnical systems, and so is consistent with only part of the discussion presented above. If one compares Hilgartner's position with some characterisations of the social constructionist view, then his perspective seems somewhat moderate in comparison.

§ Many writers use 'constructivist' and 'constructionist' interchangeably, although some of the more esoteric writings in this area identify the latter with a slightly more reductionist version of the concept. Here I shall use 'constructionist' as a generic term to encapsulate a number of theoretical nuances, without suggesting that I adhere to any of these specific positions. For a wide-ranging discussion of the role of constructionist ideas in the social and human sciences see Potter [31].


Consider the version that appears in Renn's [37] comprehensive review of conceptual positions:

Issues such as health threats, inequities, fairness, control and others cannot be determined by objective scientific analysis but only reconstructed from the beliefs and rationalities of the various actors in society...The fabric and texture of these constructions reflect both the interests/values of each group or institution in the various risk arenas and the shared meanings of terms, cultural artifacts, and natural phenomena among groups...Technical risk analyses are not necessarily superior to any other construct of risk because they are also based on group conventions, specific interests of elites, and implicit value judgements.

This apparently relativist analysis seems close to the work of the anthropologist Mary Douglas and her collaborators [38], who regard cultural formations as determining both the selection of certain risks for special concern and the adoption of risk handling styles. It should be stressed, though, that Douglas does not seek to deny the reality of risks; indeed, as she puts it:

Note that the reality of the dangers is not at issue. The dangers are only too horribly real...this argument is not about the reality of the dangers, but about how they are politicized. This point cannot be emphasized too much. [39]

Meaning lies at the heart of this human-centred perspective on risk assessment. When Wynne [40] stated that:

The risk assessment fraternity is well aware by now of the limited role played by various 'technical', 'rational', or 'analytical' approaches to risks in real decision making contexts.

he was, of course, identifying a phenomenon widely recognised by anyone with experience of these matters. Narrow, artefact-centred risk assessments fail to identify the multiple social meanings associated with a given physical risk. These may include perceived threats to community, home, jobs, ways of life, and so on. A physical risk may come to play a symbolic, or proxy, role according to an apparently unrelated agenda [41,42]. These considerations may apply in organisational or public arenas, and they are every bit as real as scientific assessments, even if they are more obviously judgemental.

Among opponents of this perspective, Mayo's [43] careful (if laboured) critique of the 'sociological view' of risk assessment is particularly interesting. She advocates what she terms a 'metascientific' approach that moves beyond 'naive positivism' but avoids what she perceives as the suggestion that 'there is little objective or empirical basis on which to criticize risk assessments'. Whilst recognising that values cannot be separated from risk assessments, Mayo sees uncertainty as the source of these difficulties, and claims that risk management can be enhanced by including a better understanding of the nature of uncertainties in associated risk assessment processes.

Although she makes a number of useful suggestions, Mayo seems to have missed the key points articulated by sociological approaches to risk assessment; in particular, she fails to appreciate that the reduction of scientific uncertainties would neither provide scientific 'truth' nor dispel symbolic meanings. Moreover, her characterisation of all sociological positions as absolute relativist ones is a parody of what is being argued. In this way, in effect, a 'man of straw' is invoked and then ridiculed in order to discredit a perspective that is seen as threatening.

As we have seen, however, social factors come to be built into risk by means of a number of mechanisms. Risk's inherent nature necessitates value judgements, but the social is also incorporated through the human, and judgemental, activity of assessment, the organisational and social context in which it is embedded, and the fundamental ambiguity of meaning inherent in human understanding and action. None of these observations is intended to attack or discredit scientific knowledge, or to question its utility; rather, they argue that risk assessment should be seen for what it is: an essentially human activity.

A recent analysis by Thompson and Dean [44] provides yet another perspective on these matters. This work explores the potential of developing a frame of reference which is, above all, objectivist, yet inclusive of many conceptual positions which are apparently incompatible, for example scientific expertise and lay knowledge. This approach places different conceptions of risk upon a spectrum, ranging from positions which, at one extreme, view risk purely as about the probability of events, over to positions termed 'contextualist', which may entail elements of an additional set of characteristics including voluntariness, familiarity, and so on. These additional characteristics are, of course, some of the categories of subjective attributes which feature in the classic psychometric work on risk perception. Although Thompson and Dean share my preoccupation with context, their risk characteristics appear static, and there is no sense of risks being dynamically constructed. Moreover, the fundamental ambiguity of interpretation that plays a central role in the social construction of risk is omitted from their model; indeed, they recognise that social constructionist positions cannot be accommodated within what they term the probabilist/contextualist continuum. As they note, fundamental philosophical differences exist between their essentially positivist view and that of social constructionist approaches. Thompson and Dean then engage in an extraordinary attack on social constructionist approaches to risk, suggesting that:

...the majority of risk constructivists have simply not thought about the epistemological and ontological implications of their claims. We think that many constructivists overstate their position in an attempt either to stress the importance of social context in selecting and framing which risk issues to address, or to stress how factors we identify in the contextualist view may be more important than probability or consequence. We think that they might choose their words more carefully if they were aware of the fits that they cause.


Such serious claims surely require significant corroborating evidence, yet one can find none in their paper. This somewhat intemperate outburst mars an otherwise elegant and scholarly piece of work. Significantly, it serves to demonstrate how threatening constructionist ideas are perceived to be in some quarters. More importantly, once again, a caricature of constructionist approaches has been presented in order to discredit them.

As should be clear, the perspective I have attempted to develop here seeks to transcend reductionist approaches to risk assessment that view risk as purely about either scientifically objective characteristics of artefacts or social constructions. Risk is a construct, and all assessments of risk, to some extent, embody elements of social constructionism.¶ In practice, this means that attempts to separate risk from its context, or the assessment from the management of risk, are doomed to failure. As we will see, this recognition has important epistemological, moral and political implications. Rhetorical appeals to idealistic notions of objectivity and truth not only deny the inherent ambiguity and value-laden nature of risk; they also potentially compromise our ability to effectively and equitably manage the risks posed by our increasingly complex world.

¶ My concept of varying degrees of constructionism is therefore close to Steven Yearley's [24] 'Moderate Constructionism', introduced in his more general discussion of science and technology.

3 BRIDGING THE TWO CULTURES?

The analysis set out above has much in common with Jasanoff's [6] work accompanying her concept of the 'two cultures' of risk analysis, contrasting the 'soft', qualitative side of the social sciences with 'the field's "hard", quantitative core'. In particular, she recognises the constructed and contingent nature of risk, and the importance of taking into account its contextual situation. These ideas are introduced by means of a number of concepts: 'scale', which focuses on potential mismatches between scale factors in modelling and real-world applications; 'interactivity', which recognises the key roles of human factors in the construction of risks; and 'contingency', which underlines the importance of context-dependent factors.

However, I take issue with the 'two cultures' idea itself, which suggests a polarity of outlook and practice that mirrors the very reductionism that I wish to avoid. Jasanoff's model implies a somewhat caricatured view of the sort of knowledge produced by 'hard' scientists involved in risk analysis; a view, one should note, that is often reinforced by statements from within their own ranks [45].


In order to clarify matters, two related distinctions need to be made. Firstly, there is Ravetz's [46] important distinction between 'technical' and 'practical' problems; the former belong to the largely academic world of scientific disciplines and the latter to the 'real' world of experience and practice. Secondly, there is the distinction between formal and informal accounts of scientific and technical practices as portrayed by their practitioners. With regard to the latter, I will draw especially on the work of Gilbert and Mulkay [47].

According to Ravetz's view, risk assessment falls into the category of 'practical problems', in the sense that this activity is defined by its purpose. Clearly, however, it draws upon the findings of 'technical' scientific activity. Jasanoff's 'two cultures' picture, by contrast, is concerned largely with differences between perspectives associated with relevant academic disciplines. This distinction is complicated by the close association between 'hard' scientific knowledge and the activity of practical risk assessment; a connection more openly articulated in the United States than, for example, in the United Kingdom. Nevertheless an important epistemological distinction does exist, which, I suggest, sometimes gets lost in the politics of expertise.

Clearly, academic squabbles over the nature of risk exist, with any suggestion of relativism raising the hackles of natural scientists, and a whiff of essentialism causing similar aggravation for many social scientists. Such exaggerated and confrontational positions may display versions of what Gieryn [48] has termed 'boundary work', as scientific professionals seek to enhance the status of their activities by portraying their work and that of 'non-scientists' as separated by a categorical demarcation. Similarly, professional groups involved in a range of risk management situations may possess distinct subcultures that sometimes result in 'tribal' disagreements about how best to conduct operations (e.g. the case of civil protection [49]).

The manner in which scientific activity is discursively portrayed plays a crucial role in understanding these matters. During the course of an extended series of studies of the social world of biochemists, Gilbert and Mulkay [47] identified two different 'interpretive repertoires' employed by scientists in giving accounts of their activities. In essence, these scientists used very different descriptions of scientific work in formal contexts, such as academic papers or conference presentations, than in informal settings such as interviews with the researchers. The formal 'language', or 'empiricist repertoire', portrayed scientific findings as arising unequivocally from experimental data, gathered according to impersonal rules. Human and social factors are omitted from, and indeed ignored by, this type of portrayal. In contrast, informal settings prompted the use of another, 'contingent', repertoire, in which human and social factors were acknowledged and, moreover, regarded as important in influencing scientists' actions and beliefs. Significantly, however, whilst the current theories scientists held were warranted by reference to the application of impersonal scientific method, the contingent repertoire was used to explain what they considered mistaken, or failures to identify 'the facts'.


The concept of an objective, authoritative scientific expertise is built upon a notion of 'science as truth' that has deep roots in the history of science. Accredited experts are able to draw on this belief in order to legitimate their role, and so provide credible recommendations for use in policy [50]. Therefore, both pure 'hard' scientists and scientific experts, who legitimate their roles by appeal to traditional beliefs about science, possess strong vested interests in perpetuating these beliefs.

How does this stereotyped view of scientific activity compare with actual practice? Risk assessors, like other professionals, need to employ a significant amount of skill and judgement in their work in order to operate effectively [51,52]. Such craft skills require experience and an ability to improvise and adapt, which can only be learned by practice. Evidence presented in the 1980s to the Sizewell B nuclear power station public inquiry in the UK was clear in recording the views of the plant's design engineers:

[They] do not believe that risks can be accumulated into single numbers or that any given safety investment necessarily reduces collective risk. They look at design parameters and their implications for operator error and accident sequences.... Risk regulation cannot be achieved by regulation alone, nor can risk-cost-benefit analysis provide an answer. [53]

Risk professionals are therefore involved, to some extent, in contextualising 'technical' scientific knowledge for application to 'practical' problems. The craft skills involved in this work draw on formal procedures like mathematical modelling not so much as 'gateways to truth', but as what Polanyi [54] termed 'heuristics': devices to assist intuition. According to this human-centred view of scientific activity, the tacit knowledge and skill of the scientist form a key ingredient in the production of scientific knowledge. It follows that in the practice of risk assessment, experts need to recognise the limits of narrow instrumental knowledge, and develop an appreciation of the complexity of the problem context.

The analysis above suggests that the actual practice of risk assessment, and indeed even pure science itself, is rather different in nature from the stereotyped view that is used in establishing boundaries between disciplines and in legitimating professional expertise. Rather than two cultures of risk analysis that embody distinct understandings and practices, it is perhaps more useful to recognise the existence of rhetorics [55] that are deployed by professional groups to fulfil a number of political functions. As we will see, the political context in which the 'two cultures' distinction has been drawn is an important factor in influencing the form in which it is presented.

Jasanoff suggests the need for some form of 'exchange programs' that might result in the bringing together of social and natural science perspectives in risk assessment.

This proposal shifts the analysis towards process and interaction between knowledges, and is to be warmly welcomed, but the 'two cultures' characterisation of the problem itself may result in the reinforcement of those very reductionist tendencies that work against this objective.

4 CONTEXTUALISATION AND INTEGRATION OF KNOWLEDGE

In line with the emerging argument of this paper, a number of recent strands of risk research have recognised the need to contextualise expert scientific knowledge and to extend the range of knowledge that is seen to constitute 'expertise' [50,56-59]. Taking into account a wider range of both expert and tacit knowledge, it can be argued, allows both contested values and sources of uncertainty, as well as possible impacts on related policy areas, to be incorporated into the decision-making process. This approach seems to offer the possibility of addressing both epistemological and moral/political shortcomings in the process of assessing and managing risks. Unfortunately, the rationales for these two objectives are sometimes confused, or an imbalance emerges between them.

Consider, for example, the work of Brian Wynne [33], which identifies the source of environmental disputes as arising from the systematic disregard of lay knowledge by powerful, alienating bureaucracies, which seek legitimation for their actions in the authority of scientific knowledge. There is a suggestion that utilisation of lay knowledge in decision-making would satisfy an epistemological function, in improving its quality; however, the main objective clearly seems to be the avoidance of actions that:

...seriously constrains the imagination of new forms of order and how their social legitimation may be better founded...and tacitly and furtively impose prescriptive models of the human and the social upon lay people [which] are implicitly found wanting in human terms.

According to this view:

Thus alternative, more culturally rooted and legitimate forms of collective public knowledge--and of corresponding public order--which could arise from the informal non-expert public domain are inadvertently but still systematically suppressed.

There is a clear political agenda at work here, coupled with the suggestion that lay knowledge is special in some respect. Others have drawn similar conclusions to mine, prompting Wynne to state:

Because my perspective is vulnerable to such common misunderstanding, let me utterly disown the reading which takes it as claiming that lay, or 'local', knowledge is to be championed as superior to scientific or universal knowledge. To conclude this from my analysis would be to completely miss the point.


Despite his protestations, it is difficult to escape the conclusion that moral/political considerations are driving Wynne's analysis, and so possibly compromising the epistemological, and hence operational, aspects of risk management. In addition, Wynne's analysis is unclear over what precisely constitutes 'lay knowledge'; by default, it seems to be implicitly equated with 'non-expert'. Moreover, he seems to be talking exclusively about the powerless: those with little formal education and those who have experienced difficulties of one kind or another with the products of science and technology. Wynne's populist version of the idea of lay involvement in decision-making raises the danger of undue influence on risk decision-making by articulate and politically active members of the public. Berger's [60] recognition of a 'knowledge class' within advanced capitalist societies is of much relevance here; these groups both possess the resources to be influential and tend to be culturally predisposed towards taking views that are antagonistic towards science and technological development.

Other approaches to contextualisation develop from an initial recognition of the 'pitfalls' of risk analysis which is close to the analysis set out in this paper. Notable among these researchers are Otway and von Winterfeldt [61] and Funtowicz and Ravetz [56,57]. All advocate the involvement of an extended range of legitimate voices in the risk management decision-making process. Funtowicz and Ravetz's theoretical rationale is the most highly elaborated, recognising as it does the emergence of a subset of practical problems where typically facts are uncertain, values in dispute, stakes high and decisions urgent. In these circumstances, scientific 'truth' is, as they put it, a 'chimera' [56], so necessitating the development of new forms of scientific practice ('post-normal science'). Central to their concept is a concern for the quality of scientific knowledge in decision-making, and a commitment to the democratisation of such knowledge through the establishment of 'extended peer communities' including members of the public 'who are prepared to commit themselves to the quality assurance of scientific inputs' [56].

Rather similar ideas, in which the contextualisation of scientific expertise is achieved by means of group interaction, have recently been discussed in a number of distinct but conceptually related contexts. These include some of the thinking within the 'constructive technology assessment' literature [62], which stresses processes of 'social learning', and ideas within the 'integrated environmental assessment' movement [63], which is concerned especially with issues like uncertainty, quality assurance and interdisciplinarity within climate change and other global environmental change problems.

Operationalising these conceptual developments now presents a significant research challenge. Experiments with 'ethics committees' and 'citizen panels' [50,64] offer some pointers, as indeed does the work on hazardous facility siting conducted by Renn and his collaborators in Germany [65] and the recent US National Research Council report on the extension of public involvement in risk decision-making [66].


Some of the operational research and decision theory literature offers potential models here, and recent work using problem structuring methods (group-based approaches to enhancing mutual understanding of complex problems by participants possessing plural rationalities [67]) seems to offer considerable potential for application in this context [59]. Despite the promise of these developments, much work is required in order to begin to gain an understanding of the relationship between the characteristics of such group processes, the role of power in these interactions, and the nature of their outputs, merging as they do the epistemological and the moral/political. The assessment of the performance of such processes, including their political implications, poses a major research challenge.

5 THE POLITICAL CONTEXT OF RISK ASSESSMENT

Jasanoff's 'two cultures' model has its roots in debates about risk that have taken place largely in public arenas, primarily within the United States. Here, expertise plays a key political function both in the regulation of risk, especially concerning the toxicity of chemicals, and in risk controversies such as those involving the siting of hazardous facilities. The American political and legal systems place great importance on openness, negotiation and appeals to objective forms of knowledge. As Porter [68] notes:

...administrative decisions have come to be patterned after judicial ones, relying on a form of open and adversarial argument that is scarcely distinguishable from litigation.

Risk analysis has emerged as an important source of such 'objectivity'; hence the emphasis on 'hard' science in the regulatory process, in the form, for example, of the US National Research Council's 'Red Book' [6,69]. The open politicisation of expert evidence in the American courts has led to problems of quality control, prompting the expression 'junk science' [70]. In these circumstances scientific 'experts' are used by protagonists on both sides ('advocacy science') as sources of authority in order to legitimate their arguments.

In Europe, at present at least, things are rather different. Porter [68] recognises that:

The Europeans of course vary among themselves, but all are capable in some measure of formulating policies and determining how to apply them through negotiation with the interested parties, behind closed doors.

Americans, on the whole, are denied this:

Unable to strike bargains in private, American regulatory agencies are forced to seek refuge in objectivity, adopting formal methodologies for rationalizing their every action.


"Unable to strike bargains in private, American regulatory agencies are forced to seek refuge in objectivity, adopting formal methodologies for rationalizing their every action" In the United Kingdom risk assessment for the regulatory process involves the work of expert advisory committees which do not meet in public. John Rimmington vl, the former head of the UK's Health and Safety Executive, recently provided an interesting account of the workings of such committees, which he sees as providing some form of 'mediation' between science and political and social issues. Rimmington illustrates his argument by considering the way in which occupational exposure limits for chemical substances are set. The assessment process involves examination by regulatory agency scientists followed by the deliberations of two expert bodies: first a working party of scientists and then the Advisory Committee on Toxic Substances (ACTS). ACTS is a 'tripartite' body which brings together representatives of employers, trades unions and regulatory agency, backed up by expert advisers. The scientific working party considers not only toxicological data, but also: Practical questions of controllability in the workplace, and sociological and economic impacts of different safety margins are introduced, and good social and economic reasons, concerned with the economic importance or dispensability of the product, may be advanced for adopting a lower margin of safety or, if you like, size of uncertainty factor. 7~ Rimmington advances this as process as one model: for the mediation of scientific evidence through economic and social dimensions to attain a decision which is politically acceptable and economically tolerable Without doubt this approach constitutes a form of contextualisation of 'technical' scientific knowledge. In its present form it employs a rather narrow range of expert knowledges and perspectives; however, it clearly presents a number of advantages over more scientifically reductionist approaches. This has been recognised by Jasanoff 69, who argues that despite its 'many attractions' the British system could not be applied in the American context, characterised as it is by low public trust in official bodies. The role of trust is increasingly recognised as central to discussions about the politics of risk 5°'72. Significantly, US Federal Government under the (first) Clinton Administration introduced risk analysis in the broader context of regulatory priority setting and decision-making 73. In this way the 'objectivity' of a 'scientific process' is introduced as a basis for political decisions. A cynical observer might note that this development amounts to 'taking the politics out of policy'. Interesting parallel developments may now be observed

These advantages have been recognised by Jasanoff [69], who argues that despite its 'many attractions' the British system could not be applied in the American context, characterised as it is by low public trust in official bodies. The role of trust is increasingly recognised as central to discussions about the politics of risk [50,72]. Significantly, the US Federal Government under the (first) Clinton Administration introduced risk analysis in the broader context of regulatory priority setting and decision-making [73]. In this way the 'objectivity' of a 'scientific process' is introduced as a basis for political decisions. A cynical observer might note that this development amounts to 'taking the politics out of policy'.

Interesting parallel developments may now be observed in the UK, where the government's 'Deregulation Initiative' has invoked the authority of risk assessment in order to legitimate its approach, which it argues will 'ensure we get the balance of regulation right to everyone's benefit' [74]. Similarly, in connection with the work of the ILGRA committee, mentioned above, it is argued that 'risk assessment provides a rational approach to making decisions' [8]. Whilst there is obvious merit in introducing consistency between departmental approaches to risk assessment, other agendas seem to be at work here. Both the DTI document [74] and McQuaid's commentary [8] on the ILGRA committee, for example, identify the importance of applying risk assessment in 'influencing Europe' and in negotiating for 'sensible' regulation from European Union directives, an important source of tension in UK-EU relations.

Perhaps more important is the suggestion that these changes may be a response to perceived political difficulties in the management of risks, in the manner that has become common in the United States. This would be consistent with the suggestion that structural changes in the technologically advanced societies are resulting in the emergence of a global 'Risk Society' [58,75], characterised by an increasing preoccupation with dangers of all kinds, and an erosion of trust in science and expertise. Recent controversies, including those concerning the proposed North Sea disposal of the Brent Spar oil platform and the safety of various foodstuffs including British beef, suggest that such changes are taking place [50]. Other corroborating evidence is provided by the central role of risk assessment and concerns about public trust in the operational approach of the Environment Agency, a recently established UK regulatory body [72].

6 CONCLUSIONS

There now seems some justification for my initial claim that the formal distinction between risk assessment and risk management is a somewhat contrived one. We have seen that risk is constructed from the material and the social in ways that implicate both the process of assessment and the context of its management. An approach which recognises that risk possesses a hybrid character, one that always embodies some degree of social constructionism, has been shown to be a profoundly practical view, and also one that avoids reductionist tendencies which suggest that risk is simply about either material artefacts or social constructions.

My analysis of Jasanoff's 'two cultures' model of the risk analysis community has itself implicitly used the idea of social constructionism, showing as it does how certain rhetorics corresponding to the politics of expertise have come into existence. Here it is argued that articulation of these rhetorics may reinforce reductionist tendencies, and that there is a need to strongly resist this reified discourse of risk. Of course, Jasanoff's characterisation must be seen partly as arising from a distinctively polarised US context.††


Nevertheless there are important lessons here of more universal applicability; in particular, a need to recognise the expectations, constraints and caricatured views of others entailed in uncritically adopting any given expert discourse.**

Interactions that allow the contextualisation of scientific knowledge and the integration of knowledges seem to offer a positive way towards a non-reductionist risk analysis. Such approaches have the potential to address both the complexity and the moral and political dilemmas associated with a wide range of risk management problems. These approaches need to extend the range of knowledge that is seen as 'expertise'; however, there exist very real dangers of demagogic populism, and a balance between epistemological and moral/political objectives must be maintained. It seems that these ideas may be applied both in public arenas, where multiple meanings associated with risks fuel misunderstanding and conflict, and in organisational settings, where contrasting meanings play central roles in complicating risk management problems. Certain forms of these developments may emerge naturally within the private sector, where knowledge production is increasingly taking a 'Mode 2', or transdisciplinary, form, rather than being the straightforward application of the products of scholarly reflection [76].

In public arenas, risk controversies may become an even more important part of political life. Despite the very different socio-political contexts, there is some evidence that the UK Government may be adopting similar responses to shifting social currents of risk awareness as the US Federal administration. In particular, the adoption of risk assessment as a 'rational' approach to regulatory decision-making contains both positive and negative aspects, the latter including the utilisation of orthodox risk rhetorics as legitimation devices. Whilst the US context has generated very positive proposals, for example those of the recent US National Research Council report [66], it is not at all clear whether or how these will be applied. On both sides of the Atlantic there is a need to transcend sterile debates between reductionist approaches to risk analysis in order to find ways of effectively and equitably managing the risks posed by our increasingly complex world.

"Such polarisation is not, however, purely restricted to the US. A recent conference on Risk, Science and Policy, held under the auspices of the Royal Society in London in March 1997, was clearly predicated upon, and promoted on the basis of, the need to resolve an alleged conflict between natural and social scientific perspectives on risk. "*The extent to which my own views are characteristic of (and imprisoned by?) my geographical and professional context is a matter for others to judge. They are, I suspect, quite European in ways that may appear unfamiliar to some North American readers, Other influences include my personal intellectual trajectory from physical scientist to policy analyst and then to social scientist.


ACKNOWLEDGEMENTS

I am particularly grateful to Jerry Ravetz, Ragnar Löfstedt and Jonathan Sime, who provided me with detailed comments on the draft manuscript, to the referees for their observations, and to Nick Pidgeon, for his gentle yet incisive editorial guidance. The work was made possible by the generous financial support of the Kirby Laing Foundation. It also draws on work carried out with the support of the Economic and Social Research Council as part of the Risk and Human Behaviour Programme.

REFERENCES

1. Royal Society Study Group, Risk: Analysis, Perception and Management. Royal Society, London, 1992.
2. Royal Society Study Group, Risk Assessment. Royal Society, London, 1983.
3. Hood, C. & Jones, D., Preface. In Accident and Design, ed. C. Hood and D. Jones. UCL Press, London, 1996, pp. xi-xiii.
4. Pidgeon, N., Hood, C., Jones, D., Turner, B. & Gibson, R., Risk perception. In Risk: Analysis, Perception and Management, Royal Society Study Group. Royal Society, London, 1992.
5. Hood, C., Jones, D., Pidgeon, N., Turner, B., Gibson, R. et al., Risk management. In Risk: Analysis, Perception and Management, Royal Society Study Group. Royal Society, London, 1992.
6. Jasanoff, S., Bridging the two cultures of risk analysis. Risk Analysis, 1993, 13(2), 123-129.
7. ILGRA, Use of Risk Assessment within Government Departments. Report prepared by the Interdepartmental Liaison Group on Risk Assessment. Health and Safety Executive, London, 1996.
8. McQuaid, J., Improving the use of risk assessment in government. Transactions of the Institution of Chemical Engineers, Part B, 1995, 73(B4), S39-S42.
9. Krimsky, S. & Golding, D. (ed.), Social Theories of Risk. Praeger, Westport, CT, 1992.
10. Hilgartner, S., The social construction of risk objects. In Organizations, Uncertainties and Risks, ed. J. Short and L. Clarke. Westview, Boulder, CO, 1992, pp. 39-53.
11. Clarke, L. & Short, J., Social organizations and risk: some current controversies. Annual Review of Sociology, 1993, 19, 375-399.
12. Vaughan, D., The Challenger Launch Decision. University of Chicago Press, Chicago, 1996.
13. Turner, B. A. & Pidgeon, N., Man-Made Disasters, 2nd edn. Butterworth-Heinemann, Oxford, 1997.
14. Woolgar, S., Science: The Very Idea. Ellis Horwood, Chichester and Tavistock, London, 1988.
15. Bijker, W. & Law, J. (ed.), Shaping Technology/Building Society: Studies in Sociotechnical Change. MIT Press, Cambridge, MA, 1992.
16. Berger, P. & Luckmann, T., The Social Construction of Reality. Penguin, Harmondsworth, 1967.
17. Sperber, D., Rethinking Symbolism. Cambridge University Press, Cambridge, 1975.
18. Fischhoff, B., Watson, S. & Hope, C., Defining risk. Policy Sciences, 1984, 17, 123-139.
19. Horlick-Jones, T. & Peters, G., Measuring disaster trends. Part 1: Some observations on the Bradford Fatality Scale. Disaster Management, 1991, 3(3), 144-148.


20. Freudenburg, W., Perceived risk, real risk: social science and the art of probabilistic risk assessment. Science, 1988, 242, 44-49.
21. Funtowicz, S. & Ravetz, J., Uncertainty and Quality in Science for Policy. Kluwer, Dordrecht, 1990.
22. Barnes, B. & Edge, D. (ed.), Science in Context. Open University Press, Milton Keynes, 1982.
23. Collins, H., Changing Order: Replication and Induction in Scientific Practice. Sage, London, 1985.
24. Yearley, S., Science, Technology and Social Change. Unwin Hyman, London, 1988.
25. Gillespie, B., Eva, D. & Johnson, R., Carcinogenic risk assessment in the USA and UK: the case of Aldrin/Dieldrin. In Science in Context, ed. B. Barnes and D. Edge. Open University Press, Milton Keynes, 1982, pp. 303-335.
26. Jasanoff, S., Cultural aspects of risk assessment in Britain and the United States. In The Social and Cultural Construction of Risk, ed. B. Johnson and V. Covello. Reidel, Dordrecht, 1987, pp. 359-397.
27. Amendola, A., Uncertainties in systems reliability modelling: insights gained through European benchmark exercises. Nuclear Engineering and Design, 1986, 93, 215-225.
28. Archer, G., Girardi, F., Graziano, G., Klug, W., Mosca, S. & Nodop, K., The European long range tracer experiment (ETEX): preliminary evaluation of model inter-comparison exercise. In Air Pollution Modelling and its Application, Vol. 11, ed. S. E. Gryning and F. A. Schiermeier. Plenum Press, New York, 1996, pp. 181-190.
29. Wolpert, L., The Unnatural Nature of Science. Faber & Faber, London, 1992.
30. Gross, P. & Levitt, N., Higher Superstition: The Academic Left and its Quarrels with Science. Johns Hopkins University Press, Baltimore, 1994.
31. Potter, J., Representing Reality: Discourse, Rhetoric and Social Construction. Sage, London, 1996.
32. Wynne, B., Frameworks of rationality in risk management: towards the testing of naive sociology. In Environmental Threats: Perception, Analysis and Management, ed. J. Brown. Belhaven Press, London, 1989.
33. Wynne, B., May the sheep safely graze? A reflexive view of the expert-lay knowledge divide. In Risk, Environment and Modernity, ed. S. Lash, B. Szerszynski and B. Wynne. Sage, London, 1996, pp. 44-83.
34. Irwin, A., Citizen Science. Routledge, London, 1995.
35. Horlick-Jones, T., Is safety a by-product of quality management? In Accident and Design, ed. C. Hood and D. K. C. Jones. UCL Press, London, 1996, pp. 144-154.
36. Horlick-Jones, T., Acts of God: An Investigation into Disasters. Epicentre, London, 1990.
37. Renn, O., Concepts of risk: a classification. In Social Theories of Risk, ed. S. Krimsky and D. Golding. Praeger, Westport, CT, 1992, pp. 53-79.
38. Douglas, M. & Wildavsky, A., Risk and Culture. University of California Press, Berkeley, 1982.
39. Douglas, M., Risk as a forensic resource. In Risk, ed. E. Burger. University of Michigan Press, Ann Arbor, 1990, pp. 1-16.
40. Wynne, B., Institutional mythologies and dual societies in the management of risk. In The Risk Analysis Controversy, ed. H. Kunreuther and E. Ley. Springer-Verlag, Berlin, 1982, pp. 127-143.
41. Wynne, B., Rationality and Ritual. British Society for the History of Science, Chalfont St Giles, 1982.
42. Palmlund, I., Social drama and risk evaluation. In Social Theories of Risk, ed. S. Krimsky and D. Golding. Praeger, Westport, CT, 1992, pp. 197-212.
43. Mayo, D., Sociological versus metascientific views of risk assessment. In Acceptable Evidence: Science and Values in Risk Management, ed. D. Mayo and R. Hollander. Oxford University Press, New York, 1991, pp. 249-279.
44. Thompson, P. & Dean, W., Competing conceptions of risk. Risk: Health, Safety and Environment, 1996, 7, 361-384.
45. Weiner, R., Comment on Sheila Jasanoff's guest editorial. Risk Analysis, 1993, 13(5), 495-496.
46. Ravetz, J., Scientific Knowledge and its Social Problems. Oxford University Press, Oxford, 1971.
47. Gilbert, G. N. & Mulkay, M., Opening Pandora's Box: A Sociological Analysis of Scientists' Discourse. Cambridge University Press, Cambridge, 1984.
48. Gieryn, T., Boundary-work and the demarcation of science from non-science: strains and interests in professional ideologies of scientists. American Sociological Review, 1983, 48, 781-795.
49. Horlick-Jones, T., Prospects for a coherent approach to civil protection in Europe. In Natural Risk and Civil Protection, ed. T. Horlick-Jones, A. Amendola and R. Casale. E&FN Spon, London, 1995, pp. 1-12.
50. Horlick-Jones, T. & De Marchi, B., The crisis of scientific expertise in fin de siècle Europe. In Scientific Knowledge in Europe, ed. T. Horlick-Jones and B. De Marchi. Special issue of Science and Public Policy, 1995, 22(3), 139-145.
51. Schön, D., The Reflective Practitioner. Basic Books, New York, 1983.
52. Dietz, T. & Rycroft, R., The Risk Professionals. Russell Sage Foundation, New York, 1987.
53. O'Riordan, T., Kemp, R. & Purdue, H., On weighing gains and investments at the margin of risk regulation. Risk Analysis, 1987, 7(3), 361-369.
54. Polanyi, M., Personal Knowledge. Routledge & Kegan Paul, London, 1958.
55. Potter, J. & Wetherell, M., Discourse and Social Psychology. Sage, London, 1987.
56. Funtowicz, S. & Ravetz, J., Risk management as a postnormal science. Risk Analysis, 1992, 12(1), 95-97.
57. Funtowicz, S. & Ravetz, J., Planning and decision-making in an uncertain world: the challenge of post-normal science. In Natural Risk and Civil Protection, ed. T. Horlick-Jones, A. Amendola and R. Casale. E&FN Spon, London, 1995, pp. 415-423.
58. Beck, U., Ecological Politics in an Age of Risk. Polity, Cambridge, 1995.
59. Horlick-Jones, T. & Rosenhead, J., Developing methods to enhance the organisational management of ambiguous risks. In Proceedings of the 1996 Society for Risk Analysis (Europe) Conference. University of Surrey, Guildford, UK, 1996.
60. Berger, P., The Capitalist Revolution. Wildwood House, Aldershot, 1987.
61. Otway, H. & von Winterfeldt, D., Expert judgement in risk analysis and management: process, context and pitfalls. Risk Analysis, 1992, 12(1), 83-93.
62. Rip, A., Misa, T. & Schot, J. (ed.), Managing Technology in Society: The Approach of Constructive Technology Assessment. Pinter, London, 1995.
63. Prospects for integrated environmental assessment: lessons learned from the case of climate change. International Symposium, Toulouse, 24-25 October 1996.
64. Fiorino, D., Citizen participation and environmental risk: a survey of institutional mechanisms. Science, Technology and Human Values, 1990, 15, 226-243.
65. Renn, O., Webler, T. & Wiedemann, P., Fairness and Competence in Citizen Participation. Kluwer, Dordrecht, 1995.
66. Stern, P. & Fineberg, H. (ed.), Understanding Risk: Informing Decisions in a Democratic Society. National Academy Press, Washington, DC, 1996.


67. Rosenhead, J. (ed.), Rational Analysis for a Problematic World: Problem Structuring Methods for Complexity, Uncertainty and Conflict. Wiley, Chichester, 1989.
68. Porter, T., Trust in Numbers. Princeton University Press, Princeton, NJ, 1995.
69. Jasanoff, S., Acceptable evidence in a pluralistic society. In Acceptable Evidence: Science and Values in Risk Management, ed. D. Mayo and R. Hollander. Oxford University Press, New York, 1991, pp. 29-47.
70. Huber, P., Galileo's Revenge: Junk Science in the Courtroom. Basic Books, New York, 1991.
71. Rimmington, J., A social regulator's use of science. Transactions of the Institution of Chemical Engineers, Part B, 1995, 73(B4), S5-S7.


72. Löfstedt, R. & Horlick-Jones, T., Environmental regulation in the UK: politics, institutional change and public trust. In Social Trust, ed. G. Cvetkovich and R. Löfstedt. Earthscan, London, in press.
73. Cantor, R., Rethinking risk management in the Federal Government. Annals of the American Academy of Political and Social Science, 1996, 545, 135-143.
74. DTI, Regulation in the Balance: A Guide to Risk Assessment. Department of Trade and Industry, London, 1993.
75. Beck, U., Risk Society. Sage, London, 1992.
76. Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M., The New Production of Knowledge. Sage, London, 1994.