Copyright © IFAC SAFECOMP'90, London, UK, 1990
THE IMPACT OF SOCIAL FACTORS ON ACCEPTABLE LEVELS OF SAFETY INTEGRITY

I. H. A. Johnston

The Centre for Software Engineering Ltd., Bellwin Drive, Flixborough, Scunthorpe DN15 8SN, UK
Abstract. The paper emphasises the inherently subjective nature of the concept of safety, whilst accepting the need for some objectively based means of measuring it. The concept of levels of safety is introduced. The combination of risk and consequence as a means of determining required safety integrity, as described in IEC 65A(Secretariat)94, is referred to. Possible means of combining values for risk and consequence to give a meaningful measure of safety integrity requirement are discussed. The potential benefit of the application is introduced as a contributor to the required safety integrity, and possible means of incorporating this factor are discussed. A range of social factors which contribute to risk/benefit/consequence, and a means of combining these to produce an 'objective' measure of the level of safety integrity which should be required of a system, are proposed. This combination draws on the ideas presented in DIN V 19 251.

Keywords. Safety; computer software; computer selection and evaluation; social and behavioural sciences; human factors; risk; consequence; benefit.
INTRODUCTION

Public concern and interest in the extent to which people are safe in the course of their daily lives has been increasing for many years. Early efforts in the United Kingdom provided legislation governing the operation of shipping, mining and automobiles. Some of these efforts have helped provide the basis for levels of safety of which the United Kingdom has been proud. Some, such as the man with the red flag in front of the automobile, have long since been abandoned.

In more recent years the growth of consumer organisations and the setting up of the United Kingdom's Health and Safety Executive (HSE) have seen increasing improvements in the degree of safety which the general population expects in all kinds of activity. With the emergence of the green movement, major environmental issues are increasingly seen as involving the safety of our planet.

In the world of software engineering and computer control these developments have been paralleled by an increasing concern with the safety aspects of automated control systems, and by an increase in the use of such systems in all areas of life. In particular, much effort has been devoted to the development of suitable means of ensuring that computer software is safe.

SUBJECTIVITY

There is no clear cut point at which we can say something becomes safe when before it was not. Safety is essentially a subjective concept which depends on current social values and which concerns the level of risk which the population is prepared to tolerate in different situations (HSE, 1988; IGasE, 1989). Many considerations affect the level of risk which is accepted as safe. These include the number of people affected when a hazard occurs, the degree of harm caused to them
and the degree to which individuals may, or may not, choose to subject themselves to the risk. The benefits to be derived from the application should also be taken into account.

The conclusion as to whether or not something is safe, whilst being based on the same or similar considerations, may well differ considerably depending on time and geographical location. Indeed, different individuals have different viewpoints on, for example, the acceptability or non-acceptability of the risk to safety posed by nuclear power generation. Often such differences may be explained by differences in the individuals' knowledge or understanding. Nevertheless, differences of opinion on such matters can and do arise between people whose knowledge and understanding of the matter in hand are at least broadly comparable, if not identical. Safety cannot therefore be absolute; it is essentially a qualitative rather than a quantitative characteristic, and by its nature cannot be considered to be 100% (HSE, 1988; Bennett, 1984).

In practice objectivity of some sort is required, and this has been provided by the adoption of standards, frequently backed up by legislation. Such standards usually apply to particular industries or to particular categories of product or system. They provide a means by which a manufacturer, user or assessor can say "Yes, this product is safe. It meets the requirements laid down by the standard." In order to be useful, standards of this sort must allow such a clear cut decision to be made. Furthermore there must be some rational basis for the implied correlation between "meeting the requirements" and providing a level of safety which will prove acceptable to the public in the context in which the product or system will be used.
In order to achieve such correlations, different standards are often applied to systems which might otherwise be regarded as the same, depending on the environment in which they are to be used. Examples of this approach are the differing requirements for electrical appliances intended for use indoors and outdoors, or the military and civil versions of electronic components. In a slightly different way, the maintenance requirements placed on public transport vehicles reflect the greater public concern with the safety of such vehicles in comparison with the private car.
SOFTWARE SAFETY

The safety of systems involving software (Programmable Electronic Systems, or PESs) has been of growing concern within the computer and control communities. In the last few years this concern has become more widespread and is now shared by many in the more general population.

Software has certain characteristics which make the creation of standards similar to those used for other products rather difficult. Software is used for many different types of application and, whilst apparently simple in small doses, is exceedingly complex in its operation for most practical applications. There is therefore no easy application criterion which can be applied to software. Furthermore, the complex logical structures embedded in software, together with its digital nature, severely restrict the usefulness of testing. The nature of software is such that its failures are systematic rather than random, though in practice they may appear to be random.

Conventional approaches to defining safety requirements often involve the specification of required figures for availability or failure on demand. There has been much work on the development of models which allow such figures to be calculated for software. Whilst this work is clearly of use, the systematic nature of software failures means that, from the point of view of safety, such figures must be treated with extreme caution. Existing work tends to stress the importance of quality rather than measurement.

Safety Integrity levels have been used (IEC, 1989) in order to fit such qualitative criteria into the regime of different degrees of safety for different types of application. These safety integrity levels are currently identified simply as: Very high; High; Medium; Low; Normal.

COMBINATION OF RISK, CONSEQUENCE AND BENEFIT

Safety Integrity is a continuous quantity which is derived from a combination of many different components and which is essentially subjective in nature. The Integrity levels allow a degree of objectivity to be brought to bear, provided suitable mappings can be generated from the various factors which contribute to required safety integrity. The IEC (1989) has proposed a structure for such mappings which I have extended (Johnston, Wood, 1989) and which is shown in Fig. 1.

Risk is defined by the IEC (1989) as "the combination of the frequency, or probability, and the consequence of a specified hazardous event". The means of combining the frequency, or probability, with the consequence is undefined. Given suitable values for the consequences of hazards, the following combination could be considered:

    Risk = Frequency of occurrence of a hazard x Consequence of the hazard

There are a number of problems with this approach:
Values. Frequency of occurrence and Severity of hazard must be expressed as appropriate numeric values (or by some other means for which a suitable "multiplication operator" is defined). Ideally these values should be scaled in such a way as to provide for an "easy" transformation from Risk Level to Integrity Level.

Effect of Integrity Level on Frequency. Since the system configuration, amongst other things, affects the reliability of the system, we have a potentially circular situation: the Frequency is used in determining the required Integrity Level, which is used to drive the design of the system (software), which in turn affects the Frequency. Clearly this problem is not insuperable, but it must be taken into account.
No account is taken of Benefit. Given the approach suggested, Benefit would be most naturally included as a divisor:

    (Frequency x Severity) / Benefit

Benefit could be incorporated, as shown above, directly in Risk, or separately in Integrity as shown in Fig. 1. Alternatively, Benefit may contribute in both ways.

Subjectivity. Frequency of occurrence is a "fairly" straightforward value in some ways, though it may be difficult to justify a specific figure. Severity and Benefit are much less clear cut. They depend on a number of factors, some of which are rooted in the opinions and beliefs of society.

POSSIBLE APPROACH

As a starting point I have taken the following approach which, whilst raising further problems, does provide a means of reasoning about the combination.

Assume frequency is expressed as a figure within the range [0,1] where 1 is "occurs all the time" and 0 is "never occurs". In other words, frequency is expressed as something like a probability.
There must be some question as to whether a hazard which occurs all the time is something which should be considered; remember that we are looking at the overall system. Perhaps a complete redesign of the system is appropriate. Equally, is a hazard which never occurs really a hazard?

Assume severity is expressed as a value within the range [0,1] where 0 is "completely benign" and 1 is "some catastrophic event".

Consideration must be given here to quite what is meant by "catastrophic". If we wish the range [0,1] to encompass all possibilities then 1 must represent a "worst imaginable catastrophe", for example a collision between the Earth and the Sun. From the human point of view this would be the "end of the world" and nothing could be worse. This gives a very wide range. I propose a restriction under which 1 represents "the worst catastrophe which is possible as a result of the failure of a system which we are prepared as a society to countenance the existence of". This approach is discussed in Appendix A.

A straightforward multiplication, Frequency x Severity, now gives some number to which I believe we can assign a realistic meaning:
    S = Severity of hazard,       S ∈ [0,1]
    F = Frequency of occurrence,  F ∈ [0,1]

    S x F = R = Risk,             R ∈ [0,1]

    R = 0  =>  There is no risk.
    R = 1  =>  There is the highest possible risk. (Unless some
               protection is provided the worst imaginable
               catastrophe will occur now.)
In real situations we would expect to get results for R such as R = 1/2 or R = 0.132. Figure 1 depicts a mapping from the risk level (R) to a system integrity level (SI). We could define this mapping (f) as:

    f(x) = 1  for x ∈ [0, 0.2]
           2  for x ∈ (0.2, 0.4]
           3  for x ∈ (0.4, 0.6]
           4  for x ∈ (0.6, 0.8]
           5  for x ∈ (0.8, 1]
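To make the combination concrete, the following minimal sketch (in Python, which the paper itself does not use; the function names and the optional benefit divisor are illustrative additions, while the interval boundaries are those of f above) computes R from Frequency and Severity and maps it to an integrity level:

    # A minimal sketch, assuming the mapping f above. The function names
    # and the optional benefit divisor are illustrative, not part of any
    # standard. Note that a benefit value below 1 can push R above 1;
    # how Benefit should be scaled is left open in the text.

    def risk(frequency, severity, benefit=1.0):
        if not (0.0 <= frequency <= 1.0 and 0.0 <= severity <= 1.0):
            raise ValueError("frequency and severity must lie in [0, 1]")
        return (frequency * severity) / benefit

    def integrity_level(r):
        # f(x) with the intervals proposed above: level 1 for the
        # lowest risk band, level 5 for the highest.
        if r <= 0.2:
            return 1
        if r <= 0.4:
            return 2
        if r <= 0.6:
            return 3
        if r <= 0.8:
            return 4
        return 5

    # Example: F = 0.3 and S = 0.44 give R = 0.132, which falls in the
    # first band and so maps to integrity level 1.
    print(integrity_level(risk(0.3, 0.44)))   # -> 1

The sketch fixes only the shape of the computation; as noted below, the intervals themselves may well need modification.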
Such a mapping provides a means of combining Frequency and Severity to derive an integrity level. The intervals which the mapping uses to select an integrity level may well need modification, but the method is defined.

THE EFFECT OF CONSIDERING BENEFIT

The means proposed above of combining frequency and severity to derive a risk, and thence to map to a system integrity level, is certainly attractive. Clearly there are some systems where the risk is too great for any level of safety integrity to be appropriate (unless we postulate a 6th level of "system shall not be implemented in this form"). A Risk level of 1 on this basis implies a certainty of the hazard occurring and a catastrophic severity. This can only be acceptable if the system is such as to reduce the frequency of occurrence to some very low level, which results in the Risk not in fact being at the level identified.
There is then a boundary level at which Risk maps to the 6th level. Beyond this boundary the system is not, or should not be, implemented.

Any system is implemented in order to achieve some objective. This objective, and possibly side effects of the system, are the benefits to be had from its implementation. The degree of benefit clearly has an effect on the mapping from Risk to required system integrity level. A level of safety integrity which is acceptable for a power plant may vary depending on the application for which the power is being generated. Power required for essential services might be considered to constitute a greater benefit than power required for leisure activities.

SOCIAL FACTORS

This modification of the safety integrity requirement relates to the mapping between Risk and integrity level as shown in Fig. 1. The mapping is also conditioned by the ways in which society views risk taking. The HSE (1987) points out that it is natural to prefer one kind of hazard to another, and cites ethical objections as being amongst the reasons for such differences. Both the HSE (1987) and the IGasE (1989) discuss the public's perception of risk. In their terms risk is not quite as discussed here; instead it is the level of fatalities per annum. The IGasE (1989) does, however, allow that severity of consequence affects the perception of risk, and mentions risk of loss of gas supply as being of lower consequence than personal injury.

For the purposes of determining required levels of safety integrity, a balance must be struck between the, possibly irrational or uninformed, views of the general public and the real risks and benefits as far as they are known.

Work has been done by DIN (1988) on the use of risk-parameter graphs as a means of combining the diverse factors influencing the level of risk. I believe that an approach based on that suggested above is superior to the risk-parameter graph for combining Frequency and Hazard Severity. The risk-parameter graph approach may well, however, be extremely well adapted to the combination of the diverse social factors influencing Benefit and Hazard Severity.

Various factors associated with the social system are identified in Fig. 1. Based on these I suggest a breakdown into the factors and categories shown in Table 1. These factors can then be combined using a risk-parameter graph, such as that shown for Benefit in Fig. 2, to modify the transformation from Risk Level to integrity level shown in Fig. 1 (a sketch of such a combination is given after the list below). The risk-parameter graph shown completes all legs for all parameters except perception of benefit. The values of the modifiers are not identified.

In order for this approach to be successful, further work is required on:

i)   the choice of appropriate parameters;

ii)  the relative importance of the parameters (this may affect the order in which they are applied);

iii) whether any parameters may be excluded from consideration in certain circumstances;

iv)  the values of the modifiers to be applied, and whether they can be applied directly to the Risk Level or whether they should be applied as modifiers to the integrity level;

v)   the ranges for perception of harm and risk.
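As an illustration only, the following sketch shows how a risk-parameter graph for Benefit might be realised in code. The parameter codes (N1-N3, T1-T3, E1-E3) follow Table 1, but the modifier values at the ends of the legs are invented placeholders, and the assumption that greater benefit lowers the required integrity level is mine; these are precisely the open questions listed above.

    # Illustrative sketch of a Benefit risk-parameter graph, after
    # DIN V 19 251. Parameter codes follow Table 1; the modifier
    # values are placeholders -- the paper leaves them unidentified,
    # as it does the leg for Perception of Benefit.

    BENEFIT_MODIFIERS = {
        # (number benefitting, type of benefit, extent of benefit)
        ("N1", "T1", "E1"): -2,   # all / essential / great
        ("N1", "T1", "E2"): -1,
        ("N1", "T2", "E1"): -1,
        # ... the remaining legs would be enumerated in the same way
    }

    def benefit_modifier(number, benefit_type, extent):
        # Walk the graph; legs with no assigned value contribute 0.
        return BENEFIT_MODIFIERS.get((number, benefit_type, extent), 0)

    def modified_integrity_level(base_level, modifier):
        # Apply the modifier to the level derived from Risk, keeping
        # the result within the five defined integrity levels.
        return min(5, max(1, base_level + modifier))

    # Example: a system benefitting everyone (N1) with an essential (T1)
    # and great (E1) benefit might justify a lower required level.
    print(modified_integrity_level(4, benefit_modifier("N1", "T1", "E1")))  # -> 2

Whether modifiers should be applied to the Risk Level or, as here, to the integrity level is itself item iv) of the further work above.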
CONCLUSION

Concern about safety, and particularly the safety of systems which incorporate software, is becoming more widespread. Safety is an essentially subjective characteristic which it is not possible to measure directly. The approach often taken is to lay down standards which must be complied with for specific applications in particular circumstances. This approach has served well in the past but is not necessarily suitable for the wide variety of applications and circumstances in which software may be used.

The concept of integrity levels has been developed to enable some form of objectivity to be applied to the safety of software. These integrity levels provide a possible means of taking account of a variety of social factors when deciding on the level of safety required in any particular case. As yet no means of objectively determining the integrity level required in a particular case has been developed. I suggest a first attempt to define a method of determining the integrity level which I believe shows promise. The method is based on the estimation of Frequency and Hazard Severity in the range [0,1] and the combination of these by multiplication to give a Risk level. The Risk level is then transformed to an Integrity Level using a risk-parameter approach which incorporates provision for socially determined risk parameters of hazard and benefit.

APPENDIX A: WORST CATASTROPHES

Provided that we accept that there is a continuum of severity of hazard, then there is a range on that continuum which is meaningful. What that range is depends on one's viewpoint. As individuals we can appreciate each other's viewpoints to a greater or lesser extent, but the viewpoints are nevertheless different; in fact a single person's viewpoint may be different at different times.

Driver's Viewpoint

When considering a car driver on the motorway, the meaningful range of severity is likely to encompass the range from

    car radio breaks down (benign)
    to
    multiple pile-up occurs around vehicle (worst catastrophe).
These days, of course, one might also consider "Jumbo jet crash lands on my bit of motorway" or "earthquake opens up road in front" as alternative candidates for "worst catastrophe".

Environmentalist's Viewpoint

Consider another viewpoint, that of a person participating in a discussion about the environment; the range might be

    reduction in fish stocks (benign)
    to
    no unpolluted fish (worst catastrophe)

or

    reduced tree life from acid rain (benign)
    to
    Chernobyl-style accident makes Europe uninhabitable (worst catastrophe).
Engineer's Viewpoint

Given this variety, what viewpoint should be taken by engineers considering the application of programmable electronics in safety-related systems? If it is accepted that safety includes safety from environmental damage and financial hardship, there is a practical range which needs to be considered:

    no discernible harm to persons, the environment or any person's standard of living (benign)
    to
    catastrophic accident in the most damaging circumstances to the most dangerous system currently controlled using programmable electronics, or which any society would in the future countenance controlling with programmable electronics (worst catastrophe).

REFERENCES
IEC (1989). Software for Computers in the Application of Industrial Safety-Related Systems, Proposal for a Standard. International Electrotechnical Commission, Sub-Committee 65A, Working Group 9.

HSE (1988). The Tolerability of Risk from Nuclear Power Stations. HMSO, London.

Bennett, P. A. (1984). The Safety of Industrially Based Controllers Incorporating Software. PhD Thesis, Open University, Milton Keynes.

HSE (1987). Programmable Electronic Systems in Safety Related Applications. HMSO, London.

DIN (1988). Prestandard DIN V 19 251. Deutsche Elektrotechnische Kommission, Frankfurt.
Table 1. Social factors influencing integrity level

Benefit
    Category                 Range                                      Code
    Number of persons        All ... One individual                     N1 N2 N3
    Type of Benefit          Essential / Desirable / Luxury             T1 T2 T3
    Extent of Benefit        Great ... Little                           E1 E2 E3
    Perception of Benefit    ?                                          P?

Harm
    Category                 Range                                      Code
    Number of persons        All ... One individual                     N1 N2 N3
    Degree of harm           Death / Injury / Environment / Economic    H1 H2 H3 H4
    Perception of harm       ?                                          P?
    Degree of choice         Involuntary / Difficult to avoid /
                             Entirely voluntary                         C1 C2 C3
[Figures not reproduced. Fig. 1 maps Frequency of Occurrence and Severity of Hazard to a Software Integrity Level (1-5), modified by social system factors: number affected, degree of harm, perception of harm, degree of choice, number benefitting, type of benefit (essential/desirable/luxury) and extent of benefit.]

Fig. 1. Risk and integrity level mappings.

Fig. 2. Risk-parameter graph deriving modifiers.