Reliability Engineering and System Safety 91 (2006) 964–973 www.elsevier.com/locate/ress
Generalizing the safety factor approach
Jonas Clausen a,*, Sven Ove Hansson a, Fred Nilsson b
a Division of Philosophy, Royal Institute of Technology, S-100 44 Stockholm, Sweden
b Department of Solid Mechanics, Royal Institute of Technology, S-100 44 Stockholm, Sweden
Available online 2 November 2005
doi:10.1016/j.ress.2005.09.002
Abstract

Safety factors (uncertainty factors) are used to avoid failure in a wide variety of practices and disciplines, in particular engineering design and toxicology. Although these two areas have similar problems in their use of safety factors, there are no signs of previous communication between the two disciplines. The present contribution aims at initiating such communication by pointing out parallel practices and joint issues between the two disciplines. These include the distinction between probabilistic variability and epistemic uncertainty, the importance of distribution tails, and the problem of countervailing risks. In conclusion, it is proposed that future research in this area should be interdisciplinary and make use of experiences from the various areas in which safety factors are used.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Safety factor; Uncertainty factor; Uncertainty function; Uncertainty; Variability
1. Introduction

Safety factors are used in a wide variety of disciplines in order to avoid various types of failure. The two disciplines in which they are most widely used are structural engineering and toxicology. Although many of the problems related to safety factors are largely the same in these two disciplines, and extensive debates have taken place in each of them, we have not seen any previous signs of contact between these two discussions. The purpose of the present paper is to provide a general overview of safety factors and related concepts, taking into account their uses in different areas of application. We begin by briefly introducing how safety factors are used in structural engineering and in toxicology (Section 2). After that we provide a general framework that includes but is not restricted to commonly used safety factors (Section 3), categorize safety factors and related concepts according to how they are determined (Section 4) and what they are intended to protect against (Section 5), discuss their relations to probability statements (Sections 6–8), and outline some important areas for future research (Section 9). This paper is the first outcome of a research project that aims at developing a general theory of safety factors (uncertainty factors). The project is supported by the Swedish Research Council.

* Corresponding author. Fax: +46 8 790 9517. E-mail address: [email protected] (J. Clausen).
2. Background

Humans have probably made use of safety margins since the origin of our species. At least since antiquity, builders have added extra strength to their constructions to be on the safe side. However, the explicit use of safety factors in calculations seems to be of much later origin, probably the latter half of the 19th century. In the 1860s, the German railroad engineer A. Wöhler recommended a factor of two for tension [1]. In the early 1880s, the term 'factor of safety' was in use; hence Rankine's A Manual of Civil Engineering defined it as the ratio of the breaking load to the working load, and recommended different factors of safety for different materials [1].

In structural engineering, the use of safety factors has long been well established, and many different systems are in use. Most commonly, a safety factor is defined as the ratio between a measure of the maximal load not leading to failure and a corresponding measure of the applied load. In some cases, it may be preferable to define the safety factor as the ratio between the estimated design life and the actual service life. Design criteria employing safety factors can be found, for instance, in norms and standards. A typical feature of such systems is that they have to consider all the integrity-threatening mechanisms that can occur. For instance, one
safety factor may be required for resistance to plastic deformation and another one for fatigue resistance. The definition of a safety factor is always connected to the particular method of design. Thus, in the European tradition it has been customary to apply the safety factor to the yield limit, whereas in the American tradition it has been applied to the ultimate tensile strength. This has in turn influenced material development, so that European steels in general have a higher ratio of yield limit to ultimate tensile strength than American ones. It thus follows that a safety factor can only be understood in the context of the structure and design practice in which it is used.

In toxicology, the use of explicit safety factors is more recent. In spite of some precursors [2], it dates from the middle of the 20th century. In 1945, Hart and co-workers proposed the use of what is now called an application factor (i.e. the inverse of a safety factor) of 0.3 to be applied to acute toxicity data [3]. The first safety factor for toxicity was proposed by Lehman and Fitzhugh in 1954: that ADIs (acceptable daily intakes) for food additives be obtained by dividing chronic animal NELs (no effect levels) in mg/kg of diet by 100 [4]. Hence, they defined a safety factor as the ratio between an experimentally determined dose and a dose that is accepted in humans in a particular regulatory context. This definition is still in use. Their value of 100 is also still widely in use, but higher factors such as 1000, 2000, and even 5000 are also used in the regulation of substances believed to induce severe toxic effects in humans [4,5]. Toxicological uncertainty factors are often accounted for as products of subfactors, each of which relates to a particular 'extrapolation'. Hence, the factor 100 is described as composed of two factors of 10, one for the extrapolation from animals to humans and the other for the extrapolation from the average human to the most sensitive parts of the human population [5]. For ecotoxicity, factors below 100, such as 10, 20, and 50, are widely in use [3]. Other terms used for different variants of safety factors include 'uncertainty factor' [6], 'margin of safety' [7], 'factor of ignorance' [8], 'contingency factor' [9], and 'assessment factor' [3].

3. Margins, factors, and functions

In addition to safety factors, the closely related concept of a safety margin is used in several contexts. Airplanes are kept apart in the air by a safety margin in the form of a required minimal distance. Surgeons removing a tumour also remove the tissue closest to the tumour. This 'safety margin', or 'surgical margin', is defined as the distance between the reactive zone of a tumour and the place of the surgical lesion. Typical values are 1–2 cm [10]. The notion of a safety margin is also sometimes used in structural engineering, and is then defined as capacity minus load [11]. Synonyms for 'safety margin' include 'reserve capacity' [11] and 'reserve strength' [12]. The essential difference between safety factors and safety margins is that the former are multiplicative whereas the latter are additive. For some purposes, both the multiplicative (safety
factor) and the additive (safety margin) approaches have been used. This applies to intestinal capacity to absorb nutrients [13] and to the geotechnical issue of embankment reliability [14].

Multiplication by a constant and addition of a constant are not the only mathematical operations that can be used to adjust technological or physiological variables in the direction of safety. The obvious generalization is to apply a function to the crucial variable (such as load in structural engineering and dose in toxicology). Multiplication by a safety factor and addition of a safety margin are special cases of this more general functional approach. There are some precedents for using the term 'safety margin' to denote a more general notion, namely 'in general an arithmetic relationship comparing resistance to load, whatever format it takes (safety factor, an expression of partial factors, or a difference in numbers)' [15]. We prefer, however, to reserve the term 'margin' for the additive subcase, and to use 'function' for the more general notion. Furthermore, the term 'uncertainty' is more accurate in this context than 'safety', since these factors and margins are applied in order to cope with uncertainty but they do not necessarily give rise to safety. Therefore, we propose to use the term 'uncertainty function' for the general notion, and 'uncertainty factor' and 'uncertainty margin' for the multiplicative and the additive subcases.

The general framework in which uncertainty functions arise can then be described as follows. In order to avoid some kind of failure, we focus on a measurable variable x that is related to the occurrence of this failure. If the value of this variable is less than xc, failure or undesired consequences are believed not to occur. In structural engineering, xc is termed the failure load of the structure considered and is calculated by deterministic methods. The numerical value depends on the input data, and these are obtained by various methods. Sometimes expected values are used; in other cases more or less conservative data are employed. The design load y is a measure of the actual load, obtained in a similar way. In order to decrease the probability or severity of failure, it is required that y ≤ u(xc), where the function u is here termed the uncertainty function. In time-dependent systems (e.g. fatigue), xc represents the expected life of the structure, while y is the actual service life. In toxicology, xc represents the highest exposure that is believed to have no adverse effects and u(xc) an exposure that is accepted by the standard-setting body.

The uncertainty function u can be represented by a (constant) factor f, f > 1, if u(xc) = xc/f. In the same way, u can be represented by a (constant) margin m, m > 0, if u(xc) = xc − m. We will use the terms 'uncertainty factor' and 'uncertainty margin' to denote f and m, respectively. In the case of just one design variable, the condition of no failure can often be expressed as gc − g(x) ≥ 0, where g is a nondecreasing function and gc = g(xc) is the critical level; equality thus corresponds to the critical state. See Fig. 1, in which the design load is marked by the point (g0, y).
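As a minimal numerical sketch of this framework (all values below are hypothetical and are chosen only to show that the factor and margin representations are genuinely different criteria):

```python
# Minimal sketch of the framework above (hypothetical numbers throughout).
# x_c is the highest value of the variable believed not to lead to failure,
# y is the design value, and the design criterion is y <= u(x_c).

def u_factor(x_c: float, f: float) -> float:
    """Uncertainty function represented by a constant factor f > 1: u(x_c) = x_c / f."""
    return x_c / f

def u_margin(x_c: float, m: float) -> float:
    """Uncertainty function represented by a constant margin m > 0: u(x_c) = x_c - m."""
    return x_c - m

def acceptable(y: float, u_of_xc: float) -> bool:
    """The design requirement y <= u(x_c)."""
    return y <= u_of_xc

x_c = 120.0   # hypothetical failure load
y = 35.0      # hypothetical design load

print(acceptable(y, u_factor(x_c, f=3.0)))    # True:  35 <= 120/3 = 40
print(acceptable(y, u_margin(x_c, m=90.0)))   # False: 35 >  120 - 90 = 30
```

The two representations coincide only for particular combinations of f, m and xc; in general they express different requirements.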
An uncertainty function can be constructed in a variety of ways. If we want to maintain a certain ratio between xc and y, then the uncertainty function becomes u(xc) = xc/f. If instead, as is often the case when there is a non-linear relation between the primary load variable and the local value of some variable connected to the failure mechanism, we want to maintain a certain ratio between gc and g0, say f0, then the uncertainty function according to the previous definition becomes u(xc) = g⁻¹(g(xc)/f0). The numerical values of the uncertainty factors f and f0 will be different although they represent the same conditions. It is thus clear that the definition of an uncertainty factor is not unique.

Generalizing further, it is often the case that a certain failure is related not to one variable, but to a set of variables. Multi-dimensional descriptions of failure are rather the rule in structural engineering, e.g. failure surfaces [16] and limit state functions [17], and have found their use in toxicology, e.g. dose-response surfaces [18]. When several variables are involved, the condition for no failure can in general be expressed as h(x1, ..., xn) ≥ 0, where equality corresponds to failure. This is schematically illustrated in Fig. 2. Obviously, any definition of a safety measure and the corresponding uncertainty function becomes more ambiguous than in the one-dimensional case. One way of accomplishing a situation analogous to that of the one-dimensional case is to restrict the design variable to a certain subregion enclosed by the critical state, as is sketched in Fig. 2. This can for instance be done by a co-ordinate transformation; the design subregion is then given by h(u(x1, ..., xn)) ≥ 0, where u is an n-dimensional vector function. This function is then analogous to the uncertainty function defined above for the one-dimensional case.

Fig. 1. Schematic failure condition for a one-dimensional case. The curve g(x) separates the design region from the unsafe region; the critical state is gc = g(xc), and the design load is marked by the point (g0, y), with y = u(xc).

Fig. 2. An illustration of a multi-dimensional case. The critical state is the surface h(x1, x2) = 0, and the design region is enclosed by h(u(x1, x2)) = 0.

4. Explicit, implicit, and natural uncertainty functions

Current uncertainty functions can be divided into three categories:

4.1. Explicitly chosen uncertainty functions
These are the uncertainty functions most commonly referred to. They are used e.g. by the engineer who multiplies the foreseen load on a structure by a standard value of, say, three and uses this larger value in his or her construction work. Similarly, the regulatory toxicologist applies an explicitly chosen uncertainty factor when she divides the dose believed to be harmless in animals by a previously decided constant such as 100, and uses the obtained value as a regulatory limit. Explicitly chosen uncertainty factors are also used e.g. in geotechnical engineering [12,14], in ecotoxicology [3], and in fusion research [19] (for plasma containment). Explicitly chosen uncertainty margins are used in surgery [10,20] (the 'surgical margin'), in radiotherapy [21] (to cope with set-up errors and internal organ motion), and in air traffic safety [22,23].

4.2. Implicit safety reserves

We use the term 'safety reserve' to denote margins that have not been chosen as explicit uncertainty functions, but can, after the fact, be described as such. Implicit safety reserves have their origin in human choice, but in choices that are not made in terms of uncertainty functions. As one example of this, occupational toxicology differs from food toxicology in that allowable doses are mostly determined in a case-by-case negotiation-like process that does not involve the use of generalized (fixed) uncertainty factors. However, it is possible to infer implicit uncertainty factors; in other words, the regulatory decision can be shown to be the same as if certain uncertainty factors had been used [24]. Another example can be found in traffic safety research. The behaviour of drivers can be described as if they applied a certain uncertainty margin to the distance between their car and the car nearest ahead [25] (this margin is measured as the time headway, i.e. the distance divided by the speed).

4.3. Naturally occurring safety reserves
These are the safety reserves of natural phenomena that can be calculated by comparing a structural or physiological
capacity to the actually occurring loads. These safety reserves have not been chosen by human beings, but are our way of describing properties that have developed through evolution. Just as with implicit safety reserves, naturally occurring safety reserves can often be described in terms of uncertainty functions. Structural uncertainty factors have been calculated for mammalian bones [26,27], crab claws [28], shells of limpets [29], and tree stems [30]. Physiological uncertainty factors have been calculated e.g. for intestinal capacities such as glucose transport and lactose uptake [11], for hypoxia tolerance in insects [31], and for human speech recognition under conditions of speech distortion [32].

The reason why uncertainty functions can be applied in descriptions of natural phenomena is that, when we calculate loads (be it for natural or artificial artefacts), we do not consider the more unusual loads. Resistance to unusual, unforeseen loads is as important for the survival of an organism as it is for the continued structural integrity of an anthropogenic artefact. If a limpet shell has extra strength, then it may resist predators even if its strength has been diminished due to infection by endolithic organisms. Similarly, the extra strength of tree stems makes it possible for them to withstand storms even if they have been damaged by insects. Hence, natural uncertainty functions (or more precisely: natural features that can be described in terms of uncertainty functions) are present although the physiological and mechanical capacities of animals and plants have been adapted to loads that will actually be encountered. On the other hand, there is a limit to the evolutionary advantage of excessive safety reserves. Organisms with unnecessarily prudent safety reserves would be disadvantaged. Trees with large safety reserves are better able to resist storms, but in the competition for light reception, they may lose out to slender and tall trees with smaller safety reserves [30]. In general, the costs associated with excessive capacities result in their elimination by natural selection [11].

There are at least two important lessons to learn from nature in this context. First, resistance to unusual loads, which are sometimes difficult to foresee, is essential for survival. Secondly, a balance will nevertheless always have to be struck between the dangers of having too little reserve capacity and the costs of having reserve capacity that is not used.

5. What do uncertainty functions protect against?

In characterizing the sources of failure that uncertainty functions are intended to protect against, we have use for distinctions from decision theory. A decision is said to be made 'under certainty' if the decision-maker knows, for each alternative under consideration, what will be the outcome if that alternative is chosen. Non-certainty is further divided into the categories of risk and uncertainty. The locus classicus for this subdivision is Knight [33], who pointed out that 'the term "risk", as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different'. In some cases, 'risk'
means ‘a quantity susceptible of measurement’, in other cases ‘something distinctly not of this character’. He proposed to reserve the term ‘uncertainty’ for cases of the non-quantifiable type, and the term ‘risk’ for the quantifiable cases [33]. In one of the most influential textbooks in decision theory, the terms are defined in the following way: We shall say that we are in the realm of decision making under: (a) Certainty if each action is known to lead invariably to a specific outcome (the words prospect, stimulus, alternative, etc., are also used). (b) Risk if each action leads to one of a set of possible specific outcomes, each outcome occurring with a known probability. The probabilities are assumed to be known to the decision maker. For example, an action might lead to this risky outcome: a reward of $10 if a ’fair’ coin comes up heads, and a loss of $5 if it comes up tails. Of course, certainty is a degenerate case of risk where the probabilities are 0 and 1. (c) Uncertainty if either action or both has as its consequence a set of possible specific outcomes, but where the probabilities of these outcomes are completely unknown or are not even meaningful [34, p. 13]. These distinctions are also made in the engineering literature [35], but in the more recent literature the term ‘risk’ is mostly replaced by ‘probability’, so that the distinction is between probability and uncertainty rather than between risk and uncertainty. The reason for this is that since the 1970s, the term ‘risk’ has increasingly been reserved for the statistical expectation value of unwanted events, i.e. the integrated product of probability and disutility [36]. It is a contested philosophical issue whether or not uncertainty (interpreted as lack of information or as subjective doubt) can be adequately represented by numerical values that satisfy the standard probability axioms [37]. For the present purposes, we will leave this issue open, and divide the sources of failure that uncertainty functions are intended to protect against into two major categories: (1) the variability of empirical indicators of the propensity for failure (corresponding to risk in decision-theoretical terminology) and (2) genuine epistemic uncertainty. The latter category includes the ‘risk’ (in the informal sense of that word) that important factors that should have influenced the appraisal have not yet been discovered. In our view, it is an important but not always well-understood feature of uncertainty functions that they are directed both at (probabilistic) variabilities and at (arguably non-probabilistic) epistemic uncertainty. In structural engineering, uncertainty factors are intended to compensate for five major categories of sources of failure: (1) higher loads than those foreseen, (2) worse properties of the material than foreseen, (3) imperfect theory of the failure mechanism in question, (4) possibly unknown failure
mechanisms, and (5) human error (e.g. in design) [15,38]. The first two of these can in general be classified as variabilities, whereas the last three belong to the category of (genuine) uncertainty.

In toxicology, uncertainty factors are typically presented as compensations for various variabilities. Other uses are as compensation for data deficiencies or to enable extrapolations. The factors used are normally one for extrapolation from animals to humans (UA), one for intraspecies (normally human) variability (UH), one for extrapolation from subchronic to chronic exposure (US), one for extrapolation from a LOAEL (lowest observed adverse effect level) to a NOAEL (no observed adverse effect level) (UL) and, finally, one for inadequacy of databases (UD) [39]. As already mentioned, the traditional 100-fold factor is often accounted for in terms of one factor of 10 for interspecies (animal to human) variability in response to the toxicity and another factor of 10 for intraspecies (human) variability in the same respect [4]. It is important to note that these two uncertainty factors represent different types of relationships. Interspecies uncertainty factors represent ratios between two stochastic variables, e.g. the dose required for a certain effect in humans divided by the corresponding dose in an experimental species (the dose of a certain substance that will produce a certain effect in a randomly chosen individual being a stochastic variable), whereas intraspecies uncertainty factors represent ratios between different percentiles, e.g. in the density function for the dose required for a certain effect in humans.

In addition (although this is not often referred to), uncertainty factors in toxicology also protect against epistemic uncertainty. Suppose, for instance, that a substance is known to give rise to acute toxicity at a dose of 500 mg/kg body weight. It also gives rise to toxic effects after long-term exposure at a daily dose of 30 mg/kg body weight, but this is not known since no long-term studies have been performed on the substance. If an uncertainty factor of 100 is applied to the acutely toxic dose, then the exposure limit (5 mg/kg body weight) will protect also against the long-term effects. In this way, the uncertainty factor protects (to some degree) against epistemic uncertainty as well as against variability. Since the available information on most chemical substances is incomplete [40], this is probably an important function of toxicological uncertainty (safety) factors. In some cases, uncertainty factors have been tailored to the explicit purpose of compensating for incomplete databases [41].

Several authors have discussed how the use of uncertainty factors in toxicology is currently undergoing change. As an example, Dourson et al. [42] note that in recent times there has been a movement among health agencies away from default factors towards factors based to an increasing degree on scientific information relating to the individual substance.
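To make the multiplicative structure of these subfactors concrete, the sketch below assembles a reference dose from a NOAEL and a product of subfactors. The NOAEL and the individual factor values are hypothetical and only illustrate the arithmetic; actual values are chosen case by case by the standard-setting body (cf. [39]).

```python
# Sketch of how a toxicological reference dose can be assembled from subfactors.
# The NOAEL and the subfactor values below are hypothetical illustrations.

noael = 50.0  # hypothetical NOAEL from a subchronic animal study, mg/kg/day

subfactors = {
    "UA_animal_to_human": 10,       # interspecies extrapolation
    "UH_human_variability": 10,     # intraspecies (human) variability
    "US_subchronic_to_chronic": 3,  # exposure-duration extrapolation
    "UL_loael_to_noael": 1,         # not needed here: a NOAEL is available
    "UD_database_inadequacy": 3,    # incomplete database
}

total_uf = 1
for value in subfactors.values():
    total_uf *= value

reference_dose = noael / total_uf
print(f"total uncertainty factor = {total_uf}")           # 900
print(f"reference dose = {reference_dose:.3f} mg/kg/day") # about 0.056
```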
6. The need to optimise

As we saw in Section 4, in nature, protection against one type of failure often counteracts protection against other types of failure. This applies equally in social and technological contexts. In other words, overdesign has its costs. The determination of uncertainty functions is a 'balancing act between competing social values, against a background of scientific uncertainties' [43]. The most obvious disadvantages of increased protection against health risks or mechanical failures are often economic. Therefore, the balance involved in the determination of a level of protection is often conceived as a competition between environment and safety on one side and economy on the other. This is, however, a much too simplified picture. The action of deploying countermeasures against a certain type of risk may at times bring about other risks, so-called countervailing risks [44]. In engineering, in particular, overdesign clearly has a price in terms of excess usage of energy and other natural resources. In toxicology, protection against possible health effects of pesticides or preservatives may have to be weighed against the positive effects of improved production and conservation of food.

Attempts to alter the current risk situation using uncertainty functions could be interpreted as movements along a risk frontier: a multi-dimensional surface in an orthonormal coordinate system, with the probability of a certain type of failure on each axis. This risk frontier is not absolute; it can be changed, for example by innovations in technology and practices [44]. Therefore, the determination of uncertainty functions should be seen as a process of optimisation. This approach to optimisation is clearly related to the method, proposed by Gayton and others, of optimising the set of partial safety factors in order to achieve a given safety goal efficiently [45].

In a recent contribution, Ruediger Rackwitz pointed out that current safety factors and other acceptance criteria, as laid down in codes, standards and regulations, have been set in a process that may have led to non-optimal results [46]. As an alternative, he proposes direct cost-benefit analysis, based on the simple principle that a technical facility is optimal if and only if it maximizes the term B(p) − C(p) − D(p), where p is the vector of all relevant parameters, B(p) the benefit derived from the facility characterized by this vector, C(p) the costs of design and construction, and D(p) the (statistically expected) cost of failure. Like other versions of risk-benefit analysis, this calculation requires that monetary values be assigned to all outcomes, including deaths, so that an overall value can be calculated for each alternative under consideration. Various methods to convert lives to monetary values have been devised, making use of expected earnings, actual sums paid to save lives, willingness to pay for reduced risks of death, etc. (the method recommended by Rackwitz is a life quality index that is based on the quality-adjusted life years used in some priority-setting practices in medical ethics [47]).
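A toy numerical sketch of this kind of optimisation is given below. The benefit, construction-cost, and failure-probability functions are invented for illustration and are not taken from Rackwitz [46]; the paper's p is a vector, but a single scalar design parameter suffices to show the form of the objective, with D(p) represented as failure probability times a monetized failure cost.

```python
# Toy sketch of choosing a design parameter p to maximize B(p) - C(p) - D(p).
# All functions and constants are hypothetical; they illustrate only the form
# of the objective, not any real design code.

import math

BENEFIT = 1000.0          # benefit of the facility (assumed independent of p)
FAILURE_COST = 50000.0    # monetized consequence of failure (hypothetical)

def construction_cost(p: float) -> float:
    # Stronger designs (larger p) are assumed to cost more to build.
    return 40.0 * p

def failure_probability(p: float) -> float:
    # Failure is assumed to become exponentially less likely as p grows.
    return math.exp(-p)

def objective(p: float) -> float:
    # B(p) - C(p) - D(p), with D(p) = expected failure cost.
    return BENEFIT - construction_cost(p) - FAILURE_COST * failure_probability(p)

# Brute-force search over a grid of candidate designs.
candidates = [i / 10 for i in range(1, 201)]   # p from 0.1 to 20.0
best_p = max(candidates, key=objective)
print(f"optimal p ~ {best_p:.1f}, objective = {objective(best_p):.1f}")
```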
The practice of converting lives into monetary units has obvious advantages in terms of computational convenience, but the normative validity of such procedures is far from clear [48]. It is difficult to defend a 'price' of human lives as anything but a technical necessity. Stuart Hampshire has warned that the explicit assignment of monetary values to lives may encourage 'a coarseness and grossness of moral feeling, a blunting of sensibility, and a suppression of individual discrimination and gentleness' [49]. This is a warning that should not be taken lightly. But it also has to be conceded that in a practice based on safety factors, risks to human lives are weighed against costs, albeit less transparently.

7. Uncertainty functions and the tails of probability functions

As was noted in Section 5, uncertainty functions are used to protect both against variability and against epistemic uncertainty. At first it may be surprising that uncertainty functions are used for the former purpose at all. Why not use probabilities instead, since variabilities are probabilistic? The reason is that we wish to protect against probabilities that are too low to be determinable in empirical experiments. This applies both to structural engineering and to toxicology. Experimental data on material properties are often insufficient to make a distinction possible between e.g. gamma and lognormal distributions, a problem called distribution arbitrariness [50]. This has little effect on the expected values of these properties, but as one moves further towards the distribution tails (e.g. when reliability demands go up), the differences become significant. In some cases, levels of risk may vary by an order of magnitude following slight changes in the basic variables that affect the tails of the distribution. Although the tails cannot in general be accurately determined, statistical methods are available that can be used to determine the uncertainty inherent in tail-based estimates of risk or safety [51]. The size of samples and a careful analysis of the influence of data sampling uncertainties are essential in this context [52].

A similar problem with distribution tails applies in toxicology. Just as in structural engineering, we are concerned with avoiding events with much lower probabilities than those that can be determined in actual experiments (or in epidemiological studies on exposed populations). In standard animal experiments (with typically ≤ 100 animals in each dose group), the doses that give rise to a 10 or 20% increase in cancer incidence can easily be determined, but it is in practice impossible to determine directly what doses give rise to e.g. an increase of 0.01 or 0.1% in that incidence. As stated in Klaassen [53] in a discussion of the dose-response relationship:

The sigmoid curve has a relatively linear portion between 16 and 84 percent. These values represent the limits of 1 standard deviation (SD) of the mean (and the median) in a population with truly normal or Gaussian distribution. However, it is usually not practical to describe the dose-response curve from this type of plot because one does not usually have large enough sample sizes to define the sigmoid curve adequately [emphasis added]
[53, p. 21]. There are also further difficulties connected with the application of results from other species in human risk assessment [53]. In conclusion, the low-dose end of the dose-response curve cannot be empirically determined. The same applies to low doses of radiation [54]. As a result of this, the relation between the size of the uncertainty factor and the reduction in failure probability is unknown in toxicology, just as in structural engineering (see Fig. 3). In the case of an actual linear dose-response relationship, a reduction of the dose gives rise to a proportional reduction of the response (= frequency of the adverse effect). If the actual dose-response curve is sublinear, the reduction will be more than proportional, whereas if the curve is supralinear, the reduction will be less than proportional [55]. (Sublinear dose-response relationships may be due to the existence of a threshold below which no adverse effects occur. Supralinear relationships, on the other hand, can be caused by saturation mechanisms, i.e. systems that become incapacitated and then cease to be affected by further exposure.)

Gaylor and Kodell [39] label two of the commonly used toxicological uncertainty factors as 'risk reduction factors', namely the factors that compensate for intrahuman variability and for the use of a LOAEL instead of a NOAEL. Other uncertainty factors, such as those used for extrapolations from animals to humans and from subchronic to chronic exposure, are not meant to reduce risk but to estimate ratios between doses at similar response levels in different dose-response curves. This is an important distinction, but it is also important to distinguish between the intentions with which an uncertainty factor is used and the effects of using it. An example: a certain dose, say 10 mg/kg body weight, corresponds to a certain probability x of some adverse effect in mice. The very same dose, we assume, corresponds to some probability y of the corresponding adverse effect in humans. Applying an uncertainty factor larger than one to the dose, no matter its purpose, will, if the actual dose-response relationships are strictly increasing in the relevant range, lower the probability of the effect in humans to some y′ < y and for mice to some x′ < x. If the factor used is meant only for interspecies extrapolation and is successfully chosen for that purpose, the result should be that y′ ≈ x. There are now (at least) two ways of looking at this. One is to compare risks for mice to risks for humans. Viewed in this way, this interspecies extrapolation factor is not risk reducing, since the risk for mice before application of the factor is more or less the same as the risk for humans after it was applied. The other is to compare the risks to humans that result when this uncertainty factor is used to the risks that would ensue if no uncertainty factor (or, equivalently, the factor 1) were used. Since y′ < y, the new dose carries lower risk for humans, and the factor is then risk-reducing. Confusion can be avoided by being explicit about which dose-response curve is being discussed.
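The dependence on curve shape can be made numerically explicit. In the sketch below, three hypothetical dose-response models (linear, sublinear with a threshold, and supralinear) are evaluated before and after division of an observed dose by a factor of 100; none of the curves or numbers is taken from real data.

```python
# Sketch: the same uncertainty factor gives different risk reductions depending
# on the (unknown) shape of the dose-response curve. All models and numbers
# are hypothetical illustrations.

def linear(dose: float) -> float:
    return min(1.0, 0.02 * dose)

def sublinear_with_threshold(dose: float) -> float:
    # No response below a threshold of 5 (arbitrary units).
    return 0.0 if dose <= 5.0 else min(1.0, 0.02 * (dose - 5.0))

def supralinear(dose: float) -> float:
    # Steep at low doses, flattening at higher doses (saturation-like).
    return min(1.0, 0.2 * dose ** 0.5)

observed_dose = 10.0        # dose at which a response was observed
uncertainty_factor = 100.0
limit = observed_dose / uncertainty_factor

for curve in (linear, sublinear_with_threshold, supralinear):
    before, after = curve(observed_dose), curve(limit)
    print(f"{curve.__name__:25s} response {before:.3f} -> {after:.3f}")
# Linear: a 100-fold dose reduction gives a 100-fold (proportional) response
# reduction; the threshold model gives more than proportional reduction (to
# zero); the supralinear model gives much less than proportional reduction.
```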
Uncertainty factors are used in toxicology precisely because the shape of the dose-response curve is not known in the low-dose range. Since one and the same uncertainty factor is used for different substances, which may have differently shaped dose-response curves in the unobservable (near-zero) interval, division of the no-effect dose by the uncertainty factor may give rise to different reductions in toxic response frequencies for different substances.

Fig. 3. A toxicological data set (data points with 95% confidence intervals) is often consistent with several dose-response relationships, e.g. sublinear, linear, and supralinear. Depending on the actual nature of the relationship, acting on e.g. linear assumptions in the low-dose area may bring about risk situations different from those anticipated.

8. Can probabilities replace uncertainty functions?

In both toxicology and structural engineering, attempts have been made to replace uncertainty factors by probabilistic calculations. Hence, in toxicology, an alternative to the use of uncertainty factors is to calculate, based on an extrapolated dose-response curve, a 'virtually safe dose', i.e. a dose considered to be associated with a sufficiently low probability of toxic effects [55,56]. In cancer risk assessment, this is the dominant approach, whereas the uncertainty factor approach still dominates in the assessment of non-cancer risks. A dose-response relationship can be represented either as a distribution function or as a density function (see Fig. 4).
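Before turning to these representations, the 'virtually safe dose' idea can be illustrated with a deliberately simplified linear low-dose extrapolation. The observed dose-response point and the target risk level below are hypothetical, and actual regulatory models [55,56] are considerably more elaborate.

```python
# Sketch of a 'virtually safe dose' obtained by linear extrapolation from an
# observed response, as an alternative to dividing by an uncertainty factor.
# The observed data point and the target risk are hypothetical.

observed_dose = 10.0          # mg/kg/day, lowest dose with a measurable effect
observed_extra_risk = 0.10    # 10% extra incidence observed at that dose
target_extra_risk = 1e-6      # risk level deemed "virtually safe" by the regulator

# Assume the dose-response curve is linear through the origin at low doses.
slope = observed_extra_risk / observed_dose          # extra risk per mg/kg/day
virtually_safe_dose = target_extra_risk / slope

print(f"virtually safe dose ~ {virtually_safe_dose:.2e} mg/kg/day")  # 1.00e-04
```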
Fig. 4. Left: a schematic density (frequency) function, fE, for a certain toxicological effect E in a certain group G. Right: the distribution function, FE, for E in G, i.e. the cumulative value of the integral of fE. The dose-response curve that results from a toxicological experiment is proportional to FE for the examined effect. The diagram shows a case commonly referred to in toxicology, namely one in which there is no dose threshold beneath which a substance does not give rise to the examined effect.

Fig. 5. Analogous to those in Fig. 4, the curves represent a schematic density function, fC, and distribution function, FC, of the tension at which a critical state C is reached for a certain type of structural element L, here with a threshold. In structural mechanics, thresholds are often implicitly assumed through the use of deterministic methods for calculating failure conditions.
In actual toxicological practice, distribution functions are the standard form of representation, and are called dose-response curves. Toxicological density functions are rarely discussed; when they are, they tend to be expressed as lognormal distributions [53]. Similarly, in structural engineering, both density functions and distribution functions can be used to express the relationship between loads and failure states (see Fig. 5). Here, in contrast to toxicology, density functions are standardly used to describe the probabilistic properties of various failure types.

In some branches of structural engineering, probabilistic methods are used either directly or to calibrate a system of uncertainty factors or other simplified measures of safety. It is debated to what extent probabilistic methods constitute an improvement. As with uncertainty functions, probabilistic methods do not provide any absolute measures of safety. A probabilistic statement always depends on the current state of knowledge, so it cannot be expected to be meaningful to compare probabilistic failure statements between different types of structures and different design practices. However, as was noted above, uncertainty functions are used to protect not only against variabilities, but also against uncertainties such as human error, errors in the model, etc. In discussions of the substitutability of uncertainty functions by probabilistic analysis, it is important to distinguish between these two types of sources of failure.

In the case of variabilities it is, at least in principle, possible to replace uncertainty functions by probabilities whenever the relevant probabilities can be determined. Hence, knowledge of the differences in variability between the different materials used by engineers has given rise to material-dependent uncertainty factors. In principle, knowledge about actually occurring load distributions can be used in a similar manner to determine the use of uncertainty factors to protect against unusual overloads. Similarly, in toxicology, the intraspecies (human) variability in sensitivity to toxic chemicals has been estimated, based on statistics from substances for which such information is available, and similar calculations have been made for interspecies variability [4,57–63].

However, it is not difficult to find cases in which the lack of statistical information makes the probabilistic analysis of
variabilities impossible. 'Theoretically, design by using structural system reliability is much more reasonable than that based on the safety factor. However, because of the lack of statistical data from the strength of materials used and the applied loads, design concepts based on the safety factor will still dominate for a period' [64]. One possible approach is to factor out, as far as possible, the variabilities for which empirical information is available, and perform a probabilistic analysis of them. In other words, whatever can reasonably be treated as probabilities should be treated so. There will in most cases also be a need for an additional component to deal with the residual uncertainty (both unknown probabilities and non-probabilistic uncertainty), or as Knoll [15] called it, a 'basic safety margin'. The basic safety margin may either be introduced on the same level as the variabilities, as an 'extra' variability, or it may be applied to the probability itself.
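The 'distribution arbitrariness' problem of Section 7 is one concrete reason why such probabilistic substitution is difficult. The sketch below fits a gamma and a lognormal model to the same hypothetical mean and standard deviation of a material strength and compares the lower-tail probabilities they assign to two candidate design values; all numbers are invented for illustration.

```python
# Sketch of 'distribution arbitrariness': two models fitted to the same mean and
# standard deviation of a material strength can assign clearly different
# probabilities to the lower tail. All numbers are hypothetical.

import math
from scipy import stats

mean, std = 400.0, 40.0          # hypothetical strength summary (e.g. MPa)
var = std ** 2

# Gamma model with matching mean and variance: k = mean^2/var, theta = var/mean.
gamma_model = stats.gamma(a=mean ** 2 / var, scale=var / mean)

# Lognormal model with matching mean and variance.
sigma2 = math.log(1.0 + var / mean ** 2)
lognorm_model = stats.lognorm(s=math.sqrt(sigma2),
                              scale=math.exp(math.log(mean) - sigma2 / 2.0))

for design_value in (260.0, 240.0):   # candidate design strengths deep in the tail
    p_gamma = gamma_model.cdf(design_value)
    p_lognorm = lognorm_model.cdf(design_value)
    print(f"P(strength < {design_value:.0f}): "
          f"gamma {p_gamma:.1e}, lognormal {p_lognorm:.1e}")
# Both models reproduce the assumed mean and spread, yet the implied tail
# probabilities at these design values differ several-fold.
```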
9. Conclusions

In summary, we have found that the two disciplines that make most use of uncertainty functions (safety factors), namely structural engineering and toxicology, have much more in common than has usually been realized. In particular, they both use uncertainty functions to cope with known and unknown (probabilistic) variability and with non-probabilistic uncertainty.

As practice stands today, there are several issues worth investigating with regard to the use of uncertainty functions. Professionals in various areas are forced to choose uncertainty functions and their parameters in the face of incomplete information. At this level of description, the problem situation appears to be common to all of the areas that make use of uncertainty functions of some kind. What would constitute a satisfactory justification of the parameters in uncertainty functions remains to be clarified. We believe, however, that a careful distinction between probabilistic variability and non-probabilistic (epistemic) uncertainty should be an important element in the development of rational methods for uncertainty management.
Acknowledgements

This work was financially supported by the Swedish Research Council. The authors would like to thank Christina Rudén, Per Wikman and three anonymous referees for valuable comments on earlier versions of this paper.

References

[1] Randall FA. The safety factor of structures in history. Prof Saf 1976;January:12–28.
[2] Calabrese EJ. Uncertainty factors for children: historical precedent. Hum Ecol Risk Assess 2000;6:729–30.
[3] Chapman PM, Fairbrother A, Brown D. A critical evaluation of safety (uncertainty) factors for ecological risk assessment. Environ Toxicol Chem 1998;17:99–108.
[4] Dourson ML, Stara JF. Regulatory history and experimental support of uncertainty (safety) factors. Regul Toxicol Pharmacol 1983;3(3):224–38.
[5] Weil CS. Statistics vs safety factors and scientific judgment in the evaluation of safety for man. Toxicol Appl Pharmacol 1972;21:454–63.
[6] SCOEL. Methodology for the derivation of occupational exposure limits: key documentation. European Commission EUR 19253 EN. EU Publications Office; 1999.
[7] CSTEE. Position paper on margins of safety (MOS) in human health risk assessment expressed at the 22nd CSTEE plenary meeting, Brussels, 06/07 March 2001. http://europa.eu.int/comm/food/fs/sc/sct/out110_en.html, 2002-07-10.
[8] Mitroff II. On the social psychology of the safety factor: a case study in the sociology of engineering science. Manage Sci 1972;18:454–69.
[9] Capps RW, Thompson JR. Statistical safety factors reduce overdesign. Hydrocarbon Process 1993;72:77–8.
[10] Kawaguchi N. New method of evaluating the surgical margin and safety margin for musculoskeletal sarcoma, analysed on the basis of 457 surgical cases. J Cancer Res Clin Oncol 1995;121:555–63.
[11] O'Connor TP, Diamond J. Ontogeny of intestinal safety factors: lactase capacities and lactose loads. Am J Physiol Regul Integr Comp Physiol 1999;45:R753–R65.
[12] Wolff TF. Embankment reliability versus factor of safety: before and after slide repair. Int J Numer Anal Methods Geomech 1991;15:41–50.
[13] Toloza EM, Lam M, Diamond J. Nutrient extraction of cold-exposed mice—a test of digestive safety margins. Am J Physiol 1991;261(4):G608–G620.
[14] Duncan JM. Factors of safety and reliability in geotechnical engineering. J Geotech Geoenviron Eng 2000;126:307–16.
[15] Knoll F. Commentary on the basic philosophy and recent development of safety margins. Can J Civil Eng 1976;3:409–16.
[16] Soares RC, Mohamed A, Venturini WS, Lemaire M. Reliability analysis of non-linear reinforced concrete frames using the response surface method. Reliab Eng Syst Saf 2002;75(1):1–16.
[17] Rackwitz R. Optimization—the basis of code-making and reliability verification. Struct Saf 2000;22:27–60.
[18] Gessner PK. Isobolographic analysis of interactions: an update on applications and utility. Toxicology 1995;105:161–79.
[19] Wootton AJ, Wiley JC, Edmonds PH, Ross DW. Compact tokamak reactors. Nucl Fusion 1997;37(7):927–37.
[20] Elias D, et al. Results of 136 curative hepatectomies with a safety margin of less than 10 mm for colorectal metastasis. J Surg Oncol 1998;69:88–93.
[21] Stroom J. Safety margins for geometrical uncertainties in radiotherapy. Med Phys 2000;27(9):2194.
[22] Smith K, Hancock PA. Situation awareness is adaptive, externally directed consciousness. Hum Factors 1995;37:137–48.
[23] McKenna JT. ATC, political pressures squeeze safety margins. Aviat Week Space Technol 1999;151(5):44–5.
[24] Hansson SO. Setting the limit—occupational health standards and the limits of science. Oxford: Oxford University Press; 1998.
[25] van der Hulst M, et al. Anticipation and the adaptive control of safety margins in driving. Ergonomics 1999;42:336–45.
[26] Alexander RM. Animals. Cambridge: Cambridge University Press; 1990.
[27] Rubin C, Lanyon L. Limb mechanics as a function of speed and gait. J Exp Biol 1982;101:187–211.
[28] Palmer AR, Taylor GM, Barton A. Cuticle strength and the size-dependence of safety factors in Cancer crab claws. Biol Bull 1999;196(3):281–94.
[29] Lowell RB. Selection for increased safety factors of biological structures as environmental unpredictability increases. Science 1985;228:1009–11.
[30] Mattheck C, Bethge K, Schäfer J. Safety factors in trees. J Theor Biol 1993;165(2):185–9.
[31] Harrison JF, State AZ. Safety margins and the hypoxia sensitivity of insects. FASEB J 1998;12/4/1:S994.
[32] Harris JD. Combinations of distortion in speech. AMA Arch Otolaryngol 1959;72:227–32.
[33] Knight FH. Risk, uncertainty and profit. London School of Economics and Political Science; 1921, 1933.
[34] Luce RD, Raiffa H. Games and decisions. New York: Wiley; 1957.
[35] Der Kiureghian A. Measures of structural safety under imperfect states of knowledge. J Struct Eng 1989;115(5):1119–40.
[36] Rechard RP. Historical relationship between performance assessment for radioactive waste disposal and other types of risk assessment. Risk Anal 1999;19(5):763–807.
[37] Hansson SO. What is philosophy of risk? Theoria 1996;62:169–86.
[38] Moses F. Problems and prospects of reliability-based optimisation. Eng Struct 1997;19:293–301.
[39] Gaylor DW, Kodell RL. A procedure for developing risk-based reference doses. Regul Toxicol Pharmacol 2002;35:137–41.
[40] Allanou R, Hansen BG, van der Bilt Y. Public availability of data on EU high production volume chemicals. European Commission EUR 189 EN. Ispra, Italy: Institute for Health and Consumer Protection; 1999.
[41] Gaylor DW, Kodell RL. Percentiles of the product of uncertainty factors for establishing probabilistic reference doses. Risk Anal 2000;20(2):245–50.
[42] Dourson ML, Felter SP, Robinson D. Evolution of science-based uncertainty factors in noncancer risk assessment. Regul Toxicol Pharmacol 1996;24:108–20.
[43] Reisa JJ. Margins of safety in the assessment of aquatic hazards of chemicals—some regulatory viewpoints. Aquatic toxicology and hazard assessment. Proceedings of the fourth annual symposium on aquatic toxicology. vol. 737. ASTM Special Technical Publication; 1981. p. 14–27.
[44] Graham J, Wiener J. Risk versus risk. Cambridge, MA: Harvard University Press; 1995.
[45] Gayton N, Mohamed A, Sorensen JD, Pendola M, Lemaire M. Calibration methods for reliability-based design codes. Struct Saf 2004;26:91–121.
[46] Rackwitz R. Optimal and acceptable technical facilities involving risks. Risk Anal 2004;24(3):675–95.
[47] Nord E. Cost-value analysis in health care: making sense out of QALYs. Cambridge: Cambridge University Press; 1999.
[48] Mishan EJ. Consistency in the valuation of life: a wild goose chase? In: Frankel Paul E, Miller Jr FD, Paul J, editors. Ethics and economics. Oxford: Basil Blackwell; 1985.
[49] Hampshire S. Morality and pessimism. Cambridge: Cambridge University Press; 1972.
[50] Ditlevsen O. Distribution arbitrariness in structural reliability. In: Schuëller G, Shinozuka M, Yao J, editors. Proceedings of ICOSSAR'93: structural safety and reliability; 1994. p. 1241–7.
[51] Caers J, Maes MA. Identifying tails, bounds and end-points of random variables. Struct Saf 1998;20:1–23.
[52] Pendola M, et al. The influence of data sampling uncertainties in reliability analysis. In: Melchers RE, Stewart MG, editors. ICASP8—applications of statistics and probability in civil engineering, vol. 2. Rotterdam: Balkema; 2000.
[53] Klaassen CD. Casarett & Doull's toxicology—the basic science of poisons. NY, USA: McGraw-Hill; 1996.
[54] Doll R. Effects of small doses of ionizing radiation. J Radiol Prot 1998;18(3):163–74.
[55] Gaylor DW. The use of safety factors for controlling risk. J Toxicol Environ Health 1983;11:329–36.
[56] Krewski D, Brown C, Murdoch D. Determining 'safe' levels of exposure: safety factors or mathematical models? Fundam Appl Toxicol 1984;4:S383–S94.
[57] Hansen C. The use of environmental safety standards in Denmark. Brighton crop protection conference—pests & diseases. vol. 2; 1996. p. 537–48.
[58] Allen BC, Crump KS, Shipp AM. Correlation between carcinogenic potency of chemicals in animals and humans. Risk Anal 1988;8:531–44.
[59] Crouch E, Wilson R. Interspecies comparison of carcinogenic potency. J Toxicol Environ Health 1979;5:1095–118.
[60] Dedrick RL, Morrison PF. Carcinogenic potency of alkylating agents in rodents and humans. Cancer Res 1992;52:2464–7.
[61] Goodman G, Wilson R. Predicting the carcinogenicity of chemicals in humans from rodent bioassay data. Environ Health Perspect 1991;94:195–218.
[62] Tollefson L, Lorentzen RJ, Brown R, Springer JA. Comparison of the cancer risk of methylene chloride predicted from animal bioassay data with the epidemiological evidence. Risk Anal 1990;10:429–35.
[63] Kodell RL, Gaylor DW. Uncertainty of estimates of cancer risks derived by extrapolation from high to low doses and from animals to humans. Int J Toxicol 1997;16:449–60.
[64] Zhu TL. A reliability-based safety factor for aircraft composite structures. Comput Struct 1993;48:745–8.