Reliability and vulnerability analyses of critical infrastructures: Comparing two approaches in the context of power systems

Jonas Johansson (a,c), Henrik Hassel (b,c), Enrico Zio (d,e)

a Department of Industrial Electrical Engineering and Automation, Lund University, Box 118, SE-221 00 Lund, Sweden
b Department of Fire Safety Engineering and Systems Safety, Lund University, Box 118, SE-221 00 Lund, Sweden
c Lund University Centre for Risk Analysis and Management (LUCRAM), Sweden
d Chair on Systems Science and the Energetic Challenge, European Foundation for New Energy-Electricité de France, at École Centrale Paris & Supélec, France
e Dipartimento di Energia, Politecnico di Milano, Milano, Italy

Available online 21 March 2013

Abstract
Society depends on services provided by critical infrastructures, and hence it is important that they are reliable and robust. Two main approaches for gaining the knowledge required for designing and improving critical infrastructures are reliability analysis and vulnerability analysis. The former analyses the ability of the system to perform its intended function; the latter analyses its inability to withstand strains and the effects of the consequent failures. The two approaches have similarities, but also differences with respect to the type of information they generate about the system. In this view, the main purpose of this paper is to discuss and contrast these approaches. To strengthen the discussion and exemplify its findings, a Monte Carlo-based reliability analysis and a vulnerability analysis are applied to a relatively simple, but representative, system: the IEEE RTS96 electric power test system. The exemplification reveals that reliability analysis provides a good picture of the system's likely behaviour, but fails to capture a large portion of the high-consequence scenarios, which are instead captured in the vulnerability analysis. Although these scenarios might be estimated to have small probabilities of occurrence, they should be identified, considered and treated cautiously, as probabilistic analyses should not be the only input to decision-making for the design and protection of critical infrastructures. The general conclusion drawn from the example is that vulnerability analysis should be used to complement reliability studies, as well as other forms of probabilistic risk analysis. Measures should be sought both for reducing the vulnerability, i.e. improving the system's ability to withstand strains and stresses, and for improving the reliability, i.e. the likely behaviour.
Keywords: Reliability analysis; Vulnerability analysis; Risk management; Critical infrastructures; Power systems
1. Introduction

Continuous supply of critical infrastructure services, such as electric power, water, information and transportation, is essential for people, for public and private organizations, and for the security and economy of society as a whole [1]. The importance of critical infrastructures has been demonstrated in numerous infrastructure breakdowns, for example the U.S. blackout in 2003, Hurricane Katrina in 2005, and the storms Gudrun and Per in Sweden in 2005 and 2007, respectively [2]. Critical infrastructures have undergone, and are currently undergoing, large changes. They are becoming more dependent
and interdependent on each other [3]. In addition, they are increasingly connected across geographical borders and thus become ever more large-scale. These trends make critical infrastructures more efficient, but at the same time more complex and more vulnerable, and the potential for large-scale disruptions increases. These aspects, in combination with the extensive societal dependence on critical infrastructures, stress the importance of systematically managing risks and vulnerabilities.

The traditional risk management approach has been the prevailing one in ensuring the continuity of services provided by critical infrastructures. Here, the traditional risk management approach is seen as encompassing the identification of hazards and threats that can affect the system, the estimation of the probabilities of various risk scenarios and their negative consequences, and the mitigation of the risks. Risk mitigation is often implemented as protection of the system from hazards and threats to a level of risk deemed acceptable or tolerable. In this paper, we argue that the
traditional risk management approach needs to be complemented with a vulnerability management approach, here seen as including the evaluation of the ability of the system to withstand strains and the mitigation of identified vulnerabilities through system changes. Such an approach can compensate for inherent limitations of the traditional risk management approach. These viewpoints are further elaborated throughout the paper.

The cornerstone of any effort to manage the risks and/or vulnerabilities of critical infrastructure systems is good knowledge and understanding of the function, operation, capacity, and limitations of the systems. Such knowledge can then be used as guidance towards improvements. In the traditional risk management approach for critical infrastructures, quantitative risk and reliability analysis has been the main approach for acquiring knowledge about the systems of interest. Here, reliability analysis can be seen as part of Quantitative Risk Assessment (QRA), providing the probabilistic input to the risk assessments, i.e. estimating the probabilities of various failure scenarios [4–6]. However, in the risk and reliability management of critical infrastructures, e.g. electric power systems or water supply systems, the concepts are often treated synonymously, with reliability analysis often also including the estimation of negative consequences. Consider, for example, the commonly used reliability indices in the electric power system area (e.g. EDNS and EENS – see Section 5 for an explanation of these concepts), which aggregate information about both the frequency and the severity of failures. This paper focuses on reliability assessment, since it is traditionally the most commonly used concept in the area of critical infrastructures; however, much of the discussion is valid for risk assessment as well.

Reliability as a concept has been used in the context of engineering systems for more than 60 years [7]. A frequently used definition of reliability, adopted here, is the probability (or, more generally, the ability) of a system, sub-system or component "to perform a required function, under given environmental and operational conditions and for a stated period of time" [4–6]; similar definitions can be found in, for example, Allan and Billinton [8] and Murray and Grubesic [9]. For critical infrastructure systems, reliability thus refers to the ability of the system to provide its services to its customers (e.g. to supply electric power or to enable the transport of people and goods on roads). In the area of power systems, in which the example system considered in this paper falls, the concept of reliability is often operationalized in terms of the reliability indices mentioned in the previous paragraph.

Quantitative risk and reliability assessments both emphasise the importance of estimating the probabilities of failures, which are then used, along with estimations of negative consequences, to inform risk management decisions. However, many express criticism towards relying too heavily on quantitative probability estimations when making decisions; see e.g. [10–12]. It is claimed that the estimations may be poor because they are based on insufficient knowledge and inappropriate assumptions (e.g. event independence). In addition, "surprises" may occur, e.g. due to unknown failure mechanisms, or calculations may simply be wrong [10,13–15].
This is especially argued for situations of large complexity and uncertainty, which definitely are characteristics of critical infrastructures [7,16,17]. It is argued that in such situations one must also look beyond the estimated probabilities, and that risk reduction needs to be designed based on principles such as robustness, resilience, flexibility, diversification and defence-in-depth, as well as by adding an extra safety margin [17,18] – i.e. from a vulnerability perspective. These reduction measures especially need to address the low-probability, often high-consequence,
events, since it is the tails of the probability distributions that are most difficult to estimate accurately [19].

Another approach to acquiring knowledge for understanding and improving critical infrastructures is vulnerability analysis, which has been given increased attention in the research community during the last decade. Vulnerability is a term used with somewhat different denotations in the scientific literature [20]. In the present context, vulnerability is defined as the inability of a system to withstand strains and the effects of failures, i.e. to absorb the strain and/or to restore the system quickly to full functionality. Haimes has a similar view and defines vulnerability as "the manifestation of the inherent states of the system that can be exploited to adversely affect that system" [21] – stressing that vulnerability concerns the inherent characteristics of a system rather than the environment in which the system is situated. In vulnerability analysis, the role of the probabilities of failures, threats and hazardous events is less emphasised: the focus is not on estimating these probabilities but rather on systematically exploring the effects of failures and strains, in order to identify system weaknesses that may be exploited by some, perhaps unknown or previously unimagined, threats or hazards. Later in the paper, three different perspectives on vulnerability analysis are discussed; these perspectives constitute ways of operationalizing the concept in the context of critical infrastructures.

Reliability and vulnerability analyses of critical infrastructures have similarities, but also differences with respect to the type of information they generate about the systems. Few papers exist in which the two approaches are discussed and contrasted in parallel, e.g. [22], and systematic comparisons aimed at finding their specific strengths and weaknesses – and, perhaps more importantly, how the two approaches can complement each other – are lacking. This paper attempts to pragmatically address this lack of comparative studies by analysing a simple, but representative, example of a critical infrastructure using the two approaches. The overall aim is to compare and discuss reliability analysis and vulnerability analysis of critical infrastructures, exemplifying the types of results on a numerical example of a test system and showing how these analyses can provide complementary information and knowledge. The test system selected for the study is an electric power system, the IEEE reliability test system [23], chosen because of its wide use as a representative case in the scientific literature. The two types of analyses of the test system are delineated in accordance with their fundamental characteristics and how they are performed within their respective fields, in order to clarify the typical results they achieve. This leads to a discussion of how these types of analyses and their results can be used to guide decisions in the wider context of the management of critical infrastructures, providing the foundation for establishing how reliability analysis and vulnerability analysis can be combined to help understand the behaviour and limitations of a system.
2. Reliability and vulnerability analyses of critical infrastructures

2.1. Reliability analysis

Reliability analysis is commonly used in the context of critical infrastructures; see [9] for an overview, [24] for an application to gas networks, and [25] for an application to water supply systems. Although the exact procedures may vary between different infrastructures, the main underlying principles are the same.
The goal of reliability analysis is to obtain a picture of a system's likely behaviour, for example by calculating different types of reliability indices. These indices are either system indices, i.e. aggregated measures providing reliability information about the system as a whole, or load-point indices, describing reliability characteristics at specific points in the system, e.g. where customers are connected in the case of electric power systems; see [8] for an overview. Note that these indices often, but not always, aggregate information about the frequency of failure scenarios and their negative consequences, which means that the indices could also, perhaps more accurately, be described as "risk indices" – with risk defined as a function of both probability and consequence [26]. In addition, it is also possible to estimate "component importance" indices, i.e. measures of the contributions of the components to the system unreliability. Such information can be useful for improving a system's likely performance; see [6,27–31] for an overview of importance measures.

Two main approaches exist for conducting reliability analysis of electric power systems: analytical approaches and simulation [8,32]. In analytical approaches, mathematical equations and models, e.g. block diagrams or fault trees, are used to derive reliability indices. Analytical approaches often require approximations and simplifications when analysing complex systems [8] and are not further considered here. In simulation, the system states to evaluate are selected by contingency enumeration or Monte Carlo sampling, and a numerical model of the system is used to calculate the consequences of the sampled states. In the contingency enumeration approach, all possible combinations of failures up to a specified order (e.g. N-2), or down to a specified failure frequency limit, are evaluated using a model of the system, and the reliability indices are aggregated. Monte Carlo simulation samples the state space of the system of interest based on the component failure probability distributions or the probability distributions of state transitions; the sampled states are evaluated through a model to determine the consequences. For the reliability analysis of power systems, Monte Carlo simulation is frequently used for its flexibility and capability of handling high-order contingencies; it is the approach used in this work for the reliability analysis of the test system.

There are two main techniques for Monte Carlo reliability analysis of power systems: sequential and non-sequential [33]. In the sequential approach, the goal is to simulate the actual stochastic behaviour of the system over time, thus creating "an artificial history" [34]. This is done by dividing the simulation into small time steps (e.g. an hour) and considering, for each time step, whether some contingency occurs (a single failure, two overlapping failures, etc.). Reliability indices are calculated by aggregating the outage characteristics over a whole year. In non-sequential approaches (e.g. state sampling techniques, see e.g. [35]), the states of all components are sampled, leading to a non-chronological state of the system. Both approaches have advantages and drawbacks: the sequential approach is more fundamental, flexible and accurate, but it also gives rise to much longer simulation times [35]. For analysing the test system considered in this paper, the sequential approach is used.
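To make the sequential sampling concrete, the following minimal sketch (our illustration under assumed exponential failure and repair times, not the authors' implementation) generates the outage history of a single repairable component over one simulated year:

```python
# Sketch: sequential sampling of one repairable component's history.
# Assumes exponentially distributed times to failure and to repair.
import random

def artificial_history(failures_per_year, mttr_h, horizon_h=8760, seed=1):
    """Return a list of (start_h, end_h) outage intervals over one year."""
    rng = random.Random(seed)
    t, outages = 0.0, []
    failure_rate_per_h = failures_per_year / 8760.0
    while t < horizon_h:
        t += rng.expovariate(failure_rate_per_h)   # time to next failure
        if t >= horizon_h:
            break
        repair_h = rng.expovariate(1.0 / mttr_h)   # sampled repair duration
        outages.append((t, min(t + repair_h, horizon_h)))
        t += repair_h
    return outages

# Example: a component failing on average 0.4 times/year, 10 h mean repair.
print(artificial_history(0.4, 10.0))
```

In a full analysis, such histories are generated for all components, superposed hour by hour, and each resulting system state is evaluated with the functional model.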
Reliability analysis of electric power systems is often divided into two categories: adequacy-related and security-related [36]. Adequacy addresses the extent to which generation, transmission and distribution facilities exist to satisfy customer demands. Security, on the other hand, focuses on the transitions between different states, in terms of the ability of the system to avoid instabilities induced by sudden changes [37]. In the present paper, we focus on adequacy-related reliability, as this is the most commonly addressed type of reliability analysis. For comparability, the vulnerability analysis performed will also be adequacy-related.
2.2. Vulnerability analysis

In general, vulnerability analysis aims at estimating the magnitude of the negative consequences that arise given that a strain is imposed on the system. It is an emerging approach within the management of critical infrastructures, and its implementation takes different forms in the research literature; see e.g. [38,39] for overviews and [40,41] for applications to interdependent critical infrastructures.

A strain can be defined in different ways: either concretely, e.g. an earthquake of a given magnitude or a hurricane with wind speeds of a certain magnitude, or in more abstract terms, e.g. a random removal of components in an infrastructure system or specific combinations of component failures (e.g. N-1, N-2, N-3, etc.). Defining a strain concretely requires that the hazard from which the strain stems is known and that some knowledge about the characteristics of the hazard exists. It is then possible to couple the hazard to its effect on the system components through the use of so-called fragility curves [42]. However, to analyse the system response to hazards that may not be well known, or are even unknown, more abstractly or generically defined strains are necessary to provide a general picture of the system's vulnerability. In vulnerability analysis, the probabilities or frequencies of the strains affecting the system are not addressed or accounted for explicitly; instead, the aim is to systematically and comprehensively estimate the consequences associated with the possible states a system can be in due to various types of strains.

A number of different perspectives on vulnerability analysis can be adopted in the study of the ability of a system to withstand strains and stresses. These perspectives provide different but complementary insights regarding system vulnerability. Two important perspectives, termed global vulnerability analysis [43,44] and critical component analysis [40,45], are considered here. The authors have suggested a third perspective, geographical vulnerability analysis [41], addressing spatially oriented vulnerabilities; it is not further described or exemplified here.

Global vulnerability analysis is carried out by exposing a system to strains of increasing magnitude and estimating the consequences that arise. Different types of strains are imposed on the system, e.g. by removing an increasing number of components or changing the loading of the system, and the associated consequences are estimated using a model of the system response. As the magnitude of the strain increases, the performance of the system degrades: rather slowly if the system is robust, rather quickly if the system is vulnerable. More in-depth descriptions of global vulnerability analysis are given in [43,44].

Critical component analysis focuses on identifying components that contribute to the system vulnerability. A component, or set of components, is seen as critical if it is essential for the system function [45]. The analysis uses the estimated consequences of single or multiple component failures to identify components, or combinations of components, that give rise to the largest negative consequences if they fail. The analysis is made exhaustively up to a given number of simultaneous failures, since the combinatorial explosion puts an upper limit on the magnitude of the strain, i.e.
the number of simultaneously removed components, that can feasibly be considered. This gives, in some sense, a complete picture of the system vulnerability up to the limited number of simultaneous failures considered. This should be contrasted with global vulnerability analysis, where the aim is a representative picture of the system vulnerability for all magnitudes of strain, and hence only a small sample of all possible system states at each magnitude of strain can be evaluated.
See e.g. [40,45,46] for more thorough descriptions of critical component analysis.
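As a minimal sketch of the enumeration underlying critical component analysis (our illustration; `consequence` is a hypothetical stand-in for a functional model such as the one described in Section 3), all failure combinations up to order k_max are evaluated and ranked:

```python
# Sketch: exhaustive enumeration of failure combinations up to order k_max,
# ranked by the consequence they cause. The combinatorial explosion in
# combinations(components, k) is what bounds the feasible strain order.
from itertools import combinations

def critical_combinations(components, consequence, k_max=2):
    """Return (consequence, removed) pairs sorted from worst to least severe."""
    results = []
    for k in range(1, k_max + 1):
        for removed in combinations(components, k):
            results.append((consequence(frozenset(removed)), removed))
    results.sort(key=lambda r: r[0], reverse=True)
    return results

# Toy usage: a dummy consequence that just counts removed components.
print(critical_combinations(["G1", "G2", "L1"], len, k_max=2)[:3])
```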
3. Modelling critical infrastructures

Reliability and vulnerability analyses require that the critical infrastructure of interest is modelled. In this paper, a previously developed modelling approach is used (see [47] for a detailed description). The approach distinguishes between a structural and a functional model and is inspired by the fields of network theory and systems engineering (Fig. 1). The structural model of a system is an abstract representation of physical objects in terms of nodes and edges. Nodes represent components and junctions of the modelled system, for example a busbar in a power system or a switch in a telecommunication system. Edges represent how the system components and junctions are connected, for example by power lines in power systems or optical fibres in telecommunication systems.

Fig. 1. Modelling approach used for modelling an infrastructure, with a structural and a functional part. The structural model is exemplified with a simple network consisting of nodes and edges. The functional model is exemplified with – but not restricted to – a first-order linear differential equation.

The functional model depicts the response of the system when it is exposed to strains, i.e. it accounts for the physical flow in the system and calculates the consequences. The functional model can be engineering-oriented, such as load flow models for power systems or pressure models for water supply systems, or more abstract, for faster calculations or for assessing the system from different viewpoints, such as the connectivity-based models used within the field of network theory.

Two types of strains can be applied: structural and functional. Structural strains affect the structural properties of the system, in terms of removal of nodes and/or edges. Functional strains affect physical properties of the system, e.g. increased loading. Only structural strains are considered in the present paper. The modelling approach can also describe the cascading of failures, i.e. capture functional properties of the system that affect structural properties (see Fig. 1). An example is the overloading of power lines, which are disconnected, leading to the overloading of other power lines, which in turn are disconnected, and so on in a cascading manner. However, for simplicity of illustration, cascading failures are not accounted for in the present paper and hence not further elaborated.

There are many ways of describing and modelling the function and behaviour of critical infrastructures, depending, for example, on the level of detail and on the extent to which physical and dynamic aspects of the system function are to be captured. Advanced models, as used in systems engineering approaches, are superior in capturing the physical behaviour of the system, but they usually require unfeasible computational times for the type of analyses carried out here. On the contrary, abstracted models, e.g. purely connectivity-based/topological models as used within the field of network theory, are computationally very fast but may not capture the relevant behaviour of the system; see e.g. [48–50]. For electric power system analysis, two common functional models in engineering approaches are DC load flow and AC load flow models.
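To make the structural/functional distinction concrete, the following toy sketch (assuming the networkx library; all component names and attribute keys are our own choices, not from the paper) encodes a small structural model and applies a structural strain:

```python
# Sketch: a toy structural model (nodes = components/junctions,
# edges = connections) and a structural strain (node removal).
import networkx as nx

G = nx.Graph()
G.add_node("G1", kind="generator", capacity_mw=400)
G.add_node("B1", kind="busbar")
G.add_node("LP1", kind="load", demand_mw=100)
G.add_edge("G1", "B1", capacity_mw=500)    # a power line
G.add_edge("B1", "LP1", capacity_mw=200)   # a power line

strained = G.copy()
strained.remove_node("B1")                 # structural strain on the busbar
print(nx.has_path(strained, "G1", "LP1"))  # -> False: load-point isolated
```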
Fig. 2. The IEEE RTS96. Circles are generators and triangles are load-points; double-circle symbols are transformers and grey lines are used for reasons of picture clarity.

4. Description of test system

In this paper, a power transmission system, the IEEE RTS96 [23], is used as representative of a critical infrastructure. The system is chosen since it is well documented and numerous reliability studies of it have been carried out in the research literature. The IEEE RTS96 is a bulk power transmission system (230 and 138 kV), including generation, transmission, and loads. It comes in three different system sizes: the one-area, two-area and three-area systems. The two- and three-area systems are extended, mirrored copies of the one-area system, with different types of interconnections between the mirrored subsystems. Here, the one-area system is analysed (Fig. 2). The IEEE RTS96 is described in [23] with rather extensive data and system parameters, such as generation reliability and capacity, transmission system reliability and capacity, and load curves with respect to both yearly and daily variations; for detailed information on the test system, see [23]. Here, the annualized peak power demand of all loads, 2850 MW, is used; hence, annual and daily fluctuations of loads are not addressed. The aggregated generation capacity is 3405 MW. The 24-h emergency power rating of the lines is used as the line capacity.

Analyses of two different system setups are performed. In Setup 1, we use the reliability data in accordance with [23], where the busbars are treated as fail-safe components. However, since totally fail-safe components are not entirely realistic, we define another system setup, Setup 2, where we acknowledge the possibility that the busbars might fail. A very low failure frequency of 0.001 year−1 and a repair time of 24 h are assigned. The second setup is used to
study the impact of omitting components with low failure frequency early in the analysis.

The structural model of the system is presented in Fig. 2, in terms of nodes, edges and how they are connected. In total, there are 74 nodes (33 generators, 24 busbars, 17 load-points) and 37 edges (5 transformers and 32 power lines). Failures of load-points are not considered, and hence a total of 94 components can fail.

In a study to guide real-life decisions, the choice of functional model should be carefully addressed, since the estimated magnitude of the consequences may affect decisions about which scenarios to mitigate. However, the aim of this paper is primarily to contrast and discuss reliability and vulnerability approaches for analysing infrastructure systems, not the ability of the functional model to accurately capture real-life behaviour. Here, the same functional model is used for both the reliability and the vulnerability study, and only relative comparisons are carried out. For this reason, the functional model used for both analyses is a simplistic capacity model, similar to a DC load flow model, which captures some of the physical limitations of the test system; it is not a purely structural model (as normally used in the network-theoretical field). The model includes the loads of the system (MW), the capacities of the generators (MW), and the capacities of lines and cables (MW). Electrical properties such as voltage levels, voltage angles, frequency deviations, and cascading failure mechanisms are not captured. In Fig. 3, the main steps of the functional model are summarized. The model can be seen as a maximum flow model, since the only limitation is the amount of active power that can be generated and transferred in the system, without considering other physical parameters such as voltage levels and reactive power. The results from the functional model are similar to what is expected from a DC load flow, which is supported by the comparison of the reliability results with [51], using a DC model, in Section 5. Furthermore, the authors have compared nine different functional models, from simple topological models (network-theoretical) up to DC and AC models (engineering models), in [50]. This type of model gives a reasonable balance between fidelity and simulation time.

By using the capacity model described above, it is easier to extend the general conclusions drawn in this paper to a larger array of other critical infrastructure systems, such as water supply systems and gas networks – i.e. systems that can be described by nodes having limited in-feed capacities (e.g. a water purification plant or a gas plant) and/or load demands (e.g. water or gas consumption by customers), and edges with capacity limits (e.g. maximum throughput of water or gas pipes).

Fig. 3. A flow chart of the functional model used in the two approaches. A load-point is reachable if there exists a path between the generator and the load-point where all edges in the path have a "capacity left" larger than zero. P_G is the available capacity of the generator, P_L is the load of the load-point and P_P is the smallest capacity left over all edges in the path.
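One possible reading of the capacity model in Fig. 3, sketched below as a maximum-flow computation (an interpretation of ours, not the authors' exact algorithm), reuses the toy graph conventions from the Section 3 sketch: a fictitious source feeds each generator up to P_G, each load-point drains up to P_L into a fictitious sink, and every line carries at most its rating in either direction.

```python
# Sketch: capacity-model evaluation as a max-flow problem on a graph G
# built as in the Section 3 sketch; `removed` holds failed components.
import networkx as nx

def power_not_supplied(G, removed=frozenset()):
    """Total demand (MW) that cannot be served given the removed components."""
    F, demand = nx.DiGraph(), 0.0
    for n, d in G.nodes(data=True):
        if n in removed:
            continue
        if d["kind"] == "generator":
            F.add_edge("SRC", n, capacity=d["capacity_mw"])
        elif d["kind"] == "load":
            F.add_edge(n, "SNK", capacity=d["demand_mw"])
            demand += d["demand_mw"]
    for u, v, d in G.edges(data=True):
        if u not in removed and v not in removed:
            F.add_edge(u, v, capacity=d["capacity_mw"])  # line, both directions
            F.add_edge(v, u, capacity=d["capacity_mw"])
    if "SRC" not in F or "SNK" not in F:
        return demand                      # no generation or no load left
    supplied, _ = nx.maximum_flow(F, "SRC", "SNK")
    return demand - supplied
```

On the toy graph of Section 3, power_not_supplied(G) returns 0.0, while power_not_supplied(G, frozenset({"B1"})) returns 100.0, since removing the busbar isolates the load-point.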
5. Reliability analysis of the test system

A sequential Monte Carlo approach with a 1-h time resolution is used to perform a reliability analysis aimed at calculating a number of reliability indices over a horizon of 1 year, i.e. 8760 time steps of 1 h each (for detailed calculation procedures, see e.g. [52]):
- EDNS – Expected Demand Not Supplied (MW). Average power not supplied over the 8760 1-h time steps. Calculated as EENS/8760.
- EENS – Expected Energy Not Supplied (MWh/year). Summed energy not supplied over the 8760 1-h time steps.
- LOLE – Loss of Load Expectancy (h/year). The number of time steps (i.e. hours) in which the power not supplied is above zero. Calculated as LOLF·LOLD.
- LOLP – Loss of Load Probability (%). Calculated as LOLE/8760.
- LOLF – Loss of Load Frequency (outages/year). Frequency of transitions of the power not supplied from zero to non-zero.
- LOLD – Loss of Load Duration (h/outage). Calculated as LOLE/LOLF.
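For concreteness, a minimal sketch (our illustration, not the paper's code) of how these indices follow from an hourly series of power not supplied, here called pns, over n_years simulated years:

```python
# Sketch: reliability indices from an hourly power-not-supplied series.
def reliability_indices(pns, n_years):
    """pns: power not supplied (MW) per hourly step, len == n_years * 8760."""
    eens = sum(pns) / n_years                              # MWh/year
    lole = sum(1 for p in pns if p > 0) / n_years          # h/year
    lolf = ((1 if pns[0] > 0 else 0) +
            sum(1 for i in range(1, len(pns))
                if pns[i] > 0 and pns[i - 1] == 0)) / n_years  # outages/year
    return {
        "EENS": eens,
        "EDNS": eens / 8760,                               # MW
        "LOLE": lole,
        "LOLP": 100.0 * lole / 8760,                       # %
        "LOLF": lolf,
        "LOLD": lole / lolf if lolf else 0.0,              # h/outage
    }

# Toy year: two outages totalling 220 MWh over 3 h.
print(reliability_indices([0, 50, 50, 0, 120] + [0] * 8755, n_years=1))
```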
The coefficient of variation, α (see Eq. (1)), is used to check the convergence of the indices [52]:

α = s(X) / E(X)    (1)

where X is a vector containing the reliability index for the simulated years up to, and including, the year for which α is calculated.
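A one-function sketch of this check (our naming; `yearly_index` is assumed to hold one value of the index per simulated year):

```python
# Sketch: coefficient of variation, alpha = s(X) / E(X), of Eq. (1).
import statistics

def coefficient_of_variation(yearly_index):
    return statistics.stdev(yearly_index) / statistics.mean(yearly_index)
```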
Commonly used values for an acceptable convergence are, for example, 6% [35] and 5% [51]. For the simulations performed here, the α-values were 4.0% for Setup 1 (busbars considered fail-safe, as described in Section 4) after 150 years, 5.2% for Setup 2 (busbars considered fallible) after 150 years, and 2.8% for Setup 2 after 1500 years. All α-values are calculated for the EENS (the index that converges most slowly). The low α-values indicate that convergence is acceptable after 150 years and that simulating additional years would have a very limited effect on the calculated reliability indices. Comparing the 150-year and the 1500-year simulations for Setup 2 in Table 1 also shows a very small difference, thus indicating high convergence of the indices.

Table 1 reveals that there are outages almost 10% of the total time and that, on average, about 1% of the power demand is not supplied (EDNS divided by the peak load). These values are rather high compared with the actual performance of a real transmission system, since the assumption of using peak load (giving so-called annualized indices) overestimates the indices significantly. In [53], a comparison was made for the test system where the annualized EENS is about 135,000 MWh/year, compared with an annual EENS of about 4000 MWh/year (using hourly peak loads). Table 1 also contains the results from reference [53]. It can be seen that the results obtained here compare well with those reported in the reference, indicating that the functional model used in this paper compares reasonably well with a DC load flow model; the functional model utilized here slightly underestimates the indices containing estimations of power not supplied (EDNS and EENS) compared to the DC model used in [53].

An interesting comparison in Table 1 is the one between the two different system setups. A minor increase in the reliability indices can be seen when treating busbars as fallible (Setup 2) compared to the fail-safe setup (Setup 1). That the effect is only minor is, of course, due to the failure frequency assigned to the busbars – the smaller the estimated probabilities, the smaller the effect on the reliability indices.
Table 1. Annualised reliability indices for the test system.

Index                      150 years  150 years  1500 years  Billinton and
                           Setup 1    Setup 2    Setup 2     Wangdee [53] (a)
EDNS (MW)                  13.8       14.46      14.56       15.4 (b)
EENS (MWh/yr)              120,796    126,262    127,546     134,591
LOLE (EDLC) (h/yr)         704        725        732         745 (b)
LOLF (ENLC) (outages/yr)   18.6       18.8       18.8        18.6
LOLD (ADLC) (h/outage)     38.3       39.0       38.8        40.1 (b)
LOLP (PLC) (%)             8.0        8.3        8.3         8.5

(a) Values represent annualized indices based on a sequential Monte Carlo simulation.
(b) These values are not presented in the reference but are derived from the other reported reliability indices in accordance with the equations in Section 5.

6. Vulnerability analysis of the test system

Two types of vulnerability analyses of the test system are presented: global vulnerability analysis and critical component analysis, as introduced in Section 2.2.

In Fig. 4, the results of the global vulnerability analysis are presented. For the analysis, an increasing number of components are randomly removed (i.e. structural strains) and the consequences are evaluated in terms of power not supplied (using the same functional model as in the reliability analysis of Section 5).
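The sampling loop of the global vulnerability analysis might be sketched as follows (an assumed structure, reusing the hypothetical power_not_supplied model from the earlier sketch; note that 10,000 iterations over 94 components quickly become expensive with a detailed functional model):

```python
# Sketch: global vulnerability analysis by random removal of an
# increasing number of components; returns the mean consequence
# at each strain magnitude (min/max curves are obtained analogously).
import random

def global_vulnerability(G, components, consequence, n_iter=10000, seed=1):
    rng = random.Random(seed)
    curves = []
    for _ in range(n_iter):
        order = rng.sample(components, len(components))  # random removal order
        removed, curve = set(), []
        for c in order:                                  # strain magnitude 1..N
            removed.add(c)
            curve.append(consequence(G, frozenset(removed)))
        curves.append(curve)
    return [sum(col) / n_iter for col in zip(*curves)]
```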
Fig. 4. Global vulnerability results. Consequences that arise (y-axis), in terms of power not supplied, for an increasing number of random failures. Average (black line), maximum (∇), minimum (Δ) values and whiskers for one standard deviation are presented for 10,000 iterations.
Analysis of functional strains (e.g. variations in the loading of the system) is also possible but is not addressed here. In the figure, the results (average, maximum and minimum values) from 10,000 iterations are reported. One iteration entails randomly removing an increasing number of components (from one up to all) and calculating the consequences for each order of removed components. In total, 94,000 scenarios were analysed, since the number of removable components is 94. This can be contrasted with the number of possible scenarios that would need to be analysed to cover, for example, all N-3 states (134,044) or all N-4 states (3,049,501). Hence, only a partial picture of the system vulnerability is obtained, especially for large orders of removed components. On average, a rather large fraction of the components has to be removed to produce significant power outages. This indicates that many of the components are not critical for the system function, i.e. on the whole it is a relatively robust system. However, as depicted in the figure, the sampled maximum and minimum values show a large variation, which is due to a few highly critical components (the large generating units and the busbars).

In Fig. 5, an overview of the results of the component criticality analysis is presented. Exhaustive analyses were performed for N-1 to N-4 simultaneous failures, i.e. covering only a small portion of the numbers of removed components in Fig. 4, but instead exhaustively covering all failure scenarios. The figure, which shows the consequences of all failure scenarios sorted from highest to lowest, provides a good overview of the potential consequences of different failure combinations. These types of results could be useful for establishing vulnerability criteria, such as "the maximum allowed power outage for any N-3 scenario is 1000 MW". The results of the example would lead to revision and possible mitigation of roughly 10 N-3 scenarios exceeding this limit. The most common criterion followed in practice today is the N-1 design criterion, stating that the system should withstand the loss of the most critical component without any consequences arising. This type of criterion is often referred to as a reliability criterion; however, from the definitions of vulnerability and reliability used in this paper, it would more properly be labelled a vulnerability criterion, since no reference is made to a probability (which is implicit in the concept of reliability). A proper reliability criterion could instead be that outages above a certain consequence threshold, e.g. 1000 MW, must not have a probability of more than some specified value, e.g. 0.01 over a 10-year interval.

Looking closer into the results of the component criticality analysis, it can be concluded that the most critical combinations are those where the high-capacity generators and busbars are simultaneously out of function. These types of analyses thus give a good notion of the system's robustness to strains. In addition, it is concluded that the commonly used deterministic N-1 criterion is fulfilled for failures of generators and power lines, but not for busbar failures.

Fig. 5. Distribution of the power not supplied (MW) for N-1, N-2, N-3, and N-4 simultaneous failures.

7. Comparison of the two approaches

In this section, some conceptual comparisons between the results from the reliability analysis and those from the vulnerability analysis are given.

In Fig. 6, the global vulnerability analysis results of Fig. 4 are compared with those from the reliability analysis. For small numbers of removed components (up to 10), the reliability analyses by Monte Carlo simulation (150 and 1500 years) and the vulnerability analysis give similar results (in terms of the maximum and minimum consequences found), although the Monte Carlo simulation finds some more extreme values around N-3 to N-4. This is due to the fact that the test system generators, which are critical components, have very high failure frequencies, which also leads to the discovery of several higher-order scenarios (N-5 to N-10) in the Monte Carlo simulation. However, the reliability analysis by Monte Carlo simulation has no occurrences of scenarios above 10 removed components and thus gives no information on the potential consequences; the global vulnerability analysis, on the contrary, also samples these high-order scenarios, thus providing information, for example, on the extent to which the system degrades "gracefully" as an increasing fraction of components is lost, or on whether there is some point beyond which the system breaks down quickly, in a sort of phase transition in system behaviour; see e.g. [54].

The coverage of the state space for N-1 to N-4 scenarios (due to 1 to 4 simultaneous failures), which is systematically and exhaustively covered by the critical component analysis, is presented in Fig. 7 for the 150- and 1500-year reliability analyses. It can be seen that a large number of single failures (N-1) are captured in the reliability analysis for both the 150-year and the 1500-year simulations; those not covered are all busbar failures. However, already for two simultaneous failures (N-2), the coverage is far from complete (26% and 42%, respectively), even though the measured reliability indices have converged to acceptable levels. For three (N-3) and four (N-4) simultaneous failures, the coverage of the state space is very limited. Comparing the coverage for the two simulation lengths in Fig. 7, it is seen that the coverage increases slightly with increased simulation length. Overall, however, the coverage of higher-order contingencies in the reliability analysis is very limited, due to the estimated rarity of the large majority of the simultaneous failure scenarios.
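The coverage comparison of Fig. 7 can be sketched as follows (our reconstruction; mc_states is assumed to be the set of failure combinations visited during the Monte Carlo simulation):

```python
# Sketch: fraction of all N-k failure combinations that were also
# visited as system states in the Monte Carlo simulation.
from itertools import combinations

def state_space_coverage(components, mc_states, k):
    all_k = [frozenset(c) for c in combinations(components, k)]
    hits = sum(1 for s in all_k if s in mc_states)
    return hits / len(all_k)

# Toy usage: 3 components, two visited states -> 1 of 3 N-2 states covered.
mc_states = {frozenset({"G1"}), frozenset({"G1", "LP1"})}
print(state_space_coverage(["G1", "G2", "LP1"], mc_states, k=2))  # 0.333...
```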
Fig. 6. Consequences found in the global vulnerability analysis (black line for the average value and grey lines for the maximum and minimum values) compared with the combined results of the reliability analysis for the 150-year and 1500-year Monte Carlo simulations (grey squares).
In Fig. 8, the accumulated consequences for the covered states are presented. The figure clearly reveals that the reliability analysis poorly covers the accumulated consequences that can arise in the system, even for the longer 1500-year simulation. Already at N-2 scenarios, the accumulated coverage of the consequences is less than 1% for both the 150-year and the 1500-year simulations. Hence, the reliability analysis fails to capture high-consequence scenarios. However, when studying the accumulated frequencies of the covered system states (Fig. 9), the conclusion is the opposite, as expected: the most likely states are indeed well captured by the reliability analysis. The frequencies of all states (single failures and combinations of up to four simultaneous failures) are calculated using the failure frequency data of the test system and assuming component failure independence (as was assumed in the reliability analysis above). For N-1 to N-3, the coverage of the accumulated failure frequencies is higher than 82% for both the 150-year and 1500-year simulations. For N-4, the coverage drops significantly, to 6% and 23% for the respective simulation lengths. Hence, the reliability analysis covers the most likely scenarios reasonably well.

In Fig. 10, a consequence/frequency plot is presented that further strengthens the above conclusions. The figure depicts which types of states from the critical component analysis (exhaustive coverage up to N-4) are captured by the reliability analysis. The Monte Carlo simulation-based reliability analysis provides good coverage of the most frequent failure states. As the failure frequencies drop, the coverage drops significantly, indicating that extremely few of the low-probability states are captured. It is particularly clear from the figure that the reliability analysis fails to capture the low-frequency, high-consequence scenarios. The overall conclusion is that the estimated rare higher-order contingencies are not important for the result of the reliability analysis, since their very small frequencies of occurrence give them a rather small influence on the indices. They can, however, potentially lead to large-scale consequences, which is the major focus of, and is captured by, the vulnerability analysis.

Fig. 10. Consequence/frequency plot of the states (N-1 to N-4) captured by the Monte Carlo simulation (grey squares) and states not captured (black triangles) for (a) the 150-year and (b) the 1500-year Monte Carlo simulations. The grey horizontal lines indicate which system states can be expected to be captured (above the line) and not captured (below the line) by the Monte Carlo approach used here, i.e. (a) 150−1 and (b) 1500−1, respectively.

8. Discussion

The purpose of this paper was to discuss and contrast two approaches for analysing critical infrastructures and the types of results they generate, namely reliability analysis and vulnerability analysis. In a risk management context, the results of these analyses are used to inform decisions on how to strengthen the
system and, hence, it is important to know what type of information about the system these approaches generate and what indications they can provide for its improvement. To substantiate the discussion, a numerical example was carried out on a well-referenced test system, the IEEE RTS96.

As the analysis in Section 5 revealed, reliability analysis provides useful information about a system's likely behaviour, in terms of indices describing characteristics such as how often failures can be expected per year and the average duration and magnitude of these failures. Most commonly, the results of reliability analysis are presented as average values of these indices over a long period of time (e.g. over hundreds of years). This information is very valuable for getting a sense of the system's general ability to perform its intended function and of which components contribute the most to the system unreliability, which for example may be used as a basis for maintenance schemes. Sometimes, the importance of information about variations in the indices between specific years is also stressed [34]; still, the focus is on values that are averaged over the year. Hence, reliability analysis can guide decisions on designing systems in terms of their likelihood of performing their intended function.

Since the focus of reliability analysis is on the system's likely behaviour, events that are estimated to have a low probability/frequency of occurrence have a minor impact on the results, as illustrated by the comparison between the two system setups in Section 5. In fact, many of those events will not even be captured in the Monte Carlo-based reliability analysis, as seen in Figs. 6–10. Hence, relying too heavily on reliability analysis results to guide decisions could potentially lead to a reliable but at the same time vulnerable system, because the efforts to strengthen the system will be focused on improving its reliability and not on decreasing its vulnerability.

A fundamental issue is, then, the estimation of the probabilities of both single and simultaneous failures. These estimations can be problematic in practice, as pointed out many times in different contexts; see e.g. [13–15]. Often, several assumptions must be made, e.g. the assumption of failure independence. These assumptions usually give rise to very small estimated probabilities of multiple simultaneous failures. However, there are occasions where the assumption of failure independence is not appropriate, such as in the case of external hazard exposures (e.g. adverse weather, where the phenomenon of "failure bunching" often arises [55], or malicious threats), unforeseen failure mechanisms or interactions, or other types of common cause failures.
Fig. 7. Coverage of the state space for (a) the 150 years simulation and (b) the 1500 years simulation.
Fig. 8. Accumulated coverage of the consequences for (a) the 150 years simulation and (b) the 1500 years simulation.
Fig. 9. Accumulated coverage of the estimated frequencies for (a) the 150 years simulation and (b) the 1500 years simulation.
As a consequence, the probabilities of multiple simultaneous failures estimated in reliability analyses can lead to misleading results. In fact, many catastrophic events that have occurred in the past involve failure combinations and interactions that are very difficult to capture beforehand in a reliability analysis; the Auckland power outage in 1998 [56], the blackout in Sweden in 2003 [57] and the North American blackout in 2003 [58] are three examples of such events. Performing more sophisticated reliability analyses that try to take failure correlations into consideration might therefore improve the ability of reliability analysis to capture high-consequence scenarios. The extent of the improvement has not been addressed in this paper, but it would be highly interesting to study in future work. However, the problem of estimating these correlations beforehand remains: the phenomenon of "hindsight bias" [59] suggests that, while it may be possible to make sense of events after they have happened, that is much easier than identifying them beforehand and including them in a reliability analysis.

The difficulties of estimating the probabilities of failure scenarios, or even identifying/capturing them in the analysis in the first place, are amplified in contexts of high complexity and large uncertainty, which characterise the area of critical infrastructures. Commonly used strategies for handling uncertainties in risk and reliability analyses are to make conservative judgements and to
require that the risk and reliability acceptance criteria are achieved with a certain margin – to be "on the safe side". Both these strategies imply that some measures are implemented as an extra safeguard against various sources of uncertainty. However, since these measures are focused on further increasing the level of reliability (or reducing the level of risk), they especially address scenarios that contribute significantly to the unreliability of the system, rather than the ones estimated to have a very small frequency of occurrence but very large consequences. Traditional risk management based on probabilistic reliability analyses therefore has some limitations when it comes to accounting for large-consequence scenarios that are beforehand judged to be unlikely. The paper has demonstrated this in the context of Monte Carlo-based reliability analysis.

Probabilistic analyses provide very important input to decisions; however, they should not be the only input, cf. [60]. We argue that vulnerability analysis plays an important role in providing necessary additional information for decisions concerning the design of critical infrastructures. In this, we agree with van Asselt and Renn who, in describing the principles of risk governance, argue that "[g]overning risks is concerned not just with minimizing the risks, but also with stimulating resilience (or decreasing vulnerability), in order to be able to withstand or even
tolerate surprises" [61] (p. 438). Many other researchers have previously suggested that probabilistic risk and reliability analyses must be complemented by resilience-, robustness- and vulnerability-oriented approaches [5,16]; however, there are very few concrete suggestions of such approaches and of how they can complement risk and reliability analyses. This paper has presented some of the information that can be obtained from vulnerability analyses, and how it highlights high-consequence scenarios. As presented in the paper, such information addresses questions like "how bad can it get?", "which combinations of components would lead to very large consequences if they fail?", "how large strains can the system withstand?", and "are there any thresholds where a slight increase in strain magnitude gives rise to much larger consequences?". By investigating these types of questions, and addressing system vulnerabilities before, perhaps unknown, threats or hazards exploit them, vulnerability analysis can guide decisions towards designing less vulnerable systems.

In the context of vulnerability analysis, the probabilities of scenarios are not treated explicitly. When actually deciding whether measures have to be taken, judgements usually need to be made regarding the plausibility that an identified vulnerability may be
exploited by threats and hazards. But, it is argued, the first step towards being able to make that decision at all is that the vulnerability is identified and highlighted. Furthermore, even though our best current knowledge and judgement may give rather small probability estimates of these strains, they nevertheless tend to happen (e.g. the Canadian snow storm in 1998, the North American blackout in 2003, and the earthquake followed by a tsunami in Japan in 2011); if the negative consequences are very large, then measures should perhaps be taken to ensure a robust and resilient system (including measures to limit the impact of the system failing). It should be noted, as Doorn and Hansson argue, that such extra measures "may not be defensible from … a cost-benefit perspective, but they may still be justified from the perspective of protection against uncertainties" [11] (p. 162). At the same time, measures must also be taken to reduce the occurrence of more frequent events, where probabilistic risk and reliability analyses provide important information. Measures should hence be related both to reducing the vulnerability, i.e. improving the system's ability to withstand strains and stresses, and to improving the reliability, i.e. the likely behaviour, in order to ensure the services provided by critical infrastructures. Future research should address how to best integrate the results from a probabilistic reliability
analysis and a non-probabilistic vulnerability analysis in order to make risk-related decisions.
9. Conclusion

In this paper, we discussed and contrasted two approaches for gaining the knowledge required for understanding and improving critical infrastructures, namely reliability analysis and vulnerability analysis. Reliability refers to the ability of a system to perform its intended function. Vulnerability, on the other hand, refers to the inability of a system to withstand strains and the effects of failures. The IEEE RTS96 electric power test system was used to exemplify the types of results gained from the two approaches. Although of limited size, the test system is representative, and the results obtained lead to general insights that are, to some extent, also valid for other types of critical infrastructures.

Specifically, the numerical example revealed that reliability analysis, in the context of the Monte Carlo approach employed here, provides a good picture of the system's likely behaviour, but fails to capture a large portion of the high-consequence scenarios. Although these scenarios are estimated to have small probabilities of occurrence, they should be treated cautiously, as probabilistic analyses should not be the only input to decision-making – as argued in the Discussion section. Using only the results from reliability analysis to guide decisions on how to improve the system would, expectedly, lead to a reliable system; but if no results from a vulnerability analysis are also used to guide decisions, the system might still be vulnerable. Hence, the overall conclusion is that vulnerability analysis should be used as a complement to reliability analysis, and, as argued in the paper, to other forms of probabilistic risk analysis, when making decisions concerning securing the supply of critical infrastructure services. Measures should be sought both for reducing the vulnerability, i.e. improving the system's ability to withstand strains and stresses and hence protect against surprises, and for improving the reliability, i.e. the likely behaviour.

Future research should address how to sensibly use the results from vulnerability analyses for making decisions concerning vulnerability reductions, and how these results can complement those obtained from reliability analyses for securing the vital services provided by critical infrastructures.
Acknowledgements

The research of two of the authors has been financed by the Swedish Civil Contingencies Agency, which is gratefully acknowledged. The authors are thankful to the anonymous referees for providing insightful comments and criticisms, which have guided a thorough revision of the paper and its resulting improvement.

References

[1] de Bruijne M, van Eeten M. Systems that should have failed: critical infrastructure protection in an institutionally fragmented environment. Journal of Contingencies and Crisis Management 2007;15(1):18–29.
[2] McDaniels T, Chang S, Peterson K, Mikawoz J, Reed D. Empirical framework for characterizing infrastructure failure interdependencies. Journal of Infrastructure Systems 2007;13(3):175–84.
[3] Bagheri E, Ghorbani AA. The state of the art in critical infrastructure protection: a framework for convergence. International Journal of Critical Infrastructures 2008;4(3):215–44.
[4] Zio E. An introduction to the basics of reliability and risk analysis. Singapore: World Scientific Publishing; 2007.
[5] Aven T. Foundations of risk analysis: a knowledge and decision-oriented perspective. Chichester: John Wiley & Sons; 2003.
[6] Høyland A, Rausand M. System reliability theory: models and statistical methods. New York, NY: John Wiley & Sons; 1994.
[7] Zio E. Reliability engineering: old problems and new challenges. Reliability Engineering & System Safety 2009;94(2):125–41.