Riding the storm: a comparison of uncertainty modelling techniques for storm surge risk management




Applied Geography 22 (2002) 307–330 www.elsevier.com/locate/apgeog

A. Zerger a,*, D.I. Smith b, G.J. Hunter a, S.D. Jones a

a Centre for GIS and Modelling, Department of Geomatics, The University of Melbourne, VIC 3010, Australia
b Centre for Resource and Environmental Studies, Australian National University, Canberra, ACT 0200, Australia

Received 29 May 2001; received in revised form 5 December 2001; accepted 30 January 2002

Abstract

In tropical Australia the risk from tropical cyclone disaster is significant and increasing, since much of the population inhabits low-lying coastal regions, which are experiencing further rapid urbanization. This paper examines GIS methodologies for predicting flood risk to urban communities that are at risk from cyclone-induced storm surge inundation, using as a case study the coastal community of Cairns. The methodologies attempt to account for the uncertainties inherent in risk predictions. Two uncertainty modelling techniques – the grid cell uncertainty model and the standard normal probability model – are implemented and evaluated in the context of improved risk management decision-making. Spatially, the results of the two techniques are almost identical and, for evacuation decision-making, should be treated as such. The results of the methodologies confirm that the low-lying nature of Cairns contributes to the overall risk and that relatively high-frequency and small-magnitude surge events can cause major inundations. However, the techniques have very different computational overheads and implementation efficiencies, and these are discussed in detail. The paper concludes by examining the implications of uncertainty modelling for risk management decision-making. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Decision-support systems; Flood risk; GIS; Risk management; Sensitivity analysis; Uncertainty



* Corresponding author. Fax: +61-3-9347-2916. E-mail address: [email protected] (A. Zerger).

0143-6228/02/$ - see front matter © 2002 Elsevier Science Ltd. All rights reserved. PII: S0143-6228(02)00010-3


Introduction

In Australia, the risk from tropical cyclone disaster is significant because much of the population inhabits low-lying coastal regions and urbanization rates in these regions are rising. Globally, cyclone-induced storm surges pose the greatest natural hazard threat to coastal communities, storm surge being the world's foremost natural hazard in terms of property devastation and lives lost (Murty, 1988). Most disasters, however, occur in developing countries where population densities are significantly higher than in Australia. Given that political and social impediments can be overcome, Australia is in an excellent position to mitigate future storm surge risk by adapting technologies such as geographic information systems (GIS) to support decision-making.

However, the apparent scale-less nature of a GIS can lead users to make assumptions about data accuracy that are not valid. Buttenfield and Beard (1995) note that computer-generated maps commonly imply a level of accuracy not warranted by the data used to derive them. Openshaw (1989) notes that a GIS provides a very accurate system for manipulating spatial data, but the data being analysed are often of variable accuracy. Removing the common perception that 'if the output is computer generated it must be accurate' should be an important consideration for practitioners working in risk management.

In addition to the technical considerations of accounting for error inherent in spatial modelling, there are issues pertaining to the legal liability of decisions supported by results from GIS-based modelling. Epstein (1991) notes that one element of the legal system of relevance to GIS modelling is the liability associated with questions of error and misuse of data and information. Hunter and Goodchild (1997: 36) add that legal issues of data quality are 'particularly pertinent to agencies responsible for making regulatory decisions that may be subject to judicial review'. Issues of legal liability arising from spatial analyses are particularly relevant in flood risk modelling as the possible consequences of incorrect decisions are grave. To date, there have been few examples where results from analysis using GIS have been held to legal scrutiny (see Epstein, 1991; Garvey, 1995). However, this is expected to change as risk modelling and GIS-based decision-support systems become more common. Error and uncertainty in input data and spatially explicit risk models will invariably become a key focus in legal liability cases. One way GIS practitioners could prepare for this is by providing error-free results. This is at the very least impractical, and in reality impossible. The alternative, and most practical, approach is instead for GIS practitioners to make model, data and error assumptions explicit to decision-makers.

This paper examines the issue of spatial data uncertainty and its relevance for GIS-based decision-making for natural hazard flood risk modelling. Storm surge inundation modelling is presented as a case study of risk modelling by comparing and contrasting two uncertainty modelling methodologies:

• a simulation technique known as the Grid Cell Uncertainty Model (Hunter & Goodchild, 1997), and


• an analytical statistical technique herein referred to as the Standard Normal Probability Model.

Both models are applied to the task of storm surge risk modelling in the coastal community of Cairns in the far north of Australia. These uncertainty models focus on the error inherent in digital elevation models (DEMs), since ground elevations are key variables in any risk assessment (in addition to flood heights and building floor heights). Results suggest that the spatial pattern of error has important implications for evacuation planning, as, for example, in examining the feasibility of developing evacuation zones on the basis of ground elevations. Results from both methodologies are presented, focusing on their relative merits and practical implementation requirements. The implications of an uncertainty-based modelling approach for flood hazard mapping and risk management decision support are then presented.

Uncertainty and natural hazard risk assessment

Uncertainty, in contrast to error, assumes no prior knowledge of data accuracy. The importance of uncertainty in natural hazard risk management has received recent attention (Goodchild, 1991; Newkirk, 1993; Rejeski, 1993; Coppock, 1995; Handmer, 1995; Davis & Keller, 1997). However, it is indeed relatively rare for operational risk-focused GIS applications to consider data or model accuracy, and rarer still for them to communicate these inherent uncertainties in the final risk assessments. One example is the study by Emmi and Horton (1995), which applied GIS and random simulations to perturb the location of earthquake ground-shaking zones, and assessed the resulting change in building losses in Salt Lake City, Utah. This study is particularly relevant since it quantifies uncertainty in terms of the final risk results. More commonly, spatial uncertainty summaries are poorly understood and presented as esoteric concepts that have little practical use for risk managers. These may include mathematical measures such as 'Root Mean Square Errors' (RMSE), epsilon bands, probability surfaces and classification error matrices.

Simulation techniques such as GIS-based modelling allow natural hazard risk managers to identify the cost-effectiveness of error reduction strategies, including improving database resolution, or performing more detailed ground truthing (Emmi & Horton, 1995). Uncertainty analysis also allows risk managers to answer questions about the error bounds of hazard models and to guide future GIS data acquisition. Davis and Keller (1997) applied 'fuzzy logic' and Monte Carlo simulations (EPA, 1997) in a GIS to a slope stability risk model in British Columbia, Canada. The method accounts for the uncertain nature of risk when polygon spatial data models are used. The theory accounts for the representation of degrees of membership in a set (that is, vegetation polygons) to 'mirror the nature of imprecise data' (Davis & Keller, 1997: 411). Murillo and Hunter (1996) evaluated uncertainty in a landslide susceptibility model for a site in Oregon, USA.


To evaluate the role of spatial data uncertainty, several different but equally probable realizations of a DEM were created. The results were mapped, and regions consistently identified as 'at risk' were shaded by a particular colour to denote high-certainty results. Wadge (1988) also discussed the importance of DEM error on gravity-flow models and resultant slope instabilities.

These examples only account for one component of uncertainty in the risk model. Other uncertainties include those associated with GIS spatial operations, uncertainty inherent in the natural hazard model, and the uncertainty associated with cartographic generalization. All these will affect the quality of decision-making. The problem is not so much the existence of uncertainty in GIS data but rather that the traditional response has been to ignore it, on the grounds that methods to handle it do not exist (Openshaw, 1989). Goodchild, Guoqing and Shiren (1992) note that, with the inevitable existence of uncertainty in spatial data, there are three responses to dealing with it:

1. to omit all reference to it;
2. to attach some form of description to the output; or
3. to show samples from the range of maps or outputs possible.

The first option is unacceptable and the second may not adequately communicate to risk managers the complex nature of uncertainty in the data and model. Examining the third option is preferable because 'it would appear to have the greatest potential benefit in both communicating uncertainty and at the same time educating the user community to the significance of the issue' (Hunter, Goodchild, & Robey, 1994: 3). Methods for implementing this last option are described as 'sensitivity analyses'. Lodwick, Monsoon and Svobodam (1990: 413) define sensitivity analysis as 'the study of the effects of imposed perturbations (variations) on the inputs of a geographical analysis on the outputs of that analysis'. Sensitivity analysis is applicable when no prior knowledge of the truth exists or when analytic error propagation models are not applicable.

The following section examines the issue of elevation model uncertainty and its importance to flood risk modelling. Two uncertainty methodologies are presented and evaluated in the context of computational efficiency and their relevance to risk mapping. Although the research examines storm surge risk, the methodologies could be adapted to riverine flooding.

Study site and flood risk modelling

A storm surge (commonly called 'storm tide' when combined with tidal influences) is the term used to describe an anomalous elevation of water generated by the combined action of a cyclone and coastal bathymetry. Loss of life in Australia from storm surge inundation has been minimal in comparison to around 300 000 lives lost in 1970 in Bangladesh, or around 6000 deaths in Galveston, Texas in 1900. Nevertheless, modelling surge hazard inundation in vulnerable urban regions is an important component of broader disaster risk management efforts.


For instance, accurate surge inundation estimates are important for evacuation planning, for developing urban zoning that accounts for inundation risk, for developing mitigation measures such as building construction codes and for planning post-disaster recovery.

Cairns (Fig. 1) is situated in far North Queensland and has a population of approximately 100 000. The Queensland coast has been labelled 'The New California', with growth rates that place it in the top ten fastest-growing urban areas in the developed world (Skinner, Gillam, & O'Dempsey, 1993). The linear nature of the coastal range, combined with the desire for beach frontages, restricts urbanization in Cairns to a north–south corridor, which leads to a greater susceptibility to massive losses from storm surge. Road and utility networks are also likely to be at risk from a major disaster. In addition, Cairns is a relatively remote regional centre in Queensland. All these factors magnify the consequences of any major cyclonic event. Although the incidence of major cyclone disaster in Australia is uncommon, events such as Cyclone Tracy in 1974 and Cyclone Justin in 1997 remind residents and policy-makers that a risk exists.

Detailed spatial databases for Cairns are a key input for risk modelling. From fieldwork and existing council databases, information on 24 669 commercial and residential buildings has been integrated into a GIS. The database also includes building attributes such as floor heights (for inundation modelling) and building construction material (for future cyclone wind field modelling). In addition to the built environment, key input parameters for modelling inundation are high-accuracy digital elevation models (DEMs). The DEM, in combination with the floor heights, is critical for accurately predicting the number of buildings likely to be inundated by a particular storm surge. ANUDEM surface interpolation software (Hutchinson, 1996) was used to generate a 20-m DEM from 1:25 000-scale contours, spot heights and drainage networks (Fig. 2). ANUDEM generates DEMs with sensible drainage properties by including a drainage enforcement algorithm that removes spurious sinks. The algorithm is particularly suited to cases where only sparse elevation data (contours or spot heights) are available. To verify the final DEM accuracy, 294 sub-centimetre accuracy permanent survey markers were used.

Fig. 1. Cairns City looking north showing the coastal range and central business district and a 3D visualization for the same area.


Fig. 2. Cairns: analytical hillshading derived from the 20-m DEM.


Fig. 3. DEM error frequency and cumulative frequency histogram for all elevations compared to elevations recorded at permanent survey markers (PSM).

Fig. 4. DEM error frequency and cumulative frequency histogram for elevations <5.0 m (AHD) compared to elevations recorded at permanent survey markers (PSM).

Fig. 3 shows the error frequency histogram for the 20-m Cairns DEM. After comparison with the surveyed ground control points, the DEM was found to be accurate to within ±2.0 m 90% of the time.


This represents a ±0.5-m accuracy improvement over the published accuracy standard of ±2.5 m 90% of the time for the 1:25 000-scale input contour and spot elevation data. When elevations of less than 5 m are considered (Fig. 4), the DEM is more accurate (within ±1.0 m 90% of the time).

Of particular interest to this research is not the absolute accuracy of the DEM but rather the implications of error for storm surge risk modelling. The spatial pattern of DEM error has important implications for evacuation planning, as one objective of the research is to examine the feasibility of developing evacuation zones on the basis of ground elevation in Cairns. Results from the DEM error analysis show that error varies spatially through the study area. Within areas of interest (where people live), these errors are, however, relatively constant (Fig. 4). This has important implications for other coastal hazard studies that utilize digital elevation data for risk assessment (Zerger, 1999), since most error statements (metadata) for DEMs are provided as global error estimates (e.g. RMSE statistics). Whilst these are correct globally, they may be relatively erroneous as local estimates. The implications of this for decision-making are serious. This is particularly important in coastal regions such as Cairns, where a mountainous coastal range can skew global elevation error estimates, since larger errors are commonly found in mountainous regions.

Results in Cairns show that the absolute DEM vertical accuracy may be less than that ideally required, but evacuation priorities can still be determined, as the relative error where buildings are located (<5.0 m above the Australian Height Datum (AHD)) remains relatively constant. However, this approach to evacuation planning is relatively simple, ignoring the storm surge magnitude entirely. Just as importantly, it ignores the vast investment in spatial data made by local governments. More sophisticated methodologies should therefore be utilized. The following section examines two approaches to modelling inundation risk to the buildings of Cairns under assumptions of elevation model uncertainty. Both provide a technique for modelling and communicating the consequences of spatial model uncertainty to disaster risk managers in the absence of detailed spatial error modelling. The methodologies are compared in the context of evacuation planning in Cairns, as they represent two distinct approaches that yield similar results. However, issues of GIS software integration and computational efficiency will determine their ultimate suitability in applications such as disaster risk management. These issues are considered in detail, as they are relevant for other coastal zone applications that depend on accurate elevation models and error communication to decision-makers.
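The DEM verification described above is straightforward to reproduce. The following is a minimal sketch, assuming the DEM elevations sampled at the permanent survey markers and the surveyed heights are available as two arrays (the array names are illustrative, not the project's actual database fields); it reports the RMSE and the 90th-percentile absolute error, the 'within ±x m 90% of the time' figure quoted above.

```python
import numpy as np

def dem_error_summary(dem_elev, psm_elev):
    """Summarize DEM vertical error against permanent survey markers (PSM).

    dem_elev, psm_elev : elevations (m, AHD) at the same marker locations.
    Returns the RMSE and the 90th percentile of the absolute error.
    """
    err = np.asarray(dem_elev) - np.asarray(psm_elev)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    le90 = float(np.percentile(np.abs(err), 90))
    return rmse, le90

# Global accuracy versus accuracy in the low-lying areas (< 5 m AHD),
# mirroring the comparison of Figs 3 and 4 (array names are hypothetical):
# rmse_all, le90_all = dem_error_summary(dem_at_psm, psm_heights)
# low = psm_heights < 5.0
# rmse_low, le90_low = dem_error_summary(dem_at_psm[low], psm_heights[low])
```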

Storm surge inundation modelling methodology

In addition to requiring high-accuracy databases (including elevation data), modelling storm surge risk on land is limited by incomplete knowledge of the complexity of storm surge behaviour around natural barriers and buildings, and by the associated computational demands of modelling such dynamic behaviour.


To model the inundation component, a simple flatwater inundation model is used to represent inundation levels above the Australian Height Datum (AHD), based on elevations in the DEM and building floor heights only. This simple inundation model is implemented using the Arc/Info GIS (ESRI, 1997) and the Arc Macro Language (AML). A simple flatwater model was used because, currently, there is no hydraulic overland storm surge model for Cairns. The Australian Bureau of Meteorology has developed the Maximum Envelopes Of Water (MEOWS) technique for surge modelling along the Queensland coast (Sanderson, Tang, Holland, Grimshaw, & Woodcock, 1995). However, this technique is not suitable for such local-scale studies and, in addition, it neglects the overland surge estimates that are so critical in urbanized areas such as Cairns.

Inundation levels were modelled in a range from 0 to 15 m (AHD) in 0.5-m increments, and relative building flooding was compared at each inundation level. (All heights are relative to the AHD.) This approach is a form of sensitivity analysis and is used in the absence of detailed flood hydrologies for the study area. Building inundation is a function of the ground elevation at each building location, the building floor height (obtained from fieldwork) and the storm surge inundation simulated using the flatwater approach in the GIS (see the sketch below). For flood risk assessment, such a sensitivity analysis is particularly valuable for several reasons.

• Surge scenarios can be introduced into the landscape and risk managers can assess the outcomes on community vulnerability by identifying 'hot spots', or regions that have particularly high risk. Similarly, 'cold spots', or regions that have repeatedly low levels of risk, can be identified and used as evacuation safe areas.
• It can identify probable maximum risk, whereas in the past only probable maximum inundations were considered. The construct of probable maximum risk reframes disaster planning and policy in terms of vulnerability, consequence and risk, rather than hazard.
• It can identify emergency management 'catchments', or regions within the study domain that have a similar risk and hence require similar risk treatments.
• It can identify critical inundation 'thresholds', or storm surge levels that require significantly different risk treatments.
• The relative importance of input parameters can be assessed, and hence the robustness of the risk analysis to error in GIS-based modelling can be examined.

For storm surge modelling, sensitivity analysis is also useful because results from the model cannot commonly be compared against historical events. Earlier damage from storm surge inundations has not been mapped accurately and the frequency of cyclone disasters in Northern Queensland has been low. This makes model calibration and verification almost impossible. The following section first examines the inundation risk in Cairns under assumptions of model accuracy (i.e. highly accurate DEMs). The inundation risk is expressed in terms of the percentage of buildings inundated above floor height for particular storm surge scenarios.
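The flatwater building-inundation calculation lends itself to a very small script. The sketch below is illustrative only: the study implemented the model in Arc/Info AML, whereas this version is written in Python, and the array names (ground elevations at buildings and floor heights above ground) are assumptions rather than the project's actual database fields.

```python
import numpy as np

def flatwater_sweep(ground_elev, floor_above_ground,
                    levels=np.arange(0.0, 15.5, 0.5)):
    """Count buildings flooded above floor level for each flatwater level.

    ground_elev        : DEM elevation (m, AHD) at each building
    floor_above_ground : floor height above ground (m) at each building
    levels             : still-water inundation levels to test (m, AHD)
    """
    floor_ahd = np.asarray(ground_elev) + np.asarray(floor_above_ground)
    return [(float(level), int(np.sum(level > floor_ahd))) for level in levels]

# counts = flatwater_sweep(ground_elev, floor_above_ground)
# Percentages follow by dividing each count by the total number of buildings.
```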

Fig. 5. Buildings inundated above floor level using a flatwater storm surge model.

The paper then examines two methodologies for mapping inundation risk that include measures of model uncertainty in the final risk estimate.

Storm surge risk modelling under assumptions of data accuracy

The following results assume error-free spatial data and the inundation modelling makes no reference to inherent uncertainties. Fig. 5 shows the building inundation statistics for the flatwater sensitivity analysis ranging from 2.0 to 8.0 m (AHD) for both commercial and residential properties in Cairns. The critical inundation level for Cairns is 2.2–2.8 m (AHD), as this is where the greatest increase in the number of buildings flooded over floor level occurs. The count of buildings flooded increases from 0 to almost 5000 (15%) with only a 1-m rise in inundation. In contrast, the number of buildings with over-floor flooding increases at a rate of 1000 buildings (7–8%) for each 1-m increase after 3.5 m. This pattern of inundation is not surprising when we consider the topographic distribution of buildings in Cairns (which are predominantly located in areas of low elevation).
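Critical thresholds such as the 2.2–2.8 m band can be read off the sensitivity-analysis output by differencing successive counts. A minimal sketch follows, operating on the (level, count) pairs produced by the flatwater sweep sketched earlier; it is an illustration of the idea rather than the procedure used in the study.

```python
def critical_threshold(counts):
    """Return the 0.5-m increment with the largest jump in flooded buildings.

    counts : list of (level_m_ahd, buildings_flooded) pairs, ascending by level.
    """
    jumps = [
        (lo_level, hi_level, hi_n - lo_n)
        for (lo_level, lo_n), (hi_level, hi_n) in zip(counts, counts[1:])
    ]
    return max(jumps, key=lambda j: j[2])  # (lower level, upper level, increase)
```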


In addition to showing the consequences of building inundation in Cairns, these results raise the critical issue of model error and the consequences for emergency management decision-making and risk mapping. Significant increases in buildings flooded over floor level are observed at increments less than the vertical accuracy of the DEM (±2.0 m 90% of the time). A crucial question is: how much faith can be placed in the inundation results when the model is highly dependent on a DEM that is known to contain error? One outcome is that confidence in storm surge inundation results must be questioned under particular surge scenarios. This is particularly the case for high-frequency and low-magnitude events in coastal Queensland. A closer approximation of the true elevation at each building is impossible without more accurate elevation and flood data. Therefore, uncertainty still remains in the final results presented above.

The following section introduces and evaluates two techniques for modelling and communicating the uncertainty inherent in the risk models. The conclusions are important, for the reality of flood risk modelling is that decisions are commonly made with assumptions of accuracy that are clearly false. In other applications of digital elevation modelling, such as wildlife habitat modelling or land use suitability assessment, error in vertical data can be tolerated and even ignored, with minimal consequence. For flood risk prediction and evacuation decision-making, the consequences of ignoring inherent errors may be far from insignificant.

Flood risk modelling under assumptions of uncertainty

A number of studies have examined the effect of DEM uncertainty on DEM-derived variables such as slope and aspect (Carter, 1992). Gao (1997) developed a methodology for determining the DEM resolution that is appropriate for the accuracy requirements of a particular application. The research focused specifically on DEM-derived slope gradients and the representation of morphologic features, including valleys, peaks and ridges. Hunter and Goodchild (1997) provide an approach for modelling the nature and severity of uncertainty in slope and aspect estimates in the absence of explicit error information by applying uncertainty analysis. Less common are GIS applications that examine the effect on final model outcomes. Researchers have applied variations of sensitivity analysis to examine the effect of a model's input on output results (Stoms, Davis, & Cogan, 1992). Goodchild, Buttenfield, and Wood (1995) provide a detailed discussion of the role of uncertainty in the decision-making process using spatial data.

DEMs are critical for flood risk inundation modelling and particularly for urban damage assessment and evacuation planning. Uncertainty analysis based on sensitivity analysis is one technique for communicating the consequences of spatial uncertainty for decision-making. The following sections describe and evaluate two approaches for modelling the uncertainty inherent in flood risk models. Results are almost identical, yet computationally the approaches have important differences that determine their ultimate suitability for flood risk estimation.


Fig. 6. Realizations of the Cairns DEM with the addition of a spatially autocorrelated disturbance field (ρ = 0.0, 0.12, 0.24, 0.249) and the raw DEM.

The Grid Cell Uncertainty Modelling approach

This research first adapts the Grid Cell Uncertainty Model (GCUM) (Hunter, Goodchild, & Robey, 1994), which was based on earlier work by Goodchild et al. (1992). It has been used previously to determine the uncertainty of slope and aspect estimates derived from spatial databases (Hunter & Goodchild, 1997) and for landslide susceptibility mapping (Murillo & Hunter, 1996). The application of the GCUM approach for flood risk modelling introduces a range of new issues and highlights its limitations, which are unique in relation to previous applications. The model is particularly suitable when no knowledge of input data errors is available. The GCUM applies 'noise' to a source DEM to simulate the error present in the elevations by randomly perturbing the elevation values to create a new realization of the DEM (Fig. 6).


The realization is then used in the storm surge inundation model, and new risk results are derived. Because the model is stochastic, multiple realizations are usually performed to generate a range of model outcomes. The realizations of the DEM must be sufficiently spatially autocorrelated to truly represent the spatial dependencies inherent in elevation. Simply applying random 'noise' based on a global root mean square error (RMSE) value would fail to represent adequately the error in elevations, as spatial autocorrelation exists between neighbouring elevation values. Spatial autocorrelation is a feature inherent in spatial data in which geographically proximal values are likely to be more similar than those further away. Random noise that neglects spatial autocorrelation would result in cells having immediate neighbours with extreme peaks and troughs, which rarely occurs in nature. The model therefore constrains the random perturbations by a spatial autocorrelation index (Cliff & Ord, 1981). The GCUM also requires the input of an RMSE for the DEM. This value can be obtained from the published global RMSE values supplied by the provider of contour and spot elevation data, from a detailed error analysis such as that presented earlier, or directly from surface interpolation algorithms such as ANUDEM, which provides a global RMSE value in its diagnostic files.

Previous applications of the Grid Cell Uncertainty Model were in mountainous and non-homogeneous terrain. The coastal region of Cairns, in contrast, has a wide range of terrain characteristics, from relatively steep slopes inland to flat coastal plains (see Fig. 2). In addition, in those applications the phenomenon being modelled, such as aspect or northness, was distributed across the entire study domain. The Cairns storm surge inundation results are based on a selected subset of the data, restricted to regions where buildings are located. This is almost exclusively in low-relief areas (<5 m AHD), which excludes a large proportion of the elevation data from the analysis. The flat and relatively homogeneous terrain leads to unique results, as a very small perturbation in elevation can include a large range of buildings as being inundated. Therefore, the variation in results for flood risk modelling under assumptions of uncertainty may be more extreme than that previously documented for environmental variables such as slope and aspect, or the other applications mentioned earlier. Coastal-zone risk modelling hence introduces new challenges for practitioners who attempt to integrate elevation uncertainty modelling techniques into their risk predictions.
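The simulation logic of the GCUM can be sketched as follows. This is only an approximation of the published method: the spatially autocorrelated disturbance field is produced here by smoothing white noise with a Gaussian filter and rescaling it to the target RMSE, rather than by the ρ-constrained procedure of Hunter, Goodchild and Robey (1994), and all variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gcum_counts(dem, rows, cols, floor_above_ground, surge_level,
                rmse, corr_cells=3.0, n_realizations=50, seed=0):
    """Count, per building, how often it floods over floor level across
    spatially autocorrelated realizations of the DEM (the attribute C).

    dem                : 2-D array of ground elevations (m, AHD)
    rows, cols         : grid indices of each building's cell
    floor_above_ground : floor height above ground (m) per building
    surge_level        : flatwater inundation level being tested (m, AHD)
    rmse               : target vertical error of the DEM (m)
    corr_cells         : smoothing length controlling spatial autocorrelation
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(rows), dtype=int)
    for _ in range(n_realizations):
        noise = gaussian_filter(rng.standard_normal(dem.shape), corr_cells)
        noise *= rmse / noise.std()        # rescale so the field has sd = RMSE
        realization = dem + noise          # one equally probable DEM
        floor_ahd = realization[rows, cols] + floor_above_ground
        counts += (surge_level > floor_ahd).astype(int)
    return counts  # C in [0, n_realizations] for each building
```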

The Standard Normal Probability methodology

The Standard Normal Probability (SNP) methodology assumes that the elevation errors in the DEM are normally distributed. A variation of this methodology has been previously documented in Eastman, Kyem, Toledano, and Jin (1993). Results are nearly identical to the GCUM model because the GCUM methodology generates an initial error matrix that is also normally distributed. The probability of any building being flooded above floor level, for a particular inundation level, can be determined by using the standard normal probability density function. Fig. 7 shows schematically how the probability (level of certainty) of inundation is calculated for each building using this function.


Fig. 7. Generalized methodology for implementing the Standard Normal Probability uncertainty model for flood risk.

The building floor height (metres above AHD) is taken as the mean, and the RMSE of the DEM (as derived from the error assessment or supplied by the data provider) as the standard deviation, of the normal probability density function for each building. To derive the probability of over-floor flooding for a particular building, the equation is solved using standard normal probability theory.

The SNP methodology has been implemented using a software program that exports the floor heights (relative to the AHD, in metres) and unique building identifiers from the GIS building database to a data file. An algorithm functioning external to the GIS (Perlman, 1998) evaluates the probability of flooding for each building, given an inundation level and an RMSE value representing the error approximation in the DEM. The probability estimates for each building are then returned to the GIS database using the unique building identifier and plotted to show the probability of flooding for a particular inundation level. The next section examines the implications of uncertainty modelling using both techniques for storm surge risk assessment in Cairns.
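In effect the SNP calculation reduces to one evaluation of the standard normal cumulative distribution per building: with floor height h (m, AHD) as the mean and the DEM RMSE σ as the standard deviation, the probability of over-floor flooding at inundation level L is P = Φ((L − h)/σ). A minimal sketch follows; it uses the error function in place of the external algorithm (Perlman, 1998) cited above, and the function and argument names are illustrative.

```python
import math

def snp_flood_probability(inundation_level, floor_height_ahd, rmse):
    """Probability of over-floor flooding under the Standard Normal
    Probability model: the floor height (m, AHD) is the mean and the DEM
    RMSE is the standard deviation of a normal distribution."""
    z = (inundation_level - floor_height_ahd) / rmse
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# e.g. a floor at 2.5 m AHD, an RMSE of 1.0 m and a 3.0-m surge:
# snp_flood_probability(3.0, 2.5, 1.0)  # about 0.69
```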

Results

The final output provided by the GCUM and SNP risk models is a probability of over-floor flooding for each inundation scenario and for each building, expressed as a value from 0 to 100%. Because the SNP methodology is an analytical statistical technique, theoretically all possible simulations are accounted for. In contrast, the GCUM requires a finite number of simulations, which are representative of the range of results possible. For Cairns, this consists of 50 unique realizations of the DEM. A new attribute C is created for each building in the GIS, where C is the count of the number of times the building is flooded above floor level for a particular inundation scenario. Where C approaches 50, a greater probability of flooding is expected for that building. Conversely, where C approaches 0, the likelihood of flooding is less.

Continuous-risk probability estimates can be re-classed into binary risk maps by selecting a threshold value below which flooding is classed as 'not likely'. This is problematic as selection of a cut-off point is relatively arbitrary and can result in inaccurate risk forecasts.


In addition, valuable data would be lost and such representations reinforce the perception of risk as an absolute quantity. In practical terms this would mean that risk managers could not prioritize evacuation zones. This research treats risk as a continuous phenomenon with inherent uncertainty, and hence each building contains a ratio-scale probability of over-floor inundation stored as an attribute for each inundation scenario. For graphic and cartographic simplicity, the continuous probabilities are re-classed into five ordinal classes of increasing probability.

For emergency management decision-making, results are presented as maps showing the spatial pattern of risk in Cairns. Fig. 8 shows the building inundation probabilities for selected inundation scenarios (1.0–8.0 m) using the GCUM. Owing to cartographic generalization, risk patterns are identical between the GCUM and the SNP methodology, and hence only the GCUM results are shown. When provided in a GIS, these maps act as look-up maps for risk management decision-making and, specifically, for assigning evacuation priorities.

Graphical representations highlight the subtle differences between both models and are used in the ensuing discussion (Figs. 9 and 10). Because it is impossible to verify the accuracy of these risk models without mapping an actual storm surge event in Cairns, comparing them highlights the range of risk results possible. The graphs also show the percentage of buildings inundated over floor level, using the simple inundation model that does not account for DEM error. This assumption of accuracy is implicit in most applications of a GIS for natural hazard risk assessment. For instance, DEMs are used to delineate inundation zones by identifying one critical contour to generate a final hazard or evacuation map. The inclusion of the uncertainty estimates provides some indication of the error bounds of our inundation model to assist with decision-making.

With reference to Figs. 9 and 10, the SNP model over-predicts the number of buildings flooded at low inundations. At a 1-m inundation, the SNP model shows that 40% of the buildings will be inundated at the lowest certainty level (0–20%). This over-estimation should be considered in the context of the tidal range in Cairns, which is ±1.78 m relative to AHD. Few buildings are located below this elevation and the over-estimation by the SNP model reflects its theoretical nature. In contrast, the GCUM model only identifies 10% of the buildings as being inundated at the low certainty level (0–20%). In these cases, the GCUM model is more representative of the likely consequences of flooding, as simulations are based on actual elevations derived from alternate DEMs rather than a probability distribution of error. Similar patterns are observed at the 80% probability level for both models. In both models, the high-certainty probabilities (80–100%) are first observed at a 3.5-m (above AHD) inundation, and further increases in buildings inundated are almost identical. The patterns at high certainties closely follow the pattern of the simple inundation model, with a consistent under-estimation of about 10% of buildings. As noted below, the critical results at medium to high inundations (>3.0 m) are those that are highly certain (>80% probability). From the perspective of evacuation planning and decision-making, the results from both models are effectively identical.
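The re-classing of the continuous probabilities into the five ordinal classes used for mapping can be done with a one-line binning step. A minimal sketch, assuming equal 20% class breaks (consistent with the 0–20% and 80–100% bands quoted above, although the exact breaks used for the published maps are not stated).

```python
import numpy as np

def ordinal_risk_class(probabilities, n_classes=5):
    """Re-class continuous over-floor flooding probabilities (0-1) into
    ordinal certainty classes 1..n_classes for cartographic display."""
    breaks = np.linspace(0.0, 1.0, n_classes + 1)[1:-1]   # 0.2, 0.4, 0.6, 0.8
    return np.digitize(probabilities, breaks) + 1         # classes 1..5

# Evacuation priorities can then be assigned from the highest class downwards.
```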


Fig. 8. Cairns central business district: probability of over-floor inundation for simulated surge events using the GCUM.


Fig. 9. Building inundation statistics for the GCUM.

Fig. 10. Building inundation statistics for the SNP model.


Discussion

For emergency management evacuation planning, three possible approaches exist for interpreting the uncertainty results. The first is the worst-case-scenario approach. This would consider any building scoring at least 1% as likely to have over-floor flooding for a particular storm surge. This approach ignores the results of the uncertainty analysis entirely. The second approach would consider only those buildings scoring a probability above some specified limit (for example, 80%) as buildings that would have over-floor flooding. Lastly, emergency managers may elect to prioritize buildings requiring evacuation on the basis of the uncertainty estimates. This approach may be useful where limited evacuation resources are available and buildings require evacuation priorities on some pre-determined basis. The probability estimate would then guide the evacuation decision-making process, as each probability estimate provides an indication of the number of buildings inundated.

The results of the risk modelling confirm that the major factor affecting the risk in Cairns is that urbanized communities have built in close proximity to the coast and on low-lying land. Although a probable maximum storm surge can result in catastrophic losses, similarly large losses are possible with smaller-magnitude events (<3.0 m). The uncertainty risk modelling methodology provides a practical and repeatable technique for prioritizing evacuation because relative risk can be mapped for individual surge scenarios. The sensitivity analysis approach has several advantages over risk maps that simply re-class DEMs into elevation zones.

• It allows emergency managers to visualize the consequences of under-estimating the storm surge height. This is important because storm surge predictions are progressively amended as the predicted coastal crossing time changes.
• It highlights critical threshold inundations that may require different risk management responses.
• It highlights the importance of model and database error in the risk analysis, and communicates this to risk managers.
• It quantifies and maps relative risk using a systematic and repeatable methodology.

The GCUM and SNP models of spatial uncertainty provide an approach for modelling, quantifying and visualizing the inherent uncertainty in digital elevation models and its significance for storm surge risk assessment. The GCUM technique is a more complex simulation technique that requires larger computing resources and is time-consuming to implement. This makes any real-time use of the model impractical. In contrast, the SNP technique is a statistical analytical technique that can be rapidly implemented on desktop computers and using relatively inexpensive GIS software. Such software and hardware are commonly used in local government in Australia. This means that the approach could be readily adapted for other coastal areas with storm surge risks and with adequate spatial databases to support such analysis. Advantages of the SNP methodology include:


• It can be implemented in vector-based GISs and on personal computers that are common in local government (for example, MapInfo, ArcView).
• It performs the inundation modelling using a numerical rather than a simulation modelling technique and hence it is relatively fast.
• It allows rapid sensitivity analysis to assess the influence of various model variables.
• The SNP does not require any estimation of DEM spatial autocorrelation, as is required for the GCUM methodology. This significantly reduces the computational overheads.

Because the SNP methodology can be implemented quickly and with minimal computer processing requirements, sensitivity analysis can be used when knowledge of the DEM RMSE is not available. For example, a range of RMSE values can be tested and the resulting inundation uncertainty mapped to identify where significant changes in inundation are observed. Such an approach can guide the future development of elevation models for coastal inundation risk assessment. Additionally, the speed gains allow an uncertainty model to be run in near-real time for rapid storm surge risk assessment.

In this particular implementation of the SNP methodology, the standard deviation is treated as the RMSE of the DEM where buildings are located (not the global RMSE). The standard deviation, however, could readily be the error term for any variable that affects the magnitude of over-floor storm surge inundation. Hence, it could be the error in the floor height measurements, or the error associated with storm surge levels (or wave heights) modelled using more sophisticated hydraulic models. It could also represent the cumulative uncertainty of these variables. Uncertainties can be combined by propagation procedures to create a new uncertainty estimate (Burrough, 1986; Despotakis, Giaoutzi, & Nijkamp, 1993); a simple combination sketch is given at the end of this section.

The GCUM methodology is a simulation technique for modelling and visualizing the uncertainty inherent in hazard risk models. From the results presented here, it can be seen that the methodology has a number of shortcomings.

• It is best implemented in a GIS that supports raster and vector data structures because it relies on Map Algebra (Tomlin, 1990) expressions to generate alternate realizations of the DEM. Such GISs can be prohibitively expensive for local government hazard risk management. The current implementation of the model requires workstation ARC/INFO and the GRID module (ESRI, 1997).
• The methodology is CPU intensive because numerous alternate realizations of the DEM are required. Large amounts of free disk space are also required owing to the size of the DEM. This becomes more important as increasingly high-resolution DEMs become available for larger study areas.
• Because some 100 initial realizations are required to determine a suitable ρ value prior to applying the GCUM model, large computing resources are required. In addition, identifying a suitable ρ value is a highly subjective process that is study-site specific. Because spatial autocorrelation statistics are rarely available for DEMs, this remains a necessary step in the GCUM methodology.


• Due to the above constraints, it is not feasible to perform rapid sensitivity analysis. To assess the role of DEM RMSE values on inundation risk maps (from 0.5 to 15.0 m AHD in increments of 0.5 m), more than 2 days of CPU time is required on a Sun SPARC 10 processor for the Cairns database (10 000 properties). The SNP approach can perform this analysis in less than 2 hours using the same computing resources.
• These computational constraints limit to 50 the number of realizations that are used to generate uncertainty estimates. This may be inadequate to generate a random sample of alternate DEM realizations. Computational overheads are significant barriers to implementing the methodology for other study areas or in real time.

The SNP method is an analytical technique based on normal probability theory. The GCUM, on the other hand, is a numerical technique that achieves similar results by simulation. Simulation techniques such as this commonly require more computational processing than analytical techniques. In addition, simulation techniques performed within a GIS incur greater computer processing overheads owing to the inherent spatial data structures of a GIS. An important factor that makes the SNP approach more practical is that all the inundation processing is performed using a stand-alone numerical algorithm, rather than the analytical capabilities of a GIS. The GIS simply provides the input data to the numerical model and helps in visualizing the final model results. GISs are often not suited to such numerical modelling as they rely on interpreted computer macro languages and relatively cumbersome spatial data structures.
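Where the standard deviation supplied to the SNP model is meant to represent the combined effect of DEM, floor-height and surge-level error, as suggested above, one simple assumption is that the error terms are independent and can be combined in quadrature. The sketch below encodes that assumption only; it is not the specific propagation procedures of Burrough (1986) or Despotakis et al. (1993).

```python
import math

def combined_rmse(*error_terms):
    """Combine independent error terms (e.g. DEM RMSE, floor-height survey
    error, modelled surge-height error, all in metres) into a single standard
    deviation by root-sum-of-squares, for use as sigma in the SNP model."""
    return math.sqrt(sum(e ** 2 for e in error_terms))

# e.g. combined_rmse(1.0, 0.15, 0.3)  # hypothetical DEM, floor and surge errors
```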

Implications for decision-making

This research has examined a number of the technical issues and the consequences of modelling uncertainty for flood risk models. The techniques examined provide practitioners with a sound and repeatable methodology for modelling and communicating the inherent errors in estimating flood risk. However, the inclusion of uncertainty estimates challenges common perceptions of risk. The following discussion examines the non-technical issues associated with uncertainty modelling, as these may be a greater barrier to the acceptance of these approaches than the purely technical modelling issues. Specifically, the inclusion of an uncertainty term in the risk model output results in risk being mapped along a continuum rather than as an absolute. In addition, results from this research suggest that a variety of risk interpretations are possible, which may ultimately lead to decision-making ambiguity. There may also be challenges associated with presenting such uncertainty-based risk results to risk managers, for two reasons. First, in an effort to model the consequences of error in risk modelling, additional levels of complexity are introduced into the decision-making process. Secondly, risk models that show spatial uncertainty may be too complex for interpretation by untrained individuals and may be ignored or misinterpreted.


Research in progress also indicates that risk managers actually demand temporal resolution and accuracy ahead of spatial resolution and accuracy to assist with decision-making. This issue itself challenges our common goal of improving spatial accuracy and resolution in natural hazard risk management (Zerger, 2002). The technical issues of modelling uncertainty have been described here in detail. Human and cognitive issues, however, may limit the acceptability of uncertainty-based risk results for natural hazard risk management and decision-making. Further research is required to examine the decision ambiguity that may arise from presenting uncertainty modelling results to risk managers and decision-makers. In addition, semi-structured interviews with key risk managers in Cairns have highlighted the common incompatibility between the scale and resolution of decision-making and the scale and resolution of spatial data. Addressing this disparity should be a key step for future spatial modelling that attempts to improve decision-making.

Even when the construct of spatial uncertainty is clearly understood by risk managers, including it in the decision-making process may be difficult because emergency managers seek clear and accurate guidelines rather than uncertain outcomes. At present, in communities such as Cairns, the risk management treatment is to 'over-evacuate'. Such decisions are based on 'precautionary principles' that have evolved in the absence of detailed inundation modelling. The uncertainty modelling methodologies discussed in this research have shown that in relatively homogeneous terrain such as that of Cairns, small changes in inundation can significantly alter building inundations. Therefore, these over-evacuations may indeed be sound risk management approaches to reducing risk, rather than attempting to define precise evacuation and safe zones.

Conclusion

The inclusion of uncertainty estimates in inundation risk models has advantages over presenting binary maps of risk. First, it incorporates measures of spatial uncertainty into the risk model, and data and model assumptions are made explicit to decision-makers and stakeholders. In other words, spatial uncertainty can be visualized in the final risk maps. Secondly, the techniques evaluated provide an indication of the relative risk between buildings because risk is mapped along a continuum rather than as an absolute. From a decision-making perspective, this has the benefit of rephrasing disaster policy in the context of probable maximum risk rather than probable maximum hazard.

Results confirm that the greatest changes in risk occur at relatively low inundations (<3 m above AHD). Although probable maximum storm surges may have dramatic consequences, such as complete building failure, results show that these would not be significantly greater than the consequences of relatively low inundations. Threshold inundations reflect the pattern of urbanization in coastal communities, where building is restricted to low-lying coastal land.


A better understanding of storm surge dynamics in urban communities would improve the accuracy of the risk models but, in the absence of these, ground elevations may prove to be an adequate surrogate. However, the adequacy of elevation data as a surrogate for risk can only be determined once detailed overland hydraulic storm surge models have been developed. From the Cairns case study, it is apparent that coastal zone risk assessment and evacuation planning place demands on data accuracy that are very different from those of other applications. Hence, future modelling of urban coastal risk must include estimates of model uncertainty in the final risk maps. Specifically, the SNP methodology could commonly be adapted where very accurate DEMs are unavailable and flood hydrologies are uncertain.

It is relatively common in aspatial statistical analysis to include estimates of model error and uncertainty. This may take the form of numeric and graphical representations of confidence limits, error bars, or the use of significance tests. It is indeed surprising that these commonly used statistical measures of uncertainty have not been regularly extended to the spatial domain. Flood risk modelling is a prime example of an application that requires better representation and communication of model uncertainties to decision-makers.

Acknowledgements

This research was funded by the Australian Organising Committee for the International Decade for Natural Disaster Reduction. Research was integrated within the Tropical Cyclone Coastal Impacts Project (TCCIP). Thanks are extended to Mr K. Granger (Australian Geological Survey Organisation), the Queensland Department of Emergency Services, Cairns City Council and Emergency Management Australia.

References

Burrough, P. A. (1986). Principles of geographical information systems for land resource assessment. Oxford: Clarendon Press.
Buttenfield, B., & Beard, K. M. (1995). Graphical and geographical components of data quality. In H. M. Hearnshaw, & D. J. Unwin (Eds.), Visualisation in geographic information systems (pp. 150–157). Chichester: Wiley.
Carter, J. R. (1992). The effect of data precision on the calculation of slope and aspect using gridded DEMs. Cartographica, 29, 22–34.
Cliff, A. D., & Ord, J. K. (1981). Spatial processes: models and applications. London: Pion.
Coppock, J. T. (1995). GIS and natural hazards: an overview from a GIS perspective. In A. Carrara, & F. Guzzetti (Eds.), Geographical information systems in assessing natural hazards (Vol. 6, pp. 21–34). Dordrecht: Kluwer.
Davis, T. J., & Keller, C. P. (1997). Modelling uncertainty in natural resource analysis using fuzzy sets and Monte Carlo simulations: slope stability prediction. International Journal of Geographical Information Science, 11, 409–434.
Despotakis, V. K., Giaoutzi, M., & Nijkamp, P. (1993). Dynamic GIS models for regional sustainable development. In M. Fischer, & P. Nijkamp (Eds.), Geographic information systems, spatial modelling, and policy evaluation. Berlin: Springer-Verlag.
Eastman, J. R., Kyem, P. A. K., Toledano, J., & Jin, W. (1993). GIS and decision-making. Geneva: United Nations Institute for Training and Research, Exploration in Geographic Information Systems Technology.


Emmi, P. C., & Horton, C. A. (1995). A Monte Carlo simulation of error propagation in a GIS-based assessment of seismic risk. International Journal of Geographical Information Systems, 9, 447–461.
EPA (1997). Guiding principles for Monte Carlo analysis. Washington, DC: US Environmental Protection Agency.
Epstein, E. F. (1991). Legal aspects of GIS. In D. J. Maguire, M. F. Goodchild, & D. W. Rhind (Eds.), Geographical information systems: principles and applications (Vol. 1, pp. 489–502). Harlow: Longman.
ESRI (1997). ARC/INFO. Redlands, CA: Environmental Systems Research Institute.
Gao, J. (1997). Resolution and accuracy of terrain representation by grid DEMs at a micro-scale. International Journal of Geographical Information Science, 11, 199–212.
Garvey, M. (1995). The use of geographic information systems to analyse wildfire threat. Fire Management Quarterly (Country Fire Authority of Victoria, Australia), 11, 2–8.
Goodchild, M. F. (1991). Issues of quality and uncertainty. In J. C. Muller (Ed.), Advances in cartography (pp. 113–139). Oxford: Elsevier Applied Science.
Goodchild, M. F., Guoqing, S., & Shiren, Y. (1992). Development and test of an error model for categorical data. International Journal of Geographical Information Systems, 6, 87–104.
Goodchild, M., Buttenfield, B., & Wood, J. (1995). Introduction to visualising data validity: visualisation in geographic information systems. In H. M. Hearnshaw, & D. J. Unwin (Eds.), Visualisation in geographic information systems (pp. 141–149). Chichester: Wiley.
Handmer, J. (1995). In T. W. Norton, T. Beer, & S. R. Dovers (Eds.), Risk and uncertainty in environmental management (pp. 44–62). Canberra: Australian National University.
Hunter, G. J., & Goodchild, M. F. (1997). Modeling the uncertainty of slope and aspect estimates derived from spatial databases. Geographical Analysis, 29, 35–49.
Hunter, G. J., Goodchild, M. F., & Robey, M. (1994). A toolbox for assessing uncertainty in spatial databases. In Proceedings of the 22nd annual conference of the Australasian Urban and Regional Information Systems Association Inc., Sydney.
Hutchinson, M. (1996). ANUDEM. Canberra: Centre for Resource and Environmental Studies.
Lodwick, W. A., Monsoon, W., & Svobodam (1990). Attribute error and sensitivity analysis of map operations in geographical information systems. International Journal of Geographical Information Systems, 4, 413–428.
Murillo, M. L., & Hunter, G. J. (1996). Evaluating uncertainty in a landslide susceptibility model. Paper presented at the 2nd International Symposium on Spatial Data Accuracy, Fort Collins, CO.
Murty, T. S. (1988). List of major natural disasters, 1960–1987. Natural Hazards, 1, 303–304.
Newkirk, R. T. (1993). Simulation for risk analysis: a challenge for GIS. In J. D. Sullivan (Ed.), International Emergency Management and Engineering Conference: Tenth Anniversary: Research and Applications; Proceedings of the 1993 Simulation Multiconference on the International Emergency Management and Engineering Conference (pp. 62–65). San Diego: Society for Computer Simulation.
Openshaw, S. (1989). Learning to live with errors in spatial databases. In M. Goodchild, & S. Gopal (Eds.), Accuracy of spatial databases. London: Taylor & Francis.
Perlman, G. (1998). Chisq.c. Unpublished MA dissertation, Wang Institute.
Rejeski, D. (1993). GIS and risk: a three culture problem. In M. F. Goodchild, B. O. Parks, & L. T. Steyaert (Eds.), Environmental modeling with GIS (pp. 318–331). New York: Oxford University Press.
Sanderson, B., Ming Tang, Y., Holland, Y., Grimshaw, R., & Woodcock, F. (1995). A tropical cyclone maximum envelope of waters (MEOW) technique. Melbourne: Bureau of Meteorology Research Centre.
Skinner, J. L., Gillam, M. E., & O'Dempsey, T. M. (1993). The new California? Demographic and economic growth in Queensland. In Catastrophe insurance for tomorrow: planning for future adversities (pp. 249–274). Griffith University.
Stoms, D. M., Davis, F. W., & Cogan, C. B. (1992). Sensitivity of wildlife habitat models to uncertainties in GIS data. Photogrammetric Engineering and Remote Sensing, 58(6), 843–850.
Tomlin, C. D. (1990). Geographic information systems and cartographic modeling. New Jersey: Prentice-Hall.


Wadge, G. (1988). The potential of GIS modelling of gravity flows and slope instabilities. International Journal of Geographical Information Systems, 2, 143–152.
Zerger, A. (1999). Digital elevation modelling for natural hazard risk assessment. Australian Journal of Emergency Management, 13(4), 61–65.
Zerger, A. (2002). Examining GIS decision utility for natural hazard risk modelling. Environmental Modelling and Software, 17, 287–294.