Environmental Modelling & Software 22 (2007) 1389–1391
www.elsevier.com/locate/envsoft
Preface
Modelling, computer-assisted simulations, and mapping of dangerous phenomena for hazard assessment

The topic of this special issue certainly deserves much more than the following few comments of introduction. Our remarks here are aimed at better defining the scientific framework within which research activities on this topic, and hence the papers collected here, have generally been conducted to date.

Evaluating the hazards posed by geological phenomena constitutes, within the broader framework of "prediction", one of the most significant challenges of modern scientific research. Indeed, there has always been a strong demand from government authorities for suitable and powerful tools for analysing and predicting hazardous situations for civil protection purposes. When surveying the variety of available techniques for hazard assessment, one could perhaps classify them solely on the basis of the type of phenomenon they describe (e.g. earthquakes, floods, hurricanes, landslides, volcanic eruptions, tsunamis, forest fires, and water/air pollution). Nevertheless, similarities in methodologies and general trends of conceptual evolution immediately demonstrate that many approaches of analysis have broader applications, which go well beyond the peculiarities of the specific type of phenomenon for which they were originally implemented.

In recent decades, increasingly powerful computing environments and software codes, combined with refined measurement techniques, have progressively become available. We are still "in the middle of the ford", well within an evolutionary phase which appears somewhat confused and must also deal with the wider topic of "complexity". The stakes involve the passage from a purely descriptive kind of science to a mainly predictive one, which is exactly what the public expects from the scientific community. In this context, the word "complexity" does not refer to algorithmic complexity, but rather to the notion of complex systems, i.e. aggregates made of different, generally non-linearly interacting parts, whose global evolution cannot be reduced to the sum of the elementary behaviours of their constituent parts. A system is complex with reference to an observer, who is able to distinguish the local from the global scale, and to measure the elementary interactions. Multiple levels of description can be selected for a given system, in typological, temporal- or spatial-scale terms, strictly related to the adopted observational and experimental choices (expressing the particular interest of the observer).
For instance, even the quite simple event of "an ant carrying food to the nest" may disclose complex behaviour once the level of description becomes so detailed that the influence of the chemical tracks left by other ants, and the relationship with the environment around the nest, are taken into consideration. One could even decide to go deeper, pondering the nuclear interactions within the body of the ant. Nevertheless, it seems quite obvious that the best scale for analysing the problem of "ant infestation" would probably be neither the latter, nor the over-simplified former one.

Typical properties of complex systems, even though not necessarily "general" ones, are: (i) the discrepancy between the severity of a perturbation and the related effects (i.e. a small perturbation can induce strong effects, and vice versa); (ii) the phenomenon of "emergence" (i.e. general properties of the system can be inferred, but cannot be discovered by simply analysing its elementary parts at a closer level of examination); and (iii) the fact that no substantial "reductionism" (simplification) is allowed: the "elementary processes", which need to be properly identified, must all be considered when predicting the evolution of the system.

Computer Science offers computational resources and methods that are potentially able to generate models, by means of a proper selection of the level of description of the phenomenon, and thus to perform simulations and predict the evolution of a complex system. However, scientific research has long focused (and to some extent still does) mainly on data acquisition and storage, generally developing qualitative analyses and classification schemes. These may yield an only apparent reduction of complexity, as they are mostly produced within mono-disciplinary contexts. More specifically, for risk evaluation to be usable, such a fundamental and massive scientific effort requires a strong qualitative undertone. In times of emergency, this approach allows the process to "go with the tide", by focusing on the general characteristics of other similar events and learning a posteriori lessons from empirical experience. This makes it possible to understand what must be corrected, and whether new considerations should be included.

An immediate and obvious consequence of this outlook is the application of (more or less refined) statistical methods to huge amounts of data, aiming at deriving probabilistic rules.
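As a purely illustrative sketch of what "deriving a probabilistic rule" may amount to in practice (the catalogue, the magnitude threshold and the Poisson assumption below are hypothetical choices, not taken from any of the studies in this issue), one may estimate an annual exceedance rate from an event record and convert it into a probability of occurrence over a given time horizon:

```python
# A minimal sketch of deriving a simple probabilistic rule from an event
# catalogue (hypothetical, synthetic data; not from any specific study).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical catalogue: event magnitudes observed over a 50-year record.
years_of_record = 50.0
magnitudes = rng.exponential(scale=1.0, size=120) + 4.0  # synthetic values

def exceedance_probability(threshold, horizon_years):
    """Poisson estimate of the probability of at least one event
    exceeding `threshold` within `horizon_years`."""
    annual_rate = np.sum(magnitudes >= threshold) / years_of_record
    return 1.0 - np.exp(-annual_rate * horizon_years)

if __name__ == "__main__":
    for m in (5.0, 6.0, 7.0):
        p = exceedance_probability(m, horizon_years=30.0)
        print(f"P(at least one event >= {m} in 30 yr) ~ {p:.2f}")
```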
This methodology represented a significant step forward: it applies a well-tested method, relies on computer support for long and complicated analyses, and implies the adoption of a multi-disciplinary approach. The correct application of such methods is not trivial: despite being limited to a probabilistic formulation, the statistical treatment of empirical data in fact allowed the derivation of a first predictive approach in certain well-defined sectors of study. It should be mentioned that, within certain limits, probabilistic analyses remain useful even in the case of substandard data, or when some data are lacking.

In parallel to statistical/probabilistic methods, another type of approach, with a strong basis in Physics, has developed. Its goals are typical of "hard science": predisposing highly predictive formal tools. This is certainly a difficult (perhaps even prohibitive) task if not properly supported by computers. On one side is the highly complex phenomenon; on the other, a physical-mathematical description based on space/time discretizations, using the methods of numerical approximation for partial differential equations (PDE). The assumption is that these formulations capture the essential characteristics of the phenomenon. In particular, the following questions should not be ignored:

1. Is the phenomenon, in its complexity, adequately described by the adopted system of differential equations? In general, the main problem is in fact to simplify the system of equations as much as possible, in order to solve it, rather than complicating it for the sake of a better physical description.
2. Are the approximations introduced by numerical methods for solving differential equations admissible for obtaining a correct result? Obviously, the problem of achieving "exact" analytical solutions remains practically unsolvable, with the exception of extremely simple and unrealistic cases.
3. Are the accuracy and quality of the available data sufficient for a correct result? For instance, in real cases the approximation of "uniform slope" is generally not truly justifiable.

A further point could provocatively be added here: what, in fact, should we assume to be a correct result?

The ultimate objective of modelling techniques is to obtain computer simulations, preferably supported by sophisticated graphics, so that the spatial/temporal evolution of the considered phenomena can easily be appreciated. An additional goal is to provide information useful for suitable risk-reduction interventions (i.e. by means of a priori simulations). Such techniques are often coupled with stochastic approaches in order to obtain statistical evaluations (e.g. for hazard mapping). Moreover, when developing "physical" models (i.e. models in which physical parameters are employed), the above three points turn out to be highly interlaced. However, achieving conclusive answers about the goodness of the adopted assumptions and choices is not trivial.
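By way of illustration only, the following sketch shows the kind of space/time discretization referred to above, applied to an intentionally over-simplified case (one-dimensional diffusion solved with an explicit finite-difference scheme); the coefficient, grid spacing and boundary treatment are arbitrary assumptions, and real hazard models involve far richer systems of equations:

```python
# A minimal sketch of an explicit finite-difference discretization of the
# 1-D diffusion equation  du/dt = D * d2u/dx2  (idealised, illustrative case).
import numpy as np

D = 1.0e-3             # diffusion coefficient (assumed)
dx = 0.01              # grid spacing (assumed)
dt = 0.4 * dx**2 / D   # time step chosen within the stability limit dt <= dx^2 / (2 D)

x = np.arange(0.0, 1.0 + dx, dx)
u = np.exp(-((x - 0.5) ** 2) / 0.005)   # initial condition: a localised "pulse"

for _ in range(500):
    # central difference in space, forward Euler in time
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u_new = u + dt * D * lap
    u_new[0], u_new[-1] = u[0], u[-1]   # simple fixed-value boundary conditions
    u = u_new

print("peak value after diffusion:", u.max())
```

Even in this toy case, the admissibility of the numerical approximation (point 2 above) is not automatic: the explicit scheme is stable only if the time step respects dt <= dx^2 / (2 D).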
Laboratory reproduction of a given phenomenon, under controlled conditions and in a simplified context, constitutes an important intermediate phase of analysis, which may furnish valuable hints about the limits of validity of a given model and allow a better calibration of its parameters. Nevertheless, a serious check of model performance against real cases (i.e. calibration through back-analysis) should never be ignored, as it is necessary to better understand the conditions and ranges of model applicability. Similarly, a proper validation of model performance against a significant number of study cases (properly chosen within a population of phenomena similar to those employed for calibration), as well as thorough sensitivity analyses, should always be carried out, especially when "general-purpose" PDE-numerical models are utilised.

Within the research context of complex systems, some alternatives to both PDE-numerical techniques and statistical methods have been developed. These are based on parallel computing and correspond to Cellular Automata (in a broad sense) and to Neural Networks.

Cellular Automata (CA) embed the possibility of describing a given phenomenon, at different levels of discretization of space and time, as a consequence of local interactions. These interactions are ruled according to elementary processes, which may be identified by ideally decomposing the real phenomenon. By properly combining the effects of these interactions, the overall behaviour of the phenomenon can be reproduced. Each local rule must obviously respect the classical conservation laws, even if, in such a discrete context, empirical simplifications may be necessary. This approach may represent a valuable advantage with respect to PDE-numerical methods, because CA allow for direct simplifications, even of quite complex relationships. The three points discussed above concerning statistical and PDE-numerical methods obviously also apply to CA numerical models, for which calibration and validation phases, and sensitivity analyses, are usually extremely important.
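A minimal sketch may help to fix ideas about such local rules. The automaton below is written for illustration only (it is not one of the CA models discussed in this issue): each cell of a square grid releases, at every step, a fraction of its "material" towards lower neighbouring cells over a fixed synthetic topography, so that the update is local, synchronous and mass-conserving.

```python
# A minimal, illustrative cellular-automaton step: local redistribution of
# "material" towards lower von Neumann neighbours over a fixed topography.
import numpy as np

def ca_step(height, material, damping=0.5):
    """One synchronous update of the automaton; returns the new material grid.
    Periodic boundaries (via np.roll) are used purely for brevity."""
    total = height + material                     # free surface in each cell
    outflow = np.zeros_like(material)
    inflow = np.zeros_like(material)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):   # the 4 neighbours
        neigh_total = np.roll(total, shift, axis=axis)
        drop = np.clip(total - neigh_total, 0.0, None)       # positive drop only
        flux = damping * np.minimum(material, drop) / 4.0    # share a fraction of it
        outflow += flux
        inflow += np.roll(flux, -shift, axis=axis)           # received by the neighbour
    return material - outflow + inflow            # local conservation of mass

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    height = rng.random((50, 50))                 # synthetic topography
    material = np.zeros((50, 50))
    material[25, 25] = 100.0                      # initial "release"
    for _ in range(200):
        material = ca_step(height, material)
    print("total material (conserved):", material.sum())
```

The global spreading pattern emerges solely from the repeated application of this local interaction, which is precisely the modelling attitude described above.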
Neural Networks can be classified into several types, some of which are quite dissimilar in their modalities of application. If well tuned, and if properly fed with suitable data, they have the ability to learn from their input. This may allow them to subsequently evaluate, from given outcomes (partial data), the probability of occurrence of a given event (even in terms of its characteristics). Essentially, they can be seen as a tool which projects much more broadly and selectively than purely statistical approaches. A relevant example of application of Neural Networks involves a crucial aspect of modelling complex systems: the calibration of model parameters. The values of parameters cannot always be determined merely from knowledge of the physical characteristics of the phenomenon, nor from material properties (e.g. by experimental measurements). In such cases, the value of an empirical parameter must be selected within a range of "reasonable" values, and is commonly determined through a proper calibration methodology.

Another important tool for model calibration is the use of Genetic Algorithms, a very efficient and successful optimisation methodology. In such a framework, the problem is to find the optimal set of model parameters, i.e. the set which best allows a given phenomenon to be simulated. The values of the model parameters can be seen as individuals of an evolving population, subjected to "selection" by means of a fitness function (based on each individual's performance). The algorithm iteratively generates sets of new individuals by applying the probabilistic operators of "crossover" and "mutation". In this way, it automatically finds an optimal solution to a given problem within a reasonable time.
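The following sketch illustrates such a calibration loop on a deliberately trivial stand-in simulator; the model, the parameter ranges, the population size and the operator rates are all hypothetical choices made only for the example.

```python
# A minimal, illustrative genetic-algorithm calibration of two empirical
# parameters against (synthetic) observations.
import numpy as np

rng = np.random.default_rng(1)

def model(params, x):
    """Stand-in simulator with two empirical parameters to be calibrated."""
    a, b = params
    return a * np.exp(-b * x)

x_obs = np.linspace(0.0, 5.0, 40)
true_params = np.array([2.0, 0.7])                  # unknown in a real case
observed = model(true_params, x_obs) + rng.normal(0.0, 0.02, x_obs.size)

def fitness(params):
    """Higher is better: negative root-mean-square misfit to the observations."""
    return -np.sqrt(np.mean((model(params, x_obs) - observed) ** 2))

bounds = np.array([[0.1, 5.0], [0.1, 2.0]])         # "reasonable" parameter ranges
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]    # selection: keep the best half
    # crossover: average randomly chosen pairs of parents
    pairs = rng.integers(0, 20, size=(20, 2))
    children = 0.5 * (parents[pairs[:, 0]] + parents[pairs[:, 1]])
    # mutation: small random perturbation, clipped to the admissible ranges
    children += rng.normal(0.0, 0.05, children.shape)
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("calibrated parameters:", best)
```

Because each candidate parameter set can be evaluated independently, the fitness computations lend themselves naturally to parallel execution, a point taken up below.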
Finally, with regard to computation, time, costs and complexity all emphasise the importance of parallelising both the simulation and the calibration phases. PDE-numerical methods and Neural Networks permit a partial parallelisation in many cases; Cellular Automata and Genetic Algorithms are always "outrageously" parallel, and thus provide high performance.

***

The papers collected in this special issue have been selected among the 28 contributions presented at Session NH11.01 "modelling, computer-assisted simulations and mapping of natural phenomena for hazard assessment" of the EGU (European Geosciences Union) 2nd General Assembly, held in Vienna, April 25–29, 2005, and among the 12 contributions presented at Session SE17 "modelling and simulation of volcanic surface flows, flood flows, debris flows, landslides, and other gravity currents: mitigation and hazard mapping" of the AOGS (Asia Oceania Geophysical Society) 2nd Annual Meeting, held in Singapore, June 20–24, 2005. They represent, in the opinion of the guest editors, a significant spectrum both of dangerous natural phenomena (indisputably, complex systems) and of different levels of computational approaches for modelling and hazard evaluation. Some of the studies have adopted innovative, even compound, methodological techniques.

In particular, the papers by Eliasson, Kjaran, Holm, Gudmundsson and Larsen, by Miyamoto, Komatsu, Baker, Dohm, Ito and Tosaka, and by Natale and Savi deal with flooding processes through PDE methods coupled with statistical techniques (except for the second, which is based on PDE alone). In contrast, the studies presented by D'Ambrosio, Iovine, Spataro and Miyamoto, and by Pirulli, Bristeau, Mangeney and Scavia, concern debris flows and rock avalanches, respectively, by means of CA (coupled with Genetic Algorithms for the sensitivity analysis) and PDE (coupled with laboratory experiments) approaches.
The papers by Georgoudas, Sirakoulis, Scordilis and Andreadis, and by Vicari, Alexis, Del Negro, Coltelli, Marsella and Proietti, are both based on CA methods, somewhat coupled with statistical techniques, for analysing earthquake processes and lava flows, respectively. The paper by Gruber and Bartelt applies a PDE-numerical model, coupled with statistical methods and GIS, to determine run-out distances and pressure maps of snow avalanches, and to delineate protective forests. Finally, Miyamoto, Rodriguez and Sasaki present a new Boundary Element Method (PDE) for computing surface topographic deformations produced by mantle convection.

The guest editors would like to thank the authors of the papers, the reviewers, the conference organising committees, the conference attendees, and the chief editor of the journal EMS for their indispensable support in realising this special issue.

Giulio Iovine*
CNR-IRPI, via Cavour 6,
87030 Rende (Cosenza), Italy
*Corresponding author. Tel.: +39 0984 835521; fax: +39 0984 835391.
E-mail address:
[email protected]

Salvatore Di Gregorio
Department of Mathematics & Center of High Performance Computing,
University of Calabria, 87036 Arcavacata di Rende (Cosenza), Italy
Tel.: +39 0984 496432; fax: +39 0984 496410.
E-mail address:
[email protected]

Michael F. Sheridan
Center for Geohazards Studies, 876 Natural Science Complex,
University at Buffalo, Buffalo, NY 14260, USA
Tel.: +1 716 645 6800x3904; fax: +1 716 645 3999.
E-mail address:
[email protected]

Hideaki Miyamoto
Department of Museum Collection Utilization Studies, The University Museum,
University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Tel.: +81 3 5841 2830; fax: +81 3 5841 8451.
E-mail address:
[email protected]

Available online 6 February 2007