CHAPTER 9
Challenges in seismology

Contents
9.1 Lithosphere–atmosphere–ionosphere coupling model (LAIC) 340
9.2 Predictability of earthquakes 346
References 349
From a geophysical point of view, seismology is a field with a wide spectrum of applications at many of the key scientific frontiers, from the Earth's dynamics to the prediction of hazards facing human communities. It is the scientific field concerned with the study of the Earth's elastic vibrations, the sources that generate them, and the structure through which they propagate. Seismology is supported by concepts of continuum mechanics, electromagnetism, and applied mathematics. Its natural laboratory is the Earth, and it uses modern communication systems to monitor ground motions with complex networks of sensors, which observe and analyze signals from natural and human-made energy sources distributed around the planet. Moreover, with the modern and powerful computational tools presently available, researchers are now able to analyze and quantify the sources of mechanical signals, determine the structures through which these signals propagate, and describe the processes originating the signals at all depths in the Earth's interior with much higher resolution than previously possible. For example, thanks to satellite monitoring and image-processing methods, it has been possible to image sources and structures by extracting coherent signals from what had previously been discarded as background noise. On the other hand, records of seismic data that have been collected over many years have also been analyzed using various tools. The results of these analyses have allowed us to distinguish between specific phenomena that frequently occur on the Earth, such as volcanic activity, pollution from heavy-particle dispersion in the atmosphere, hurricanes, climatic change, and underground nuclear tests, among many others. With respect to the specific case of earthquakes, there are many research groups around the world dedicated to studying seismic phenomena from different perspectives.
Motivated by the importance of these natural phenomena, a workshop on seismological research frontiers was held in Denver in September 2008, which was attended by more than 120 members of the seismological and geophysical research community. As one important result of this workshop, a report titled "Seismological Grand Challenges in Understanding Earth's Dynamic Systems" was published [1], which describes a long-term scientific plan for seismology.

Time Series Analysis in Seismology. https://doi.org/10.1016/B978-0-12-814901-0.00017-1. Copyright © 2019 Elsevier Inc. All rights reserved.
335
where promising research directions are considered for the next one or two decades. The published report summarizes the 10 Seismological Research Grand Challenge topics that were identified as fundamental:
1. The physics of fault failure. The stresses in the crust result from the relative motions of the Earth's plates, which occur mostly as slippage along faults. As a consequence of such slippages, earthquakes originate. An earthquake occurs when energy stored over long periods of time, from hundreds to thousands of years, is suddenly released as faults or subducting plates slip in a few seconds, radiating seismic waves. Global seismicity is largely concentrated in narrow bands that represent the plate boundaries. Three basic types of plate boundary are characterized by different modes of plate interaction due to the inner dynamics of the Earth: (1) Divergent boundaries, where two plates move apart and either new lithosphere is produced or old lithosphere is thinned. Mid-oceanic ridges and continental rifts are examples of divergent boundaries. (2) Convergent boundaries, where the lithosphere is thickened or consumed by sinking into the mantle. Subduction zones and alpine belts are examples of convergent plate boundaries. (3) Transcurrent boundaries, where plates move past one another without either convergence or divergence. Transform faults and other strike–slip faults are examples of transcurrent boundaries [2]. The speed of fault slip is an important parameter for characterizing seismic events. Recent observations reveal a rich variety of other fault slip behaviors: faults that creep silently without apparent resistance (giving rise to the so-called silent earthquakes), faults that slide sporadically, chattering as they go, and others that slide at super-shear velocities emitting seismic shock waves, which are potentially more dangerous because the released energy arrives suddenly.
Fortunately, research programs in seismology have made great progress in understanding how and when faults are likely to fail. Further development of the warning methodologies is necessary in order to build robust alert systems. The challenges include improved methods for the largest-magnitude earthquakes. Promising development is underway to map the distribution of slip on faults in real time using seismic and geodetic networks. Dense geophysical instrumentation in earthquake source regions, with rapid and robust telemetry, will also be needed.
2. Stress and rheology of the crust and mantle. Most features of the Earth's crust and interior can be explained as consequences of the relative motions of the plates, where the material can be present in different phases defined by physical parameters such as the temperature and viscosity. A complete understanding of large-scale phenomena within the Earth's crust and uppermost mantle would require taking into account the mass that is being moved and how it deforms, the link between the kinematics and strains, and the forces or stresses that shape our environment. As is well known, rheology describes the linkage between the stresses and strains, whose variability depends
on the temporal and spatial scales considered. Motions and strains can now be measured with great precision on many scales using vast networks of GPS receivers, strainmeters, and tiltmeters, while at present stresses can at best only be inferred. Knowledge of the lithospheric stresses is essential to understand the forces driving both plate-boundary and intraplate deformations, as well as the more localized redistribution of stresses that accompanies the earthquake cycle.
3. Physical coupling of the atmosphere–ocean–solid Earth systems. A new era of research has emerged at the interface of solid Earth geophysics, glaciology, oceanography, and atmospheric science, with a high potential for transformative scientific research and strong social relevance. Scientific challenges include improved understanding of the coupling mechanisms between ocean waves and seismic waves over a broad range of frequencies, the characterization of historical and present changes in ocean wave activity and their connection to climate change, the development and validation of new methods for quantifying ocean mixing and turbulence, glacial seismology, and infrasound and ionospheric waves.
4. Critical interfaces and seismic waves in the near-surface environment. The land surface is the critical interface between the solid planet and the hydrosphere, oceans, atmosphere, and biosphere. Therefore, the study of the Earth's surface is a crucial aspect of creating a sustainable environment for the development of life. Today, near-surface geophysics is on the verge of explosive growth because of the pressures being put on our environment. Seismology provides several imaging methods that work well in near-surface environments, including the use of refracted, reflected, and surface waves.
5. Distribution and circulation of fluids and volatiles in the Earth's interior. Water is of fundamental importance for the Earth's evolution, and also in a myriad of other ways. Water fundamentally affects the evolution of the continental and oceanic lithosphere by profoundly influencing geothermal heat flow and cooling, and contributes to processes that weaken faults, including the behavior of master faults at plate boundaries. Indeed, it is widely accepted that the fault-lubricating effects of water are a prerequisite for plate tectonics to occur. Water interacts geochemically with the Earth's silicate crust and mantle, and contributes to melting and volcanic processes. The melting of the mantle above subduction zones is induced by hydration, which lowers the melting temperature, and is the primary mechanism by which the mantle differentiates to create continental crust in volcanic arcs. Water-filled cracks constitute a principal structural component controlling the attenuation of seismic energy in the Earth. More generally, water undoubtedly affects the global rheology and dynamics of the mantle in a great number of ways. Many of these processes are active areas of current research.
6. Magma dynamics in the Earth's interior. Volcanic eruptions are some of the most spectacular and potentially dangerous geological events. Seismological monitoring is one of the primary ways of forecasting volcanic eruptions. An increase in micro-earthquake
activity and harmonic tremor, as moving magma changes the shape of the volcano and fractures the surrounding rock, often precedes eruptions by several days, providing some warning. Until now, eruption predictions have been based mostly on empirical methods, because there was not enough information serving as a guide to better understand the underlying physics. Therefore, many questions remain open: Why do some magmatic intrusions result in eruptions while others remain confined beneath the surface? How can we better predict the volume, intensity, timing, and style of an eruption? How do earthquakes and volcanic eruptions trigger each other? And how do volcanoes interact with each other, producing simultaneous eruptions? The deep plumbing system of volcanoes is poorly known. To improve the prediction methods, it would be necessary to study the physical changes that accompany eruptions along with better imaging of the interior of the volcanic systems.
7. Evolution and coupling of the lithosphere and asthenosphere. The lithosphere is a high-viscosity region that translates coherently on the Earth's surface. It is the mechanically strong outer shell of the Earth, which forms the tectonic plates. Its upper part consists of the crust and its lower part of the uppermost mantle. It varies in thickness from 0 km at mid-ocean ridges to perhaps 250 km under cratons – the ancient and relatively undeformed hearts of continents. Below the lithosphere is the asthenosphere, a region of relatively low strength that may be nearly decoupled from the lithosphere. The asthenosphere behaves like a viscous fluid on relatively short time scales (i.e., 10^4 yr) [2].
Tectonic plate motions reflect the dynamical contributions from subduction processes (i.e., classical "slab-pull" forces) and lateral pressure gradients within the asthenosphere (i.e., "asthenosphere-driving" forces), which are different from the gravity forces exerted by elevated mid-ocean ridges (i.e., classical "ridge-push" forces) [3]. The physical interaction of the convecting asthenosphere and the highly viscous lower lithosphere determines the transmission of plate-driving stresses from the mantle to the plates, and controls the stress state of deep cratonic roots, which is important for models of cratonic diamond formation [4]. Many gaps in our understanding of the evolution and structure of the Earth's lithosphere and lithosphere–asthenosphere boundary still remain to be filled. Some of these gaps are reflected in the following questions: (a) What is the nature of cratons? How did they form? What is their composition? Why did they stabilize? And how stable are they over time? Can we image cratons and compositional variations within them seismically? Understanding this will be fundamental to understanding plate tectonics within the early Earth, as well as whether cratonic crust is destroyed over time; (b) How do preexisting structures such as ancient faults or sutures affect modern-day deformation?; (c) What aspects of melting, grain-scale processes, and rock-scale processes cause velocity anisotropy? And how can we use this to deduce the flow and strain state of the lithosphere and asthenosphere?; (d) What exactly is the asthenosphere? Why is it weak? And why is it low velocity? What is the lithosphere–asthenosphere boundary?;
(e) Where and when do small-scale convection and lithospheric delamination occur? Can we use seismically imaged features to deduce how the crust evolved? Where does convection occur in the oceans, and does it relate to surface features? What is the role of water, other volatiles, and composition in modulating the stability and instability of the lithosphere?; (f) How are continental crust and lithosphere built over time? How deep do the boundaries associated with accreted terranes extend?; and (g) How is the lithosphere rejuvenated? Are there pyroxenite veins or "plumes" of eclogite throughout it and the asthenosphere?
8. Dynamical systems at plate boundaries. The Earth's outer layer is continually deforming at the boundaries of tectonic plates. The majority of earthquakes and volcanoes occur in these regions and are the violent response of the Earth to plate boundary stresses. Mountain belts are pushed up, and old oceanic crust is pulled down into the Earth's interior. New plate is continually formed through volcanic processes at mid-ocean ridges, compensating for plate destruction elsewhere. Not all boundaries create or destroy plates. Plate tectonics explains why and how deformation is focused at the boundaries, but as a kinematic theory it does not help us explain what happens as we move away from the plate-bounding fault. A grand challenge is to quantify and then explain the deformation, exhibited as earthquakes, slow slip, and creep, which takes place on the network of faults that extends away from the main boundary faults. These broad regions of deformation occur on the continents, as well as beneath the oceans.
9. Thermo-chemical structures and dynamics of the mantle and core.
The large-scale three-dimensional structure of the deep mantle is now quite well known. It is characterized by two very large low-velocity regions (one under Africa and the other under the central Pacific) surrounded by faster material, suggesting intermittent mass transfer between the upper and lower mantle. The large low-velocity structures, usually (but probably inaccurately) referred to as "superplumes", are remarkable in many ways. While they are slow features in both compressional and shear velocity, the relative behavior of shear to compressional velocity is quite anomalous, with much larger shear anomalies than would be expected. This observation is inconsistent with the "superplumes" being caused solely by lateral variations in temperature inside the Earth, and we must appeal to other causes such as chemical and/or phase heterogeneity. This conclusion is also consistent with remarkable observations of very strong lateral changes in heterogeneity at the edges of the superplumes which, again, could not be generated solely by thermal effects. There is also increasingly strong seismic evidence that at least parts of the superplumes are denser than their surroundings, which is completely contrary to the usual notion of plumes as locations of light material rising from a bottom thermal boundary layer. More detailed seismological studies of parts of the lowermost mantle have revealed an enormous variety of fascinating structures, including regions with extremely low seismic velocities a few tens of kilometers thick situated right at
the core–mantle boundary (perhaps clustered close to the edges of superplume regions). In summary, this brief narrative should demonstrate that our understanding of the deep Earth depends on advances not just in seismology, but also in mineral physics, geodynamics, and geochemistry.
10. Earth's internal boundary layer processes. The emergence of a global distribution of seismograph stations has made it possible to discover the crust–mantle, outer-core, and inner-core boundaries from observations of reflected and refracted P and S waves. The growth of networks and advances in computation and modeling have led to the discovery of boundaries of composition and phase in the mantle and the inner core. Physical properties at these boundaries, determined from modeled seismic waves, provide images of the temperature and composition of the deep Earth. These images have been fundamental for understanding Earth's early evolution from gravitational accretion and differentiation, as well as its future evolution driven by cooling and radiogenic heating. The forefront of research now lies in mapping the three-dimensional topography and sharpness of Earth's boundaries. Important advances in full waveform imaging in three dimensions have shown that combined P and S waveform data over broad angles of incidence make it possible to identify whether a reflector is a boundary in composition, solid-state phase, or a combination of the two. By exploiting advances in computational and experimental mineral physics in predicting the phase boundaries of minerals as a function of pressure and temperature, seismology may be able to provide an accurate temperature profile of the Earth. Examples of recent efforts in this direction are the software distributed through projects such as the open-access seismic data repository of the Incorporated Research Institutions for Seismology (IRIS) and its Data Management System (DMS).
9.1 Lithosphere–atmosphere–ionosphere coupling model (LAIC)
As a consequence of the damage and deaths caused by large earthquakes that have struck towns and big cities, many efforts are being devoted to obtaining reliable mechanisms of seismic prediction. Since the last decade of the 20th century, the seismic phenomenon has been investigated taking into account not only factors of mechanical origin, such as the relative movements between tectonic plates occurring at faults or subduction zones and measured quantities such as the magnitude of earthquakes, but also the effects of electromagnetic fields, which have allowed the development of alternative lines of research directed toward the short-term prediction of earthquakes. Thanks to advances in communication technology (for example, satellites and GPS), electromagnetic, thermal, and other natural effects can be observed from the underground up to the ionosphere. The analysis of the observational data sets has shown important correlations with earthquake occurrence. Since then,
many conferences and scientific reports have shown that there exists a relationship between the electromagnetic fields produced by the electric properties of the lithosphere, atmosphere, and ionosphere, which have proven to be very sensitive to seismic effects. On the other hand, these fields have played an important role in our better understanding of the Earth's dynamics, as well as of their importance for seismic phenomena. Therefore, the monitoring of the electromagnetic perturbations associated with earthquakes has attracted attention as a promising candidate for short-term earthquake prediction [5]. The conventional prediction of earthquakes was based on the statistical analysis of measurements of crustal movements (magnitudes). Nevertheless, it has been concluded that this approach is not so useful for earthquake prediction because the statistical analysis is performed after the occurrence of an earthquake. At present, many efforts are directed at finding efficient mechanisms to reduce the forecast time to timescales of hours, days, or perhaps weeks, with the purpose of avoiding or reducing the deadly disasters provoked by strong earthquakes. However, in spite of the importance of short-term earthquake prediction, today it remains far from being realized, due to the complex interaction between the natural factors and the processes occurring in the Earth. New improvements in this direction are based on the use of observational results to construct models which are under development and supported by large projects worldwide. In 2004, Molchanov et al. [6] wrote that, in order to improve our understanding of the seismic phenomenon, a major interdisciplinary effort would be necessary, going beyond the conventional ground-movement analysis, which offers only macroscopic information about an earthquake after its occurrence.
The development of a prediction scheme based on multi-premonitory phenomena means considering the relationship between the several acting elements, for example, the identification of the near field of a potential future focal zone, as well as monitoring the electrical, magnetic, acoustic, seismic, and thermal precursors simultaneously and continually. Later on, Pulinets [7] conceived the first ideas that gave rise to the Lithosphere–Atmosphere–Ionosphere Coupling (LAIC) model, which explains how the seismo-ionospheric variations are generated by natural factors such as radioactivity, aerosols, and atmospheric electricity, among others. In particular, this model takes into account that some precursory anomalies can appear in the atmosphere and the ionosphere during the preparation phase before a large earthquake [9]. As the lithosphere–ionosphere coupling is activated by electric and magnetic fields, the presence of high-conductivity structures in the Earth's interior can be detected by the well-known magneto-telluric subsurface sounding methods, which use the natural ionospheric currents as time-varying sources. In this regard, Vanhamäki et al. [10] observed that ionospheric currents induce currents in oceanic areas which can influence the ionospheric magnetic fluctuations. Nevertheless, such phenomena occur only offshore. Supported by these results, Enomoto [11]
Figure 9.1: (A) Model of seismic L–I magnetic induction coupling for a strong offshore earthquake and (B) the equivalent electric circuit. Figure taken from [11].
proposed a mechanism for the lithosphere–ionosphere coupling by means of a fault physical model of magnetic induction, which considers the interaction between earthquake nucleation and deep Earth gases before the occurrence of large offshore earthquakes (see Fig. 9.1A). As shown in Fig. 9.1B, Enomoto's model was simplified as an equivalent electric circuit, where the telluric current i is expressed by the equation

i = I R_I / (R_C + R_S + R_I) = I / (R* + 1),   (9.1)

with

R* = (R_C + R_S) / R_I,   (9.2)

where each resistance is given by R = ρL/A, with ρ the resistivity, L the length of the current path, and A its cross-sectional area. The indices I, C, and S denote the current source, crust, and sea, respectively. For more details on the model, the interested reader is referred to the review [11].
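The equivalent-circuit relation is simple enough to evaluate directly. The sketch below shows how the telluric current follows from Eqs. (9.1)–(9.2); all numerical values are illustrative placeholders, not parameters taken from [11]:

```python
# Sketch of Enomoto's equivalent electric circuit, Eqs. (9.1)-(9.2).
# All numerical values below are illustrative placeholders, not data from [11].

def resistance(rho, length, area):
    """Electric resistance R = rho * L / A of a conducting path."""
    return rho * length / area

def telluric_current(I, R_I, R_C, R_S):
    """Telluric current i = I * R_I / (R_C + R_S + R_I) = I / (R* + 1)."""
    R_star = (R_C + R_S) / R_I        # Eq. (9.2)
    return I / (R_star + 1.0)         # Eq. (9.1)

# Hypothetical source current (A) and path parameters (ohm m, m, m^2).
I = 1.0
R_I = resistance(rho=1.0e3, length=1.0e4, area=1.0e6)   # current source
R_C = resistance(rho=1.0e2, length=5.0e4, area=1.0e6)   # crust
R_S = resistance(rho=0.3, length=1.0e5, area=1.0e6)     # sea water
print(telluric_current(I, R_I, R_C, R_S))  # fraction of I flowing as telluric current
```

Note that, as Eq. (9.1) makes explicit, the telluric current approaches the source current I when the crust and sea resistances are small compared with that of the source, i.e., when R* tends to zero.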
In passing, we recall that the famous M = 7.3 Kobe earthquake that occurred on January 17, 1995, in Japan [12] drastically changed our way of thinking about the prediction of earthquakes, so that short-term earthquake prediction has become the target to reach. The Kobe
earthquake left 6,434 people dead; about 4,600 of them were from Kobe [13]. As a consequence, research programs directed at analyzing the observations of the diverse effects associated with earthquakes started to appear. There is reported evidence that, prior to an earthquake, electromagnetic activity does take place over a wide frequency range (see [5] and references therein). Faced with such evidence, the electromagnetic effects (among others, such as the thermal effects) have been considered an effective natural property associated with earthquake occurrence. This explains why the electromagnetic effects have attracted much attention as a promising candidate for short-term earthquake prediction. According to Hayakawa [5], the electromagnetic methods for earthquake prediction can be classified into two categories:
1. The detection of radio emissions from the earthquake hypocenter (or epicenter).
2. The detection of indirect earthquake effects taking place in the atmosphere and ionosphere by means of preexisting radio transmitter signals (called "radio sounding").
Research on the electromagnetic effects seems to have reached a consensus that the ionosphere is unexpectedly sensitive to seismic effects. For this reason, subionospheric VLF/LF propagation has been extensively studied [5,14,15]. Supported by observational results, Molchanov et al.
[6] identified the general mechanisms that drive the preseismic phenomena in the atmosphere and ionosphere, which are summarized in the following points: (1) Upward migration of fluid substrate matter (bubbles) can lead to the ousting of hot water/gas near the ground surface and cause an earthquake itself in the strength-weakened area; (2) thus, the time and place of the bubble appearance could be random values, but earthquakes, geo-chemical anomalies, and foreshocks (seismic, SA, and ULF electromagnetic signals) are causally connected; (3) atmospheric perturbations of the temperature and density could follow the preseismic hot water/gas release, resulting in the generation of atmospheric gravity waves (AGW) with periods in the range of 6–60 min; (4) seismo-induced AGW could lead to modification of the ionospheric turbulence, to changes of over-horizon radio-wave propagation in the atmosphere, perturbation of LF waves in the lower ionosphere, and ULF emission depression at the ground. The supposed development of the earthquake preparation process is shown schematically in Fig. 9.2. To construct the model, many observational data sets were collected on the ground and from satellites. For a detailed description, the model considers the following five aspects:
(a) Seismo-induced modification of the VLF and LF subionospheric signals;
(b) Geo-chemistry and water table variations;
(c) Ground surface thermal variations observed from satellites;
(d) Modification of ionospheric turbulence;
(e) Depression of ULF noise from the magnetosphere and ionosphere.
Figure 9.2: Scheme showing the stages of earthquake preparation: (A) appearance of an ensemble of small bubbles beneath the lithosphere as a perturbation of the heat flow from the interior; (B) preseismic stage: entrance of the bubbles into the crust, their merging, appearance of temperature and density perturbations near the ground surface, and weak foreshock activity inside the crust; and (C) near-seismic stage and main shock: further merging of the bubbles in selected areas, intensification of SA and ULF magnetic field foreshocks, and eruption of large bubbles after upward migration in the strength-weakened site with creation of the main shock. Figure taken from [6].
On the other hand, Pulinets [7] realized that ionospheric and thermal anomalies are coupled through the ionization process produced by radon. This picture was clarified after a careful analysis of the atmospheric and ionospheric fluctuations observed during the Colima earthquake that occurred on January 22, 2003, in Mexico [16]. His analysis made it possible to develop the physical mechanism of the thermal anomalies monitored before strong earthquakes [17]. The atmosphere–ionosphere coupling mechanism [18], as well as the anomalous electric field generation before strong earthquakes, was thereafter reconsidered [7]. The physical nature of the observed thermal anomalies before an earthquake is related to gas emissions. Latent heat release and thermal anomalies are linked by a common chain of processes responsible for the ionization of atmospheric gases. This ionization is provided by α-active radon released over active tectonic faults and tectonic plate borders. In what follows, we reproduce the observations of Pulinets [7] associated with earthquakes:
1. Radon is emanated from the Earth's crust continuously, even without the occurrence of earthquakes. What we observe as a precursory process is the deviation of the radon emission intensity from its undisturbed state.
2. The energetic effectiveness of the precursory process is very high. The ratio of the thermal energy released in the form of latent heat to the energy spent on ionization is within the range from 10^4 to 10^8.
3. The spatial pattern of the thermal anomaly distribution clearly demonstrates that the observed process is connected with the tectonic activity. The satellite images distinctly show the activation of tectonic faults and the increase of heat release at the borders of the tectonic plate where the epicenter of the impending earthquake is located.
4. The anomalous radon activity stops immediately (within a few days) after the main shock.
Figure 9.3: Schematic presentation of the LAIC model. Figure taken from [8]. Reproduced with permission from Elsevier.
A schematic presentation of the LAIC model is given in Fig. 9.3. Conditionally, the model can be divided into several branches: left – thermal, middle – clouds, right – ionosphere. All of them share a common root: the radon emanation and the resulting air ionization. Although the radon release before the occurrence of earthquakes is still an open question, radon effects are registered regularly, and the question of the source of the radon variations should be directed
to seismologists. The main advantage of Pulinets' conception is that most of the phenomena observed before an earthquake can be linked by the same physical mechanism. This is so because there is a synchronization in time and space of the atmospheric and ionospheric effects. On the other hand, thermal effects, including outgoing longwave radiation (OLR) anomalies, and ionospheric anomalies were observed after the Chernobyl atomic power plant catastrophe. Ionospheric anomalies were also observed after the Three Mile Island atomic reactor accident. OLR and ionospheric anomalies are commonly observed by satellites over hurricanes, while the formation of clouds by air ionization, as produced by cosmic rays, is a well-established and commonly accepted phenomenon. In the earthquake case we deal with natural ground radioactivity, but other sources of ionization give similar effects. The LAIC model provides new information for modern seismo-tectonics, which is listed as follows:
1. The earthquake preparation process manifests itself in a wide area, which for strong earthquakes (M > 6.0) involves the whole tectonic plate where the earthquake source is located.
2. The tectonic activation is observed to involve the borders of tectonic plates and different faults.
3. The activation is very dynamic in time and changes from day to day.
4. Anomalies are observed not only in the vicinity of the future epicenter but also quite far from its position.
5. Radon activity could be monitored from space using images of the thermal anomalies.
6. All kinds of anomalies are equally observed for different types of earthquake zones (subduction, intraplate, etc.).
7. Anomalies are also observed over the ocean surface.
Important precursory effects of LAIC before large earthquakes can be detected in the ionosphere by ground-based observational systems like ionosondes and GPS (Global Positioning System)/GNSS (Global Navigation Satellite System) receivers.
However, it must be noted that the relationship between ionospheric anomalies and electromagnetic signals generated by the earthquake preparation process is still controversial and highly debated, as demonstrated by the large number of papers reporting re-analyses of data and comments aiming to refute evidence of this correlation.
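As a toy illustration of the kind of data screening involved in the radio-sounding approach, the sketch below flags total electron content (TEC) samples that leave a sliding median ± k·IQR band. The detector, its window length, its threshold, and the synthetic series are all illustrative assumptions; operational studies use far more careful statistics and must account for geomagnetic and solar-driven variability.

```python
# Hypothetical anomaly screen for an ionospheric TEC time series: flag samples
# departing from the trailing-window median by more than k interquartile ranges.
# Window, threshold, and data are assumptions for illustration only.
import statistics

def tec_anomalies(series, window=27, k=3.0):
    """Return indices t where series[t] departs from the median of the
    preceding `window` samples by more than k times their interquartile range."""
    flags = []
    for t in range(window, len(series)):
        past = sorted(series[t - window:t])
        median = statistics.median(past)
        iqr = past[(3 * len(past)) // 4] - past[len(past) // 4]
        if iqr > 0 and abs(series[t] - median) > k * iqr:
            flags.append(t)
    return flags

# Synthetic daily TEC values (TEC units): a quiet quasi-periodic background
# plus one injected positive anomaly.
tec = [20.0 + 0.1 * (t % 5) for t in range(60)]
tec[45] = 35.0
print(tec_anomalies(tec))  # only the injected spike at index 45 is flagged
```

Such a screen by itself says nothing about a seismic origin; as noted above, separating earthquake-related ionospheric anomalies from other sources of variability is precisely the controversial step.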
9.2 Predictability of earthquakes
Several statistical methods have been applied to the analysis of seismological data (mainly catalogues) with the main purpose of improving our knowledge of seismic phenomena. Nevertheless, despite the lack of success in obtaining reliable methods for predicting large earthquakes, research on earthquake prediction continues on the side of monitoring networks and data. Although predicting earthquakes remains an open problem, there has been important
progress in improving our knowledge of earthquake rupture physics and our understanding of the dynamical evolution of stress in tectonic plates. At present, the scientific community is involved in global projects to test and evaluate the performance of some well-established algorithms in different tectonic environments (see the websites http://www.cseptesting.org/ and http://www.corssa.org/). Among the initiatives to prevent earthquake disasters are the so-called "seismic alarms", prediction methods that require specifying the place and time of earthquakes above some minimum magnitude. In the search for precursory signals that indicate an impending earthquake in a given space-time window, several algorithms have been designed for earthquake prediction. One of the most reliable is the so-called M8 algorithm. This intermediate-term algorithm was designed to predict earthquakes of magnitude M > 8.0 [19–21]. Since 1986, the M8 algorithm has also been applied to predict smaller earthquakes, down to M = 5.0, using local seismic data sets [22]. The algorithm requires the knowledge of earthquake populations [23]. In order to count the number of earthquakes, a magnitude cut-off must be chosen. That is, the M8 algorithm does not use a fixed magnitude cut-off; rather, it takes as input the number of earthquakes desired for the calculation. It selects two populations of main shocks. The first one, CAT20, is defined by the magnitude cut-off that yields an average of 20 earthquakes per year. Thus, to analyze a 30-year interval, the algorithm selects a magnitude cut-off that provides 600 earthquakes. Usually the selected cut-off will yield somewhat more than 600 earthquakes. If there are fewer than 20 earthquakes per year, a decision must be made whether to continue or terminate the analysis. A second population, CAT10, is selected by using the magnitude cut-off that produces 10 earthquakes per year.
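The cut-off selection just described can be sketched as follows. This is a minimal illustration, not the official M8 code; the function name and the flat-array representation of the catalogue are assumptions of this sketch.

```python
import numpy as np

def magnitude_cutoff(magnitudes, years, target_rate=20.0):
    """Find the magnitude cut-off whose events average at least
    `target_rate` earthquakes per year over a catalogue of length
    `years`.  Illustrative sketch only, not the official M8 routine."""
    mags = np.sort(np.asarray(magnitudes, dtype=float))[::-1]  # largest first
    n_target = int(round(target_rate * years))                 # e.g. 20/yr * 30 yr = 600
    if len(mags) < n_target:
        # fewer than target_rate events per year: the analyst must decide
        # whether to continue or terminate the analysis
        raise ValueError("catalogue too sparse for the requested rate")
    # The cut-off is the magnitude of the n_target-th largest event; since
    # magnitudes are discretized, ties usually make the selection yield
    # somewhat more than n_target earthquakes, as noted in the text.
    cutoff = float(mags[n_target - 1])
    n_selected = int(np.sum(mags >= cutoff))
    return cutoff, n_selected
```

Calling it with `target_rate=20.0` builds a CAT20-style population; `target_rate=10.0` gives the CAT10 analogue.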
In brief, the algorithm is based on the following seven functions of the seismicity data:
• F1(t) counts the number of earthquakes from the first population, CAT20, in the preceding six years.
• F2(t) is the same count as F1(t) for the second population, CAT10.
• F3(t) = F1(t) − F̄1(t), where F̄1(t) is the average number of earthquakes per six years in the interval from the beginning of the analysis, t0, to t − 6 years. The measure is taken on the first population, CAT20.
• F4(t) is the same as F3(t) measured on the second population, CAT10.
• F5(t) is a magnitude-weighted sum of main shocks divided by the number of these events to the 2/3 power, Σj 10^(0.46 Mj) / N^(2/3). The summation is taken over the preceding six years from a third population, CAT20a = CAT20 − {events with magnitudes Mj above M0 − 0.5}.
• F6(t) is the same as F5(t) calculated on a fourth population, CAT10a = CAT10 − {events with Mj above M0 − 0.5}.
• F7(t) is based on the number of aftershocks.
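As a rough illustration, the first and third of these functions might be computed from a list of main-shock occurrence times (in years) as below. The function names and the uniform rescaling used for the past average are assumptions of this sketch, not the official M8 definitions.

```python
import numpy as np

def f1(event_times, t, window=6.0):
    """F1(t): number of main shocks (e.g. from CAT20) in the
    `window` years preceding time t."""
    times = np.asarray(event_times, dtype=float)
    return int(np.sum((times > t - window) & (times <= t)))

def f3(event_times, t, t0, window=6.0):
    """F3(t): F1(t) minus the average six-year count over [t0, t - 6 yr].
    The average is estimated by rescaling the total past count to a
    six-year window -- an assumption of this sketch."""
    times = np.asarray(event_times, dtype=float)
    span = (t - window) - t0                       # length of averaging interval
    n_past = np.sum((times >= t0) & (times <= t - window))
    avg_per_window = n_past * window / span        # mean count per six-year window
    return f1(event_times, t, window) - avg_per_window
```

F2(t) and F4(t) are the same computations applied to the CAT10 population.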
It is worth mentioning that, from a computational point of view, the M8 program allows a rather broad variation of the parameters, including the set of measures, two different shapes of the area of investigation, the size of the areas, and other constants of the M8 algorithm. Therefore, the M8 program should be considered as an exploratory tool. To use the M8 algorithm, and for more details about it, the reader is referred to the M8 user manual [23]. On the other hand, the Regional Earthquake Likelihood Models (RELM) project [24], supported by the Southern California Earthquake Center (SCEC) and the US Geological Survey (USGS), was created to establish a facility for the prospective testing of scientific earthquake predictions in California, and a number of experiments are underway (see [25] and references therein). Schorlemmer et al. [26] describe the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests are to evaluate physical models for earthquakes, to ensure that the source models used in seismic hazard and risk studies are consistent with the earthquake data, and to provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions, thereby developing a statistical method for testing earthquake likelihood models. They also describe the theory of the RELM group (http://www.relm.org) and its testing procedure for grid-based probabilistic earthquake forecasts, whose mathematical part is fairly straightforward. It is only a generic description, and its implementation depends on the goals one wants to achieve with the test. A number of free parameters in the testing have to be specified, including the classes of models, the testing area and grid, the declustering, etc.
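The mathematical core of such a grid-based test is indeed simple: each space-magnitude cell is treated as an independent Poisson variable whose rate is given by the forecast, and the joint log-likelihood of the observed counts is the test statistic. A minimal sketch, assuming strictly positive forecast rates; the function name is illustrative and this is not the CSEP reference implementation:

```python
import math
import numpy as np

def poisson_log_likelihood(forecast, observed):
    """Joint log-likelihood of observed earthquake counts per grid cell,
    with each cell an independent Poisson variable at the forecast rate.
    Forecast rates are assumed strictly positive."""
    lam = np.asarray(forecast, dtype=float)
    n = np.asarray(observed, dtype=float)
    # log P(n | lam) = -lam + n*log(lam) - log(n!)
    log_fact = np.array([math.lgamma(k + 1) for k in n])
    return float(np.sum(-lam + n * np.log(lam) - log_fact))
```

A forecast whose rates match the observed counts scores a higher log-likelihood than one that misallocates rate across cells, which is what the comparative RELM-style tests exploit.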
The so-called "rules of the game" (i.e., the rules that each model has to obey in order to be an accepted model) must be prescribed to allow for truly prospective, reproducible, and comparable testing. The free parameters need to be chosen carefully, both to preserve as much as possible the characteristics of the tested models and to maximize the information content of the results. Location-specific choices, such as catalogue quality, testing area, etc., must also be made [27]. In [28,29], a description of the different model classes is given, and preliminary results are presented from the first 2.5 years of testing the time-invariant 5-year RELM forecasts. Their results indicate which models are consistent with the observations to date and which models have so far performed best in comparative testing. The RELM project conforms to the requirements for well-posed prediction experiments through a strict set of registration and testing standards. Moreover, the interest shown by earthquake scientists in the RELM project has motivated an international partnership to develop the Collaboratory for the Study of Earthquake Predictability (CSEP), which supports an international effort to conduct and rigorously evaluate earthquake forecasting experiments (see [30] and references therein). According to the CSEP project, the most important steps of an earthquake prediction protocol are the following:
• Present a physical model that can explain the proposed precursory anomaly;
• Exactly define the anomaly and describe how it can be observed;
• Explain how the precursory information can be translated into a forecast, and specify such a forecast in terms of probabilities for given space/time/magnitude windows;
• Perform a test over a period long enough to allow evaluating the proposed precursor and its forecasting power;
• Report on successful predictions, missed earthquakes, and false predictions.
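The bookkeeping required by the last step, tallying hits, misses, and false alarms, can be sketched as follows for an alarm-based experiment. The function name and the representation of alarms as time windows are assumptions of this illustration:

```python
def score_alarms(alarms, earthquakes):
    """Tally hits, misses, and false alarms for an alarm-based
    prediction experiment.  `alarms` is a list of (t_start, t_end)
    alarm windows; `earthquakes` is a list of target-event times.
    Illustrative sketch only."""
    # Hit: a target earthquake falls inside at least one declared alarm.
    hits = sum(any(t0 <= t <= t1 for (t0, t1) in alarms)
               for t in earthquakes)
    # Miss: a target earthquake covered by no alarm.
    misses = len(earthquakes) - hits
    # False alarm: a declared alarm window containing no target event.
    false_alarms = sum(1 for (t0, t1) in alarms
                       if not any(t0 <= t <= t1 for t in earthquakes))
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms}
```

Reporting all three quantities, rather than successes alone, is what makes the experiment falsifiable: a method that declares alarms almost everywhere scores many hits but is penalized by its false-alarm count.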
As a concluding remark, it must be stressed that there is a common need for the development and coordination of advanced data products to make the results of seismological research more accessible to the public and, more generally, to Earth scientists in other disciplines. Finally, strong synergies between seismology and the other Earth science disciplines need to be fostered and strengthened. Progress on the seismological grand challenges noted here, and on the many societal applications of seismology, hinges on improved interdisciplinary interactions and communications, in addition to the shared, practical requirements described above.
References
[1] T. Lay, Challenges in Understanding Earth's Dynamic Systems, Report, USA, 2008.
[2] T. Lay, T.C. Wallace, Modern Global Seismology, Academic Press, New York, 1995.
[3] T. Höink, A.M. Jellinek, A. Lenardic, Viscous coupling at the lithosphere–asthenosphere boundary, Geochemistry, Geophysics, Geosystems 12 (10) (2011) 1–17.
[4] C.J. O'Neill, A. Kobussen, A. Lenardic, The mechanics of continental lithosphere–asthenosphere coupling, Lithos 120 (1–2) (2010) 55–62.
[5] M. Hayakawa, Probing the lower ionospheric perturbations associated with earthquakes by means of subionospheric VLF/LF propagation, Earthquake Science 24 (2011) 609–637.
[6] O. Molchanov, E. Fedorov, A. Schekotov, E. Gordeev, V. Chebrov, et al., Lithosphere–atmosphere–ionosphere coupling as governing mechanism for preseismic short-term events in atmosphere and ionosphere, Natural Hazards and Earth System Sciences 4 (5–6) (2004) 757–767.
[7] S. Pulinets, Lithosphere–atmosphere–ionosphere coupling (LAIC) model, in: M. Hayakawa (Ed.), Electromagnetic Phenomena Associated with Earthquakes, 2009, pp. 235–254.
[8] S. Pulinets, D. Ouzounov, Lithosphere–Atmosphere–Ionosphere Coupling (LAIC) model – a unified concept for earthquake precursors validation, Journal of Asian Earth Sciences 41 (4–5) (2011) 371–382.
[9] S.A. Pulinets, K.A. Boyarchuk, Ionospheric Precursors of Earthquakes, Springer, Berlin, Germany, 2004.
[10] H. Vanhamäki, et al., Induction effects on ionospheric electric and magnetic fields, Annales Geophysicae 23 (2005) 1735–1746.
[11] Y. Enomoto, Coupled interaction of earthquake nucleation with deep Earth gases: a possible mechanism for seismo-electromagnetic phenomena, Geophysical Journal International 191 (2012) 1210–1214.
[12] Kobe City, The Great Hanshin-Awaji Earthquake: Statistics and Restoration Progress, 2009, archived from the original on June 26, 2011.
[13] Kobe City Fire Bureau, January 17, 2006, archived from the original on April 14, 2008, retrieved May 25, 2008.
[14] M. Hayakawa, Electromagnetic Phenomena Associated With Earthquakes, Transworld Research Network, Trivandrum, India, 2009.
[15] M. Hayakawa, O. Molchanov, Seismo Electromagnetics: Lithosphere–Atmosphere–Ionosphere Coupling, TERRAPUB, Tokyo, 2002.
[16] S.A. Pulinets, D. Ouzounov, L. Ciraolo, R. Singh, G. Cervone, A. Leyva, M. Dunajecka, A.V. Karelin, K.A. Boyarchuk, A. Kotsarenko, Thermal, atmospheric and ionospheric anomalies around the time of the Colima M7.8 earthquake of 21 January 2003, Annales Geophysicae 24 (2006) 835–849.
[17] S.A. Pulinets, D. Ouzounov, A.V. Karelin, K.A. Boyarchuk, L.A. Pokhmelnykh, The physical nature of the thermal anomalies observed before strong earthquakes, Physics and Chemistry of the Earth 31 (2006) 143–153.
[18] S.A. Pulinets, A.N. Kotsarenko, L. Ciraolo, I.A. Pulinets, Special case of ionospheric day-to-day variability associated with earthquake preparation, Advances in Space Research 39 (5) (2007) 970–977.
[19] V.I. Keilis-Borok, V.G. Kossobokov, A complex of long-term precursors for the strongest earthquakes of the world, in: Proc. 27th Geological Congress, vol. 61, Nauka, Moscow, 1984, pp. 56–66.
[20] V.I. Keilis-Borok, V.G. Kossobokov, Periods of high probability of occurrence of the world's strongest earthquakes, in: Computational Seismology, vol. 19, Allerton, 1987, pp. 45–53.
[21] V.I. Keilis-Borok, V.G. Kossobokov, Premonitory activation of seismic flow: algorithm M8, in: Lecture Notes of the Workshop on Global Geophysical Informatics with Applications to Research in Earthquake Prediction and Reduction of Seismic Risk, ICTP, Trieste, Italy, 1988.
[22] M. Mojarab, H. Memarian, M. Zare, Performance evaluation of the M8 algorithm to predict M7 earthquakes in Turkey, Journal of Earth System Science 5 (2015) 1–16.
[23] V.G. Kossobokov, User Manual for M8, http://indico.ictp.it/event/a08182/session/92/contribution/59/material/1/0.pdf.
[24] J.D. Zechar, T.H. Jordan, Testing alarm-based earthquake predictions, Geophysical Journal International 172 (2008) 715–724.
[25] E.H. Field, Overview of the working group for the development of regional earthquake likelihood models (RELM), Seismological Research Letters 78 (1) (2007) 7–16.
[26] D. Schorlemmer, M.C. Gerstenberger, S. Wiemer, D.D. Jackson, D.A. Rhoades, Earthquake likelihood model testing, Seismological Research Letters 78 (1) (2007) 17–29.
[27] D. Schorlemmer, M.C. Gerstenberger, RELM testing center, Seismological Research Letters 78 (1) (2007) 30–36.
[28] D. Schorlemmer, J.D. Zechar, M.J. Werner, E.H. Field, D.D. Jackson, T.H. Jordan, the RELM Working Group, First results of the regional earthquake likelihood models experiment, Pure and Applied Geophysics (2010), published online May 11, 2010.
[29] J.D. Zechar, M.C. Gerstenberger, D.A. Rhoades, Likelihood-based tests for evaluating space–rate–magnitude earthquake forecasts, Bulletin of the Seismological Society of America 100 (3) (2010) 1184–1195.
[30] D. Schorlemmer, M.J. Werner, W. Marzocchi, T.H. Jordan, Y. Ogata, D.D. Jackson, S. Mak, D.A. Rhoades, M.C. Gerstenberger, N. Hirata, M. Liukis, P.J. Maechling, A. Strader, M. Taroni, S. Wiemer, J.D. Zechar, J. Zhuang, The collaboratory for the study of earthquake predictability: achievements and priorities, Seismological Research Letters 89 (4) (2018) 1305–1313.