Journal Pre-proof

Segment-by-segment comparison technique for generation of an earthquake-induced building damage map using satellite imagery

Niloofar Khodaverdizahraee, Heidar Rastiveis, Arash Jouybari, Sharare Akbarian

PII: S2212-4209(19)31404-9
DOI: https://doi.org/10.1016/j.ijdrr.2020.101505
Reference: IJDRR 101505
To appear in: International Journal of Disaster Risk Reduction
Received Date: 9 October 2019
Revised Date: 14 January 2020
Accepted Date: 23 January 2020

Please cite this article as: N. Khodaverdizahraee, H. Rastiveis, A. Jouybari, S. Akbarian, Segment-by-segment comparison technique for generation of an earthquake-induced building damage map using satellite imagery, International Journal of Disaster Risk Reduction (2020), doi: https://doi.org/10.1016/j.ijdrr.2020.101505.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2020 Published by Elsevier Ltd.
Segment-by-Segment Comparison Technique for Generation of an Earthquake-Induced Building Damage Map Using Satellite Imagery
Abstract
As an unpredictable natural hazard, an earthquake is one of the most devastating disasters, causing significant loss of life and property every year. After an earthquake, quick and accurate identification of damaged buildings can reduce the number of fatalities by guiding rescue operations. In this regard, Remote Sensing (RS) technology is an efficient tool for the rapid monitoring of damaged buildings. This paper proposes a novel method, termed segment-by-segment comparison (SBSC), to generate a building damage map from multi-temporal satellite images. The proposed method begins by extracting image-objects from pre- and post-earthquake images and equalizing them through segmentation intersection. After various textural and spectral descriptors are extracted from the pre- and post-event images, their differences are used as the input feature vector of a classification algorithm. In addition, the Genetic Algorithm (GA) is used to find the optimum descriptors for the classification process. The accuracy of the proposed method was tested on two datasets acquired by different sensors. Comparing the damage maps obtained by the proposed method with manually extracted damage maps, more than 92% of the buildings were correctly labelled in both datasets.
Keywords: Earthquake, Disaster Management, Damage Map Generation, High-Resolution Satellite Imagery
1. Introduction
Earthquake is one of the most hazardous natural events and has threatened human life throughout history. Because earthquakes are unpredictable [1], they cause extensive damage around the world almost every year. This threat may intensify with growing urbanization and can endanger millions of people [2, 3]. Managing earthquake risk is a process that involves pre- and post-event phases. In this regard, an Earthquake Early Warning (EEW) system, which generates real-time alerts as soon as an earthquake starts, belongs to the pre-event phase [4-8]. Despite the ability of such systems to warn people, the time available to escape from buildings after the alarm may not be sufficient, so an earthquake may still cause many deaths and much damage. Therefore, in the post-event phase of disaster management, rapid assessment of the location of damaged buildings and their degree of damage in the affected area provides essential information for search and rescue (SAR) operations. In other words, quickly producing a building damage map with high accuracy is an appropriate way to reduce a disaster's effects; low-accuracy maps waste the time needed to rescue people in destroyed areas by wrongly deploying rescue teams toward undamaged areas [9, 10]. Accurate damage maps can be generated quickly from geospatial data such as high-resolution satellite/aerial images (HRSI), which are readily available for most cities, and LiDAR point clouds [11-13]. In this regard, the wide coverage, rapid access, and increasing spatial resolution of satellite images have made them the main data source for damage assessment and have attracted growing attention from disaster management experts [14, 15]. Damage assessment using satellite imagery may be accomplished either manually or automatically [16, 17]. Interpretation of satellite images by a human expert increases the reliability and correctness of the generated damage map; however, it is laborious, tedious, and time-consuming, and requires large-scale investigation. Therefore, automatic techniques have gained favor over the last decades [18, 19]. Automatic damage assessment may follow either a pixel-level or an object-level strategy. At the pixel level, each pixel is inspected independently and labelled as damaged or undamaged, which usually produces noisy results.
In contrast, object-based approaches sub-divide an image into meaningful segments (image-objects), and all subsequent analyses are executed on them [20, 21]. Object-based image analysis has many advantages, such as reducing salt-and-pepper noise, exploiting the shape information of objects, and overcoming the limitations of radiometric differences and misregistration between multi-temporal images or different sensors [22, 23]. Therefore, many state-of-the-art studies have developed object-based techniques for building damage map generation [24, 25]. Regardless of pixel-based or object-oriented image analysis, automatic damage detection studies can be categorized by the available data into three approaches: (i) using post-event data only, (ii) using multi-temporal data, and (iii) integrating auxiliary information with the remote sensing data [26]. Auxiliary data, such as vector maps or digital elevation models (DEM), can be used along with satellite images for building damage detection. For example, building boundaries can easily be obtained by overlaying a vector map on the post-event image [27, 28], or on both pre- and post-event images [29-32], to eliminate probable building detection errors. However, the vector map of a damaged area may not always be available or up to date for use in damage assessment [33]. Providing detailed textural and spectral information, post-event high-resolution satellite images can be used to assess earthquake-induced building damage. In this case, a wide range of properties, such as spectrum, texture, edge, spatial relationship, structure, shape, and shadow of buildings, have been used to recognize damaged buildings in post-event satellite images [10, 34]. Although using only post-event data is important for quick response, it is difficult to ascertain the exact grade of damage without comparison to a building's condition in the pre-event satellite image. Therefore, change detection using both pre- and post-earthquake remote sensing data, i.e., observing a building at different times, can yield a more informative damage map [35]. During the last two decades, various change detection techniques, such as image algebra, classification-based, transformation-based, advanced models, and visual analysis, have been widely used for damage assessment from high-resolution satellite imagery [36]. Among these, classification-based strategies are the most popular change detection techniques. Lu et al. [36] categorized this strategy into six groups: (i) post-classification comparison, (ii) spectral-temporal combined analysis, (iii) Expectation Maximization (EM) detection, (iv) unsupervised change detection, (v) hybrid change detection, and (vi) artificial neural networks (ANN).
Among them, the post-classification comparison technique is used most frequently and is usually performed in two steps: 1) classification and 2) comparison of the two classified images [37, 38]. This is an efficient approach that can reduce radiometric and geometric errors, provide a complete matrix of change information, and identify the type of change occurring in different object classes. However, the final accuracy depends on the quality of the classified images [39]. In the following, a number of state-of-the-art studies that have applied post-classification approaches to damage assessment are discussed.
Post-classification approaches for building damage detection have been used in a large number of studies. For example, Huyck, Adams [40] presented an automated method based on edge dissimilarities to detect collapsed buildings using QuickBird and IKONOS images of the Bam area. Li, Xu [41] combined spectral and spatial descriptors extracted from bi-temporal QuickBird images of Dujiangyan to improve the damage detection results. Morphological features were applied by Chini, Cinti [42] to detect damaged buildings using pre- and post-event QuickBird images of the Kashmir study area. Janalipour and Taleai [43] suggested a pixel-based multi-criteria decision analysis method based on a fuzzy decision-making system to detect the damage caused by the Bam earthquake. Ranjbar, Ardalan [44] generated a relief priority map using texture analysis of pre- and post-Varzaghan earthquake images acquired by the GeoEye-1 satellite. Using images of the 2011 Tohoku earthquake and the SVM classifier, Moya, Zakeri [45] tested 3D-GLCM (gray level co-occurrence matrix) features, and Endo, Adriano [46] employed a high-dimensional feature space derived from several sizes of pixel windows to identify collapsed buildings. Additionally, several studies have produced damage maps by applying mathematical operations to combine images of different dates, such as band subtraction (image differencing), ratioing, and image regression [33]. In this vein, Sirmacek and Unsalan [47] applied the ratio of roof to shadow surface obtained from an aerial image to determine the rate of destruction of each building in the Istanbul test area. Miura, Modorikawa [48] analyzed the textural information in pre-event QuickBird and post-event WorldView-2 images for damage assessment after the 2010 Haiti earthquake. A number of researchers have used morphological descriptors along with spectral values for damaged building detection [49, 50].
The abovementioned techniques represent the more common approaches used for change detection. However, a few studies have combined different change detection algorithms (hybrid schemes) in an attempt to improve the results. For instance, Al-Khudhairy, Caravaggi [51] utilized image differencing, principal component analysis, and classification algorithms to detect damaged buildings using IKONOS images of the Jenin region. Trinder and Salah [52] extracted the damage map by combining change detection techniques such as image differencing, principal component analysis, minimum noise fraction, and post-classification. Klonus, Tomowski [53] introduced a method based on a combination of isotropic frequency filtering, spectral and texture analysis, and segmentation algorithms for detecting damage in the area of Abu Suruj using QuickBird satellite images. Based on the literature review, post-classification methods, which may be performed at the pixel or object level, have frequently been used in change detection. By not considering the shape and relationships of objects, pixel-level techniques may produce erroneous results and incorrect labelling. Although several object-based methods have been developed to enhance the traditional post-classification results, a critical problem remains when comparing the dissimilar image-objects obtained from segmentation of the pre- and post-event satellite images: the corresponding image-objects differ because each image is segmented separately, with parameters appropriate to that specific image. To overcome this drawback, a new method based on the segment-by-segment comparison (SBSC) of identical image-objects is presented in this paper to obtain a building damage map from pre- and post-event HRSI. An additional step ("segmentation intersection") is added to the conventional post-classification algorithm, so the process is performed in four steps: 1) segmentation, 2) segmentation intersection, 3) classification, and 4) comparison. In other words, the image-objects obtained from the two segmented images are equalized before image classification begins, which is the main contribution and first objective of this paper. This allows the corresponding pre- and post-event image-objects to be compared easily for damage detection. The GA optimizer is used to select the most useful features as the second objective. The third objective is to select a suitable classification technique, because this leads to more accurate results.
For this reason, three conventional classification algorithms, including Artificial Neural Network (ANN), Minimum Distance (MD), and Support Vector Machine (SVM) with linear, Gaussian, and polynomial kernel functions (LSVM, RBFSVM, and PSVM), are implemented and compared to identify the most suitable algorithm. To investigate the robustness and effectiveness of the proposed SBSC method, its results are compared with those of the traditional post-classification approach as the fourth objective.
2. Method
The flowchart of the proposed segment-by-segment comparison (SBSC) method for building damage detection is presented in Fig. 1. As shown in this figure, after pre-processing, both input images are first partitioned into several segments and then equalized to achieve identical image-objects. After a number of features are extracted for all image-objects, the differences of these descriptors between pre- and post-event image-objects are used as input features to classify the area into two classes, "Changed" and "Unchanged". After dividing the "Changed" class into debris and non-debris areas, the final damage map is generated in three classes: "Unchanged", "Debris", and "non-Debris". Further details of the proposed method are described in the following.
Fig. 1. Flowchart of the SBSC method for generation of an earthquake-induced building damage map.
2.1. Pre-processing
Owing to issues such as radiance or reflectance differences between images captured at different times, radiometric correction has become an essential pre-processing stage in change detection studies [36, 54]. In this study, atmospheric and solar illumination effects are removed from the pre- and post-event images. Histogram equalization and histogram matching are also performed in order to match the spectral responses of the pre- and post-earthquake images. Moreover, since the aim of change detection is to compare multi-temporal images, the pre- and post-event images are co-registered.
2.2. Identical Image-Object Generation
Image segmentation is the fundamental step of the object-based change detection process; it aims to divide the image into a number of meaningful objects, called image-objects, to be used as classification units [55, 56]. Several categories of techniques, including point-based (e.g., grey-level thresholding), edge-based (e.g., edge detection), and region-based (split and merge, region growing, multi-resolution segmentation), can be used in this step [57, 58]. Among these methods, multi-resolution segmentation has been applied extensively and successfully in previous object-level image analysis studies; therefore, we also use this algorithm to generate the primary image-objects [51, 59]. Multi-resolution segmentation, proposed by Baatz [60], is a bottom-up segmentation algorithm based on region merging, which minimizes the average heterogeneity for a given number of image-objects and maximizes their respective homogeneity. The procedure starts from single pixels, and similar neighbors are merged in subsequent steps until a heterogeneity threshold is reached; a larger scale parameter therefore permits larger image-objects. The heterogeneity criterion, shown in Eq. (1), is a combination of color (spectral values) and shape properties (considering smoothness and compactness), where f is the heterogeneity criterion (the "scale parameter") and w is the user-defined
weight for color, and h_color is the color criterion. h_shape, shown in Eq. (2), is the shape criterion, consisting of the smoothness and compactness criteria shown in Eqs. (3) and (4), respectively. In these equations, n is the object size (in pixels), and l and b are the perimeters of the object and of its bounding box; the subscripts merge, obj1, and obj2 denote the merged object and the two candidate objects. Readers can refer to [61, 62] for further details regarding the multi-resolution segmentation algorithm and its theoretical background.

f = w · h_color + (1 − w) · h_shape                                  (1)

h_shape = w_cmpct · h_cmpct + (1 − w_cmpct) · h_smooth               (2)

h_smooth = n_merge · (l_merge / b_merge) − (n_obj1 · l_obj1 / b_obj1 + n_obj2 · l_obj2 / b_obj2)   (3)

h_cmpct = n_merge · (l_merge / √n_merge) − (n_obj1 · l_obj1 / √n_obj1 + n_obj2 · l_obj2 / √n_obj2)   (4)

Since the segmentation procedure may generate image-objects of dissimilar size and shape for the pre- and post-event images, comparing these objects directly makes it difficult to reach a precise damage map. Therefore, a segmentation intersection of the polygons is performed to solve this problem. In other words, the polygons, or portions of polygons, present in both segmented images are written to the output to produce identical image-objects. To this end, the segmented images are first projected into a common spatial reference. Then, the intersection step is performed by cracking the polygons into smaller polygons. The generated polygons, which are usually smaller than the initial image-objects, are taken as the new image-objects for both the pre- and post-event images. Fig. 2 demonstrates the identical image-objects obtained after segmentation intersection for two sample images. Although the large number of image-objects produced by the segmentation intersection decreases the processing speed, the SBSC improves the final accuracy of the change detection results.
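As a minimal illustration of Eqs. (1)-(4), the snippet below evaluates the heterogeneity criterion for a hypothetical merge of two image-objects. All object sizes, perimeters, bounding-box perimeters, and weights are invented for the example; this is a sketch of the fusion value only, not of the full region-merging algorithm.

```python
import math

def smoothness(n, l, b):
    # smoothness term of one object: perimeter over bounding-box perimeter,
    # weighted by object size
    return n * (l / b)

def compactness(n, l):
    # compactness term: perimeter over the square root of the object size
    return n * (l / math.sqrt(n))

def heterogeneity(obj1, obj2, merged, w_color=0.7, w_cmpct=0.5, h_color=0.0):
    # obj = (n, l, b): size in pixels, perimeter, bounding-box perimeter
    h_smooth = smoothness(*merged) - (smoothness(*obj1) + smoothness(*obj2))
    h_cmpct = (compactness(merged[0], merged[1])
               - (compactness(obj1[0], obj1[1]) + compactness(obj2[0], obj2[1])))
    h_shape = w_cmpct * h_cmpct + (1 - w_cmpct) * h_smooth
    # Eq. (1): overall fusion value; a merge is accepted while f stays
    # below the scale parameter
    return w_color * h_color + (1 - w_color) * h_shape

f = heterogeneity(obj1=(100, 40, 44), obj2=(80, 36, 40), merged=(180, 60, 64))
print(f)
```

In a real segmentation run, h_color would be the weighted increase in spectral standard deviation caused by the merge; it is set to zero here only to keep the example short.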
Fig. 2. Visual representation of segmentation intersection: (a) image-objects of a sample pre-event image; (b) image-objects of a sample post-event image; (c) identical image-objects after the segmentation-intersection process.
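On raster label images, the segmentation-intersection idea of Fig. 2 reduces to pairing the pre- and post-event segment labels at each pixel, so that every distinct (pre, post) label pair becomes a new, shared image-object. The paper performs this step on polygons; the numpy sketch below, with invented toy labels, shows the equivalent raster operation.

```python
import numpy as np

def intersect_segmentations(pre_labels, post_labels):
    """Combine two label maps: each unique (pre, post) pair gets a new id."""
    pairs = pre_labels.astype(np.int64) * (post_labels.max() + 1) + post_labels
    # relabel the pairs to consecutive ids 0..K-1
    _, new_labels = np.unique(pairs, return_inverse=True)
    return new_labels.reshape(pre_labels.shape)

pre = np.array([[0, 0, 1],
                [0, 1, 1]])
post = np.array([[0, 1, 1],
                 [0, 1, 2]])
combined = intersect_segmentations(pre, post)
print(combined)              # 4 distinct objects: (0,0) (0,1) (1,1) (1,2)
print(combined.max() + 1)    # prints 4
```

Each output object lies entirely inside one pre-event segment and one post-event segment, which is exactly the property the SBSC comparison needs.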
2.3. Feature Analysis
Since selecting different features may lead to different classification results, feature selection is a critical step in every classification-based change detection method, and any negligence in feature selection may cause unfavorable errors in the final results [29, 63]. In change detection, the purpose of the feature analysis step is to extract the features most useful for distinguishing between changed and unchanged areas.
2.3.1. Feature Extraction
In this study, two categories of features, textural and spectral, are considered for image classification. Given that the image-objects of the two images are identical, geometrical or shape features must not be used in the change detection process. Textural features, which capture the coarseness, smoothness, and uniformity of image-objects, examine the spectral information of sub-objects. Textural information can be extracted from several families of descriptors, including Haralick, Gabor, fractal, and first-order statistical features [26]. Among these, Haralick features, which can be computed from the gray level co-occurrence matrix (GLCM) or the gray-level difference vector (GLDV), are applied in this study based on their successful performance in previous research [31, 64, 65]. The GLCM is a tabulation of how often different combinations of pixel gray levels occur in an image, calculated in four spatial directions (0°, 45°, 90°, and 135°); the GLDV is the sum of the diagonals of the GLCM. In addition, there is an optimized version of each Haralick textural feature with a 'quick 8/11' postfix. This version operates only on a bit depth of 8 or 11, and the results are most reliable when 8-bit data are used directly; in other words, it is based on quantization to 2^8 gray levels. The calculation of the standard Haralick features, by contrast, is independent of the image data's bit depth [61, 65]. The textural features calculated from the GLCM and GLDV, with their equations, are shown in Table 1 and Table 2, respectively. In these tables, i and j are the row and column numbers, N is the number of rows or columns of the GLCM/GLDV matrix, µ_{i,j} is the GLCM mean, σ_{i,j} is the GLCM standard deviation, V_k is the GLDV entry at difference level k, and p_{i,j} is the normalized value in cell (i, j).

Table 1. Textural features calculated from the GLCM for input to the optimization algorithm

| NUM | Feature | Equation |
|-----|---------|----------|
| 1 | GLCM Homogeneity | Σ_{i,j=0}^{N−1} p_{i,j} / (1 + (i − j)²) |
| 2 | GLCM Entropy | Σ_{i,j=0}^{N−1} p_{i,j} (−ln p_{i,j}) |
| 3 | GLCM Mean | µ_{i,j} = Σ_{i,j=0}^{N−1} i · p_{i,j} |
| 4 | GLCM Correlation | Σ_{i,j=0}^{N−1} p_{i,j} (i − µ_i)(j − µ_j) / (σ_i σ_j) |
| 5 | GLCM Std. Dev. | σ_{i,j}² = Σ_{i,j=0}^{N−1} p_{i,j} (i,j − µ_{i,j})² |
| 6 | GLCM 2nd Moment | Σ_{i,j=0}^{N−1} (p_{i,j})² |
| 7 | GLCM Dissimilarity | Σ_{i,j=0}^{N−1} p_{i,j} · abs(i − j) |
| 8 | GLCM Contrast | Σ_{i,j=0}^{N−1} p_{i,j} (i − j)² |
| 9-16 | Features 1-8 in their 'quick 8/11' versions | Same equations, computed on data quantized to 8- or 11-bit depth |
Table 2. Textural features calculated from the GLDV for input to the optimization algorithm

| NUM | Feature | Equation |
|-----|---------|----------|
| 1 | GLDV Contrast | Σ_{k=0}^{N−1} V_k · k² |
| 2 | GLDV Entropy | Σ_{k=0}^{N−1} V_k (−ln V_k) |
| 3 | GLDV 2nd Moment | Σ_{k=0}^{N−1} V_k² |
| 4 | GLDV Mean | Σ_{k=0}^{N−1} k · V_k |
| 5-8 | Features 1-4 in their 'quick 8/11' versions | Same equations, computed on data quantized to 8- or 11-bit depth |
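The Haralick features of Tables 1 and 2 can be computed directly from a normalized GLCM, with the GLDV then obtained by summing the GLCM along its diagonals. The numpy sketch below uses a single direction (0°) and a toy 4-level image; a full implementation would average the four directions listed above.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    eps = 1e-12  # avoids log(0) in the entropy term
    return {
        "homogeneity": np.sum(P / (1 + (i - j) ** 2)),
        "contrast": np.sum(P * (i - j) ** 2),
        "dissimilarity": np.sum(P * np.abs(i - j)),
        "entropy": -np.sum(P * np.log(P + eps)),
        "second_moment": np.sum(P ** 2),
    }

def gldv(P):
    """Gray-level difference vector: sums of the GLCM diagonals |i - j| = k."""
    i, j = np.indices(P.shape)
    k = np.abs(i - j)
    return np.array([P[k == d].sum() for d in range(P.shape[0])])

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
feats = glcm_features(P)
V = gldv(P)
print(feats["contrast"], V)
```

Both P and V sum to one by construction, so the feature values are comparable between image-objects of different sizes.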
Spectral features are related to the reflectance of the incident electromagnetic wave from different objects in each band; they typically evaluate the first, second, and third statistical moments of an image-object's pixel values and the object's relations to other image-objects' pixel values, and are used to describe image-objects with information derived from their spectral properties [61, 66]. The spectral features selected in our damage map generation method, and their equations, are shown in Table 3, where w_k^B is the brightness weight of image layer k, c̄_k(v) is the mean intensity of image layer k of image-object v, c̄_i(v) is the mean intensity of image layer i of image-object v, K_B is the set of image layers with positive brightness weight, P_v is the set of pixels of image-object v, B_v(d) is the extended bounding box of image-object v with distance d, c_k(x, y) is the value of image layer k at pixel (x, y), w is the sum of the layer (or neighbor) weights, N_v(d) is the set of neighbors of v at distance d, b(v, u) is the length of the common border between v and u, N_v^D is the set of darker direct neighbors of v, N_v^B is the set of brighter direct neighbors of v, and b_v is the border length of image-object v. The obtained feature values are finally normalized to fall between 0 and 1 in order to standardize the inputs after the optimum features are calculated.

Table 3. Spectral features calculated for input to the optimization algorithm

| NUM | Feature | Equation |
|-----|---------|----------|
| 1 | Brightness | c̄(v) = (1 / w^B) Σ_{k=1}^{K} w_k^B c̄_k(v) |
| 2 | Maximum difference | max_{i,j ∈ K_B} abs(c̄_i(v) − c̄_j(v)) / c̄(v) |
| 3 | Standard deviation | σ_k(B_v(d) − P_v) |
| 4 | Skewness | γ_k(v) = Σ_{(x,y)∈P_v} (c_k(x, y) − c̄_k(v))³ / ( Σ_{(x,y)∈P_v} (c_k(x, y) − c̄_k(v))² )^{3/2} |
| 5 | Mean difference to neighbors | Δ̄_k(v) = (1/w) Σ_{u ∈ N_v(d)} w_u (c̄_k(v) − c̄_k(u)) |
| 6 | Mean difference to neighbors (abs) | Δ̄_k(v) = (1/w) Σ_{u ∈ N_v(d)} w_u abs(c̄_k(v) − c̄_k(u)) |
| 7 | Mean difference to brighter neighbors | Δ̄_k^B(v) = (1/w) Σ_{u ∈ N_v^B} w_u (c̄_k(v) − c̄_k(u)) |
| 8 | Mean difference to darker neighbors | Δ̄_k^D(v) = (1/w) Σ_{u ∈ N_v^D} w_u (c̄_k(v) − c̄_k(u)) |
| 9 | Relative border to brighter objects | Σ_{u ∈ N_v^B} b(v, u) / b_v |
2.3.2. Feature Differencing
Since the difference between the features extracted from the pre-event and post-event images can be a clue to changed areas, the difference vector shown in Eq. (5) is used as the feature vector for change detection. A small difference between two corresponding image-objects indicates an unchanged area; conversely, a large difference may be a sign of change. Note that, because the obtained image-objects are identical, nothing prevents a segment-by-segment comparison of the image-objects or the use of feature differences. Therefore, the differences between pre- and post-event features are a useful indicator for damage detection.

[Δf_1, Δf_2, …, Δf_n]ᵀ = [f_1^pre, f_2^pre, …, f_n^pre]ᵀ − [f_1^post, f_2^post, …, f_n^post]ᵀ    (5)
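Assuming the per-object descriptors of both dates are stored as aligned arrays (rows = image-objects, columns = features), Eq. (5) together with the min-max normalization mentioned earlier reduces to a few numpy operations. The feature values below are invented for illustration.

```python
import numpy as np

# rows = image-objects, columns = extracted descriptors (toy values)
pre_feats = np.array([[0.20, 10.0], [0.80, 12.0], [0.50, 11.0]])
post_feats = np.array([[0.22, 10.5], [0.10, 30.0], [0.48, 11.2]])

# Eq. (5): per-object difference vector
delta = pre_feats - post_feats

# min-max normalization of each descriptor to [0, 1]
rng = delta.max(axis=0) - delta.min(axis=0)
delta_norm = (delta - delta.min(axis=0)) / np.where(rng == 0, 1, rng)

print(delta_norm)
```

The second object (a large change in both descriptors) stands out clearly after normalization, which is exactly the signal the subsequent classifier exploits.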
2.3.3. Optimum Feature Selection
Although a high-dimensional feature set may provide more information about the image-objects, irrelevant and redundant features decrease the processing speed and may reduce the classification accuracy. Therefore, an optimum feature space should be generated by eliminating repetitive and irrelevant descriptors [44]. In other words, the aim of this stage is to select the features that are most useful for distinguishing between the changed and unchanged areas. The GA has been employed for optimal feature set selection in a large number of remote sensing studies [30, 67]; we also apply this optimization algorithm to optimum feature selection, as described in the following. The GA is a population-based optimization technique based on Darwin's principle of natural selection, and is an effective tool for searching large, non-linear search spaces [68, 69]. It begins with an initial population, which is evaluated with a fitness function based on the goal function to be maximized or minimized. In the optimum feature selection problem, each chromosome is a string of bits in which 1 and 0 represent the presence and absence of a feature in the classification, respectively [70]. Genetic operators such as elitism, crossover, and mutation are used to evolve the population through an iterative process [71]. The evaluation function of the algorithm is the overall accuracy of a k-nearest neighbor (K-NN) classification of the sample data: in each iteration, K-NN classification is performed for each feature vector (chromosome), and the resulting overall accuracy is used as the fitness value. The flowchart of the GA-based feature selection step used in this study is illustrated in Fig. 3.
Fig. 3. Flowchart of the GA-based optimization algorithm for selecting the optimum features.
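A compact, self-contained sketch of the GA selection loop described above. Leave-one-out 1-NN accuracy stands in for the paper's K-NN fitness, the samples are synthetic (two informative features plus four noise features), and the population size, parent pool, mutation rate, and generation count are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training samples: 2 informative features + 4 noise features
n = 60
labels = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 6))
X[:, 0] += 3 * labels          # informative
X[:, 1] -= 3 * labels          # informative

def fitness(mask):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask.astype(bool)]
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude each sample itself
    pred = labels[d.argmin(axis=1)]
    return (pred == labels).mean()

pop = rng.integers(0, 2, size=(20, 6))   # chromosomes: 1 = feature kept
for _ in range(30):
    scores = np.array([fitness(c) for c in pop])
    pop = pop[scores.argsort()[::-1]]    # sort by fitness, best first
    children = [pop[0].copy()]           # elitism: keep the best chromosome
    while len(children) < len(pop):
        p1, p2 = pop[rng.integers(0, 10, 2)]           # parents from top half
        cut = rng.integers(1, 6)
        child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
        child[rng.random(6) < 0.1] ^= 1                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[0]
print(best, fitness(best))
```

With this setup the surviving chromosomes almost always retain at least one of the two informative features, since any mask without them scores near chance accuracy and is quickly eliminated.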
2.4. Damage Map Generation
Classification has always been one of the most important tools in post-classification-based change detection algorithms. In our proposed damage map generation method, after the area is divided into the two categories "Changed" and "Unchanged" by a classification algorithm, the debris area (damaged buildings) is separated from non-debris (other changes), and the final damage map is generated in three classes: "Unchanged", "Debris", and "non-Debris".
2.4.1. Classification
Although a wide range of image classification techniques have been developed in pattern recognition [72, 73], three algorithms that are more popular than the others, MD, ANN, and SVM, are studied in this research to detect the changes directly. Note that the training data, consisting of the optimum feature vector (Δf_1, Δf_2, …, Δf_n) and the desired output in the two classes "Changed" and "Unchanged" for each sample, are the same for all of the mentioned classification algorithms. More details of each method are presented below.
Minimum Distance (MD) is a well-known classification algorithm that uses the mean vector of the training samples of each class and calculates the Euclidean distance from an unknown pixel/image-object to each class mean. In other words, every pixel/image-object is assigned to the closest class unless the user sets a distance threshold [72, 74]. In this study, by calculating the mean vectors of the training samples obtained in the previous step and assigning each unknown image-object to the class of the nearest mean, each image-object is labelled as "Changed" or "Unchanged".
The Artificial Neural Network (ANN) is a computational, non-parametric supervised algorithm that reduces computational complexity in data classification [75]. There are several ANN structures, such as the Multi-Layer Perceptron (MLP), Radial Basis Function (RBF) network, wavelet neural network, Self-Organizing Map (SOM), and Recurrent Neural Network (RNN) [76]. Here, the MLP, which is trained through the error back-propagation learning algorithm [77], is applied. Typically, as shown in Fig. 4, an MLP network topology is divided into three parts: (i) an input layer, (ii) one or more hidden layers, and (iii) an output layer [78]. The main objective of the MLP algorithm is to find an optimal set of weight (w) parameters through the training process. The training data consist of a vector of input values selected from the study area and a vector of associated target (desired) output values; during training, the weights are adjusted by calculating the difference between the actual network outputs and the desired outputs [79]. Here, as shown in Fig. 4, the training data, i.e., the optimum feature vector (Δf_1, Δf_2, …, Δf_n) of each image-object, are imported into the network as input data. These values pass through the hidden layer, and the output layer receives the values from the hidden layer and transforms them into the output values. In an MLP network, the number of hidden layers and the number of neurons in each layer are very important for the classification process. To give the classifier generalization ability, at least one hidden layer is necessary, and the number and distribution of the training data should be considered carefully.
Fig. 4. The structure of the proposed MLP network for building change detection.
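The Minimum Distance rule described above amounts to a nearest-centroid classifier over the difference-feature vectors. A minimal numpy sketch with invented training data (0 = "Unchanged", 1 = "Changed"):

```python
import numpy as np

def md_train(X, y):
    """Mean feature vector of each class, in sorted class order."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def md_classify(X, means):
    """Assign each image-object to the class with the nearest mean vector."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# toy difference features: small magnitudes -> unchanged, large -> changed
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])

means = md_train(X_train, y_train)
pred = md_classify(np.array([[0.15, 0.15], [0.85, 0.95]]), means)
print(pred)   # prints [0 1]
```

The same `X_train`/`y_train` arrays could be fed unchanged to the MLP or SVM classifiers discussed in this section, which is the sense in which the paper keeps the training data identical across the three algorithms.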
The Support Vector Machine (SVM) is a supervised algorithm that has been used widely, in its linear, non-linear, and multi-class variants, for image classification and change detection [80]. Since the image-objects in this study are not easily separable, the non-linear SVM is used to determine the optimum hyperplane separating the dataset into the two classes "Changed" and "Unchanged". In this case, as shown in Eq. (6), the main objective is to maximize the margin while minimizing the error criterion [72]. In this equation, ξ_i is the slack parameter, b is the bias, and C is a positive constant that controls the relative influence of the two competing terms; x_i is the training input vector and y_i ∈ {−1, 1} is the target value.

min  (1/2) ‖w‖² + C Σ_{i=1}^{N} ξ_i    subject to    y_i (w · x_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0    (6)

The classifier finds the optimal hyperplane as a decision function in a high-dimensional space, thereby solving a non-linear separation problem in the original input space. Hence, the input vector x_i is mapped into a high-dimensional space through a non-linear mapping function, which can be replaced by a valid kernel function. Eventually, the decision function in the high-dimensional space is found by solving the convex optimization problem shown in Eq. (7), where k(x_i, x_j) is the kernel function and α is the Lagrange coefficient.

max  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j k(x_i, x_j)    subject to    0 ≤ α_i ≤ C,  Σ_i α_i y_i = 0    (7)
The major difficulty of the non-linear SVM is choosing a suitable kernel with proper parameters, since some kernel functions may not provide an optimal SVM configuration for remote sensing applications and performance is affected by the kernel parameters [81]. In this research, the three kernel functions shown in Eqs. (8)-(10) were considered for the non-linear SVM to classify the test area, where $\sigma$ is the standard deviation that determines the width of the Gaussian distribution in the RBF kernel and $P$ is the polynomial degree.

Linear: $\;k(x_i, x_j) = x_i^{T} x_j \tag{8}$

Polynomial: $\;k(x_i, x_j) = \left( 1 + x_i^{T} x_j \right)^{P} \tag{9}$

RBF: $\;k(x_i, x_j) = \exp\!\left( -\dfrac{\lVert x_i - x_j \rVert^2}{2\sigma^2} \right) \tag{10}$
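In practice, the constrained problem of Eqs. (6)-(7) is handled by an off-the-shelf solver. A minimal sketch with scikit-learn's SVC follows, where gamma = 1/(2σ²) maps the library's RBF parameterization onto Eq. (10); the data are synthetic and the parameter values are illustrative, not the paper's tuned settings.

```python
# Sketch: non-linear SVM change classification with the three kernels of
# Eqs. (8)-(10). C, sigma (via gamma), and the polynomial degree P are the
# tunable parameters discussed in the text.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.05, 0.02, (60, 9)),   # Unchanged: small Δf
               rng.normal(0.40, 0.10, (60, 9))])  # Changed: large Δf
y = np.array([-1] * 60 + [1] * 60)                # targets in {-1, +1}

sigma = 0.5
models = {
    "linear": SVC(kernel="linear", C=1.0),
    "polynomial": SVC(kernel="poly", degree=3, coef0=1.0, C=1.0),
    "rbf": SVC(kernel="rbf", gamma=1.0 / (2 * sigma ** 2), C=1.0),
}
scores = {}
for name, clf in models.items():
    clf.fit(X, y)
    scores[name] = clf.score(X, y)        # training accuracy per kernel
print(scores)
```

On such well-separated toy data all three kernels fit the training set almost perfectly; kernel choice only matters for harder, overlapping classes, which is the situation the text describes.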
In this section, a brief description of the tested classification methods, including MD, ANN, and SVM, was presented. For further details regarding these algorithms, readers can refer to [72].

2.4.2. Damage Assessment

Despite the ability of the obtained change map to detect changes, the type of each change is still ambiguous. In addition, producing a change map without separating "Debris" from "non-Debris" can waste rescue time by wrongly deploying rescue teams toward non-debris places. In this step, therefore, a classification algorithm is applied to specify the type of the changes. For this purpose, after providing appropriate training samples to the classification
algorithm, the detected changed areas from the previous step are directly divided into the two classes "Debris" and "non-Debris" via a binary classification algorithm. It should be noted that the damage maps of the two datasets were created by separating the "Debris" areas from the "non-Debris" areas within the changed regions. This step may not be necessary if the time interval between the pre- and post-event images is short, in which case all changed objects can be considered "Debris". The necessity of this stage should be determined during the training data collection step.

3. Experiments and Results

3.1. Dataset

The efficiency of the proposed algorithm and methodology was evaluated with two datasets collected by two different sensors. The first study area focuses on Port-au-Prince, Haiti, in the Caribbean, which was damaged by the January 12, 2010, Mw 7.0 earthquake. In this area, WorldView-2 satellite images acquired on 1 October 2009, before the earthquake, and 16 January 2011, after the Haiti earthquake, were available. Both images consist of four multispectral bands (blue, green, red, and near-infrared) with a spatial resolution of 0.5 m. From this dataset, a 500 m × 500 m sample area (1000 × 1000 pixels) containing various building roof types was selected, as depicted in Fig. 5.
Fig. 5. Selected test area from the Haiti dataset on pre- and post-event HRSI; (a) Pre-event HRSI; (b) Post-event HRSI.
The second study area concentrates on Bam, located in southeastern Iran, which was struck by the December 26, 2003, Mw 6.6 earthquake. In this area, QuickBird satellite images acquired on 30 September 2003 and 3 January 2004, before and after the Bam earthquake, were used to evaluate the proposed method. Both images are composed of four multispectral bands (blue, green, red, and near-infrared) with a spatial resolution of 0.6 m. From this dataset, a 600 m × 600 m sample area (1000 × 1000 pixels) was selected, as illustrated in Fig. 6.
Fig. 6. Selected test area from the Bam dataset on pre- and post-event HRSI; (a) Pre-event HRSI; (b) Post-event HRSI.
3.2. Results

In the pre-processing step, the atmospheric effects were first corrected using ENVI version 4.8 (Exelis Visual Information Solutions, Boulder, Colorado). Then, the pre- and post-event images were co-registered using six control points. Moreover, histogram matching and histogram equalization were performed in ERDAS IMAGINE 2014 [82, 83] to increase the spectral similarity and adjust the intensities of the images. The images depicted in Figs. 5 and 6 are the histogram-equalized and matched sample datasets. All steps were followed and executed in the same way on both datasets.

After pre-processing, segmentation of the pre- and post-event images was accomplished with the MRS algorithm. Suitable parameter settings are required to obtain optimized segmentation results; however, in a number of studies, image-object generation relies on a trial-and-error approach in which the parameters are determined through experience [83]. In this step, the key parameters of shape, scale, and compactness were set to 0.4, 17, and 0.7, respectively, for both images of the Haiti dataset, and to 0.3, 20, and 0.7, respectively, for both images of the Bam dataset. All parameters were adjusted by trial and error. In addition, all four spectral bands were used and given an equal weight of
one in both datasets. The results of the segmentation step for both study areas are shown in Fig. 7.
Fig. 7. Extracted image-objects on pre- and post-earthquake satellite images through the MRS algorithm; (a) and (b) pre- and post-event of the Haiti dataset; (c) and (d) pre- and post-event of the Bam dataset.
After segmentation, identical image-objects for the pre- and post-event satellite images were obtained through the segmentation intersection process. The identical image-objects obtained for a small area of the dataset are illustrated at a larger scale in Fig. 8. As shown in this figure, the number of identical image-objects is clearly greater than the number of initial image-objects. Overall, 14334 identical image-objects were obtained in the Haiti area by intersecting the 2714 and 3593 image-objects from the pre- and post-event images. In the Bam area, 6063 identical image-objects were obtained by intersecting the 1003 and 1252 image-objects from the pre- and post-event images.
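The intersection step can be illustrated on label rasters (an assumed representation for this sketch; the method itself intersects MRS segment geometries): every unique pair of overlapping pre- and post-event segment labels becomes one identical image-object, which is why the intersected count can only grow.

```python
# Sketch of segmentation intersection on two toy label rasters.
import numpy as np

pre = np.array([[1, 1, 2],
                [1, 1, 2],
                [3, 3, 2]])       # 3 pre-event segments
post = np.array([[1, 2, 2],
                 [1, 2, 2],
                 [1, 1, 2]])      # 2 post-event segments

# Each pixel gets a (pre-label, post-label) pair; unique pairs are the
# intersected "identical image-objects" shared by both epochs.
pairs = np.stack([pre.ravel(), post.ravel()], axis=1)
identical = np.unique(pairs, axis=0)
print(len(identical))             # 4 identical objects from 3 and 2 inputs
```

The same counting effect is reported in the text, where 2714 and 3593 input objects yield 14334 intersected objects for Haiti.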
Fig. 8. Segmentation intersection results of a small area in Haiti dataset; (a) sample area; (b) Primary image objects obtained from MRS technique; (c) identical image objects via segmentation intersection step.
After image-object equalization, 33 features, comprising 24 Haralick textural features and 9 spectral features, were extracted for all image-objects in both datasets. Then, the differences of these features between the pre- and post-event images were fed to the GA to remove redundant and irrelevant features. In the implemented GA, the initial generation was organized as a population of 50 chromosomes, each with 33 genes, encoded using binary encoding. The fitness of each chromosome was evaluated using the overall accuracy achieved by K-NN classification. The evaluation was carried out using 198 training samples of "Changed" and "Unchanged" areas. After computing the fitness value of each feature vector (chromosome), the fittest chromosome was moved to the next generation, and the rest of the chromosomes were used in the mutation and crossover operators with probabilities of 0.1 and 0.8, respectively. The configuration parameters of the applied GA are summarized in Table 4. As shown in this table, the termination condition was 200 generations; however, the GA converged after 40 generations in the Haiti area and 25 generations in the Bam area (Fig. 9).
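The GA loop described above (binary chromosomes of 33 genes, K-NN accuracy as fitness, one elite chromosome, crossover and mutation on the rest) can be sketched as follows; population size, generation count, and data are reduced and synthetic here, and single-point crossover is an assumption for the sketch.

```python
# Toy sketch of GA-based feature selection with a K-NN fitness function.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_feat = 33
y = rng.integers(0, 2, 200)
X = rng.normal(0, 1, (200, n_feat))
X[:, :5] += y[:, None] * 2.0            # only the first 5 features inform

def fitness(chrom):
    if chrom.sum() == 0:
        return 0.0                       # empty feature subsets are invalid
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, chrom.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (20, n_feat))   # binary chromosomes
for _ in range(15):                      # generations (200 in the paper)
    scores = np.array([fitness(c) for c in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[0]].copy()         # elitism: keep the best chromosome
    parents = pop[order[: len(pop) // 2]]
    children = []
    while len(children) < len(pop) - 1:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)    # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.1  # mutation probability 0.1
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([elite] + children)

best = pop[np.argmax([fitness(c) for c in pop])]
print(int(best.sum()), "features selected")
```

On this toy problem the GA tends to keep a small informative subset, mirroring the reduction from 33 to 9 (Haiti) and 14 (Bam) features reported below.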
Table 4. Parameters used in GA for selecting optimum features

GA Parameter             Value
Population size          50
Genome length            33
Elite count              1
Fitness function         KNN-based classification accuracy
Number of generations    200
Mutation probability     0.1
Crossover probability    0.8
Crossover                Unique crossover
Fig. 9. GA simulation diagram for selecting the optimum features; (a) Haiti Dataset; (b) Bam Dataset.
Finally, 9 features, including "GLCM- and GLDV-2nd Moment", "GLCM-2nd Moment (8/11)", "GLCM- and GLDV-Entropy", "GLCM-Entropy (8/11)", "GLCM- and GLDV-Homogeneity", and "GLCM-Homogeneity (8/11)", were selected as the optimum features for the Haiti dataset by the GA, and 14 features, including "GLCM-2nd Moment", "GLDV- and GLCM-Contrast", "GLCM-Homogeneity", "GLCM-Homogeneity (8/11)", "GLDV-Mean", "GLDV-Mean (8/11)", "GLCM-Dissimilarity", "GLCM-Std. Dev.", "GLCM-Std. Dev. (8/11)", "GLCM- and GLDV-Entropy", and "Maximum Difference", were selected for the Bam dataset. Tables 5 and 6 summarize the calculated features for two "Changed" and two "Unchanged" samples in the Haiti dataset. As presented in these tables, the difference values of most features are smaller for the "Unchanged" samples than for the "Changed" samples.
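Why such texture features separate intact roofs from debris can be illustrated with a minimal grey-level co-occurrence matrix (GLCM) sketch: a regular (here, uniform) patch has high homogeneity and low entropy, while an irregular debris-like patch does not. The patches, grey-level count, and single horizontal offset are assumptions of the sketch, not the paper's settings.

```python
# Sketch: GLCM homogeneity and entropy of a regular vs. an irregular patch.
import numpy as np

def glcm(patch, levels=64):
    """Symmetric, normalized co-occurrence matrix of horizontal neighbours."""
    p = np.zeros((levels, levels))
    a, b = patch[:, :-1].ravel(), patch[:, 1:].ravel()
    np.add.at(p, (a, b), 1)
    np.add.at(p, (b, a), 1)
    return p / p.sum()

def homogeneity(p):
    i, j = np.indices(p.shape)
    return np.sum(p / (1 + (i - j) ** 2))

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

rng = np.random.default_rng(3)
intact = np.full((16, 16), 32)                 # perfectly regular "roof"
debris = rng.integers(0, 64, (16, 16))         # irregular "debris"

h_pre, e_pre = homogeneity(glcm(intact)), entropy(glcm(intact))
h_post, e_post = homogeneity(glcm(debris)), entropy(glcm(debris))
print(h_pre - h_post, e_post - e_pre)          # both differences are large
```

The large Δf for the "changed" patch is exactly the behaviour exploited in Tables 5 and 6.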
Table 5. Pre- and post-event feature values for two samples of unchanged image-objects in the Haiti dataset.

                          Image-object 10488        Image-object 14673
Feature                   Pre    Post   Δf          Pre    Post   Δf
GLCM-Correlation          0.79   0.80   0.08        0.81   0.80   0.1
GLCM-Homogeneity          0.45   0.42   0.06        0.25   0.23   0.07
GLDV-2nd Moment           0.27   0.24   0.16        0.10   0.09   0.06
GLCM-Homogeneity (8/11)   0.45   0.42   0.01        0.25   0.22   0.50
GLCM-Entropy              0.63   0.63   0.01        0.88   0.83   0.09
GLDV-Entropy              0.36   0.41   0.01        0.59   0.63   0.10
GLDV-Entropy (8/11)       0.36   0.41   0.02        0.59   0.62   0.02
GLCM-Entropy (8/11)       0.63   0.63   0.01        0.89   0.82   0.08
GLCM-Correlation (8/11)   0.80   0.80   0.01        0.82   0.79   0.04
Table 6. Pre- and post-event feature values for two samples of changed image-objects in the Haiti dataset.

                          Image-object 1518         Image-object 6098
Feature                   Pre    Post   Δf          Pre    Post   Δf
GLCM-Correlation          0.67   0.76   0.26        0.50   0.76   0.21
GLCM-Homogeneity          0.56   0.20   0.54        0.60   0.26   0.30
GLDV-2nd Moment           0.34   0.09   0.48        0.36   0.13   0.44
GLCM-Homogeneity (8/11)   0.56   0.20   0.03        0.60   0.26   0.03
GLCM-Entropy              0.47   0.82   0.16        0.45   0.77   0.17
GLDV-Entropy              0.29   0.62   0.03        0.33   0.55   0.04
GLDV-Entropy (8/11)       0.29   0.61   0.08        0.33   0.55   0.03
GLCM-Entropy (8/11)       0.48   0.82   0.82        0.45   0.78   0.25
GLCM-Correlation (8/11)   0.67   0.76   0.26        0.53   0.76   0.24
For a better understanding of these features and their performance, the difference values of the corresponding samples are illustrated in Fig. 10. From this figure, it can be deduced that the unchanged areas show small differences, while the changed areas show large differences, between the pre- and post-event features.
Fig. 10. The performance of the selected pre- and post-event features on four samples of changed and unchanged segments within Haiti dataset; (a) and (b) pre- and post-event features on unchanged segments; (c) and (d) pre- and post-event features on changed segments.
After optimum feature selection, the classification algorithms were trained using training samples comprising 100 manually interpreted polygons. In addition, 98 polygons were considered as test data to evaluate the classification results. Fig. 11 shows the "Changed" and "Unchanged" training and test polygons selected from the sample datasets.
Fig. 11. Selected samples as training and test data overlying on the pre- and post-event HRSI; (a) and (b) Haiti dataset; (c) and (d) Bam dataset.
The collected training samples were used to train the implemented classification algorithms, including MD, SVM with three different kernel functions (linear, Gaussian, and polynomial), and ANN, for classifying the images into the two classes "Changed" and "Unchanged". In the applied MLP network, the number of input neurons, which equals the number of subtracted features, was set to 9 for the Haiti dataset and 14 for the Bam dataset, and 15 hidden neurons were considered for both datasets. Moreover, the three critical parameters of the implemented SVM algorithm, the penalty parameter C, the kernel width σ of the Gaussian kernel, and the degree P of the polynomial kernel, were tuned by comparing the overall accuracies obtained over different value ranges; C and P were set to 1 and 3, respectively, in both datasets. The resulting change maps of the implemented algorithms in the Haiti and Bam study areas are shown in Figs. 12 and 13, respectively. Visually evaluating the classified maps against the pre- and post-event images, it was observed that a number of changed/debris regions were incorrectly labelled as unchanged by RBFSVM, PSVM, ANN, and MD, whereas these regions were detected more accurately by LSVM.
Fig. 12. Change detection result obtained from several classification methods in Haiti dataset; (a) LSVM; (b) RBFSVM; (c) PSVM; (d) ANN; (e) MD.
Fig. 13. Change detection result obtained from several classification methods in Bam dataset; (a) LSVM; (b) RBFSVM; (c) PSVM; (d) ANN; (e) MD.
In the last step of damage assessment, the "Debris" objects in the "Changed" class must be separated from the "non-Debris" objects. During the training sample collection step, it was found that all "Changed" samples of the Bam dataset belonged to the "Debris" area, while in the Haiti area a number of "Changed" objects did not belong to the "Debris" area. Therefore, in the Haiti dataset, the objects of the "Changed" class in the classification with the highest overall accuracy were subdivided into the two classes "Debris" and "non-Debris" via another SVM classification. This classification was performed using appropriate training data comprising 90 image-objects (80 for the "Debris" class and 10 for the "non-Debris" class). The training data are unbalanced because almost all changes in the study area were related to damaged buildings rather than other changes (new structures, seasonal changes of tree canopies, etc.). Fig. 14 shows the final buildings damage map for both datasets.
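The final "Debris" / "non-Debris" split with its 80-versus-10 training imbalance can be sketched as below; using class_weight="balanced" to compensate for the imbalance is an assumption of this sketch (the paper does not state how, or whether, the imbalance was compensated), and the feature values are synthetic.

```python
# Sketch: binary SVM separating "Debris" from "non-Debris" changed objects
# with unbalanced training data (80 vs. 10 samples, as in the Haiti case).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X_debris = rng.normal(0.5, 0.1, (80, 9))   # 80 "Debris" training objects
X_other = rng.normal(0.1, 0.05, (10, 9))   # 10 "non-Debris" objects
X = np.vstack([X_debris, X_other])
y = np.array([1] * 80 + [0] * 10)          # 1 = Debris, 0 = non-Debris

clf = SVC(kernel="linear", C=1.0, class_weight="balanced").fit(X, y)
print(clf.predict(np.full((1, 9), 0.08)))  # low feature values
print(clf.predict(np.full((1, 9), 0.50)))  # high feature values
```

With balanced class weights, the minority "non-Debris" class still receives its own decision region despite contributing only 10 of the 90 samples.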
Fig. 14. Damage detection results obtained from the SBSC method based on the SVM classification; (a) Haiti area; (b) Bam area.
3.3. Accuracy Assessment

Although ground truth collected through field observation is always the best way to evaluate an algorithm, this was impossible for our datasets because of the time and location of the data, so the time-consuming process of visual interpretation was used instead. To evaluate the classification results quantitatively, the collected test data were used to calculate the overall accuracy, precision, recall, and F1-score, which are the most conventional measures for evaluating classification algorithms. Based on [84], precision is the percentage of the retrieved results that are relevant, recall is the percentage of all relevant results correctly classified by the algorithm, the F1-score is the harmonic mean of precision and recall, and the overall accuracy is the overall effectiveness of the classifier. Table 7 summarizes these parameters for the tested classification algorithms in both datasets.

Table 7. Summary of the accuracy assessment parameters of several classification methods for change detection.

                        Haiti dataset                        Bam dataset
Method    OA (%)  Precision (%)  Recall (%)  F1 (%)   OA (%)  Precision (%)  Recall (%)  F1 (%)
LSVM        94         96            86        91       93         97            90        93
RBFSVM      92         91            71        80       87         92            90        91
PSVM        85         86            85        85       90         94            89        91
ANN         89         82            82        82       89         94            86        90
MD          78         64            57        61       71         85            63        72
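The evaluation measures reported in Table 7 follow directly from the binary confusion matrix; a minimal sketch with illustrative counts (not the paper's) shows the computation.

```python
# Sketch: precision, recall, F1-score, and overall accuracy from a binary
# confusion matrix (tp/fp/fn/tn counts are illustrative only).
def evaluate(tp, fp, fn, tn):
    precision = tp / (tp + fp)                   # relevant share of detections
    recall = tp / (tp + fn)                      # detected share of relevants
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    overall_accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, overall_accuracy

p, r, f1, oa = evaluate(tp=86, fp=4, fn=14, tn=96)
print(round(p, 2), round(r, 2), round(f1, 2), round(oa, 2))  # 0.96 0.86 0.91 0.91
```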
According to Table 7, in terms of the accuracy assessment parameters, LSVM, with the highest overall accuracy, precision, recall, and F1-score, was more successful than ANN, MD, RBFSVM, and PSVM in both datasets. Since SVM is defined by a convex optimization problem, it delivers a unique solution, whereas the ANN algorithm has multiple solutions associated with local minima and may therefore not be robust over different samples. Moreover, LSVM outperformed RBFSVM and PSVM in both datasets, which may be because the training data of both datasets are close to linearly separable in the feature space. Although SVM can achieve good results by choosing an appropriate kernel function even for non-linearly separable data, applying the non-linear kernels (RBF and polynomial) is more complex than the linear kernel, and training them is more expensive. Therefore, selecting the training data based on the feature differences can be an advantage.
4. Discussion

4.1. Comparison with traditional post-classification algorithms

The effectiveness of the proposed SBSC damage assessment method was compared with traditional post-classification algorithms in both pixel-level and object-level modes, which have been applied to the Haiti dataset in a number of studies [85-87]. In this case, damaged buildings were detected by comparing the two independently classified pre- and post-event images obtained from MD, K-NN, and SVM. The same training samples were used in all algorithms to allow a fair comparison between post-classification and the proposed method. Furthermore, the textural features "GLCM-2nd Moment", "GLCM-Entropy", "GLCM-Homogeneity", "GLCM-Dissimilarity", and "GLCM-Mean" were used in the implemented post-classification algorithms. Fig. 15 shows the damage maps obtained with the traditional post-classification technique.
Fig. 15. Obtained damage maps with several classification methods in the Haiti dataset; (a) pixel-based SVM; (b) pixel-based MD; (c) object-based K-NN.
The comparison between the proposed SBSC method and the traditional post-classification algorithms in terms of overall accuracy, precision, recall, and F1-score is summarized in Table 8. According to this table, although object-based K-NN yields more accurate results than the pixel-based methods, the calculated overall accuracy, precision, recall, and F1-score show the superiority of the proposed SBSC method over traditional post-classification in both the pixel-based and object-based settings. Because the pixels covered by a segment share the same characteristics, the method can effectively improve the detection accuracy compared with the pixel-based methods. Furthermore, the precision and recall of 97% and 95% obtained on the Haiti dataset, and 97% and 90% obtained on the Bam dataset, indicate that SBSC can successfully detect most of the actually damaged buildings. Considering the obtained accuracy assessment parameters, the proposed SBSC method can be a useful tool for building damage detection and crisis management.

Table 8. The accuracy assessment parameters of the post-classification and proposed SBSC methods after damage detection.

Method                 OA (%)  Precision (%)  Recall (%)  F1 (%)
Pixel-based SVM          71         53            59        56
Pixel-based MD           67         44            32        37
Object-based K-NN        83         62            71        66
SBSC method (Haiti)      92         97            95        96
SBSC method (Bam)        93         97            90        93
To clarify the final damage detection results, a visual comparison of the implemented methods for three small areas is shown in Fig. 16. As shown in this figure, several undamaged pixels have been wrongly identified as debris in the damage maps obtained by the pixel-based MD and SVM. In this case, due to the lack of information about the shapes and relations of objects in pixel-based classification, the shadows of buildings were labelled as debris. Moreover, some debris was not detected by these pixel-based methods. According to Figs. 14 and 15, although false detections are significantly fewer in the damage map obtained by object-based K-NN classification, the proposed method exhibited better results by using identical image-objects. In addition, the classification errors caused by the presence of shadows have almost been eliminated by the proposed method.
Fig. 16. Comparison of the SBSC method with traditional post-classification techniques in three small regions of the Haiti study area. The six columns (left to right) are: (1) pre-event HRSI; (2) post-event HRSI; (3) building damage detection result of the SBSC; (4) result of the object-based K-NN; (5) result of the pixel-based SVM; (6) result of the pixel-based MD. Blue in the last four columns represents the undamaged class in each region, while cyan is the damaged class.
4.2. Comparison with other methods

A fair comparison between different change detection techniques is very difficult because the study area strongly affects the accuracy of the results. Therefore, the results of the proposed SBSC method were compared with those of other damage detection techniques that specifically studied the Haiti or Bam dataset, as summarized in Table 9. According to this table, the proposed method, based on the SBSC of identical image-objects, has achieved higher accuracy in building damage detection. This is because the segmentation intersection used to obtain identical image-objects improves the completeness and accuracy of the final results.
Table 9. Overall accuracy (OA) of other damage detection techniques on the Haiti and Bam datasets.

Reference                 Test Data                                                OA (%)
Ji et al. (2018)          Post-event QuickBird satellite image and aerial photo    78
Janalipour et al. (2017)  Pre-event map, post-event WorldView & QuickBird images   65-90
He et al. (2016)          LiDAR data, GIS data, DEM                                87.31
Cooner et al. (2016)      Pre-event WorldView and post-event QuickBird images      74-77
Rastiveis et al. (2015)   Post-event LiDAR data and pre-event vector map           91.5
Pham et al. (2014)        Pre- and post-event VHR optical images and LiDAR data    70-79
Miura et al. (2012)       Pre-event QuickBird and post-event WorldView images      70
Taskin et al. (2011)      Pre- and post-event QuickBird satellite image            81.4
Rezaian et al. (2010)     Pre- and post-event aerial images                        69
Since the proposed SBSC method operates in the same way in other regions, there was no need to apply it to a larger area; furthermore, many previous studies used test areas smaller than or equal to the one used in this study [43, 88, 89]. Nevertheless, more training data from across the study area would be needed for a more extended region.

4.3. Strengths of the SBSC method

Today, thanks to the diversity of remote sensing satellites, high-resolution images can be captured shortly after an earthquake. Because the algorithm is largely automatic, the building damage map can be prepared within 2-3 hours after receiving the data, depending on the extent of the area. However, many factors affect the time needed for damage map generation, including the time of access to the pre- and post-event images, the resolution of the images, the hardware used for data processing, the accuracy of the registration, the time spent gathering the training data, and the diversity of the buildings.

Examining the features selected by the GA, it can be concluded that textural features perform better than spectral features for change detection, and that it is difficult to determine the distribution of changed areas from intensity changes alone. Intact buildings have an orderly texture, while earthquakes destroy these regular arrangements; thus, the differences between pre- and post-earthquake texture features, as shown in Fig. 10, can clearly distinguish changed from unchanged areas.

According to Figs. 12 and 13, different classification algorithms produced different change detection results with the same training samples and features. Hence,
selecting an appropriate classification algorithm, and kernel function for SVM, is one of the main issues in achieving accurate results.

Another advantage of this algorithm is its user-friendliness: no remote-sensing expertise is needed to use it. Implementing the algorithm in a web-based tool would involve three steps: 1) loading the co-registered HRSI, 2) collecting training data, and 3) automatically calculating and displaying the final building damage map as the output. Moreover, thanks to recent advances in classification algorithms, the need to collect training data may gradually be eliminated as more powerful unsupervised algorithms become available.

4.4. Challenges and future works

Despite the higher overall accuracy of the proposed SBSC method, in some cases building segments are wrongly classified as debris due to the presence of various objects, such as pipes, air vents, and antennas, on building roofs. Given the sensitivity of textural features to this kind of heterogeneity, future studies may examine a wider range of features; nevertheless, in most cases the SBSC method classified the buildings correctly. Furthermore, the boundaries of individual buildings were not separately delineated in the final damage map; separating building boundaries, and subsequently specifying damage degrees, can be considered in future studies by using a vector map along with the images. Perhaps the major drawback of the SBSC method is the increase in the number of image-objects after segmentation intersection, which may reduce the processing speed. Thus, considering speed alongside accuracy can be one of the important issues for future studies.
Due to the fundamental problems of comparing dissimilar image-objects in previous studies, the primary aim of this research was to investigate the hypothesis that introducing identical image-objects on both pre- and post-event HRSI via segmentation intersection can improve damage map generation. The obtained results of the proposed SBSC method demonstrate that this hypothesis is indeed correct.
5. Conclusions and Remarks

In this paper, a new automatic method, segment-by-segment comparison (SBSC), was developed to identify damaged buildings using pre- and post-event high-resolution satellite images. The method was tested on two areas affected by the 2010 Haiti earthquake and the 2003 Bam earthquake. In the proposed method, the obtained image-objects are equalized through segmentation intersection to solve the problems of comparing dissimilar image-objects, and the optimum features for damage detection are selected using the genetic algorithm. Overall accuracy, precision, recall, and F1-score values of 92%, 97%, 95%, and 96% were obtained for the Haiti dataset, and 93%, 97%, 90%, and 93% for the Bam dataset, respectively. Based on these results, the proposed SBSC algorithm achieved higher overall accuracy than the other damage assessment techniques applied to the Haiti and Bam datasets. Moreover, the problems of comparing dissimilar image-objects in previous post-classification algorithms were solved using segmentation intersection. However, the method has some limitations: (1) the number of image-objects increases after segmentation intersection, which raises the computation cost; (2) textural features are sensitive to heterogeneity caused by objects on building roofs; (3) building boundaries are not specified in the final damage map. As future work, based on these limitations, it will be necessary to consider the speed of the proposed method alongside its accuracy. Since feature extraction plays an important role in classification accuracy, considering a wider range of features and testing other feature selection algorithms are strongly recommended; in this regard, using deep learning-based techniques to extract deep features may be effective.
Another future direction is testing the algorithm on various datasets with the same or different spatial resolutions. Finally, determining building boundaries and their damage levels can be another line of future work to improve the proposed SBSC method.
References
33
1. 2.
3.
4.
5. 6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
Geller, R.J., Earthquake prediction: a critical review. Geophysical Journal International, 1997. 131(3): p. 425-450. Cvetković, V.M., et al., Knowledge and Perception of Secondary School Students in Belgrade about Earthquakes as Natural Disasters. Polish journal of environmental studies, 2015. 24(4). Hosseinali, F., A.A. Alesheikh, and F. Nourian, Rapid Urban Growth in the Qazvin Region and Its Environmental Hazards: Implementing an Agent-Based Model. Polish Journal of Environmental Studies, 2014. 23(3). Erdik, M., et al., Rapid earthquake loss assessment after damaging earthquakes. Soil Dynamics and Earthquake Engineering, 2011. 31(2): p. 247266. www.jma.go.jp. The Earthquake Early Warning system. Visited 2020; Available from: https://www.jma.go.jp/jma/en/Activities/eew1.html. Allen, R.M., et al., The status of earthquake early warning around the world: An introductory overview. Seismological Research Letters, 2009. 80(5): p. 682-693. Chen, D.-Y., et al., An approach to improve the performance of the earthquake early warning system for the 2018 Hualien earthquake in Taiwan. Terr. Atmos. Ocean. Sci, 2019. 30: p. 423-433. Satriano, C., et al., PRESTo, the earthquake early warning system for southern Italy: Concepts, capabilities and future perspectives. Soil Dynamics and Earthquake Engineering, 2011. 31(2): p. 137-153. Vetrivel, A., et al., Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images. ISPRS journal of photogrammetry and remote sensing, 2015. 105: p. 61-78. Khodaverdi, N., H. Rastiveis, and A. Jouybari, Combination of PostEarthquake LiDAR Data and Satellite Imagery for Buildings Damage detection. Earth Observation and Geomatics Engineering, 2019. 3(1): p. 12-20. Wang, X. and P. Li, Extraction of urban building damage using spectral, height and corner information from VHR satellite images and airborne LiDAR data. ISPRS Journal of Photogrammetry and Remote Sensing, 2020. 159: p. 322-336. 
Zhou, Z., J. Gong, and X. Hu, Community-scale multi-level post-hurricane damage assessment of residential buildings using multi-temporal airborne LiDAR data. Automation in Construction, 2019. 98: p. 30-45. Seydi, S.T. and H. Rastiveis, A DEEP LEARNING FRAMEWORK FOR ROADS NETWORK DAMAGE ASSESSMENT USING POST-EARTHQUAKE LIDAR DATA. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 2019. XLII-4/W18: p. 955-961. Corbane, C., et al., Comparison of damage assessment maps derived from very high spatial resolution satellite and aerial imagery produced for the Haiti 2010 earthquake. Earthquake Spectra, 2011. 27(S1): p. S199-S218. Yamazaki, F., et al. Damage detection from high-resolution satellite images for the 2003 Boumerdes, Algeria earthquake. in 13th World Conference on Earthquake Engineering, International Association for Earthquake Engineering, Vancouver, British Columbia, Canada. 2004. Guida, R., A. Iodice, and D. Riccio. Monitoring of collapsed built-up areas with high resolution SAR images. in 2010 IEEE International Geoscience and Remote Sensing Symposium. 2010. IEEE.
34
17.
17. Wen, Q., et al., Automatic Building Extraction from Google Earth Images under Complex Backgrounds Based on Deep Instance Segmentation Network. Sensors, 2019. 19(2): p. 333.
18. Voigt, S., et al., Satellite image analysis for disaster and crisis-management support. IEEE Transactions on Geoscience and Remote Sensing, 2007. 45(6): p. 1520-1528.
19. Rezaeian, M. and A. Gruen, Automatic classification of collapsed buildings using object and image space features, in Geomatics Solutions for Disaster Management. 2007, Springer. p. 135-148.
20. Benz, U.C., et al., Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry and Remote Sensing, 2004. 58(3-4): p. 239-258.
21. Blaschke, T., Towards a framework for change detection based on image objects. Göttinger Geographische Abhandlungen, 2005. 113: p. 1-9.
22. Duro, D.C., S.E. Franklin, and M.G. Dubé, A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sensing of Environment, 2012. 118: p. 259-272.
23. Whiteside, T.G., G.S. Boggs, and S.W. Maier, Comparing object-based and pixel-based classifications for mapping savannas. International Journal of Applied Earth Observation and Geoinformation, 2011. 13(6): p. 884-893.
24. Li, Q., L. Gong, and J. Zhang, Earthquake-Induced Building Detection Based on Object-Level Texture Feature Change Detection of Multi-Temporal SAR Images. Boletim de Ciências Geodésicas, 2018. 24(4): p. 442-458.
25. Huang, H., et al., Combined multiscale segmentation convolutional neural network for rapid damage mapping from post-earthquake very high-resolution images. Journal of Applied Remote Sensing, 2019. 13(2): p. 022007.
26. Ranjbar, H.R., et al., Evaluation of physical data extraction of damaged buildings due to earthquake and proposing an algorithm using GIS and remote sensing layers. Scientific-Research Quarterly of Geographical Data, 2014. 23(91).
27. Turker, M. and E. Sumer, Building-based damage detection due to earthquake using the watershed segmentation of the post-event aerial images. International Journal of Remote Sensing, 2008. 29(11): p. 3073-3089.
28. Turker, M. and B. San, Detection of collapsed buildings caused by the 1999 Izmit, Turkey earthquake through digital analysis of post-event aerial photographs. International Journal of Remote Sensing, 2004. 25(21): p. 4701-4714.
29. Rastiveis, H., F. Samadzadegan, and P. Reinartz, A fuzzy decision making system for building damage map creation using high resolution satellite imagery. Natural Hazards and Earth System Sciences (NHESS), 2013. 13(1): p. 455-472.
30. Izadi, M., A. Mohammadzadeh, and A. Haghighattalab, A New Neuro-Fuzzy Approach for Post-earthquake Road Damage Assessment Using GA and SVM Classification from QuickBird Satellite Images. Journal of the Indian Society of Remote Sensing, 2017: p. 1-13.
31. Janalipour, M. and A. Mohammadzadeh, Building damage detection using object-based image analysis and ANFIS from high-resolution image (Case study: BAM earthquake, Iran). IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016. 9(5): p. 1937-1945.
32. Anniballe, R., et al., Earthquake damage mapping: An overall assessment of ground surveys and VHR image change detection after L'Aquila 2009 earthquake. Remote Sensing of Environment, 2018. 210: p. 166-178.
33. Dong, P. and H. Guo, A framework for automated assessment of post-earthquake building damage using geospatial data. International Journal of Remote Sensing, 2012. 33(1): p. 81-100.
34. Li, X., et al., An improved approach of information extraction for earthquake-damaged buildings using high-resolution imagery. Journal of Earthquake and Tsunami, 2011. 5(04): p. 389-399.
35. Dong, L. and J. Shan, A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS Journal of Photogrammetry and Remote Sensing, 2013. 84: p. 85-99.
36. Lu, D., et al., Change detection techniques. International Journal of Remote Sensing, 2004. 25(12): p. 2365-2401.
37. Weber, C. and A. Puissant, Urbanization pressure and modeling of urban growth: example of the Tunis Metropolitan Area. Remote Sensing of Environment, 2003. 86(3): p. 341-352.
38. Wu, C., L. Zhang, and L. Zhang, A scene change detection framework for multi-temporal very high resolution remote sensing images. Signal Processing, 2016. 124: p. 184-197.
39. Xu, L., et al., The comparative study of three methods of remote sensing image change detection. in 17th International Conference on Geoinformatics. 2009. IEEE.
40. Huyck, C.K., et al., Towards rapid citywide damage mapping using neighborhood edge dissimilarities in very high-resolution optical satellite imagery—Application to the 2003 Bam, Iran, earthquake. Earthquake Spectra, 2005. 21(S1): p. 255-266.
41. Li, P., et al., Urban building damage detection from very high resolution imagery using one-class SVM and spatial relations. in IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2009). 2009. IEEE.
42. Chini, M., F. Cinti, and S. Stramondo, Co-seismic surface effects from very high resolution panchromatic images: the case of the 2005 Kashmir (Pakistan) earthquake. Natural Hazards and Earth System Sciences, 2011. 11(3): p. 931-943.
43. Janalipour, M. and M. Taleai, Building change detection after earthquake using multi-criteria decision analysis based on extracted information from high spatial resolution satellite images. International Journal of Remote Sensing, 2017. 38(1): p. 82-99.
44. Ranjbar, H.R., et al., Using high-resolution satellite imagery to provide a relief priority map after earthquake. Natural Hazards, 2018. 90(3): p. 1087-1113.
45. Moya, L., et al., 3D gray level co-occurrence matrix and its application to identifying collapsed buildings. ISPRS Journal of Photogrammetry and Remote Sensing, 2019. 149: p. 14-28.
46. Endo, Y., et al., New Insights into Multiclass Damage Classification of Tsunami-Induced Building Damage from SAR Images. Remote Sensing, 2018. 10(12): p. 2059.
47. Sirmacek, B. and C. Unsalan, Damaged building detection in aerial images using shadow information. in 4th International Conference on Recent Advances in Space Technologies (RAST '09). 2009. IEEE.
48. Miura, H., S. Modorikawa, and S.H. Chen, Texture characteristics of high-resolution satellite images in damaged areas of the 2010 Haiti earthquake. in Proceedings of the 9th International Workshop on Remote Sensing for Disaster Response, Stanford, CA, USA. 2011.
49. Ma, J. and S. Qin, Automatic depicting algorithm of earthquake collapsed buildings with airborne high resolution image. in IEEE International Geoscience and Remote Sensing Symposium (IGARSS). 2012. IEEE.
50. Xiao, P., et al., Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition. ISPRS Journal of Photogrammetry and Remote Sensing, 2016. 119: p. 402-414.
51. Al-Khudhairy, D., I. Caravaggi, and S. Giada, Structural damage assessments from Ikonos data using change detection, object-oriented segmentation, and classification techniques. Photogrammetric Engineering & Remote Sensing, 2005. 71(7): p. 825-837.
52. Trinder, J. and M. Salah, Aerial images and LiDAR data fusion for disaster change detection. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., 2012. 1: p. 227-232.
53. Klonus, S., et al., Combined edge segment texture analysis for the detection of damaged buildings in crisis areas. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2012. 5(4): p. 1118-1128.
54. Jensen, J.R. and J. Im, Remote sensing change detection in urban environments, in Geo-spatial Technologies in Urban Environments. 2007, Springer. p. 7-31.
55. Lee, J. and T. Warner, Segment based image classification. International Journal of Remote Sensing, 2006. 27(16): p. 3403-3412.
56. Im, J., J.R. Jensen, and M.E. Hodgson, Object-based land cover classification using high-posting-density LiDAR data. GIScience & Remote Sensing, 2008. 45(2): p. 209-228.
57. Haralick, R.M. and L.G. Shapiro, Image segmentation techniques. Computer Vision, Graphics, and Image Processing, 1985. 29(1): p. 100-132.
58. Pal, N.R. and S.K. Pal, A review on image segmentation techniques. Pattern Recognition, 1993. 26(9): p. 1277-1294.
59. Bialas, J., et al., Object-based classification of earthquake damage from high-resolution optical imagery using machine learning. Journal of Applied Remote Sensing, 2016. 10(3): p. 036025.
60. Baatz, M., Multiresolution segmentation: an optimization approach for high quality multi-scale image segmentation. in Beiträge zum AGIT-Symposium, Salzburg, Heidelberg. 2000.
61. Definiens, eCognition Developer XD 2.0.4 Reference Book. 2012.
62. Bhalerao, A. and R. Wilson, Multiresolution image segmentation. 1991, University of Warwick.
63. Rezaeian, M., Assessment of earthquake damages by image-based techniques. Vol. 107. 2010: ETH Zurich.
64. Samadzadegan, F., M.J.V. Zoj, and M.K. Moghaddam, Fusion of GIS Data and High-Resolution Satellite Imagery for Post-Earthquake Building Damage Assessment. 2010.
65. Haralick, R.M. and K. Shanmugam, Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 1973(6): p. 610-621.
66. Wang, X., et al., Object-based change detection in urban areas from high spatial resolution images based on multiple features and ensemble learning. Remote Sensing, 2018. 10(2): p. 276.
67. Huang, C.-L. and C.-J. Wang, A GA-based feature selection and parameters optimization for support vector machines. Expert Systems with Applications, 2006. 31(2): p. 231-240.
68. Tian, J., et al., An Improved KPCA/GA-SVM Classification Model for Plant Leaf Disease Recognition. Journal of Computational Information Systems, 2012: p. 7737-7745.
69. Olague, G., Automated photogrammetric network design using genetic algorithms. Photogrammetric Engineering and Remote Sensing, 2002. 68(5): p. 423-431.
70. Li, P., et al., Genetic feature selection for texture classification. Geo-spatial Information Science, 2004. 7(3): p. 162-166.
71. Tsai, C.-F., W. Eberle, and C.-Y. Chu, Genetic algorithms in feature and instance selection. Knowledge-Based Systems, 2013. 39: p. 240-247.
72. Mather, P. and B. Tso, Classification Methods for Remotely Sensed Data. second ed. 2016: CRC Press.
73. Theodoridis, S. and K. Koutroumbas, Pattern Recognition. fourth ed. 2009: Academic Press.
74. Richards, J.A., Supervised classification techniques, in Remote Sensing Digital Image Analysis. 2013, Springer. p. 247-318.
75. Bakhary, N., H. Hao, and A.J. Deeks, Damage detection using artificial neural network with consideration of uncertainties. Engineering Structures, 2007. 29(11): p. 2806-2815.
76. Markou, M. and S. Singh, Novelty detection: a review—part 2: neural network based approaches. Signal Processing, 2003. 83(12): p. 2499-2521.
77. Scarselli, F. and A.C. Tsoi, Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results. Neural Networks, 1998. 11(1): p. 15-37.
78. Haykin, S.S., et al., Neural Networks and Learning Machines. Vol. 3. 2009: Pearson, Upper Saddle River.
79. Rahideh, A. and M.H. Shaheed, Cancer classification using clustering based gene selection and artificial neural networks. in The 2nd International Conference on Control, Instrumentation and Automation. 2011. IEEE.
80. Zhang, H., W. Shi, and K. Liu, Fuzzy-topology-integrated support vector machine for remotely sensed image classification. IEEE Transactions on Geoscience and Remote Sensing, 2012. 50(3): p. 850-862.
81. Mountrakis, G., J. Im, and C. Ogole, Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 2011. 66(3): p. 247-259.
82. ERDAS, ERDAS Imagine 2014. Hexagon Geospatial, 2014. https://hexagongeospatial.fluidtopics.net/reader/uOKHREQkd_XR9iPo9Y_Ijw/uem7mANllX7Ut01R2xxtQw.
83. Ma, L., et al., A review of supervised object-based land-cover image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 2017. 130: p. 277-293.
84. Sokolova, M. and G. Lapalme, A systematic analysis of performance measures for classification tasks. Information Processing & Management, 2009. 45(4): p. 427-437.
85. Ye, S., D. Chen, and J. Yu, A targeted change-detection procedure by combining change vector analysis and post-classification approach. ISPRS Journal of Photogrammetry and Remote Sensing, 2016. 114: p. 115-124.
86. Hoque, M.A.-A., et al., Assessing tropical cyclone impacts using object-based moderate spatial resolution image analysis: a case study in Bangladesh. International Journal of Remote Sensing, 2016. 37(22): p. 5320-5343.
87. Panuju, D.R., D.J. Paull, and B.H. Trisasongko, Combining Binary and Post-Classification Change Analysis of Augmented ALOS Backscatter for Identifying Subtle Land Cover Changes. Remote Sensing, 2019. 11(1): p. 100.
88. Tong, X., et al., Building-damage detection using pre- and post-seismic high-resolution satellite stereo imagery: A case study of the May 2008 Wenchuan earthquake. ISPRS Journal of Photogrammetry and Remote Sensing, 2012. 68: p. 13-27.
89. Nex, F., et al., Towards Real-Time Building Damage Mapping with Low-Cost UAV Solutions. Remote Sensing, 2019. 11(3): p. 287.
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.