Computer Communications 150 (2020) 150–157
Contents lists available at ScienceDirect
Computer Communications journal homepage: www.elsevier.com/locate/comcom
Detection of flood disaster system based on IoT, big data and convolutional deep neural network

M. Anbarasan a, BalaAnand Muthu b,∗, C.B. Sivaparthipan c, Revathi Sundarasekar d, Seifedine Kadry e, Sujatha Krishnamoorthy f, Dinesh Jackson Samuel R. g, A. Antony Dasel h

a Sri Sairam Institute of Technology, India
b V.R.S. College of Engineering & Technology, India
c SNS College of Technology, India
d Anna University, India
e Beirut Arab University, Lebanon
f Wenzhou Kean University, China
g Vellore Institute of Technology University, India
h School of Mechanical Engineering (SMEC), VIT, India
ARTICLE INFO

Keywords:
Hadoop distributed file system (HDFS)
Convolutional deep neural network (CDNN)
Normalization
Rule generation
Missing value imputation
ABSTRACT

Natural disasters can be defined as a blend of natural risks and vulnerabilities. Each year, natural as well as human-instigated disasters bring about infrastructural damage, distress, revenue losses and injuries, in addition to a huge death toll. Researchers around the globe are trying to find a solution to gather, store and analyse Big Data (BD) in order to predict results for flood-based prediction systems. This paper proposes ideas and methods for the detection of flood disasters based on IoT, BD, and a convolutional deep neural network (CDNN) to overcome such difficulties. First, the input data is taken from the flood BD. Next, repeated data are reduced using HDFS MapReduce (). After removal of the repeated data, the data are pre-processed using missing value imputation and a normalization function. Then, centred on the pre-processed data, rules are generated using a combination-of-attributes method. At the last stage, the generated rules are provided as input to the CDNN classifier, which classifies them as (a) chances for the occurrence of flood and (b) no chances for the occurrence of a flood. The outcomes obtained from the proposed CDNN method are compared on parameters like Sensitivity, Specificity, Accuracy, Precision, Recall and F-score. Moreover, when the outcomes are compared with other existing algorithms like the Artificial Neural Network (ANN) and the Deep Learning Neural Network (DNN), the proposed system gives more accurate results than the other methods.
1. Introduction

Disaster management aspires to alleviate the possible damage from disasters, make certain of instant and appropriate aid to the victims, and achieve effectual and also rapid recovery [1]. The chief characteristics of natural disasters are randomness, availability of only partial resources in impacted regions, and also dynamic changes in the surroundings [2]. Unpredictability entails that strict impacts on populace and property amid natural disasters cannot be envisaged with satisfactory accuracy [3]. In addition, the upshots of disasters are considerably worse when they happen in urban places on account of the casualties and the degree of the damage caused to goods as well as property [4]. Floods are common natural disasters which can happen in any city [5]. As this disaster is regarded as hazardous to human life, an effectual countermeasure or alert system ought to be implemented to notify individuals in the early phase, in order that security precautions can be considered to shun any calamity [6]. IoT as well as BD-centred alert systems have emerged in recent years. In the IoT, numerous objects which enclose the people will be in the network in one form or another [7]. Hence, IoT devices are employed to gather data and recognize hazards subsequent to disasters, and also to localize wounded persons [8]. Even though IoT technologies cannot impede the occurrence of the disaster, they may well be remarkably valuable apparatus for conveying disaster-readiness together with counteractive-action data, for instance, disaster prediction and also early-warning systems [9]. In tragedy management, IoT technologies
∗ Corresponding author. E-mail addresses:
[email protected] (M. Anbarasan),
[email protected] (B. Muthu),
[email protected] (C.B. Sivaparthipan),
[email protected] (R. Sundarasekar),
[email protected] (S. Kadry),
[email protected] (S. Krishnamoorthy),
[email protected] (D.J. Samuel R.),
[email protected] (A.A. Dasel).
https://doi.org/10.1016/j.comcom.2019.11.022 Received 22 August 2019; Received in revised form 2 November 2019; Accepted 13 November 2019 Available online 19 November 2019 0140-3664/© 2019 Elsevier B.V. All rights reserved.
provide benefits in respect of monitoring, tracking, controlling along with sensing the environment utilizing instantaneous data [10]. BD is stated as the technical paradigm which permits researchers to perform an effectual analysis of the huge sums of data made available via present practices [11]. The BD Framework for Disaster Management comprises '3' stages: (i) Data Acquisition, (ii) Data Computation, and (iii) Data Interpretation [12]. BD might be characterized as encompassing '4' dimensions, among them Data Volume, gauging the quantity of data available, with typical datasets occupying numerous terabytes, and Data Velocity, a gauge of the rate of data streaming, creation, in addition to aggregation [13]. Centred on research carried out by the Institute of Environmental Studies, over sixty per cent of the world's cities will be susceptible to flood in the next thirty years owing to the effects of climate change (such as sea-level rise) [14]; such floods can be detected early, with the aid of IoT and BD, in terms of the chances of occurrence and non-occurrence of flood.

The paper presented below consists of the following sections: the second section analyses the earlier works associated with the proposed method, the third section briefly explores the methodologies which have been proposed, the fourth section discusses the outcomes contrasted with existing methods, and the last section interprets the overall outcomes of the paper.

2. Related work

Shifeng Fang et al. [15] suggested an integrated approach to snowmelt flood early warning centred on geo-informatics (remote sensing, GIS (geographical information system), GPS (global positioning system), etc.), the Internet of Things (IoT) and also cloud services. In the preliminary step, the IIS (Integrated Information System) was developed, which had the capability to offer basic functions in addition to services to users or errands. The outcomes illustrated that the procedures of early warning along with snow-melt flood simulation benefited significantly. The approach encompassed a few limits: (i) standardization and operationalization of IoT, (ii) data formats and data standards, (iii) BD management, and (iv) information system compatibility.

Sandeep K. Sood et al. [16] recommended a social combined IoT-grounded smart flood monitoring as well as forecasting structural design with the convergence betwixt BD and High-Performance Computing. It categorized the geographical sites into hexagonal geometrical structures of different sizes to identify suitable locations to fix IoT devices. After that, social network scrutiny was utilized with the intention of energy saving, and subsequently the dimensionality was lessened by means of SVD. Then, the K-Means clustering algorithm was employed to classify the insight of flood into '5' disparate levels. Holt–Winter's process would envisage the flood rating. The approach employed fewer sensors.

Prachatos Mitra et al. [17] presented an IoT and machine-learning-centred embedded system to envisage the likelihood of floods on a river basin. The model utilized customized mesh network connections aimed at the Wireless Sensor Networks (WSNs) to gather data, in addition to GPRS to transmit the data via the internet. Here, the datasets were evaluated with the aid of an artificial neural network (ANN). The appended upshots of the analysis illustrated substantial enhancement over the presently existent methods.

Prabodh Sakhardande et al. [18] created a system of inter-connected smart modules as a means to facilitate centralized data acquisition and also offer an inter-linked network for data transmission in the shortage of any existent infrastructure. Importance was shown on how the sensing and communication technologies of IoT might efficiently be utilized in smart city scrutinizing and disaster management. The approach encompassed manifold Wi-Fi-facilitated modules which jointly shared the distributed and heterogeneous resources, data and capabilities given by means of physical objects, for instance sensors in addition to actuators.

Tzu-Husan Lin and Der-Cherng Liaw [19] proposed a centralized intelligent internet-of-things-based disaster management platform with integration of data for monitoring disasters, emergency-response coordination, along with disaster information management. This platform brings together Wireless Sensor Networks with an information exchange web service and a mobile-phone-based disaster notification application, with the help of an IoT-centred data storage and exchange structure, which remains usable even when sensors are improved or replaced utilizing traditional methods.

Gangyan Xu et al. [20] illustrated the notion of a cloud asset centred upon mobile agents, cloud computing, as well as a range of smart devices for flood control in urban regions. A cloud asset alluded to a physical asset which was augmented with the ability of communicating, identification, sensing, reasoning and acting, and which could be stocked, evolved, controlled, and even shared via the cloud. It aided in realizing automatic instantaneous data collection in addition to remote control of the asset, and facilitated its visibility along with traceability via an asset-centric solution that was economic and also feasible under the particular perspective of urban flood control.

Dalibor et al. [21] proposed an ideology for flood observation and notification. The proposal consists of a solar-powered long-range sensor module provided with ultrasonic sensors. The system was implemented in two cities of Japan to sense and record the water level, and operates on self-power. The design is done with the help of a MaxBotix ultrasonic sensor, which has a range of up to ten meters. By maintaining the standards of the Japanese River Bureau, which falls under the Ministry of Land, this setup can be converted into a flood detection setup. The paper also mentioned the communication linkages between the ultrasonic sensor and a generic EnOcean sensor, and presented the discharge data measured over huge distances with long-range low-power communication.
3. Flood disaster detection system based on IoT, big data and CDNN

Nations worldwide are concerned about natural disasters. Disasters often happen in the vicinity of human livelihood. Mostly, they are natural (for instance, earthquakes, floods, landslides, forest fires, tsunamis, and also lightning) or man-made (for instance, industrial explosions, leakage in an oil pipeline, leakage in gas production, as well as terrorist attacks). Early detection of a disaster will prevent people from encountering it. Fig. 1a gives a clear picture of the proposed block diagram of the Flood Disaster Detection System.

In this paper, the detection of flood is focused around the Internet of Things and Big Data with the proposed CDNN. It has two phases: the training phase and the testing phase. The first phase starts with the training of the system for the proposed CDNN for flood detection; the flood-related big data is provided as input to the system, which encompasses sensed information like Water Flow (WF), Water Level (WL), Rain Sensor (RS), Humidity (HM), etc., followed by the Hadoop Distributed File System and MapReduce process. The next process is normalization, followed by function generation using the attributes. The combined attribute function is then given as input to the CDNN-based classifier, which divides the flood detection into chances and no chances. Proceeding to the testing phase, here the input provided for the CDNN process is taken from the sensed IoT values, whereas the remaining process is conducted similarly to the training phase, i.e., HDFS, pre-processing and classification of data. Fig. 1b shows the proposed methodology.

3.1. Training phase

3.1.1. HDFS MapReduce ()

Initially, the 'Hadoop Distributed File System' is processed on the BD. This technique is a dependable way of managing voluminous BD. It also buttresses quick data transfer betwixt nodes. At its outset, it is closely coupled with 'MapReduce'.
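The two-phase flow just described (sensed WF/WL/RS/HM records, de-duplication, pre-processing, then classification) can be sketched as follows. This is a minimal illustration under our own assumptions: all names are hypothetical, the paper publishes no code, and the threshold rule merely stands in for the CDNN classifier.

```python
# Hypothetical sketch of the training/testing flow; not the authors' code.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensedRecord:
    wf: float  # Water Flow
    wl: float  # Water Level
    rs: float  # Rain Sensor
    hm: float  # Humidity

def run_pipeline(records, classify):
    """Both phases share the same stages: dedup -> pre-process -> classify."""
    deduped = list(dict.fromkeys(records))   # stands in for HDFS MapReduce ()
    # missing value imputation, normalization and rule generation would go
    # here (Sections 3.1.2-3.1.3)
    return [classify(r) for r in deduped]

preds = run_pipeline(
    [SensedRecord(2.0, 5.1, 0.8, 0.9), SensedRecord(2.0, 5.1, 0.8, 0.9)],
    classify=lambda r: "flood" if r.wl > 4.0 else "no flood",  # toy stand-in
)
```

Note that the duplicate record is collapsed before classification, mirroring the role the paper assigns to the MapReduce step.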
The function of this tool is to
Fig. 1a. Block diagram of flood disaster detection system.
Fig. 1b. Block diagram for the proposed methodology.
take away the repeated information existent in the flood BD. HDFS automatically distributes the files across clusters and retrieves data via file name. HDFS does not change a file once it is written; therefore, if any changes have to be made, the whole file must be rewritten. HDFS has two phases for solving a query, namely the map function and the reduce function, which are expressed as follows,

F_s = [M_f, R_f]  (1)

Where F_s denotes the flood data, M_f indicates the mapping function and R_f represents the reducing function.

(a) Map () Function
The preliminary function existent in the Map/Reduce tool is the map() function. This function prevails on the master node (MN). It segments the input data or the processes into numerous small sub-processes. These sub-processes are additionally scattered to the worker nodes that operate on those tiny processes. Then an acknowledgement is delivered to the MN.

M_f = map(O)  (2)

Wherein M_f implies the output of the map() function, map() is the function which performs the mapping and O denotes the original data, which is alluded to as the flood BD.

(b) Reduce () Function
The next important function in the Hadoop tool is the reduce() function. This function assembles the comprehensive sub-operation results, which are then combined for the generation of aggregated decision-centred outcomes delivered as an acknowledgement of the original big demands. The reduce function is denoted as exhibited in the subsequent mathematical equation,

R_f = reduce(M_f)  (3)
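A minimal single-process analogue of Eqs. (1)-(3) can make the de-duplication concrete: map() emits (record, 1) pairs from the flood BD O, and reduce() collapses repeated keys so each sensed record survives exactly once. Function names are illustrative only; a real deployment would run on Hadoop rather than in one process.

```python
# Toy analogue of HDFS MapReduce () dedup; not the paper's implementation.
from collections import defaultdict

def map_fn(original):                 # M_f = map(O)
    for record in original:
        yield (record, 1)             # emit one key-value pair per record

def reduce_fn(mapped):                # R_f = reduce(M_f)
    groups = defaultdict(int)
    for key, count in mapped:
        groups[key] += count          # worker nodes would each handle a shard
    return list(groups)               # one copy per distinct record

flood_bd = [("WF", 2.1), ("WL", 5.0), ("WF", 2.1)]   # repeated reading
deduped = reduce_fn(map_fn(flood_bd))                # F_s = [M_f, R_f]
```

The repeated ("WF", 2.1) reading is kept only once, which is exactly the role the text gives the reduce step.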
Where M_f is the mapped function's output, reduce() is the function that reduces the components and R_f is the reduced set of data.

3.1.2. Pre-processing

After reducing the repeated data, pre-processing is done in this phase. Pre-processing aims to clean and rearrange the dataset. In the proposed technique, the pre-processing step has '2' steps, namely missing value imputation in addition to normalization, which are explained below.

(a) Missing value imputation
The dataset has some missing values, which are replaced by the mean value of the neighbouring non-missing values. That is mathematically expressed as follows,

O_i = (O_{i-1} + O_{i+1}) / 2,  i ∈ N  (4)

Where O_i is the missing value, O_{i-1} represents the value preceding the missing value, O_{i+1} represents the value subsequent to the missing value, and N denotes the natural numbers (explicitly, N = 1, 2, 3, …).

(b) Normalization
In this sub-phase, the data is scaled to fit into a particular range. Though there are numerous sorts of normalization available, Min–Max normalization is taken here. The Min–Max normalization is utilized for fixing a particular range for the data: it transforms a value O to N_R that fits in the gamut [0, 1]. It is provided by the below Eq. (5),

N_R = ((O − O_Min) / (O_Max − O_Min)) ∗ (1 − 0) + 0  (5)

Where N_R signifies the normalized value, 0 and 1 imply the range, O_Min is the minimum value and O_Max denotes the maximum value.

3.1.3. Rule generation

After the pre-processing, the rules are generated grounded on the combination of attributes. The flood BD has four sensed values from the four sensors, namely, Water Flow sensor data (WF), Water Level sensor data (WL), Rain Sensor data (RS), and Humidity data (HM). Here, the rules are generated centred on the blend of these attributes. The blends of attributes are expressed as follows,

L_1 = {WF, WL, RS, HM}  (6)
L_2 = {(WF, WL), (WF, RS), (WF, HM), (WL, RS), (WL, HM), (RS, HM)}  (7)
L_3 = {(WF, WL, RS), (WF, WL, HM), (WF, RS, HM), (WL, RS, HM)}  (8)
L_4 = {(WF, WL, RS, HM)}  (9)

Where L_1, L_2, L_3 and L_4 are the run lengths of the data, and the total rule generation is labelled as L.

3.1.4. Classification utilizing CDNN

The generated rules are abridged by utilizing the convolutional process, and after that the filtered data is inputted to the Deep Neural Network (DNN) for identification of flood occurrence centred on the data. Among the advantages of the CDNN, the data can be compared in unstructured format without any data labelling on it. Here, first the convolutional process is performed with the aim of augmenting the proposed system's speed. Initially, the convolutional process is explicated. A CNN has a few concepts termed parameter sharing as well as local connectivity. Parameter sharing is the sharing of weights by the entire set of neurons in a specific feature map (FM). Local connectivity is the notion of every neural layer being connected only to a subset of the input data; this aids in lessening the number of parameters in the entire system and makes the computation more effective.

The convolutional layer (CL) aspires to learn feature depictions of the inputs. Specifically, every neuron of an FM is joined to an area of neighbouring neurons on the preceding layer. This neighbourhood is alluded to as the neuron's receptive field on the prior layer. A new FM can be attained by first convolving the input with a learned kernel and after that implementing an element-wise non-linear activation function (AF) on the convolved outcomes. To make every FM, the kernel is shared by the entirety of the input's spatial locations. The entire set of FMs is attained by utilizing several disparate kernels. Mathematically, the feature value in the kth FM of the lth layer, q_k^l, is gauged by:

q_k^l = w_k^l L^l + b_k^l  (10)

Wherein w_k^l and b_k^l signify the weight factor and bias term of the kth filter of the lth layer correspondingly, and L^l implies the input data of the lth layer. The kernel w_k^l which makes the feature map q_k^l is shared. Such a weight-sharing mechanism encompasses numerous advantages; for instance, it can lessen the model intricacy and also make the network easier to train. The AF introduces non-linearities to the CNN that are enviable for multi-layer networks to detect non-linear features. Let n_a(·) indicate the non-linear AF. The activation value (n_a)_k^l of the convolutional feature q_k^l can be calculated as:

(n_a)_k^l = n_a(q_k^l)  (11)

The pooling layer intends to attain shift-invariance via lessening the FM's resolution. It is typically placed betwixt '2' CLs. Every FM of a pooling layer is joined to its equivalent FM of the previous CL. Signifying the pooling function as p(·), for every feature map (n_a)_k^l it has:

y_k^l = p((n_a)_k^l)  (12)

The typical pooling operations are average pooling in addition to max pooling. Then, the filtered combination attributes are inputted to the DNN. The functions of the DNN are explained in the below steps and the architecture diagram of the proposed CDNN is displayed in Fig. 2. In the Deep Learning (DL) algorithm, a five-layer neuron model was used. Initially, the weights of the input features from the CNN are assumed arbitrarily. The output of a hidden node will be the sum of the products of the input values and the weight vectors of all input nodes linked to it. Activation is implemented, and the output, which is the input to the succeeding layer, is obtained. The DL neural network adopts a forward activation flow of outputs and backward error propagation for weight alteration. The DL neural network classifier normally consists of layers such as the CL, pooling, along with the fully connected layer. The last output decision of the DNN model is centred on the weights and biases of the prior layers in the network structure. Thus, the weights and biases of the model are updated with Eqs. (13) and (14) correspondingly for every layer,

ΔW_l = −(xλ/r) W_l − (x/n) (∂C/∂W_l) + m ΔW_l(t)  (13)

ΔB_l = −(x/n) (∂C/∂B_l) + m ΔB_l(t)  (14)

Wherein W, x, B, m, λ, l, n, t, and C signify the weight, learning rate, bias, momentum, regularization parameter, layer number, total number of training samples, updating step, together with the cost function, in that order. The AF of the DNN is,

y_i = ΔB_l + Σ Z_j ΔW_l  (15)

Where y_i represents the output unit and Z_j denotes the hidden unit, which is mathematically expressed as follows,

Z_j = ΔB_l + Σ L W_ij  (16)

Where W_ij signifies the weight betwixt the input and hidden layers. The proposed CDNN's pseudocode is exhibited in Fig. 3.
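Stepping back to Sections 3.1.2-3.1.3, the pre-processing and rule-generation steps lend themselves to a short sketch: mean-of-neighbours imputation as in Eq. (4), min-max scaling as in Eq. (5), and the attribute blends L_1..L_4 of Eqs. (6)-(9). The function names are ours, not the paper's.

```python
# Hedged sketch of pre-processing and attribute combination; illustrative only.
from itertools import combinations

def impute(values):
    """Replace a missing value (None) by the mean of its two neighbours, Eq. (4)."""
    out = list(values)
    for i, v in enumerate(out):
        if v is None and 0 < i < len(out) - 1:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

def min_max(values):
    """Scale into [0, 1] as in Eq. (5)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def rule_sets(attrs=("WF", "WL", "RS", "HM")):
    """L_k = all k-wise blends of the four sensed attributes, Eqs. (6)-(9)."""
    return {k: list(combinations(attrs, k)) for k in range(1, 5)}

cleaned = min_max(impute([2.0, None, 6.0, 10.0]))   # None -> (2+6)/2 = 4.0
```

With the toy series above, imputation yields [2, 4, 6, 10] and scaling maps it to [0.0, 0.25, 0.5, 1.0]; the pairwise set L_2 has the six blends listed in Eq. (7).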
Fig. 2. Architecture for the CDNN classifier.

Table 1
Comparative tabulation for performance of CDNN along with DNN & ANN.

(a) Accuracy (Ac), Specificity (Sy), Sensitivity (Se)

Number of data | Proposed CDNN (Ac / Sy / Se) | Existing DNN (Ac / Sy / Se) | Existing ANN (Ac / Sy / Se)
100 | 78.63 / 77.57 / 78.78 | 69.90 / 68.45 / 72.32 | 67.80 / 69.90 / 69.34
200 | 82.72 / 80.34 / 81.56 | 73.89 / 71.38 / 72.90 | 72.79 / 71.14 / 73.26
300 | 86.84 / 83.67 / 84.56 | 78.67 / 76.89 / 77.52 | 76.25 / 72.45 / 74.67
400 | 89.71 / 87.45 / 88.24 | 82.40 / 81.67 / 82.35 | 83.35 / 76.89 / 81.23
500 | 93.23 / 91.43 / 91.56 | 85.90 / 86.30 / 85.73 | 84.76 / 80.34 / 83.57

(b) Precision (Pn), Recall (Rl), F-Measure (Fe)

Number of data | Proposed CDNN (Pn / Rl / Fe) | Existing DNN (Pn / Rl / Fe) | Existing ANN (Pn / Rl / Fe)
100 | 79.34 / 78.90 / 79.11 | 69.21 / 68.45 / 68.82 | 67.85 / 68.75 / 68.29
200 | 82.57 / 81.52 / 82.04 | 72.90 / 73.29 / 73.09 | 71.89 / 72.43 / 72.15
300 | 85.29 / 84.89 / 85.08 | 76.36 / 76.43 / 76.39 | 74.23 / 75.69 / 74.95
400 | 88.20 / 88.21 / 88.20 | 82.87 / 82.60 / 82.73 | 79.90 / 78.65 / 79.27
500 | 92.23 / 90.36 / 91.28 | 86.45 / 85.89 / 86.16 | 83.83 / 81.67 / 82.73
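The scores reported in Table 1 derive from confusion-matrix counts. Using the standard definitions (Eqs. (17)-(22) in Section 4.2), the six measures can be computed from TP, TN, FP and FN as follows; the counts below are toy values, not the paper's data.

```python
# Standard confusion-matrix metrics, matching Eqs. (17)-(22).
def scores(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)                # Eq. (17)
    specificity = tn / (tn + fp)                                 # Eq. (18), TNR
    sensitivity = tp / (tp + fn)                                 # Eq. (19), TPR
    precision   = tp / (tp + fp)                                 # Eq. (20)
    recall      = sensitivity                                    # Eq. (21)
    f_measure   = 2 * precision * recall / (precision + recall)  # Eq. (22)
    return accuracy, specificity, sensitivity, precision, recall, f_measure

# e.g. 90 TP, 85 TN, 15 FP, 10 FN out of 200 test records (toy counts)
acc, spec, sens, prec, rec, f1 = scores(90, 85, 15, 10)
```

Note that sensitivity and recall are the same quantity, which is why the paper's sensitivity and recall columns track each other closely.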
Fig. 3. Pseudocode for the CDNN.
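Fig. 3 gives the CDNN pseudocode; as a numeric illustration of the update rules in Eqs. (13)-(14) (gradient descent with weight decay λ and momentum m), the following hedged sketch uses toy gradients and reads the decay denominator r as the sample count n, since r is not defined in the text.

```python
# Illustrative weight/bias update per Eqs. (13)-(14); toy values throughout.
def update(w, b, dC_dw, dC_db, prev_dw, prev_db,
           x=0.1, lam=1e-4, m=0.9, n=100):
    # Eq. (13): decay term, gradient term, momentum term
    dw = -(x * lam / n) * w - (x / n) * dC_dw + m * prev_dw
    # Eq. (14): no decay on the bias
    db = -(x / n) * dC_db + m * prev_db
    return w + dw, b + db, dw, db

w, b, dw, db = 0.5, 0.1, 0.0, 0.0
for grad_w, grad_b in [(2.0, 1.0), (1.5, 0.5)]:   # two toy training steps
    w, b, dw, db = update(w, b, grad_w, grad_b, dw, db)
```

With positive toy gradients, both parameters drift downward, and the momentum term makes the second step larger than the first, which is the intended effect of m in Eq. (13).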
3.2. Testing phase

In the testing phase, the input data is taken from the four sensor values from the dam, namely, Water Flow sensor data (WF), Water Level sensor data (WL), Rain sensor data (RS), and Humidity (HM). Next, the repeated values are reduced using HDFS MapReduce (), which is explained in Section 3.1.1. Then, the data is pre-processed using the missing value imputation and normalization functions, which are mathematically derived in Eqs. (4) and (5). Next, the pre-processed data is given as input to the classifier CDNN, which classifies (a) the chances of occurrence of flood disaster and (b) no chances of occurrence of the flood disaster. The CDNN is explained in Section 3.1.4.

4. Result and discussion

4.1. Database description

The proposed method utilizes NEXRAD (Next-Generation Radar) data together with NOAA (National Oceanic and Atmospheric Administration). NEXRAD is a group of 160 high-resolution Doppler weather radars operated by the NOAA National Weather Service. Doppler radars effectively detect winds and atmospheric precipitation, which lets scientists anticipate and track weather events like ice pellets, rain, snow, tornadoes and hail, as well as certain non-weather objects, say, insects and birds.

4.2. Performance analysis

Here, the proposed CDNN classifier's performance is weighed against the existent DNN and ANN in respect of recall, sensitivity, F-Measure, accuracy, precision, together with specificity. The parameters that are calculated are the true negative (TN), false positive (FP), true positive (TP) along with false negative (FN) values. Comparative scrutiny of the proposed CDNN with the existent DNN and ANN is shown in Table 1.

Discussion: The above tabulation shows the performance of the CDNN classifier compared with the DNN & ANN for the following tests, namely accuracy, specificity, sensitivity, recall, precision & F-measure, respectively. The overall indication shows that the existing DNN and ANN
Fig. 4. Accuracy comparison graph for proposed CDNN and existing DNN and ANN.
Fig. 5. Specificity comparison graph for the proposed CDNN with the existing DNN and ANN.
Fig. 6. Sensitivity performance plotter chart for CDNN along with DNN & ANN.
Fig. 7. Precision comparison graph for the proposed CDNN with DNN and ANN.
have underperformed when compared with the proposed CDNN.

(a) Comparison based on Accuracy (Ac)
Accuracy is basically the measure of correctness of the flood detection, and it is given as,

Accuracy = (TP + TN) / (TP + FP + FN + TN)  (17)

Discussion: Fig. 4 illustrates the accuracy performance of the proposed CDNN against the existing DNN and ANN. The performance varies centred upon the number of data, which ranges from 100 to 500. When the number of data is 100, the proposed CDNN achieves 78.63% accuracy, but the existent DNN obtains 69.90% and the ANN 67.80%, which are low when weighed against the proposed CDNN classifier. Similarly, the proposed CDNN classifier attains higher accuracy for the remaining data counts (200, 300, 400, and 500). Thus, it is inferred that the proposed CDNN provides better accuracy when weighed against the other existent classifiers.

(b) Comparison based on Specificity (Sy)
The specificity, or true negative rate (TNR), is stated as the proportion of negatives identified correctly. The specificity measure is written as follows,

Specificity = TN / (TN + FP)  (18)

Discussion: Fig. 5 compares the CDNN's performance with the existent DNN and ANN based on the specificity measure. The existing ANN produces the worst performance when weighed against the existent DNN and the proposed CDNN. The specificity measure varies centred on the number of data. For 500 data, the proposed CDNN has 91.43% specificity, but the existent DNN achieves 86.30% and the existing ANN 80.34%, which are lower than the proposed CDNN classifier. Then, for 100, 200, 300, and 400 data, the proposed CDNN has 77.57%, 80.34%, 83.67% and 87.45% specificity, respectively, which is higher than the existent classifiers. Hence, it proves that the proposed CDNN provides better performance when weighed against the existing classifiers.

(c) Comparison based on Sensitivity (Se)
The sensitivity, or true positive rate (TPR), is stated as the proportion of positives identified correctly. The sensitivity measure is written as,

Sensitivity = TP / (TP + FN)  (19)

Discussion: Fig. 6 demonstrates the proposed CDNN's performance against the existent DNN and ANN based on sensitivity. When the number of data is 400, the CDNN, DNN, and ANN achieved 88.24%, 82.35%, and 81.23% sensitivity, respectively. Here, the proposed CDNN encompasses a better sensitivity score. Similarly, for all the remaining data counts, the proposed CDNN obtained higher performance than the existent methods. It proves that the proposed CDNN classifier provides better performance when weighed against the existent classifiers.

(d) Comparison based on Precision (Pn)
Precision measures the number of attributes from the solution that are right according to the data and is stated as follows:

Precision = TP / (TP + FP)  (20)

Discussion: Fig. 7 depicts the graph of precision compared between the CDNN, DNN, and ANN models. The Y axis consists of the number of data, numbered from 100 to 500, and the X axis consists of the percentage of precision. The percentage of precision in the proposed method is higher when compared to the other two existing methods. The measured precision percentages of the CDNN are 79.34%, 82.57%, 85.29%,
5. Conclusion The current development of BD and the IoT technologies generate a vast opportunity for disaster management systems as well as disasterassociated authorities (emergency responders, public health, police, as well as fire departments) to attain top-notch assistance and enhanced insights for accurate and also timely decision-making. This paper proposed detection of flood disaster system centred on IoT, BD and CDNN classifier. In the initial stage, the repeated values are removed using HDFS MapReduce () and then the data is pre-processed. Next, the rules are generated centred on the combination of attributes. Then, the combination of attributes is given as input to the CDNN classifier which classifies them as (a) chances for the occurrence of flood and (b) no chances for the occurrence of a flood. The proposed system’s performance is analysed centred on the number of data. The proposed system’s performance is compared with the existent systems namely, DNN and ANN in respect of precision, accuracy, recall, F-Measure, specificity, and sensitivity. The compared outcome gives a clear picture that the CDNN algorithm has comparatively high accuracy level than the existing method. The percentage of outcomes achieved with a data count of 500 with CDNN is Accuracy (93.23%), Sensitivity (91.43%), Specificity (91.56%), Precision (92.23%), Recall (90.36%) and F-Score (91.28%). The above mentioned outcomes are higher with DNN & ANN models. To conclude, the flood detection system has outperformed other best methods prevailing in the market and in future the presented work can be enhanced with IoT based devices with even more longer range of sensors with decreased cost with futuristic algorithms used in every stage of the flood detection process.
Fig. 8. Recall graph for the proposed classifier and existing classifiers.
Declaration of competing interest Fig. 9. Comparison of F-measure for the CDNN method.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
88.20%, and 92.23% respectively. Therefore the proposed system performs much better than the already existing one.
References
(e) Comparison based on Recall ( 𝑅𝑙 ) Recall measures the number of attributes of the data that are correctly retrieved by the proposed solution and is stated as follows: 𝑟𝑒𝑐𝑎𝑙𝑙 =
𝑇𝑃 𝑇𝑃 + 𝐹𝑁
[1] Akash Sinha, Prabhat Kumar, Nripendra P. Rana, Rubina Islam, Yogesh K. Dwivedi, Impact of internet of things (IoT) in disaster management: a task-technology fit perspective, Ann. Oper. Res. (2017) 1–36.
[2] Suleyman Celik, Sitki Corbacioglu, Role of information in collective action in dynamic disaster environments, Disasters 34 (1) (2010) 137–154.
[3] Heri Sutanta, Ian D. Bishop, A.R. Rajabifard, Integrating spatial planning and disaster risk reduction at the local level in the context of spatially enabled government, 2010.
[4] Gustavo Furquim, Roozbeh Jalali, Gustavo Pessin, Richard Pazzi, Jó Ueyama, How to improve fault tolerance in disaster predictions: a case study about flash floods using IoT, ML and real data, Sensors 18 (3) (2018) 907.
[5] Aziyati Yusoff, Intan Shafinaz Mustafa, Salman Yussof, Norashidah Md Din, Green cloud platform for flood early detection warning system in smart city, in: 5th National Symposium on Information Technology: Towards New Smart World (NSITNSW), IEEE, 2015, pp. 1–6.
[6] E. Shalini, S. Subbulakshmi, P. Surya, R. Thirumurugan, Cooperative flood detection using SMS through IoT, Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 5 (3) (2016).
[7] Jayavardhana Gubbi, Rajkumar Buyya, Internet of things (IoT): A vision, architectural elements, and future directions, Future Gener. Comput. Syst. 29 (7) (2013) 1645–1660.
[8] Azzedine Boukerche, Rodolfo W.L. Coutinho, Smart disaster detection and response system for smart cities, in: IEEE Symposium on Computers and Communications (ISCC), IEEE, 2018, pp. 01102–01107.
[9] Nur-adib Maspo, Aizul Nahar Harun, Masafumi Goto, Mohd Nasrun Mohd Nawi, Nuzul Azam Haron, Development of internet of thing (IoT) technology for flood prediction and early warning system (EWS), Int. J. Innov. Technol. Explor. Eng. 8 (2018) 219–228.
[10] Azimah Abdul Ghapar, Salman Yussof, A. Bakar, Internet of things (IoT) architecture for flood data management, Int. J. Future Gener. Comput. Syst. 11 (1) (2018) 55–62.
[11] Ibrahim Abaker Targio Hashem, Ibrar Yaqoob, Nor Badrul Anuar, Salimah Mokhtar, Abdullah Gani, Samee Ullah Khan, The rise of big data on cloud computing: review and open research issues, Inf. Syst. 47 (2015) 98–115.
Discussion: Fig. 8 exhibits the recall comparison graph for the CDNN against the DNN and ANN. When the number of data is 300, the proposed CDNN attains 84.89% recall, which is higher than that of the existent DNN (76.43%) and ANN (75.69%) classifiers. The existing classifiers provide lower performance when weighed against the proposed CDNN; here, the ANN performs worse than both the DNN and the proposed CDNN. Thus, it is inferred that the proposed CDNN achieves better performance when contrasted with the existing classifiers.
(f) Comparison based on F-Measure (F_e)
F_e implies the harmonic mean of the recall and the precision, and it is mathematically stated as follows:

F-Measure = 2 ∗ (P_n ∗ R_l) / (P_n + R_l)    (22)
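Eq. (22) can be sketched as a small helper function; `precision` and `recall` stand for P_n and R_l expressed as fractions, and the function name is illustrative rather than part of the proposed system:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as in Eq. (22)."""
    if precision + recall == 0.0:
        return 0.0  # avoid division by zero when both metrics are zero
    return 2.0 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, the result is pulled toward the smaller of the two inputs, so a classifier cannot compensate for poor recall with high precision alone.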
Discussion: Fig. 9 compares the proposed CDNN classifier with the existent classifiers, namely the DNN and ANN, based on F-Measure. The classifier's performance varies centred on the number of data, and the F_e metric is grounded on the precision and recall measures. When the number of data is 300, the CDNN classifier attains an 85.08% F-Measure, which is higher than that of the existent DNN (76.39%) and ANN (74.95%) classifiers. For every data count, the proposed CDNN provides a higher F-Measure, which shows that the proposed CDNN has better performance than the existent classifiers.
[12] Mohammad Fikry Abdullah, Mardhiah Ibrahim, Harlisa Zulkifli, Big data analytics framework for natural disaster management in Malaysia, in: International Conference on Internet of Things, Big Data and Security, Vol. 2, 2017, pp. 406–411.
[13] Dontas Emmanouil, Doukas Nikolaos, Big data analytics in prevention, preparedness, response and recovery in crisis and disaster management, in: The 18th International Conference on Circuits, Systems, Communications and Computers (CSCC), in: Recent Advances in Computer Engineering Series, vol. 32, 2015, pp. 476–482.
[14] Patrick J. Ward, W.P. Pauw, M.W. Van Buuren, Muh Aris Marfai, Governance of flood risk management in a time of climate change: the cases of Jakarta and Rotterdam, Environ. Polit. 22 (3) (2013) 518–536.
[15] Shifeng Fang, Lida Xu, Yunqiang Zhu, Yongqiang Liu, Zhihui Liu, Huan Pei, Jianwu Yan, Huifang Zhang, An integrated information system for snowmelt flood early-warning based on internet of things, Inf. Syst. Front. 17 (2) (2015) 321–335.
[16] Sandeep K. Sood, Rajinder Sandhu, Karan Singla, Victor Chang, IoT, big data and HPC based smart flood management framework, Sustain. Comput.: Inf. Syst. 20 (2018) 102–117.
[17] Prachatos Mitra, Ronit Ray, Retabrata Chatterjee, Rajarshi Basu, Paramartha Saha, Sarnendu Raha, Rishav Barman, Saurav Patra, Suparna Saha Biswas, Sourav Saha, Flood forecasting using internet of things and artificial neural networks, in: IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference, 2016, pp. 1–5.
[18] Prabodh Sakhardande, Sumeet Hanagal, Savita Kulkarni, Design of disaster management system using IoT based interconnected network with smart city monitoring, in: 2016 International Conference on Internet of Things and Applications, IEEE, 2016, pp. 185–190.
[19] Tzu-Husan Lin, Der-Cherng Liaw, Development of an intelligent disaster information-integrated platform for radiation monitoring, Nat. Hazards 76 (3) (2015) 1711–1725.
[20] Gangyan Xu, George Q. Huang, Ji Fang, Cloud asset for urban flood control, Adv. Eng. Inform. 29 (3) (2015) 355–365.
[21] Dalibor Purkovic, Lee Coates, Marian Hönsch, Dirk Lumbeck, Frank Schmidt, Smart river monitoring and early flood detection system in Japan developed with the EnOcean long range sensor technology, in: 2019 2nd International Colloquium on Smart Grid Metrology, SMAGRIMET, IEEE, 2019, pp. 1–6.