Detection of multiple-mix-attack malicious nodes using perceptron-based trust in IoT networks

Liang Liu (a), Zuchao Ma (a), Weizhi Meng (b,*)

(a) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
(b) Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
(*) Corresponding author.

Article history: Received 13 January 2019; Received in revised form 27 April 2019; Accepted 10 July 2019; Available online 18 July 2019.

Keywords: IoT network; Malicious node; Trust management; Perceptron learning; K-means method; Insider attack
Abstract

The Internet of Things (IoT) has experienced rapid growth in the last few years, allowing different Internet-enabled devices to interact with each other in various environments. Due to their distributed nature, IoT networks are vulnerable to various threats, especially insider attacks, so there is a significant need to detect malicious nodes in a timely manner. Intuitively, large damage can be caused in IoT networks if attackers conduct a set of attacks collaboratively and simultaneously. In this work, we investigate this issue and first formalize a multiple-mix-attack model. Then, we propose an approach called Perceptron Detection (PD), which uses both a perceptron and the K-means method to compute IoT nodes' trust values and detect malicious nodes accordingly. To further improve the detection accuracy, we optimize the routing of the network and design an enhanced perceptron learning process, named Perceptron Detection with Enhancement (PDE). The experimental results demonstrate that PD and PDE can detect malicious nodes with a higher accuracy rate than similar methods, i.e., improving the detection accuracy of malicious nodes by around 20% to 30%.
1. Introduction

The Internet of Things (IoT) has become a popular infrastructure to support many modern applications and services, such as smart homes, smart healthcare, public security, industrial monitoring and environmental protection. Most existing smart devices can work collaboratively and construct multihop IoT networks. These devices could be either sensors that collect information from their surroundings or control units that gather information from sensors to make suitable decisions. In addition, these devices can use various IoT protocols [1,2] to transfer their data, including ZigBee, WiFi, Bluetooth, etc.

The topology of a multihop IoT network is flexible but also fragile, i.e., it suffers from many insider threats, where an attack can be launched from within the network. For example, attackers can compromise some devices in a multihop IoT network and then utilize these devices to infer sensitive information, tamper with data, or launch a drop attack or a denial-of-service (DoS) attack. Therefore, it is very important to design an effective security mechanism for detecting malicious nodes in an IoT network.

Motivation. Most existing studies mainly focus on a single and unique attack in an IoT environment, but an advanced attacker may choose an intelligent strategy to behave maliciously, i.e., they
may manipulate specific packets with a certain probability [3–5]. More importantly, we notice that practical intruders can perform several attacks at the same time. Thus, in this work, we consider a stronger and more advanced attacker, who can illegally control some nodes in an IoT network and perform a multiple-mix-attack that combines three malicious actions with a probability, namely tampering with data, dropping packets, and sending duplicated packets. In practice, these malicious actions can be performed either simultaneously or separately, making the attacker even more difficult to detect.

Contributions. In this work, we first formalize attack models for the tamper attack, drop attack, replay attack and multiple-mix-attack, respectively. For detection, as it is not easy to predict the probability of each malicious action, we choose to use a perceptron to help detect malicious nodes. In particular, the perceptron can adjust the detection model according to its input, and we can collect more targeted information to enhance the perceptron's learning and achieve better detection performance. Subsequently, we propose two approaches, Perceptron Detection (PD) and Perceptron Detection with Enhancement (PDE), to identify malicious nodes. Based on the reputation of a path and the trustworthiness of a node, the former aims to detect malicious nodes in IoT networks by using a perceptron, while the latter attempts to further improve the detection accuracy. More specifically, we first inject some packets into the IoT network and collect the packets transferred in the network. Based on the collected information, we use the perceptron to calculate
the reputation of all nodes and cluster the nodes into three groups: a benign group (BG), an unknown group (UG) and a malicious group (MG). Then we change the routing of transmitted packets and increase the number of injected packets to collect more information about the nodes in UG, i.e., their influence on the network. We then feed this information to the perceptron again in order to enhance its learning process and obtain the final output, i.e., the trust values of nodes. Finally, we cluster all nodes into two groups: a final benign group (FBG) and a final malicious group (FMG). Experimental results indicate that our approach can detect malicious nodes with high accuracy and stability, i.e., improving the detection rate by 20% to 30% as compared with a similar method, Hard Detection (HD).

Organization. The remaining parts of this article are organized as follows. Section 2 presents related work on how to detect security threats like malicious nodes. Section 3 formalizes the tamper attack, drop attack, replay attack and multiple-mix-attack. Section 4 introduces our approaches of Perceptron Detection (PD) and Perceptron Detection with Enhancement (PDE). Section 5 describes our experimental environment and discusses the evaluation results. Finally, Section 6 concludes our work.

2. Related work

Nowadays, the Internet of Things (IoT) has become a popular research topic due to its wide adoption and sustainable development. Farooq et al. [6] examined IoT security across four layers: the perception layer, network layer, middleware layer and application layer. They defined three security goals: data confidentiality, data integrity and data availability. There are several security challenges in the network layer, such as the Sybil attack, sinkhole attack, sleep deprivation attack, denial-of-service (DoS) attack, malicious code injection, man-in-the-middle attack and so on. Security in IoT networks can be further classified into authentication, routing security and data privacy.

In this work, we mainly consider three typical insider attacks: the tamper attack, drop attack and replay attack. The tamper attack is one of the most serious internal attacks, where malicious nodes along a multihop path modify the received packets (randomly or with specific malicious goals) before they arrive at their destination [7,8]. If malicious nodes successfully tamper with the data, they can influence the IoT function, i.e., leading to a wrong decision. In healthcare IoT, wrong decisions often cause irreparable disasters. The second internal attack we focus on is the drop attack, where malicious nodes drop the received packets (randomly or with specific malicious goals) to prevent these packets from reaching the destination [8]. Since channel errors happen from time to time in WLANs, advanced attackers can disguise malicious drops as channel errors. The third internal attack is the replay attack, where malicious nodes send the received packets repeatedly to cause excess data flow, which can consume link bandwidth and mislead network functions [9,10].

Trust-based detection. To defeat insider (or internal) attacks, building a proper trust-based mechanism is an effective approach [11]. Probst and Kasera [12] introduced how to detect malicious sensor nodes and minimize their impact on applications by building trust management among sensor nodes. They proposed a method to compute statistical trust values and a confidence interval around the trust based on nodes' behavior. Wang et al.
[13] presented IDMTM, a trust-based mechanism for mobile ad hoc networks. Their approach evaluates trust and identifies a malicious node by using two metrics, Evidence Chain (EC) and Trust Fluctuation (TF), which can reduce false alarms by analyzing the information collected from both local nodes and neighboring nodes. Zahariadis et al. [14] developed a routing protocol (ATSR)
that scales with the network dimensions based on the geographical routing principle. It detects malicious neighbors using a distributed trust model based on both direct and indirect trust information. Several other related studies can be found in [15–25].

ML-based detection. Machine learning (ML) is a common and powerful means of detecting malicious attacks [26–29]. Kaplantzis et al. [26] used a support vector machine (SVM) to detect attacks but did not identify malicious insider nodes. Similarly, Akbani et al. [27] used an SVM to identify malicious devices under the restriction that each node is one hop away from a trusted device; however, this restriction is difficult to guarantee in practice. Nahiyan et al. [28] proposed using K-means clustering to classify statistical data tuples into benign and malicious. Dromard et al. [29] designed an unsupervised incremental grid clustering method to identify abnormal flows. The above studies demonstrate that machine learning can be very helpful in detecting malicious attacks in networks, and that K-means clustering is a reliable solution for classifying nodes, since artificially choosing a threshold to distinguish malicious nodes from all nodes may reduce the detection accuracy. Motivated by these observations, in this work we select the perceptron and K-means clustering to help identify malicious nodes in IoT networks.

Advanced attacks and our focus. Most current research studies focus on a single attack and seldom consider an advanced attacker who can launch several types of attacks at the same time. Such a mixed and coordinated attack is much more difficult to detect in a timely manner and has a larger potential to cause severe damage. In this work, we consider a multiple-mix-attack model that combines three attacks with uncertain probabilities. We advocate the establishment of trust mechanisms to help identify malicious nodes.

The first problem is how to express the trustworthiness of a node and how to build its reputation based on the collected data. Trust should be estimated according to the behavior of a node. Recently, several research studies have investigated data provenance, which can be used as evidence of malicious behavior. Sultana et al. [30] considered data provenance to be a key factor in evaluating the trustworthiness of sensor data and proposed a lightweight scheme to securely transmit provenance for sensor data. Wang et al. [31] observed that the size of provenance might increase with the number of nodes traversed by network packets and proposed a dictionary-based provenance scheme. They further identified that the provenance may expand too fast as the number of packet transmission hops increases [32]. To address this issue, they proposed a provenance encoding technique based on a dynamic Bayesian network and an overlapped arithmetic coding scheme. Ametepe et al. [33] considered data provenance in a distributed environment. These studies on data provenance provide technical support for analyzing the behavior of nodes.

The second challenge is how to analyze the behavior of malicious nodes. Liu et al. [34] proposed several algorithms based on unsupervised learning for detecting malicious nodes in multihop IoT networks, which motivates this work. However, they assumed that the attack probabilities of different nodes in the same path are equal to each other. This is not always valid, which may affect the detection accuracy and even cause accumulated errors.
In this work, we relax this assumption and consider more general attack models. Based on the previous work [34], we propose a new approach named Perceptron Detection (PD) to detect malicious nodes in IoT networks, and we further design an enhanced learning process for PD, named Perceptron Detection with Enhancement (PDE), to calculate the trust values of nodes and improve the detection accuracy.
3. Attack model

This section formalizes different attack models, including the tamper attack, drop attack, replay attack and multiple-mix-attack.

3.1. Node model

As mentioned earlier, we consider stronger malicious nodes that can launch several attacks simultaneously, i.e., by combining the tamper attack, drop attack and replay attack. As shown in Fig. 1, we assume that a node N can be represented by the following equation:

N = {P_TA, P_DA, P_RA}    (1)

where P_TA is the probability of node N making a tamper attack, P_DA is the probability of node N making a drop attack, and P_RA is the probability of node N making a replay attack. For a benign node, P_TA, P_DA and P_RA are all zero, while for a malicious node at least one of them is a positive number.

Fig. 1. The node model.

3.2. Send-set model and Receive-set model

We observe the tamper attack, drop attack and replay attack according to the changes that data packages undergo during transmission from the source (the node that sends the packages) to the destination (the node that receives them). To describe the data status after passing a node, we denote the set of delivered packages as the Send-set (Ss), as shown in Fig. 2:

Ss = {packs_1, packs_2, packs_3, ..., packs_n}    (2)

Correspondingly, all received packages form the Receive-set (Rs):

Rs = {packr_1, packr_2, packr_3, ..., packr_m}    (3)

Fig. 2. The Send-set model and Receive-set model.

3.3. Path model

The path of a package can be represented as:

Path = ⟨node_1, node_2, node_3, ..., node_n⟩    (4)

where, if a package A arrives at the destination with Path = ⟨node_1, node_2, node_3, ..., node_n⟩, it means that A was delivered through node_1, node_2, node_3, ..., node_n in sequence (as shown in Fig. 3).

Fig. 3. The path model.

3.4. Tamper attack model

The tamper attack indicates the situation where malicious nodes along a multihop path modify the received packages (randomly or with specific malicious goals) before they arrive at their destination. To formalize this type of attack, if a package passes a malicious node N, we have the following:

{pack_i} --f_TA--> {pack_i′} (pack_i′ ≠ pack_i)  with probability N.P_TA
{pack_i} --f_TA--> {pack_i′} (pack_i′ = pack_i)  with probability (1 − N.P_TA)    (5)

where N.P_TA is the probability of node N making a tamper attack (mentioned in (1)) and f_TA denotes the formalization of the tamper attack. Then, if some packages pass a malicious node N, we have:

Rs = {pack_1, pack_2, pack_3, ..., pack_n} --f_TA--> Ss = {pack_1′, pack_2′, pack_3′, ..., pack_n′}    (6)

pack_i = pack_i′ if pack_i is not tampered;  pack_i ≠ pack_i′ if pack_i is tampered    (7)

where Rs is the Receive-set (refer to (3)) of the node, Ss is the Send-set (refer to (2)) of the node, and f_TA is the formalization of the tamper attack. For example, if Rs = {pack_1, pack_2, pack_3} passes a malicious node A which tampers pack_2, then A sends Ss = {pack_1, pack_2′, pack_3} to other nodes (pack_2 ≠ pack_2′). Since Rs ∩ Ss contains exactly the packages that are not tampered by the malicious node, we theoretically have the following (for a malicious node N):

1 − N.P_TA = (the size of Rs ∩ Ss) / (the size of Rs)    (8)

where N.P_TA is the probability of node N making a tamper attack (refer to (1)). If there is a multihop path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), where the Rs of N_1 is denoted as Rsp and the Ss of N_n is denoted as Ssp, then we have:

∏_{i=1}^{n} (1 − N_i.P_TA) = (1 − N_1.P_TA) ∗ (1 − N_2.P_TA) ∗ (1 − N_3.P_TA) ∗ ··· ∗ (1 − N_n.P_TA) = (the size of Rsp ∩ Ssp) / (the size of Rsp)    (9)

Fig. 4. Tamper attack.

As a result, based on Fig. 4, we can detect that tamper attacks have happened when (the size of Rsp ∩ Ssp) / (the size of Rsp) ≠ 1.
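The ratio test in Eqs. (8)-(9) can be illustrated with a few lines of Python. This is a minimal sketch of our own: it assumes packages are hashable payloads that can be compared by content, and the helper name and data layout below are not from the paper.

# A minimal sketch of Eqs. (8)-(9): estimate the untampered fraction of a path
# from the injected packages (Rsp) and the packages received at the Sink (Ssp).
# Assumption of ours: packages are hashable values comparable by content.

def untampered_ratio(rsp, ssp):
    """Return (size of Rsp ∩ Ssp) / (size of Rsp), i.e. prod_i (1 - N_i.P_TA)."""
    rsp, ssp = set(rsp), set(ssp)
    if not rsp:
        raise ValueError("Rsp must not be empty")
    return len(rsp & ssp) / len(rsp)

# Example: 3 injected packages, one tampered along the way.
rsp = {"p1", "p2", "p3"}
ssp = {"p1", "p2'", "p3"}            # p2 was modified to p2'
ratio = untampered_ratio(rsp, ssp)   # 2/3
tamper_detected = ratio != 1.0       # True -> a tamper attack happened on this path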
3.5. Drop attack model

The drop attack indicates the situation where malicious nodes drop the received packages randomly or with specific malicious goals. To formalize this kind of attack, we assume a package passes a malicious node N and have the following:

{pack_i} --f_DA--> {}        with probability N.P_DA
{pack_i} --f_DA--> {pack_i}  with probability (1 − N.P_DA)    (10)

where N.P_DA is the probability of node N making a drop attack (refer to (1)) and f_DA is the formalization of the drop attack. Then, if some packages pass a malicious node, we have:

Rs = {pack_1, pack_2, pack_3, ..., pack_n} --f_DA--> Ss = {pack_x1, pack_x2, pack_x3, ..., pack_xr}    (11)

{x1, x2, x3, ..., xr} ⊆ {1, 2, 3, ..., n};  i ∈ {x1, x2, x3, ..., xr} if pack_i is not dropped;  i ∉ {x1, x2, x3, ..., xr} if pack_i is dropped    (12)

where Rs is the Receive-set (refer to (3)) of the node, Ss is the Send-set (refer to (2)) of the node, and f_DA is the formalization of the drop attack. For example, if Rs = {pack_1, pack_2, pack_3} passes a malicious node A which drops pack_2, then A sends Ss = {pack_1, pack_3} to other nodes (pack_2 is dropped). Since Rs ∩ Ss contains exactly the packages that are not dropped by the malicious node, we theoretically have the following (for a malicious node N):

1 − N.P_DA = (the size of Rs ∩ Ss) / (the size of Rs)    (13)

where N.P_DA is the probability of node N making a drop attack (refer to (1)). If there is a multihop path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), where the Rs of N_1 is denoted as Rsp and the Ss of N_n is denoted as Ssp, then we have:

∏_{i=1}^{n} (1 − N_i.P_DA) = (1 − N_1.P_DA) ∗ (1 − N_2.P_DA) ∗ (1 − N_3.P_DA) ∗ ··· ∗ (1 − N_n.P_DA) = (the size of Rsp ∩ Ssp) / (the size of Rsp)    (14)

Fig. 5. Drop attack.

Thus, as shown in Fig. 5, we can identify that drop attacks have happened when (the size of Rsp ∩ Ssp) / (the size of Rsp) ≠ 1.

3.6. Replay attack model

The replay attack describes the situation where malicious nodes send the received packages repeatedly to inject excess data flow, with the purpose of consuming link bandwidth and compromising network functions. If a package goes through a malicious node N, we have the following:

{pack_i} --f_RA--> {pack_i, pack_icp1, pack_icp2, ...}  with probability N.P_RA
{pack_i} --f_RA--> {pack_i}                             with probability (1 − N.P_RA)    (15)

where N.P_RA is the probability of node N making a replay attack (refer to (1)), f_RA is the formalization of the replay attack, and pack_icpj is replayed from pack_i. If some packages pass a malicious node, then we have:

Rs = {pack_1, pack_2, ..., pack_n} --f_RA--> Ss = {pack_1, pack_1cp1, pack_1cp2, ..., pack_1cpm, pack_2, ..., pack_n}    (16)

¬∃ pack_icpj in Ss if pack_i is not replayed;  ∃ pack_icpj in Ss if pack_i is replayed    (17)

where Rs is the Receive-set (refer to (3)), Ss is the Send-set (refer to (2)), f_RA is the formalization of the replay attack, and pack_1cpi represents a copy of pack_1. For example, if Rs = {pack_1, pack_2, pack_3} passes a malicious node A that replays pack_2 once, then A sends Ss = {pack_1, pack_2, pack_2cp1, pack_3} to other nodes (pack_2 is replayed). There is a copy-relationship named CR between pack_i and pack_icpj, and we have:

Ss/CR = {[pack_1]_CR, [pack_2]_CR, [pack_3]_CR, ..., [pack_n]_CR}    (18)

[pack_i]_CR = {pack_i, pack_icp1, pack_icp2, ..., pack_icpm}    (19)

Let {pack_i | the size of [pack_i]_CR = 1} represent all the packages that were not replayed by the malicious node; then we have the following (for a malicious node N):

1 − N.P_RA = (the size of {pack_i | the size of [pack_i]_CR = 1}) / (the size of Rs)    (20)

where N.P_RA is the probability of node N making a replay attack (refer to (1)). If there is a multihop path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), where the Rs of N_1 is denoted as Rsp and the Ss of N_n is denoted as Ssp, and we know all [pack_i]_CR of Ssp, then we have:

∏_{i=1}^{n} (1 − N_i.P_RA) = (1 − N_1.P_RA) ∗ (1 − N_2.P_RA) ∗ ··· ∗ (1 − N_n.P_RA) = (the size of {pack_i | the size of [pack_i]_CR = 1}) / (the size of Rsp)    (21)

Fig. 6. Replay attack.

Thus, as shown in Fig. 6, we can detect that a replay attack has happened when (the size of {pack_i | the size of [pack_i]_CR = 1}) / (the size of Rsp) ≠ 1.
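The copy-relationship bookkeeping of Eqs. (18)-(21) can be sketched as follows. This is our own illustration: it assumes every replayed copy can be traced back to its original (e.g., it carries the same identifier or content), which the paper does not prescribe.

# A small sketch of Eqs. (18)-(21): group the packages received at the Sink by the
# original they were copied from (the CR relation), then compute the fraction of
# injected packages with exactly one representative, i.e. prod_i (1 - N_i.P_RA).
# Assumption of ours: a replayed copy carries the same identifier as its original.

from collections import Counter

def non_replayed_ratio(rsp_ids, ssp_ids):
    """rsp_ids: identifiers injected into the path; ssp_ids: identifiers seen at the Sink."""
    counts = Counter(ssp_ids)                       # size of [pack_i]_CR for each original
    not_replayed = sum(1 for pid in rsp_ids if counts[pid] == 1)
    return not_replayed / len(rsp_ids)

rsp_ids = ["p1", "p2", "p3"]
ssp_ids = ["p1", "p2", "p2", "p3"]                  # p2 was replayed once
ratio = non_replayed_ratio(rsp_ids, ssp_ids)        # 2/3
replay_detected = ratio != 1.0                      # True -> a replay attack happened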
3.7. Multiple-mix-attack model

Most existing studies mainly consider a single-attack scenario, but in fact these attacks may happen at the same time, resulting in a more complicated attack model that is difficult to analyze. For example, if there are two or more malicious nodes in the same path which perform two different attacks, e.g., a tamper attack and a drop attack (assuming these nodes do not cooperate), the ultimate influence of the tamper attack may be affected by the drop attack (i.e., tampered packages may be dropped by other malicious nodes).
In this condition, it is hard to analyze the probability of these attacks based solely on their damage or influence, because these attacks may affect each other.

In this part, we show how to formalize such a multiple-mix-attack (as shown in Fig. 7). When a package passes a malicious node, we assume that the malicious node can perform a tamper attack with probability P_TA, a drop attack with probability P_DA and a replay attack with probability P_RA, respectively. We further assume that a malicious node conducts at most one type of attack on one package. Then we have the following:

Fig. 7. Multiple-mix-attack.

{pack_i} --f_MMA--> {pack_i′} (pack_i′ ≠ pack_i)          with probability N.P_TA
{pack_i} --f_MMA--> {}                                     with probability N.P_DA
{pack_i} --f_MMA--> {pack_i, pack_icp1, ..., pack_icpm}    with probability N.P_RA
{pack_i} --f_MMA--> {pack_i}                               with probability (1 − N.P_TA − N.P_DA − N.P_RA)    (22)

where {pack_i} is a package received by a malicious node N, N.P_TA is the probability of node N making a tamper attack, N.P_DA is the probability of node N making a drop attack, N.P_RA is the probability of node N making a replay attack, and f_MMA is the formalization of the multiple-mix-attack. If some packages pass a malicious node, then we have the following:

Rs = {pack_1, pack_2, ..., pack_n} --f_MMA--> Ss = {pack_x1, pack_x2, pack_x3, ..., pack_xr}    (23)

where Rs is the Receive-set (refer to (3)) and Ss is the Send-set (refer to (2)). For example, if Rs = {pack_1, pack_2, pack_3, pack_4} passes a malicious node A, and A tampers pack_1, drops pack_2 and replays pack_3 once, then A sends Ss = {pack_1′, pack_3, pack_3cp1, pack_4} to other nodes.

We denote TS as the set of packages that are tampered (including the original packages) by a malicious node:

TS = {pack_i′, pack_i | pack_i′ is tampered from pack_i, pack_i ∈ Rs, pack_i′ ∈ Ss}    (24)

We denote RS as the set of packages that are replayed (including the original packages) by a malicious node:

RS = {pack_icpj, pack_i | pack_icpj is replayed from pack_i; pack_i, pack_icpj ∈ Ss}    (25)

We define DS as the set of packages that are dropped by a malicious node:

DS = Rs − Ss − (Rs ∩ TS)    (26)

In addition, we denote NS as the set of packages that are not modified by a malicious node:

NS = Ss − TS − RS    (27)

If the node N is malicious, then we have the following:

Ss = TS + RS + NS    (28)

In our work, we consider that launching any attack among the tamper attack, drop attack and replay attack causes damage (the events are exclusive), and there is no need to attack a package twice at one node. Then we have:

1 − N.P_TA − N.P_DA − N.P_RA = (the size of NS) / (the size of Rs)    (29)

If there is a multihop path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), where the Rs of N_1 is denoted as Rsp and the Ss of N_n is denoted as Ssp, we can construct TS, RS and NS based on Rsp and Ssp. If this NS is denoted as NSp, then we have:

∏_{i=1}^{n} (1 − N_i.P_TA − N_i.P_DA − N_i.P_RA) = (1 − N_1.P_TA − N_1.P_DA − N_1.P_RA) ∗ ··· ∗ (1 − N_n.P_TA − N_n.P_DA − N_n.P_RA) = (the size of NSp) / (the size of Rsp)    (30)

Finally, we can identify a multiple-mix-attack in the network when (the size of NSp) / (the size of Rsp) ≠ 1. In real-world applications, confirming that pack_i′ is tampered from pack_i is difficult; thus, in this work, we use the unknown (tampered) packages in Ssp to construct TS (refer to (24)), i.e., TS = {pack_i | pack_i ∈ Ssp and pack_i ∉ Rsp}. As we mentioned earlier, we mainly analyze 1 − N_i.P_TA − N_i.P_DA − N_i.P_RA rather than considering N_i.P_TA, N_i.P_DA and N_i.P_RA directly, because these attacks may influence each other. Based on our formalized attack models, we propose a node-trust model in Section 4 to describe how to compute a node's trust and detect malicious nodes.
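The partition into TS, RS and NS used in Eqs. (24)-(30) can be sketched as follows. The data layout (packages identified by content strings) and the function name are ours, not the paper's; the rules mirror the practical criteria stated above (unknown packages count as tampered, duplicates as replayed).

# A rough sketch of Eqs. (24)-(30): compare what the Sink received (Ssp) with what was
# injected (Rsp), split it into tampered (TS), replayed (RS) and normal (NS) packages,
# and estimate prod_i (1 - N_i.P_TA - N_i.P_DA - N_i.P_RA) for the path.

from collections import Counter

def path_mix_ratio(rsp, ssp):
    injected = set(rsp)
    counts = Counter(ssp)
    ts = [p for p in ssp if p not in injected]                 # unknown packages -> tampered
    rs = [p for p in ssp if p in injected and counts[p] > 1]   # duplicated packages -> replayed
    ns = [p for p in ssp if p in injected and counts[p] == 1]  # normal packages
    return len(ns) / len(rsp), ts, rs, ns

rsp = ["p1", "p2", "p3", "p4"]
ssp = ["p1'", "p3", "p3", "p4"]                 # p1 tampered, p2 dropped, p3 replayed once
ratio, ts, rs, ns = path_mix_ratio(rsp, ssp)    # ratio = 1/4, only p4 is normal
mix_attack_detected = ratio != 1.0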
3.8. Differences among attack models

In this part, we discuss the differences among these attack models. In particular, a tamper attack changes the packages received by a malicious node to compromise data integrity and data availability; a drop attack discards the packages received by a malicious node to cause data loss; and a replay attack duplicates and resends the packages received by a malicious node to cause traffic congestion or threaten the logical functions of the IoT network. If attackers launch a multiple-mix-attack, the consequences can be more severe than those of a single attack. Table 1 summarizes the differences of the attack models.

Table 1
Differences of attack models.

Attacks             | Behavior in model                                                   | Method in detection
Tamper attack       | pack_i ≠ pack_i′                                                    | Detection of unknown packages in Sink
Drop attack         | pack_i is lost                                                      | Detection of lost packages in Sink
Replay attack       | pack_i, pack_icp1, pack_icp2, ...                                   | Detection of duplicated packages in Sink
Multiple-mix-attack | pack_i ≠ pack_i′, or pack_i is lost, or pack_i, pack_icp1, ...      | Detection of unknown, lost and duplicated packages, and counting of normal packages in Sink

In most cases, in order to investigate the security of an IoT network, we can choose a suitable node to inject packages first, and then analyze the received result at the Sink. However, there is still a big gap between the detection of a single attack and the detection of a multiple-mix-attack.
That is, regarding the tamper attack model, if there is an attack, we can detect some unknown packages at the Sink by comparing the injected packages with the received packages, because tampered packages differ from the injected ones. Similarly, for the drop attack model we can detect some missing packages, while for the replay attack model we can identify some duplicated packages. In these single-attack models, we can identify malicious nodes by considering either normal packages or abnormal packages, because there is only one attack type in each model.

By contrast, in the multiple-mix-attack model, we have to identify malicious nodes by considering normal packages. For instance, when a multiple-mix-attack happens, the attack types may influence each other: if, in a path, a package P is transferred by malicious nodes NA and NB in sequence, where NA tampers P and sends P′ to NB, and NB then drops P′, we can only detect the lost P when we check the result at the Sink. This is because we can only observe the drop attack but have no idea about the tamper attack. Intuitively, the appearance of these attacks may affect each other and it is difficult to figure out their relationships, which makes it complex to identify malicious nodes based on abnormal packages only.

To help detect malicious nodes under the multiple-mix-attack model, we consider a solution based on the normal packages (those transferred to the Sink successfully) in the received result. That is, if a package is transferred successfully to the Sink, we consider that it has not been compromised by any attack in most cases. In practice, there could be exceptions, e.g., a package P is replayed by a malicious node A that sends P1 and P2 to other nodes, and a malicious node B then drops P2 after receiving both P1 and P2. In this case, we consider P1 a normal package even though it was touched by two different attacks. Unless there are very few nodes in the IoT network, the probability of such an exception is low, i.e., it is rare that a compromised package appears normal at the Sink. This makes our solution effective in practice.

4. Perceptron-based detection

To detect insider attacks in an IoT network, we can inject packages and observe the received feedback at the Sink, i.e., check whether some packages are tampered, missing or copied. Then, we can calculate a trust value for each node based on our trust model, a multivariable linear regression model, and identify malicious nodes accordingly. Based on the nodes' reputation, we use the K-means method to cluster nodes into three groups: a benign group (BG), an unknown group (UG), and a malicious group (MG). Later, we change the routing of packet transmission and increase the packet injection rate to measure the influence of the UG nodes on the whole network. Further, we feed this information to the perceptron again in order to enhance its learning and obtain the final output, including the trust values of nodes, the final benign group (FBG) and the final malicious group (FMG). Figs. 8 and 9 show the topology of an IoT network and how packages are injected.

Fig. 8. The topology of an IoT network.
Fig. 9. To inject packages to the IoT network.

Our detection process can be summarized in the following steps:

1. To inject packages to a node (assumed to be benign) in the IoT network;
2. To construct the detection equation set based on the result at the Sink;
3. To train the perceptron with the detection equation set;
4. To collect the parameters of the perceptron model;
5. To calculate the trust of all nodes;
6. To cluster all nodes into three groups based on the K-means clustering method: the benign group (BG), unknown group (UG) and malicious group (MG);
7. To adjust the routes of UG nodes and inject some packages that pass these nodes;
8. To analyze the result at the Sink again and construct a new detection equation set (the Enhanced Detection Equation Set, EDES);
9. To train the perceptron model incrementally according to EDES;
10. To collect the parameters of the perceptron model again;
11. To calculate the trust of all nodes;
12. To cluster all nodes into two groups based on the K-means clustering method and obtain the final detection output, including the final benign group (FBG) and the final malicious group (FMG).
4.1. Trust model

In practice, the tamper attack, drop attack and replay attack can each be considered a special case of the multiple-mix-attack. Thus we design a node-trust model to help calculate the reputation of each node. According to (29), the trust of a node N can be defined as:

N.T = 1 − N.P_TA − N.P_DA − N.P_RA    (31)

Generally, the trustworthiness of a node is the probability that the node behaves benignly without making any attack. If the trust value of a node is 1, the node can be regarded as benign, while if the trust value is lower than 1, the node may make an attack. Based on (30) and (31), for a multihop path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), with the Rs of N_1 (refer to (3)) and the Ss of N_n (refer to (2)), we have the following:

∏_{i=1}^{n} N_i.T = N_1.T ∗ N_2.T ∗ N_3.T ∗ ··· ∗ N_n.T = (the size of NS) / (the size of Rs)    (32)

where NS is defined in (27). As we discussed earlier, it is difficult to analyze N.P_TA, N.P_DA and N.P_RA directly, but we can use N.T to evaluate the influence of a node. Regarding the reputation of a path Pa = ⟨N_1, N_2, N_3, ..., N_n⟩ (refer to (4)), we define

Pa.T = (the size of NS) / (the size of Rs) = ∏_{i=1}^{n} N_i.T    (33)

where NS is defined in (27) and can be calculated from the Rs of N_1 (refer to (3)) and the Ss of N_n (refer to (2)). In the next part, we introduce how to use our model to detect malicious nodes with a perceptron.

4.2. Detection equation set

The detection equation set consists of many detection equations, which are established according to the trust model proposed above. When we inject some packages into a node (denoted as A), there could be one or more paths connecting A to a Sink (denoted as D). We then analyze the received feedback of D and derive the trust values of all nodes in these paths. Theoretically, we can obtain accurate trust values if we have enough information. However, it is hard to predict when enough information has been collected to compute trust values effectively. A promising solution is to gradually improve the accuracy of the trust computation. In this work, we use a perceptron to help calculate the trust values.

Firstly, we explain how to establish the detection equation set. An IoT network often consists of a Sink (denoted as D) and many IoT nodes (denoted as N_i, 0 <= i <= n, where n is the count of the other nodes). If we inject packages to N_i (assuming N_i and D are benign), there exist m paths from N_i to D:

Path_1 = ⟨N_i, N_11, N_12, ..., N_1c1, D⟩
Path_2 = ⟨N_i, N_21, N_22, ..., N_2c2, D⟩
...
Path_m = ⟨N_i, N_m1, N_m2, ..., N_mcm, D⟩

Then, according to (32) and (33), we have:

Path_m.T = (the size of NS) / (the size of Rs) = N_i.T ∗ (∏_{j=1}^{cm} N_mj.T) ∗ D.T = N_i.T ∗ N_m1.T ∗ N_m2.T ∗ N_m3.T ∗ ··· ∗ N_mcm.T ∗ D.T

N_i.T and D.T are both equal to 1, because N_i and D are benign. We thus have the following detection equation of Path_m:

Path_m.T = (the size of NS) / (the size of Rs) = ∏_{j=1}^{cm} N_mj.T = N_m1.T ∗ N_m2.T ∗ N_m3.T ∗ ··· ∗ N_mcm.T

This equation can be derived as below:

ln(Path_m.T) = ln((the size of NS) / (the size of Rs)) = Σ_{j=1}^{cm} ln(N_mj.T) = ln(N_m1.T) + ln(N_m2.T) + ln(N_m3.T) + ··· + ln(N_mcm.T)    (34)

We denote Eq_m as Eq. (34); thus the detection equation set can be represented as:

{Eq_1, Eq_2, Eq_3, ..., Eq_m} →
  ln(Path_1.T) = Σ_{j=1}^{c1} ln(N_1j.T) = ln(N_11.T) + ln(N_12.T) + ··· + ln(N_1c1.T)
  ln(Path_2.T) = Σ_{j=1}^{c2} ln(N_2j.T) = ln(N_21.T) + ln(N_22.T) + ··· + ln(N_2c2.T)
  ln(Path_3.T) = Σ_{j=1}^{c3} ln(N_3j.T) = ln(N_31.T) + ln(N_32.T) + ··· + ln(N_3c3.T)
  ......
  ln(Path_m.T) = Σ_{j=1}^{cm} ln(N_mj.T) = ln(N_m1.T) + ln(N_m2.T) + ··· + ln(N_mcm.T)
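The construction of this log-linear system can be sketched in a few lines of Python. This is our own illustration under an assumed data layout (each observed path described by the node identifiers it traverses, the number of packages injected and the number delivered normally); it is not the paper's data format.

# A minimal sketch of Section 4.2: turn each observed path into one linear equation
#     sum_j ln(N_j.T) = ln(Path.T)
# over the unknown per-node log-trust values.

import math

def build_equation_set(path_feedback, num_nodes):
    """path_feedback: list of (node_ids_on_path, normal_count, injected_count)."""
    rows, targets = [], []
    for node_ids, normal, injected in path_feedback:
        row = [0.0] * num_nodes
        for nid in node_ids:                           # exist-matrix entry a_ij = 1 if node is on the path
            row[nid] = 1.0
        rows.append(row)
        targets.append(math.log(normal / injected))    # ln(Path.T), Eqs. (33)-(34)
    return rows, targets

# Example: 4 intermediate nodes, two paths observed at the Sink.
feedback = [([0, 1], 90, 100),      # path through nodes 0,1 delivered 90/100 packages normally
            ([1, 2, 3], 60, 100)]   # path through nodes 1,2,3 delivered 60/100 packages normally
EM_rows, PTM = build_equation_set(feedback, num_nodes=4)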
4.3. Perceptron model

In this part, we describe how the perceptron is applied. In the IoT network, we choose N_i to inject packages, and there are a total of n other nodes besides N_i and the Sink D. In this case, there exist m paths from N_i to D:

Path_1 = ⟨N_i, N_11, N_12, ..., N_1c1, D⟩
Path_2 = ⟨N_i, N_21, N_22, ..., N_2c2, D⟩
...
Path_m = ⟨N_i, N_m1, N_m2, ..., N_mcm, D⟩

Then we can construct a matrix NTM (Node Trust Matrix)

NTM = (ln(N_1.T), ln(N_2.T), ln(N_3.T), ..., ln(N_n.T))

a matrix EM (Exist Matrix)

EM = (e_1, e_2, e_3, ..., e_m) = [a_ij] (n rows, m columns),  where a_ij = 0 if N_i is not in Path_j and a_ij = 1 if N_i is in Path_j

and a matrix PTM (Path Trust Matrix):

PTM = (ln(Path_1.T), ln(Path_2.T), ln(Path_3.T), ..., ln(Path_m.T))

Based on the detection equation set, we have the following relationship:

NTM ∗ EM = PTM    (35)

If we analyze the received feedback of the Sink, we know EM and PTM. Our main goal is then to search for a matrix NTM with high accuracy. Generally, the more accurate the obtained node trust values are, the more effective the identification of malicious nodes. Searching for an accurate NTM can be considered a multivariable linear regression over the trust values of the nodes. To solve this multivariable linear regression problem, the perceptron, a type of artificial neural network that maps an input to an output, can be very helpful; it is a commonly used method for solving linear regression problems [35]. Mathematically, an artificial neural network can be represented by

y(t) = f [ Σ_{i=1}^{n} w_i x_i − w_0 ]

where x_i is a set of neuron inputs, w_i is the relevant weight, f is the activation function, and t is the index of nodes in the neural network. If no hidden layer exists, this kind of artificial neural network is called a perceptron. According to Eq. (35), we can feed EM and PTM as the neuron inputs to train the perceptron. Meanwhile, we optimize NTM through the weights. When the training is finished, NTM is what we need. This is the process by which we use the perceptron to calculate the reputation of all nodes, as shown in Fig. 10.

Fig. 10. Perceptron's working process.
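A rough numerical sketch of this step is given below: a bias-free single-layer linear model (i.e., a perceptron with the identity activation) trained by gradient descent to recover NTM from Eq. (35). The learning rate, epoch count and matrix orientation are our own choices for illustration; the paper does not specify its exact training configuration.

# Recover the node log-trust vector NTM from the observed EM and PTM.
# Note: EM here is stored path-by-node, i.e. the transpose of the paper's n x m Exist Matrix.

import numpy as np

def fit_ntm(EM, PTM, lr=0.05, epochs=5000):
    """EM: shape (m_paths, n_nodes); PTM: shape (m_paths,). Returns estimated ln(N.T)."""
    m, n = EM.shape
    ntm = np.zeros(n)                          # perceptron weights = ln(N_j.T), start at trust 1
    for _ in range(epochs):
        pred = EM @ ntm                        # predicted ln(Path.T) for every path
        grad = EM.T @ (pred - PTM) / m         # mean-squared-error gradient
        ntm -= lr * grad
    return ntm

EM = np.array([[1, 1, 0, 0],
               [0, 1, 1, 1]], dtype=float)
PTM = np.array([np.log(0.9), np.log(0.6)])
trust = np.exp(fit_ntm(EM, PTM))               # back to N.T = exp(ln N.T)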
4.4. Clustering based on the K-means method

After obtaining the reputation of all nodes, we need to identify the malicious nodes accordingly. An intuitive way is to set a trust threshold: if the trust value of a node is higher than the threshold, the node is benign; otherwise, the node is malicious. The question here is how to choose a proper threshold in our scenario. In this work, we advocate using a clustering method to form groups and then identify malicious nodes. K-means is a typical clustering method, which has been widely used in practice [28,34,36]. In more detail, K-means is an unsupervised clustering method whose main argument is the number of clusters, i.e., the number of groups into which we expect to cluster our nodes. The input of the K-means method is the tuple of node trust values, which comes from the output of the perceptron (refer to Section 4.3). Based on the output clusters, we can identify malicious nodes and benign nodes, as shown in Fig. 11.

Fig. 11. K-means method's working process.

Intuitively, we could cluster all nodes into two groups, a benign group and a malicious group, directly based on the trust values. However, as we are not sure how accurate the obtained reputation is, in order to reduce errors we first cluster the nodes into three groups: a benign group (BG), an unknown group (UG), and a malicious group (MG). The trust value of a normal node can be reduced by the influence of malicious nodes in the same path, while the trust value of a malicious node can be increased by the impact of normal nodes in the same path. In this case, there is a need to further investigate the trustworthiness of all nodes in UG.
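A small sketch of this clustering step using scikit-learn (which Section 5.1 reports using) follows. The labeling rule (highest mean trust maps to BG, lowest to MG) is our own convention, not stated in the paper.

# Cluster per-node trust values into groups and order the clusters by mean trust.

import numpy as np
from sklearn.cluster import KMeans

def cluster_nodes(trust_values, n_groups=3):
    X = np.asarray(trust_values, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)
    # Order clusters by mean trust: benign group (BG) first, malicious group (MG) last.
    order = np.argsort([-X[labels == c].mean() for c in range(n_groups)])
    groups = [np.where(labels == c)[0].tolist() for c in order]
    return groups   # [BG, UG, MG] for n_groups=3, [FBG, FMG] for n_groups=2

trust = [0.98, 0.95, 0.61, 0.30, 0.97, 0.55]
bg, ug, mg = cluster_nodes(trust, n_groups=3)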
4.5. Detection route optimization

By measuring the influence between nodes in the same path, we can detect malicious nodes by adjusting the routes via which packages are delivered. The adjustment follows the principle of searching for paths that contain only the unknown nodes in UG. Our solution is to separate the nodes in UG and distribute them to different discrete paths. We denote these paths as the EPS (Enhanced Path Set), which is then used to construct the EDES (Enhanced Detection Equation Set) by injecting new packages, in order to improve the detection accuracy. For example, suppose we have two nodes (N1, N2) with unclear trust values and there are three paths: path1 = ⟨Ns (injection node), N1, D (Sink)⟩, path2 = ⟨Ns (injection node), N2, D (Sink)⟩, and path3 = ⟨Ns (injection node), N1, N2, D (Sink)⟩. With our approach, we choose path1 and path2, because N1 and N2 are both included in path3. The smaller the number of UG nodes in the same path, the less the trust values are influenced by other nodes.

Now we introduce how to generate EPS. We define the DDUG (discrete degree with respect to UG) of a path as the number of UG nodes it includes, and the DDMG (discrete degree with respect to MG) of a path as the number of MG nodes it includes. We obtain EPS according to the following EPSG algorithm (a Python sketch is given after the listing).

Algorithm 1 Enhanced Path Set Generation: EPSG (TPS, UG)
Input: TPS (Topology Path Set from Ns to D, where Ns is the node into which we inject packages and D is the Sink), UG (Unknown Group containing nodes with medium trust values);
Output: EPS;
1: EPS (Enhanced Path Set) = {};
2: while not all nodes in UG have been selected do
3:   select a node, denoted Ni, from UG;
4:   select all paths involving Ni from TPS to construct a path set denoted PS;
5:   if there is at least one path in PS with DDMG = 0 then
6:     select the paths with minimum DDUG from the paths in PS with DDMG = 0 to construct a path set denoted S1;
7:     choose the path, denoted P1, with minimum length from S1;
8:     if P1 does not exist in EPS then
9:       add P1 to EPS;
10:    end if
11:  else
12:    select the paths with minimum DDMG from PS to construct a path set denoted S2;
13:    select the paths with minimum DDUG from S2 to construct a path set denoted S3;
14:    choose the path, denoted P2, with minimum length from S3;
15:    if P2 does not exist in EPS then
16:      add P2 to EPS;
17:    end if
18:  end if
19: end while
20: return EPS;
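The sketch below is our reading of Algorithm 1 in Python: for each unknown node, prefer the shortest path that avoids known-malicious nodes and touches as few other unknown nodes as possible. Paths are represented as tuples of node ids; ug and mg are sets of node ids.

def epsg(tps, ug, mg):
    eps = []
    for ni in ug:                                         # lines 2-3: iterate over UG nodes
        ps = [p for p in tps if ni in p]                  # line 4: paths involving Ni
        if not ps:
            continue
        clean = [p for p in ps if not (set(p) & mg)]      # line 5: paths with DDMG = 0
        if clean:
            candidates = clean                            # lines 6-7
        else:
            min_ddmg = min(len(set(p) & mg) for p in ps)  # lines 12-13
            candidates = [p for p in ps if len(set(p) & mg) == min_ddmg]
        min_ddug = min(len(set(p) & ug) for p in candidates)
        candidates = [p for p in candidates if len(set(p) & ug) == min_ddug]
        best = min(candidates, key=len)                   # lines 7/14: minimum length
        if best not in eps:                               # lines 8-9 / 15-16
            eps.append(best)
    return eps

tps = [("Ns", 1, "D"), ("Ns", 2, "D"), ("Ns", 1, 2, "D")]
print(epsg(tps, ug={1, 2}, mg=set()))                     # -> [('Ns', 1, 'D'), ('Ns', 2, 'D')]

This reproduces the example above: path3 is skipped because it mixes both unknown nodes, while path1 and path2 isolate N1 and N2 on separate routes.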
Fig. 12. Enhanced training process.

4.6. Enhanced detection process

To summarize the steps of our approach, we first use the detection equation set to train a perceptron according to the received result of the Sink, and obtain all nodes' trust values. Then we inject new packages along the paths in EPS and collect the new feedback received at the Sink. Based on the newly received result, we update the detection equation set into the EDES (Enhanced Detection Equation Set), which is used to train the perceptron incrementally. This training process helps improve the accuracy of the perceptron model. Later, we obtain the model's weights and calculate the trust values of all nodes again. We further use the K-means method to cluster all nodes into two groups: the final benign group (FBG) and the final malicious group (FMG). As a result, we identify all malicious nodes in FMG for the IoT network. The whole process is depicted in Fig. 12.

5. Evaluation results

In this section, we evaluate our proposed Perceptron Detection (PD) and compare its performance with a similar approach, Hard Detection (HD) [34]. HD is a mathematical method for detecting malicious nodes that perform a tamper attack. As the focus of HD is not fully the same as our target in this work, we tune HD to make it workable in a multiple-mix-attack environment. In particular, we add a module to HD to help detect unknown packages, lost packages and duplicated packages, and enable HD to search for mix-attack malicious nodes. By comparing the performance of HD and PD, we explore the improvement of using a perceptron instead of pure mathematical analysis. In addition, we also compare two conditions: our approach with enhancement (PDE) and our approach without enhancement (PD). As the overall performance of PDE is intuitively better than that of PD, we only show PDE in our results. Further, we analyze the influence of the number of injected packages, the number of nodes, the percentage of malicious nodes and the diversity of the network on the detection performance. In the evaluation, we consider a multiple-mix-attack with the ability to combine tamper attacks, drop attacks and replay attacks at the same time.

The detection performance can be measured using both the accuracy rate and the error rate. Based on Table 2, we define accuracy = (TP + TN)/(P + N) and error rate = (FP + FN)/(P + N). Ideally, we expect both a higher accuracy rate and a lower error rate.

Table 2
Experimental evaluation.

Reality \ Detection result | Malicious              | Benign                | Total
Malicious                  | True positive (TP)     | False negative (FN)   | P (Real malicious)
Benign                     | False positive (FP)    | True negative (TN)    | N (Real benign)
Total                      | P′ (Detect malicious)  | N′ (Detect benign)    | P + N
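For concreteness, the two metrics above amount to the following; the example numbers are illustrative only and are not results from the paper.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)     # (TP + TN) / (P + N)

def error_rate(tp, tn, fp, fn):
    return (fp + fn) / (tp + tn + fp + fn)     # (FP + FN) / (P + N)

# Example: 4 of 5 malicious nodes found, 1 benign node falsely flagged, out of 15 nodes.
acc = accuracy(tp=4, tn=9, fp=1, fn=1)         # 13/15, about 0.87
err = error_rate(tp=4, tn=9, fp=1, fn=1)       # 2/15, about 0.13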
5.1. Environmental settings

In our environment, all IoT nodes are deployed discretely in a 100 × 100 m2 rectangular area, and each node's communication range is 10 m. Our IoT network is generated randomly but has the following features. (1) For each node, there is at least one path from the node to the Sink, so that all IoT devices are connected. (2) The node into which we inject packages is deployed at the left edge of the rectangular area, and the Sink is deployed at the right edge. (3) The node into which we inject packages and the Sink are benign. To avoid bias, we ran each experiment for 10 rounds with 10 different networks and took the average value as the final experimental result. In particular, we used Python to implement all algorithms and used scikit-learn, a well-known machine learning library, to cluster nodes according to their trust values via the K-means method. Table 3 shows the detailed experimental settings, in which the distribution of IoT nodes is random. Fig. 13 depicts a distribution example, where green nodes are benign, blue nodes are normal and red nodes are malicious. Our detection can be deployed at the base station, where we only need to inject packages into the IoT network and collect packages from the Sink. We can use a content-based comparison to confirm whether a package in the Receive-set is the same as a package in the Send-set. It is worth noting that our approach does not cause any additional energy cost to IoT nodes, indicating the feasibility and generalizability of our scheme.

Table 3
Environmental settings.

Item        | Description
CPU         | Intel Core i7-4700MQ, 2.4 GHz, 4 cores (8 threads)
Memory      | Kingston DDR3L 8 GB × 2
OS          | Windows 10 Professional 1709
Python      | 3.7.1rc1
ML library  | Scikit-learn 0.20
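A rough sketch of the topology generation described above is given below, written by us to illustrate the stated constraints (nodes scattered in a square area, edges between nodes within communication range, injection node on the left edge, Sink on the right edge, every node able to reach the Sink). It is not the authors' simulator, and the demo parameters are denser than the paper's 100 m area with a 10 m range, which would require far more nodes or retries to become connected.

import random
from collections import deque

def generate_topology(n_inner, area, comm_range, seed=None, max_tries=20000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        pts = [(0.0, rng.uniform(0, area))]                                   # injection node, left edge
        pts += [(rng.uniform(0, area), rng.uniform(0, area)) for _ in range(n_inner)]
        pts.append((area, rng.uniform(0, area)))                              # Sink, right edge
        edges = {i: [] for i in range(len(pts))}
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
                if dx * dx + dy * dy <= comm_range * comm_range:
                    edges[i].append(j)
                    edges[j].append(i)
        sink = len(pts) - 1
        seen, queue = {sink}, deque([sink])
        while queue:
            for nb in edges[queue.popleft()]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        if len(seen) == len(pts):                                             # every node reaches the Sink
            return pts, edges
    raise RuntimeError("no connected topology found; relax area or comm_range")

pts, edges = generate_topology(n_inner=15, area=30.0, comm_range=12.0, seed=1)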
Fig. 13. A distribution of IoT nodes. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 4
Variables and description.

Variable name                     | Description
The number of nodes               | The topology scale, which can affect the detection of malicious nodes.
The number of injected packages   | The labels indicating whether nodes attack the IoT network; the number of these packages influences the detection accuracy.
The type of attacks               | Malicious nodes can perform a multiple-mix-attack, including tamper attacks, drop attacks and replay attacks at the same time; the specific type of attack can influence the detection accuracy.
The probability of attack         | A malicious node can choose a strategy to launch attacks with a certain probability; this probability can influence the detection accuracy.
The percentage of malicious nodes | How many nodes in the IoT network are malicious, which can affect the detection accuracy.
The diversity of network          | The diversity essentially indicates the type of paths via which injected packages are delivered. Our detection approach has to collect the received packages at the Sink; thus, different types of paths can be helpful to our detection.
Fig. 14. The impact of the number of nodes on detection accuracy.
5.2. Focused variables

Table 4 shows the variables that can affect the detection performance. In the following evaluation, we mainly investigate their impact on the detection results.

5.3. Impact of the number of nodes

To explore the impact of this variable on the detection performance of HD and PDE, we consider a typical IoT network with the number of nodes set to 5, 10, 15, 20 and 25, respectively. In this experiment, we set the number of injected packages to 10000, the type of attack to multiple-mix-attack, the probability of attack to 0.3, and the percentage of malicious nodes to 0.3; the diversity of the network is all-type (all paths are used). Our results are shown in Figs. 14 and 15. The confidence interval of HD is [0.543566, 0.883234] and the confidence interval of PDE is [0.891781, 0.973418], at a confidence level of 0.95.

According to the obtained results, when the scale of the IoT network is small, the accuracy rates of both HD and PDE are high. With the increase of nodes, HD makes many errors, but PDE still reaches higher accuracy than HD in all cases. This is because when the number of nodes is 5, the number of paths is small; in this case, malicious nodes can be easily identified with 100% accuracy and without errors. When the number of nodes reaches 10 or more, the network topology becomes complicated and it is difficult to identify all malicious nodes. The obtained results indicate that when the network topology is complicated, PDE outperforms HD.

Fig. 15. The impact of the number of nodes on error rate.
Fig. 16. The impact of the number of injected packages on detection accuracy.
Fig. 17. The impact of the number of injected packages on error rate.
Fig. 18. The impact of the type of attack on detection accuracy.
Fig. 19. The impact of the type of attack on error rate.
5.4. Impact of the number of injected packages

To examine the impact of the injected packages on the detection performance of HD and PDE, we set the number of injected packages to 100, 500, 1000, 1500 and 2000, respectively. In this experiment, we set the number of nodes to 15, the type of attack to multiple-mix-attack, the probability of attack to 0.3, and the percentage of malicious nodes to 0.3; the diversity of the network is all-type (all paths are used). The obtained results are shown in Figs. 16 and 17. The confidence interval of HD is [0.491987, 0.574412] and the confidence interval of PDE is [0.653541, 0.919658], at a confidence level of 0.95.

It is found that when there are not enough injected packages, it is very hard to evaluate the behavior of nodes. With the increase of injected packages, PDE clearly outperforms HD. It is worth noting that the injected packages reveal the attack probability of malicious nodes, which is used for calculating the trust values of nodes: the larger the number of injected packages, the more accurately trust can be computed. We observe that when the number of injected packages increases, the accuracy of PDE improves. In our results, it is interesting to observe that PD achieved better performance than PDE when the number of injected packages is 100 (very small). This indicates that, in some cases, PDE's enhanced training may cause a negative effect, because the number of injected packages is not enough to support the enhanced training, and the resulting uncertainty may lead to detection errors.

5.5. Impact of attack types

In this experiment, we aim to examine the impact of different attacks on the detection performance of HD and PDE, including the tamper attack, drop attack, replay attack and multiple-mix-attack. In particular, we set the number of nodes to 15, the number of injected packages to 10000, the probability of attack to 0.3, and the percentage of malicious nodes to 0.3; the diversity of the network is all-type (all paths are used). The performance results are depicted in Figs. 18 and 19. The confidence interval of HD is [0.595593, 0.671406] and the confidence interval of PDE is [0.933, 0.933], at a confidence level of 0.95.

Our results show that PDE achieves better performance than HD. Regarding a single attack (either the tamper attack, drop attack or replay attack), PD and PDE reach similar detection results, while under a multiple-mix-attack PDE outperforms PD, indicating that the learning enhancement in PDE is effective in improving the detection accuracy. For example, assume that a malicious node A conducts a tamper attack and the manipulated package is transferred to another malicious node B. If B conducts a drop attack, we know the package was not delivered successfully and that a drop attack exists; however, it is unclear whether there was also a tamper attack. In such a case, PDE works well, as it allows choosing some special paths and injecting new packages to investigate the influence between different attacks. This enhanced learning process can help identify attacks in a more accurate way.
Fig. 20. The impact of the probability of attack on detection accuracy.
Fig. 21. The impact of the probability of attack on error rate.
Fig. 22. The impact of the percentage of malicious nodes on detection accuracy.
Fig. 23. The impact of the percentage of malicious nodes on error rate.
5.6. Impact of attack probability

To explore the impact of the attack probability on the detection performance of HD and PDE, we set the probability of attack to 0.1, 0.3, 0.5, 0.7 and 0.9, respectively. In this experiment, we set the number of nodes to 15, the number of injected packages to 10000, the type of attack to multiple-mix-attack, and the percentage of malicious nodes to 0.3; the diversity of the network is all-type (all paths are used). The results are shown in Figs. 20 and 21. The confidence interval of HD is [0.510532, 0.663067] and the confidence interval of PDE is [0.893339, 0.945860], at a confidence level of 0.95.

It is found that PDE outperforms HD in all cases. This is because a higher attack probability leads to a clear reputation reduction for the nodes in the same path, making it harder to retrieve accurate information about the different attacks. In this situation, the enhanced training process in PDE helps reduce the mutual influence between different attacks and different nodes. Therefore, PDE offers a better detection rate than PD.

5.7. Impact of the percentage of malicious nodes

To investigate the impact of the percentage of malicious nodes on the detection performance, we set the percentage of malicious nodes to 0.1, 0.2, 0.3, 0.4 and 0.5, respectively. In this experiment, we set the number of nodes to 15, the number of injected packages to 10000, the type of attack to multiple-mix-attack, and the probability of attack to 0.3; the diversity of the network is all-type (all paths are used). The performance results are depicted in Figs. 22 and 23. The confidence interval of HD is [0.445129, 0.701270] and the confidence interval of PDE is [0.893926, 0.945673], at a confidence level of 0.95.

It is found that PDE still outperforms HD, but when the percentage of malicious nodes is small, all detection methods make more errors. This is because fewer paths contain malicious nodes, resulting in a smaller number of valid detection equations (as discussed in Section 4.2), which makes it hard for the perceptron to produce an accurate model and degrades the detection accuracy of both PD and PDE. With the increase of malicious nodes, all detection methods reduce their error rates and improve the detection results.
5.8. Impact of network diversity

To explore the impact of network diversity on the detection performance of HD and PDE, we set the rate of valid paths to 0.2, 0.4, 0.6, 0.8 and 1, respectively. In this experiment, we set the number of nodes to 15, the number of injected packages to 10000, the type of attack to multiple-mix-attack, the probability of attack to 0.3, and the percentage of malicious nodes to 0.3. We present the results in Figs. 24 and 25. The confidence interval of HD is [0.583750, 0.775849] and the confidence interval of PDE is [0.893926, 0.945673], at a confidence level of 0.95.

It is observed that when the network diversity (the rate of valid paths) is low, e.g., 0.2, the detection accuracy of HD and PDE is lower than in other situations. This is because the identification of malicious nodes relies on analyzing the same nodes across different paths. As an example, assume that a path includes node A and node B. If an attack is detected, either node A or node B could be malicious. We can only establish that node B is benign based on the detection results of other paths that also include node B; in that case, we can determine that node A is malicious. This is why the detection accuracy increases when the rate of valid paths goes up.
Fig. 24. The impact of the diversity of network on detection accuracy.
Fig. 25. The impact of the diversity of network on error rate.
Fig. 26. The impact of the probability of attack on detection accuracy.
Fig. 27. The impact of the diversity of network on accuracy.
of attack as 0.3, and the percentage of malicious nodes as 0.3. We present the results in Figs. 24 and 25. The confidence interval of HD is [0.583750,0.775849], and the confidence interval of PDE is [0.893926,0.945673]. The confidence level is 0.95. It is observed that when the network diversity (the rate of valid paths) is low like 0.2, the detection accuracy of HD and PDE is smaller than that in other situations. This is because the identification of malicious nodes is based on analyzing the same nodes in different paths. As an example, we assume that a path includes node A and node B. If there is an attack detected, either node A or node B could be malicious. We can only know node B is benign based on the detection of other paths that include node B as well. In this case, we can determine node A is malicious. This is why the detection accuracy could be increased when the rate of valid paths goes up.
5.9. Performance of detecting single tamper attacks

In this part, we evaluate the performance of HD and PDE in detecting malicious nodes that only launch a tamper attack. The parameter settings are the same as in the multiple-mix-attack experiments above. We present the obtained results in Figs. 26 and 27. For the case regarding the probability of attack, the confidence interval of HD is [0.510532, 0.663067] and that of PDE is [0.893339, 0.945860]; for the case regarding the diversity of the network, the confidence interval of HD is [0.583750, 0.775849] and that of PDE is [0.893926, 0.945673]. The confidence level is 0.95. It is found that the performance of detecting malicious nodes that launch only tamper attacks is quite similar to that of detecting the multiple-mix-attack. The same holds for the other factors, such as the number of nodes, the number of injected packages, and the percentage of malicious nodes. Thus, our results demonstrate that the effectiveness of our scheme is not degraded in a single-attack environment.
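For reference, a 0.95 confidence interval over repeated runs can be computed as sketched below. We assume, since the paper does not state it explicitly, that each interval is taken over per-run accuracies; the sample values and the run count in the example are made up for illustration.

```python
# Two-sided 95% confidence interval for the mean of per-run accuracies,
# using the Student t distribution (small number of runs).
import statistics
from math import sqrt

def confidence_interval(samples, t_value):
    """t_value is the 0.975 quantile of the t distribution with len(samples)-1
    degrees of freedom, e.g. 2.045 for 30 runs or 2.571 for 6 runs."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / sqrt(len(samples))   # standard error of the mean
    return (mean - t_value * sem, mean + t_value * sem)

accuracies = [0.91, 0.93, 0.92, 0.94, 0.90, 0.93]   # hypothetical per-run accuracies
print(confidence_interval(accuracies, t_value=2.571))  # 6 runs -> 5 degrees of freedom
```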
5.10. Discussion and limitations

In the evaluation, we have examined several variables that may affect the detection performance: the number of nodes, the number of injected packages, the attack probability, the percentage of malicious nodes, and the network diversity. Overall, PDE achieves better detection performance than the HD algorithm, improving the detection rate by around 20% to 30%. In addition, our enhanced detection based on route optimization can further improve the detection rate by 8% to 18%. As our work is an early study of the multiple-mix-attack scenario, there are some limitations that can be addressed in our future work. For example, our current work mainly
focuses on identifying malicious nodes without distinguishing the specific attack types; exploring how to distinguish different attack types is one of our future directions. In addition, our current detection method relies on the diversity of IoT networks, which means that the detection accuracy would suffer under conditions with low path diversity. In our future work, we plan to investigate how to maintain the detection performance even with few valid paths in an IoT network.

6. Conclusions

Due to the broad application of IoT networks, there is a significant need to design proper security mechanisms for identifying malicious nodes. Most existing studies mainly consider a single attack, but we notice that an advanced intruder may perform several attacks in a collaborative manner to cause more harm. In this work, we address this issue and focus on three typical attacks: tamper attack, drop attack and replay attack. We first formalize a single-attack model and a multiple-mix-attack model, and then propose an approach called Perceptron Detection (PD), which uses both the perceptron and the K-means method to compute trust values and detect malicious nodes. To further improve the detection accuracy, we optimize the route of the network and develop an enhanced learning process for PD, called Perceptron Detection with enhancement (PDE). Our experimental results demonstrate that the proposed PD and PDE achieve better detection accuracy than a similar method, Hard Detection (HD), and that PDE can further improve the detection performance of PD by around 20% to 30%.

There are several possible topics for our future work, for instance, how to identify the specific attack types launched by a particular malicious node. As our detection method depends on network diversity, maintaining the detection performance when there are not enough valid paths in an IoT network is another challenge. Further, it is an interesting topic to investigate different strategies for performing different attacks collaboratively.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by the Foundation of Graduate Innovation Center in NUAA, China, Grant Number KFJJ20181608.
Liang Liu is currently a Lecturer in the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu Province, China. His research interests include distributed computing, big data and system security. He received the B.S. degree in computer science from Northwestern Polytechnical University, Xi'an, Shaanxi Province, China, in 2005, and the Ph.D. degree in computer science from Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu Province, China, in 2012.
Zuchao Ma received his Bachelor's degree in 2018 from the Nanjing University of Aeronautics and Astronautics, China. He is currently a master's student in the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China. His research interests include cloud security, system security and IoT security.
Weizhi Meng is currently an assistant professor in the Cyber Security Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU), Denmark. He obtained his Ph.D. degree in Computer Science from the City University of Hong Kong (CityU), Hong Kong. Prior to joining DTU, he worked as a research scientist at the Institute for Infocomm Research, A*STAR, Singapore, and as a senior research associate in the CS Department of CityU. He won the Outstanding Academic Performance Award during his doctoral study, and is a recipient of the Hong Kong Institution of Engineers (HKIE) Outstanding Paper Award for Young Engineers/Researchers in both 2014 and 2017. He is also a recipient of the Best Paper Award from ISPEC 2018 and the Best Student Paper Award from NSS 2016. His primary research interests are cyber security and intelligent technology in security, including intrusion detection, smartphone security, biometric authentication, HCI security, trust computing, blockchain in security, and malware analysis. He has served as a program committee member for more than 20 international conferences, and was a program co-chair for IEEE Blockchain 2018, IEEE ATC 2019, IFIPTM 2019, and SocialSec 2019. He is a senior member of IEEE.