Incentive mechanism for cooperative authentication: An evolutionary game approach

Liang Fang a, Guozhen Shi c, Lianhai Wang d, Yongjun Li a,b, Shujiang Xu d, Yunchuan Guo a,∗

a Institute of Information Engineering, Chinese Academy of Sciences, China
b School of Cyber Security, University of Chinese Academy of Sciences, China
c Department of Information Security, Beijing Electronic Science and Technology Institute, China
d Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Science), China

This paper belongs to the special issue "PriCom" edited by Prof. W. Pedrycz.
∗ Corresponding author. E-mail address: [email protected] (Y. Guo).
Article info

Article history: Received 15 June 2018; Revised 13 March 2019; Accepted 7 July 2019; Available online xxx

Keywords: Cooperative authentication; Privacy; Evolutionary game
Abstract

In mobile opportunistic networks (MONs), cooperative authentication is an efficient way to filter false or misleading messages. However, owing to privacy concerns and limited resources, most mobile users (or nodes) act selfishly without adequate incentives and are often unwilling to help others authenticate such messages. In this study, a cooperative authentication model is formulated as an evolutionary game. This model addresses the situation in which cooperative nodes do not have complete information about their neighboring nodes and are therefore only boundedly rational. The behavior dynamics and the evolutionarily stable strategy (ESS) of neighboring nodes are derived, and we show that the behavior dynamics converge to the ESS. This allows each neighboring node to decide independently whether to participate in authentication, without relying on information from other nodes (therefore, our approach can be implemented in a decentralized manner). Further, a scheme is designed to help the source node determine an optimal budget. Experiments were conducted on both simulated and real datasets. The results demonstrate that our approach has a clear advantage in incentivizing selfish nodes in MONs to cooperate.
1. Introduction

1.1. Background

With the explosive development of wireless technologies in the past few years, mobile opportunistic networks (MONs), recognized as a ubiquitous approach to route messages [10], sense data [21] (e.g., sensing air quality, sensing noise, and monitoring habitats), authenticate messages [5], and share information [18], have been widely used in real-life scenarios. To perform these tasks quickly and accurately, a large number of mobile nodes are required to cooperatively help each other.
Fig. 1. Cooperative authentication of an opportunistic system for sensing noise.
In general, these mobile nodes (e.g., wireless sensors), which have limited resources (in terms of bandwidth and energy), are interconnected through open wireless links (such as Bluetooth or Wi-Fi in ad hoc mode). Owing to their limited resources and open nature, MONs suffer from an increasing number of security attacks, including unauthorized access, identity imitation, and injection of false data. To mitigate these attacks, a cooperative approach, also called cooperative authentication, has recently been proposed to authenticate message identities and filter false messages [10].

Fig. 1 illustrates cooperative authentication in an opportunistic network for noise sensing. In this scenario, a group of users with smartphones equipped with noise sensors, GPS positioning sensors, and wireless communication modules roams within the monitoring region. A user opportunistically senses noise data and reports those data to the monitoring center. To prove the truthfulness of the sensed data, the sensing node requests its neighboring nodes to authenticate the data before reporting them. The more users participate in the authentication, the higher the probability that the monitoring center will identify the truth. With cooperative authentication, forged identities and false data can be filtered as early and as accurately as possible. Thus, the waste of global resources is drastically reduced, and the heavy verification burden on the monitoring center is mitigated [5]. Owing to these advantages, cooperative authentication has gradually become a research focus [7].

1.2. Motivation

In spite of the above advantages, existing schemes for cooperative authentication cannot work efficiently in current MONs, where selfish users account for the majority. In these networks, if adequate incentives are not provided, selfish users may be unwilling to cooperatively authenticate data (thus preventing the cooperation scheme from working efficiently) for the following reasons:

(1) Privacy leakage. In MONs, most nodes communicate with each other through open wireless channels. As a result, a misbehaving node can easily detect the presence of other nodes, recognize their identities, and track their locations by periodically monitoring data traffic. Moreover, in order to help other nodes authenticate a message, a user must install the corresponding software (e.g., Android applications), which may compromise privacy. For instance, by monitoring information-leaking channels on Android (per-app data-usage statistics and speaker status, i.e., on or off), an application may, without any permission, acquire sensitive information such as a smartphone user's identity, geo-location, and driving route [8,23].

(2) Limitation of resources. In MONs, the resources (e.g., energy, computing resources, and bandwidth) of most cooperating nodes are limited. Once they are used up, these resources have to be replenished for the nodes to keep running. Cooperative authentication consumes additional resources; therefore, selfish nodes do not actively authenticate messages.

Thus, without adequate incentives, the number of cooperators decreases drastically, and the probability of filtering false identities or messages is significantly reduced.
1.3. Challenge

In this study, the game aspects of cooperative authentication are investigated. The main challenges are as follows:

(1) Most existing game-based approaches used to incentivize nodes to actively authenticate messages assume that the individual users who decide whether to participate in authentication are fully rational, i.e., that they make their strategic choices based on an entirely rational evaluation of the probable outcomes. However, this assumption of full rationality is not appropriate for MONs [15]. In MONs, a large number of nodes move from one location to another, so the network topology changes frequently. As a result, in practice, no node can know all the information about the entire network. Therefore, it is important to model the bounded rationality of user behavior.

(2) We need to formulate the payoffs of both cooperators and defectors and design an incentive mechanism that encourages an appropriate number of nodes to cooperate. If an evolutionary game is adopted, we also need to decide whether an evolutionarily stable strategy (ESS) exists and, if it exists, why.

(3) If such an ESS exists, ESS-based algorithms should be designed to incentivize selfish nodes. We should also evaluate whether the ESS-based approach motivates neighboring nodes to cooperate at an acceptable cost.

1.4. Contribution

The main contributions of this study can be summarized as follows:

(1) To model bounded rationality in opportunistic networks, we formulate the incentive approach for cooperative authentication as an evolutionary game and use it to model the behavior of nodes in MONs (although MONs have a dynamic topology, mobile nodes, e.g., taxis, often enter and exit the same area many times during a certain period; they can therefore play the game repeatedly and evolve their behavior over time, which justifies an evolutionary game model).

(2) We design a budget-assignment mechanism to incentivize neighboring nodes to cooperate. Based on the evolutionary game model, we conduct game-theoretical analyses and obtain the ESS of the incomplete-information game with boundedly rational nodes.

(3) We design an ESS-based algorithm and conduct experiments on both simulated and real datasets. The results show that our strategy can efficiently incentivize an appropriate number of neighboring nodes to participate in authentication (thus enhancing the filtering probability) at an acceptable level of resource consumption.

The remainder of this paper is organized as follows: In Section 2, related studies are introduced and the differences between this study and the related literature are discussed. In Section 3, the system model is presented and the factors that affect a node's payoff are analyzed. In Section 4, the incentive approach for cooperative authentication is modeled as an evolutionary game, game analyses are conducted, and the optimal strategy for neighboring nodes is derived. In Section 5, an optimal budget strategy is proposed for the source node. Simulations are presented in Section 6. Finally, conclusions are provided in Section 7.

2. Related studies

Cooperative authentication is an efficient way to recognize false identities and messages, as well as to conserve resources. Using cryptographic primitives, Jo et al. [7] and Lin et al. [10] designed cooperative authentication protocols for vehicular ad hoc networks (VANETs) to alleviate the computation burden of vehicles.
This was achieved by sharing verification results and by using an evidence-token approach, respectively. Lu et al. [12] proposed a bandwidth-efficient cooperative authentication (BECAN) scheme, which filters false messages via a cooperative bit-compressed authentication technique. Zhou et al. [22] and Anusha et al. [1] realized efficient multi-level privacy-preserving cooperative authentication in distributed m-healthcare systems. Heng et al. [3] introduced a signal authentication architecture based on a network of cooperative Global Positioning System receivers. These schemes greatly reduce the cost of authentication. However, they also face the challenge of node selfishness: selfish nodes with limited resources may be reluctant to cooperatively authenticate messages, which reduces the authentication capability.

Incentive mechanisms. To encourage nodes to participate in cooperation, many incentive mechanisms for participatory sensing have been proposed. These mechanisms can potentially be used in cooperative authentication, and they can be roughly divided into two categories [4,14]: "Data-Upload-First" mechanisms and "Price-Decision-First" mechanisms. In the former, participants do not know how much of an incentive they will receive before uploading sensory data; only after the upload is complete does the platform evaluate each participant's reward according to his data. This approach is often used in the "platform-centric" model. Yang et al. [19] used a Stackelberg game to design an incentive mechanism and showed how to compute a unique equilibrium. Lv et al. [13] mainly focused on an efficient method for encouraging existing participants to recruit more participants.
In the "platform-centric" model, wherein the platform absolutely controls the total payment and ignores the collection cost, participants may forego future tasks and ultimately reduce the quality of data collection [4]. To solve this problem, the "Price-Decision-First" approach was proposed, in which sensory data are uploaded only after the reward that each participant receives has been decided. Thus, participants can choose whether to accept or refuse the incentive offer. This approach is often called the "user-centric" model and is widely used in mobile networks [6,11]. Guo et al. [6] and Liu et al. [11], respectively, used coalitional game theory to evaluate cooperation in mobile ad hoc networks and VANETs. However, they focused on the short-term utility of cooperators and ignored their long-term benefits. To address this problem, Gao et al. [5] proposed an infinitely repeated game to analyze the threat of selfish behavior. Yin et al. [20] used a repeated game with incomplete information to motivate nodes to forward advertisements. Wang et al. [17] adopted an evolutionary game for cooperative spectrum sensing.

The existing game-based incentive mechanisms for cooperative authentication adopt the assumption of full rationality. However, this assumption is often not realistic in MONs [15], because nodes in MONs encounter each other opportunistically and none of them can have full knowledge of the complete network topology; a node might not even know about the existence of other nodes. Therefore, the assumption of bounded rationality is more suitable.

3. System model

3.1. Cooperative authentication

In this subsection, cooperative authentication in MONs is summarized [5,10]. We consider a MON composed of a source node n0, a destination node (denoted as nds, e.g., a data center), and m mobile neighboring nodes within the one-hop communication range of n0 (denoted by N = {n1, ..., nm}). In cooperative authentication, if node n0 provides its data to nds and wants nds to believe that the provided data are true, then n0 requests its neighboring nodes (which might be compromised) to authenticate the data [6]. Thus, before source node n0 sends a message mess to destination node nds, n0 first broadcasts mess to its m neighboring nodes and requests them to cooperatively authenticate it. Each cooperating neighboring node (called a cooperator) returns a one-bit message authentication code (MAC) on mess to n0 (in cooperative authentication, the MACs assure the destination nds that the message comes from the expected sender and has not been altered in transit). All cooperators form a set denoted by NC = {n1, ..., nr} (0 ≤ r ≤ m), where r is the number of cooperators. After receiving the r MACs from the r cooperators, n0 sends mess together with these MACs to node nds. Based on the MACs, node nds decides whether mess comes from the expected sender.

3.2. Basic idea and problem statement

Section 1 illustrates that if a neighboring node takes part in cooperation, its privacy might be compromised. To reduce its own privacy leakage, a cooperator should adopt privacy-protection schemes. However, a cooperator cannot freely perform both cooperative authentication and privacy protection, because these activities consume both computing resources and communication resources.
In general, these resources are scarce, and once they are exhausted a node can no longer operate. Thus, selfish neighboring nodes will typically be unwilling to help source node n0. To encourage selfish nodes to cooperate, we regard the authentication service as a form of goods. If source node n0 requires this service, it acts as a buyer and pays the service providers. Each neighboring node that authenticates a message makes an offer, sells its service, and receives a payment that is not lower than its cost.

The authentication of a message proceeds in two stages. In the first stage, source node n0 broadcasts the message (along with the total budget γ) to its neighboring nodes. In the second stage, each neighboring node decides whether to authenticate the message, in order to maximize its own payoff. In this mechanism, both the source node and the neighboring nodes are players: the strategy of the source node is the total budget γ, and the strategy of a neighboring node is cooperating or denying. In the second stage, the evolutionary approach is used to compute the optimal probability with which a boundedly rational node cooperates with others to maximize its payoff. In this study, two questions naturally arise:

Q1: For a given budget γ, without complete information about other nodes, how does a neighboring node select its strategy (cooperating or denying) to maximize its average payoff? With what probability does a neighboring node select the cooperating strategy, and is this probability stable?

Q2: How can the source node optimally select the value of γ to guarantee the filtering probability?

3.3. Filtering probability

In cooperative authentication, injected false data can be recognized and filtered if the following conditions are satisfied simultaneously [6]: (1) there exists at least one uncompromised cooperator that accurately authenticates the message (i.e., at least one node can accurately find the false data), and (2) no adversary correctly guesses the MACs generated by the uncompromised neighboring nodes.
However, the second condition is too strict, and adversaries often compromise nodes in MONs. Without loss of generality, the probability that a node is compromised is assumed to be ρ (0 ≤ ρ ≤ 1), and the filtering probability FP, which measures how many injected false data can be recognized before they reach the destination node [9] (routing is not considered in this study), is defined by:

FP(r, ρ) = 1 − \sum_{i=0}^{r} \binom{r}{i} ρ^i (1 − ρ)^{r−i} \left(\frac{1}{2}\right)^{r−i}    (1)
where r is the number of nodes that participate in the cooperative authentication. According to formula (1), we have the following proposition.

Proposition 1. If 0 ≤ ρ < 1, the filtering probability FP(r, ρ) given by (1) strictly increases with r and strictly decreases with ρ.

Proof. Formula (1) can be rewritten as follows:

FP(r, ρ) = 1 − \sum_{i=0}^{r} \binom{r}{i} ρ^i \left(\frac{1 − ρ}{2}\right)^{r−i} = 1 − \left(\frac{1 + ρ}{2}\right)^{r}    (2)
Taking the derivatives of FP with respect to r and ρ, we easily obtain ∂FP(r, ρ)/∂r > 0 and ∂FP(r, ρ)/∂ρ < 0 when 0 ≤ ρ < 1. This proves Proposition 1.

3.4. Probability of m logical neighboring nodes

Intuitively, the number of neighboring nodes affects the filtering probability: any neighboring node is a potential cooperator, so more neighboring nodes can increase the filtering probability. In general, cooperative authentication has the following characteristics: (1) nodes are mobile, so for any given node, its neighboring nodes change over time; (2) the task of cooperative authentication cannot be completed instantaneously and requires a certain period of time. As a result, if the encounter period of two mobile nodes is shorter than the required period (even if they are within communication range of each other), the cooperators cannot complete the cooperative authentication. These two characteristics indicate that we should distinguish between two concepts, namely logical neighbors and physical neighbors. Two nodes are physical neighbors if their geographical distance is less than their communication distance. Two nodes are logical neighbors if they remain physical neighbors during the entire period required to complete the cooperative authentication.

We use Pm.neigh to denote the probability that there are m logical neighbors within the transmission range of source node n0 during the period of cooperative authentication. Next, we evaluate Pm.neigh. We assume that the number of neighboring nodes follows a Poisson distribution (this assumption has been widely used in MONs [18]). For a source node n0, the number of its logical neighbors depends on the area s of its communication region, the period τ required to complete cooperative authentication, and the arrival rate λ of neighboring nodes per unit area per unit period. Let N(t + τ) denote the number of n0's logical neighbors at time (t + τ). Then N(t + τ) − N(t) follows a Poisson distribution with parameter λs/τ, which is defined by:

P_{m.neigh} = Pr\{N(t + τ) − N(t) = m\} = \frac{(λs/τ)^m}{m!} e^{−λs/τ} = \frac{(λπd²/τ)^m}{m!} e^{−λπd²/τ}    (3)
where d is the communication distance. The average number of logical neighbors (ANLN) can be computed as:

E[N(t + τ) − N(t)] = \sum_{m=0}^{∞} m \frac{(λs/τ)^m}{m!} e^{−λs/τ} = \frac{λs}{τ} = \frac{λπd²}{τ}    (4)
According to formula (4), the average number of logical neighbors of the source node is proportional to the arrival rate and the communication area, and inversely proportional to the period required to complete cooperative authentication. In other words, the longer the period required to complete cooperative authentication, the fewer the nodes that can continuously communicate with the source node, and thus the smaller the number of logical neighbors.

3.5. Degree of privacy

As shown in Section 1, privacy is a key element affecting a neighboring node's decision whether to participate in cooperation. Several techniques have been proposed to mitigate privacy leakage. One of them is the dummy technique, in which a neighboring node generates a number of dummy nodes (or dummies) and blends them with the normal neighboring nodes in order to achieve k-anonymity (i.e., the node's identity or location cannot be distinguished from those of at least (k − 1) other nodes, thus hiding the identity or location of the normal nodes). A natural question is how many dummies a normal node should generate. In this study, we adopt the assumption of Liu et al. [11] that each node generates (k − 1) dummies to provide k-anonymity.
This assumption is reasonable because, in the worst case, where only one node exists within its communication range, the node must generate (k − 1) dummies itself to obtain k-anonymity. Clearly, the more dummies the normal nodes generate, the higher the degree of privacy (DoP). Many privacy metrics have been proposed to measure the DoP, e.g., entropy and l-diversity [16]. For simplicity, a linear function is used herein: the DoP provided by cooperatively generating dummies is approximately proportional to the number r of nodes that together generate (k − 1) × r dummies. That is, DoP ≈ ϑ × r, where ϑ is the degree of privacy achieved when only one node generates (k − 1) dummies.

3.6. Cooperation cost and free-rider problem

Cooperation cost: includes two parts, the authentication cost (ca) and the privacy cost (cp).

Authentication cost: When a node participates in authentication, its resources are consumed and an authentication cost is incurred. In general, for a given authentication, this cost depends on multiple factors, e.g., the message length and the degree of urgency of the message. In this study, we simply take the authentication cost to be ca. For more details on evaluating the authentication cost, please refer to the study of Guo et al. [6].

Privacy cost: Dummies are not free. When a dummy node joins or leaves an anonymity set, keys have to be recomputed and redistributed, which consumes both computation and communication resources. For simplicity, the cost of generating (k − 1) dummies is assumed to be cp. We define the total cooperation cost c for taking part in cooperation as the sum of the privacy cost and the authentication cost, c = cp + ca.

Free-rider problem: Owing to the high cost of generating dummies, a selfish node may free-ride on others' efforts [11], i.e., it passively waits for other nodes within the same area to generate dummies and benefits from them. If, in the same area, there are r (r > 0) selfless nodes, which generate (k − 1) × r dummies, and one selfish node (i.e., a free rider), which generates no dummies, then the selfish node's DoP is the same as that of the selfless nodes, i.e., approximately ϑ × r, at zero cost.

3.7. Budget assignment

In our study, virtual credits are used to motivate nodes to take part in cooperation. The source node n0 pays a reward to all cooperators after the task of cooperative authentication is completed. The total reward paid for the authentication depends on the source node's budget. The goal of the source node in requesting its neighboring nodes to cooperate is to increase the filtering probability. According to Proposition 1, the higher the required filtering probability, the more cooperators are needed and the more resources are consumed; therefore, n0 needs a higher budget. We use γ to denote the total budget that n0 is willing to pay for cooperative authentication. Let F̄P be the filtering probability actually achieved by the neighboring nodes after the authentication ends. The total reward paid to all cooperators for the entire authentication is defined as F̄P × γ; thus, it depends on both the budget γ and the achieved F̄P. This approach is simple but rational, and the entire budget does not have to be spent.
For instance, if source node n0 offers a large budget (in order to encourage more cooperators), but only one neighboring node with a high compromised probability takes part in cooperation during the entire authentication, it would obviously be inappropriate for n0 to pay the total budget to this single cooperator; hence the entire budget is not spent. In our model, the total reward F̄P × γ is paid out equally to all cooperators. Therefore, the reward nmi that source node n0 pays to cooperator i (1 ≤ i ≤ r) is defined by:
nm_i = \frac{γ × F̄P}{r} = \frac{γ \left(1 − \left(\frac{1 + ρ}{2}\right)^{r}\right)}{r}    (5)
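As a concrete illustration of formulas (1), (2), and (5), the following minimal Python sketch computes the filtering probability in both its sum and closed forms and the per-cooperator reward. The function names and the demo values (ρ = 0.2, γ = 100) are ours and purely illustrative; the assertion simply checks numerically that the sum form (1) and the closed form (2) coincide.

```python
import math

def filtering_probability(r: int, rho: float) -> float:
    """Filtering probability FP(r, rho), closed form of formula (2)."""
    return 1.0 - ((1.0 + rho) / 2.0) ** r

def filtering_probability_sum(r: int, rho: float) -> float:
    """Equivalent sum form of formula (1), kept only as a sanity check."""
    return 1.0 - sum(
        math.comb(r, i) * rho ** i * (1.0 - rho) ** (r - i) * 0.5 ** (r - i)
        for i in range(r + 1)
    )

def per_cooperator_reward(gamma: float, r: int, rho: float) -> float:
    """Reward nm_i paid to each of the r cooperators, formula (5)."""
    if r == 0:
        return 0.0
    return gamma * filtering_probability(r, rho) / r

if __name__ == "__main__":
    rho, gamma = 0.2, 100.0          # illustrative values only
    for r in (1, 3, 6, 10):
        fp = filtering_probability(r, rho)
        assert abs(fp - filtering_probability_sum(r, rho)) < 1e-12
        print(f"r={r:2d}  FP={fp:.4f}  reward per cooperator={per_cooperator_reward(gamma, r, rho):.2f}")
```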
4. Evolutionary cooperating game for neighboring nodes

In this section, we mainly answer Question 1 (Q1) presented in Section 3: for a given total budget γ, how does a neighboring node optimally select its strategy (cooperating or denying)? To this end, we model cooperative authentication in a boundedly rational environment as an evolutionary game (referred to as the evolutionary cooperating game). Replicator dynamics are then used to analyze the behavior of neighboring nodes in this game. Finally, based on the ESS, we answer Q1.

4.1. Game definition

The cooperating game is defined as a triplet G = ({n0} ∪ N, S, U), where N = {n1, ..., nm} is the set of neighboring nodes within one-hop communication range of n0; S = {Si}_{i=1}^{m} is the set of strategy sets, where Si = {C, D} is the strategy set of node ni (1 ≤ i ≤ m), and C and D indicate, respectively, that node ni cooperatively authenticates a message (cooperating) or refuses to authenticate it (denying). For simplicity, the strategy selected by node i is denoted by si, and the strategies of all nodes other than node i are denoted by s−i. U = {u1(s1, s−1), ..., um(sm, s−m)} is the set of payoff functions, where ui(si, s−i) is the payoff of node i, defined as follows.
• If node i cooperatively authenticates a message (i.e., si = C), then the payoff of node i is γ × FP(r + 1, ρ)/(r + 1) + ϑ × (r + 1) − c, where r is the number of cooperators other than node i (note that in this case there are (r + 1) cooperators in total; we assume that a node that participates in cooperation generates dummies to protect its own privacy, whereas a node that does not participate generates no dummies).
• If node i does not authenticate the message (i.e., si = D), but there are r cooperators, then the payoff of node i is ϑ × r. Note: (1) if r is greater than zero, the payoff of node i is greater than zero, because node i is a free rider and uses the dummies generated by the r other nodes to protect its own privacy, as discussed in Section 3.6; (2) if all neighboring nodes refuse to authenticate the message, then the payoff of every node is zero.

We summarize the payoff ui(si, s−i) of node i as follows:

u_i(s_i, s_{−i}) = \begin{cases} \dfrac{γ × FP(r + 1, ρ)}{r + 1} + ϑ × (r + 1) − c, & s_i = C \\ ϑ × r, & s_i = D \end{cases}    (6)
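The payoff rule (6) can be transcribed directly; the sketch below is only illustrative, with ϑ written as theta and the total cooperation cost c = cp + ca passed in as a single number.

```python
def payoff(strategy: str, r: int, gamma: float, rho: float, theta: float, c: float) -> float:
    """Payoff u_i of node i, formula (6).

    strategy: 'C' (cooperate) or 'D' (deny).
    r: number of cooperators other than node i.
    theta: degree of privacy contributed by one node's dummies (the paper's vartheta).
    c: total cooperation cost (privacy cost + authentication cost).
    """
    if strategy == 'C':
        fp = 1.0 - ((1.0 + rho) / 2.0) ** (r + 1)   # FP(r + 1, rho), closed form (2)
        return gamma * fp / (r + 1) + theta * (r + 1) - c
    return theta * r                                 # free-rider payoff
```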
If node i knew the number of cooperators (i.e., the value of r) in advance, it could easily obtain an optimal strategy by comparing the payoffs ui(C, s−i) and ui(D, s−i) of the cooperating and denying strategies using formula (6): if ui(C, s−i) ≥ ui(D, s−i), node i would select the cooperating strategy; otherwise it would select the denying strategy. In practice, however, the source node n0, whose interests conflict with those of its neighboring nodes, is the only node that knows the value of r, and it does not broadcast this information. Therefore, a neighboring node cannot directly decide whether to cooperate. To solve this problem, we use replicator dynamics to select an optimal strategy.

4.2. Replicator dynamics

As shown in Section 1, mobile nodes in MONs can play the game repeatedly, and their behavior evolves over time. Therefore, we can use replicator dynamics to model the behavior of nodes: a node tries a strategy in each play and learns from the interactions. At time t, node i selects strategy si (si ∈ {C, D}) with probability x_{si} (x_{si} ∈ [0, 1]); at time (t + 1), node i adjusts the probability x_{si} according to the growth rate (dx_{si}/dt)/x_{si}, where dx_{si}/dt is proportional to the difference between the current payoff Ū_{si} of node i adopting strategy si and the current average payoff Ū of all nodes. If the growth rate is greater than zero, the probability of selecting si increases in the next play; otherwise, it decreases. The replicator dynamic equation (which describes how x_{si} changes with time t) for node i is defined as follows:

\frac{dx_{s_i}}{dt} = x_{s_i} (Ū_{s_i} − Ū)    (7)
From this formula, we see that if Ū_{si} > Ū holds, the growth rate of x_{si} is positive, and if Ū_{si} < Ū holds, the growth rate is negative. Next, we evaluate Ū_{si} and Ū. Let x denote the probability that a logical neighboring node cooperates. We use UC(x, m) and UD(x, m) to denote the average payoffs of the pure strategies C and D, respectively, for a node with cooperation probability x when m other logical neighboring nodes exist. UC(x, m) can be computed as follows:

U_C(x, m) = \sum_{r=0}^{m} \binom{m}{r} x^r (1 − x)^{m−r} u_C(r + 1)    (8)
where u_C(r) = γ FP(r, ρ)/r + ϑ × r − c is the current payoff of a cooperating node when r nodes cooperate. Similarly, the average payoff UD(x, m) can be computed as follows:

U_D(x, m) = \sum_{r=0}^{m} \binom{m}{r} x^r (1 − x)^{m−r} u_D(r)    (9)
where u_D(r) = ϑ × r is the current payoff of a non-cooperating node when r nodes cooperate. For a given node, the average payoff Ū_C when that node adopts the cooperating strategy can be computed as follows:

Ū_C = \sum_{m=0}^{∞} P_{m.neigh} × U_C(x, m)    (10)
Similarly, the average payoff Ū_D when adopting the denying strategy is given by:

Ū_D = \sum_{m=0}^{∞} P_{m.neigh} × U_D(x, m)    (11)
According to formulas (10) and (11), we obtain Ū as follows:

Ū = x × Ū_C + (1 − x) × Ū_D    (12)
4.3. Analysis of the evolutionarily stable strategy

An ESS is a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. We use u(s, t) to represent the payoff of an individual using strategy s competing against an individual using strategy t. The ESS can then be defined as follows [17].

Definition. A strategy s∗ is an ESS if and only if, for all s ≠ s∗: (1) u(s∗, s∗) ≥ u(s, s∗), and (2) if u(s∗, s∗) = u(s, s∗), then u(s∗, s) > u(s, s).

The first condition (a Nash equilibrium condition) states that s∗ is a best response to itself. The second condition (also referred to as Maynard Smith's second condition) indicates that, although strategy s is neutral with respect to the payoff against strategy s∗, the population of users who continue to play strategy s∗ has an advantage when playing against s. In this way, the population using the mutant strategy s keeps decreasing until the entire population uses strategy s∗.

We use x∗ to denote the ESS of our game (if it exists): an individual node selects the cooperating strategy with probability x∗, and no node can increase its payoff by unilaterally changing its strategy. According to formula (7), the replicator dynamics can be written as:

\frac{dx}{dt} = x (Ū_C − Ū) = x (1 − x)(Ū_C − Ū_D) = x (1 − x) \sum_{m=0}^{∞} P_{m.neigh} × (U_C(x, m) − U_D(x, m))    (13)
To solve the replicator dynamics, we first calculate U_C(x, m) − U_D(x, m):

U_C(x, m) − U_D(x, m) = \sum_{r=0}^{m} \binom{m}{r} x^r (1 − x)^{m−r} (u_C(r + 1) − u_D(r))
= \frac{γ}{(m + 1)x} \sum_{r=0}^{m} \frac{(m + 1)!}{(r + 1)!(m − r)!} x^{r+1} (1 − x)^{m−r} \left(1 − \left(\frac{1 + ρ}{2}\right)^{r+1}\right) + ϑ − c
= \frac{γ}{(m + 1)x} \left[1 − \left(1 − \frac{(1 − ρ)x}{2}\right)^{m+1}\right] + ϑ − c    (14)
As shown in Section 3, the number m of logical neighboring nodes follows a Poisson distribution with parameter λs/τ; therefore, we have:
Ū_C − Ū_D = \sum_{m=0}^{∞} P_{m.neigh} × (U_C(x, m) − U_D(x, m))
= \sum_{m=0}^{∞} \frac{(λs/τ)^m}{m!} e^{−λs/τ} \frac{γ}{(m + 1)x} \left[1 − \left(1 − \frac{(1 − ρ)x}{2}\right)^{m+1}\right] + \sum_{m=0}^{∞} \frac{(λs/τ)^m}{m!} e^{−λs/τ} (ϑ − c)
= \frac{γ e^{−λπd²/τ}}{λπd² x/τ} \left[\sum_{m=0}^{∞} \frac{(λπd²/τ)^{m+1}}{(m + 1)!} − \sum_{m=0}^{∞} \frac{\left(λπd²\left(1 − \frac{(1 − ρ)x}{2}\right)/τ\right)^{m+1}}{(m + 1)!}\right] + ϑ − c
= \frac{γ e^{−λπd²/τ}}{λπd² x/τ} \left[e^{λπd²/τ} − e^{λπd²\left(1 − \frac{(1 − ρ)x}{2}\right)/τ}\right] + ϑ − c
= \frac{γτ}{λπd² x} \left(1 − e^{−\frac{πd²(1 − ρ)λ}{2τ} x}\right) + ϑ − c    (15)
Substituting formula (15) into formula (13), we obtain

\frac{dx}{dt} = x(1 − x)\left(\frac{γτ}{λπd²}\left(1 − e^{−\frac{πd²(1 − ρ)λ}{2τ} x}\right)\frac{1}{x} + ϑ − c\right).

Because \lim_{x→0^+} x(1 − x)\left(\frac{γτ}{λπd²}\left(1 − e^{−\frac{πd²(1 − ρ)λ}{2τ} x}\right)\frac{1}{x} + ϑ − c\right) = 0, zero can be regarded as a root of the equation dx/dt = 0. Let:

\frac{γτ}{λπd²}\left(1 − e^{−\frac{πd²(1 − ρ)λ}{2τ} x}\right)\frac{1}{x} + ϑ − c = 0    (16)
To discuss the ESS x∗ of our game clearly, we define K = \frac{πd²(1 − ρ)λ}{2τ} and θ = \frac{(ϑ − c)λπd²}{γτ}. The following theorem holds:

Theorem 1. Starting from any interior point x ∈ (0, 1), the replicator dynamics defined by formula (13) converge to the ESS x∗. Specifically,
• When K + θ < 0, the replicator dynamics converge to x∗ = 0.
• When (1) Ke^{−K} + θ > 0, or (2) Ke^{−K} < −θ < K and θ + 1 > e^{−K}, the replicator dynamics converge to x∗ = 1.
• When Ke^{−K} < −θ < K and θ + 1 < e^{−K}, the replicator dynamics converge to the unique root x∗ ∈ (0, 1) of Eq. (16).
Proof. Let f(x) = −e^{−Kx} + θx + 1 and K1 = γτ/(λπd²) > 0. We rewrite formula (15) as:
Ū_C − Ū_D = \frac{K_1 f(x)}{x}    (17)
where x ∈ (0, 1). According to formulas (13) and (17), the sign of dx/dt depends on the sign of f(x) for a given x ∈ (0, 1). Therefore, we can obtain the ESS by solving the equation f(x) = 0. The first derivative of f(x) with respect to x is f′(x) = Ke^{−Kx} + θ. Hence f′(x) monotonically decreases with x, and f′(1) = Ke^{−K} + θ < f′(x) < K + θ = f′(0) holds. Next, we discuss the different cases.

Case 1. If Ke^{−K} + θ > 0, then f′(x) is always greater than 0, so f(x) monotonically increases with x. Because lim_{x→0} f(x) = 0, f(x) > 0 always holds for x ∈ (0, 1). According to formulas (13) and (17), both Ū_C > Ū_D and dx/dt > 0 hold; therefore, the replicator dynamics converge to x∗ = 1.

Case 2. If K + θ < 0, then f′(x) is always less than 0, so f(x) monotonically decreases with x. Because lim_{x→0} f(x) = 0, f(x) < 0 always holds for x ∈ (0, 1). According to formulas (13) and (17), both Ū_C < Ū_D and dx/dt < 0 hold; therefore, the replicator dynamics converge to x∗ = 0.

Case 3. If Ke^{−K} < −θ < K, then there is exactly one point x̄ (0 < x̄ < 1) such that f′(x̄) = 0; f′(x) > 0 for 0 < x < x̄ and f′(x) < 0 for x̄ < x < 1. Hence f(x) monotonically increases on (0, x̄) and monotonically decreases on (x̄, 1). Because lim_{x→0} f(x) = 0, we have two sub-cases:
• If lim_{x→1} f(x) = −e^{−K} + θ + 1 > 0, then f(x) > 0 holds for all x ∈ (0, 1). According to formulas (13) and (17), both Ū_C > Ū_D and dx/dt > 0 hold; therefore, the replicator dynamics converge to x∗ = 1.
• If lim_{x→1} f(x) = −e^{−K} + θ + 1 < 0, then the equation f(x) = 0 has exactly one root x∗ (x̄ < x∗ < 1) such that (a) f(x) > 0 when 0 < x < x∗, (b) f(x) < 0 when x∗ < x < 1, and (c) f(x∗) = 0. According to formula (17), this root coincides with the root of formula (16). Thus, by formulas (13) and (17), dx/dt > 0 holds when 0 < x < x∗ and dx/dt < 0 holds when x∗ < x < 1; therefore, the replicator dynamics converge to x∗.

Because formula (16) has only one root in this case, we can efficiently solve it using either the bisection method or Newton's method [2]. In general, a neighboring node knows its own compromised probability ρ, cooperation cost c, and DoP parameter ϑ. A node can easily obtain the average number of logical neighbors λπd²/τ by observing the traffic or using a mapping service (e.g., Google Maps) [9]. Therefore, once it receives the parameter γ from a source node, a neighboring node can decide whether to cooperate in a decentralized way using Theorem 1. We have thus answered Q1 presented in Section 3: to maximize its own payoff, a neighboring node should select the cooperating strategy with probability x∗.
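The case analysis of Theorem 1 translates directly into a small routine. The sketch below follows the theorem and uses bisection (as suggested at the end of the proof) for the interior-root case; the function name and the parameterization through mean_neighbors = λπd²/τ are assumptions made for illustration.

```python
import math

def ess_cooperation_probability(gamma, rho, theta, c, mean_neighbors, tol=1e-10):
    """ESS x* according to Theorem 1.

    mean_neighbors = lambda*pi*d^2/tau; K and theta_ are the paper's K and theta;
    f(x) = -exp(-K*x) + theta_*x + 1 is the auxiliary function used in the proof.
    """
    K = (1.0 - rho) * mean_neighbors / 2.0            # pi*d^2*(1-rho)*lambda / (2*tau)
    theta_ = (theta - c) * mean_neighbors / gamma     # (vartheta - c)*lambda*pi*d^2 / (gamma*tau)

    if K + theta_ < 0:                                # first bullet: x* = 0
        return 0.0
    f = lambda x: -math.exp(-K * x) + theta_ * x + 1.0
    if f(1.0) >= 0:                                   # second bullet: x* = 1
        return 1.0
    # third bullet: unique interior root of Eq. (16); f > 0 near 0 and f(1) < 0
    lo, hi = tol, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```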
5. Optimal budget of the source node

In this section, we answer Q2 presented in Section 3: how does the source node select its optimal budget γ? For a given target filtering probability F̄P required by destination node nds and the compromised probability ρ of each node, according to formula (2) the source node can obtain the required number r̄ of cooperators as:

r̄ = \frac{\ln(1 − F̄P)}{\ln(1 + ρ) − \ln 2}    (18)
Similar to its neighboring nodes, by observing the traffic or using a mapping service, source node n0 can also obtain the average number of logical neighbors λπd²/τ. Therefore, n0 can evaluate the cooperation probability x̄ (of a neighboring node) required to achieve the target filtering probability as follows: if λπd²/τ ≥ r̄ holds, then x̄ = r̄τ/(λπd²); otherwise, x̄ = 1. In cooperative authentication, source node n0 can itself play the role of a neighboring node, so it knows the parameters ρ, c, and ϑ from its own compromised probability, cooperation cost, and DoP. Thus, n0 can calculate its optimal budget γ by:
γ = \begin{cases} \dfrac{(c − ϑ)λπd² x̄}{τ\left(1 − e^{−\frac{(1 − ρ)λπd²}{2τ} x̄}\right)}, & \text{if } c − ϑ > 0 \\ 0, & \text{otherwise} \end{cases}    (19)
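A minimal sketch of the source-node side, implementing formulas (18) and (19): here r̄ is rounded up to an integer, and the demo values mirror Section 6 (F̄P = 0.95, ρ = 0.2, about 17.26 logical neighbors). The function names are ours and illustrative.

```python
import math

def required_cooperators(target_fp: float, rho: float) -> int:
    """Required number of cooperators r_bar from formula (18), rounded up."""
    return math.ceil(math.log(1.0 - target_fp) / (math.log(1.0 + rho) - math.log(2.0)))

def optimal_budget(target_fp, rho, c, theta, mean_neighbors):
    """Source-node budget gamma from formula (19); mean_neighbors = lambda*pi*d^2/tau."""
    r_bar = required_cooperators(target_fp, rho)
    x_bar = min(1.0, r_bar / mean_neighbors)          # required cooperation probability
    if c - theta <= 0:
        return 0.0                                    # cooperation is already worthwhile
    K = (1.0 - rho) * mean_neighbors / 2.0
    return (c - theta) * mean_neighbors * x_bar / (1.0 - math.exp(-K * x_bar))

if __name__ == "__main__":
    print(required_cooperators(0.95, 0.2))            # -> 6, as in Section 6.2
    print(optimal_budget(0.95, 0.2, c=10.0, theta=5.0, mean_neighbors=17.26))
```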
Based on the above analysis, we can derive the ESS-based algorithms, namely Algorithms 1 and 2. Neighboring nodes can independently obtain the parameter values required by Algorithm 2, without depending on information from other neighboring nodes; therefore, our approach can be implemented in a decentralized manner.
Algorithm 1: Evolutionary game for the source node.
Input: the target filtering probability F̄P.
Initialization: the source node n0 sets λπd²/τ, ρ, c, and ϑ.
Action:
  Calculate the required number r̄ of cooperators by (18).
  Calculate the optimal budget γ by (19).
  Broadcast γ and the message mess to be authenticated to the neighboring nodes.
  Wait for MACs from the neighboring nodes.
  After receiving the MACs, calculate the payment nmi of each cooperator i by (5) and pay cooperator i.
Algorithm 2: Evolutionary game for a neighboring node.
Initialization: a neighboring node ni sets λπd²/τ, ρ, c, and ϑ.
Action:
  Wait to receive the total budget γ and the message mess.
  if γ and mess are not received then
    wait;
  else
    Calculate the cooperation probability x∗ according to Theorem 1 (solving (16) when an interior root exists);
    Choose the cooperating strategy with probability x∗;
    if the cooperating strategy is chosen then
      Generate the MAC and (k − 1) dummies;
      Send the MAC to n0 and deploy the (k − 1) dummies;
      Wait for the payment from n0;
    end
  end
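To complement the two algorithms, the sketch below simulates the replicator dynamics with the Euler update x ← x + (dx/dt)Δt that is also used in Section 6.1, evaluating Ū_C − Ū_D through the closed form (15). The default values mirror Section 6.1 (ρ = 0.2, λπd²/τ = 17.26, ϑ = 5, c = 10); the function names and the demo budget γ = 50 are our own choices.

```python
import math

def payoff_gap(x, gamma, rho, theta, c, mean_neighbors):
    """Closed form of U_C bar - U_D bar, formula (15), with mean_neighbors = lambda*pi*d^2/tau."""
    K = (1.0 - rho) * mean_neighbors / 2.0
    return gamma * (1.0 - math.exp(-K * x)) / (mean_neighbors * x) + theta - c

def evolve(x0, gamma, rho=0.2, theta=5.0, c=10.0, mean_neighbors=17.26,
           dt=0.1, steps=2000):
    """Euler simulation of the replicator dynamics (13): x <- x + x(1-x)(U_C - U_D) dt."""
    x = x0
    for _ in range(steps):
        x += x * (1.0 - x) * payoff_gap(x, gamma, rho, theta, c, mean_neighbors) * dt
        x = min(max(x, 1e-9), 1.0 - 1e-9)     # keep x inside (0, 1)
    return x

if __name__ == "__main__":
    # different initial probabilities converge to the same ESS for a given budget gamma
    for x0 in (0.1, 0.5, 0.9):
        print(x0, round(evolve(x0, gamma=50.0), 4))
```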
Fig. 2. Evolution process.
6. Experimental evaluation

6.1. Numerical experiment on simulated datasets

In the simulation, unless otherwise stated, we set the default parameter values to ρ = 0.2, λπd²/τ = 17.26 (as shown in the next subsection, this is the real average number of logical neighbors in the medium vehicle-density scenario), ϑ = 5, and c = 10.

(1) Evolution process. We fixed the parameters ρ, λπd²/τ, ϑ, and c, and then picked different values of γ and of the initial cooperation probability x of a neighboring node in order to observe the evolution process. The evolution was updated as x ← x + (dx/dt) × Δt, where Δt = 0.1 is the step size. From Fig. 2, we can see that for a given total budget γ, the replicator dynamics always converge to the ESS x∗ regardless of the initial value of x.

(2) Dependence of the ESS x∗ on the budget and the number of logical neighboring nodes. Fig. 3 shows that the ESS monotonically (but not strictly) increases with the total budget γ. This is consistent with our conclusion: the greater the budget, the higher the probability that a node cooperates.
Fig. 3. ESS x∗ dependence on the total budget γ .
Fig. 4. ESS x∗ dependence on the number of neighboring nodes.
Fig. 5. Filtering probability dependence on the total budget γ .
Fig. 4 shows that the ESS x∗ monotonically decreases with the average number of logical neighboring nodes once this number exceeds a certain threshold. The reason is that our scheme incentivizes an appropriate number of neighboring nodes (but not too many) to cooperate; thus, our approach can save the resources of neighboring nodes.

(3) Dependence of the filtering probability on the budget γ. Fig. 5 shows that the filtering probability monotonically (but not strictly) increases with the total budget γ. If the compromised probability of a node is very high (e.g., ρ = 0.8), then even with a sufficient budget we cannot guarantee that the filtering probability approaches one, because there are not enough neighboring nodes around n0.
Fig. 6. Number of completed authentication tasks: (a) dense/medium scenario; (b) sparse scenario.
6.2. Numerical experiment on real datasets

In this experiment, we adopted a city scenario including 519,930 GPS records of 14,480 taxis in three representative areas of Beijing: the Guomao area, covering 20.534 km × 26.567 km; the Babaoshan area, covering 21.641 km × 28.569 km; and the Beiqijia area, covering 25.724 km × 40.976 km. These records were gathered from 8:00:00 a.m. to 8:59:59 a.m. on August 13, 2015. During this period, the vehicle densities in the Guomao, Babaoshan, and Beiqijia areas were high, medium, and low, respectively (i.e., a dense scenario, a medium scenario, and a sparse scenario). We assume that: (1) passengers in a taxi own a smartphone to collect noise data, and their neighboring nodes use smartphones equipped with Wi-Fi with a communication distance of 200 m to authenticate the noise data; (2) the period required to complete an authentication task is about 60 s (these two assumptions are often used in MONs [20]); and (3) the target filtering probability F̄P required by a source node is 0.95 and the compromised probability of a node is ρ = 0.2 (thus, according to formula (18), we have r̄ = 6). Under these assumptions, the real average numbers of logical neighbors in the Guomao, Babaoshan, and Beiqijia areas were 25.48, 17.26, and 1.05, respectively (i.e., λπd²/τ in these three areas equaled 25.48, 17.26, and 1.05, respectively).

Owing to space limitations, we only compare the number of authentication tasks completed by different approaches. Apart from our approach, three other strategies were included in the experiment: (1) the selfish strategy, i.e., without incentives, no node cooperates; (2) the selfless strategy, i.e., neighboring nodes unconditionally cooperate until their resources are spent; and (3) the Y percent-selfless strategy, i.e., Y percent of the neighboring nodes unconditionally cooperate and the other nodes are selfish. Fig. 6 shows that the number of tasks completed by our approach is always greater than that of the other approaches, regardless of the vehicle density (in this study, a task is completed when the target filtering probability is achieved). For instance, when 100 percent of the resources remained: (1) in the dense scenario, 10,672 authentication tasks were completed with the 30 percent-selfless approach, 7129 tasks with the selfless approach, and 13,360 tasks with our approach; (2) in the sparse scenario, the number of tasks completed by our approach was 87 times that of the 20 percent-selfless approach. Although the number of tasks completed by the selfless approach approximates that of our approach, in practice it is almost impossible for all nodes to be selfless. Therefore, our approach has overwhelming advantages over the other approaches.

As mentioned in Section 3, for a given budget, a neighboring node selects the cooperating strategy with probability x, which means that not all nodes participate in the cooperative authentication. A comparison with the selfless approach shows that, although the selfless approach can achieve a result close to ours, redundant consumption is generated because all nodes participate in the cooperation; in this case, for the same total budget, the payoff of each cooperator is lower than in our approach and the total consumption is higher.
Compared with the Y percent-selfless approaches, as shown in Fig. 4, the ESS monotonically decreases with the average number of logical neighboring nodes and falls well below 20% when the number of neighboring nodes is greater than 50. This indicates that the denser the nodes, the lower the probability that a node needs to participate in the cooperation, and the lower the total consumption. Thus, our approach can save the resources of neighboring nodes.

7. Conclusion

To incentivize nodes to cooperate, we formulate cooperative authentication in MONs as an evolutionary game. We answer two questions: (1) how does a neighboring node independently select the optimal strategy and maximize its payoff, and (2) how can a source node optimally select the total budget to guarantee the target filtering probability. We develop algorithms for the proposed game approach.
The experiments demonstrate the effectiveness of our approach. In this study, we considered only homogeneous neighboring nodes; a more systematic investigation of heterogeneous neighboring nodes will be pursued in our future work.

Conflict of interest

None.

Acknowledgements

This work was supported by the National Key Research and Development Program of China (No. 2016YFB0801001), the National Natural Science Foundation of China (No. U1836203), and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDC02040400).

References

[1] P. Anusha, D. Sudha, A novel approach for authentication in distributed m-healthcare using AAPM and multi-level cooperative authentication, Int. J. IEEE 4 (1) (2016) 1–6.
[2] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[3] L. Heng, D.B. Work, G.X. Gao, GPS signal authentication from cooperative peers, IEEE Trans. Intell. Transp. Syst. 16 (4) (2015) 1794–1805.
[4] H. Gao, et al., A survey of incentive mechanisms for participatory sensing, IEEE Commun. Surv. Tutorials 17 (2) (2015) 918–943.
[5] L. Gao, N. Ruan, H. Zhu, Efficient and secure message authentication in cooperative driving: a game-theoretic approach, in: Proc. of IEEE ICC, 2016.
[6] Y. Guo, L. Yin, L. Liu, B. Fang, Utility-based cooperative decision in cooperative authentication, in: Proc. of IEEE INFOCOM, 2014.
[7] H.J. Jo, I.S. Kim, D.H. Lee, Reliable cooperative authentication for vehicular networks, IEEE Trans. Intell. Transp. Syst. (2017) 1–15.
[8] F. Li, X. Wang, B. Niu, H. Li, C. Li, L. Chen, TrackU: exploiting user's mobility behavior via WiFi list, in: Proc. of IEEE GLOBECOM, 2017.
[9] R. Li, C. Cheng, M. Qi, W. Lai, Design of dynamic vehicle routing system based on online map service, in: Proc. of IEEE ICSSSM, 2016.
[10] X. Lin, X. Li, Achieving efficient cooperative message authentication in vehicular ad hoc networks, IEEE Trans. Veh. Technol. 62 (7) (2013) 3339–3348.
[11] X. Liu, K. Liu, L. Guo, X. Li, Y. Fang, A game-theoretic approach for achieving k-anonymity in location based services, in: Proc. of IEEE INFOCOM, 2013.
[12] R. Lu, X. Lin, H. Zhu, X. Liang, X. Shen, BECAN: a bandwidth-efficient cooperative authentication scheme for filtering injected false data in wireless sensor networks, IEEE Trans. Parallel Distrib. Syst. 23 (1) (2012) 32–43.
[13] Y. Lv, T. Moscibroda, Fair and resilient incentive tree mechanisms, Distrib. Comput. 29 (1) (2016) 1–16.
[14] F. Restuccia, S.K. Das, J. Payton, Incentive mechanisms for participatory sensing: survey and research challenges, ACM Trans. Sens. Netw. 12 (2) (2016) 1–40.
[15] N. Ruan, L. Gao, H. Zhu, W. Jia, X. Li, Q. Hu, Toward optimal DoS-resistant authentication in crowdsensing networks via evolutionary game, in: Proc. of IEEE ICDCS, 2016.
[16] R. Trujillo-Rasua, I.G. Yero, k-metric antidimension: a privacy measure for social graphs, Inf. Sci. 328 (2016) 403–417.
[17] B. Wang, K.J.R. Liu, T.C. Clancy, Evolutionary cooperative spectrum sensing game: how to collaborate? IEEE Trans. Commun. 58 (3) (2010) 890–900.
[18] M. Xing, J. He, L. Cai, Utility maximization for multimedia data dissemination in large-scale VANETs, IEEE Trans. Mob. Comput. 16 (4) (2017) 1188–1198.
[19] D. Yang, G. Xue, X. Fang, J. Tang, Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing, in: Proc. of IEEE MOBICOM, 2012.
[20] L. Yin, Y. Guo, F. Li, Y. Sun, J. Qian, A. Vasilakos, A game-theoretic approach to advertisement dissemination in ephemeral networks, World Wide Web J. 20 (2017) 1–20.
[21] D. Zhao, H. Ma, S. Tang, X.Y. Li, COUPON: a cooperative framework for building sensing maps in mobile opportunistic networks, IEEE Trans. Parallel Distrib. Syst. 26 (2) (2015) 392–402.
[22] J. Zhou, X. Lin, X. Dong, Z. Cao, PSMPA: patient self-controllable and multi-level privacy-preserving cooperative authentication in distributed m-healthcare cloud computing system, IEEE Trans. Parallel Distrib. Syst. 26 (6) (2015) 1693–1703.
[23] X. Zhou, et al., Identity, location, disease and more: inferring your secrets from android public resources, in: Proc. of ACM CCS, 2013.