Physics Letters A ••• (••••) ••••••
www.elsevier.com/locate/pla
The evolution of cooperation within the multigame environment based on the Particle Swarm Optimization algorithm

Xianjia Wang a,b, Wenman Chen a,*, Jinhua Zhao a

a Economics and Management School, Wuhan University, Wuhan 430070, China
b Institute of Systems Engineering, Wuhan University, Wuhan 430070, China
Article info

Article history: Available online xxxx
Communicated by M. Perc

Keywords: Multigame environment; Payoff matrix; Particle Swarm Optimization; Evolutionary game; Cooperation
Abstract

This paper studies the evolution of cooperation in a so-called multigame environment based on the Particle Swarm Optimization (PSO) algorithm. In a multigame environment, players use different game payoff matrices and acquire their utilities from interactions with their neighbors. According to the PSO algorithm, each player updates its strategy according to both the strategy adopted by the player with the highest utility in its neighborhood and the most profitable strategy in its own past actions. Simulation results show that the multigame environment is conducive to the promotion of cooperation. Besides, within the multigame environment, for any player, imitating the most profitable strategy in its past actions promotes cooperation more effectively than imitating the strategy adopted by the player with the highest utility in its neighborhood.

© 2019 Elsevier B.V. All rights reserved.
1. Introduction

The widespread emergence of cooperation in the real world is a puzzling phenomenon, mainly because defectors theoretically outperform cooperators within a well-mixed population [1–3]. In the social and behavioral sciences, the widespread emergence of cooperators within a population remains a conundrum. In a large number of studies, this conundrum is treated as a social dilemma and has been investigated through the prisoner's dilemma game [4–7], the snowdrift game [8–13] and the public goods game [14,15]. To counter the emergence of defection in a population, five renowned mechanisms promoting cooperation have been proposed [16]: kin selection [17], direct reciprocity [18], indirect reciprocity [19], group selection, and network reciprocity [20–25]. Other proposed mechanisms, such as reward [26,27], reputation [28,29] and punishment [30–37], among others [38–46], have also proved effective in elevating the level of cooperation.

Previous studies concerning the evolution of cooperation have largely assumed that all players in the same population perceive their interactions in the same way. However, in reality, players may have different perceptions of what they might lose or gain when competing with their neighbors [47–50]. Motivated by this fact, we introduce the multigame environment into our model in this paper. Within the multigame environment, players
* Corresponding author.
E-mail addresses: [email protected], [email protected] (W. Chen).
https://doi.org/10.1016/j.physleta.2019.126165
0375-9601/© 2019 Elsevier B.V. All rights reserved.
would adopt different payoff matrices, since the perceptions of interactions differ between players. Moreover, to study the evolution of cooperation within a more realistic setup, we adopt a more realistic strategy update rule, namely one based on the Particle Swarm Optimization (PSO) algorithm.

Recently, strategy update rules based on biological behavior in nature have been investigated. One important example is the PSO algorithm, which stems from the study of the foraging behavior of birds. Researchers have applied the PSO algorithm to evolutionary games [51–54]. There are two main advantages to doing so. First, combining the PSO algorithm with the evolutionary game brings the evolutionary dynamics closer to reality. Second, the obtained results can provide more valuable guidance for practical problems. In the PSO algorithm, each player is regarded as a particle and is able to identify both the strategy adopted by the player with the highest utility among its nearest neighbors (the swarm) and the most profitable strategy in its own past actions. Weighted by a preference coefficient, all players simultaneously update their strategies at each time step t.

We organize the rest of this paper as follows. Section 2 provides specific details of our model. Section 3 investigates, by means of extensive simulations, how the multigame environment and the PSO algorithm influence the evolution of cooperation in a spatial population; there we present the main results and reveal their underlying meaning. In Section 4, we discuss what we have studied and observed in this paper.
Table 1. Payoff matrix.

        C   D
  C     R   S
  D     T   P
2. Theoretical model

It is generally known that, in the real world, interactions between people are not random, but rather can be characterized by a certain network. To depict the interactions between players in a more realistic way, we use an undirected graph, the square lattice with average degree <k> = 4 and size N, to characterize the structure of the population. On the graph, vertices represent players and links represent the interactions between players. According to the general assumption in previous studies [42–46], each player in the evolutionary game can only choose to be either a cooperator (C) or a defector (D) at each strategy update step and plays the game with the payoff matrix in Table 1. In this paper, nevertheless, we make a more realistic assumption: each player chooses its strategy from a continuous set of strategies. Specifically, at every time step, each player in the structured population holds a strategy value s in the interval [0, 1]. Initially (t = 0), every individual is randomly assigned a strategy value s drawn from the uniform distribution. We specify that the higher the value of s, the higher the cooperativeness level in the spatial population: s = 1 and s = 0 correspond to the fully cooperative and fully defective strategies respectively, while 0.5 < s < 1 indicates a predominantly cooperative strategy and 0 < s < 0.5 a predominantly defective one. Without loss of generality, we study the pairwise evolutionary game in this paper. In a pairwise interaction, let player i and player j, holding continuous strategies si and sj respectively, play a game. If both are relatively cooperative, that is, si > 0.5 and sj > 0.5, they get si sj as the fruit of mutual cooperation.
If si is relatively defective, say si < 0.5, and sj is relatively cooperative, say sj > 0.5, player i is given T(1 − si)sj (1 < T < 2) and player j obtains either θ1, in the case of the snowdrift game, or θ2, in the case of the prisoner's dilemma game. Moreover, if the two players are both relatively defective (si < 0.5 and sj < 0.5), they get nothing.

In reality, players are not all the same and may perceive their interactions in different ways when competing with their neighbors, leading them to adopt different payoff matrices. Motivated by this fact, we define the above-mentioned situation as the multigame environment. Within the multigame environment, we consider the weak prisoner's dilemma game as the core game payoff matrix adopted by players, while other players adopt either the snowdrift game payoff matrix or the prisoner's dilemma game payoff matrix. We use two parameters, θ1 and θ2, to distinguish the payoff matrices of the different games. To be specific, the positive value 0 < θ1 < 1 corresponds to the sucker's payoff of the snowdrift game, the negative value −1 < θ2 < 0 to that of the prisoner's dilemma game, and 0 to that of the weak prisoner's dilemma game. Taking this into consideration, the structured population is divided into three subpopulations, namely the weak prisoner's dilemma subpopulation (WPD subpopulation), the snowdrift game subpopulation (SG subpopulation) and the prisoner's dilemma subpopulation (PD subpopulation). Players belonging to different subpopulations adopt different game payoff matrices: players in the SG subpopulation use the snowdrift game payoff matrix, and players in the PD subpopulation adopt the prisoner's dilemma game payoff matrix. We specify that the number of players belonging to the WPD subpopulation is ρN, while the sizes of the PD subpopulation and the SG subpopulation are each (1 − ρ)N/2.

Then, we use the following notation for the neighborhood on the square lattice: N1(i) denotes the set of all players at distance 1 from player i. We define the following sets to classify the strategies of player i's neighbors:
L1i := { j ∈ N1(i) | si > 0.5 and sj > 0.5 }
L2i := { j ∈ N1(i) | si > 0.5 and sj < 0.5 }
L3i := { j ∈ N1(i) | si < 0.5 and sj > 0.5 }
L4i := { j ∈ N1(i) | si < 0.5 and sj < 0.5 }
where i ∈ {1, 2, 3, . . . , N} denotes the vertices of the network. Here, L1i and L2i represent the set of cooperative players and the set of defective players interacting with a cooperative player i, respectively. Similarly, L3i and L4i denote the set of cooperative players and the set of defective players interacting with a defective player i, respectively. We define ϕ as the type of the subpopulation:

ϕ = 0 (WPD), 1 (SG), 2 (PD).

Each player acquires its utility from interactions with its nearest neighbors on the spatial network. During each updating generation of the evolutionary process, player i acquires its accumulated utility πϕ(i) (i ∈ {1, . . . , N}) from interactions with all of its neighbors:
π0(i) = Σ_{j ∈ L1i} si sj + Σ_{j ∈ L3i} T(1 − si)sj

π1(i) = Σ_{j ∈ L1i} si sj + Σ_{j ∈ L2i} θ1 + Σ_{j ∈ L3i} T(1 − si)sj

π2(i) = Σ_{j ∈ L1i} si sj + Σ_{j ∈ L2i} θ2 + Σ_{j ∈ L3i} T(1 − si)sj
Once the accumulated utilities πϕ are acquired, players update their strategies by means of the PSO algorithm. In the PSO algorithm, each player is regarded as a particle and is able to identify both the strategy sli(t) adopted by the player with the highest utility among its nearest neighbors (the swarm) and the most profitable strategy shi(t) in its own past actions. According to its preference between sli(t) and shi(t), player i updates its strategy at each time step. Initially (t = 0), all players in the structured population have the same velocity vi(0) = 0 (i ∈ {1, . . . , N}). At each following time step t (t > 0), player i simultaneously updates its velocity vi(t) and its strategy si(t) according to equations (1) and (2):
vi(t + 1) = vi(t) + ω [shi(t) − si(t)] + (1 − ω) [sli(t) − si(t)]    (1)

si(t + 1) = vi(t + 1) + si(t)    (2)
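The two update equations can be sketched for a single player as follows. The names are our own, and clipping s to [0, 1] is our assumption, since the paper does not state how out-of-range strategy values are handled.

```python
def pso_update(s, v, s_hist_best, s_local_best, omega):
    """One application of Eqs. (1)-(2) for a single player.

    s_hist_best is the most profitable strategy in the player's own
    past actions (s_hi); s_local_best is the strategy of the
    highest-utility neighbor (s_li).  omega in [0, 1] weights the
    historical-best term, as in Eq. (1).
    """
    v_new = v + omega * (s_hist_best - s) + (1 - omega) * (s_local_best - s)
    # Clip to keep s a valid strategy value in [0, 1] (our assumption).
    s_new = min(1.0, max(0.0, s + v_new))
    return s_new, v_new
```

For example, a player at s = 0.5 with historical best 1.0, local best 0.0 and ω = 0.9 moves toward its own past best (v = 0.4, s = 0.9), while with ω = 0 it moves entirely toward the neighborhood best.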
In equation (1), the parameter ω ∈ [0, 1] represents the preference coefficient of player i between sli (t ) and shi (t ). In particular,
ω → 0 implies that player i tends to imitate the strategy sli (t ), while ω → 1 implies that player i is more likely to adopt the strategy shi (t ). In this paper, we assume that the ω values of all players
are homogeneous and remain unchanged throughout the entire evolutionary process.

To observe how the evolution of cooperation varies under different conditions, we define the average cooperativeness level of the structured population as

s = (1/N) Σ_{i ∈ {1,...,N}} si

and the degree of scatter of players' cooperation levels in the stationary state as

σ = sqrt( Σ_{i ∈ {1,...,N}} (si − s)² / (N − 1) ).

Then, we carry out extensive
Fig. 1. The stationary average cooperativeness level s decreases as ρ increases, while s increases as θ increases from 0 to 1. The other parameter values are (a) N = 10000 and (b) N = 250000, with ω = 0.9 and T = 1.1. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
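The two order parameters reported in these figures, the average cooperativeness level and its standard deviation, can be computed directly from the stationary strategy values; a minimal sketch with our own naming:

```python
import math

def order_parameters(strategies):
    """Return the average cooperativeness level s_bar and the sample
    standard deviation sigma of the strategy values s_i."""
    n = len(strategies)
    s_bar = sum(strategies) / n
    sigma = math.sqrt(sum((s - s_bar) ** 2 for s in strategies) / (n - 1))
    return s_bar, sigma
```

Using the sample (N − 1) denominator matches the definition of σ in Section 2.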
Fig. 2. (a) Under different values of ρ, the average cooperativeness level s is greater than 0 and decreases significantly as T increases from 1 to 2. Besides, the average cooperativeness level s increases as ρ decreases. (b) The standard deviation of cooperation levels σ increases as ρ decreases. The other parameter values are N = 10000, ω = 0.9 and θ = 0.4.
Monte Carlo simulations to explore how the multigame environment influences the evolution of cooperation.

3. Dynamics

For ρ = 1, the multigame environment reduces to the homogeneous population, in which all players in the structured population adopt the weak prisoner's dilemma game payoff matrix [51]. In this paper, to determine whether the multigame environment can induce the promotion of cooperation, we set this homogeneous population as the baseline. Meanwhile, the average overall payoff matrix of the whole population must also reduce to the core game payoff matrix. Therefore, we set θ1 = θ and θ2 = −θ (0 < θ < 1) to fulfill θ1 + θ2 = 0. In this section, we explore in detail how the multigame environment affects the evolution of cooperation.

First, we examine how the average cooperativeness level s varies with the parameters ρ and θ under two population sizes. Noticeably, the evolutionary dynamics are robust under different population sizes [54]; thus, we only analyze the evolution of cooperation in the population with size N = 10000 in what follows. It can be observed in Fig. 1 that the average cooperativeness level s decreases monotonically as ρ increases from 0 to 1, whereas s increases with increasing θ. From these observations, we conclude that the more heterogeneous the multigame environment, the stronger the cooperation. In addition, if players belonging to the SG subpopulation use a higher θ1 value, it is more conducive to the resolution of the social dilemma and the elevation of cooperation within the multigame environment.

For more details, we proceed by plotting the changing trend of s under different ρ and T values in Fig. 2. As shown in Fig. 2(a), s declines significantly as the temptation value T increases from 1 to 2. Besides, s remains greater than zero for all ρ values even when the temptation to defect is extremely strong (T → 2) within the multigame environment, which differs from previous work [52]. Fig. 2(a) also suggests that the average cooperativeness level s increases as ρ decreases. This observation further verifies what we observed in Fig. 1, namely that a more heterogeneous multigame environment supports a higher cooperation level in the structured population. Moreover, in Fig. 2(b), the standard deviation of cooperation levels σ shows an increasing trend as ρ decreases from 0.9 to 0.1. This trend indicates that, in the stationary state, the higher the average cooperativeness level s, the more diverse the strategies adopted by players in the structured population. Putting these results together, we conclude that the multigame environment (ρ ≠ 1) plays a better role than the homogeneous population (ρ = 1) in promoting the average level of cooperativeness.

Because the perceptions of interactions are heterogeneous, cooperative behavior may differ between players. Here, we measure the difference in cooperative behavior between players belonging to different subpopulations. As depicted in Fig. 3, the average cooperativeness level of the SG subpopulation is notably higher than that of the other two subpopulations for all ρ and T values. However, the presented results show that the average coop-
erativeness of the PD subpopulation is the lowest among the three subpopulations. Taken together, if there are fewer players belonging to the PD subpopulation, the average level of cooperativeness reaches a higher level in the stationary state.

Fig. 3. In the stationary state, the average cooperativeness level of the SG subpopulation is the highest, while that of the PD subpopulation is the lowest, irrespective of the values of the parameters ρ and T. Moreover, the average cooperativeness level of the WPD subpopulation lies between those of the SG and PD subpopulations. The other parameter values are N = 10000, ω = 0.9 and θ = 0.4.

Next, we test how the PSO algorithm affects the average cooperativeness level s within the multigame environment. We first plot the stationary distribution of strategies in dependence on ω and ρ in Fig. 4. The results presented in Fig. 4 further corroborate that a lower ρ is conducive to the promotion of cooperation. For example, for ω = 0.9, the average cooperativeness level s increases from 0.51 to 0.76 as ρ decreases from 0.9 to 0.1; for ω = 0.1, s increases from 0.39 to 0.58 as ρ decreases from 0.9 to 0.1. Interestingly, it can be noted in Fig. 4 that the preference coefficient ω induces significantly different distribution features of the strategies. Fig. 4 shows that, for small ω values (ω = 0.1), where a player tends to update its strategy by imitating the strategy adopted by the player with the highest utility in its neighborhood, a large number of players with a high cooperativeness level (s → 1) form clusters, which are surrounded by players with a low cooperativeness level (s → 0). In this case, we also note that almost all players tend to adopt monotonous strategies, that is, either full defection (s = 0) or full cooperation (s = 1). On the contrary, for large ω values (ω = 0.9), where a player tends to imitate the most profitable strategy in its own past actions, the clusters of players with a relatively high cooperativeness level disappear, while the strategies adopted by players become diversified in the stationary state. More interestingly, we observe in Fig. 4 that, within the multigame environment, the average cooperativeness level s shows an upward trend as ω increases from 0.1 to 0.9. This observation is contrary to the conclusion of the previous work of Jianlei Zhang et al. [52]. The difference can be explained as follows. Within the multigame environment, the accumulated utility of player i cannot be improved by imitating the strategy of the player j with the highest utility in player i's neighborhood when the perceptions of interactions between them differ. Therefore, it is more rational for a player to seek higher utility by imitating the most profitable strategy in its own action history rather than the strategy adopted by the player with the highest utility in its neighborhood.

To obtain further insight concerning the evolution of cooperation, we depict the spatial distributions of strategies and strategy update velocities in Fig. 5 and Fig. 6. The results presented in Fig. 5 show that, for ω = 0.05, the strategies adopted by players in the stationary state are mainly distributed around s = 0 (full defection) and s = 1 (full cooperation) under different ρ values.
Conversely, for ω = 0.95, the strategies adopted by players become diverse and are distributed throughout the entire interval of s. These observations are consistent with the results obtained in Fig. 1(b) and Fig. 4. Moreover, it is clear in Fig. 6 that the preference coefficient ω plays a crucial role in the distribution of strategy update velocities. At small ω values (ω = 0.1), a large number of players in the population have strategy update velocities greater than 0 even in the stationary state, meaning that these players constantly update their strategies in pursuit of better utility. At large ω values (ω = 0.9), however, the strategy update velocities of most players are close to 0, and they tend to keep their strategies unchanged in the stationary state.
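The degree to which players are still adjusting their strategies, as contrasted in Fig. 6, can be quantified, for instance, by the fraction of players whose update velocity exceeds a small threshold. This is our own illustrative measure, not one defined in the paper:

```python
def active_fraction(velocities, eps=1e-3):
    """Fraction of players whose strategy update velocity is still
    non-negligible (|v_i| > eps); a value near 0 indicates that the
    population has settled into a stationary strategy profile."""
    if not velocities:
        return 0.0
    return sum(1 for v in velocities if abs(v) > eps) / len(velocities)
```

Tracked over time, such a quantity would stay well above zero for small ω (players keep oscillating) and decay toward zero for large ω, in line with the behavior described above.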
Fig. 4. For small ω values (ω = 0.1), a large number of players with a high cooperativeness level (s → 1) form clusters. Conversely, for large ω values (ω = 0.9), the strategies adopted by players become diversified in the stationary state. Moreover, the average cooperativeness level s shows an upward trend as ω increases from 0.1 to 0.9. The other parameter values are N = 10000, θ = 0.5 and T = 1.1.
Fig. 5. For ω = 0.05, the strategies adopted by players in the stationary state are mainly distributed around s = 0 (full defection) and s = 1 (full cooperation) under different ρ values. Conversely, for ω = 0.95, the strategies adopted by players become diverse and are distributed throughout the entire interval of s. The other parameter values are N = 10000, θ = 0.5 and T = 1.1.
Fig. 6. At small ω values (ω = 0.1), many players in the population have strategy update velocities greater than 0 even in the stationary state. However, at large ω values (ω = 0.9), the strategy update velocities of most players are close to 0. The other parameter values are N = 10000, θ = 0.5 and T = 1.1.
4. Discussion

In reality, people are not always the same. Motivated by this fact, we have introduced the multigame environment in this paper. Within the multigame environment, players perceive their interactions in different ways and consequently adopt different payoff matrices. Besides, we have also assumed that players in the structured population update their strategies by means of the PSO algorithm, in which a player updates its strategy according to both the most profitable strategy in its own action history (ω → 1) and the strategy adopted by the player with the highest utility in its neighborhood (ω → 0).

Based on this model, we have first analyzed how the multigame environment affects the evolution of cooperation. Simulations have shown that the multigame environment is conducive to promoting the level of cooperation. Besides, the more heterogeneous the multigame environment, the stronger the cooperation, irrespective of the value of the temptation. In addition, we observed that if players belonging to the SG subpopulation use a higher θ1 value, the level of cooperation is higher in the stationary state.

Then, we have also discussed the distributions of strategies and strategy update velocities on the square lattice to figure out how the PSO algorithm influences the evolution of cooperation. We have found that when players update their strategies mainly by imitating the strategy adopted by the player with the highest utility in their neighborhood (ω → 0), players adopt either full cooperation or full defection and constantly update their own strategies. Conversely, diverse strategies are adopted when every player updates its strategy mainly by imitating the most profitable strategy in its own past actions (ω → 1); in this case, players tend to keep their strategies unchanged in the stationary state.
Moreover, for any player in the multigame environment, imitating the most profitable strategy in its action history promotes cooperation more significantly than imitating the strategy adopted by the player with the highest utility in its neighborhood.

Acknowledgement

Wenman Chen gratefully acknowledges the generous support of the China Scholarship Council. We thank the anonymous reviewers and Dezhuang Hu for insightful advice. This work was supported by the National Natural Science Foundation of China (NNSFC) (Grants No. 71871171, 71871173 and 71701076). The numerical calculations in this paper were performed on the supercomputing system in the Supercomputing Center of Wuhan University.

References

[1] M.R. Christie, G.G. McNickle, R.A. French, M.S. Blouin, Life history variation is maintained by fitness trade-offs and negative frequency-dependent selection, Proc. Natl. Acad. Sci. USA 115 (2018) 4441–4446.
[2] A.M. Colman, The puzzle of cooperation, Nature 440 (2006) 744–745.
[3] R. Axelrod, W.D. Hamilton, The evolution of cooperation, Science 211 (1981) 1390–1396.
[4] A. Szolnoki, M. Perc, Z. Danku, Towards effective payoffs in the prisoner's dilemma game on scale-free networks, Phys. A, Stat. Mech. Appl. 387 (2008) 2075–2082.
[5] G. Szabo, C. Toke, Evolutionary prisoner's dilemma game on a square lattice, Phys. Rev. E 58 (1998) 69–73.
[6] E. Akin, S. Plaskacz, J. Zwierzchowska, Smale strategies for the n-person iterated prisoner's dilemma, Topol. Methods Nonlinear Anal. 53 (2019) 351–361.
[7] Y.S. Li, C. Xu, P.M. Hui, An effective intervention algorithm for promoting cooperation in the prisoner's dilemma game with multiple stable states, Phys. A, Stat. Mech. Appl. 501 (2016) 400–407.
[8] W.B. Du, X.B. Cao, M.B. Hu, W.X. Wang, Asymmetric cost in snowdrift game on scale-free networks, Europhys. Lett. 87 (2009) 60004.
[9] Z. Wang, M. Jusup, L. Shi, J.H. Lee, Y. Iwasa, S. Boccaletti, Exploiting a cognitive bias promotes cooperation in social dilemma experiments, Nat. Commun. 9 (2018) 2954.
[10] T. Sasaki, I. Okada, Cheating is evolutionarily assimilated with cooperation in the continuous snowdrift game, Biosystems 131 (2015) 51–59.
[11] G. Chen, T. Qiu, X.R. Wu, Clustering effect on the evolution of cooperation in a herding snowdrift game, Chin. Phys. Lett. 26 (2009) 3–6.
[12] B. Wang, Z. Pei, L. Wang, Evolutionary dynamics of cooperation on interdependent networks with the prisoner's dilemma and snowdrift game, Europhys. Lett. 107 (2014) 58006.
[13] L.H. Shang, X. Li, X.F. Wang, Cooperative dynamics of snowdrift game on spatial distance-dependent small-world networks, Eur. Phys. J. B 54 (2006) 369–373.
[14] C. Shen, C. Chu, L. Shi, M. Jusup, M. Perc, Z. Wang, Coevolutionary resolution of the public goods dilemma in interdependent structured populations, Europhys. Lett. 124 (2018) 48003.
[15] P.A.I. Forsyth, C. Hauert, Public goods games with reward in finite populations, J. Math. Biol. 63 (2011) 109–123.
[16] M.A. Nowak, Five rules for the evolution of cooperation, Science 314 (2006) 1560–1563.
[17] J.L. Barker, J.L. Bronstein, M.L. Friesen, E.I. Jones, H.K. Reeve, A.G. Zink, M.E. Frederickson, Synthesizing perspectives on the evolution of cooperation within and between species, Evolution 71 (2017) 814–825.
[18] R.L. Trivers, The evolution of reciprocal altruism, Q. Rev. Biol. 46 (1971) 35–57.
[19] S. Righi, K. Takacs, Social closure and the evolution of cooperation via indirect reciprocity, Sci. Rep. 8 (2018) 11149.
[20] Z. Rong, H.X. Yang, W.X. Wang, Feedback reciprocity mechanism promotes the cooperation of highly clustered scale-free networks, Phys. Rev. E 82 (2010) 047101.
[21] H. Ohtsuki, C. Hauert, E. Lieberman, M.A. Nowak, A simple rule for the evolution of cooperation on graphs and social networks, Nature 441 (2006) 502–505.
[22] D.G. Rand, M.A. Nowak, J.H. Fowler, N.A. Christakis, Static network structure can stabilize human cooperation, Proc. Natl. Acad. Sci. USA 111 (2014) 17093–17098.
[23] M. Taborsky, J.G. Frommen, C. Riehl, The evolution of cooperation based on direct fitness benefits, Philos. Trans. R. Soc. Lond. B, Biol. Sci. 371 (2016) 20150472.
[24] Q. Su, A. Li, L. Wang, H.E. Stanley, Spatial reciprocity in the evolution of cooperation, Proc. R. Soc. Lond. B, Biol. Sci. 286 (2019) 78–85.
[25] M.A. Nowak, R.M. May, Evolutionary games and spatial chaos, Nature 359 (1992) 826–829.
[26] C. Hauert, Replicator dynamics of reward and reputation in public goods games, J. Theor. Biol. 267 (2010) 22–28.
[27] J. Andreoni, W. Harbaugh, L. Vesterlund, The carrot or the stick: rewards, punishments, and cooperation, Am. Econ. Rev. 93 (2003) 893–902.
[28] M. Milinski, D. Semmann, H.J. Krambeck, Reputation helps solve the 'tragedy of the commons', Nature 415 (2002) 424–426.
[29] F.P. Santos, F.C. Santos, J.M. Pacheco, Social norm complexity and past reputations in the evolution of cooperation, Nature 555 (2018) 242–245.
[30] R. Boyd, P.J. Richerson, Punishment allows the evolution of cooperation (or anything else) in sizable groups, Ethol. Sociobiol. 13 (1992) 171–195.
[31] X. Li, M. Jusup, Z. Wang, H. Li, L. Shi, B. Podobnik, H.E. Stanley, S. Havlin, S. Boccaletti, Punishment diminishes the benefits of network reciprocity in social dilemma experiments, Proc. Natl. Acad. Sci. USA 115 (2018) 30–35.
[32] H. Yang, X. Chen, Promoting cooperation by punishing minority, Appl. Math. Comput. 316 (2018) 460–466.
[33] R. Boyd, H. Gintis, S. Bowles, Coordinated punishment of defectors sustains cooperation and can proliferate when rare, Science 328 (2010) 617–620.
[34] J.H. Fowler, Altruistic punishment and the origin of cooperation, Proc. Natl. Acad. Sci. USA 102 (2005) 7047–7049.
[35] T. Ohdaira, Characteristics of the evolution of cooperation by the probabilistic peer-punishment based on the difference of payoff, Chaos Solitons Fractals 95 (2017) 77–83.
[36] Y.N. Geng, C. Shen, K. Hu, L. Shi, Impact of punishment on the evolution of cooperation in spatial prisoner's dilemma game, Phys. A, Stat. Mech. Appl. 503 (2018) 540–544.
[37] T. Yu, S.H. Chen, H. Li, Social norms, costly punishment and the evolution of cooperation, J. Econ. Interact. Coord. 11 (2016) 313–343.
[38] A. Szolnoki, M. Perc, Collective influence in evolutionary social dilemmas, Europhys. Lett. 113 (2016) 58004.
[39] Z. Wang, M. Jusup, R.W. Wang, L. Shi, Y. Iwasa, Y. Moreno, J. Kurths, Onymity promotes cooperation in social dilemma experiments, Sci. Adv. 3 (2017) e1601444.
[40] J. Qin, Y. Chen, Y. Kang, M. Perc, Social diversity promotes cooperation in spatial multigames, Europhys. Lett. 118 (2017) 18002.
[41] Y. Fang, T.P. Benko, M. Perc, H. Xu, Dissimilarity-driven behavior and cooperation in the spatial public goods game, Sci. Rep. 9 (2019) 7655.
[42] Z. Danku, M. Perc, A. Szolnoki, Knowing the past improves cooperation in the future, Sci. Rep. 9 (2019) 262.
[43] Y. Fang, T.P. Benko, M. Perc, Synergistic third-party rewarding and punishment in the public goods game, Proc. R. Soc. A 475 (2019) 17–30.
[44] Y. Liu, C. Huang, Q. Dai, Preferential selection based on strategy persistence and memory promotes cooperation in evolutionary prisoner's dilemma games, Phys. A, Stat. Mech. Appl. 499 (2018) 481–489.
[45] M. Perc, J.J. Jordan, D.G. Rand, Z. Wang, S. Boccaletti, A. Szolnoki, Statistical physics of human cooperation, Phys. Rep. 687 (2017) 1–51.
[46] Z. Wang, C.T. Bauch, S. Bhattacharyya, A. d'Onofrio, P. Manfredi, M. Perc, N. Perra, M. Salathe, D. Zhao, Statistical physics of vaccination, Phys. Rep. 664 (2016) 1–113.
[47] A. Szolnoki, M. Perc, Coevolutionary success-driven multigames, Europhys. Lett. 108 (2014) 28004.
[48] Z. Wang, A. Szolnoki, M. Perc, Different perceptions of social dilemmas: evolutionary multigames in structured populations, Phys. Rev. E 90 (2014) 032813.
[49] Z. Li, D. Jia, H. Guo, Y. Geng, C. Shen, Z. Wang, X. Li, The effect of multigame on cooperation in spatial network, Appl. Math. Comput. 351 (2019) 162–167.
[50] F.C. Santos, J.M. Pacheco, T. Lenaerts, Evolutionary dynamics of social dilemmas in structured heterogeneous populations, Proc. Natl. Acad. Sci. USA 103 (2006) 3490–3494.
[51] X. Wang, S. Lv, J. Quan, The evolution of cooperation in the prisoner's dilemma and the snowdrift game based on particle swarm optimization, Phys. A, Stat. Mech. Appl. 482 (2017) 286–295.
[52] J. Zhang, C. Zhang, T. Chu, M. Perc, Resolution of the stochastic strategy spatial prisoner's dilemma by means of particle swarm optimization, PLoS ONE 6 (2011) e21787.
[53] X. Wang, S. Lv, The roles of particle swarm intelligence in the prisoner's dilemma based on continuous and mixed strategy systems on scale-free networks, Appl. Math. Comput. 355 (2019) 213–220.
[54] M. Perc, Stability of subsystem solutions in agent-based models, Eur. J. Phys. 39 (2018) 014001.