Multi-leader PSO (MLPSO): A New PSO Variant for Solving Global Optimization Problems

Penghui Liu, Jing Liu*

Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an 710071, China

*Corresponding author. For additional information regarding this paper, please contact Jing Liu, e-mail: [email protected], homepage: http://see.xidian.edu.cn/faculty/liujing.
Highlights
- Weak exploration ability and premature convergence restrict the performance of PSO.
- Particles consult more valuable information to adjust their search patterns.
- Leaders enhance the diversity of particles' search patterns.
- Particles dynamically select their leaders based on game theory.
- The best leader of each generation updates itself through a self-learning process.

Abstract: Particle swarm optimization (PSO) has long been attracting wide attention from researchers in the community. How to deal with the weak exploration ability and premature convergence of PSO remains
an open question. In this paper, we modify the memory structure of canonical PSO and introduce a multi-leader mechanism to alleviate these problems. The proposed PSO variant, termed multi-leader PSO (MLPSO), uses the modified memory structure to provide more valuable information for particles to escape from local optima, while the multi-leader mechanism enhances the diversity of particles' search patterns. Under the multi-leader mechanism, particles choose their leaders based on game theory instead of random selection. Besides, in every generation the best leader refers to the other leaders' information to improve its quality through a self-learning process. To make a comprehensive analysis, we test MLPSO against the benchmark functions of CEC 2013 and further apply MLPSO to a practical case: the reconstruction of gene regulatory networks based on fuzzy cognitive maps. The experimental results confirm that MLPSO enhances the efficiency of the canonical PSO and performs well on the realistic optimization problem.

Keywords: Particle swarm optimization; Modified memory structure; Multi-leader mechanism; Game theory; CEC 2013.
I. Introduction

Many real-world problems in different fields can be translated into optimization problems, and optimization has therefore been a hot research topic for decades. Moreover, as the optimization problems encountered in practice become increasingly complex, the desire for better algorithms seems endless, and various algorithms have been proposed. Among them, particle swarm optimization (PSO), proposed by Kennedy et al. in [1-2] by emulating the behavior of animals, has become quite popular and has been applied to many different optimization problems [3-8, 44-45]. Although PSO shares many similarities with evolutionary computation techniques, the canonical PSO does not exchange information between agents through evolutionary operators such as crossover. Instead, each agent in the swarm updates its search pattern by learning from its own memory and from other agents' memories. Although PSO is easy to implement and performs well on many optimization problems, researchers still find that the exploration capability of the canonical PSO is weak and that particles may converge prematurely around local optima. Consequently, how to solve these problems of the canonical PSO and thereby enhance the efficiency of the algorithm has been a hot topic in recent decades. In this paper, we modify the memory structure of canonical PSO and introduce a multi-leader mechanism to alleviate these problems. The proposed PSO variant is termed multi-leader PSO (MLPSO), within which the modified memory structure provides
more valuable information for particles to escape from local optima. Meanwhile, the multi-leader mechanism takes advantage of the global memories (leaders) in the modified memory structure to enhance the diversity of particles' search patterns, thereby broadening the search space. To enhance the exploration capability of particles, particles under the multi-leader mechanism dynamically select their leaders and then adjust their search patterns correspondingly. Moreover, in every generation the best leader refers to the other leaders' information and updates its quality through a self-learning process. To make a comprehensive analysis, we test MLPSO against the benchmark functions of CEC 2013 and further apply MLPSO to a practical case: the reconstruction of gene regulatory networks based on fuzzy cognitive maps. The optimization results and the Wilcoxon signed-rank test results show that MLPSO performs well in optimization and is applicable to realistic problems. The rest of the paper is organized as follows: an overview of PSO and some modified variants is given in Section II. Details of MLPSO are given in Section III. Section IV contains the experimental results and analysis. Finally, the conclusion is provided in Section V.
II. Review of PSO and its modified variants

Inspired by the foraging behavior of animals, PSO employs a swarm of particles to represent animals and to search for the location of food (the optimal solution) in an n-dimensional solution space. Each particle i has an individual memory of its best historical position X_i^{lbest} and its objective value Y_i^{lbest}. Moreover, the best individual memory X^{gbest} is broadcast across the whole population. Each particle adapts its search pattern based on its individual memory X_i^{lbest} and the global memory X^{gbest}.
The position and velocity of particle i are represented as:
X_i = (x_{i1}, x_{i2}, x_{i3}, ..., x_{iN})    (1)

V_i = (v_{i1}, v_{i2}, v_{i3}, ..., v_{iN})    (2)
In every generation, each particle adapts its search pattern and position in the dth dimension according to (3) and (4) [1-2].
v_{id}(t+1) = v_{id}(t) + c_1 r_{1d} (x_{id}^{lbest} - x_{id}(t)) + c_2 r_{2d} (x_d^{gbest} - x_{id}(t))    (3)

x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)    (4)
where c_1 and c_2 are the cognitive and social acceleration coefficients, while x_{id}^{lbest} and x_d^{gbest} mark the corresponding values of the individual memory of the ith particle and of the global memory in the dth dimension. Different settings of c_1 and c_2 pay different attention to the local search and the global search. r_{1d} and r_{2d} are random numbers drawn from the interval [0, 1]. However, the original PSO characterized by (3) and (4) did not perform desirably. Therefore, Shi et al. introduced the inertia weight w to help balance the global and local search abilities of particles [9]. They modified the velocity update equation as:
v_{id}(t+1) = w v_{id}(t) + c_1 r_{1d} (x_{id}^{lbest} - x_{id}(t)) + c_2 r_{2d} (x_d^{gbest} - x_{id}(t))    (5)
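For illustration, the canonical update (3)-(5) can be written as a few lines of code. The snippet below is only a sketch; the array shapes, parameter values, and the vectorized form are our own assumptions and not part of the original formulation.

import numpy as np

def pso_step(x, v, x_lbest, x_gbest, w=0.7, c1=2.0, c2=2.0):
    # x, v, x_lbest: arrays of shape (num_particles, dim); x_gbest: array of shape (dim,)
    r1 = np.random.rand(*x.shape)   # r_1d drawn uniformly from [0, 1] for every dimension
    r2 = np.random.rand(*x.shape)   # r_2d drawn uniformly from [0, 1]
    v_new = w * v + c1 * r1 * (x_lbest - x) + c2 * r2 * (x_gbest - x)   # Eq. (5); w = 1 recovers Eq. (3)
    x_new = x + v_new                                                   # Eq. (4)
    return x_new, v_new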
They claimed that a large inertia weight is more appropriate for global search, while a small inertia weight boosts the local search. Shi et al. also analyzed the influence of linearly decreasing w over the course of the search in [9] and designed fuzzy methods to adjust w nonlinearly in [10]. Krzeszowski et al. evaluated selected fuzzy particle swarm optimization algorithms and proposed a modified fuzzy particle swarm optimization method in [46]. Apart from the inertia weight, researchers have contributed many other methods to improve the performance of PSO:
1) Modification of topology structures. To maintain the diversity of the swarm, different topology structures have been introduced. Kennedy claims that a small neighborhood might be more appropriate for PSO to solve complex problems, while PSO with a large neighborhood would perform better on simple problems [11-12]. Suganthan proposed a dynamically growing neighborhood of particles which finally covers the whole population [13]. Hu et al. introduced a dynamic neighborhood composed of the current m nearest particles in the solution space [14].
2) Hybridization with other search approaches. As different optimization approaches may have different strengths, it is natural to hybridize them so that they complement one another. Evolutionary operators have been introduced to help PSO escape from local optima and increase the diversity of the population [15-17]. Ran et al. recombined PSO (weak in exploration) with the artificial bee colony (ABC, weak in exploitation) [18]. Gálvez et al. presented a novel hybrid evolutionary approach combining a genetic algorithm and particle swarm optimization for curve fitting in manufacturing [19].
3) Cooperative approaches. As the evaluation of the error function is only possible when a complete n-dimensional vector is given, the improvement of two components may overrule a degradation in a single component. This may cause particles to get stuck in local optima, a phenomenon summarized as "Two Steps Forward, One Step Back" in [20]. To solve this problem, Li et al. partitioned the n-dimensional problem into a number of sub-vectors and employed a cooperative approach to integrate the improvement process [21].
4) Modification of the learning process. Cheng et al. proposed a pairwise competition mechanism in which only the particle that loses the competition updates its search pattern [22]. Liang et al. used all other particles' historical best information to update the search pattern of a particle [23]. Saxena et al. in [47] provided dynamicity to stagnated particles when the particles' personal best and the swarm's global best position do not improve over successive generations.
Based on the preceding research results, we can see that dealing with the weak exploration ability of PSO is essential for further improving the efficiency of the algorithm and has been a hot topic in recent decades [24-26].
III. MLPSO

Particles in canonical PSO usually wander around the global and local best locations without fully exploring the whole search space. Therefore, premature convergence and weak exploration capability become the main bottlenecks for the efficiency of canonical PSO. To alleviate these problems, we modify the memory structure of PSO and introduce the multi-leader mechanism. The proposed PSO variant is termed multi-leader PSO (MLPSO). Within MLPSO, more positions are memorized and particles are provided with more valuable information to escape from local optima. Besides, the global memories lead particles to search different regions of the solution space. Under the multi-leader mechanism, particles choose their leaders based on game theory instead of random selection, and in every generation the best leader refers to the other leaders' information to improve its quality through a self-learning process. To provide details of MLPSO, three aspects are discussed: (1) how particles adjust their search patterns based on the modified memory structure; (2) how the multi-leader mechanism enhances the exploration capability of particles; (3) the implementation of MLPSO.
A. Modified memory structure

As the evaluation of the error function is only possible when a complete n-dimensional vector is given, the improvement of two components may overrule a degradation in a single component, which may cause particles to get stuck in local optima. Although preceding research has proposed cooperative approaches to deal with this problem, the corresponding increase in the number of evaluations is undeniable. Therefore, we attempt to deal with this problem through a modified memory structure rather than by splitting the problem dimensions, as the wisdom of the masses may exceed that of the wisest individual. This can be explained as follows. Suppose particle i adjusts its search pattern based on the memories x_i^{lbest} and x^{gbest}, and the probability that particle i learns from a memory and suffers a step back in the dth dimension is p. Then the probability that all the individual and global memories consulted by particle i share a step back in the dth dimension is p^m p^n, where m (n) stands for the number of individual (global) memories that particle i consults. For instance, with p = 0.3, m = 4, and n = 20, this probability drops from p^2 = 0.09 for a single pair of memories to p^{24} ≈ 3×10^{-13}. Obviously, if particle i adjusts its search pattern based on more memories, we can alleviate the "Two Steps Forward, One Step Back" problem. Thus, the individual memory and the global memory in MLPSO comprise more historical positions than in the canonical PSO, and the velocity update equation is modified as:
v_{id}(t+1) = w v_{id}(t) + c_1 r_{1d} (\bar{x}_{id}^{lbest} - x_{id}(t)) + c_2 r_{2d} (\bar{x}_d^{gbest} - x_{id}(t))    (6)
where \bar{x}_{id}^{lbest} and \bar{x}_d^{gbest} are, respectively, the mean values of the individual memories and of the global memories in the dth dimension. Notably, we restrict the numbers of individual memories and global memories to no more than m and n, respectively. Meanwhile, the velocity in every dimension is restricted to no larger than v_limit.
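As an illustration of (6), the sketch below averages up to m individual memories and n global memories before applying the velocity update; how the memories are stored and truncated is our own assumption made for the example.

import numpy as np

def mlpso_velocity(x, v, ind_memories, glob_memories, w, c1, c2, v_limit):
    # ind_memories: array of shape (num_particles, m, dim) holding each particle's memorized positions
    # glob_memories: array of shape (n, dim) holding the global memories (leaders)
    x_lbest_bar = ind_memories.mean(axis=1)    # mean of the individual memories, per particle and dimension
    x_gbest_bar = glob_memories.mean(axis=0)   # mean of the global memories in every dimension
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (x_lbest_bar - x) + c2 * r2 * (x_gbest_bar - x)   # Eq. (6)
    return np.clip(v_new, -v_limit, v_limit)   # velocity in every dimension restricted to v_limit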
B. Multi-leader Mechanism

In the previous subsection, we modified the memory structure of PSO to help particles escape from local optima and alleviate the "Two Steps Forward, One Step Back" problem. However, the weak exploration capability of PSO remains unresolved. Chen et al. in [48] introduced the concept of "aging" to serve as a challenging mechanism for promoting a suitable leader to lead the swarm. Nebro et al. in [49] analyzed how different leader selection strategies may influence the performance of PSO on multi-objective problems. Apparently, the global memory (leader) is essential to the performance of PSO. Huan et al. in [50] introduced a socio-inspired metaheuristic technique referred to as the ideology algorithm (IA). The solution space in IA is equally separated, and party members are initially sampled from these subspaces respectively. Separating the population into different parties effectively enhances the exploration of IA in different solution areas. Moreover, the good exploration capability of the artificial bee colony comes from employed bees searching for higher-quality solutions between the food sources. Thus, reasonable methods to distribute the search over different areas are essential to the exploration ability of an algorithm. Meanwhile, the single global memory in canonical PSO greatly influences the search of particles in different areas. To extend the search space of particles and to fully exploit the information of the leaders, we employ the global memories (leaders) to guide particles to continuously search different areas of the space. Therefore, the multi-leader mechanism is proposed to combine the influence of different leaders and enhance the diversity of particles' search patterns.
Inspired by the search method of the artificial bee colony, the multi-leader mechanism regards each global memory (leader) as a food source, and the area between leaders attracts particles to perform further search. Although IA also introduces the definition of leaders, leaders in our algorithm are simply historical memories rather than specific individuals. Besides, IA produces new solutions by sampling the neighborhood of individuals in the population, while our algorithm produces new solutions through the movement of particles given in (6). Finally, similar individuals in IA are separated into groups, and a worse individual may change its affiliation to another party if the corresponding difference of solutions exceeds a threshold. In contrast, every particle in our algorithm is not bound to any group and prefers to select leaders whose solutions differ from its own memories, which is the opposite of IA. Apparently, the search method and the definition of leaders in IA and in our algorithm are different.
To reduce the similarity of particles' search patterns, every particle under the multi-leader mechanism dynamically selects some leaders during its search process. As game theory is capable of optimizing the task assignment of agents efficiently under dynamic situations, the multi-leader mechanism employs game theory, instead of random selection, to guide particles to make their own choices. To the best of our knowledge, no research has shed light on how game theory may help improve the performance of particles, and how to determine the strategic benefits remains unknown. Thus, to effectively extend the search area, the multi-leader mechanism rewards a particle more if it selects leaders with dissimilar information to its individual memory. Suppose the reward for particle t to select the ith leader for adjustment is P_{t,i}, which can be obtained as:

dis_i = \begin{cases} \sum_d |x_{id}^{gbest} - x_{td}^{lbest}| / \max_j \{\sum_d |x_{jd}^{gbest} - x_{td}^{lbest}|\}, & \max_j \{\sum_d |x_{jd}^{gbest} - x_{td}^{lbest}|\} \neq 0 \\ 0, & \max_j \{\sum_d |x_{jd}^{gbest} - x_{td}^{lbest}|\} = 0 \end{cases}    (7)

P_{t,i} = dis_i / \sum_j dis_j + Gaussian(u = 0, \sigma^2 = 1) \cdot \exp(-a \cdot gen / gen\_max)    (8)
where a > 0 is the noise attenuation coefficient and x_{id}^{gbest} marks the value of the ith leader in the dth dimension. A declining Gaussian noise is superimposed upon the calculated payoff to enhance the randomness of particles' initial exploration. In principle, agents in a realistic game process usually compete for limited resources and choose strategies for the final optimal payoff. However, to avoid extra computational cost, we allow leaders to be selected by particles without restriction. In every generation, each particle searches under the influence of LN leaders, and p percent of the particles may reselect their leaders while the others keep the strategy of the last generation unchanged. The strategies of particles gradually become stable as the number of generations (gen) increases:

p = (gen_max - gen) / gen_max    (9)
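A minimal sketch of how the reward-driven selection of (7)-(9) could be realized is given below; the way the LN leaders are finally drawn from the payoffs (taking the LN largest) and the handling of the all-zero case are our own assumptions for illustration.

import numpy as np

def select_leaders(x_lbest, leaders, gen, gen_max, LN, a=0.5):
    # x_lbest: individual memory of particle t, shape (dim,); leaders: global memories, shape (n, dim)
    raw = np.abs(leaders - x_lbest).sum(axis=1)                      # sum_d |x_id^gbest - x_td^lbest| per leader
    dis = raw / raw.max() if raw.max() > 0 else np.zeros_like(raw)   # Eq. (7)
    noise = np.random.normal(0.0, 1.0, size=dis.shape) * np.exp(-a * gen / gen_max)
    payoff = dis / max(dis.sum(), 1e-12) + noise                     # Eq. (8): normalized dissimilarity plus decaying noise
    return np.argsort(payoff)[-LN:]                                  # indices of the LN leaders with the largest payoff

def reselect_probability(gen, gen_max):
    return (gen_max - gen) / gen_max                                 # Eq. (9): fraction of particles that reselect leaders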
Therefore, the search pattern of particles in MLPSO can be illustrated as in Fig. 1.
As the global memories (leaders) carry information about the optimization problem to be solved, an effective self-learning process should be helpful for improving the quality of the leaders and accelerating the search. The flow of the self-learning operator can be summarized as follows:

Algorithm 1: Self-learning operator
Input: L: Leaders. gen: Current generation. gen_max: Maximum number of generations. h: Standard deviation of the Gaussian distribution.
Output: L': Updated leaders. h': Updated standard deviation of the Gaussian distribution.
Step 1: Select the best leader from L (denote the location of the selected leader as Xr);
Step 2: Iterate over every dimension of Xr and conduct the Gaussian mutation operation; if the evaluation of the mutation is better, replace the current solution with the mutation;
Step 3: Conduct the DE-based mutation operation upon the whole Xr; if the evaluation of the mutation is better, replace the current solution with the mutation;
Step 4: Conduct the opposition-based mutation upon the whole Xr; if the evaluation of the mutation is better, replace the current solution with the mutation;
Step 5: Update the standard deviation of the Gaussian distribution and output L', h'.
The Gaussian mutation operation can be formulated as:

X'_{ri} = (L_max(i) + L_min(i) - X_{ri}) \cdot (1 + Gaussian(0, h))    (10)

The DE-based mutation operator can be formulated as:

X'_r = X_r + F (X_s - X_r), if Fit_s > Fit_r
X'_r = X_r - F (X_s - X_r), if Fit_s ≤ Fit_r    (11)

The opposition-based mutation can be formulated as:

X'_{ri} = L_max(i) + L_min(i) - X_{ri}    (12)

The standard deviation of the Gaussian distribution is updated as:

h' = h - (h_init - h_min) / gen_max    (13)
where L_max(i) and L_min(i) represent the maximum and minimum values of the leaders in the ith dimension, X_s is another leader with s ≠ r, and F (the scale factor) is a control parameter. If the evaluation of leader i (Fit_i) is better than that of leader j (Fit_j), then Fit_i > Fit_j. Notably, if a mutation exceeds the domain, a new mutation is produced by the same procedure.
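The three mutations of Algorithm 1 and the reconstructed equations (10)-(12) can be sketched as follows; the fitness convention (larger Fit meaning better), the omission of bounds handling, and the choice of the partner leader X_s are assumptions made only for this illustration.

import numpy as np

def self_learn_best_leader(X_r, X_s, Fit, L_max, L_min, h, F=0.5):
    # X_r: location of the best leader; X_s: another leader with s != r (all numpy arrays)
    # Fit(x): evaluation of a candidate solution, larger values meaning better quality
    x = X_r.copy()
    for i in range(len(x)):                       # dimension-wise Gaussian mutation, Eq. (10)
        trial = x.copy()
        trial[i] = (L_max[i] + L_min[i] - x[i]) * (1.0 + np.random.normal(0.0, h))
        if Fit(trial) > Fit(x):
            x = trial
    step = F * (X_s - x)                          # DE-based mutation, Eq. (11)
    trial = x + step if Fit(X_s) > Fit(x) else x - step
    if Fit(trial) > Fit(x):
        x = trial
    trial = L_max + L_min - x                     # opposition-based mutation, Eq. (12)
    if Fit(trial) > Fit(x):
        x = trial
    return x                                      # h itself shrinks linearly afterwards, Eq. (13)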
C. Implementation of MLPSO

In the previous parts, we introduced the modified memory structure and the multi-leader mechanism of MLPSO. Next, we summarize the implementation of MLPSO in Algorithm 2 and complement some details.

Algorithm 2: MLPSO
Input: ps: Population size. gen_max: Maximum number of generations. hinit: Initial standard deviation of the Gaussian distribution. hmin: Final standard deviation of the Gaussian distribution. ux: Upper bound of the solution. lx: Lower bound of the solution. vinit: Initial speed limit. vmin: Minimum speed limit. dim: Dimension of the problem solution. winit: Initial inertia weight. wmin: Minimum inertia weight. c1: Cognitive acceleration coefficient. c2: Social acceleration coefficient. F: Scale factor. a: Noise attenuation coefficient. LN: Number of leaders selected by a particle. n: Quantitative restriction on the global memories. m: Quantitative restriction on a particle's individual memories.
Output: L*: Best memorized solution.
Step 1: Randomly initialize the positions of particles;
Step 2: Update the basic parameters of the current generation;
Step 3: Evaluate the current positions of particles, and then update the particles' individual memories and the global memories (leaders);
Step 4: Particles make their selection of leaders (each selects LN from the n leaders) and adjust their velocities; the velocity in each dimension cannot exceed the current speed limit;
Step 5: Particles move along their velocity directions;
Step 6: The best leader updates its information according to the self-learning operator;
Step 7: If the number of generations or evaluations satisfies the threshold, output the current best leader; otherwise, go to Step 2.
Initialization: Basic parameters of MLPSO are set. The locations and velocities of particles are initialized according to:

x_{ri} = X_min(i) + rnd \cdot (X_max(i) - X_min(i))    (14)

v_{ri} = -v_init(i) + rnd \cdot 2 v_init(i)    (15)
where rnd is a random value in [0, 1] and X_min(i) (X_max(i)) stands for the lower bound (upper bound) of the decision vectors in the ith dimension.

Update of parameters: The inertia weight and the speed limit are updated according to:

w = w_init - (w_init - w_min) \cdot (gen / gen_max)    (16)

v_limit = v_init - (v_init - v_min) \cdot (gen / gen_max)    (17)
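The initialization rules (14)-(15) and the linear schedules (16)-(17) amount to a few lines; the snippet below is a sketch under our own naming, not the authors' reference code.

import numpy as np

def init_swarm(ps, dim, x_min, x_max, v_init):
    x = x_min + np.random.rand(ps, dim) * (x_max - x_min)     # Eq. (14): positions inside the decision bounds
    v = -v_init + np.random.rand(ps, dim) * 2.0 * v_init      # Eq. (15): velocities inside [-v_init, v_init]
    return x, v

def update_parameters(gen, gen_max, w_init, w_min, v_init, v_min):
    w = w_init - (w_init - w_min) * gen / gen_max             # Eq. (16): linearly decreasing inertia weight
    v_limit = v_init - (v_init - v_min) * gen / gen_max       # Eq. (17): linearly shrinking speed limit
    return w, v_limit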
From the above, we can see that MLPSO inherits the basic mechanism of canonical PSO. However, the modified memory provides more valuable information for the adjustment of particles' search patterns to escape from local optima. Besides, the multi-leader mechanism guides particles to search across different regions of the solution space, thereby enhancing the exploration capability of MLPSO. Meanwhile, the self-learning of leaders within the multi-leader mechanism deeply exploits the information of the leaders and promotes their quality.
IV. Experimental Results

In this section, the performance of MLPSO is evaluated, and the corresponding experiments can be divided into two parts. In the first part, following the preceding research, we test MLPSO against the benchmark functions of CEC 2013 [31] and compare the performance of MLPSO with that of CPSO [9], CLPSO [28], SLPSO [29] and fk-PSO [30]. This experiment is designed to validate the modifications of MLPSO within the PSO methodology. In the second part, MLPSO is applied to a practical case: the reconstruction of gene regulatory networks (RGRN) based on fuzzy cognitive maps [32]. To verify that MLPSO can keep pace with the development in this field, a comparison of optimization efficiency is made between MLPSO and several currently popular algorithms on this subject, including a multi-agent genetic algorithm (MAGA) based method (dMAGAFCM-GRN) [33] and five representative existing methods: the real-coded genetic algorithm (RCGA) [34], divide-and-conquer RCGA (D&C RCGA) [35], Big Bang-Big Crunch (BB-BC) [36], differential evolution (DE) [37], and ACOR with a decomposed approach (ACORD) [38]. This experiment is designed to validate that MLPSO is applicable to realistic optimization problems.
A. Experimental studies of MLPSO on CEC 2013

In this part, we employ the CEC 2013 benchmark to test the performance of MLPSO. For a fair comparison with the existing PSO variants, we use the simulation results of CPSO, CLPSO, SLPSO and fk-PSO provided in [30, 51], where details of the corresponding parameter settings can be found. The parameters of MLPSO employed in our experiments are provided in Table I. The maximum number of function evaluations for all these algorithms is set to 10^4·D, with D = 30. The simulation results of the different PSO variants on CEC 2013 are obtained by averaging over 30 independent runs and are provided in Table II. Table II reports the basic summary statistics for the mean distance between the optimum value found by each algorithm and the true optimum value. As can be seen, MLPSO performs no worse than the others on 18 of the 28 functions, and it outperforms the others on 16 of these 18 functions. To reasonably analyze the superiority of MLPSO over the compared algorithms, a statistical comparison using the Wilcoxon signed-rank test is conducted, and the corresponding results are provided in Table III. As the p-values in Table III are very small, we can conclude that MLPSO significantly outperforms all the other compared PSO algorithms.
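For reference, a comparison of this kind can be reproduced with a standard Wilcoxon signed-rank test over the paired per-function results; the snippet below only sketches the procedure, and the short arrays are placeholders rather than the actual values of Table II.

from scipy.stats import wilcoxon

# mean errors of two algorithms on the same set of benchmark functions (placeholder values)
mlpso_errors = [0.00, 4.73e5, 3.10e6, 1.20e3, 0.00]
cpso_errors = [2.34e2, 3.12e7, 3.98e9, 5.01e4, 4.58e1]
stat, p_value = wilcoxon(mlpso_errors, cpso_errors)
print(p_value)   # a small p-value indicates a significant difference between the paired samples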
B. Experimental studies of MLPSO on the reconstruction of gene regulatory networks based on fuzzy cognitive maps

In this section, MLPSO is applied to a practical case: the reconstruction of gene regulatory networks based on fuzzy cognitive maps. This is an important task that may provide helpful information for disease treatment design and drug creation, or help in understanding the activity of living creatures at the molecular level. In general, living creatures can be regarded as networks composed of molecules connected by chemical reactions that are subject to the interaction of several genes from the gene networks. Researchers in this field introduced gene regulatory networks (GRNs) to illustrate the interrelations among genes, while their reconstruction from time-series gene expression data remains an open question. To solve this problem, some researchers model GRNs by fuzzy cognitive maps (FCMs), which illustrate fuzzy causal relations among different concepts through directed weighted graphs [39-43]. Each FCM (C) is composed of N concept nodes:

C = [c_1, c_2, ..., c_N]    (18)
where c_i ∈ [0, 1], i = 1, 2, ..., N. When employing FCMs to represent GRNs, c_i denotes the current activation degree of the ith gene. The relationships between genes can be further defined by an N×N weight matrix W:

W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,N} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ w_{N,1} & w_{N,2} & \cdots & w_{N,N} \end{bmatrix}    (19)
where w_{i,j} ∈ [-1, 1] represents the relationship between the ith and jth genes. Based on C_i^t and W, we can obtain the activation degree of the ith gene at the (t+1)th iteration, C_i^{t+1}, through the following equations:

C_i^{t+1} = g( \sum_{j=1}^{N} w_{ji} C_j^t )    (20)

g(x) = 1 / (1 + e^{-λx})    (21)
where λ is a parameter used to control the steepness of the function, and its widely employed value is 5. With these equations, the reconstruction of GRNs can be formulated as an optimization problem aiming to find a proper W (a reconstructed GRN) that produces sequences similar to the available time-series gene expression data. Suppose there are N_s response sequences for each gene and the length of each sequence is N_t. The objective function of this optimization problem can be defined as:

Minimize Data_Error = \frac{1}{N (N_t - 1) N_s} \sum_{s=1}^{N_s} \sum_{t=1}^{N_t - 1} \sum_{n=1}^{N} ( C_n^t(s) - \hat{C}_n^t(s) )^2    (22)

where C_n^t(s) and \hat{C}_n^t(s) are the observed and simulated activation degrees of the nth gene in the sth sequence at the tth iteration. We employ MLPSO to solve this optimization problem and provide the simulation results of other currently popular methods on this subject as a comparison (Table IV). For a fair comparison, we also provide the evaluation number of each algorithm in Table V, where the evaluation number of MLPSO is the smallest for the same problem scale. Therefore, if the performance of MLPSO is much better than that of the other algorithms, we can safely conclude that MLPSO outperforms the others. Notably, to save computational cost, the self-learning process of MLPSO's leaders has been adjusted so that the probability for each dimension to conduct a Gaussian mutation is set to 0.2. The simulation results in Table IV are obtained by averaging over 30 independent runs, except for the case with 200 nodes; out of consideration for the simulation time, 10 independent runs have been conducted for FCMs with 200 nodes.
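To make the objective concrete, the following sketch simulates an FCM according to (20)-(21) and scores a candidate weight matrix W with the Data_Error of (22); the array shapes and the choice to simulate each sequence forward from its observed initial state are our own assumptions for illustration.

import numpy as np

def fcm_step(c, W, lam=5.0):
    # c: activation degrees at step t, shape (N,); W[j, i] = w_ji, influence of gene j on gene i
    return 1.0 / (1.0 + np.exp(-lam * (c @ W)))               # Eqs. (20)-(21)

def data_error(W, sequences):
    # sequences: list of Ns observed response sequences, each an array of shape (Nt, N)
    total, count = 0.0, 0
    for seq in sequences:
        c_hat = seq[0]
        for t in range(1, len(seq)):                          # simulate forward from the observed initial state
            c_hat = fcm_step(c_hat, W)
            total += np.sum((seq[t] - c_hat) ** 2)
            count += seq.shape[1]                             # accumulates N * (Nt - 1) * Ns terms overall
    return total / count                                      # Eq. (22)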
The data error defined in (22) illustrates the difference between the observed gene expression data and the time sequences produced by the GRN learned by the optimization algorithms. As can be seen from the simulation results in Table IV, MLPSO successfully reconstructs GRNs that produce time sequences very similar to the gene expression data. MLPSO and dMAGAFCM outperform ACORD, RCGA, D&C RCGA, BB-BC and DE in almost all cases. For the cases with 20 and 40 genes, dMAGAFCM and ACORD may provide better optimization results than MLPSO. However, for the cases with 100 and 200 genes, MLPSO always outperforms all the others.
Now, the Wilcoxon signed-rank test is employed to statistically validate the outperformance of MLPSO over the other algorithms. The corresponding p-values are provided in Table VI. As can be seen, the p-values are very small, and the corresponding evaluation numbers of MLPSO are smaller than those of the other algorithms. Therefore, we can conclude that MLPSO outperforms these currently popular optimization algorithms on this subject. Hence, MLPSO is applicable to realistic optimization problems, and its performance keeps pace with the development of the reconstruction of gene regulatory networks based on fuzzy cognitive maps.
V. Conclusions

In this paper, a new PSO variant named multi-leader PSO (MLPSO) is proposed to alleviate the premature convergence and weak exploration capability of canonical PSO. To help particles overcome "Two Steps Forward, One Step Back" during the search, a modified memory structure is designed to provide more valuable information for the adjustment of particles' search patterns. To enhance the exploration capability of particles, a multi-leader mechanism based on the modified memory structure is introduced to take advantage of the global memories (leaders), thereby effectively enhancing the diversity of particles' search patterns. Under the multi-leader mechanism, particles choose their leaders based on game theory instead of random selection. Besides, in every generation the best leader refers to the other leaders' information to improve its quality through a self-learning process. To validate the modifications of MLPSO within the PSO methodology, we test MLPSO against the benchmark functions of CEC 2013 and compare it with other PSO variants in terms of the optimization results. Moreover, to validate the practicability of MLPSO, we further apply MLPSO to a practical case: the reconstruction of gene regulatory networks based on fuzzy cognitive maps. The experimental results confirm that MLPSO enhances the efficiency of the canonical PSO and performs well on realistic optimization problems.
Acknowledgements This work is partially supported by the Outstanding Young Scholar Program of National Natural Science Foundation of China (NSFC) under Grant 61522311 and the Overseas, Hong Kong & Macao Scholars Collaborated Research Program of NSFC under Grant 61528205, and in part by the Key Program of Fundamental Research Project of Natural Science of Shaanxi Province, China under Grant 2017JZ017.
References [1] Eberhart R. and Kennedy J., A new optimizer using particle swarm theory, Micro Machine and Human Science, pp.39-43, 1995. [2] Kennedy J. and Eberhart R., "Particle swarm optimization", In IEEE Int. Conf. on Neural Networks, vol.4, pp.1942-1948, 1995. [3] Wang S., Zhang Y., Dong Z., Du S., Ji G., Yan J., Yang J., Wang Q., Feng C. and Phillips P., "Feed-forward neural network optimized by hybridization of PSO and ABC for abnormal brain detection," International Journal of Imaging Systems & Technology, vol.25, no.2, pp.153-164, 2015. [4] Mohamad E.T., Armaghani D.J. and Momeni E., "Prediction of the unconfined compressive strength of soft rocks: a PSO-based ANN approach," Bulletin of Engineering Geology and the Environment, vol.74, no.3, pp.745-757, 2015. [5] Renaudineau H., Donatantonio F., Fontchastagner J., Petrone G., Spagnuolo, G., Martin, J. and Pierfederici, S., "A PSO-Based Global MPPT Technique for Distributed PV Power Generation," IEEE Transactions on Industrial Electronics, vol.62, no.2, pp.1047-1058, 2015. [6] Karami A. and Guerrero-Zapata M., "A fuzzy anomaly detection system based on hybrid PSO-Kmeans algorithm in content-centric networks," Neurocomputing, vol.149, pp.1253-1269, 2015. [7] Calvini M., Carpita M., Formentini A. and Marchesoni M., "PSO-Based Self-Commissioning of Electrical Motor Drives," IEEE Transactions on Industrial Electronics, vol.62, no.2, pp.768-776, 2015. [8] Fong S., Wong R. and Vasilakos A.V., "Accelerated PSO Swarm Search Feature Selection for Data Stream Mining Big Data," IEEE Transactions on Services Computing, vol.9, no.1, pp.33-45, 2016 [9] Shi Y. and Eberhart R., "A modified particle swarm optimizer", IEEE International Conference on Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence, pp.69-73, 1998. [10] Shi Y. and Eberhart R.C., "Particle swarm optimization with fuzzy adaptive inertia weight," Proceedings of the workshop on particle swarm optimization, vol.1, pp.101-106, 2001. [11] Kennedy J., "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance", In Proceedings of Congress on Evolutionary Computation, vol.3, pp.1931-1938, 1999. [12] Kennedy J. and Mendes R., "Population structure and particle swarm performance", In Proceedings of Congress on Evolutionary Computation, vol.2, pp.1671-1676, 2002. [13] Suganthan P.N., "Particle swarm optimizer with neighborhood operator", In Proceedings of Congress on Evolutionary Computation, vol.3, pp.1958-1962, 1999. [14] Hu X. and Eberhart R., "Multiobjective optimization using dynamic neighborhood particle swarm optimization", In Proceedings of Congress on Evolutionary Computation, vol.2, pp.1677-1681, 2002. [15] Angeline P.J., "Using selection to improve particle swarm optimization", In Proceedings of Congress on Evolutionary Computation, pp.84-89, 1998. [16] Lvbjerg M., Rasmussen T.K. and Krink T., "Hybrid Particle Swarm Optimizer with Breeding and Subpopulations", In Proceedings of Congress on Genetic Evolutionary Computation, pp.469-476, 2001. [17] Jordehi A.R., "Enhanced leader PSO (ELPSO): A new PSO variant for solving global optimization problems," Applied Soft Computing, vol.26, no.26, pp.401-417, 2015.
[18] Ran M.S. and Mesut, Z., "A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems," Applied Soft Computing, vol.13, no.4, pp.2188-2203, 2013. [19] Gálvez A. and Iglesias A., "A new iterative mutually coupled hybrid GA–PSO approach for curve fitting in manufacturing," Applied Soft Computing, vol.13, no.3, pp.1491-1504, 2013. [20] Van den Bergh F. and Engelbrecht A.P., "A Cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol.8, no.3, pp.225-239, 2004. [21] Li X. and Yao X., "Cooperatively Coevolving Particle Swarms for Large Scale Optimization," vol.16, no.2, pp.210-224, 2012. [22] Cheng R. and Jin Y., "A competitive swarm optimizer for large scale optimization," IEEE Transactions on Cybernetics, vol.45, no.2, pp.191-204, 2015. [23] Liang J.J., Qin A.K., Suganthan P.N. and Baskar, S., "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol.10, no.3, pp.281-295, 2006. [24] Wahab M.N.A., Nefti-Meziani S. and Atyabi A., "A Comprehensive Review of Swarm Optimization Algorithms," Plos One, vol.10, no.5, e0122827, 2015. [25] Banks A., Vincent J. and Anyakoha C., "A review of particle swarm optimization. Part I: background and development," Natural Computing, vol.6, no.4, pp.467-484, 2007. [26] Banks A., Vincent J. and Anyakoha C., "A review of particle swarm optimization. Part II: hybridization, combinatorial, multi-criteria and constrained optimization, and indicative applications," Natural Computing, vol.7, no.1, pp.109-124, 2008. [27] Karaboga D., "An idea based on honey bee swarm for numerical optimization", Technical report-tr06, 2005. [28] Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S., "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE transactions on evolutionary computation, vol.10, no.3, pp.281-295, 2006. [29] Cheng R. and Jin Y., "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol.291, pp.43-60, 2015. [30] Nepomuceno F.V. and Engelbrecht A.P., "A self-adaptive heterogeneous pso for real-parameter optimization", Evolutionary Computation (CEC), 2013 IEEE Congress on, pp.361-368, 2013. [31] Liang J. J., Qu B.Y., Suganthan P.N. and Hernández-Díaz A.G., "Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization," Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, Technical Report, vol.201212, pp.3-18, 2013. [32] Chi, Y. and Liu, J., "Reconstructing gene regulatory networks with a memetic-neural hybrid based on fuzzy cognitive maps," Natural Computing, pp.1-12, 2016. [33] Zhong W., Liu, J., Xue M. and Jiao L. "A multiagent genetic algorithm for global numerical optimization," IEEE Transactions on Systems Man & Cybernetics Part B Cybernetics A Publication of the IEEE Systems Man & Cybernetics Society, vol.34, no.2, pp.1128-1141, 2004. [34] Stach W., Kurgan L., Pedrycz W., and Reformat M., “Genetic learning of fuzzy cognitive maps,” Fuzzy Sets Syst., vol.153, no.3, pp.371–401, 2005. [35] Stach W., Kurgan L., and Pedrycz W., “A divide and conquer method for learning large fuzzy cognitive maps,” Fuzzy Sets Syst., vol.161, no.19, pp.2515–2532, 2010.
[36] Yesil E. and Dodurka M.F., “Goal-oriented decision support using big bang-big crunch learning based fuzzy cognitive map: An ERP management case study,” in Proc. IEEE Int. Conf. Fuzzy Syst., pp.1–8, 2013. [37] Papageorgiou E. and Groumpos P., “Optimization of fuzzy cognitive map model in clinical radiotherapy through the differential evolution algorithm,” Biomed. Soft Comput. Human Sci., vol.9, no.2, pp.25–31, 2003. [38] Chen Y., Mazlack L.J., and Lu L.J., “Inferring fuzzy cognitive map models for gene regulatory networks from gene expression data,” in Proc. IEEE Int. Conf. Bioinformat. Biomed., Oct., pp. 1–4, 2012. [39] Kosko B., “Fuzzy cognitive maps,” Int. J. Man-Mach. Stud., vol. 24, pp. 65–75, 1986. [40] Zhou S.,. Liu Z.Q., and Zhang J.Y., “Fuzzy causal networks: General model, inference, and convergence,” IEEE Trans. Fuzzy Syst., vol. 14, no. 3, pp.412–420, 2006. [41] Miao Y., Liu Z., Siew C., and Miao C., “Dynamical cognitive network—An extension of fuzzy cognitive map,” IEEE Trans. Fuzzy Syst., vol. 9, no. 5, pp.760–770, 2001. [42] Song H., Miao C., Roel W., Shen Z., and Catthoor F., “Implementation of fuzzy cognitive maps based on fuzzy neural network and application in prediction of time series,” IEEE Trans. Fuzzy Syst., vol. 18, no. 2, pp.233–250, 2010. [43] Stach W., Kurgan L.A., and Pedrycz W., “Numerical and linguistic prediction of time series with the use of fuzzy cognitive maps,” IEEE Trans. Fuzzy Syst., vol. 16, no.1, pp.61–72, 2008. [44] Kwolek, B., Krzeszowski, T., Gagalowicz, A., Wojciechowski, K. and Josinski, H. "Real-Time Multi-view Human Motion Tracking Using Particle Swarm Optimization with Resampling". Articulated motion and deformable objects, vol.7378, pp.92-101, 2012. [45] Alfi A. and Fateh M. "Intelligent identification and control using improved fuzzy particle swarm optimization". Expert Systems with Applications, vol.38, no.10, pp.12312-12317, 2011. [46] Krzeszowski T. and Wiktorowicz K., "Evaluation of selected fuzzy particle swarm optimization algorithms", In: Computer Science and Information Systems (FedCSIS), 2016 Federated Conference on IEEE, vol.8, pp.571-575, 2016. [47] Saxena N., Tripathi A., Mishra K.K. and Misra A.K., "Dynamic-PSO: An improved particle swarm optimizer," In: Evolutionary Computation (CEC), 2015 IEEE Congress on IEEE, pp.212-219, 2015. [48] Chen W., Zhang J., Lin Y., Chen N., Zhan Z., Chung H.S., Li Y. and Shi Y., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol.17, no.2, pp.241-258, 2013. [49] Nebro A.J., Durillo J.J., Coello C.A.C., "Analysis of leader selection strategies in a multi-objective particle swarm optimizer", In: Evolutionary Computation (CEC), 2013 IEEE Congress on IEEE, pp.3153-3160, 2013. [50] Huan T.T., Kulkarni A.J., Kanesan J. Huang C.J. and Abraham A., "Ideology Algorithm: A Socio-inspired Optimization Methodology," Neural Computing and Applications, pp.1-32, 2016. [51] Xu C., Huang H. and Lv L., "An Adaptive Convergence Speed Controller Framework for Particle Swarm Optimization Variantsin Single Objective Optimization Problems", In: Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on IEEE, pp.2684-2689, 2015.
Fig. 1. The influence of leaders upon the search pattern of particles. Hollow circles mark the positions of global memories (leaders) and solid circles mark the positions of particles. Solid lines mark the influence directions of leaders upon the search pattern of particles. (a) The influence direction of one leader. (b) The equilibrium influence direction of two leaders. (c) The influence directions when particles select different sets of leaders. (d) The influence direction when a particle changes its selection of leaders. Apparently, when particles search only around the neighborhood of one leader, as in (a), they are likely to search redundantly. When particles consult more global memories (leaders) to adjust their search pattern, as in (b), and select their leaders differently, as in (c), they are more likely to search the space between different leaders and different regions of the solution space. When the individual memories of particles are similar to the currently selected leaders, particles adjust their selection based on game theory and change strategy, as in (d). Therefore, leaders under the multi-leader mechanism can effectively enhance the diversity of particles' search patterns.
TABLE I
PARAMETERS IN MLPSO.
c1 = 0.5   c2 = 2.5   winit = 1   wmin = 0.1   ps = 200   vinit = 20   vmin = 0.2   gen_max = 1300
F = 0.5    a = 0.5    hinit = 0.5   hmin = 0.01   LN = 4   n = 20   m = 4
TABLE II
THE RESULTS OF MLPSO, CPSO, CLPSO, SLPSO AND FK-PSO ON THE CEC 2013 BENCHMARK FUNCTIONS, REPORTED AS "MEAN (STANDARD DEVIATION)".

Algorithm  F1                   F2                   F3                   F4                   F5                   F6                   F7
MLPSO      0.00E+00 (0.00E+00)  4.73E+05 (1.79E+05)  3.10E+06 (6.21E+06)  1.20E+03 (5.44E+02)  0.00E+00 (0.00E+00)  1.80E+01 (1.46E+01)  3.00E+00 (4.28E+00)
CPSO       2.34E+02 (1.36E+03)  3.12E+07 (1.26E+07)  3.98E+09 (3.74E+09)  5.01E+04 (1.44E+04)  4.58E+01 (2.75E+01)  9.56E+01 (5.32E+01)  1.34E+02 (3.94E+01)
CLPSO      0.00E+00 (0.00E+00)  2.59E+07 (5.85E+06)  1.37E+09 (4.77E+08)  4.82E+04 (7.46E+03)  7.70E-07 (1.84E-07)  4.64E+01 (7.00E+00)  8.36E+01 (8.89E+00)
SLPSO      0.00E+00 (0.00E+00)  5.49E+05 (2.49E+05)  2.13E+07 (1.76E+07)  6.74E+03 (2.67E+03)  0.00E+00 (0.00E+00)  1.66E+01 (5.30E+00)  4.80E+00 (3.43E+00)
fk-PSO     0.00E+00 (0.00E+00)  1.59E+06 (8.03E+05)  2.40E+08 (3.71E+08)  4.78E+02 (1.96E+02)  0.00E+00 (0.00E+00)  2.99E+01 (1.76E+01)  6.39E+01 (3.09E+01)

Algorithm  F8                   F9                   F10                  F11                  F12                  F13                  F14
MLPSO      2.10E+01 (4.38E-02)  9.61E+00 (2.41E+00)  5.64E-02 (2.92E-02)  3.82E-01 (6.24E-01)  3.25E+01 (2.95E+01)  6.96E+01 (3.44E+01)  4.74E+02 (1.93E+02)
CPSO       2.09E+01 (5.68E-02)  3.08E+01 (4.08E+00)  4.93E+01 (2.25E+01)  2.27E+02 (6.72E+01)  2.33E+02 (7.99E+01)  2.90E+02 (7.20E+01)  3.18E+03 (6.51E+02)
CLPSO      2.10E+01 (5.61E-02)  2.89E+01 (1.58E+00)  1.09E+01 (2.87E+00)  6.83E-04 (3.14E-04)  1.53E+02 (1.95E+01)  1.78E+02 (1.51E+01)  1.06E+02 (2.14E+01)
SLPSO      2.09E+01 (5.76E-02)  1.03E+01 (2.48E+00)  2.72E-01 (1.21E-01)  1.53E+01 (4.35E+00)  1.62E+02 (1.02E+01)  1.60E+02 (8.58E+00)  7.09E+02 (3.60E+02)
fk-PSO     2.09E+01 (6.28E-02)  1.85E+01 (2.69E+00)  2.29E-01 (1.32E-01)  2.36E+01 (8.76E+00)  5.64E+01 (1.51E+01)  1.23E+02 (2.19E+01)  7.04E+02 (2.38E+02)

Algorithm  F15                  F16                  F17                  F18                  F19                  F20                  F21
MLPSO      2.49E+03 (9.40E+02)  1.83E+00 (4.09E-01)  3.08E+01 (1.89E-01)  1.28E+02 (3.26E+01)  1.27E+00 (2.37E-01)  1.12E+01 (1.75E+00)  3.22E+02 (6.50E+01)
CPSO       4.57E+03 (7.33E+02)  1.10E+00 (3.72E-01)  2.84E+02 (7.44E+01)  3.07E+02 (7.18E+01)  2.59E+01 (1.25E+01)  1.40E+01 (6.96E-01)  3.64E+02 (8.90E+01)
CLPSO      5.54E+03 (2.70E+02)  2.35E+00 (3.06E-01)  3.81E+01 (1.09E+00)  2.22E+02 (1.21E+01)  1.99E+00 (3.79E-01)  1.41E+01 (4.46E-01)  3.02E+02 (1.20E+01)
SLPSO      4.53E+03 (2.50E+03)  2.46E+00 (2.50E-01)  1.63E+02 (1.35E+01)  1.94E+02 (8.34E+00)  3.49E+00 (6.74E-01)  1.36E+01 (1.35E+00)  2.97E+02 (7.67E+01)
fk-PSO     3.42E+03 (5.16E+02)  8.48E-01 (2.20E-01)  5.26E+01 (7.11E+00)  6.81E+01 (9.68E+00)  3.12E+00 (9.83E-01)  1.20E+01 (9.26E-01)  3.11E+02 (7.92E+01)

Algorithm  F22                  F23                  F24                  F25                  F26                  F27                  F28
MLPSO      9.40E+02 (8.33E+02)  2.78E+03 (7.10E+02)  2.00E+02 (1.04E+01)  2.30E+02 (2.75E+01)  2.40E+02 (4.86E+01)  3.60E+02 (5.81E+01)  2.90E+02 (3.92E+01)
CPSO       3.86E+03 (9.24E+02)  5.62E+03 (9.78E+02)  2.97E+02 (1.22E+01)  3.12E+02 (1.29E+01)  3.00E+02 (8.94E+01)  1.12E+03 (1.30E+02)  9.02E+02 (1.24E+03)
CLPSO      2.80E+02 (5.09E+01)  6.26E+03 (4.01E+02)  2.76E+02 (4.69E+00)  2.99E+02 (4.16E+00)  2.02E+02 (5.54E-01)  6.99E+02 (3.04E+02)  3.02E+02 (5.46E-01)
SLPSO      5.83E+02 (2.14E+02)  3.66E+03 (2.58E+03)  2.23E+02 (9.98E+00)  2.54E+02 (6.10E+00)  2.51E+02 (5.75E+01)  4.59E+02 (1.19E+02)  3.00E+02 (0.00E+00)
fk-PSO     8.59E+02 (3.10E+02)  3.57E+03 (5.90E+02)  2.48E+02 (8.11E+00)  2.49E+02 (7.82E+00)  2.95E+02 (7.06E+01)  7.76E+02 (7.11E+01)  4.01E+02 (3.48E+02)
TABLE III
WILCOXON SIGNED-RANK TEST BETWEEN MLPSO AND EXISTING ALGORITHMS.
MLPSO vs    CPSO         CLPSO    SLPSO         fk-PSO
p_value     5.2564e-06   5e-03    6.3542e-04    6.30E-03
Table IV
The comparison of Data_Error on synthetic data (Average±Standard Deviation).

#Nodes  Density  Ns-Nt  MLPSO        dMAGAFCM     ACORD        D&C RCGA     BB-BC        DE           RCGA
20      20%      1-20   0.001±0.002  0.014±0.004  0.004±0.003  0.143±0.014  0.055±0.014  0.068±0.011  0.086±0.054
20      20%      5-4    0.001±0.001  0.010±0.005  0.008±0.004  0.186±0.027  0.096±0.038  0.198±0.041  0.197±0.120
20      20%      40-10  0.009±0.006  0.006±0.007  0.000±0.000  0.251±0.050  0.082±0.005  0.083±0.026  0.147±0.110
20      40%      1-20   0.002±0.003  0.002±0.002  0.003±0.002  0.047±0.021  0.085±0.009  0.117±0.012  0.098±0.052
20      40%      5-4    0.001±0.002  0.025±0.005  0.009±0.005  0.239±0.102  0.098±0.085  0.214±0.172  0.139±0.010
20      40%      40-10  0.012±0.006  0.012±0.006  0.000±0.000  0.104±0.006  0.105±0.089  0.184±0.021  0.100±0.043
40      20%      1-20   0.005±0.006  0.015±0.010  0.033±0.002  0.070±0.002  0.048±0.054  0.157±0.032  0.102±0.054
40      20%      5-4    0.002±0.002  0.056±0.009  0.144±0.011  0.290±0.140  0.153±0.036  0.249±0.147  0.207±0.120
40      20%      40-10  0.026±0.011  0.096±0.018  0.107±0.009  0.141±0.018  0.139±0.025  0.193±0.054  0.181±0.058
40      40%      1-20   0.003±0.004  0.132±0.012  0.023±0.008  0.109±0.047  0.031±0.010  0.176±0.064  0.150±0.148
40      40%      5-4    0.005±0.011  0.109±0.015  0.073±0.005  0.274±0.102  0.109±0.062  0.293±0.165  0.168±0.130
40      40%      40-10  0.030±0.014  0.025±0.009  0.040±0.006  0.149±0.024  0.152±0.024  0.203±0.018  0.113±0.089
100     20%      1-20   0.010±0.015  0.105±0.012  0.283±0.008  0.101±0.015  0.132±0.015  0.230±0.167  0.180±0.051
100     20%      5-4    0.008±0.004  0.130±0.013  0.170±0.012  0.255±0.120  0.253±0.126  0.346±0.124  0.196±0.125
100     20%      40-10  0.078±0.015  0.119±0.011  0.300±0.021  0.205±0.182  0.178±0.058  0.215±0.102  0.165±0.136
100     40%      1-20   0.011±0.020  0.087±0.018  0.105±0.032  0.158±0.180  0.195±0.039  0.218±0.163  0.175±0.120
100     40%      5-4    0.009±0.017  0.193±0.015  0.057±0.028  0.211±0.176  0.214±0.158  0.305±0.216  0.247±0.203
100     40%      40-10  0.089±0.027  0.153±0.010  0.170±0.011  0.188±0.024  0.409±0.325  0.244±0.175  0.154±0.120
200     20%      1-20   0.060±0.025  0.178±0.020  0.255±0.014  0.209±0.114  0.225±0.116  0.349±0.185  0.229±0.182
200     20%      5-4    0.024±0.007  0.145±0.016  0.175±0.018  0.295±0.210  0.403±0.205  0.382±0.146  0.261±0.157
200     20%      40-10  0.163±0.025  0.188±0.015  1.764±0.241  0.226±0.158  0.401±0.190  0.265±0.172  0.251±0.210
200     40%      1-20   0.057±0.023  0.184±0.013  0.114±0.014  0.252±0.098  0.410±0.298  0.341±0.120  0.223±0.158
200     40%      5-4    0.019±0.005  0.385±0.118  0.047±0.011  0.327±0.291  0.302±0.168  0.391±0.215  0.284±0.182
200     40%      40-10  0.140±0.026  0.181±0.010  0.933±0.017  0.203±0.154  0.432±0.258  0.327±0.184  0.295±0.202
TABLE V
NUMBER OF EVALUATIONS OF ALGORITHMS SUBJECT TO DIFFERENT PROBLEM DIMENSIONS.
#Genes          20          40          100         200
MLPSO           116,193     236,190     600,000     800,000
dMAGAFCM-GRN    337,500     450,000     600,000     800,000
ACORD           350,000     650,000     655,000     865,000
D&C RCGA        1,000,000   1,000,000   1,000,000   1,000,000
BB-BC           400,000     600,000     800,000     1,000,000
DE              450,000     600,000     750,000     860,000
RCGA            1,000,000   1,000,000   1,000,000   1,000,000
TABLE VI
WILCOXON SIGNED-RANK TEST BETWEEN MLPSO AND OTHER CURRENTLY POPULAR ALGORITHMS ON THIS SUBJECT.
MLPSO vs    dMAGAFCM-GRN    ACORD        D&C RCGA     BB-BC        DE           RCGA
p_value     6.078E-05       8.038E-05    1.820E-05    1.820E-05    1.820E-05    1.816E-05