Teaching and peer-learning particle swarm optimization


Applied Soft Computing 18 (2014) 39–58


Wei Hong Lim, Nor Ashidi Mat Isa
Imaging and Intelligent System Research Team (ISRT), School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia

Article history: Received 13 March 2013; Received in revised form 30 September 2013; Accepted 15 January 2014; Available online 24 January 2014.

Keywords: Teaching and peer-learning particle swarm optimization (TPLPSO); Teaching–learning-based optimization (TLBO); Metaheuristic search; Global optimization

Abstract

Most recently proposed particle swarm optimization (PSO) algorithms do not offer alternative learning strategies when the particles fail to improve their fitness during the search process. Motivated by this fact, we improve the cutting-edge teaching–learning-based optimization (TLBO) algorithm and adapt the enhanced framework into PSO, thereby developing a teaching and peer-learning PSO (TPLPSO) algorithm. To be specific, the TPLPSO adopts two learning phases, namely the teaching and peer-learning phases. A particle first enters the teaching phase and updates its velocity based on its historical best and the global best information. A particle that fails to improve its fitness in the teaching phase then enters the peer-learning phase, where an exemplar is selected as the guidance particle. Additionally, a stagnation prevention strategy (SPS) is employed to alleviate the premature convergence issue. The proposed TPLPSO is extensively evaluated on 20 benchmark problems with different features, as well as on one real-world problem. Experimental results reveal that the TPLPSO exhibits competitive performance when compared with ten other PSO variants and seven state-of-the-art metaheuristic search algorithms.

1. Introduction

The particle swarm optimization (PSO) algorithm was introduced by Kennedy and Eberhart in 1995 [1] to emulate the collaborative behavior of bird flocking and fish schooling in searching for food [1–4]. In PSO, each individual (namely, a particle) represents a potential solution of the optimization problem, while the location of the food source is the global optimum solution. Being a population-based metaheuristic search (MS) algorithm, PSO simultaneously evaluates many points in the search space. Besides searching for the food independently and stochastically, each particle collaborates and shares information with the others, to ensure that all of them move toward the optimal solution of the problem, eventually leading to convergence [2,3]. Since its introduction, PSO has become an overwhelmingly popular choice of optimization technique due to its simple implementation and excellent performance on various benchmark and engineering design problems [4–9]. Despite its competitive performance, PSO has some undesirable dynamical properties that degrade its search ability. One of the most important issues is premature convergence,


where the particles tend to be trapped in local optima due to the rapid convergence and diversity loss of the swarm [10]. Another issue concerns the ability of PSO to balance the exploration/exploitation search. Overemphasis on exploration prevents swarm convergence, while too much exploitation tends to cause premature convergence of the swarm [11]. Although extensive work [11–29] has been reported to address these issues, most existing PSO variants do not provide alternative learning strategies to particles when they fail to improve their fitness during the search process. This problem inevitably limits the algorithms' search capabilities. Recently, Rao et al. [30,31] proposed the teaching–learning-based optimization (TLBO) algorithm, inspired by the philosophy of teaching and learning. The process of TLBO is divided into two parts, namely the Teacher phase and the Learner phase, where the individuals learn from the teacher and from the interaction with other individuals, respectively. Motivated by these two facts, we propose a teaching and peer-learning PSO (TPLPSO). To be specific, we improve the existing TLBO framework and adapt this enhanced framework into PSO. Similar to TLBO, the TPLPSO adopts two learning phases, namely the teaching and peer-learning phases. Each particle first enters the teaching phase and updates its velocity according to its historical best and the global best information. A particle that fails to improve its fitness in the teaching phase then enters the peer-learning phase, where an exemplar particle is selected as the
guide for the particle to search for a better solution. The roulette wheel selection technique is employed to ensure that a fitter particle has a higher probability of being selected as the exemplar, thereby providing a more promising search direction toward the global optimum. To resolve the premature convergence issue, we employ a stagnation prevention strategy (SPS) module that is triggered when the PSO swarm fails to improve the global best fitness in m successive function evaluations (FEs). The remainder of this paper is organized as follows. Section 2 briefly presents some related works. Section 3 details the methodology of the TPLPSO. Section 4 provides the experimental settings and results. Finally, Section 5 concludes the work.

2. Related works

In this section, we discuss the mechanism of the basic PSO. Next, the state-of-the-art PSO variants are reviewed. For self-completeness, we also provide a brief description of TLBO.

2.1. Basic particle swarm optimization (PSO) algorithm

In the basic PSO, the swarm consists of a group of particles with negligible mass and volume that roam through the D-dimensional problem hyperspace. Each particle i represents a potential solution of the problem, and it is associated with two vectors, namely the position vector Xi = [Xi1, Xi2, ..., XiD] and the velocity vector Vi = [Vi1, Vi2, ..., ViD], which indicate its current state. One salient feature of PSO that distinguishes it from other MS algorithms is the capability of a particle to remember its personal best experience, that is, the best position it has ever achieved. During the search process, each particle of the population stochastically adapts its trajectory through its personal best experience and the group best experience [1,2]. Specifically, the d-th dimension of particle i's velocity, Vi,d(t + 1), and position, Xi,d(t + 1), at the (t + 1)-th iteration are updated as follows:

Vi,d(t + 1) = ωVi,d(t) + c1r1(Pi,d(t) − Xi,d(t)) + c2r2(Pg,d(t) − Xi,d(t))    (1)

Xi,d(t + 1) = Xi,d(t) + Vi,d(t + 1)    (2)

where i = 1, 2, ..., S is the particle's index; S is the population size; Pi = [Pi1, Pi2, ..., PiD] represents particle i's personal best experience; Pg = [Pg1, Pg2, ..., PgD] is the group best experience found by all of the particles so far; c1 and c2 are the acceleration coefficients that control the influences of the personal and group best experiences, respectively; r1 and r2 are two random numbers generated from the uniform distribution in the range [0, 1]; and ω is the inertia weight used to balance the global/local searches of the particles [11]. The implementation of the basic PSO is illustrated in Fig. 1.

2.2. State-of-the-art PSO variants

A substantial amount of research has been performed to improve the PSO's performance. Among these works, the parameter adaptation strategy has become one of the most promising approaches. Shi and Eberhart [11] proposed a PSO with linearly decreasing inertia weight (PSO-LDIW) by introducing a parameter called the inertia weight ω into the basic PSO. Accordingly, the parameter ω is linearly decreased to balance the exploration/exploitation search of PSO. Based on a thorough theoretical study of the convergence properties of the PSO swarm, Clerc and Kennedy [12] introduced a constriction factor χ into the basic PSO to prevent swarm explosion, thereby developing the Constricted PSO (CPSO).

Ratnaweera et al. [15] introduced a time-varying acceleration coefficient (TVAC) strategy into PSO, where the acceleration coefficients c1 and c2 are decreased and increased linearly with time, respectively, to regulate the exploration/exploitation behaviors of the swarm. In [15], two variants of PSO-TVAC, namely the PSO-TVAC with mutation (MPSO-TVAC) and the Self-Organizing Hierarchical PSO-TVAC (HPSO-TVAC), were developed. Tang et al. [23] proposed a Feedback Learning PSO with quadratic inertia weight (FLPSO-QIW) by introducing a fitness feedback mechanism into the TVAC scheme: the particle's fitness is incorporated into the modified TVAC to adaptively determine the c1 and c2 values. By proposing an evolutionary state estimation (ESE) module, Zhan et al. [21] developed an Adaptive PSO (APSO) that is capable of identifying the swarm's evolutionary states; the outputs of the ESE module are then used to adaptively adjust the particles' ω, c1, and c2. Leu and Yeh [27] proposed a Grey PSO, employing grey relational analysis to tune the particles' ω, c1, and c2. Hsieh et al. [19] developed an efficient population utilization strategy for PSO (EPUS-PSO), in which a population manager adaptively adjusts the population size according to the population's search status.

Population topology is another crucial factor that determines PSO performance, as it decides the information flow rate of the best solution within the swarm [32,33]. In [32,33], topologies with different connectivity, such as the fully connected, ring, and wheel topologies, were studied. Carvalho and Bastos-Filho [17] developed a clan topology according to the social behavior of clans. In the Clan PSO, the population is divided into several clans; each clan first performs the search, and the particle with the best fitness is selected as the clan leader. A conference is then performed among the leaders to adjust their positions. Bastos-Filho et al. [18] proposed a Dynamic Clan PSO by employing a migration mechanism in the clan topology, which allows particles in one clan to migrate to another. Meanwhile, Pontes et al. [22] hybridized the concept of clan topology with the APSO [21] to produce the ClanAPSO, in which, based on the evolutionary state of each clan, different clans employ different search operations. Parsopoulos and Vrahatis [14] proposed a Unified PSO (UPSO) to balance the exploration/exploitation search. Mendes et al. [13] advocated that each particle's movement be influenced by all of its topological neighbors and thereby proposed the Fully Informed PSO (FIPSO). Montes de Oca [20] integrated the concepts of a time-varying population topology, FIPSO's velocity updating mechanism [13], and the decreasing ω [11] to develop the Frankenstein PSO (FPSO). Initially, the particles in FPSO are connected in a fully connected topology; the topology connectivity is then reduced over time in a certain pattern.

Another area of research is to explore the PSO's learning strategies. Liang et al. [16] proposed the Comprehensive Learning PSO (CLPSO), in which each particle is allowed to learn from its own or another particle's historical best position in each dimension, to ensure that a larger search space is explored. Wang et al. [24] proposed a CLPSO variant by applying generalized opposition-based learning to the CLPSO. Motivated by the social phenomenon that multiple good exemplars assist the crowd to progress better, Huang et al.
[26] proposed an Example-based Learning PSO (ELPSO). Instead of a single Pg particle, an example set of multiple global best particles is employed to update the particles' positions in ELPSO. Noel [28] hybridized the PSO with a gradient-based local search algorithm to combine the strengths of stochastic and deterministic optimization schemes. Zhou et al. [25] introduced the Random Position PSO (RPPSO) by proposing a probability P(f): a random position is used to guide the particle if a randomly generated number is smaller than P(f). Jin et al. [29] advocated updating the particles' velocities and positions only in certain dimensions and thus proposed PSO with dimension selection methods; a total of three approaches, namely random, heuristic, and distance-based dimension selection methods, are developed in [29].


Basic PSO
1:  Generate initial swarm and set up parameters for each particle;
2:  while t < max_generation do
3:    for each particle i do
4:      Update the velocity Vi and position Xi using Eqs. (1) and (2), respectively;
5:      Perform fitness evaluation on the updated Xi of particle i;
6:      if f(Xi) < f(Pi) then
7:        Pi = Xi; f(Pi) = f(Xi);
8:        if f(Xi) < f(Pg) then
9:          Pg = Xi; f(Pg) = f(Xi);
10:       end if
11:     end if
12:   end for
13:   t = t + 1;
14: end while

Fig. 1. Basic PSO algorithm.
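For concreteness, the loop of Fig. 1 and the update rules of Eqs. (1) and (2) can be sketched in Python/NumPy as follows. This is a minimal illustration under our own conventions, not the authors' implementation; the function name and the parameter values (ω = 0.729, c1 = c2 = 1.49445) are illustrative choices.

import numpy as np

def basic_pso(f, dim, lo, hi, swarm_size=30, max_gen=1000,
              w=0.729, c1=1.49445, c2=1.49445, seed=None):
    # Minimize f over [lo, hi]^dim with the basic PSO of Eqs. (1) and (2).
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (swarm_size, dim))   # positions
    V = np.zeros((swarm_size, dim))              # velocities
    P = X.copy()                                 # personal best positions Pi
    fP = np.apply_along_axis(f, 1, X)            # personal best fitness f(Pi)
    g = int(np.argmin(fP))                       # index of the group best Pg
    for _ in range(max_gen):
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X)   # Eq. (1)
        X = X + V                                              # Eq. (2)
        fX = np.apply_along_axis(f, 1, X)
        better = fX < fP                         # update personal bests
        P[better], fP[better] = X[better], fX[better]
        g = int(np.argmin(fP))                   # update the group best
    return P[g], fP[g]

# Example: 50-D Sphere function
best_x, best_f = basic_pso(lambda x: float(np.sum(x * x)), 50, -100.0, 100.0)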

2.3. Teaching–learning-based optimization (TLBO) algorithm

Inspired by the philosophy of the classical school teaching and learning process, Rao et al. [30,31] proposed the TLBO, a population-based MS algorithm that consists of a group of learners. Each D-dimensional individual Xi (i.e., learner) within the population represents a possible solution to an optimization problem, where D in the TLBO context represents the number of subjects offered to the learners. The TLBO attempts to improve the knowledge (represented by fitness) of each learner through two learning phases, namely the Teacher phase and the Learner phase. A learner is replaced if the new solution produced during the Teacher or Learner phase has better fitness. The algorithm is repeated until the termination criteria are met. During the Teacher phase, each learner learns from the teacher Xteacher, that is, the best individual in the population. To be specific, each learner moves toward the position of Xteacher, taking into account the current mean value of the learners (Xmean), which represents the average quality of all learners in the population. Mathematically, the learner Xi updates his/her position Xnew,i during the Teacher phase as follows [30,31,34]:

Xnew,i = Xi + r · (Xteacher − TF · Xmean)    (3)

where r is a random number in the range [0, 1], and TF is a teaching factor used to emphasize the importance of the learners' average quality (Xmean); it can be either 1 or 2. In the Learner phase, each learner attempts to improve its knowledge through interaction with other learners. To be specific, the learner Xi first randomly selects a peer learner Xj (where j ≠ i). If Xj has better fitness than Xi, the latter is moved toward the former, as shown in Eq. (4). Meanwhile, Xi is moved away from Xj (Eq. (5)) if the latter has worse fitness than the former.

Xnew,i = Xi + r · (Xj − Xi)    (4)

Xnew,i = Xi + r · (Xi − Xj)    (5)

The implementation of TLBO is illustrated in Fig. 2.

3. Teaching and peer-learning PSO (TPLPSO)

In this section, we introduce the proposed TPLPSO algorithm in detail. To be specific, we modify the existing TLBO framework and adapt it into the basic PSO, in order to guide the particles during the search process.

Similar to the TLBO, whose knowledge transfer mechanism is inspired by the classical school learning process, our modified learning framework also consists of two phases, namely the teaching phase and the peer-learning phase. In TPLPSO, each particle is considered a student, while the global best particle Pg is assigned as the teacher of the classroom. During the teaching phase, each particle improves its knowledge (represented by fitness) based on its self-cognition (represented by its personal best experience Pi) and the knowledge delivered by the teacher. Nevertheless, the teacher does not always succeed in improving the knowledge of every student. Some students are capable of understanding the knowledge imparted by the teacher and thus successfully improve their knowledge; on the contrary, some students may fail to improve their knowledge from their teacher. In the latter case, a peer-learning phase is offered: the students that fail to improve their knowledge during the teaching phase may approach their classmates or peers and learn from them. To enhance the chance of knowledge improvement, these students tend to select peers that have better knowledge than themselves and stay away from those that are worse. In the following subsections, the interaction and implementation of the TPLPSO particles in the teaching and peer-learning phases are described in detail. Additionally, the SPS module employed to prevent premature convergence is presented.

3.1. Teaching phase of TPLPSO

In the proposed TPLPSO, the initial population is first randomly generated. For a minimization problem, a particle with a lower fitness value carries better knowledge (a better solution) and is more desirable. As the Pg particle has the lowest fitness value in the population, it is recognized as the most knowledgeable particle and is thereby assigned as the teacher particle. Meanwhile, the remaining particles in the population are the student particles, which attempt to improve their fitness based on their personal best knowledge Pi and their teacher's knowledge Pg. In the teaching phase, each student particle i updates its velocity Vi and position Xi using Eqs. (1) and (2), respectively. To constrain the maximum velocity of each particle, a parameter Vmax is defined for velocity clamping. The value of Vmax is set to half of the search range of the optimization problem, that is, Vmax,d = (1/2)(Xmax,d − Xmin,d), where Xmax,d and Xmin,d represent the upper and lower search boundaries of the d-th dimension of the problem, respectively. The fitness of the updated position of particle i, f(Xi), is then evaluated and compared with the fitness of its personal best position, f(Pi).
TLBO
1:  Initialize population and evaluate the fitness of each learner Xi;
2:  while t < max_generation do
3:    for each learner i do
4:      /* Teacher phase */
5:      Select the best learner as teacher Xteacher and calculate Xmean;
6:      Calculate Xnew,i for learner i using Eq. (3);
7:      Perform fitness evaluation on Xnew,i;
8:      if f(Xnew,i) < f(Xi) then
9:        Xi = Xnew,i; f(Xi) = f(Xnew,i);
10:     end if
11:     /* Learner phase */
12:     Randomly select a learner Xj from the population, where j ≠ i;
13:     if f(Xj) < f(Xi) then
14:       Calculate Xnew,i for learner i using Eq. (4);
15:     else /* f(Xi) < f(Xj) */
16:       Calculate Xnew,i for learner i using Eq. (5);
17:     end if
18:     Perform fitness evaluation on Xnew,i;
19:     if f(Xnew,i) < f(Xi) then
20:       Xi = Xnew,i; f(Xi) = f(Xnew,i);
21:     end if
22:   end for
23:   t = t + 1;
24: end while

Fig. 2. TLBO algorithm.
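The following Python/NumPy sketch mirrors one generation of Fig. 2 for a minimization problem; the function name and the in-place array updates are our own choices, not part of the original TLBO description.

import numpy as np

def tlbo_generation(X, fX, f, rng):
    # One TLBO generation (Fig. 2); updates positions X and fitness values fX in place.
    n, dim = X.shape
    for i in range(n):
        # Teacher phase, Eq. (3)
        teacher = X[np.argmin(fX)]               # best learner acts as Xteacher
        Xmean = X.mean(axis=0)                   # mean position of the learners
        TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
        Xnew = X[i] + rng.random(dim) * (teacher - TF * Xmean)
        fnew = f(Xnew)
        if fnew < fX[i]:
            X[i], fX[i] = Xnew, fnew
        # Learner phase, Eqs. (4) and (5)
        j = rng.choice([k for k in range(n) if k != i])
        if fX[j] < fX[i]:
            Xnew = X[i] + rng.random(dim) * (X[j] - X[i])   # move toward Xj
        else:
            Xnew = X[i] + rng.random(dim) * (X[i] - X[j])   # move away from Xj
        fnew = f(Xnew)
        if fnew < fX[i]:
            X[i], fX[i] = Xnew, fnew
    return X, fX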

If the updated position has lower fitness than f(Pi), it implies that the student particle has successfully improved its personal knowledge in the teaching phase, and the updated Xi then replaces its Pi. Similarly, the updated Xi replaces Pg if the former has lower fitness than the latter, that is, f(Xi) < f(Pg). In this scenario, the student particle with the improved fitness becomes more knowledgeable than the teacher particle; the former is therefore promoted to teacher, while the latter is demoted to student. To mitigate the premature convergence issue, an SPS module, described in detail in Section 3.3, is applied to the teacher particle when its knowledge f(Pg) has not improved for m successive fitness evaluations (FEs). One must not confuse the number of iterations with the number of FEs: the former is incremented once all particles in the population have updated their positions and performed their fitness evaluations, whereas the latter is incremented each time a single particle has updated its position and performed a fitness evaluation. Thus, the number of FEs consumed in an optimization run is generally higher than the number of iterations. In TPLPSO, a failure counter fc is employed to record the number of times the Pg particle fails to improve its fitness. To be specific, fc is increased by one when a student particle fails to replace the teacher particle; if a student particle is successfully promoted to teacher, fc is reset to zero. The procedure of the teaching phase is described by Algorithm 1, as illustrated in Fig. 3.

3.2. Peer-learning phase of TPLPSO

As mentioned earlier, not all student particles in the teaching phase are able to improve their knowledge from the teacher particle. Thus, an alternative phase, namely the peer-learning phase, is offered to those student particles that fail to improve their fitness during the teaching phase. In the peer-learning phase, a student particle is allowed to select an exemplar particle Pe from among its peer student particles. For each particle i, the personal best positions of all the student particles are eligible as exemplar candidates, except for the Pi of particle i itself and the teacher particle Pg. As an exemplar with better fitness is more likely to improve the particle's fitness, particle i tends to select a peer particle j with a more successful personal experience, that is, f(Pj) < f(Pi).

Thus, when particle i enters the peer-learning phase, it employs the roulette wheel selection technique to select its exemplar particle Pei from the candidates, based on the personal best fitness criterion. Each exemplar candidate k is assigned a weightage value Wk, as shown:

Wk = (fmax − f(Pk)) / (fmax − fmin),  ∀k ∈ [1, K]    (6)

where fmax and fmin represent the maximum (worst) and minimum (best) personal best fitness values among the exemplar candidates, respectively, and K represents the number of exemplar candidates available for selection. From Eq. (6), an exemplar candidate k with lower fitness is assigned a larger Wk value, implying that it has a greater probability of being selected as the exemplar. The procedure to select the exemplar particle Pei of particle i is described by Algorithm 2, as shown in Fig. 4. As the exemplar particle is selected through a probabilistic mechanism, two outcomes are possible: (1) the exemplar particle has lower personal best fitness than particle i, that is, f(Pei) < f(Pi), or (2) the exemplar particle has higher personal best fitness than particle i, that is, f(Pei) > f(Pi). Two different velocity updating strategies are employed in response to these two scenarios. In scenario 1, particle i is encouraged to move toward the exemplar particle, as the latter is more knowledgeable and thus offers a higher chance of improving the fitness of the former. For particle i that enters the peer-learning phase and meets scenario 1, its velocity Vi is updated as follows:

Vi = ωVi + cr3(Pei − Xi)    (7)

where c represents the acceleration coefficient, set as c = 2, and r3 is a random number in the range [0, 1]. Meanwhile, in scenario 2, the selected exemplar particle is less knowledgeable than particle i and is unlikely to contribute to the fitness improvement of the latter. Thus, particle i is encouraged to repel away from the exemplar particle. This strategy maintains the diversity of particle i, as it prevents the particle from converging on an exemplar with inferior performance. The velocity update for particle i in scenario 2 is:

Vi = ωVi − cr4(Pei − Xi)    (8)

where r4 is a random number in the range [0, 1]. Similar to the teaching phase, the updated position Xi of the particle is evaluated as well.


Algorithm 1: Teaching_Phase (particle i, fes, fc)
1:  Update the velocity Vi and position Xi of particle i using Eqs. (1) and (2), respectively;
2:  Perform fitness evaluation on the updated Xi of particle i;
3:  fes = fes + 1; /* fes represents the number of FEs consumed so far */
4:  if f(Xi) < f(Pi) then
5:    Pi = Xi; f(Pi) = f(Xi);
6:    if f(Xi) < f(Pg) then
7:      Pg = Xi; f(Pg) = f(Xi);
8:      fc = 0;
9:    else
10:     fc = fc + 1;
11:   end if
12: else
13:   fc = fc + 1;
14: end if

Fig. 3. Teaching phase of the TPLPSO algorithm.
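A Python/NumPy rendering of Algorithm 1 might look as follows. The signature, the in-place updates, and the returned tuple are conventions of this sketch; the velocity clamping follows the Vmax description in Section 3.1, and c1 = c2 = c per Table 3.

import numpy as np

def teaching_phase(i, X, V, P, fP, g, Pg, fPg, f, w, c, vmax, fes, fc, rng):
    # Algorithm 1: teaching-phase update for student particle i.
    dim = X.shape[1]
    r1, r2 = rng.random(dim), rng.random(dim)
    V[i] = w * V[i] + c * r1 * (P[i] - X[i]) + c * r2 * (Pg - X[i])   # Eq. (1)
    V[i] = np.clip(V[i], -vmax, vmax)    # velocity clamping with Vmax
    X[i] = X[i] + V[i]                   # Eq. (2)
    fX = f(X[i])
    fes += 1                             # one fitness evaluation consumed
    improved = fX < fP[i]
    if improved:
        P[i], fP[i] = X[i].copy(), fX
        if fX < fPg:                     # student promoted to teacher
            g, Pg, fPg, fc = i, X[i].copy(), fX, 0
        else:
            fc += 1
    else:
        fc += 1                          # teacher's fitness not improved
    return g, Pg, fPg, fes, fc, improved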

Algorithm 2: Exemplar_Selection (particle i, Pg)
1: Identify the indices of the target particle and the global best particle as i and g, respectively;
2: Excluding Pi and Pg, construct an array to store the exemplar candidates, ECi = [P1, P2, ..., PK];
3: Identify fmax and fmin from ECi;
4: for each exemplar candidate k do
5:   Calculate Wk for exemplar candidate k using Eq. (6);
6: end for
7: Construct an array WCi = [W1, W2, ..., WK] to store the weight contributions of the exemplar candidates;
8: Perform roulette wheel selection based on WCi to select the exemplar particle;
9: Return Pei;

Fig. 4. Exemplar selection in the peer-learning phase of the TPLPSO algorithm.
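Algorithm 2 can be sketched as follows; the uniform fallback when all candidates have equal fitness (where Eq. (6) is undefined) is our own safeguard, not part of the paper.

import numpy as np

def select_exemplar(i, g, fP, rng):
    # Algorithm 2: roulette-wheel exemplar selection with the weights of Eq. (6).
    candidates = np.array([k for k in range(len(fP)) if k not in (i, g)])
    fcand = fP[candidates]
    fmax, fmin = fcand.max(), fcand.min()
    if fmax == fmin:                     # all candidates tie: fall back to uniform
        return int(rng.choice(candidates))
    W = (fmax - fcand) / (fmax - fmin)   # Eq. (6): lower fitness, larger weight
    return int(rng.choice(candidates, p=W / W.sum()))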

If the updated Xi has smaller fitness than both the Pi and Pg solutions, that is, f(Xi) < f(Pi) and f(Xi) < f(Pg), it replaces both Pi and Pg. Meanwhile, the failure counter fc is also updated, by resetting it to zero or increasing it by one, depending on the fitness comparisons among the updated Xi, Pi, and Pg solutions. The procedure of the peer-learning phase is implemented in Algorithm 3, as illustrated in Fig. 5.

3.3. Stagnation prevention strategy (SPS) of TPLPSO

Due to the rapid convergence characteristic of the basic PSO, the particles tend to be trapped in local optima during the earlier stage of optimization. To counter this issue, we employ an SPS module in the TPLPSO to provide a jumping-out mechanism to the teacher particle (Pg) when it has not improved for m successive FEs.

The SPS module aims to provide fresh momentum to the Pg particle and thus enhance its exploration capability. Too large or too small values of m are not desirable: too large an m allows Pg to stagnate at a local optimum for too long, degrading the algorithm's convergence speed, while too small an m perturbs Pg excessively and wastes computational resources. In this paper, we set m = 5. In the SPS module, one dimension d of the Pg particle, Pgd, is first randomly selected; it is then perturbed using a normally distributed term as follows:

Pgd^per = Pgd + sgn(r5) · r6 · (Xmax,d − Xmin,d)    (9)

Algorithm 3: Peer_Learning_Phase (particle i, fes, fc, Pg, Pi)
1:  Pei = Exemplar_Selection (particle i, Pg);
2:  if f(Pei) < f(Pi) then
3:    Update the velocity Vi of particle i using Eq. (7);
4:  else
5:    Update the velocity Vi of particle i using Eq. (8);
6:  end if
7:  Update position Xi of particle i using Eq. (2);
8:  Perform fitness evaluation on the updated Xi of particle i;
9:  fes = fes + 1;
10: if f(Xi) < f(Pi) then
11:   Pi = Xi; f(Pi) = f(Xi);
12:   if f(Xi) < f(Pg) then
13:     Pg = Xi; f(Pg) = f(Xi);
14:     fc = 0;
15:   else
16:     fc = fc + 1;
17:   end if
18: else
19:   fc = fc + 1;
20: end if

Fig. 5. Peer-learning phase in the TPLPSO algorithm.
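A matching sketch of Algorithm 3, reusing select_exemplar from the sketch above and the same conventions as the teaching-phase sketch:

import numpy as np

def peer_learning_phase(i, X, V, P, fP, g, Pg, fPg, f, w, c, vmax, fes, fc, rng):
    # Algorithm 3: peer-learning update for a student that failed the teaching phase.
    e = select_exemplar(i, g, fP, rng)               # Algorithm 2
    r = rng.random(X.shape[1])
    if fP[e] < fP[i]:
        V[i] = w * V[i] + c * r * (P[e] - X[i])      # Eq. (7): attract toward Pei
    else:
        V[i] = w * V[i] - c * r * (P[e] - X[i])      # Eq. (8): repel from Pei
    V[i] = np.clip(V[i], -vmax, vmax)
    X[i] = X[i] + V[i]                               # Eq. (2)
    fX = f(X[i])
    fes += 1
    if fX < fP[i]:
        P[i], fP[i] = X[i].copy(), fX
        if fX < fPg:
            g, Pg, fPg, fc = i, X[i].copy(), fX, 0
        else:
            fc += 1
    else:
        fc += 1
    return g, Pg, fPg, fes, fc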


where Pgd^per is the perturbed Pgd; Xmax,d and Xmin,d represent the upper and lower bounds of the problem space in the d-th dimension, respectively; r5 is a random number in the range [0, 1], generated from the uniform distribution; and r6 is a random number generated from a normal distribution N(μ, σ²) with mean μ = 0 and standard deviation σ = R. R is defined as the perturbation range, which is linearly decreased with the number of FEs as shown:

R = Rmax − (Rmax − Rmin) · (fes/FEmax)    (10)

where Rmax = 1 and Rmin = 0.1 represent the maximum and minimum perturbation ranges, respectively; fes is the number of FEs consumed; and FEmax is the maximum number of FEs defined by the user. A perturbed Pg particle, Pg^per, is thus produced by the SPS module. It replaces the Pg particle if it has better fitness, that is, f(Pg^per) < f(Pg). The procedure of the SPS module is described in Algorithm 4, as shown in Fig. 6.

3.4. Complete framework of TPLPSO

Together with the aforementioned components, the implementation of the TPLPSO is summarized in the main algorithm block presented in Fig. 7. As shown in the main algorithm block, a stagnation check is performed as soon as the teaching or peer-learning phase is completed. The main purpose of this procedure is to detect whether the threshold m has been exceeded, each time a fitness evaluation (FE) is consumed by a particle in the teaching or peer-learning stage. As soon as the threshold m is reached, the SPS module (Algorithm 4) is invoked immediately to help the teacher particle Pg escape from the local optimum. This strategy prevents the Pg particle from being trapped in a local optimum for an excessive number of FEs, which could lead to poor search accuracy and slow convergence.

3.5. Comparison of TPLPSO with TLBO

As mentioned earlier, we modify the existing TLBO to make it reflect the scenario of classical classroom learning more accurately. The enhanced TLBO framework is then adapted into the basic PSO, thereby developing the TPLPSO. In this section, we highlight the modifications made in our approach. Firstly, as shown in the Teacher phase of the original TLBO (Eq. (3)), each learner attempts to improve its knowledge according to the knowledge of the teacher (Xteacher) and the average or mainstream knowledge (Xmean) [35] of the learners. In our approach, one may notice that the learning strategy adopted by the student particle in the teaching phase is in fact the same as the strategy used by the basic PSO. This is because when a student attempts to improve his/her knowledge during the teaching process, he/she learns based on the knowledge imparted by the teacher (Pg) as well as his/her self-cognition (Pi), which is well represented by Eq. (1). Next, during the Learner phase of the original TLBO, the learner randomly selects another peer to learn from, without considering the knowledge level of that peer. This somewhat contradicts the real-world scenario, in which we tend to look for someone with better knowledge to learn from. To alleviate this issue, we modify the peer-learning phase of TPLPSO so that the student particles employ the roulette wheel selection technique to select their exemplars. This selection technique gives a more knowledgeable peer a higher probability of being selected as the guidance particle, thereby providing a more promising search direction toward the global optimum. Finally, as shown in the original framework of TLBO (Fig. 2), all learners go through the Learner phase, regardless of the fact that some of them have successfully updated their knowledge in the previous Teacher phase. However, in our TPLPSO implementation (Fig. 7), only student particles that fail to improve their knowledge

in the teaching stage are allowed to enter the peer-learning stage. We opine that the success of a student particle in improving its fitness during the teaching stage implies that the particle is on the right track toward the optimum solution. Thus, we omit the peer-learning stage to prevent the intervention of the peers on the trajectory of this particle. Additionally, such an implementation saves computational resources, as not all student particles enter the peer-learning stage, thereby consuming fewer FEs.

4. Experimental setup and simulation results

In this section, we first describe the 20 benchmark functions [36–38] that are used to investigate the algorithm's search performance. All the benchmarks used are scalable, and we perform the evaluation with 50 variables, that is, D = 50. Next, we provide the details of the simulation settings for all involved PSO variants. Finally, we present the experimental results.

4.1. Benchmark functions

The 20 benchmark functions employed for the performance evaluation are presented in Tables 1 and 2. These tables give brief descriptions of the benchmarks' formulae, their feasible search ranges S, their fitness values at the global minimum Fmin, and their accuracy levels ε. As shown in Tables 1 and 2, we categorize these 20 benchmarks into four classes, namely (1) traditional problems, (2) rotated problems, (3) shifted problems, and (4) complex problems. Each function in the traditional problems (F1–F10) has different characteristics, which allows us to examine the algorithm's capabilities against various criteria. For example, function F1 is used to test the algorithm's convergence speed, as it can be solved easily. Meanwhile, functions F5, F6, F9, and F10 are multimodal functions with a huge number of local optima in the high-dimensional case; these functions are used to evaluate the algorithm's ability to escape from local optima. Certain traditional problems (e.g., F1, F4, F6, F7, and F10) are separable and can be solved by D one-dimensional searches. To prevent the D one-dimensional searches, the rotated problems (F11–F14) are developed by multiplying the original variable Xi by an orthogonal matrix M [39] to produce a rotated variable Zi, that is, Zi = M × Xi. Any change in Xi affects all dimensions of Zi, and the rotated problems thus become nonseparable (note: compare F1 vs. F11, F6 vs. F13, and F7 vs. F14 in Table 2). For the shifted problems (F15–F17), a vector o = [o1, o2, ..., oD] is defined to adjust the global optimum of a traditional problem to a new location, that is, Zi = Xi − o. The complex problems (F18–F20) consist of the shifted and rotated problems (F18–F19) and the expanded problem (F20). The former combines both the rotating and shifting operations in the traditional problems, i.e., Zi = (Xi − o) × M, while the latter is generated by taking the two-dimensional Rosenbrock function (F5) as the input argument of the Griewank function (F8), as shown in [38].

4.2. Parameter settings for the involved PSO variants

In this paper, we employ ten well-established PSO variants for a thorough comparison with the TPLPSO. These ten PSO variants are PSO-LDIW [11], CPSO [12], MPSO-TVAC [15], APSO [21], UPSO [14], FIPSO [13], CLPSO [16], FLPSO-QIW [23], FPSO [20], and RPPSO [25]. The parameter settings for all PSO variants are extracted from their corresponding literature and are described in Table 3.
It is worth mentioning that the parameter settings for these ten PSO variants were set by their corresponding authors and are the optimized ones.


Algorithm 4: SPS (Pg, fc, m, Rmax, Rmin, fes, FEmax)
1:  if fc ≥ m then
2:    Pg^per = Pg;
3:    Randomly select a dimension d to perform perturbation;
4:    Calculate the range of perturbation R using Eq. (10);
5:    Perform the perturbation on Pgd using Eq. (9);
6:    Perform fitness evaluation on the Pg^per particle;
7:    fes = fes + 1;
8:    if f(Pg^per) < f(Pg) then
9:      Pg = Pg^per; f(Pg) = f(Pg^per);
10:   end if
11:   fc = 0;
12: end if

Fig. 6. SPS module in the TPLPSO algorithm.
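Algorithm 4 in the same sketch style. Since r6 is drawn from a zero-mean normal distribution, which is symmetric, the sgn(r5) factor of Eq. (9) does not alter the perturbation's distribution, so this sketch folds it into r6.

import numpy as np

def sps(Pg, fPg, fc, m, fes, fe_max, f, lo, hi, rng, rmax=1.0, rmin=0.1):
    # Algorithm 4: perturb one random dimension of Pg once fc reaches m.
    if fc < m:
        return Pg, fPg, fc, fes
    R = rmax - (rmax - rmin) * fes / fe_max           # Eq. (10): shrinking range
    per = Pg.copy()
    d = rng.integers(len(Pg))                         # dimension chosen for perturbation
    per[d] = Pg[d] + rng.normal(0.0, R) * (hi - lo)   # Eq. (9), r6 ~ N(0, R^2)
    fper = f(per)
    fes += 1
    if fper < fPg:                                    # keep the jump only if it improves Pg
        Pg, fPg = per, fper
    return Pg, fPg, 0, fes                            # the failure counter is reset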

Main Algorithm Block: TPLPSO Algorithm
1:  Generate initial swarm and set up parameters for each particle;
2:  Reset fes = 0; fc = 0;
3:  while fes < FEmax do
4:    for each particle i do
5:      previous_f(i) = f(Pi);
6:      Perform Teaching_Phase (particle i, fes, fc);
7:      Check for SPS (Pg, fc, m, Rmax, Rmin, fes, FEmax);
8:      if previous_f(i) ≤ f(Pi) then
9:        Perform Peer_Learning_Phase (particle i, fes, fc, Pg, Pi);
10:       Check for SPS (Pg, fc, m, Rmax, Rmin, fes, FEmax);
11:     end if
12:   end for
13: end while

Fig. 7. Complete framework of the TPLPSO algorithm.
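Finally, the main algorithm block of Fig. 7 can be wired together from the sketches of Algorithms 1–4 above. The per-FE schedule of the linearly decreasing ω (Table 3) is one possible reading; the paper does not pin down this detail, and all names here are ours.

import numpy as np

def tplpso(f, dim, lo, hi, swarm_size=30, fe_max=300_000, m=5, c=2.0, seed=None):
    # Main algorithm block (Fig. 7), built from the earlier sketches.
    rng = np.random.default_rng(seed)
    vmax = 0.5 * (hi - lo)                      # Vmax = half the search range
    X = rng.uniform(lo, hi, (swarm_size, dim))
    V = np.zeros_like(X)
    P = X.copy()
    fP = np.apply_along_axis(f, 1, X)
    g = int(np.argmin(fP))
    Pg, fPg = P[g].copy(), float(fP[g])
    fes, fc = swarm_size, 0                     # initial evaluations already consumed
    while fes < fe_max:
        for i in range(swarm_size):
            w = 0.9 - 0.5 * fes / fe_max        # omega decreasing from 0.9 to 0.4
            g, Pg, fPg, fes, fc, improved = teaching_phase(
                i, X, V, P, fP, g, Pg, fPg, f, w, c, vmax, fes, fc, rng)
            Pg, fPg, fc, fes = sps(Pg, fPg, fc, m, fes, fe_max, f, lo, hi, rng)
            if not improved:                    # failed students enter peer learning
                g, Pg, fPg, fes, fc = peer_learning_phase(
                    i, X, V, P, fP, g, Pg, fPg, f, w, c, vmax, fes, fc, rng)
                Pg, fPg, fc, fes = sps(Pg, fPg, fc, m, fes, fe_max, f, lo, hi, rng)
    return Pg, fPg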

Table 1. Benchmark functions used. (Note: M denotes the orthogonal matrix; o denotes the shifted global optimum; fbias_j, ∀j ∈ [1, 6], denotes the shifted fitness value applied to the corresponding function.)

F1 (Sphere): F1(Xi) = Σ_{d=1}^{D} X_{i,d}²
F2 (Schwefel 2.22): F2(Xi) = Σ_{d=1}^{D} |X_{i,d}| + Π_{d=1}^{D} |X_{i,d}|
F3 (Schwefel 1.2): F3(Xi) = Σ_{d=1}^{D} (Σ_{j=1}^{d} X_{i,j})²
F4 (Schwefel 2.21): F4(Xi) = max_{d=1,...,D} |X_{i,d}|
F5 (Rosenbrock): F5(Xi) = Σ_{d=1}^{D−1} (100(X_{i,d}² − X_{i,d+1})² + (X_{i,d} − 1)²)
F6 (Rastrigin): F6(Xi) = Σ_{d=1}^{D} (X_{i,d}² − 10cos(2πX_{i,d}) + 10)
F7 (Non-continuous Rastrigin): F7(Xi) = Σ_{d=1}^{D} (Y_{i,d}² − 10cos(2πY_{i,d}) + 10), where Y_{i,d} = X_{i,d} if |X_{i,d}| < 0.5, and Y_{i,d} = round(2X_{i,d})/2 if |X_{i,d}| ≥ 0.5
F8 (Griewank): F8(Xi) = Σ_{d=1}^{D} X_{i,d}²/4000 − Π_{d=1}^{D} cos(X_{i,d}/√d) + 1
F9 (Ackley): F9(Xi) = −20 exp(−0.2 √((1/D) Σ_{d=1}^{D} X_{i,d}²)) − exp((1/D) Σ_{d=1}^{D} cos(2πX_{i,d})) + 20 + e
F10 (Weierstrass): F10(Xi) = Σ_{d=1}^{D} Σ_{k=0}^{kmax} [a^k cos(2πb^k (X_{i,d} + 0.5))] − D Σ_{k=0}^{kmax} [a^k cos(πb^k)], with a = 0.5, b = 3, kmax = 20
F11 (Rotated Sphere): F11(Xi) = F1(Zi), Zi = M × Xi
F12 (Rotated Schwefel 1.2): F12(Xi) = F3(Zi), Zi = M × Xi
F13 (Rotated Rastrigin): F13(Xi) = F6(Zi), Zi = M × Xi
F14 (Rotated Non-continuous Rastrigin): F14(Xi) = F7(Zi), Zi = M × Xi
F15 (Shifted Rastrigin): F15(Xi) = F6(Zi) + fbias1, Zi = Xi − o, fbias1 = −330
F16 (Shifted Non-continuous Rastrigin): F16(Xi) = F7(Zi) + fbias2, Zi = Xi − o, fbias2 = −330
F17 (Shifted Griewank): F17(Xi) = F8(Zi) + fbias3, Zi = Xi − o, fbias3 = −180
F18 (Shifted Rotated Weierstrass): F18(Xi) = F10(Zi) + fbias4, Zi = (Xi − o) × M, fbias4 = 90
F19 (Shifted Rotated High Conditioned Elliptic): F19(Xi) = Σ_{d=1}^{D} (10^6)^{(d−1)/(D−1)} Z_{i,d}² + fbias5, Zi = (Xi − o) × M, fbias5 = −450
F20 (Shifted Expanded Griewank's Plus Rosenbrock): F20(Xi) = F8(F5(Z_{i,1}, Z_{i,2})) + F8(F5(Z_{i,2}, Z_{i,3})) + ... + F8(F5(Z_{i,D−1}, Z_{i,D})) + F8(F5(Z_{i,D}, Z_{i,1})) + fbias6, Zi = Xi − o, fbias6 = −130


Table 2. Experimental details and features of the 20 benchmark functions. (Note: "Md" denotes modality; "U" unimodal; "M" multimodal; "Sp" separable; "Rt" rotated; "Sf" shifted; "Y" yes; "N" no.)

| Function no. | S | Fmin | ε | Md | Sp | Rt | Sf |
| F1 | [−100, 100]^D | 0 | 1.0e−6 | U | Y | N | N |
| F2 | [−10, 10]^D | 0 | 1.0e−6 | U | N | N | N |
| F3 | [−100, 100]^D | 0 | 1.0e−6 | U | N | N | N |
| F4 | [−100, 100]^D | 0 | 1.0e−6 | U | Y | N | N |
| F5 | [−2.048, 2.048]^D | 0 | 1.0e−2 | M | N | N | N |
| F6 | [−5.12, 5.12]^D | 0 | 1.0e−2 | M | Y | N | N |
| F7 | [−5.12, 5.12]^D | 0 | 1.0e−2 | M | Y | N | N |
| F8 | [−600, 600]^D | 0 | 1.0e−2 | M | N | N | N |
| F9 | [−32, 32]^D | 0 | 1.0e−2 | M | N | N | N |
| F10 | [−0.5, 0.5]^D | 0 | 1.0e−2 | M | Y | N | N |
| F11 | [−100, 100]^D | 0 | 1.0e−6 | U | N | Y | N |
| F12 | [−100, 100]^D | 0 | 1.0e−2 | U | N | Y | N |
| F13 | [−5.12, 5.12]^D | 0 | 1.0e−2 | M | N | Y | N |
| F14 | [−5.12, 5.12]^D | 0 | 1.0e−2 | M | N | Y | N |
| F15 | [−5.12, 5.12]^D | fbias1 = −330 | 1.0e−2 | M | Y | N | Y |
| F16 | [−5.12, 5.12]^D | fbias2 = −330 | 1.0e−2 | M | Y | N | Y |
| F17 | [−600, 600]^D | fbias3 = −180 | 1.0e−2 | M | N | N | Y |
| F18 | [−0.5, 0.5]^D | fbias4 = 90 | 1.0e−2 | M | N | Y | Y |
| F19 | [−100, 100]^D | fbias5 = −450 | 1.0e−6 | U | N | Y | Y |
| F20 | [−5, 5]^D | fbias6 = −130 | 1.0e−2 | M | N | N | Y |

For our TPLPSO, the choice of the parameter m is justified theoretically and is not tailored to the benchmarks employed. We first examine the possible scenarios when extreme values of m are used (that is, m = 1 and m = 10). We observe that both too large and too small values of m tend to compromise the algorithm's performance, in terms of the convergence speed and the computation cost, respectively. Hence, we set m to the mean of these extreme values, that is, m = 5, to balance the algorithm's performance. To ensure a fair assessment between TPLPSO and its peers, all PSO variants are run independently 30 times on the 20 benchmarks employed. We use the maximum number of fitness evaluations, FEmax, as the termination criterion for all algorithms. In addition, the calculation is stopped if the exact solution X* is found. The population size and FEmax used in the D = 50 case are 30 and 3.00E+05, respectively [38].

4.3. Comparison of TPLPSO with other well-established PSO variants

In this paper, we assess PSO performance according to three criteria, namely accuracy, reliability, and efficiency, through the mean fitness value (Fmean), the success rate (SR), the success performance (SP), and the mean computational time (tmean) [38]. Fmean is defined as the mean difference between the best (lowest) fitness found by the algorithm and the fitness at the global optimum (Fmin). A smaller Fmean is desirable, as it implies better search accuracy.

SR evaluates the consistency with which an algorithm achieves a successful run, that is, a run in which the algorithm reaches the solution at the predefined accuracy level ε within FEmax fitness evaluations. An algorithm with a larger SR value is more reliable, as it can more consistently solve the problem at the predefined ε. Meanwhile, the algorithm's convergence speed toward a solution of the predefined accuracy ε can be measured by either the SP or the tmean value. Smaller values of SP and tmean imply that the algorithm requires less computational resources to achieve a solution of acceptable accuracy. To thoroughly compare the TPLPSO with its peers, we perform a two-tailed t-test [23] with 58 degrees of freedom at a 0.05 level of significance. The Fmean, standard deviation (SD), and t-test (h) results achieved by all the involved algorithms are listed in Table 4. Boldface text in the tables indicates the best result among the algorithms. We summarize the Fmean and h comparison results between TPLPSO and the other algorithms as "w/t/l" and "+/=/−" in the last rows of the table, respectively. "w/t/l" means that TPLPSO wins in w functions, ties in t functions, and loses in l functions compared with its peer. Meanwhile, "+/=/−" gives the number of functions in which TPLPSO performs significantly better than, almost the same as, and significantly worse than its contender, respectively. Additionally, for each function, we first rank the algorithms from the lowest Fmean to the highest. We then average the ranks over the number of functions to obtain the average rank, and finally order the average ranks to get the overall rank.
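The Fmean, SR, and SP metrics and the two-tailed t-test can be computed from per-run results as sketched below. The SP formula follows the common definition of [38] (mean FEs of successful runs divided by the fraction of successful runs), and the sample data are placeholders for illustration only.

import numpy as np
from scipy import stats

def summarize_runs(best_fits, f_min, eps, fes_used):
    # Fmean, SR (%), and SP from per-run best fitness values and FE counts.
    err = np.asarray(best_fits, dtype=float) - f_min
    ok = err <= eps                             # runs that reached accuracy level eps
    f_mean = err.mean()
    sr = 100.0 * ok.mean()
    sp = np.mean(np.asarray(fes_used)[ok]) / ok.mean() if ok.any() else np.inf
    return f_mean, sr, sp

# Two-tailed t-test on two 30-run samples (58 degrees of freedom, alpha = 0.05):
rng = np.random.default_rng(0)
runs_a = rng.normal(1e-3, 1e-4, 30)             # placeholder samples
runs_b = rng.normal(2e-3, 1e-4, 30)
t, p = stats.ttest_ind(runs_a, runs_b)
h = "=" if p >= 0.05 else ("+" if runs_a.mean() < runs_b.mean() else "-")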

Table 3. Parameter settings of the involved PSO algorithms.

| Algorithm | Year | Population topology | Parameter settings |
| PSO-LDIW [11] | 1998 | Fully connected | ω: 0.9–0.4, c1 = c2 = 2.0 |
| CPSO [12] | 2002 | Fully connected | χ = 0.729, c1 = c2 = 1.49445 |
| MPSO-TVAC [15] | 2004 | Fully connected | ω: 0.9–0.4, c1: 2.5–0.5, c2: 0.5–2.5 |
| APSO [21] | 2009 | Fully connected | ω: 0.9–0.4, c1 + c2: [3.0, 4.0], δ = [0.05, 0.1], σmax = 1.0, σmin = 0.1 |
| UPSO [14] | 2004 | Fully connected and local ring | χ = 0.729, c1 = c2 = 1.49445, u = [0, 1] |
| FIPSO [13] | 2004 | Local URing | χ = 0.729, Σci = 4.1 |
| CLPSO [16] | 2006 | Comprehensive learning | ω: 0.9–0.4, c = 2.0, m = 7 |
| FLPSO-QIW [23] | 2011 | Comprehensive learning | ω: 0.9–0.2, c1: 2–1.5, c2: 1–1.5, m = 1, Pi = [0.1, 1], K1 = 0.1, K2 = 0.001, σ1 = 1, σ2 = 0 |
| FPSO [20] | 2009 | Adaptive | ω: 0.9–0.4, Σci = 4.1 |
| RPPSO [25] | 2011 | Random | ω: 0.9–0.4, clarge = 6, csmall = 3 |
| TPLPSO | – | Fully connected | ω: 0.9–0.4, c1 = c2 = c = 2.0, m = 5, Rmax = 1.0, Rmin = 0.1 |


Table 4. Fmean, SD, and h results for the 50-D problems (F1–F20). For each function, the first row gives Fmean ± SD and the second row gives h / rank (TPLPSO, the reference algorithm of the t-test, shows its rank only).

| Function | Metric | APSO | CLPSO | CPSO | FLPSO-QIW | FPSO | FIPSO | MPSO-TVAC | RPPSO | PSO-LDIW | UPSO | TPLPSO |
| F1 | Fmean ± SD | 2.50E−01 ± 1.81E−01 | 3.29E−47 ± 1.28E−46 | 3.43E+03 ± 2.80E+03 | 2.90E−81 ± 5.97E−81 | 7.02E+01 ± 6.98E+01 | 2.96E−01 ± 8.06E−01 | 0.00E+00 ± 0.00E+00 | 1.28E−02 ± 2.98E−02 | 4.67E+03 ± 7.30E+03 | 8.80E+03 ± 1.69E+03 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | = / 4 | + / 9 | + / 3 | + / 8 | + / 7 | = / 1 | + / 5 | + / 10 | + / 11 | 1 |
| F2 | Fmean ± SD | 7.51E−02 ± 3.39E−02 | 2.95E−29 ± 2.16E−29 | 1.70E+01 ± 1.37E+01 | 3.98E−57 ± 1.01E−56 | 2.83E+00 ± 1.81E+00 | 9.05E−02 ± 1.54E−01 | 7.94E−10 ± 1.59E−09 | 2.20E+01 ± 1.60E+01 | 2.90E+01 ± 1.75E+01 | 3.70E+01 ± 1.05E+01 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 5 | + / 3 | + / 8 | + / 2 | + / 7 | + / 6 | + / 4 | + / 9 | + / 10 | + / 11 | 1 |
| F3 | Fmean ± SD | 1.46E+03 ± 4.82E+02 | 5.13E+03 ± 1.00E+03 | 2.82E+04 ± 1.01E+04 | 2.63E+02 ± 8.90E+01 | 3.44E+03 ± 1.33E+03 | 8.13E+00 ± 2.47E+01 | 2.54E−02 ± 2.83E−02 | 9.12E+01 ± 4.21E+01 | 2.08E+04 ± 1.59E+04 | 1.58E+04 ± 4.79E+03 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | + / 8 | + / 11 | + / 5 | + / 7 | = / 3 | + / 2 | + / 4 | + / 10 | + / 9 | 1 |
| F4 | Fmean ± SD | 1.75E+01 ± 3.85E+00 | 4.57E−01 ± 2.10E−01 | 2.82E+01 ± 4.40E+00 | 5.95E+00 ± 2.25E+00 | 5.00E+00 ± 3.48E+00 | 4.97E−01 ± 7.87E−01 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 2.24E+01 ± 3.09E+00 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 9 | + / 5 | + / 11 | + / 8 | + / 7 | + / 6 | = / 1 | = / 1 | = / 1 | + / 10 | 1 |
| F5 | Fmean ± SD | 4.62E+01 ± 1.53E+00 | 4.35E+01 ± 1.83E−01 | 2.33E+02 ± 1.61E+02 | 4.21E+01 ± 2.40E−01 | 5.68E+01 ± 7.08E+00 | 4.77E+01 ± 8.44E−01 | 4.34E+01 ± 5.10E−01 | 4.76E+01 ± 4.30E−01 | 2.10E+02 ± 4.34E+02 | 4.30E+02 ± 1.29E+02 | 4.35E+01 ± 1.23E+00 |
| | h / rank | + / 5 | = / 3 | + / 10 | − / 1 | + / 8 | + / 7 | = / 2 | + / 6 | + / 9 | + / 11 | 3 |
| F6 | Fmean ± SD | 5.81E−01 ± 6.29E−01 | 9.10E+01 ± 1.08E+01 | 2.84E+02 ± 4.22E+01 | 2.60E+00 ± 1.52E+00 | 1.85E+01 ± 1.02E+01 | 1.57E+00 ± 3.71E+00 | 3.02E−15 ± 6.47E−15 | 9.25E+00 ± 1.55E+01 | 1.15E+02 ± 7.78E+01 | 2.94E+02 ± 3.99E+01 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 3 | + / 8 | + / 10 | + / 5 | + / 7 | + / 4 | + / 2 | + / 6 | + / 9 | + / 11 | 1 |
| F7 | Fmean ± SD | 3.60E−02 ± 3.22E−02 | 8.10E+01 ± 9.76E+00 | 2.95E+02 ± 4.75E+01 | 5.58E+00 ± 2.36E+00 | 1.60E+01 ± 9.56E+00 | 5.70E−01 ± 8.65E−01 | 2.61E−15 ± 3.72E−15 | 1.25E+01 ± 1.94E+01 | 1.14E+02 ± 5.81E+01 | 2.26E+02 ± 3.86E+01 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 3 | + / 8 | + / 11 | + / 5 | + / 7 | + / 4 | + / 2 | + / 6 | + / 9 | + / 10 | 1 |
| F8 | Fmean ± SD | 1.70E−01 ± 8.21E−02 | 3.39E−11 ± 1.73E−10 | 3.78E+01 ± 3.78E+01 | 5.75E−04 ± 2.20E−03 | 1.86E+00 ± 9.28E−01 | 1.93E−01 ± 3.47E−01 | 0.00E+00 ± 0.00E+00 | 7.10E−03 ± 1.85E−02 | 3.92E+01 ± 7.00E+01 | 7.49E+01 ± 1.93E+01 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | = / 3 | + / 9 | = / 4 | + / 8 | + / 7 | = / 1 | + / 5 | + / 10 | + / 11 | 1 |
| F9 | Fmean ± SD | 6.60E−02 ± 2.57E−02 | 1.15E−14 ± 2.59E−15 | 1.45E+01 ± 1.51E+00 | 3.43E−14 ± 1.07E−14 | 1.80E+00 ± 1.10E+00 | 1.70E−01 ± 3.38E−01 | 0.00E+00 ± 0.00E+00 | 7.47E−01 ± 9.17E−01 | 1.21E+01 ± 5.99E+00 | 1.28E+01 ± 1.04E+00 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 5 | + / 3 | + / 11 | + / 4 | + / 8 | + / 6 | = / 1 | + / 7 | + / 9 | + / 10 | 1 |
| F10 | Fmean ± SD | 5.44E−01 ± 1.88E−01 | 0.00E+00 ± 0.00E+00 | 4.83E+01 ± 4.94E+00 | 1.88E−05 ± 8.29E−05 | 3.35E+00 ± 2.35E+00 | 9.80E−01 ± 9.54E−01 | 1.50E−01 ± 4.58E−01 | 4.69E−01 ± 1.25E+00 | 8.09E+00 ± 5.90E+00 | 3.96E+01 ± 4.33E+00 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | = / 1 | + / 11 | = / 3 | + / 8 | + / 7 | = / 4 | + / 5 | + / 9 | + / 10 | 1 |
| F11 | Fmean ± SD | 2.01E−01 ± 1.17E−01 | 2.56E−46 ± 1.25E−45 | 3.66E+03 ± 3.98E+03 | 1.15E−80 ± 4.42E−80 | 6.28E+01 ± 6.96E+01 | 4.97E−01 ± 1.06E+00 | 0.00E+00 ± 0.00E+00 | 4.98E−03 ± 1.58E−02 | 4.67E+03 ± 5.71E+03 | 9.33E+03 ± 2.50E+03 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | = / 4 | + / 9 | = / 3 | + / 8 | + / 7 | = / 1 | = / 5 | + / 10 | + / 11 | 1 |
| F12 | Fmean ± SD | 1.26E+03 ± 3.22E+02 | 5.77E+03 ± 9.90E+02 | 2.79E+04 ± 1.39E+04 | 2.62E+02 ± 7.62E+01 | 3.23E+03 ± 1.79E+03 | 8.45E+00 ± 2.24E+01 | 1.08E−01 ± 1.93E−01 | 9.09E+01 ± 3.77E+01 | 2.57E+04 ± 1.75E+04 | 1.82E+04 ± 5.66E+03 | 0.00E+00 ± 0.00E+00 |
| | h / rank | + / 6 | + / 8 | + / 11 | + / 5 | + / 7 | = / 3 | = / 2 | + / 4 | + / 10 | + / 9 | 1 |
| F13 | Fmean ± SD | 1.83E+02 ± 5.61E+01 | 3.33E+02 ± 2.34E+01 | 3.51E+02 ± 4.07E+01 | 1.26E+02 ± 1.76E+01 | 1.80E+02 ± 5.01E+01 | 2.65E+01 ± 3.39E+01 | 7.95E+01 ± 5.80E+01 | 4.25E+01 ± 4.64E+01 | 1.70E+02 ± 7.41E+01 | 3.80E+02 ± 2.79E+01 | 2.47E+01 ± 5.69E+01 |
| | h / rank | + / 8 | + / 9 | + / 10 | + / 5 | + / 7 | = / 2 | + / 4 | = / 3 | + / 6 | + / 11 | 1 |
| F14 | Fmean ± SD | 2.59E+02 ± 6.15E+01 | 3.21E+02 ± 2.52E+01 | 3.63E+02 ± 8.17E+01 | 1.28E+02 ± 2.13E+01 | 1.53E+02 ± 3.74E+01 | 4.15E+01 ± 5.13E+01 | 1.13E+02 ± 7.17E+01 | 8.00E+01 ± 5.30E+01 | 2.00E+02 ± 7.58E+01 | 2.90E+02 ± 3.51E+01 | 2.26E+01 ± 4.36E+01 |
| | h / rank | + / 8 | + / 10 | + / 11 | + / 5 | + / 6 | + / 2 | + / 4 | + / 3 | + / 7 | + / 9 | 1 |
| F15 | Fmean ± SD | 5.91E−01 ± 7.76E−01 | 6.85E+01 ± 1.01E+01 | 5.16E+02 ± 8.59E+01 | 5.88E+00 ± 2.51E+00 | 2.08E+02 ± 4.59E+01 | 1.31E+02 ± 2.93E+01 | 2.99E−01 ± 4.64E−01 | 1.62E+02 ± 4.08E+01 | 2.93E+02 ± 5.76E+01 | 5.13E+02 ± 5.41E+01 | 6.30E−03 ± 3.10E−03 |
| | h / rank | + / 3 | + / 5 | + / 10 | + / 4 | + / 8 | + / 6 | + / 2 | + / 7 | + / 9 | + / 11 | 1 |
| F16 | Fmean ± SD | 7.20E−03 ± 1.06E−02 | 6.99E+01 ± 7.32E+00 | 5.28E+02 ± 8.47E+01 | 1.20E+01 ± 3.16E+00 | 1.63E+02 ± 2.82E+01 | 1.49E+02 ± 3.94E+01 | 3.01E−01 ± 4.67E−01 | 2.09E+02 ± 5.21E+01 | 3.14E+02 ± 6.18E+01 | 4.32E+02 ± 6.15E+01 | 5.23E−03 ± 2.36E−03 |
| | h / rank | = / 2 | + / 5 | + / 11 | + / 4 | + / 7 | + / 6 | + / 3 | + / 8 | + / 9 | + / 10 | 1 |
| F17 | Fmean ± SD | 0.00E+00 ± 0.00E+00 | 4.24E−09 ± 1.35E−08 | 7.69E+02 ± 4.09E+02 | 2.05E−03 ± 3.49E−03 | 1.46E+03 ± 4.63E+02 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 7.21E+02 ± 4.54E+02 | 7.21E+01 ± 1.75E+02 | 0.00E+00 ± 0.00E+00 |
| | h / rank | = / 1 | = / 6 | + / 10 | + / 7 | + / 11 | = / 1 | = / 1 | = / 1 | + / 9 | + / 8 | 1 |
| F18 | Fmean ± SD | 6.45E+01 ± 4.05E+00 | 5.70E+01 ± 2.25E+00 | 6.74E+01 ± 3.21E+00 | 4.91E+01 ± 2.25E+00 | 5.05E+01 ± 4.12E+00 | 5.27E+01 ± 4.12E+00 | 5.64E+01 ± 5.48E+00 | 6.76E+01 ± 3.70E+00 | 5.73E+01 ± 4.75E+00 | 6.55E+01 ± 3.27E+00 | 4.57E+01 ± 8.46E+00 |
| | h / rank | + / 8 | + / 6 | + / 10 | + / 2 | + / 3 | + / 4 | + / 5 | + / 11 | + / 7 | + / 9 | 1 |
| F19 | Fmean ± SD | 1.32E+07 ± 4.09E+06 | 5.19E+07 ± 8.32E+06 | 5.78E+08 ± 2.46E+08 | 1.89E+07 ± 4.92E+06 | 1.03E+08 ± 8.26E+07 | 1.02E+07 ± 3.44E+06 | 5.35E+06 ± 2.55E+06 | 1.49E+07 ± 1.13E+07 | 6.11E+08 ± 6.42E+08 | 3.89E+08 ± 1.18E+08 | 3.55E+06 ± 1.15E+06 |
| | h / rank | + / 4 | + / 7 | + / 10 | + / 6 | + / 8 | + / 3 | + / 2 | + / 5 | + / 11 | + / 9 | 1 |
| F20 | Fmean ± SD | 4.13E+00 ± 1.19E+00 | 2.03E+01 ± 1.65E+00 | 7.28E+03 ± 1.84E+04 | 4.01E+00 ± 1.46E+00 | 4.35E+01 ± 8.59E+00 | 2.76E+01 ± 5.79E+00 | 2.18E+00 ± 6.84E−01 | 4.79E+01 ± 2.19E+01 | 2.64E+04 ± 6.86E+04 | 2.23E+03 ± 1.60E+03 | 1.68E+00 ± 3.94E−01 |
| | h / rank | + / 4 | + / 5 | + / 10 | + / 3 | + / 7 | + / 6 | + / 2 | + / 8 | + / 11 | + / 9 | 1 |
| Overall rank (average rank) | | 5 (5.20) | 7 (5.55) | 11 (10.15) | 3 (4.20) | 8 (7.35) | 4 (4.85) | 2 (2.30) | 6 (5.45) | 9 (8.75) | 10 (10.00) | 1 (1.10) |
| w/t/l | | 19/1/0 | 18/2/0 | 20/0/0 | 19/0/1 | 20/0/0 | 19/1/0 | 13/6/1 | 18/2/0 | 19/1/0 | 20/0/0 | – |
| +/=/− | | 18/2/0 | 14/6/0 | 20/0/0 | 16/3/1 | 20/0/0 | 16/4/0 | 11/9/0 | 16/4/0 | 19/1/0 | 20/0/0 | – |

4.3.1. Comparison among the Fmean results

From Table 4, we observe that TPLPSO has the best search accuracy on the traditional problems (F1–F10), as it successfully finds the global optima (Fmean = 0) of nine of these problems, the exception being function F5. To be specific, TPLPSO is the only algorithm that successfully solves functions F2, F3, F6, and F7. None of the involved algorithms is able to reach the global optimum of function F5 (Rosenbrock), as the global optimum of this function is located in a long, narrow, parabolic-shaped flat valley designed to test the ability of an algorithm to navigate flat regions with small gradients. It is easy for most algorithms to find the valley, but hard for them to converge toward the global optimum. Although the Fmean value produced by TPLPSO on function F5 is larger than those of CLPSO, FLPSO-QIW, and MPSO-TVAC, the performance deviation is relatively small. For the rotated problems (F11–F14), we observe that the majority of the involved algorithms experience performance degradation, as the Fmean values produced on the rotated problems are larger (worse) than on their unrotated counterparts. The rotating operation imposed on the traditional problems increases their complexity, thereby making the rotated problems more challenging. From Table 4, we observe that the TPLPSO has more robust search accuracy than its contenders on the rotated problems. In particular, TPLPSO is the only algorithm that successfully achieves the global optima of both unimodal rotated functions F11 and F12. Meanwhile, the multimodal rotated functions F13 and F14 impose greater challenges on all involved algorithms, as none of them is able to solve these problems completely. Nevertheless, our TPLPSO produces the best (smallest) Fmean values on functions F13 and F14, implying that it is the least susceptible to the rotation operation on the multimodal problems. From Table 4, we observe similar performance deterioration on the shifted problems (F15–F17), as all involved algorithms fail to find the global optima of all these functions except function F17. Function F17 is the easiest shifted problem: besides the TPLPSO, other competitors such as APSO, FIPSO, MPSO-TVAC, and RPPSO also solve this problem completely. On functions F15 and F16, our TPLPSO achieves the best Fmean values and is the only algorithm to achieve the accuracy level of 10^−3. This implies that our TPLPSO is the least sensitive to the shifting of the global optimum to a random location. Another significant finding is that, compared to the multimodal rotated problems (F13 and F14), the fitness landscapes of the multimodal shifted problems (F15 and F16) are less challenging to TPLPSO, as the Fmean values it produces on functions F15 and F16 (with accuracy 10^−3) are smaller than those on functions F13 and F14 (with accuracy 10^1).


For the complex problems (F18–F20), all involved algorithms experience further performance degradation. The inclusion of both the rotating and shifting operations (F18 and F19), as well as the expanded operation (F20), significantly increases the problems' complexity, thereby imposing greater challenges on the algorithms in locating the global optima of these problems. Among all 11 algorithms, the TPLPSO has the best search accuracy on the complex problems, as it produces the smallest Fmean values on functions F18 to F20. To be specific, TPLPSO achieves Fmean values at the accuracy level of 10^6 on function F19, while the majority of the PSO variants achieve accuracy levels of 10^7 or 10^8 on the same function.

4.3.2. Comparisons among the t-test results

Generally, the TPLPSO has the best search accuracy among the 11 compared algorithms on the traditional, rotated, shifted, and complex problems. This observation is further validated by the t-test results, as the h values reported in Table 4 are largely in line with the Fmean values. From the last row of Table 4 (the row with "+/=/−"), we observe that the number of problems on which TPLPSO performs significantly better than a peer (h = "+") is much larger than the number of problems on which the former achieves significantly worse results than the latter (h = "−"). In particular, the TPLPSO significantly outperforms all of its peers on functions F2, F6, F7, F14, F15, F18, F19, and F20. The t-test results also reveal that some performance differences between the TPLPSO and the compared peers are insignificant, despite the two algorithms having very different Fmean values. For example, the performance difference between TPLPSO and FLPSO-QIW is insignificant on function F8, although the Fmean values produced by the two algorithms are 0.00E+00 and 5.75E−04, respectively. Similar observations can be made for FLPSO-QIW and MPSO-TVAC on function F10, as well as RPPSO on function F11. This scenario occurs because, when an algorithm such as FLPSO-QIW is run a predefined number of independent times, there is a small probability of it being trapped in local optima in some runs, thereby producing large fitness values that jeopardize its overall Fmean. As shown in the following subsection, despite having a relatively large Fmean value on function F11, the FLPSO-QIW successfully solves the problem completely, that is, with SR = 100%.


Table 5. SR (%) and SP results for the 50-D benchmark problems (F1–F20). Each cell gives SR / SP; SP is "Inf" when the algorithm never solves the problem.

| Function | APSO | CLPSO | CPSO | FLPSO-QIW | FPSO | FIPSO | MPSO-TVAC | RPPSO | PSO-LDIW | UPSO | TPLPSO |
| F1 | 0.00 / Inf | 100.00 / 1.25E+05 | 0.00 / Inf | 100.00 / 6.04E+04 | 13.33 / 9.68E+04 | 80.00 / 9.86E+04 | 100.00 / 4.49E+03 | 73.33 / 1.52E+04 | 63.33 / 1.63E+04 | 0.00 / Inf | 100.00 / 6.65E+02 |
| F2 | 0.00 / Inf | 100.00 / 1.41E+05 | 0.00 / Inf | 100.00 / 7.11E+04 | 3.33 / 3.44E+05 | 50.00 / 1.65E+05 | 100.00 / 1.02E+05 | 10.00 / 3.11E+05 | 0.00 / Inf | 0.00 / Inf | 100.00 / 9.96E+02 |
| F3 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 70.00 / 1.62E+05 | 10.00 / 2.66E+05 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 100.00 / 1.13E+05 |
| F4 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 10.00 / 1.13E+05 | 60.00 / 1.39E+05 | 100.00 / 5.78E+03 | 100.00 / 1.17E+04 | 100.00 / 1.18E+04 | 0.00 / Inf | 100.00 / 9.49E+02 |
| F5 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf |
| F6 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 6.67 / 3.46E+06 | 0.00 / Inf | 40.00 / 3.56E+05 | 100.00 / 8.57E+04 | 70.00 / 4.19E+04 | 3.33 / 4.70E+05 | 0.00 / Inf | 100.00 / 1.32E+03 |
| F7 | 6.67 / 2.60E+06 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 33.33 / 2.66E+05 | 100.00 / 1.08E+05 | 60.00 / 2.50E+04 | 3.33 / 6.04E+05 | 0.00 / Inf | 100.00 / 2.02E+03 |
| F8 | 0.00 / Inf | 100.00 / 1.11E+05 | 0.00 / Inf | 100.00 / 5.00E+04 | 6.67 / 1.60E+05 | 70.00 / 1.05E+05 | 100.00 / 4.54E+03 | 86.67 / 1.34E+04 | 73.33 / 1.13E+04 | 0.00 / Inf | 100.00 / 7.03E+02 |
| F9 | 0.00 / Inf | 100.00 / 1.05E+05 | 0.00 / Inf | 100.00 / 4.79E+04 | 3.33 / 3.36E+05 | 53.33 / 1.07E+05 | 100.00 / 5.64E+03 | 36.67 / 1.25E+05 | 16.67 / 6.96E+04 | 0.00 / Inf | 100.00 / 8.34E+03 |
| F10 | 0.00 / Inf | 100.00 / 1.42E+05 | 0.00 / Inf | 100.00 / 6.67E+04 | 0.00 / Inf | 16.67 / 4.59E+05 | 90.00 / 1.73E+05 | 83.33 / 3.15E+04 | 16.67 / 2.11E+05 | 0.00 / Inf | 100.00 / 7.83E+02 |
| F11 | 0.00 / Inf | 100.00 / 1.26E+05 | 0.00 / Inf | 100.00 / 6.04E+04 | 13.33 / 8.24E+04 | 73.33 / 9.70E+04 | 100.00 / 4.49E+03 | 86.67 / 1.35E+04 | 56.67 / 2.21E+04 | 0.00 / Inf | 100.00 / 6.83E+02 |
| F12 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 73.33 / 1.62E+05 | 20.00 / 3.13E+05 | 3.33 / 1.69E+06 | 3.33 / 7.47E+05 | 0.00 / Inf | 100.00 / 1.13E+05 |
| F13 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 53.33 / 2.60E+05 | 20.00 / 3.17E+04 | 40.00 / 3.95E+04 | 6.67 / 3.84E+05 | 0.00 / Inf | 80.00 / 3.16E+03 |
| F14 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 43.33 / 2.84E+05 | 16.67 / 1.83E+05 | 20.00 / 9.26E+04 | 0.00 / Inf | 0.00 / Inf | 76.67 / 5.60E+03 |
| F15 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 70.00 / 2.65E+05 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 83.33 / 3.41E+05 |
| F16 | 86.67 / 2.51E+05 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 70.00 / 2.74E+05 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 93.33 / 3.00E+05 |
| F17 | 100.00 / 1.20E+04 | 100.00 / 7.44E+04 | 6.67 / 9.05E+03 | 100.00 / 4.88E+04 | 0.00 / Inf | 100.00 / 2.47E+03 | 100.00 / 4.42E+03 | 100.00 / 9.09E+02 | 13.33 / 4.54E+03 | 80.00 / 3.78E+03 | 100.00 / 1.13E+03 |
| F18 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf |
| F19 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf |
| F20 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf |

Table 5 reveals that the TPLPSO has superior searching reliability compared with its peers, as the SR values it produces are generally higher than those of its peers in all categories of problems. To be specific, the TPLPSO completely solves (SR = 100%) nine out of the ten traditional problems at the predefined ε. The excellent performance of TPLPSO can also be observed in the rotated and shifted problems, as it is able to solve these problems completely or partially (0% < SR < 100%) at the predefined ε. It is worth mentioning that TPLPSO is the only algorithm to solve function F12 with SR = 100%. Additionally, the SR values produced by the TPLPSO are at least 1.5 times higher than those of the second-ranked algorithm, FIPSO, in functions F13 and F14. Meanwhile, none of the involved algorithms is able to solve the complex problems completely or partially at the predefined ε, that is, SR = 0%. Nevertheless, the TPLPSO proves the best, as it produces the smallest Fmean values in functions F18 to F20, as reported in Table 4.

4.3.4. Comparisons among the SP results

The SP values tabulated in Table 5 and the convergence curves in Fig. 8 are used to evaluate the algorithms' convergence speed quantitatively and qualitatively, respectively. Since the SP value indicates the computation cost required by an algorithm to solve a problem at the predefined ε, it is impossible to obtain the SP value if the algorithm never solves the problem (SR = 0%). In such a case, we assign the SP value as infinity, denoted "Inf". To conserve space, we only present eight convergence curves, i.e. two each from the traditional, rotated, shifted, and complex problems. From Table 5, we observe that the TPLPSO achieves the best (smallest) SP values in all of the traditional and rotated problems, except for function F5. This implies that the TPLPSO requires the least computation cost to solve the traditional and rotated problems at an acceptable ε. The rapid convergence characteristic of TPLPSO is also reflected in the convergence curves shown in Fig. 8(a)–(d). To be specific, we observe that the convergence curves of TPLPSO in functions F1–F3 (Fig. 8(a)–(c)) drop sharply at one point, usually at the early or middle stage of optimization. This observation reveals the capability of the TPLPSO to escape from the problems' local optima. For the shifted problems, TPLPSO records promising SP values, i.e. the two second-best SP values in functions F15 and F17. The SP values produced by the TPLPSO in the shifted problems are slightly inferior due to its slow convergence speed at the earlier stage of searching. This is shown in the convergence curve of function F15 (Fig. 8(e)), where the TPLPSO initially converges at a slower speed and its convergence speed then increases in the later stage of the optimization. For the complex problems, none of the involved algorithms is able to solve them at the predefined ε, so no SP values are available. Nevertheless, the convergence curves for functions F18 and F20 (Fig. 8(g) and (h)) reveal that the TPLPSO has competitive convergence speed among its peers in the complex problems. To be specific, the convergence speed of the TPLPSO in function F18 is significantly faster than that of its peers in the middle stage of the optimization. This enables the TPLPSO to surpass all of its peers even though it initially converges more slowly.
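As an illustration, SR and SP can be computed from the per-run records as in the sketch below, assuming the commonly used definition SP = (mean FEs of successful runs) × (total runs / successful runs); the function and variable names are ours.

import math

def success_metrics(fes_used, solved, total_runs):
    # fes_used[i]: fitness evaluations consumed in run i;
    # solved[i]: True if run i reached the accuracy level epsilon.
    successes = [fe for fe, ok in zip(fes_used, solved) if ok]
    sr = 100.0 * len(successes) / total_runs
    if not successes:
        return sr, math.inf            # never solved: SP reported as "Inf"
    mean_fes = sum(successes) / len(successes)
    sp = mean_fes * total_runs / len(successes)
    return sr, sp

# Example: 30 runs, 25 of which reach epsilon.
fes = [665] * 25 + [100000] * 5
ok = [True] * 25 + [False] * 5
print(success_metrics(fes, ok, 30))    # SR = 83.33..., SP = 665 * 30 / 25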


Fig. 8. Convergence curves of 50-dimensional test functions: (a) F9, (b) F10, (c) F12, (d) F14, (e) F15, (f) F17, (g) F18, and (h) F20.

4.3.5. Comparison of mean computational time

The SP analysis in the previous subsection reveals that TPLPSO is more computationally efficient than its peer algorithms. To further verify this finding, we compute the mean computational time (tmean) of all involved PSO variants on the 20 employed benchmarks. As in the previous experiments, the dimensionality level of 50 is considered. The tmean values of all the algorithms were measured on a PC with an Intel Core 2 Duo 2.13 GHz processor and 3.50 GB of RAM, running Windows XP, using a Matlab implementation. The results are summarized in Fig. 9. As shown in Fig. 9, the involved PSO variants exhibit diverse tmean values. In general, APSO, CLPSO, and FLPSO-QIW appear to have higher computational overhead than the other algorithms, as these three algorithms produce 5, 8, and 7 of the worst tmean values, respectively, out of the 20 employed benchmarks. On the contrary, we observe that the computational overheads of the CPSO, PSO-LDIW, and TPLPSO are the lowest in a majority of the employed functions.

In more detail, the proposed TPLPSO achieves 15 best and 2 second-best tmean values, whereas PSO-LDIW and CPSO together record 5 best and 13 second-best values out of the 20 employed benchmarks. In most of the employed benchmarks, the differences between the tmean values produced by TPLPSO, PSO-LDIW, and CPSO are relatively insignificant, which suggests that our proposed work does not incur excessive computational overhead compared with the well-established APSO, CLPSO, and FLPSO-QIW. The excellent performance of TPLPSO in terms of tmean and the previously reported SP values confirms that the proposed algorithm is indeed more computationally efficient than its peers.
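A minimal harness for measuring tmean, assuming a hypothetical optimizer interface, might look as follows; the actual measurements reported here were obtained in Matlab.

import time

def mean_computational_time(optimizer, problem, runs=30):
    # Average wall-clock time over independent optimization runs.
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        optimizer(problem)             # one full run until FEmax or the exact solution
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / runs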

Fig. 9. Mean computational time (in seconds) for 50-D functions: (a) F1 to F10 and (b) F11 to F20.

4.4. Effect of different strategies

The proposed TPLPSO employs two strategies, namely the teaching and peer-learning (TPL) framework improved from the TLBO, and the SPS module. To study the contribution to performance improvement and the computational overhead incurred by each of these strategies, we investigate the performance of (1) TPLPSO without the peer-learning mechanism (TPLPSO1), (2) TPLPSO without the SPS module (TPLPSO2), (3) TPLPSO adopting the original TLBO framework (TPLPSO3), and (4) the complete TPLPSO. For TPLPSO3, we replace the Pi solution in Eq. (1) with Xmean and randomly select the exemplar during the peer-learning stage. We keep the SPS module in TPLPSO3 to ensure that any performance differences observed between TPLPSO3 and TPLPSO are due to the type of learning framework adopted (i.e., the original versus the improved TLBO framework).

4.4.1. Comparison among the Fmean results and percentage improvement

In this subsection, we compare the Fmean values produced by TPLPSO1, TPLPSO2, TPLPSO3, and TPLPSO with those produced by the PSO-LDIW. The comparison results are expressed in terms of the percentage improvement (%Improve), calculated as follows [40]:

\[
\%Improve = \frac{F_{mean}(\mathrm{PSO\text{-}LDIW}) - F_{mean}(\Omega)}{\left| F_{mean}(\mathrm{PSO\text{-}LDIW}) \right|} \times 100\% \tag{11}
\]

where $\Omega$ denotes TPLPSO1, TPLPSO2, TPLPSO3, or TPLPSO. If $\Omega$ performs better (that is, has a smaller Fmean) than PSO-LDIW, %Improve is positive; otherwise, a negative value is assigned. The Fmean and %Improve values of all involved algorithms are presented in Table 6.

As shown by the average %Improve values in Table 6, all of the TPLPSO variants achieve performance improvement over the PSO-LDIW, implying that the employment of either strategy, namely the TPL framework or the SPS module, indeed helps to improve the PSO's searching accuracy. Among these TPLPSO variants, the TPLPSO produces the largest average %Improve value, followed by TPLPSO3, TPLPSO2, and TPLPSO1. From Table 6, although the TPLPSO1 successfully solves seven of the 20 benchmarks, we observe some performance deterioration in functions F4, F13, F14, and F18, as the Fmean values produced by the TPLPSO1 in these functions are worse than those of the PSO-LDIW (i.e., negative %Improve values are obtained). This implies that the absence of the peer-learning stage can lead to poor searching accuracy of TPLPSO in certain benchmarks. Meanwhile, we also notice that, unlike the other TPLPSO variants, TPLPSO2 fails to locate the global optima of functions F1, F2, F8 to F11, and F17. This observation suggests that, without the SPS module, TPLPSO2 is more likely to be trapped in the local optima than the other TPLPSO variants. From this observation, we can deduce that the contribution of the SPS module in enhancing the TPLPSO searching performance is equally important as, if not more important than, that of the peer-learning stage.
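As a concrete illustration of Eq. (11), the helper below reproduces a %Improve entry of Table 6 (function F3, PSO-LDIW versus TPLPSO1) up to the rounding of the tabulated Fmean values; the function name is ours.

def percent_improve(fmean_baseline, fmean_variant):
    # Eq. (11): positive when the variant beats PSO-LDIW (smaller Fmean).
    return (fmean_baseline - fmean_variant) / abs(fmean_baseline) * 100.0

# Example taken from Table 6, function F3 (PSO-LDIW vs. TPLPSO1):
print(percent_improve(2.08e4, 3.34e2))   # roughly 98.39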


Table 6
Fmean and %Improve values achieved by PSO-LDIW and the TPLPSO variants in 50-D problems.

Function  PSO-LDIW      TPLPSO1               TPLPSO2              TPLPSO3              TPLPSO
F1        4.67E+03 (–)  0.00E+00 (100.0000)   1.67E+03 (64.2857)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F2        2.90E+01 (–)  0.00E+00 (100.0000)   4.00E+00 (86.2069)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F3        2.08E+04 (–)  3.34E+02 (98.3886)    1.95E+04 (5.9244)    0.00E+00 (100.0000)  0.00E+00 (100.0000)
F4        0.00E+00 (–)  6.11E−02 (−610.6963)  0.00E+00 (0.0000)    0.00E+00 (0.0000)    0.00E+00 (0.0000)
F5        2.10E+02 (–)  4.33E+01 (79.3205)    4.73E+01 (77.4279)   4.43E+01 (78.8437)   4.35E+01 (79.2348)
F6        1.15E+02 (–)  5.02E−04 (99.9996)    2.59E+01 (77.3849)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F7        1.14E+02 (–)  6.25E−04 (99.9994)    3.17E+01 (72.1031)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F8        3.92E+01 (–)  0.00E+00 (100.0000)   6.02E+00 (84.6423)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F9        1.21E+01 (–)  0.00E+00 (100.0000)   1.59E+01 (−31.5659)  0.00E+00 (100.0000)  0.00E+00 (100.0000)
F10       8.09E+00 (–)  0.00E+00 (100.0000)   1.60E+00 (80.2128)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F11       4.67E+03 (–)  0.00E+00 (100.0000)   3.33E+02 (92.8571)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F12       2.57E+04 (–)  4.89E+02 (98.0930)    1.22E+04 (52.2829)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F13       1.70E+02 (–)  2.06E+02 (−21.3404)   1.53E+01 (90.9945)   6.34E+01 (62.7335)   2.47E+01 (85.4713)
F14       2.00E+02 (–)  2.53E+02 (−26.7332)   1.94E+01 (90.3049)   1.13E+02 (43.4991)   2.26E+01 (88.6748)
F15       2.93E+02 (–)  3.62E−03 (99.9988)    2.43E+02 (17.3133)   4.89E−03 (99.9983)   6.27E−03 (99.9979)
F16       3.14E+02 (–)  5.21E−03 (99.9983)    1.93E+02 (38.5628)   5.74E−03 (99.9982)   5.23E−03 (99.9983)
F17       7.21E+02 (–)  0.00E+00 (100.0000)   6.00E+02 (16.6715)   0.00E+00 (100.0000)  0.00E+00 (100.0000)
F18       5.73E+01 (–)  5.84E+01 (−1.9887)    5.56E+01 (2.8533)    4.47E+01 (21.9109)   4.57E+01 (20.1114)
F19       6.11E+08 (–)  7.01E+06 (98.8539)    5.73E+08 (6.3588)    4.19E+06 (99.3144)   3.55E+06 (99.4190)
F20       2.64E+04 (–)  2.06E+00 (99.9922)    1.74E+03 (93.3892)   1.76E+00 (99.9933)   1.68E+00 (99.9936)

w/t/l             19/1/0   10/7/3    17/1/2    6/12/2    –
Average %Improve  –        45.6943   50.9105   85.3146   88.6451

Although there is no significant trend in Table 6 indicating which strategy performs particularly well on a certain type of benchmark, we observe that the integration of both the TPL framework and the SPS module into the TPLPSO significantly enhances the algorithm's searching accuracy. This is validated by the simulation results in Table 6, as the average %Improve values produced by both TPLPSO3 and TPLPSO are almost twice as high as those of TPLPSO1 and TPLPSO2. Finally, we also observe that our improved TPL framework performs better than the original TLBO framework, as the former outperforms the latter in six benchmarks, while the latter performs better in only two benchmarks. Moreover, the average %Improve value achieved by the TPLPSO is better than that of TPLPSO3.

4.4.2. Comparison between the mean computational times of TPLPSO variants

In this subsection, we investigate the computational overhead of adding the TPL framework and the SPS module into the PSO, by computing the tmean values of each TPLPSO variant in solving the 20 employed benchmarks in 50-D. The experimental results are summarized in Fig. 10. Two noteworthy observations can be made from Fig. 10. First, TPLPSO1 (with the teaching stage and SPS module) produces higher tmean values than TPLPSO2 (with the teaching and peer-learning stages) in the majority of the employed benchmarks. This implies that the SPS module tends to incur more computational overhead than the peer-learning stage. Second, TPLPSO3 (with the original TLBO framework) and TPLPSO (with the modified TLBO framework) consume similar amounts of computational resources, since no significant differences can be observed in the tmean values produced by these two algorithms on the 20 employed benchmarks. As mentioned earlier, both TPLPSO3 and TPLPSO are equipped with the SPS module. The only difference between these two algorithms is that the latter employs roulette wheel selection in choosing the peer particles during the peer-learning stage, as sketched below. By comparing the tmean values produced by TPLPSO3 and TPLPSO, we conjecture that the roulette wheel selection technique does not consume significantly more computational resources than the random selection technique (which is employed in the original TLBO framework). The simulation results in Table 6 and Fig. 10 show that our proposed TPL framework outperforms the original TLBO framework without incurring an excessive amount of computational overhead.
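The roulette wheel selection referred to above can be sketched as follows; the rank-based weighting shown in the usage example is our illustrative assumption, as the exact selection probabilities are defined in the methodology section of this paper.

import random

def roulette_wheel_select(candidates, weights):
    # Classic roulette wheel (fitness-proportionate) selection: each candidate
    # is chosen with probability weight / sum(weights).
    total = sum(weights)
    pick = random.uniform(0.0, total)
    cumulative = 0.0
    for candidate, weight in zip(candidates, weights):
        cumulative += weight
        if pick <= cumulative:
            return candidate
    return candidates[-1]

# Example: rank-based weights so that fitter (smaller-fitness) particles
# are selected more often in a minimization setting.
fitness = [0.9, 0.2, 0.5, 0.1]
ranks = sorted(range(len(fitness)), key=lambda i: fitness[i])
weights = [0.0] * len(fitness)
for r, i in enumerate(ranks):
    weights[i] = len(fitness) - r          # best particle gets the largest weight
print(roulette_wheel_select(list(range(len(fitness))), weights))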

Fig. 10. Comparison of the mean computational time for the TPLPSO variants in 50-D problems.

4.5. Comparison with state-of-the-art metaheuristic search algorithms

In this section, we compare the TPLPSO with some state-of-the-art metaheuristic search (MS) algorithms, as these algorithms are also developed to solve optimization problems. We first perform a detailed comparison between the TPLPSO and TLBO algorithms [30,31]. Next, the TPLPSO is compared with seven other well-established MS algorithms.

4.5.1. Performance comparisons between TPLPSO and TLBO

Since the TPLPSO is inspired by the recently proposed TLBO, it is worth studying whether the former can outperform the latter. To perform this investigation, we compare our own coded implementations of TLBO and TPLPSO on the 20 benchmark problems presented in Table 1. We perform the evaluation with 30 variables, i.e., D = 30. Both TPLPSO and TLBO are run independently 30 times to reduce random discrepancy. As in the previous experiments, the algorithms are terminated if (1) the exact solution has been found, or (2) the maximum number of fitness evaluations FEmax is reached. The population size S and FEmax used in the D = 30 case are 20 and 1.00E+05, respectively [38]. The mean fitness Fmean, standard deviation (SD), and t-test results (h) obtained by the TPLPSO and TLBO are summarized in Table 7. Additionally, we also present the convergence curves of these two algorithms on selected problems in Fig. 11. As shown in Table 7, the proposed TPLPSO achieves the best Fmean values in all 20 employed benchmarks, implying that TPLPSO has more superior searching accuracy than TLBO. The excellent performance of TPLPSO is also validated by the t-test, whereby the TPLPSO significantly outperforms (with h = '+') the TLBO in 17 of the 20 employed functions. Meanwhile, the convergence curves shown in Fig. 11 reveal that TPLPSO has more competitive convergence speed than the TLBO. Specifically, the convergence curves of the TPLPSO tend to drop off at one point, usually at the early stage [functions F9, F10, and F17, as illustrated in Fig. 11(a), (b), and (f), respectively] or middle stage [function F12, as illustrated in Fig. 11(c)] of the optimization. These observations reveal the ability of the TPLPSO to locate the global optima while consuming a significantly smaller number of FEs than the TLBO.
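For reference, the teacher phase of the original TLBO [30,31], which TPLPSO reworks into PSO-style velocity updates, can be sketched as follows; the array-based interface and variable names are ours.

import numpy as np

def tlbo_teacher_phase(population, fitness, objective, rng):
    # Original TLBO teacher phase: each learner moves toward the teacher
    # (best solution) relative to the scaled class mean, with greedy survival.
    teacher = population[np.argmin(fitness)].copy()
    mean = population.mean(axis=0)
    for i in range(len(population)):
        tf = rng.integers(1, 3)                    # teaching factor TF in {1, 2}
        r = rng.random(population.shape[1])
        candidate = population[i] + r * (teacher - tf * mean)
        f_new = objective(candidate)
        if f_new < fitness[i]:                     # accept only improvements
            population[i], fitness[i] = candidate, f_new
    return population, fitness

# Example usage with a 20-learner population on a 30-D sphere function.
rng = np.random.default_rng(0)
pop = rng.uniform(-100.0, 100.0, (20, 30))
fit = np.array([float(np.sum(p ** 2)) for p in pop])
tlbo_teacher_phase(pop, fit, lambda x: float(np.sum(x ** 2)), rng)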


For functions F14, F15, F18, and F20 [as illustrated by Fig. 11(d), (e), (g), and (h), respectively], we observe that TPLPSO converges significantly faster than TLBO in the early stage of optimization. The promising convergence characteristic of TPLPSO enables it to exploit the promising regions of the search space earlier than the TLBO. Thus, TPLPSO has a better chance of achieving a better solution than the TLBO.

4.5.2. Comparison of mean computational time between TPLPSO and TLBO

In this section, we compare the computational overhead of TPLPSO and TLBO in solving the 20 employed benchmarks in the 30-D search space. The mean computational time (tmean) required by both algorithms to solve the involved benchmarks is presented in Fig. 12. The results in Fig. 12 show that the TPLPSO produces higher tmean values than the TLBO in 16 of the 20 employed benchmarks. This observation implies that TPLPSO in general exhibits higher computational overhead than the TLBO. We opine that such an observation is reasonable, as the proposed TPLPSO can be considered a hybrid algorithm that combines the TLBO and the conventional PSO with some modifications, such as (1) the incorporation of roulette wheel selection in the peer-learning phase of TPLPSO, and (2) the proposed stagnation prevention strategy (SPS) to reduce premature convergence. These modifications tend to incur extra computational overhead for TPLPSO. Despite consuming slightly more computational resources, the previous experimental results reveal that TPLPSO significantly outperforms TLBO in terms of searching accuracy and convergence speed.

These findings suggest that our TPLPSO is able to achieve notable performance improvement over the TLBO without incurring excessive computational resources. In other words, the proposed TPLPSO achieves a better tradeoff between the performance improvement and the computational overhead consumed.

4.5.3. Performance comparisons between TPLPSO and other metaheuristic search algorithms

In this section, we compare our TPLPSO with the real-coded chemical reaction optimization (RCCRO) [40], group search optimization (GSO) [41], real-coded biogeography-based optimization (RCBBO) [42], covariance matrix adaptation evolution strategy (CMAES) [43], generalized generation gap model with generic parent-centric recombination operation (G3PCX) [44], fast evolutionary programming (FEP) [36], and fast evolutionary search (FES) [45]. RCCRO is a real-coded version of chemical reaction optimization (CRO) [46], which is inspired by chemical reactions. GSO simulates animals' searching behavior according to the producer–scrounger (PS) model. RCBBO is the real-coded version of biogeography-based optimization, which is developed based on the geographical distribution of biological organisms. CMAES integrates the restart and increasing population size mechanisms into the classical evolutionary strategy (CES) [47]. G3PCX is a genetic algorithm (GA) [48] variant that is improved with the elite-preserving and scale models, as well as the parent-centric recombination operator. FEP and FES are improved variants of classical evolutionary programming (CEP) [49] and CES [47], respectively. We compare the performance of TPLPSO with the seven MS algorithms across ten 30-D traditional problems. The parameter values of the involved MS algorithms are selected according to

Table 7
Fmean, SD, and h values in 30-D problems.

Function  TLBO: Fmean ± SD (h)     TPLPSO: Fmean ± SD
F1        7.10E+02 ± 1.57E+03 (+)  0.00E+00 ± 0.00E+00
F2        7.50E+01 ± 2.03E+01 (+)  0.00E+00 ± 0.00E+00
F3        4.50E−10 ± 1.72E−09 (=)  0.00E+00 ± 0.00E+00
F4        6.22E+01 ± 6.57E+00 (+)  0.00E+00 ± 0.00E+00
F5        3.92E+01 ± 1.50E+01 (+)  2.51E+01 ± 7.04E−01
F6        2.22E+02 ± 2.20E+01 (+)  0.00E+00 ± 0.00E+00
F7        1.91E+02 ± 2.17E+01 (+)  0.00E+00 ± 0.00E+00
F8        7.27E+00 ± 1.19E+01 (+)  0.00E+00 ± 0.00E+00
F9        6.39E+00 ± 2.38E+00 (+)  0.00E+00 ± 0.00E+00
F10       3.27E+01 ± 2.64E+00 (+)  0.00E+00 ± 0.00E+00
F11       8.67E+02 ± 1.32E+03 (+)  0.00E+00 ± 0.00E+00
F12       2.98E−04 ± 1.63E−03 (=)  0.00E+00 ± 0.00E+00
F13       2.81E+02 ± 4.14E+01 (+)  6.74E+00 ± 2.07E+01
F14       2.74E+02 ± 3.86E+01 (+)  2.33E+01 ± 4.28E+01
F15       2.73E+02 ± 3.97E+01 (+)  1.29E−02 ± 6.75E−03
F16       2.69E+02 ± 5.01E+01 (+)  1.56E−02 ± 8.01E−03
F17       4.88E+01 ± 1.41E+02 (=)  0.00E+00 ± 0.00E+00
F18       3.46E+01 ± 1.74E+00 (+)  2.47E+01 ± 4.22E+00
F19       7.58E+06 ± 6.05E+06 (+)  3.12E+06 ± 1.56E+06
F20       5.59E+02 ± 1.13E+03 (+)  1.30E+00 ± 2.72E−01


Fig. 11. Convergence curves of 30-dimensional test functions: (a) F9, (b) F10, (c) F12, (d) F14, (e) F15, (f) F17, (g) F18, and (h) F20.


Fig. 12. Mean computational times of TPLPSO and TLBO in 30-D problems.

Table 8
Maximum fitness evaluation number (FEmax) of the involved algorithms in 30-D problems.

Function       RCCRO     GSO       RCBBO     CMAES     G3PCX     FEP       FES       TPLPSO
Sphere         1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.00E+05
Schwefel 2.22  1.50E+05  1.50E+05  2.00E+05  1.50E+05  1.50E+05  2.00E+05  2.00E+05  1.00E+05
Schwefel 1.2   2.50E+05  2.50E+05  5.00E+05  2.50E+05  2.50E+05  5.00E+05  5.00E+05  1.00E+05
Schwefel 2.21  1.50E+05  1.50E+05  5.00E+05  1.50E+05  1.50E+05  5.00E+05  5.00E+05  1.00E+05
Rosenbrock     1.50E+05  1.50E+05  5.00E+05  1.50E+05  1.50E+05  2.00E+06  2.00E+06  1.00E+05
Step           1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.00E+05
Quartic        1.50E+05  1.50E+05  3.00E+05  1.50E+05  1.50E+05  3.00E+05  3.00E+05  1.00E+05
Rastrigin      2.50E+05  2.50E+05  3.00E+05  2.50E+05  2.50E+05  5.00E+05  5.00E+05  1.00E+05
Ackley         1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.50E+05  1.00E+05
Griewank       1.50E+05  1.50E+05  3.00E+05  1.50E+05  1.50E+05  2.00E+05  2.00E+05  1.00E+05

the recommendations of their respective authors [31,40–44,50]. The maximum number of fitness evaluations (FEmax) and the population size (S) of all algorithms for all functions are summarized in Tables 8 and 9, respectively. It is important to mention that the proposed TPLPSO is evaluated with the smallest FEmax in all functions, whereas the results for the other MS algorithms are obtained with larger FEmax, as their data are acquired from the published results [31,40–44,50]. For each function, we run TPLPSO 100 times, and the Fmean values produced are tabulated in Table 10. From Table 10, we observe that although the TPLPSO is assigned the lowest FEmax and a relatively small population size, it yields the best searching performance in almost all tested functions. Specifically, our TPLPSO produces the lowest Fmean values in eight out of the ten problems. TPLPSO is the only algorithm able to find the global optima of the Sphere, Schwefel 2.22, Schwefel 1.2, Schwefel 2.21, and Ackley functions. Although the CMAES, G3PCX, and FEP outperform the TPLPSO on the Rosenbrock and Quartic functions, our TPLPSO performs better than them in the remaining functions.

4.6. Application to the spread spectrum radar polyphase code design problem

In this section, we investigate the performance of our TPLPSO on one real-world problem, namely the spread spectrum radar polyphase code design problem [51]. This design problem is widely used in radar system design, and it has no polynomial-time solution. The formal statement of the problem can be defined as follows [51]:

\[
\text{Global min } f(X) = \max\{\varphi_1(X), \ldots, \varphi_{2m}(X)\} \tag{12}
\]

where \(X = \{(x_1, \ldots, x_D) \in \mathbb{R}^D \mid 0 \le x_j \le 2\pi\}\) and \(m = 2D - 1\), with

\[
\varphi_{2i-1}(X) = \sum_{j=i}^{D} \cos\!\left( \sum_{k=|2i-j-1|+1}^{j} x_k \right), \quad i = 1, 2, \ldots, D
\]
\[
\varphi_{2i}(X) = 0.5 + \sum_{j=i+1}^{D} \cos\!\left( \sum_{k=|2i-j-1|+1}^{j} x_k \right), \quad i = 1, 2, \ldots, D-1 \tag{13}
\]
\[
\varphi_{m+i}(X) = -\varphi_i(X), \quad i = 1, 2, \ldots, m
\]
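A direct transcription of Eqs. (12) and (13), following the CEC 2011 problem definition [51], is sketched below in pure Python (D = 20 in our experiments); the function name is ours.

import math

def radar_polyphase(x):
    # Spread spectrum radar polyphase code design objective:
    # f(X) = max{phi_1(X), ..., phi_2m(X)} with m = 2D - 1. Since
    # phi_{m+i}(X) = -phi_i(X), the maximum over the negated copies
    # equals -min over the first m values.
    d = len(x)
    phi = []
    for i in range(1, d + 1):        # phi_{2i-1}(X), i = 1, ..., D
        phi.append(sum(math.cos(sum(x[k - 1]
                       for k in range(abs(2 * i - j - 1) + 1, j + 1)))
                       for j in range(i, d + 1)))
    for i in range(1, d):            # phi_{2i}(X), i = 1, ..., D - 1
        phi.append(0.5 + sum(math.cos(sum(x[k - 1]
                             for k in range(abs(2 * i - j - 1) + 1, j + 1)))
                             for j in range(i + 1, d + 1)))
    return max(max(phi), -min(phi))

print(radar_polyphase([1.0] * 20))   # one evaluation of a 20-D candidate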

In this paper, we consider the radar polyphase code design problem for D = 20. All of the PSO variants involved in the previous experiments are tested on this design problem, with the same parameter settings as shown in Table 3. The population size and FEmax used in this radar polyphase code design problem are 20 and 2.00E+05, respectively. The results over 30 runs are shown in Table 11, which consists of the Fmean, SD, and h values. From Table 11, we observe that our TPLPSO is the best optimizer in solving the radar polyphase code design problem, as it produces the smallest Fmean value. To be specific, the TPLPSO is the only algorithm that achieves an Fmean value at the accuracy level of 10^−1, while the rest of its competitors achieve solutions at the accuracy level of 10^0. The superior performance of the TPLPSO is further verified by the t-test, as the h values in Table 11 indicate that the searching accuracy of our TPLPSO is statistically better than that of the other ten PSO variants.

Table 9
Population size (S) of the involved algorithms in 30-D problems.

Algorithm  RCCRO^a  GSO  RCBBO  CMAES        G3PCX  FEP  FES  TPLPSO
S          10       48   100    4 + 3 ln(D)  100    100  100  20

^a Except for the Rosenbrock, Rastrigin, and Griewank functions, where the population size is S = 20.


Table 10
Comparisons between TPLPSO and other metaheuristic search algorithms in 30-D problems; entries are Fmean (SD).

Function       RCCRO                RSO/GSO              RCBBO                CMAES                G3PCX                FEP                  FES                    TPLPSO
Sphere         6.43E−07 (2.09E−07)  1.95E−08 (1.16E−08)  1.39E−03 (5.50E−04)  6.09E−29 (1.55E−29)  6.40E−79 (1.25E−78)  5.70E−04 (1.30E−04)  2.50E−04 (6.80E−04)    0.00E+00 (0.00E+00)
Schwefel 2.22  2.19E−03 (4.34E−04)  3.70E−05 (8.62E−05)  7.99E−02 (1.44E−02)  3.48E−14 (4.03E−15)  2.80E+01 (1.01E+01)  8.10E−03 (7.70E−04)  6.00E−02 (9.60E−03)    0.00E+00 (0.00E+00)
Schwefel 1.2   2.97E−07 (1.15E−07)  5.78E+00 (3.68E+00)  2.27E+01 (1.03E+01)  1.51E−26 (3.64E−27)  1.06E−76 (1.53E−76)  1.60E−02 (1.40E−02)  1.40E−03 (5.30E−04)    0.00E+00 (0.00E+00)
Schwefel 2.21  9.32E−03 (3.66E−03)  1.08E−01 (3.99E−02)  3.09E−02 (7.27E−03)  3.99E−15 (5.31E−16)  4.54E+01 (8.09E+00)  3.00E−01 (5.00E−01)  5.50E−03 (6.50E−04)    0.00E+00 (0.00E+00)
Rosenbrock     2.71E+01 (3.43E+01)  4.98E+01 (3.02E+01)  5.54E+01 (3.52E+01)  5.58E−01 (1.39E+00)  3.09E+00 (1.64E+01)  5.06E+00 (5.87E+00)  3.328E+01 (4.313E+01)  2.50E+01 (9.63E−01)
Step           0.00E+00 (0.00E+00)  1.60E−02 (1.33E−01)  0.00E+00 (0.00E+00)  7.00E−02 (2.93E−01)  9.46E+01 (5.97E+01)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)    0.00E+00 (0.00E+00)
Quartic        5.41E−03 (2.99E−03)  7.38E−02 (9.26E−02)  1.75E−02 (6.43E−03)  2.21E−01 (8.65E−02)  9.80E−01 (4.63E−01)  7.60E−03 (2.60E−03)  1.20E−02 (5.80E−03)    8.76E+00 (5.14E−01)
Rastrigin      9.08E−04 (2.88E−04)  1.02E+00 (9.51E−01)  2.62E−02 (9.76E−03)  4.95E+01 (1.23E+01)  1.74E+02 (3.20E+01)  4.60E−02 (1.20E−02)  1.60E−01 (3.30E−01)    0.00E+00 (0.00E+00)
Ackley         1.94E−03 (4.19E−04)  2.66E−05 (3.08E−05)  2.51E−02 (5.51E−03)  4.61E+00 (8.73E+00)  1.35E+01 (4.82E+00)  1.80E−02 (2.10E−02)  1.20E−02 (1.80E−03)    0.00E+00 (0.00E+00)
Griewank       1.12E−02 (1.62E−02)  3.08E−02 (3.09E−02)  4.82E−01 (8.49E−02)  7.40E−04 (2.39E−03)  1.13E−02 (1.31E−02)  1.60E−02 (6.76E−02)  3.70E−02 (5.00E−02)    0.00E+00 (0.00E+00)

w/t/l          8/1/1                9/0/1                8/1/1                8/0/2                8/0/2                8/1/1                8/1/1                  –

4.7. Discussion

Based on the simulation results on the benchmark and real-world problems, we observe that our proposed TPLPSO has more superior searching accuracy, searching reliability, and convergence speed than the other ten well-established PSO variants and seven metaheuristic search (MS) algorithms. As shown by the simulation results presented in Table 6, the excellent performance of the TPLPSO is contributed by the two strategies adopted, namely the TPL framework and the SPS module. The TPL framework offers better control of the exploration and exploitation of the swarm during the optimization process. Specifically, both the teaching phase and scenario 1 of the peer-learning phase encourage the exploitation process and thereby enhance the algorithm's convergence speed, as the student particles are attracted toward particles with better fitness. Meanwhile, scenario 2 of the peer-learning phase encourages the exploration process, as the student particles are repelled away from particles with inferior fitness. This mechanism preserves the swarm diversity and thus prevents premature convergence from occurring. The outperformance of the TPL framework over the original TLBO framework is proven in Table 6. Accordingly, the TPLPSO (with the TPL framework) has better searching accuracy across the 20 employed benchmarks, and it achieves higher performance improvement over the original PSO than the TPLPSO3 (with the original TLBO framework). Despite the better searching performance, the computational overhead analysis (illustrated in Fig. 10) reveals that the TPL framework outperforms the original TLBO framework without incurring excessive computational resources. Meanwhile, the SPS module further increases the exploration capability of the swarm by providing fresh momentum to the Pg particle to jump out from the local optima.

The benefit of the SPS module in enhancing the TPLPSO searching performance is clearly indicated in Table 6. Being the only TPLPSO variant without the SPS module, TPLPSO2 exhibits vulnerability toward the premature convergence issue, as it is only able to find the global optimum of one benchmark problem. In contrast, the remaining TPLPSO variants, which are equipped with the SPS module, successfully locate the global optima of at least seven benchmark problems. The experimental results in Table 6 show that the SPS module is indeed an important mechanism in preventing the TPLPSO from being trapped in the local optima of the search space.

In the current study, our main motivation is to investigate whether the recently proposed TLBO framework can be adapted into the original PSO and thereby enhance the latter's searching performance. As in the original TLBO framework, only one teacher particle is employed in the proposed TPLPSO. As the TPLPSO relies upon the single teacher particle for determining the global optimum, there is a risk that, if the teacher particle is trapped in a local optimum, all the other student particles may also be trapped. In other words, the total reliance of TPLPSO on the single teacher particle increases the local optima trapping tendency, which may then lead to poor searching performance of the algorithm. In fact, such performance deterioration can be observed in Table 6, where the TPLPSO2, which is equipped only with the TPL framework, fails to locate the global optima of most benchmark problems. There are two possible approaches that could be employed to eradicate the risk of local optima trapping. The first is to introduce the concept of multiple teacher particles into the TPLPSO. This strategy allows the TPLPSO to search for optimal solutions on the basis of multiple best-so-far locations within the search space in parallel and hence may reduce the possibility of TPLPSO being trapped in local optima. The second is to introduce a perturbation mechanism into the TPLPSO once the teacher particle is found to be trapped in a local optimum. The proposed SPS module is categorized as the second approach, and its effectiveness is proven in Table 6. Accordingly, the TPLPSO variants equipped with the SPS module (namely TPLPSO1, TPLPSO3, and TPLPSO) are able to locate more global optima than the TPLPSO2 (without the SPS module). The experimental findings in Table 6 imply that the perturbation mechanism in the SPS module successfully makes up for the demerit of TPLPSO's total reliance on a single teacher particle, and thus offers the TPLPSO a strong capability to fend off local optima traps.
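As an illustration of this second approach, a perturbation-based stagnation handler may be sketched as follows; the trigger threshold and the uniform re-sampling of a single dimension are our assumptions, since the exact SPS update is specified in the methodology section of this paper.

import numpy as np

def sps_perturb(gbest_pos, lower, upper, stagnation_count, threshold, rng):
    # Stagnation prevention sketch: once the global best particle fails to
    # improve for `threshold` consecutive iterations, inject fresh momentum
    # by randomly perturbing one dimension of its position.
    if stagnation_count < threshold:
        return gbest_pos, stagnation_count
    candidate = gbest_pos.copy()
    d = rng.integers(len(candidate))               # pick one dimension
    candidate[d] = rng.uniform(lower[d], upper[d]) # re-sample it within bounds
    return candidate, 0                            # reset the stagnation counter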

Table 11
Fmean, SD, and h results for the 20-D spread spectrum radar polyphase code design problem.

Algorithm  Fmean     SD        h
APSO       1.33E+00  1.92E−01  +
CLPSO      1.08E+00  7.81E−02  +
CPSO       1.72E+00  2.82E−01  +
FLPSO-QIW  1.02E+00  6.88E−02  +
FPSO       1.13E+00  1.30E−01  +
FIPSO      1.04E+00  1.47E−01  +
MPSO-TVAC  1.03E+00  1.70E−01  +
RPPSO      1.10E+00  1.73E−01  +
PSO-LDIW   1.21E+00  1.70E−01  +
UPSO       1.40E+00  2.06E−01  +
TPLPSO     9.28E−01  1.41E−01  –


It is important to mention that, although the proposed TPLPSO exhibits superior performance in the previously reported experiments, it is applicable only to problems with a single global optimum in a continuous search space. More work needs to be done to further extend the applicability of TPLPSO to a more general class of optimization problems, including those with discrete and mixed search spaces, as well as multimodal problems, i.e., problems with multiple global optima. This is because these problems have a rather different perspective compared with problems with a single global optimum. For example, a multimodal optimization task amounts to finding multiple optimal solutions, not just one single optimum, as is done in a typical optimization study. Note that this is against the natural tendency of TPLPSO, which will always tend to converge toward the best solution or a suboptimal solution. In general, there are two main challenging tasks in using the TPLPSO to solve multimodal optimization problems. First, the TPLPSO needs to simultaneously locate multiple peaks, which could be far from each other. Second, once the multiple peaks have been detected, TPLPSO needs to preserve these multiple solutions over the entire optimization process. Both of these tasks are essential to produce multiple good solutions at the termination of TPLPSO, rather than only the best one. Various niching techniques [52–58] could be integrated with TPLPSO to make it suitable for multimodal optimization. Additionally, we opine that the multiple teacher particles approach discussed earlier also emerges as another possible alternative for adapting the TPLPSO into a multimodal optimization algorithm. More specifically, different teacher particles could be employed to locate different global peaks in the multimodal problems. For more discussion of the extension of an optimization algorithm to facilitate its application to multimodal optimization problems, refer to [59,60].

5. Conclusion

In this paper, a TPLPSO consisting of two learning phases, namely teaching and peer-learning, is proposed to solve global optimization problems. The employment of the TPL framework ensures better control of the swarm's exploration/exploitation searches. The teaching phase encourages exploitation and thereby enhances the algorithm's convergence speed. Meanwhile, the peer-learning phase and the SPS module encourage exploration and thus increase the algorithm's robustness toward the premature convergence issue. Extensive experiments are conducted to investigate the performance of the TPLPSO on various benchmark and real-world problems, as well as the contribution of each employed strategy to the improvement of the algorithm's performance. Experimental results reveal that the proposed TPLPSO significantly outperforms its competitors in terms of searching accuracy, searching reliability, and computation cost. Additionally, the simulation results also indicate that the TPL framework and the SPS module are integrated effectively in the TPLPSO, as none of the contributions of these strategies is compromised when the TPLPSO is used to solve different types of problems. In our future work, we will extend the applicability of TPLPSO to a more diverse class of optimization problems, such as those with discrete, mixed, multimodal, and multi-objective search spaces. In addition, we will also investigate whether the multiple teacher particles approach could serve as an alternative to the SPS module in alleviating the local optima trapping tendency of TPLPSO.


Acknowledgments

The authors express their sincere thanks to the associate editor and the reviewers for their significant contributions to the improvement of the final paper. This research was supported by the Universiti Sains Malaysia (USM) Postgraduate Fellowship Scheme and the Postgraduate Research Grant Scheme (PRGS) entitled "Development of PSO Algorithm with Multi-Learning Frameworks for Application in Image Segmentation".

References

[1] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[2] R.C. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, 2001, pp. 81–86.
[3] A. Banks, J. Vincent, C. Anyakoha, A review of particle swarm optimization. Part I: Background and development, Natural Computing 6 (2007) 467–484.
[4] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation 12 (2008) 171–195.
[5] M.P. Wachowiak, R. Smolikova, Z. Yufeng, J.M. Zurada, A.S. Elmaghraby, An approach to multimodal biomedical image registration utilizing particle swarm optimization, IEEE Transactions on Evolutionary Computation 8 (2004) 289–301.
[6] Y. Song, Z. Chen, Z. Yuan, New chaotic PSO-based neural network predictive control for nonlinear process, IEEE Transactions on Neural Networks 18 (2007) 595–601.
[7] A. Banks, J. Vincent, C. Anyakoha, A review of particle swarm optimization. Part II: Hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications, Natural Computing 7 (2008) 109–124.
[8] C.-J. Lin, C.-H. Chen, C.-T. Lin, A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 39 (2009) 55–68.
[9] K.D. Sharma, A. Chatterjee, A. Rakshit, A hybrid approach for design of stable adaptive fuzzy controllers employing Lyapunov theory and particle swarm optimization, IEEE Transactions on Fuzzy Systems 17 (2009) 329–342.
[10] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Transactions on Evolutionary Computation 8 (2004) 225–239.
[11] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proceedings of IEEE World Congress on Computational Intelligence, 1998, pp. 69–73.
[12] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (2002) 58–73.
[13] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Transactions on Evolutionary Computation 8 (2004) 204–210.
[14] K.E. Parsopoulos, M.N. Vrahatis, A unified particle swarm optimization scheme, in: Proceedings of the International Conference of Computational Methods in Sciences and Engineering, VSP International Science Publishers, Zeist, The Netherlands, 2004.
[15] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (2004) 240–255.
[16] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (2006) 281–295.
[17] D.F. Carvalho, C.J.A. Bastos-Filho, Clan particle swarm optimization, in: IEEE Congress on Evolutionary Computation (CEC 2008), 2008, pp. 3044–3051.
[18] C.J.A. Bastos-Filho, D.F. Carvalho, E.M.N. Figueiredo, P.B.C. de Miranda, Dynamic clan particle swarm optimization, in: Ninth International Conference on Intelligent Systems Design and Applications (ISDA '09), Pisa, 2009, pp. 249–254.
[19] S.-T. Hsieh, T.-Y. Sun, C.-C. Liu, S.-J. Tsai, Efficient population utilization strategy for particle swarm optimizer, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39 (2009) 444–456.
[20] M.A. Montes de Oca, T. Stutzle, M. Birattari, M. Dorigo, Frankenstein's PSO: a composite particle swarm optimization algorithm, IEEE Transactions on Evolutionary Computation 13 (2009) 1120–1132.
[21] Z.-H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39 (2009) 1362–1381.
[22] M.R. Pontes, F.B.L. Neto, C.J.A. Bastos-Filho, Adaptive clan particle swarm optimization, in: IEEE Symposium on Swarm Intelligence (SIS), 2011, pp. 1–6.
[23] Y. Tang, Z. Wang, J.-A. Fang, Feedback learning particle swarm optimization, Applied Soft Computing 11 (2011) 4713–4725.
[24] W. Wang, H. Wang, S. Rahnamayan, Improving comprehensive learning particle swarm optimiser using generalised opposition-based learning, International Journal of Modelling, Identification and Control 14 (2011) 310–316.
[25] D. Zhou, X. Gao, G. Liu, C. Mei, D. Jiang, Y. Liu, Randomization in particle swarm optimization for global search ability, Expert Systems with Applications 38 (2011) 15356–15364.
[26] H. Huang, H. Qin, Z. Hao, A. Lim, Example-based learning particle swarm optimization for continuous optimization, Information Sciences 182 (2012) 125–138.
[27] M.-S. Leu, M.-F. Yeh, Grey particle swarm optimization, Applied Soft Computing 12 (2012) 2985–2996.
[28] M.M. Noel, A new gradient based particle swarm optimization algorithm for accurate computation of global minimum, Applied Soft Computing 12 (2012) 353–359.
[29] X. Jin, Y. Liang, D. Tian, F. Zhuang, Particle swarm optimization using dimension selection methods, Applied Mathematics and Computation 219 (2013) 5185–5197.
[30] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems, Computer-Aided Design 43 (2011) 303–315.
[31] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems, Information Sciences 183 (2012) 1–15.
[32] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in: Proceedings of IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 1931–1938.
[33] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proceedings of IEEE Congress on Evolutionary Computation (CEC '02), 2002, pp. 1671–1676.
[34] M. Črepinšek, S.-H. Liu, L. Mernik, A note on teaching–learning-based optimization algorithm, Information Sciences 212 (2012) 79–93.
[35] J. Sun, W. Xu, B. Feng, A global search strategy of quantum-behaved particle swarm optimization, in: IEEE Conference on Cybernetics and Intelligent Systems, 2004, pp. 111–116.
[36] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (1999) 82–102.
[37] C.-Y. Lee, X. Yao, Evolutionary programming using mutations based on the Levy probability distribution, IEEE Transactions on Evolutionary Computation 8 (2004) 1–13.
[38] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.
[39] R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, Biosystems 39 (1996) 263–278.
[40] A.Y.S. Lam, V.O.K. Li, J.J.Q. Yu, Real-coded chemical reaction optimization, IEEE Transactions on Evolutionary Computation 16 (2012) 339–353.
[41] S. He, Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Transactions on Evolutionary Computation 13 (2009) 973–990.
[42] W. Gong, Z. Cai, C.X. Ling, H. Li, A real-coded biogeography-based optimization with mutation, Applied Mathematics and Computation 216 (2010) 2749–2758.
[43] N. Hansen, A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation 9 (2001) 159–195.
[44] K. Deb, A. Anand, D. Joshi, A computationally efficient evolutionary algorithm for real-parameter optimization, Evolutionary Computation 10 (2002) 371–395.
[45] X. Yao, Y. Liu, Fast evolution strategies, in: Proceedings of the 6th International Conference on Evolutionary Programming VI, Springer-Verlag, 1997, pp. 151–162.
[46] A.Y.S. Lam, V.O.K. Li, Chemical-reaction-inspired metaheuristic for optimization, IEEE Transactions on Evolutionary Computation 14 (2010) 381–399.
[47] H.-G. Beyer, H.-P. Schwefel, Evolution strategies: a comprehensive introduction, Natural Computing 1 (2002) 3–52.
[48] M. Melanie, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1999.
[49] L.J. Fogel, A.J. Owens, M.J. Walsh, Artificial Intelligence Through Simulated Evolution, Wiley, New York, 1966.
[50] W. Gao, S. Liu, L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (2012) 1–14.
[51] S. Das, P.N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, Technical Report, Nanyang Technological University, Singapore, 2010.
[52] S.W. Mahfoud, A comparison of parallel and sequential niching methods, in: Proceedings of the 6th International Conference on Genetic Algorithms, July 1995, pp. 136–143.
[53] J.E. Vitela, O. Castaños, A real-coded niching memetic algorithm for continuous multimodal function optimization, in: Proceedings of IEEE Congress on Evolutionary Computation, IEEE, 2008, pp. 2170–2177.
[54] O.J. Mengshoel, D.E. Goldberg, Probabilistic crowding: deterministic crowding with probabilistic replacement, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-99), 1999, pp. 409–416.
[55] B. Sareni, L. Krahenbuhl, Fitness sharing and niching methods revisited, IEEE Transactions on Evolutionary Computation 2 (1998) 97–106.
[56] X. Li, Niching without niching parameters: particle swarm optimization using a ring topology, IEEE Transactions on Evolutionary Computation 14 (2010) 150–169.
[57] O.M. Shir, M. Emmerich, T. Bäck, Adaptive niche radii and niche shapes approaches for niching with the CMA-ES, Evolutionary Computation 18 (2010) 97–126.
[58] C. Stoean, M. Preuss, R. Stoean, D. Dumitrescu, Multimodal optimization by means of a topological species conservation algorithm, IEEE Transactions on Evolutionary Computation 14 (2010) 842–864.
[59] B.Y. Qu, P.N. Suganthan, J.J. Liang, Differential evolution with neighborhood mutation for multimodal optimization, IEEE Transactions on Evolutionary Computation 16 (2012) 601–614.
[60] B.Y. Qu, P.N. Suganthan, S. Das, A distance-based locally informed particle swarm model for multimodal optimization, IEEE Transactions on Evolutionary Computation 17 (2013) 387–402.

[42] W. Gong, Z. Cai, C.X. Ling, H. Li, A real-coded biogeography-based optimization with mutation, Applied Mathematics and Computation 216 (2010) 2749–2758. [43] N. Hansen, A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation 9 (2001) 159–195. [44] K. Deb, A. Anand, D. Joshi, A computationally efficient evolutionary algorithm for real-parameter optimization, Evolutionary Computation 10 (2002) 371–395. [45] X. Yao, Y. Liu, Fast evolution strategies, in: Proceedings of the 6th International Conference on Evolutionary Programming VI, Springer-Verlag, 1997, pp. 151–162. [46] A.Y.S. Lam, V.O.K. Li, Chemical-reaction-inspired metaheuristic for optimization, IEEE Transactions on Evolutionary Computation 14 (2010) 381–399. [47] H.-G. Beyer, H.-P. Schwefel, Evolution strategies: a comprehensive introduction, Natural Computing 1 (2002) 3–52. [48] M. Melanie, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1999. [49] L.J. Fogel, A.J. Owens, M.J. Walsh, Artificial Intelligence Through Simulated Evolution, Wiley, New York, 1966. [50] W. Gao, S. Liu, L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (2012) 1–14. [51] S. Das, P.N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, Nanyang Technol. University, Singapore, 2010. [52] S.W. Mahfoud, A comparison of parallel and sequential niching methods, in: Proceedings of 6th International Conference on Genetic Algorithms, July, 1995, pp. 136–143. ˜ [53] J.E. Vitela, O. Castanos, A real-coded niching memetic algorithm for continuous multimodal function optimization, in: Proceedings of IEEE Congress on Evolutionary Computation, IEEE, 2008, pp. 2170–2177. [54] O.J. Mengshoel, D.E. Goldberg, Probabilistic crowding: deterministic crowding with probabilistic replacement, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-99), 1999, pp. 409–416. [55] B. Sareni, L. Krahenbuhl, Fitness sharing and niching methods revisited, IEEE Transactions on Evolutionary Computation 2 (1998) 97–106. [56] X. Li, Niching without niching parameters: particle swarm optimization using a ring topology, IEEE Transactions on Evolutionary Computation 14 (2010) 150–169. [57] O.M. Shir, M. Emmerich, T. Bäck, Adaptive niche radii and niche shapes approaches for niching with the CMA-ES, Evolutionary Computation 18 (2010) 97–126. [58] C. Stoean, M. Preuss, R. Stoean, D. Dumitrescu, Multimodal optimization by means of a topological species conservation algorithm, IEEE Transactions on Evolutionary Computation 14 (2010) 842–864. [59] B.Y. Qu, P.N. Suganthan, J.J. Liang, Differential evolution with neighborhood mutation for multimodal optimization, IEEE Transactions on Evolutionary Computation 16 (2012) 601–614. [60] B.Y. Qu, P.N. Suganthan, S. Das, A distance-based locally informed particle swarm model for multimodal optimization, IEEE Transactions on Evolutionary Computation 17 (2013) 387–402.