Applied Mathematics and Computation 256 (2015) 666–701
An ideal tri-population approach for unconstrained optimization and applications

Kedar Nath Das, Raghav Prasad Parouha
Department of Mathematics, NIT Silchar, Assam, India
Keywords: Elitism; Non Redundant Search; Unconstrained benchmark functions

Abstract: Hybrids of Differential Evolution (DE) and Particle Swarm Optimization (PSO) are now widely preferred to either algorithm alone for solving optimization problems. How DE and PSO are combined in the hybridization process matters greatly for achieving promising solutions. Recently they have been used simultaneously (i.e. in parallel) on different sub-populations of the same population, instead of being applied alternately (in series) over the generations. In this paper an attempt is made to hybridize DE and PSO in parallel under a 'tri-population' environment. Initially the whole population, in increasing order of fitness, is divided into three groups: an inferior group, a mid group and a superior group. Based on their inherent abilities, DE is employed on the inferior and superior groups whereas PSO is used on the mid group. The proposed method is named DPD as it uses DE–PSO–DE on the sub-populations of the same population. Two further strategies, namely Elitism (to retain the best values obtained so far) and Non Redundant Search (to improve the solution quality), are incorporated in the DPD cycle. The paper has three major aims: (i) investigation of suitable DE mutation strategies to support DPD, (ii) performance comparison of DPD with state-of-the-art algorithms on a set of benchmark functions and (iii) application of DPD to real life problems. Numerical, statistical and graphical analyses in this paper finally establish the robustness of the proposed DPD.
1. Introduction

Optimization is a ubiquitous and spontaneous process that frequently appears in real world problems such as stock prediction [1], bankruptcy prediction [2], medical image processing [3] and the well placement problem [4]. In the world of optimization, Evolutionary Algorithms (EAs) have been treated as successful alternatives for the last few decades. Among all EAs, Differential Evolution (DE) [5] and Particle Swarm Optimization (PSO) [6] are two formidable population based optimizers; in pursuit of near-optimal solutions to optimization problems, they have already crossed many success milestones. Quite a good number of improved versions and applications of PSO [7–12] and DE [13–18] have been proposed in order to enhance their individual performance. However, due to their individual shortcomings, the solution either converges prematurely or gets stuck at some local optimum. In order to overcome these shortcomings, hybrid techniques are now preferred over their individual efforts.
The first hybridization of DE and PSO was reported by Hendtlass [19] and was successfully applied to unconstrained optimization problems. Since then, many hybrid DE and PSO methods have been proposed [3,4,20–31], with a wide range of applications such as the well placement problem [3], medical image processing [4], discrete optimization [22], image classification [24], engineering design [26–28], power systems optimization [29,30] and dynamic optimization [31]. In such hybridizations, DE and PSO are used in alternate generations during the simulation. All in all, the aim of hybridizing DE and PSO [32–34] is to take advantage of both algorithms simultaneously in order to provide better solutions. Over the years there has been continuous modification of the operators and/or of the way they are applied, and a gradual improvement in the robustness of hybrid DE–PSO is observed in the literature. Recently, however, the 'parallel' employment of DE and PSO has become preferred over the 'series' one. Here 'parallel' means that DE and PSO are used simultaneously on sub-populations of the same population, obtained by breaking the population into more than one part, whereas using DE and PSO alternately over the generations is termed 'series'. In this study the concept of the 'parallel' application of DE and PSO is used. Updated reviews of similar works are presented below.

Kordestani et al. [35] broke the population into two parts and proposed a bi-population based hybrid approach (namely CDEPSO) for dynamic optimization problems. In their paper, the first population uses CDE (Crowding-based Differential Evolution) and the second population uses PSO. The first population is responsible for locating multiple promising areas of the search space and preserving a certain level of diversity throughout the run, while the second population exploits the useful information in the vicinity of the best position found by the first population. Elsayed et al. [36] presented two methods: (i) a Self-Adaptive Multi-Operator Genetic Algorithm (SAMO-GA) and (ii) Self-Adaptive Multi-Operator Differential Evolution (SAMO-DE). Both methods start with a random initial population which is divided into four sub-populations of equal size, and both were applied to the CEC2010 and CEC2006 test problems. The authors concluded that the concept of population division retains stronger diversity and that the performance of SAMO-DE is better than that of SAMO-GA. Zhang et al. [37] presented a hybrid approach (DETPS) based on a tissue P system and DE. In DETPS the initial population is divided into five groups, which are placed in five cells of the tissue P system, each cell running a specified variant of DE. This produces a good balance between exploration and exploitation, and DETPS has been successfully applied to constrained optimization problems. Yadav and Deep [38] proposed a new co-swarm PSO (CSHPSO) by hybridizing the shrinking hypersphere PSO (SHPSO) with DE. The total swarm is initially subdivided into two sub-swarms; the first sub-swarm uses SHPSO and the second uses DE. CSHPSO was tested on benchmark problems and later applied to a power system optimization problem with valve point effects. Cagnina et al. [39,40] proposed a dual-population based technique (CPSO-shake) which is able to overcome premature convergence and has been successfully applied to constrained optimization problems. In order to maintain a good balance between local and global search ability, Han et al.
[41] proposed a dynamic group-based Differential Evolution (GDE) using a self-adaptive strategy (the 1/5th rule). It is based on partitioning the population into two parts in order to apply two different DE mutation strategies, and the authors claimed that GDE has both exploitation and exploration abilities. Wang et al. [42] proposed a hybrid technique (DEDEPSO) with a tri-break-up population mechanism. It works on three sub-populations using two different variants of DE and classical PSO respectively. In DEDEPSO the three sub-populations share the global best solution during simulation, and the method was applied to unconstrained global optimization.

The impact of the population break-up mechanism on the solution quality is clearly observed in the above review. In order to improve the robustness further, the tri-break-up of the population (called 'tri-population' here) and the way of applying DE and PSO within it are presented in this paper. The reason behind choosing these two concepts is discussed later in Section 3. Thus a new hybrid algorithm, namely DPD (DE–PSO–DE), is proposed in this study for solving unconstrained optimization problems.

The rest of this paper is organized as follows. Section 2 contains the basic concepts of DE and PSO. Section 3 presents the proposed algorithm (DPD) and investigates the best-suited mutation operators for both DEs employed in it. The performance comparison of DPD with state-of-the-art algorithms is given in Section 4. Section 5 presents the statistical validation of the proposed DPD results. In Section 6, DPD is further applied to some real life problems such as PID controller tuning, Lennard-Jones cluster optimization and chemical optimization problems. Finally, the conclusion is drawn in Section 7 together with some future scope.

2. Basics of DE and PSO

The basic ideas of Differential Evolution (DE) and Particle Swarm Optimization (PSO) are described in this section.

2.1. Differential Evolution (DE)

The DE algorithm was pioneered by Storn and Price in 1995 [5]. Over time it has become well established as a reliable and versatile function optimizer. It is a population based algorithm that uses crossover, mutation and selection in each of its cycles. Unlike the Genetic Algorithm (which relies on crossover), DE relies on the mutation operator. First an initial population (of target vectors) is created by generating and encoding the decision variables at random within a prescribed range. Each chromosome (candidate) is then evaluated. A perturbation is applied to each candidate with a fixed probability through mutation, producing the mutant vectors. Using crossover, the trial vectors are generated from both the target and mutant vectors
under a certain probability. The selection operator then fills the next population with the winners of pairwise competitions between the corresponding target and trial vectors. The major steps of DE are explained in detail as follows.

DE starts with the initialization of a population of NP target vectors (parents) X_i = (x_{1i}, x_{2i}, ..., x_{Di}), i = 1, 2, ..., NP, generated at random within user-defined bounds, where D is the dimension of the optimization problem. This population undergoes the cyclic processes of mutation, crossover and selection, which are defined as follows.

Mutation: Let X_i^t = (x_{1i}^t, x_{2i}^t, ..., x_{Di}^t) be the ith individual at the current generation t. A mutant vector V_i^{t+1} = (v_{1i}^{t+1}, v_{2i}^{t+1}, ..., v_{Di}^{t+1}) is computed by applying the mutation operator. 'Mutation' is in fact a vital operator in DE, and many variants of it exist in the literature. A set of well-known mutation operators proposed in [43,44] is listed below.
M1: DE/rand/1: V_i = x_{r1} + F(x_{r2} - x_{r3})
M2: DE/rand/2: V_i = x_{r1} + F(x_{r2} - x_{r3}) + F(x_{r4} - x_{r5})
M3: DE/best/1: V_i = x_{best} + F(x_{r2} - x_{r3})
M4: DE/best/2: V_i = x_{best} + F(x_{r2} - x_{r3}) + F(x_{r4} - x_{r5})
M5: DE/rand-to-best/1: V_i = x_{r1} + F(x_{best} - x_{r1}) + F(x_{r2} - x_{r3})
M6: DE/current-to-best/1 (or DE/target-to-best/1): V_i = x_i + F(x_{best} - x_i) + F(x_{r2} - x_{r3})
M7: DE/rand-to-best/2: V_i = x_{r1} + F(x_{best} - x_{r1}) + F(x_{r2} - x_{r3}) + F(x_{r4} - x_{r5})
M8: DE/current-to-best/2 (or DE/target-to-best/2): V_i = x_i + F(x_{best} - x_i) + F(x_{r2} - x_{r3}) + F(x_{r4} - x_{r5})

where F ∈ [0, 1] is the mutation factor, r1 ≠ r2 ≠ r3 ≠ r4 ≠ r5 ≠ i are mutually distinct indices chosen from {1, 2, ..., NP}, NP is the population size (number of individuals) and x_{best} is the best individual in the current generation t.

Crossover: In order to increase the diversity of the target vectors, the crossover operator is applied to the target and mutant vectors. From the target vector X_i^t and the mutant vector V_i^{t+1}, a new trial vector (offspring) U_i^{t+1} = (u_{1i}^{t+1}, u_{2i}^{t+1}, ..., u_{Di}^{t+1}) is created as follows.
U_{ji}^{t+1} = V_{ji}^{t+1},  if rand(0, 1) ≤ CR or j = rand(i);
U_{ji}^{t+1} = X_{ji}^{t},    if rand(0, 1) > CR and j ≠ rand(i)

where j ∈ {1, 2, ..., D}, CR ∈ [0, 1] is the crossover constant and rand(i) ∈ {1, 2, ..., D} is a randomly chosen index (which ensures that U_i^{t+1} gets at least one parameter from V_i^{t+1} [5]).

Selection: The trial vector U_i^{t+1} generated by the crossover operation is compared with the target vector X_i^t on the basis of fitness, and the fitter of the two survives into the next generation. The selection criterion in DE is therefore defined as follows.

X_i^{t+1} = U_i^{t+1},  if f(U_i^{t+1}) ≤ f(X_i^t);
X_i^{t+1} = X_i^t,      otherwise
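As a concrete illustration of the three operators above, the following minimal Python sketch performs one DE generation using the DE/rand/1 strategy (M1), binomial crossover and greedy selection. The function name, the NumPy-based layout and the default values F = 0.5 and CR = 0.9 are assumptions made for this sketch, not the authors' implementation.

    import numpy as np

    def de_generation(pop, f_obj, F=0.5, CR=0.9, bounds=(-100.0, 100.0)):
        # pop: (NP, D) array of target vectors; f_obj: objective function to minimize.
        NP, D = pop.shape
        fitness = np.array([f_obj(x) for x in pop])
        new_pop = pop.copy()
        for i in range(NP):
            # Mutation (DE/rand/1): three mutually distinct indices, all different from i.
            r1, r2, r3 = np.random.choice([k for k in range(NP) if k != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            v = np.clip(v, bounds[0], bounds[1])      # keep the mutant inside the search range
            # Binomial crossover: take each component from the mutant with probability CR,
            # forcing at least one component (j_rand) to come from the mutant.
            j_rand = np.random.randint(D)
            mask = np.random.rand(D) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # Selection: the trial vector replaces the target only if it is not worse.
            if f_obj(u) <= fitness[i]:
                new_pop[i] = u
        return new_pop

Calling de_generation repeatedly until the stopping criterion is met reproduces the DE cycle described in the text.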
The cyclic implementation of mutation, crossover and selection continues until the pre-defined stopping criterion is met.

2.2. Particle Swarm Optimization (PSO)

The PSO algorithm was introduced by Kennedy and Eberhart [6]. It is inspired by the social behaviour of various species such as birds, fish, termites, ants and even human beings. PSO has become one of the most promising optimizers for solving optimization problems. It relies on the exchange of information between the individuals (called particles) of the population (called swarm). In PSO, each particle adjusts its trajectory stochastically towards the position of its own previous best performance (pbest) and
the overall best previous performance (gbest). The basic idea of PSO is to let the potential solutions fly through the hyperspace, accelerating towards better and better solutions. Its mechanism is based on two steps repeated in each cycle: the velocity update and the position update of the particles. Like DE, PSO starts with a fixed number of randomly initialized particles in a D-dimensional solution space. Let the ith particle at generation t have a position vector X_i^t = (x_{1i}^t, x_{2i}^t, ..., x_{Di}^t) and a velocity vector V_i^t = (v_{1i}^t, v_{2i}^t, ..., v_{Di}^t), where i = 1, ..., NP and NP is the swarm size. The best solution achieved by the ith particle up to the current generation t is denoted pbest_i^t = (pbest_{1i}^t, pbest_{2i}^t, ..., pbest_{Di}^t), and the best among all pbest_i^t is the global best, gbest^t. Each particle approaches its better position with randomly weighted acceleration by collectively using its current velocity, its previous experience and the experience of the other particles. The particle then updates its velocity and position repeatedly by the following equations, respectively.
v_{ji}^{t+1} = w v_{ji}^t + c1 r1 (pbest_{ji}^t - x_{ji}^t) + c2 r2 (gbest_j^t - x_{ji}^t)    (1)

(the three terms are the inertia component, the cognitive component and the social component, respectively)

x_{ji}^{t+1} = x_{ji}^t + v_{ji}^{t+1}    (2)
where j ∈ {1, 2, ..., D}; c1 and c2 are positive constants called 'acceleration coefficients'; r1 and r2 are random numbers uniformly distributed in [0, 1]; and w is the 'inertia weight', which controls the impact of the particle's previous velocity on its current one. The inertia weight w can be calculated using the following equation.

w = w_max - ((w_max - w_min) / iter_max) * iter

where w_max and w_min are the maximum and minimum weights respectively, iter_max is the maximum number of iterations and iter is the current iteration number.
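The two update rules (1) and (2), together with the linearly decreasing inertia weight, can be sketched as below; the vectorized layout, the function name and the default parameter values are illustrative assumptions only.

    import numpy as np

    def pso_step(x, v, pbest, gbest, it, it_max,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
        # x, v, pbest: (NP, D) arrays; gbest: (D,) array; it: current iteration index.
        w = w_max - (w_max - w_min) * it / it_max      # linearly decreasing inertia weight
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
        x_new = x + v_new                                               # Eq. (2)
        return x_new, v_new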
Some authors keep the value of the inertia weight fixed [34,45]. The population of particles is then allowed to move using (1) and (2), and tends to cluster together while approaching from different directions.

3. Proposed DPD and selection of DE-mutation-strategies for DPD

The major contributions of this paper are as follows. (i) Proposal of an algorithm based on a 'tri-population' environment (Section 3.1). (ii) Investigation of suitable mutation strategies for the DEs employed in (i) (Sections 3.2 and 3.4).

3.1. Proposed method: DPD (DE–PSO–DE)

To date, DE and PSO have been hybridized and applied on a population either in series (i.e. one after another over the generations) or in parallel (i.e. simultaneously on sub-populations of the same population). The related works have been reviewed in the earlier and later parts of Section 1 for series and parallel applications, respectively. As discussed in Section 1, the parallel implementation is preferred over the series one in the literature. The parallel implementations of DE–PSO reported so far fall mainly into two classes: (i) on a bi-breakup of the population, and (ii) on a tri-breakup of the population. The greater the number of breakups, the smaller the sub-population size; therefore further break-up of the population is not recommended, owing to the failure of the neighborhood topology [39,40]. However, it is worth noting that those authors considered a swarm size of only 10 particles in their experiments. In this paper, since the population size (=51) is larger, the tri-population concept is adopted instead of the bi-population. Also, as reported in [42,46,47], the tri-population enhances the memory of Evolutionary Algorithms and maintains the balance between exploration and exploitation; it also helps to increase the diversity in the population [32]. Hence the 'tri-break-up of population' is chosen in the present study and is referred to as 'tri-population'. The proposed work on this tri-population environment is presented as follows.

Step 1: Assume a population (of size a multiple of 3).
Step 2: Sort the strings in increasing order of fitness.
Step 3: Divide the population into three groups:
(A) Inferior group (first 1/3rd of the population).
(B) Mid group (middle 1/3rd of the population).
(C) Superior group (last 1/3rd of the population).
A similar work hybridizing DE and PSO on a tri-population has been done in [42]. However, it is experimentally observed that this algorithm has its own disadvantages, such as premature convergence and rapid loss of diversity. This is mainly due to the following two reasons. (i) Inappropriate ordering of the implementation of DE and PSO on the tri-population (explained below). (ii) Blind selection of the mutation operators of the DEs used in the system (rectified in Sections 3.2 and 3.4). In particular, on a tri-population system there exist 4 permutations of applying DE and PSO: DE–PSO–DE, PSO–DE–PSO, DE–DE–PSO and PSO–PSO–DE. In this paper the order of using DE and PSO is chosen as DE–PSO–DE (shortly, the DPD algorithm). The motivation behind this choice comes from the methods in [35–42], where the population break-up concept has already been used successfully; their contributions are briefly described in Section 1. DPD is based on an information sharing mechanism between DE and PSO. The reason for selecting this order of DE and PSO is explained below, location by location (needs/requirements, the solution suggested by the literature, and the choice of heuristic in DPD). The concept of DPD is demonstrated in Fig. 1 and the pseudo code is presented later in this section.

Location A (inferior group).
Needs/requirements: since this is the worst part of the population, it needs to locate multiple local optima early over the entire search space; thus part A needs a high global search ability to pick up the locations of multiple local optima.
Solution from the literature: DE has global search ability [26–34,41,42] and converges to multiple local optima [35]; hence DE is chosen to apply on A.
Choice of heuristic in DPD: DE.

Location B (mid group).
Needs/requirements: since this part contains neither very good nor very bad strings, it requires a fast convergence rate with stability, without being trapped in local minima, and a mechanism for memorizing the best solution.
Solution from the literature: the convergence of DE is unstable [23,42], DE has no mechanism to memorize the best solution [42,48] and it is easily trapped in local minima [23]; PSO possesses all the required qualities [35,45] and is therefore the appropriate choice for B.
Choice of heuristic in DPD: PSO.

Location C (superior group).
Needs/requirements: since this is the better part of the population, there is a risk of getting trapped in local minima or of premature convergence; this part therefore needs maintenance of diversity and an increased local search ability.
Solution from the literature: PSO can easily be trapped in local optima [42], whereas DE guarantees maintenance of diversity in the population [35,42] and can enhance the local search ability [23,41,48]; in particular, the 'best' modes of DE increase the local search ability [42]. Hence DE is chosen to apply on C.
Choice of heuristic in DPD: DE.
Due to the usual information sharing in hybridization, each method compensates for the weaknesses of the other during simulation, and as a result the solution quality gradually improves. However, DPD as described so far has no mechanism to retain the best solution achieved up to the current generation; therefore the method of 'Elitism' is incorporated. On the other hand, owing to the low maintenance of diversity by PSO [23,48], premature convergence is unavoidable in most cases. Hence a concept of Non Redundant Search (NRS) [49] is also implemented in the DPD cycle. Elitism and NRS are briefly described below.

3.1.1. Elitism

Elitism on two different populations X and Y works with the following three steps in each generation. (a) Merge the individuals of X and Y together. (b) Sort them in increasing/decreasing order of their fitness. (c) Select the best half of the individuals for the next population. Through this process of elitism the less-fit individuals die off, whereas the best-fit individuals get another chance to participate in the next round.
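A minimal sketch of these three elitism steps is given below, assuming each population is stored as a NumPy array with its objective values kept alongside; the function name and array layout are assumptions of the sketch.

    import numpy as np

    def elitism(pop_x, f_x, pop_y, f_y):
        # Merge the two populations, sort by objective value in increasing order
        # (equivalently, by fitness 1/(1 + f) in decreasing order) and keep the best half.
        merged = np.vstack([pop_x, pop_y])
        values = np.concatenate([f_x, f_y])
        keep = np.argsort(values)[:len(pop_x)]     # best half survives
        return merged[keep], values[keep]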
Fig. 1. Population break-up in DPD: the population is split into the Inferior Group (A), handled by DE, the Mid Group (B), handled by PSO, and the Superior Group (C), handled by DE.
3.1.2. NRS

In the DPD algorithm, Non Redundant Search (NRS) is used as a local search and works as follows: (a) delete all repeated individuals in the current population, keeping only one copy of each; (b) fill the vacant positions with randomly selected individuals. NRS thus prevents the individuals from clustering together at one place and hence helps to maintain diversity in the population.

3.1.3. DPD algorithm

Conceptualizing all the above facts, the steps of the proposed DPD algorithm are presented below. The detailed flow diagram of DPD is also outlined in Fig. 2.

3.1.4. Pseudo code of DPD

Step 1: Set the starting generation t = 0. Create the initial population of size NP (a multiple of 3, to favor the tri-population mechanism of DPD) by choosing the values of the variables at random within their lower (Lj) and upper (Uj) bounds for the jth variable of each string as follows.
x_{i,j} = L_j + rand * (U_j - L_j)

where rand is a random number in [0, 1].
Step 2: Evaluate the fitness F(x) = 1/(1 + f(x)) of each individual string and sort the strings in increasing order of fitness, where f(x) is the objective function (to be minimized).
Step 3: Break the population into 3 parts (sub-populations), namely A, B and C (as described above).
Step 4: Employ DE, PSO and DE (as explained above) on groups A, B and C respectively.
Step 5: Combine the three resultant sub-populations (obtained in Step 4) to get a new population.
Step 6: Apply 'Elitism' to the two populations obtained from Step 1 and Step 5.
Step 7: Apply NRS.
Step 8: Stop if the termination criterion is met; otherwise set t = t + 1 and go to Step 2.

3.2. Selection of DE-mutation-operators in DPD

In the previous sub-section it was decided to use DE–PSO–DE on the tri-population. In PSO there are not many parameters to be fine-tuned. In DE, however, there exist many variants of the 'mutation' operator, and the selection of a suitable mutation operator significantly affects the solution quality, as it helps to explore the search space better. DPD employs two DEs, and the 8 variants of the DE mutation operator listed in Section 2.1 are picked to participate in its cycle. So it is essential to first investigate the suitable mutation operators for both DEs in DPD. For this, consider a population with sub-populations A, B and C (as discussed in Section 3.1) on which DE, PSO and DE are used respectively. Each of the DEs used on A and on C may take a different mutation operator out of M1–M8, whereas the PSO on B is kept fixed in each case. Therefore a total of 8 x 8 = 64 variants of DPD are generated, as re-explained below.
DE (with 8 different mutation operators M1–M8) on A + PSO on B (fixed) + DE (with 8 different mutation operators M1–M8) on C = DPD (64 different variants)

More clearly, a particular mutation operator M1 in the first DE can produce 8 variants of DPD by employing M1–M8 in the last DE of DPD, keeping PSO fixed in each DPD. Therefore 8 x 8 = 64 combinations of mutation operators are possible, and hence a total of 64 variants of DPD are produced. A pictorial representation of this concept is presented in Fig. 3.
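To tie Sections 3.1 and 3.2 together, the sketch below implements one illustrative DPD cycle along the lines of the pseudo code of Section 3.1.4: sort the population, split it into the inferior/mid/superior thirds, evolve A and C with DE and B with PSO, then apply Elitism and NRS. It is a simplified sketch, not the authors' code: both DE groups use DE/rand/1 here for brevity (any pair out of M1–M8 could be plugged in), the bookkeeping of velocities and personal bests across elitism is simplified, and all helper names and default values are assumptions.

    import numpy as np

    def dpd_cycle(pop, vel, pbest, gbest, f_obj, F=(0.5, 0.8), CR=0.9,
                  c1=2.0, c2=2.0, w=0.7298, lo=-100.0, hi=100.0):
        # One illustrative DPD generation; len(pop) is assumed to be a multiple of 3.
        NP, D = pop.shape
        f = np.array([f_obj(x) for x in pop])
        order = np.argsort(f)[::-1]                 # increasing fitness 1/(1+f) = decreasing f
        pop, vel, pbest, f = pop[order].copy(), vel[order].copy(), pbest[order].copy(), f[order].copy()
        old_pop, old_vel, old_pbest, old_f = pop.copy(), vel.copy(), pbest.copy(), f.copy()
        n = NP // 3
        A, B, C = np.arange(0, n), np.arange(n, 2 * n), np.arange(2 * n, NP)

        def de_on(group, F_g):                      # DE/rand/1 + binomial crossover + selection
            for i in group:
                r1, r2, r3 = np.random.choice([k for k in group if k != i], 3, replace=False)
                v = np.clip(pop[r1] + F_g * (pop[r2] - pop[r3]), lo, hi)
                mask = np.random.rand(D) <= CR
                mask[np.random.randint(D)] = True
                u = np.where(mask, v, pop[i])
                fu = f_obj(u)
                if fu <= f[i]:
                    pop[i], f[i] = u, fu

        de_on(A, F[0])                              # DE on the inferior group A
        r1 = np.random.rand(len(B), D)              # PSO on the mid group B, Eqs. (1)-(2)
        r2 = np.random.rand(len(B), D)
        vel[B] = w * vel[B] + c1 * r1 * (pbest[B] - pop[B]) + c2 * r2 * (gbest - pop[B])
        pop[B] = np.clip(pop[B] + vel[B], lo, hi)
        for i in B:                                 # update personal bests of the PSO particles
            if f_obj(pop[i]) <= f_obj(pbest[i]):
                pbest[i] = pop[i]
        f[B] = [f_obj(pop[i]) for i in B]
        de_on(C, F[1])                              # DE on the superior group C

        # Elitism: merge the cycle's starting population with the new one, keep the best NP.
        all_pop = np.vstack([old_pop, pop]); all_f = np.concatenate([old_f, f])
        all_vel = np.vstack([old_vel, vel]); all_pb = np.vstack([old_pbest, pbest])
        keep = np.argsort(all_f)[:NP]
        pop, f, vel, pbest = all_pop[keep], all_f[keep], all_vel[keep], all_pb[keep]

        # Non Redundant Search: keep one copy of each duplicate, refill the rest at random.
        _, first = np.unique(np.round(pop, 12), axis=0, return_index=True)
        dup = np.setdiff1d(np.arange(NP), first)
        if dup.size:
            pop[dup] = lo + np.random.rand(dup.size, D) * (hi - lo)
            pbest[dup] = pop[dup]
            f[dup] = [f_obj(x) for x in pop[dup]]

        gbest = pop[np.argmin(f)].copy()
        return pop, vel, pbest, gbest

In an actual run, this cycle would be repeated, carrying gbest and the pbest records forward, until the maximum number of iterations or function evaluations is reached.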
Fig. 2. Flowchart of the DPD algorithm (initialize the population and evaluate the fitness 1/(1 + f(x)), sort, split into groups A, B and C, apply DE on A, PSO on B and DE on C, combine the offspring, apply Elitism and NRS, and repeat until the stopping criterion is met).
Fig. 3. Establishment of the 64 variants of DPD: each of the mutation strategies M1–M8 in the DE on group A is paired with each of M1–M8 in the DE on group C, with PSO fixed on group B.
It is essential but challenging to identify the best DPD out of the 64 variants discussed above for solving optimization problems efficiently. For this purpose a test bed of 21 benchmark functions, classified into the three categories mentioned below, is considered and reported in Table 1.

Category I (Unimodal Functions): functions f1–f5. They are high-dimensional functions with only one global minimum each; functions in this category are 'easier' to solve than the rest.
Category II (Multimodal High-dimensional Functions): functions f6–f11. They are high-dimensional and contain many local minima, and are considered the most difficult set of problems in the list.
Category III (Multimodal Low-dimensional Functions): functions f12–f21. They have lower dimensions and fewer local minima than the functions of Category II.

The reason for choosing such a set is that it contains a mixture of unimodal, multimodal, separable and non-separable unconstrained optimization problems, so the performance of an algorithm can be well judged over all sorts of optimization problems. Each of the 21 problems is solved by each of the 64 variants of DPD. The experimental setup used during simulation is discussed in the next sub-section.
Table 1. Benchmark functions (S: search range, fmin: global minimum value).

Category I (unimodal):
f1 Sphere: f1(x) = Σ_{i=1}^{D} x_i^2; S = [-100, 100]^D; fmin = 0
f2 Schwefel 2.22: f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|; S = [-10, 10]^D; fmin = 0
f3 Rosenbrock: f3(x) = Σ_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]; S = [-30, 30]^D; fmin = 0
f4 Step: f4(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2; S = [-100, 100]^D; fmin = 0
f5 Quartic with noise: f5(x) = Σ_{i=1}^{D} i x_i^4 + random[0, 1); S = [-1.28, 1.28]^D; fmin = 0

Category II (multimodal, high-dimensional):
f6 Schwefel: f6(x) = Σ_{i=1}^{D} (-x_i sin(sqrt(|x_i|))); S = [-500, 500]^D; fmin = -418.982887 D
f7 Rastrigin: f7(x) = 10D + Σ_{i=1}^{D} (x_i^2 - 10 cos(2πx_i)); S = [-5.12, 5.12]^D; fmin = 0
f8 Ackley: f8(x) = 20 + e - 20 exp(-0.2 sqrt((1/D) Σ_{i=1}^{D} x_i^2)) - exp((1/D) Σ_{i=1}^{D} cos(2πx_i)); S = [-32, 32]^D; fmin = 0
f9 Generalized Griewank: f9(x) = (1/4000) Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i/sqrt(i)) + 1; S = [-600, 600]^D; fmin = 0
f10, f11 Generalized penalized functions: the two standard penalized functions, defined through y_i = 1 + (x_i + 1)/4 and the penalty term u(x_i, a, k, m) with parameters (10, 100, 4) for f10 and (5, 100, 4) for f11; S = [-50, 50]^D; fmin = 0

Category III (multimodal, low-dimensional):
f12 Foxholes: f12(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i - a_ij)^6)]^{-1}; S = [-65.536, 65.536]^D; fmin ≈ 0.998004
f13 Kowalik: f13(x) = Σ_{i=1}^{11} [a_i - x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2; S = [-5, 5]^D; fmin ≈ 0.0003075
f14 Six-hump camel back: f14(x) = 4x_1^2 - 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4; S = [-5, 5]^D; fmin ≈ -1.0316285
f15 Branin: f15(x) = (x_2 - (5.1/(4π^2))x_1^2 + (5/π)x_1 - 6)^2 + 10(1 - 1/(8π)) cos(x_1) + 10; S = [-5, 10] x [0, 15]; fmin ≈ 0.398
f16 Goldstein–Price: f16(x) = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]; S = [-2, 2]^D; fmin = 3
f17, f18 Hartman's family: f(x) = -Σ_{i=1}^{4} c_i exp(-Σ_{j} a_ij (x_j - p_ij)^2), with j = 1, ..., 3 for f17 and j = 1, ..., 6 for f18; S = [0, 1]^D; fmin ≈ -3.86 and -3.32
f19, f20, f21 Shekel's family: f(x) = -Σ_{i=1}^{m} [(x - a_i)(x - a_i)^T + c_i]^{-1}, with m = 5, 7, 10; S = [0, 10]^D; fmin ≈ -10.1532, -10.4029 and -10.5364
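For illustration, three of the functions from Table 1 that also appear in the later convergence plots can be coded directly from their standard definitions; the NumPy-based signatures below are an assumption of this sketch.

    import numpy as np

    def sphere(x):                      # f1: unimodal, fmin = 0 at x = 0
        return np.sum(x ** 2)

    def rastrigin(x):                   # f7: multimodal, fmin = 0 at x = 0
        return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    def ackley(x):                      # f8: multimodal, fmin = 0 at x = 0
        d = x.size
        return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

For example, sphere(np.zeros(30)) and rastrigin(np.zeros(30)) both return 0, matching the fmin column of Table 1.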
Table 2. The 64 variants of DPD and their performance, in terms of the number of functions solved under the 'top 20 scenario'. Each combination (p, q) denotes mutation strategy Mp in the DE on group A and Mq in the DE on group C.

(1,1) 4    (1,2) 9    (1,3) 6    (1,4) 8    (1,5) 8    (1,6) 5    (1,7) 8    (1,8) 5
(2,1) 2    (2,2) 1    (2,3) 0    (2,4) 1    (2,5) 2    (2,6) 1    (2,7) 3    (2,8) 3
(3,1) 19   (3,2) 18   (3,3) 21   (3,4) 18   (3,5) 19   (3,6) 18   (3,7) 20   (3,8) 18
(4,1) 16   (4,2) 18   (4,3) 16   (4,4) 15   (4,5) 16   (4,6) 17   (4,7) 17   (4,8) 17
(5,1) 1    (5,2) 1    (5,3) 1    (5,4) 3    (5,5) 2    (5,6) 1    (5,7) 2    (5,8) 2
(6,1) 10   (6,2) 9    (6,3) 8    (6,4) 10   (6,5) 9    (6,6) 10   (6,7) 10   (6,8) 10
(7,1) 13   (7,2) 11   (7,3) 15   (7,4) 12   (7,5) 11   (7,6) 11   (7,7) 13   (7,8) 10
(8,1) 9    (8,2) 6    (8,3) 8    (8,4) 6    (8,5) 7    (8,6) 10   (8,7) 9    (8,8) 7
Table 3. Function-wise solving capability of the best 20 DPDs (1: solved, 0: not solved). Each row gives, for one combination of mutation strategies, the status for f1–f21 followed by the total number of functions solved.

Combination: f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 f21 | Total
(3,1): 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 | 19
(3,2): 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 | 18
(3,3): 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 | 21
(3,4): 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 1 1 1 1 1 1 | 18
(3,5): 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 | 19
(3,6): 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 1 1 1 1 1 1 | 18
(3,7): 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 | 20
(3,8): 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 | 18
(4,1): 1 1 1 1 0 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 | 16
(4,2): 1 1 1 1 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 | 18
(4,3): 1 1 1 1 0 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 | 16
(4,4): 1 1 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 1 1 1 1 | 15
(4,5): 1 1 1 1 0 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 | 16
(4,6): 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 | 17
(4,7): 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 | 17
(4,8): 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 | 17
(7,1): 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 | 13
(7,3): 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 | 15
(7,4): 1 1 1 1 0 0 1 1 1 0 1 0 0 0 0 0 0 1 1 1 1 | 12
(7,7): 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 | 13
3.3. Experimental setup

Simulations were conducted on a P-IV, 2.8 GHz computer with 2 GB of RAM, in the C-Free Standard 4.0 environment. Through extensive experimental fine tuning, the following parameters are recommended as best suiting the proposed algorithm. The mutation factor F is 0.5 for group A (F_A = 0.5) and 0.8 for group C (F_C = 0.8). The crossover weight CR is 0.9 for both groups (CR_A = CR_C = 0.9). For PSO, the acceleration constants c1 and c2 are both set to 2.0 and the inertia weight w ranges over [0.4, 0.9], i.e. w_max = 0.9 and w_min = 0.4. If the termination criterion is a maximum number of function evaluations (NFEs), then the inertia weight w is fixed at 0.7298, as in [34,45]. The population size is set to 51 (a multiple of 3, to favor the tri-population system) for all problems and dimensions under consideration.
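For convenience, the tuned settings listed above can be grouped into one configuration object, as sketched below; the dictionary layout and key names are assumptions, while the values are those reported in this sub-section.

    DPD_SETTINGS = {
        "NP": 51,                    # population size, a multiple of 3
        "F_A": 0.5, "CR_A": 0.9,     # DE on the inferior group A
        "F_C": 0.8, "CR_C": 0.9,     # DE on the superior group C
        "c1": 2.0, "c2": 2.0,        # PSO acceleration coefficients
        "w_max": 0.9, "w_min": 0.4,  # inertia weight range (iteration-based runs)
        "w_fixed": 0.7298,           # inertia weight when NFEs is the stopping criterion
    }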
Fig. 4a. Performance evaluation for the top 4 DPDs under the CEC2005 (30D) benchmark functions.
Fig. 4b. Performance evaluation for the top 4 DPDs under the CEC2005 (50D) benchmark functions.
Fig. 4c. Performance evaluation for the top 4 DPDs under the CEC2005 (100D) benchmark functions.
3.4. Investigation of top DPDs

Under the above experimental setup, two types of evaluation criteria are considered to select the top DPDs. Out of the 64 DPDs, the top 4 DPDs are identified in Type-I; among them, the best DPD is recommended in Type-II for further use. They are discussed below.
Fig. 4d. Performance evaluation for the top 4 DPDs under 6 unconstrained real life problems taken from [28].
Type-I: based on the 'top 20 scenario'. The simulation uses 20 independent runs with 200 iterations each. The average function value over the 20 runs for each function is recorded for each of the 64 DPDs. For a fixed function, the 20 DPDs with the best average optimal solutions are collected; this is referred to as the 'top 20 scenario'. In this scenario, the number of functions solved by each DPD variant is reported in Table 2, and the functions that could be solved by the top 20 DPDs are listed in Table 3. The digit '1' in Table 3 means that the corresponding variant of DPD could solve the corresponding function; a failure is marked '0'. From Table 2 it is worth noting that no function could be solved by (2, 3); hence (2, 3) may be treated as the worst variant of DPD. The numbers in each ordered pair are the serial numbers of the mutation operators as listed in Section 2.1; for example, (3, 7) stands for (M3, M7). On the other hand, the top 4 DPDs are those using the mutation combinations (3, 3), (3, 7), (3, 5) and (3, 1), because they solve the maximum number of benchmark problems, as is evident from Tables 2 and 3. These DPDs are renamed DPD-1, DPD-2, DPD-3 and DPD-4, respectively. However, declaring the best DPD among them at this stage would carry some error owing to their probabilistic nature, and the best DPD may differ when other measures such as the number of function evaluations and the success rate are taken into consideration. Hence a further analysis to identify the best DPD out of these 4 top DPDs is essential and is discussed below.

Type-II: based on 'performance'. A popular yardstick for comparing different algorithms, the 'performance index', is reported in [50]. A new measure, named 'performance', is proposed here with a slight modification of the 'performance index', namely k1 = k2 = k3, as defined below. This signifies that equal importance is given to (i) the number of successful runs, (ii) the minimum objective function value and (iii) the average number of function evaluations:
Performance = (1/Np) Σ_{i=1}^{Np} (k1 α1^i + k2 α2^i + k3 α3^i),  subject to k1 + k2 + k3 = 1 and k1 = k2 = k3,

where, for i = 1, 2, ..., Np,

α1^i = Sr^i / Tr^i;
α2^i = Ao^i / Mo^i if Sr^i > 0, and α2^i = 0 if Sr^i = 0;
α3^i = Af^i / Mf^i if Sr^i > 0, and α3^i = 0 if Sr^i = 0.
Here Sr^i is the number of successful runs for the ith problem; Tr^i is the total number of runs for the ith problem; Mo^i is the mean optimal objective function value obtained by an algorithm for the ith problem; Ao^i is the minimum of the mean optimal objective function values obtained by the top 4 DPDs for the ith problem; Mf^i is the average number of function evaluations of successful runs required by an algorithm to obtain the solution of the ith problem; Af^i is the minimum of the average numbers of function evaluations of successful runs required by the top 4 DPDs for the ith problem; and Np is the total number of problems considered.

The evaluation of 'performance' for the top 4 DPDs is carried out over (i) the CEC2005 test functions [51] with dimensions (D) of 30, 50 and 100 and (ii) 6 real life problems [28]. A total of 25 independent runs with 100,000 NFEs per run are fixed (the remaining DPD parameters are the same as in Section 3.3). The 'performance' for the CEC2005 test functions is reported in Fig. 4a (30D), Fig. 4b (50D) and Fig. 4c (100D), and that for the real life problems in Fig. 4d. From these figures it is concluded that DPD-1 performs the best.
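A small sketch of this 'performance' measure is given below, assuming the per-problem statistics have already been collected into dictionaries; note that each ratio is taken as best-over-algorithm, an inference made here so that the measure stays within [0, 1] as plotted in Figs. 4a–4d.

    def performance(stats):
        # stats: list of per-problem dicts with keys
        #   Sr, Tr - successful and total runs,
        #   Mo, Ao - mean objective value of this algorithm / best mean among those compared,
        #   Mf, Af - mean NFEs of successful runs of this algorithm / best mean among those compared.
        k1 = k2 = k3 = 1.0 / 3.0
        total = 0.0
        for s in stats:
            a1 = s["Sr"] / s["Tr"]
            a2 = s["Ao"] / s["Mo"] if s["Sr"] > 0 else 0.0
            a3 = s["Af"] / s["Mf"] if s["Sr"] > 0 else 0.0
            total += k1 * a1 + k2 * a2 + k3 * a3
        return total / len(stats)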
Table 4. Comparison of DPD with others in [52] for the Category I & II benchmark set, in terms of function values (M: mean, S.D.: standard deviation; rankings in parentheses). Fcn
Dim
Cr.
GA
PSO
DE
GSA
DE-GSA
DPD
f1
20
M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D. M S.D.
0.0862 (6) 0.0651 (6) 0.0750 (6) 0.0152 (6) 0.0546 (6) 0.0131 (6) 1.5415 (6) 0.2228 (6) 2.1713 (6) 0.2493 (6) 2.6224 (6) 0.3626 (6) 892.2516 (6) 5.2561 (3) 659.3939 (6) 35.0295 (6) 1757.4654 (6) 45.4340 (6) 23.7364 (6) 10.2218 (6) 23.1347 (6) 4.5841 (6) 23.0706 (6) 2.8393 (6) 0.1431 (5) 0.0664 (5) 0.1307 (6) 0.0518 (6) 0.1132 (5) 0.0323 (4) 7.1444e+003 (4) 225.1424 (5) 1.4285e+004 (3) 334.7862 (2) 3.0304e+004 (3) 177.9693 (3) 17.3415 (4) 5.4070 (4) 18.0032 (4) 3.0007 (4) 27.9166 (2) 3.6881 (3) 3.0165 (6) 0.0589 (6) 2.9816 (6) 2.5227e004 (5) 2.9812 (6) 4.8999e005 (5) NaN NaN NaN NaN NaN NaN 0.2972 (6) 0.2080 (6) 0.1503 (5) 0.1134 (4) 0.0171 (6) 8.6606e004 (5) 0.0212 (4) 0.0167 (3) 0.0027 (5) 6.4061e004 (3) 3.6692e004 (5) 2.5446e004 (3) 53 50
2.3921e008 (5) 1.458e008 (5) 2.1431e007 (5) 1.154e007 (5) 1.6600e006 (5) 5.200e005 (5) 3.0915e006 (5) 2.2590e006 (5) 1.2187e005 (5) 3.6595e005 (5) 9.5727e004 (5) 3.0258e003 (5) 28.1872 (3) 7.9226 (4) 88.2824 (5) 32.5186 (5) 203.9393 (5) 69.2797 (5) 2.9861e010 (5) 2.4076e010 (5) 6.9759e009 (5) 1.200e008 (5) 2.6924e008 (5) 2.6924e008 (5) 0.0418 (4) 0.0120 (4) 0.0908 (5) 0.0268 (5) 1.5212 (6) 0.0475 (6) 4.70750e+003 (5) 4.8669e+002 (6) 9.7576e+003 (4) 1.1116e+003 (4) 1.7004e+004 (4) 1.8959e+003 (5) 28.3329 (5) 14.3352 (5) 82.7479 (6) 34.5265 (6) 195.4091 (6) 56.3481 (5) 0.0127 (5) 0.0147 (5) 0.0098 (5) 0.0102 (6) 0.0012 (5) 0.0030 (6) NaN NaN NaN NaN NaN NaN 2.2397e004 (4) 3.4072e004 (4) 0.3458 (6) 0.8222 (5) 8.2411e004 (5) 8.2886e004 (4) 0.3063 (5) 0.6793 (4) 0.5000 (6) 0.8927 (4) 42.7568 (6) 47.6000 (4) 46 47
1.1166e016 (3) 7.6700e017 (3) 2.4395e020 (3) 5.3728e020 (3) 6.3583e022 (3) 5.3732e022 (3) 5.4799e009 (3) 1.6384e009 (3) 2.5695e011 (3) 2.6668e11 (3) 2.1804e012 (3) 2.6707e011 (3) 47.7445 (4) 19.4010 (5) 60.3302 (4) 12.4921 (3) 77.3435 (4) 13.6890 (4) 7.5301e014 (3) 6.3810e014 (3) 7.8294e018 (3) 4.1231e018 (3) 2.1394e019 (3) 4.1231e019 (4) 0.0224 (3) 0.0060 (3) 0.0288 (3) 0.0075 (3) 0.0547 (3) 0.0434 (5) 8.3151e+003 (2) 59.9792 (2) 1.6641e+004 (2) 0.0000 (1) 3.3479e+004 (2) 68.4178 (2) 1.6160e009 (3) 1.4703e009 (3) 2.3882e005 (3) 5.4991e005 (3) 145.2627 (5) 59.7323 (6) 1.0706e007 (4) 4.8870e008 (4) 6.9785e010 (3) 3.7265e010 (3) 8.3041e011 (3) 3.7348e012 (3) NaN NaN NaN NaN NaN NaN 4.2327e015 (3) 3.0724e015 (3) 2.1988e018 (3) 3.4942e019 (2) 7.2401e019 (4) 2.4944e019 (3) 12.1253 (1) 0.0000 (1) 12.1253 (1) 0.0000 (1) 12.1253 (1) 0.0000 (1) 29 30
1.9108e16 (4) 1.4131e16 (4) 5.5061e17 (4) 2.0107e17 (4) 2.3377e17 (4) 2.6958e18 (4) 7.5417e08 (4) 6.0403e08 (4) 4.2746e08 (4) 4.5332e009 (4) 3.9757e08 (4) 7.0558e10 (4) 57.2537 (5) 93.6483 (6) 34.7175 (2) 0.1729 (2) 72.0420 (2) 0.0439 (2) 2.0798e13 (4) 1.1161e13 (4) 4.9932e17 (4) 9.9231e18 (4) 2.0936e17 (4) 1.0697e19 (3) 0.6554 (6) 1.5655 (6) 0.0539 (4) 0.0147 (4) 0.0549 (4) 0.0041 (3) 1.9199e+3 (6) 320.8131 (4) 2.9486e+3 (5) 498.9667 (3) 4.3034e+3 (5) 751.4651 (4) 46.9666 (6) 27.7972 (6) 29.6829 (5) 6.9052 (5) 38.3059 (3) 0.7035 (2) 1.2346e08 (3) 3.8539e09 (2) 5.5114e09 (4) 5.3301e10 (4) 2.1960e09 (4) 5.6204e11 (4) NaN NaN NaN NaN NaN NaN 0.0104 (5) 0.0402 (5) 0.0259 (4) 0.0635 (3) 5.3640e20 (3) 4.6214e21 (2) 5.5391e32 (3) 6.8897e32 (2) 4.4493e32 (4) 5.5703e32 (2) 1.4998e32 (4) 1.2726e32 (2) 46 43
7.2582e018 (2) 1.6864e17 (2) 1.2268e021 (2) 1.4505e21 (2) 4.4763e025 (2) 1.5334e24 (2) 1.4368e009 (2) 1.3166e09 (2) 8.2103e012 (2) 6.8438e13 (2) 1.2316e012 (2) 6.8541e12 (2) 24.6655 (2) 3.1346 (2) 49.6038 (3) 22.0396 (4) 75.848 (3) 4.218 (3) 1.1925e014 (2) 1.6710e14 (2) 1.6780e018 (2) 1.4098e18 (2) 1.7443e020 (2) 7.5054e21 (2) 0.0113 (2) 0.0024 (2) 0.0232 (2) 0.0070 (2) 0.0533 (2) 0.0021 (2) 8.1108e+003 (3) 205.7557 (3) -1.6641e+004 (2) 0.0000 (1) 3.3519e+004 (1) 0.0000 (1) 1.0622e013 (2) 3.4876e013 (2) 3.3810e013 (2) 5.2729e013 (2) 78.1390 (4) 15.6996 (4) 9.5664e009 (2) 1.6838e008 (3) 1.5410e010 (2) 1.3703e010 (2) 1.4788e012 (2) 4.8736e013 (2) NaN NaN NaN NaN NaN NaN 2.3564e016 (2) 9.1262e016 (2) 1.1779e032 (1) 0.0000 (1) 5.8895e033 (1) 0.0000 (1) 12.1253 (1) 0.0000 (1) 1.1504 (2) 0.0000 (1) 1.1504 (2) 0.0000 (1) 20 21
1.3018e198 (1) 2.4138e199 (1) 6.8429e184 (1) 2.4138e186 (1) 2.0873e167 (1) 4.3126e178 (1) 5.3208e158 (1) 9.6783e173 (1) 4.4632e148 (1) 1.9542e154 (1) 5.0314e122 (1) 9.6642e131 (1) 2.3487e012 (1) 1.5138e012 (1) 7.0329e004 (1) 2.0471e004 (1) 8.5102e000 (1) 1.3901e002 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 6.7328e005 (1) 3.3201e005 (1) 1.1402e004 (1) 4.7014e005 (1) 1.9032e003 (1) 5.6324e004 (1) 8.3796e+03 (1) 0.0000e000 (1) 1.6759e+04 (1) 0.0000e000 (1) 3.3519e+04 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 0.0000e000 (1) 4.16246e15 (1) 0.0000e000 (1) 4.16246e15 (1) 0.0000e000 (1) 4.16246e15 (1) 0.0000e000 (1) 0.0000e000 0.0000e000 0.0000e000 0.0000e000 0.0000e000 0.0000e000 1.6926e32 (1) 1.1574e32 (1) 4.7192e32 (2) 0.0000e000 (1) 4.7192e32 (2) 0.0000e000 (1) 1.3243e32 (2) 0.0000e000 (1) 1.3243e32 (3) 0.0000e000 (1) 1.3243e32 (3) 0.0000e000 (1) 11 10
40 80 f2
20 40 80
f3
20 40 80
f4
20 40 80
f5
20 40 80
f6
20 40 80
f7
20 40 80
f8
20 40 80
f9
20 40 80
f10
20 40 80
f11
20 40 80
RANK
20
Table 4 (continued). Rank summary.
Rank for Dim 40 (M / S.D.): GA 53 / 48; PSO 52 / 50; DE 28 / 25; GSA 40 / 35; DE-GSA 20 / 19; DPD 13 / 10.
Rank for Dim 80 (M / S.D.): GA 51 / 47; PSO 52 / 50; DE 31 / 34; GSA 37 / 30; DE-GSA 21 / 20; DPD 12 / 10.
Average rank: GA 50.33; PSO 49.50; DE 29.50; GSA 38.50; DE-GSA 20.16; DPD 11.00.
NaN: non-availability of results; Cr.: criterion; values in parentheses are rankings.
Table 5. Comparison of DPD with others in [52] for the Category III benchmark set, in terms of function values (each cell: mean / standard deviation, with rankings in parentheses).

f12 (D = 2): GA 0.9980 (1) / 0.0000 (1); PSO 0.9980 (1) / 0.0000 (1); DE 0.9980 (1) / 0.0000 (1); GSA 4.5297 (2) / 2.7184 (2); DE-GSA 0.9980 (1) / 0.0000 (1); DPD 0.9980 (1) / 0.0000 (1)
f13 (D = 4): GA 0.0017 (4) / 3.6960e-004 (3); PSO 0.0046 (5) / 0.0012 (5); DE 0.0014 (3) / 4.5493e-004 (4); GSA 0.0057 (6) / 0.0024 (6); DE-GSA 7.1010e-004 (2) / 1.8186e-004 (2); DPD 3.075e-04 (1) / 1.572e-21 (1)
f14 (D = 2): GA -1.0316 (1) / 0.0000 (1); PSO -1.0316 (1) / 0.0000 (1); DE -1.0316 (1) / 0.0000 (1); GSA -1.0316 (1) / 0.0000 (1); DE-GSA -1.0316 (1) / 0.0000 (1); DPD -1.0316 (1) / 0.0000 (1)
f15 (D = 2): GA 0.3981 (2) / 3.8806e-004 (2); PSO 0.3979 (1) / 0.0000 (1); DE 0.3979 (1) / 0.0000 (1); GSA 0.3979 (1) / 0.0000 (1); DE-GSA 0.3979 (1) / 0.0000 (1); DPD 0.3979 (1) / 0.0000 (1)
f16 (D = 2): GA 3.0036 (2) / 0.0028 (3); PSO 3.0000 (1) / 0.0000 (1); DE 3.0222 (3) / 0.0014 (2); GSA 3.0000 (1) / 0.0000 (1); DE-GSA 3.0000 (1) / 0.0000 (1); DPD 3.0000 (1) / 0.0000 (1)
f17 (D = 3): GA -3.8620 (2) / 0.0018 (2); PSO -3.8628 (1) / 0.0000 (1); DE -3.8628 (1) / 0.0000 (1); GSA -3.7989 (3) / 0.1406 (3); DE-GSA -3.8628 (1) / 0.0000 (1); DPD -3.8628 (1) / 0.0000 (1)
f18 (D = 6): GA -3.2744 (2) / 0.0651 (2); PSO -3.2657 (3) / 0.5732 (3); DE -3.3220 (1) / 0.0000 (1); GSA -2.4750 (4) / 0.7890 (4); DE-GSA -3.3220 (1) / 0.0000 (1); DPD -3.3220 (1) / 0.0000 (1)
f19 (D = 4): GA -6.5630 (3) / 3.4866 (3); PSO -5.0551 (4) / 0.0000 (1); DE -9.9104 (2) / 0.5209 (2); GSA -5.0552 (5) / 0.0000 (1); DE-GSA -10.1532 (1) / 0.0000 (1); DPD -10.1532 (1) / 0.0000 (1)
f20 (D = 4): GA -6.4789 (4) / 3.1670 (5); PSO -5.0876 (5) / 2.1503e-7 (2); DE -10.1182 (2) / 0.3283 (3); GSA -9.3399 (3) / 2.2007 (4); DE-GSA -10.4029 (1) / 0.0000 (1); DPD -10.4029 (1) / 0.0000 (1)
f21 (D = 4): GA -10.5280 (2) / 0.0038 (2); PSO -5.1284 (4) / 0.0000 (1); DE -10.2201 (3) / 0.3967 (3); GSA -10.5364 (1) / 0.0000 (1); DE-GSA -10.5364 (1) / 0.0000 (1); DPD -10.5364 (1) / 0.0000 (1)
Rank (M / S.D.): GA 23 / 24; PSO 26 / 17; DE 18 / 19; GSA 27 / 24; DE-GSA 11 / 11; DPD 10 / 10
Average rank: GA 23.50; PSO 21.50; DE 18.50; GSA 25.50; DE-GSA 11.00; DPD 10.00
Undoubtedly, DPD-1 performs better than the rest for the wide variety of problems under consideration, including unimodal/multimodal, separable/non-separable, unimodal and shifted, multimodal and shifted, expanded multimodal and hybrid composition (i.e. non-separable, rotated and multimodal functions containing a large number of local minima) unconstrained problems, as well as real life problems. Hence only DPD-1 is considered for further study and comparison with other state-of-the-art algorithms, and henceforth it is referred to simply as DPD.

4. Efficiency of DPD over others

Prior to this section, the 64 variants of DPD were compared among themselves to identify the top 4 DPDs, and the best among them was then selected; henceforth 'DPD' means this best DPD. The selected problems have been well studied before as benchmarks by various approaches; therefore it is necessary to compare the results obtained by DPD with other classical evolutionary approaches. Comparative results are reported in the respective tables cited afterwards. Boldface values in the tables represent the better value achieved by the corresponding algorithm and 'NaN' indicates non-availability of results. The aim of this section is to examine the efficiency of the proposed DPD in the different scenarios discussed in the following sub-sections.

4.1. Comparison over basic benchmark functions

In this sub-section DPD is compared with 9 different state-of-the-art approaches on basic benchmark functions taken from [41,52]: GA, PSO, DE, GSA and DE–GSA [52], and DE/rand/1, DE/best/1, DE/target-to-best/1 and GDE [41]. For DPD, the stopping criteria and the number of independent runs are kept the same as in the compared papers. All other parameters of the DE and PSO used in DPD are the same as in Section 3.3. The simulation results of DPD on the problems in [52] (containing problems of Categories I, II and III) are discussed below.
Algorithm
Cr.
f1
f2
f3
f4
f5
f6
f7
f8
f9
f10
f11
f12
f13
GDE
M S.D. M S.D. M S.D. M S.D. M S.D. NFEs S.R. (%)
4.82e46 1.13e45 1.38e36 1.19e36 1.60e39 1.39e39 2.95e41 2.69e41 6.29e308* 0.00e+00 36525 100
2.88e21 4.99e21 7.48e19 3.58e19 4.29e20 3.63e20 9.35e21 3.65e21 7.84e308* 0.00e+00 51480 100
1.34e24 2.77e24 1.17e20 9.57e21 1.84e22 1.35e22 4.69e24 3.77e24 6.42e308* 0.00e+00 52530 100
1.04e14 2.91e14 3.05e13 2.35e13 2.68e14 2.80e14 3.94e15 1.70e15 3.08e296 0.00e+00 35381 100
1.33e15 3.37e15 2.33e12 3.35e12 2.28e21 3.28e21 5.99e22 1.26e22 0.00e+00 0.00e+00 2579 100
0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 5.80e05 2.72e06 40686 100
1.33e03 6.05e04 1.78e03 6.78e04 2.01e03 8.39e04 1.69e03 7.76e04 1.04e04 2.24e04 55171 100
7.79e+02 1.87e+02 2.46e+02 3.61e+02 6.66e+02 3.73e+02 5.01e+02 1.35e+02 0.00e+00 0.00e+00 6650 100
5.60e+00 1.57e+00 1.88e+01 3.24e+00 6.47e+00 1.64e+00 2.19e+01 3.39e+00 0.00e+00 0.00e+00 1221 100
7.99e15 2.90e15 4.44e15 0.00e+00 5.15e15 1.49e15 4.44e15 0.00e+00 4.14e15 0.00e+00 50373 100
8.88e02 4.69e02 1.82e02 9.15e02 1.22e02 1.02e01 3.13e02 8.62e02 0.00e+00 0.00e+00 3027 100
4.71e32 1.15e47 4.71e32 1.15e37 1.71e32 1.15e47 4.71e32 1.53e47 1.57e32 3.63e48 34048 100
1.35e32 2.88e48 1.35e32 2.88e48 1.35e32 2.88e48 1.35e32 2.88e48 1.35e32 2.88e48 37179 100
DE/rand/1 DE/best/1 DE/targettobest/1 DPD
* The simulation platform does not support values lower than 10^-308; beyond this the value becomes '0'.
Table 6. Comparison of DPD with others in [41] for 10-dimensional benchmark functions taken from [41], in terms of function value.
Algorithm
Cr.
f1
f2
f3
f4
f5
f6
f7
f8
f9
f10
f11
f12
f13
GDE
M S.D. M S.D. M S.D. M S.D. M S.D. NFEs S.R. (%)
6.07e24 8.53e24 1.36e03 5.30e04 4.97e04 3.32e04 1.10e04 4.73e05 7.06e308* 0.00e+00 40528 100
1.76e07 4.19e07 2.13e01 7.31e02 2.88e02 7.53e03 2.04e02 8.34e03 8.19e308* 0.00e+00 54883 100
1.76e02 2.11e02 1.31e+04 3.75e+02 4.74e+02 1.81e+02 2.69e+02 7.35e+01 6.24e308* 0.00e+00 56305 100
3.26e01 2.68e01 2.81e+00 3.65e+01 1.00e+00 3.22e01 8.34e01 1.92e01 8.02e308* 0.00e+00 38815 100
5.22e+00 5.19e+00 2.72e+01 6.32e01 3.35e+01 2.83e+01 2.86e+01 2.04e+01 0.00e+00 0.00e+00 2789 100
1.56e23 2.65e23 1.36e03 3.84e04 6.74e04 3.16e04 1.03e04 3.62e05 2.02e05 2.62e06 46662 100
1.90e02 6.10e03 2.48e02 6.15e03 2.76e02 6.85e03 2.03e02 5.10e03 2.98e04 1.56e04 58781 100
2.90e+03 8.86e+02 7.00e+03 2.87e+02 3.10e+03 7.15e+12 4.38e+03 1.34e+03 1.15e+03 1.04e+02 8650 100
4.75e+01 1.20e+01 1.96e+02 7.63e+01 1.11e+02 1.90e+01 2.02e+02 6.95e+00 0.00e+00 0.00e+00 1628 100
2.13e10 1.12e10 1.80e02 3.41e03 8.16e03 2.82e03 3.60e03 9.85e04 7.15e15 9.95e16 56703 100
8.13e03 9.79e03 7.26e03 2.93e03 5.79e03 5.36e03 4.03e03 3.99e03 0.00e+00 0.00e+00 5675 100
6.13e21 7.05e-22 5.68e04 2.64e04 1.19e04 7.36e05 3.32e05 3.05e05 1.77e21 3.08e29 38488 100
5.54e23 9.19e23 2.51e03 9.61e04 1.40e03 3.46e03 1.02e04 7.88e05 1.69e23 1.04e29 42196 100
DE/rand/1 DE/best/1 DE/target-to-best/1 DPD
* The simulation platform does not support values lower than 10^-308; beyond this the value becomes '0'.
Table 7. Comparison of DPD with others in [41] for 30-dimensional benchmark functions taken from [41], in terms of function value.
Considering the problems in Categories I and II, the mean (M) and standard deviation (S.D.) for each test function are reported in Table 4, and the same for Category III in Table 5. From both tables DPD is found to provide very competitive results compared with the 5 state-of-the-art approaches in [52]; only for f11 does DPD give a poorer result than DE and DE–GSA, and only in terms of the mean. In order to report the overall performance, the algorithms are also ranked according to their M and S.D. The best-performing algorithm is ranked '1', the next performer '2', and so on; the ranks are provided in parentheses, equal performers receive the same rank, and no rank is assigned when an algorithm could not solve a problem. The total and average ranks are placed in the last two rows of Tables 4 and 5. It is seen from Tables 4 and 5 that DPD attains a much lower rank, which establishes it as a high performer compared with its competitors on the functions of these categories. For the 13 basic benchmark functions taken from [41], the comparative results (M and S.D.) are presented in Tables 6 and 7 for 10D and 30D respectively.
Fig. 5a. Convergence of DPD with others in [52] for the Sphere function (80D).
Fig. 5b. Convergence of DPD with others in [52] for the Rosenbrock function (80D).
Fig. 5c. Convergence of DPD with others in [52] for the Schwefel function (80D).
Fig. 5d. Convergence of DPD with others in [52] for the Rastrigin function (80D).
Clearly from Tables 6 and 7, only for f6 does DPD provide a poorer result than the others for 10D, and a worse result than GDE alone for 30D. It is needless to mention that DPD also provides a 100% success rate like the others. It is important to note, however, that the NFEs required by DPD are far fewer than those of the others, namely 100,000 and 300,000 for Tables 6 and 7 respectively (the stopping criteria taken in the referred papers). In totality, DPD is the best among the considered algorithms.

4.2. Convergence analysis

In this sub-section the convergence speed of DPD is compared with the other approaches reported in [41,52] over a set of 5 higher-dimensional typical test functions (Sphere, Rosenbrock, Schwefel, Rastrigin and Ackley).
Fig. 5e. Convergence of DPD with others in [52] for the Ackley function (80D).
For comparison of DPD with [41], D = 30 (i.e. the highest dimension of the problems solved in [41]) is picked, and for comparison with [52], D = 80 (i.e. the highest dimension of the problems solved in [52]). Starting from the same seed, all algorithms are allowed to run over generations and NFEs, which implies that all algorithms start their journey from the same initial population. For the above 5 typical test functions, the convergence graphs are presented separately in Figs. 5(a–e) and Figs. 6(a–e). From these figures it can be concluded that on all the functions DPD converges much faster than the other algorithms in [41,52], and that DPD is highly stable in minimizing unconstrained optimization problems.

4.3. Comparison over 25 CEC2005 unconstrained benchmark functions

In this sub-section a different set of 25 CEC2005 unconstrained benchmark functions (taken from [51]) with 30D and 50D is solved by DPD. Among them, only 8 problems (F1, F2, F4, F5, F6, F9, F12 and F13) are solved with 100D, as experimented in [53]. To verify the robustness of DPD, the stopping criteria and the number of independent runs are kept the same as in [53].
Fig. 6a. Convergence of DPD with others in [41] for the Sphere function (30D).
Fig. 6b. Convergence of DPD with others in [41] for the Rosenbrock function (30D).
Fig. 6c. Convergence of DPD with others in [41] for the Schwefel function (30D).
During simulation, all remaining parameters of the DE and PSO used in DPD are the same as quoted in Section 3.3. The M and S.D. of DPD are compared with those of 7 state-of-the-art approaches, namely DE, SADE, ODE, JADE, NDE, MDE_pBX and MDE [53]. For the 25 problems, the average objective function values of the algorithms are compared in Tables 8 and 9 for 30D and 50D respectively, and for 100D the results on the 8 benchmark problems are compared in Table 10. From Tables 8–10 it is observed that DPD consistently performs better, with comparatively low S.D. To compare the overall performance, the algorithms are ranked as discussed earlier; the total rank and average rank of each algorithm are presented in the last two rows of Table 8 (30D), Table 9 (50D) and Table 10 (100D). From the ranking it is clear that the performance of DPD is superior to that of all 7 state-of-the-art algorithms. Concretely, DPD is superior not only to the traditional DE but also to the other recent variants of DE.
Fig. 6d. Convergence of DPD with others in [41] for Rastrigin Function (30D).
Fig. 6e. Convergence of DPD with others in [41] for Ackley Function (30D).
4.4. Comparison over real life problems

The efficiency of DPD in solving real life problems is judged in this sub-section. For this, a set of 6 popular real life problems (RP-1 to RP-6), taken from [28], is listed below.

RP-1: Gas transmission design.
RP-2: Optimal capacity of gas production facilities.
RP-3: Design of a gear train.
RP-4: Optimal thermo-hydraulic performance of an artificially roughened air heater.
RP-5: Frequency modulation sounds parameter identification problem.
RP-6: The spread spectrum radar poly-phase code design problem.
Table 8 Comparison of DE, SADE, ODE, JADE, NDE, MDE_pBX, MDE and DPD on 25 instances (CEC2005) for 30D in terms of Mean and Standard Deviation. F
F1
F2
F3
F4
F5
F6
F7
F8
F9
F10
F11
F12
F13
F14
F15
F16
F17
Algorithm DE
SADE
ODE
JADE
NDE
MDE_pBX
MDE
DPD
4.500e+02 (1) 3.6945e06 (5) 2.6746e+04 (7) 5.1516e+03 (8) 2.0319e+08 (8) 5.0078e+07 (8) 3.8092e+04 (6) 7.3196e+03 (5) 3.4612e+03 (6) 8.4382e+02 (4) 5.3569e+02 (7) 1.6375e+02 (7) 1.786e+02 (6) 2.0896e01 (7) 1.1897e+2 (5) 4.1375e02 (2) 1.9560e+2 (8) 1.0122e+01 (3) 8.4303e+1 (8) 9.6522e+00 (2) 1.3136e+02 (8) 1.4112e+00 (4) 3.4538e+05 (8) 5.3975e+04 (7) 1.1245e+2 (7) 1.2139e+00 (4) 2.8622e+2 (8) 1.2968e01 (3) 4.1818e+02 (2) 7.2447e+01 (2) 4.2109e+02 (6) 5.2301e+01 (4) 4.6157e+02 (6) 6.5204e+01
4.500e+02 (1) 3.0056e11 (4) 1.4418e+03 (3) 8.1815e+02 (4) 1.2993e+07 (6) 5.6644e+06 (5) 8.0429e+04 (7) 1.6406e+04 (8) 1.8659e+03 (3) 5.3702e+02 (2) 4.6066e+02 (5) 6.0811e+01 (4) 1.797e+02 (5) 1.4955e01 (6) 1.1898e+2 (4) 5.7571e02 (5) 2.8782e+2 (4) 6.1550e+00 (2) 1.2690e+2 (6) 1.1150e+01 (3) 1.2824e+02 (7) 1.2740e+00 (2) 5.5368e+04 (6) 4.1296e+04 (5) 1.2142e+2 (3) 9.7213e01 (2) 2.8636e+2 (6) 1.0495e01 (2) 4.5334e+02 (3) 1.0548e+02 (8) 3.4470e+02 (3) 3.5633e+01 (2) 4.3282e+02 (4) 7.0255e+01
4.500e+02 (1) 0.000e+00 (1)
4.500e+02 (1)
1.4661e+02 (2) 4.1644e+02 (6)
4.500e+02 (1)
4.500e+02 (1)
4.0881e14 (2)
0.0000e+00 (1)
1.1594e+04 (6) 4.4786e+03 (7) 1.3569e+08 (7) 2.7147e+07 (7) 1.0108e+04 (3) 6.6140e+03 (4) 1.6057e+03 (2) 8.7081e+02 (5) 4.5787e+02 (4) 6.6482e+01 (5) 1.799e+02 (4) 5.4975e02 (5) 1.1898e+2 (4) 6.7260e02 (7) 2.0502e+2 (7) 2.1369e+01 (8) 1.1173e+2 (7) 1.1929e+01 (4) 1.2816e+02 (6) 4.0920e+00 (7) 1.4291e+05 (7) 1.0362e+05 (8) 1.1534e+2 (6) 1.0909e+00 (3) 2.8633e+2 (7) 1.9370e01 (4) 4.7149e+02 (5) 9.4605e+01 (5) 4.0871e+02 (5) 6.8472e+01 (5) 4.4654e+02 (5) 9.9609e+01
4.4731e+02 (2) 2.0987e+00 (3)
4.500e+02 (1) 5.5855e14 (3) 2.0087e+03 (4) 2.5578e+03 (5) 6.4650e+06 (4) 3.7899e+06 (4) 2.3090e+04 (5) 1.0604e+04 (6) 6.1802e+03 (8) 1.5258e+03 (7) 4.6999e+02 (6) 9.2086e+01 (6) 1.799e+02 (4) 2.9207e02 (4) 1.196e+02 (2) 1.0481e01 (8) 2.8441e+2 (5) 1.0307e+01 (4) 2.1853e+2 (3) 3.1357e+01 (6) 1.2745e+02 (5) 3.6591e+00 (5) 4.7407e+04 (5) 5.2339e+04 (6) 1.1832e+2 (5) 7.4104e+00 (7) 2.8638e+2 (5) 4.3753e01 (7) 4.9399e+02 (8) 1.0226e+02 (7) 3.8265e+02 (4) 1.4330e+02 (7) 4.2747e+02 (3) 1.4630e+02
3.8330e+03 (5)
4.500e+02 (1)
2.2761e+03 (6)
4.5000e+02 (1) 2.7998e05 (2)
1.2550e+07 (5)
3.7715e+05 (2)
5.7772e+06 (6)
1.9341e+05 (2)
1.3248e+04 (4) 1.2960e+04 (7)
1.1571e+02 (2) 3.0443e+02 (2)
6.0691e+03 (7)
2.0704e+03 (4)
1.6239e+03 (8)
6.4474e+02 (3)
9.2399e+07 (8)
4.2657e+02 (3)
1.4136e+08 (8)
2.6391e+01 (3)
4.6111e+03 (7) 3.6450e+02 (8)
1.7998e+02 (3) 9.9092e03 (2)
1.11052e+05 (1) 1.10271e+05 (1) 2.4832e+02 (1) 2.01532e+01 (1) 1.60264e+03 (1) 1.00381e+02 (1) 4.16255e+02 (1) 3.95173e+01 (1) 1.8000e+02 (1) 0.0000e+00 (1)
1.1898e+02 (4) 5.0283e02 (4)
1.1900e+02 (3) 4.3267e02 (3)
2.8970e+02 (3) 1.0417e+01 (5)
2.9352e+02 (2) 1.0917e+01 (7)
2.1517e+02 (4) 3.4585e+01 (7)
2.5348e+02 (2) 4.6193e+01 (8)
1.1955e+02 (2)
1.2627e+02 (3)
3.8406e+00 (6)
6.8903e+00 (8)
4.5588e+04 (4)
9.6035e+03 (2)
2.1288e+04 (4)
1.6371e+04 (3)
1.8011e+01 (8) 3.6810e+02 (8)
1.2218e+02 (2) 6.5013e+00 (6)
2.8720e+02 (2) 2.6255e01 (5)
2.8657e+02 (4) 5.8719e01 (8)
4.7968e+02 (6)
4.8557e+02 (7)
8.9911e+01 (3)
1.0045e+02 (6)
4.2285e+02 (7)
2.2365e+02 (2)
1.6421e+02 (8)
4.0095e+01 (3)
4.8753e+02 (8)
2.5447e+02 (2)
1.4694e+02 (8)
5.9974e+01 (2)
0.000e+00 (1)
5.2368e+06 (3) 3.2671e+06 (3) 8.1057e+02 (8) 1.0292e+03 (3) 2.7459e+03 (5) 8.8045e+02 (6) 4.1593e+02 (2) 2.5021e+01 (2) 1.7999e+02 (2) 1.1860e02 (3) 1.1898e+02 (4) 5.9958e02 (6) 2.5624e+02 (6) 1.0890e+01 (6) 1.4464e+02 (5) 1.2946e+01 (5) 1.2716e+02 (4) 1.2749e+00 (3) 1.2060e+04 (3) 1.2693e+04 (2) 1.1868e+02 (4) 1.2555e+00 (5) 2.8667e+02 (3) 2.6286e01 (6) 4.6534e+02 (4) 9.1949e+01 (4) 4.2860e+02 (8) 1.0752e+02 (6) 4.6747e+02 (7) 1.2779e+02 (6)
0.0000e+00 (1)
1.2102e+02 (1) 2.67035e02 (1) 3.3000e+02 (1) 0.0000e+00 (1) 3.2573e+02 (1) 1.85439e01 (1) 1.12254e+02 (1) 9.34716e01 (1) 1.15362e+03 (1) 1.10838e+03 (1) 1.2841e+02 (1) 1.30281e01 (1) 2.8885e+02 (1) 1.09361e02 (1) 4.08566e+02 (1) 1.45025e01 (1) 2.20481e+02 (1) 1.56573e+01 (1) 2.24586e+02 (1) 2.37248e+01
Table 8 (continued)
F18
F19
F20
F21
F22
F23
F24
F25
Rank A.R.
Algorithm DE
SADE
ODE
(3) 9.1237e+02 (4) 8.1847e01 (2) 9.1240e+02 (5) 6.6292e01 (2) 9.1240e+02 (4) 6.6292e01 (2) 1.2215e+03 (3) 9.6986e+01 (7) 1.2716e+03 (6) 1.5740e+01 (3) 9.3311e+02 (3) 3.5998e+01 (3) 1.2418e+03 (7) 2.8669e+00 (5) 5.1524e+02 (6) 1.1246e+01 (6) 145 108 126.50
(4) 9.1113e+02 (1) 1.0728e+00 (4) 9.1109e+02 (1) 1.3062e+00 (4) 9.1118e+02 (1) 1.3086e+00 (4) 1.2393e+03 (5) 1.0169e+02 (2) 1.2501e+03 (3) 1.6492e+01 (4) 1.0893e+03 (7) 1.7335e+02 (6) 1.0206e+03 (6) 2.6790e+02 (6) 4.7493e+02 (4) 1.4614e+00 (4) 104 102 103.00
(5) 9.1211e+02 (3) 7.9801e01 (1) 9.1160e+02 (2) 5.3881e01 (1) 9.1162e+02 (2) 5.4773e01 (1) 1.2160e+03 (2) 1.1372e+02 (8) 1.2487e+03 (2) 1.5490e+01 (2) 1.0577e+03 (5) 1.1922e+02 (5) 4.6000e+02 (2) 0 (1) 4.7353e+02 (3) 9.3482e01 (2) 107 111 109.00
JADE 9.1188e+02 (2) 1.2566e+00 (5) 9.1228e+02 (4) 1.9346e+00 (5) 9.1287e+02 (5) 2.5337e+00 (5) 1.3214e+03 (7) 2.6261e+01 (4) 1.2696e+03 (5) 2.6040e+01 (5) 1.0692e+03 (6) 1.8421e+02 (7) 6.9167e+02 (3) 2.6631e+02 (4) 4.7519e+02 (5) 2.1431e+00 (5) 108 110 109.00
NDE (7) 9.1657e+02 (6) 2.4690e+00 (6) 9.2299e+02 (7) 9.4301e+00 (7) 9.2449e+02 (8) 1.0699e+01 (6) 1.3619e+03 (8) 2.5198e+01 (3) 1.3592e+03 (8) 4.9916e+01 (8) 1.0403e+03 (4) 1.0595e+02 (4) 8.6952e+02 (5) 2.8776e+02 (7) 7.8148e+02 (7) 3.6393e+02 (7) 130 147 138.50
MDE_pBX
MDE
DPD
9.3862e+02 (7)
9.1290e+02 (5)
2.4544e+01 (8)
2.5957e+00 (7)
9.3586e+02 (8)
9.1437e+02 (6)
1.1294e+01 (8)
3.8817e+00 (6)
9.1389e+02 (6)
9.2040e+02 (7)
(1) 9.12110e+02 (3) 1.00562e+00 (3) 9.12023e+02 (3) 1.09237e+00 (3) 9.1216e+02 (3)
4.2513e+01 (8)
1.1292e+01 (7)
1.0062e+00 (3)
1.3078e+03 (6)
1.2261e+03 (4)
1.1935e+03 (1)
3.8095e+01 (5)
7.5031e+01 (6)
1.0141e+02 (1)
1.3365e+03 (7)
1.2593e+03 (4)
1.2168e+03 (1)
3.7679e+01 (7)
2.9899e+01 (6)
1.6092e+01 (1)
1.2585e+03 (8)
8.9850e+02 (2)
6.2015e+02 (1)
1.8919e+02 (8)
3.8922e+00 (2)
1.0932e+00 (1)
1.3227e+03 (8)
7.0835e+02 (4)
4.2201e+02 (1)
2.1035e+02 (2)
2.8510e+02 (3)
0.0000e+00 (1)
8.3086e+02 (8)
4.7267e+02 (2)
4.2872e+02 (1)
3.6807e+02 (8)
1.2977e+00 (3)
1.0014e01 (1)
144 161 152.50
79 110 94.50
31 31 31.00
Table 9 Comparison of DE, SADE, ODE, JADE, NDE, MDE_pBX, MDE and DPD on 25 instances (CEC2005) for 50D in terms of Mean and Standard Deviation. F
Algorithm DE
SADE
ODE
JADE
NDE
MDE_pBX
MDE
DPD
F1
4.500e+02 (1) 1.7964e04 (6) 1.1180e+05 (8) 1.1835e+04 (7) 6.8382e+08 (8) 1.1985e+08 (8) 1.4095e+05 (7) 1.4658e+04 (4) 1.4173e+04 (7) 1.9303e+03 (7) 1.2381e+03 (6)
4.500e+02 (1) 0 (1)
4.500e+02 (1) 1.0556e14 (2)
4.500e+02 (1)
4.50e+02 (1) 1.6693e12 (5)
7.2432e+04 (7)
1.3830e+04 (6)
1.3918e+04 (8)
4.500e+02 (1) 2.2811e13 (4) 4.4998e+02 (2) 2.1913e02 (2)
4.5000e+02 (1) 0.0000e+00 (1)
7.5228e+03 (4) 1.6141e+03 (4) 1.8961e+07 (5) 6.2099e+06 (4) 2.4739e+05 (8) 3.0298e+04 (7) 4.9305e+03 (4) 1.0489e+03 (4) 4.7638e+02 (4)
7.6117e014 (3) 2.4475e+02 (3) 2.4211e+02 (3)
4.3421e+08 (7)
3.1870e+06 (3)
1.4520e+07 (4)
9.6158e+05 (2)
3.0252e+05 (1)
6.5760e+07 (7)
1.2706e+06 (3)
1.0780e+07 (5)
4.5503e+05 (2)
1.2872e+05 (1)
8.1216e+04 (6)
1.4688e+04 (3)
7.1351e+04 (4)
7.1202e+03 (2)
4.8631e+02 (1)
2.2712e+04 (6)
5.6533e+03 (3)
2.1837e+04 (5)
3.8631e+03 (2)
4.3624e+00 (1)
3.7425e+03 (3)
7.5236e+03 (5)
1.5800e+04 (8)
5.9634e+03 (2)
2.2861e+03 (1)
7.3124e+02 (2)
9.2514e+02 (3)
2.5109e+03 (8)
1.2436e+03 (5)
9.8791e+01 (1)
8.1275e+04 (7)
4.4350e+02 (2)
5.3902e+02 (5)
2.1668e+03 (2) 2.1963e+03 (7) 1.3616e+04 (5) 4.3723e+03 (5) 3.7041e+07 (6) 1.4568e+07 (6) 7.3321e+04 (5) 3.2211e+04 (8) 1.3132e+04 (6) 1.9585e+03 (6) 4.2828e+08 (8)
4.7150e+02 (3)
4.1625e+02 (1)
F2
F3
F4
F5
F6
6.4318e+03 (6)
4.5000e+02 (1) 0.0000e+00 (1)
Table 9 (continued)
F7
F8
F9
F10
F11
F12
F13
F14
F15
F16
F17
F18
F19
F20
F21
F22
F23
Algorithm DE
SADE
ODE
JADE
NDE
MDE_pBX
MDE
DPD
6.3239e+02 (6) 1.758e+02 (3) 1.4489e+00 (7) 1.188e+02 (4) 4.5854e02 (6) 4.4274e+1 (8) 1.0235e+01 (3) 1.2466e+02 (8) 1.6017e+01 (2) 1.6494e+02 (8) 1.4359e+00 (2) 1.4051e+06 (8) 1.6861e+05 (7) 9.1155e+1 (6) 2.2120e+00 (4) 2.7650e+2 (7) 1.6729e01 (5) 3.7478e+02 (2) 5.1343e+01 (3) 4.5071e+02 (8) 2.4655e+01 (2) 5.0033e+02 (7) 4.1408e+01 (2) 1.0017e+03 (6) 4.1562e+00 (3) 1.0019e+03 (5) 4.3084e+00 (4) 1.0019e+03 (4) 4.3084e+00 (4) 1.0686e+03 (2) 2.9283e+02 (6) 1.2759e+03 (2) 3.8179e+00 (2) 1.3759e+03 (7)
4.3935e+01 (3) 1.799e+02 (2) 3.7930e02 (6) 1.188e+02 (4) 4.7877e02 (7) 2.4581e+2 (3) 1.0045e+01 (2) 2.8880e+01 (6) 2.0394e+01 (4) 1.5908e+02 (6) 2.3719e+00 (4) 6.0582e+04 (3) 4.5807e+04 (3) 1.1240e+2 (3) 1.3332e+00 (2) 2.7663e+2 (5) 1.5672e01 (4) 3.9333e+02 (3) 7.4837e+01 (7) 4.1530e+02 (7) 5.8622e+01 (4) 4.5860e+02 (5) 5.3273e+01 (3) 9.8697e+02 (4) 2.2144e+00 (1) 9.8686e+02 (2) 2.0767e+00 (2) 9.8686e+02 (2) 2.0767e+00 (2) 1.0708e+03 (3) 3.1910e+02 (7) 1.2710e+03 (1) 6.7102e+00 (3) 1.3732e+03 (5)
2.4839e+05 (7)
4.7511e+01 (4)
1.0529e+02 (5)
4.4779e+01 (2)
3.9517e+01 (1)
1.7999e+02 (2) 7.8084e03 (2)
1.7999e+02 (2) 1.8460e02 (4)
1.799e+02 (2)
1.7999e+02 (2) 1.6094e02 (3)
1.800e+02 (1)
1.188e+02 (4)
1.188e+02 (4)
1.197e+02 (2)
2.8210e02 (2)
3.4861e02 (5)
1.4087e01 (8)
5.7423e+01 (7) 2.3832e+01 (7)
1.8694e+02 (6) 2.1430e+01 (6)
2.3158e+92 (4) 1.8667e+01 (5)
6.9063e+01 (7)
2.5921e+01 (5)
1.2942e+2 (4)
1.9994e+01 (3)
2.4060e+01 (5)
4.0019e+01 (6)
1.5459e+02 (3)
1.5837e+02 (4)
1.5993e+02 (7)
9.3300e+00 (7)
1.6729e+00 (3)
6.9811e+00 (6)
4.2143e+05 (7)
6.4440e+04 (4)
1.0580e+05 (5)
3.7810e+05 (8)
4.4235e+04 (4)
9.0150e+04 (6)
1.0057e+02 (5) 2.0177e+00 (3)
1.0294e+02 (4) 3.6511e+00 (5)
8.0977e+1 (7)
2.7659e+2 (6)
2.7689e+2 (3)
2.7646e+2 (8)
2.0201e01 (6)
1.5651e01 (3)
3.2953e01 (7)
4.2106e+02 (5)
4.9387e+02 (7)
4.7095e+02 (6)
9.5291e+01 (8)
6.6025e+01 (6)
6.2782e+01 (5)
4.1135e+02 (6)
3.9416e+02 (5)
3.5054e+02 (3)
2.8022e+01 (3)
5.9249e+01 (5)
9.8523e+01 (7)
5.3342e+02 (8)
4.4436e+02 (4)
3.8512e+02 (3)
7.9835e+01 (6)
6.1157e+01 (4)
7.7240e+01 (5)
9.8830e+02 (5)
9.6619e+02 (2)
1.0269e+03 (7)
3.1197e+00 (2)
5.2689e+01 (7)
1.8808e+01 (5)
9.8828e+02 (3)
1.0072e+03 (6)
1.0303e+03 (7)
3.1400e+00 (3)
1.3972e+01 (5)
1.8525e+01 (6)
9.8826e+02 (3)
1.0050e+03 (5)
1.0242e+03 (7)
3.1438e+00 (3)
1.1035e+01 (5)
1.4069e+01 (6)
1.0549e+03 (1)
1.3882e+03 (7)
1.4462e+03 (8)
3.3719e+02 (8)
2.0250e+01 (2)
2.6411e+01 (4)
1.2878e+03 (4)
1.3217e+03 (6)
1.3994e+03 (7)
2.0626e+01 (5)
1.9599e+01 (4)
4.4210e+01 (7)
1.3735e+03 (6)
1.1436e+03 (3)
1.3288e+03 (4)
4.7271e+08 (8) 6.0392e+03 (4) 2.7916e+01 (8) 1.188e+02 (4) 3.2671e02 (3) 2.3025e+2 (5) 2.5230e+01 (8) 1.4064e+2 (3) 5.3112e+01 (7) 1.4566e+02 (2) 4.6649e+00 (5) 2.0591e+05 (6) 7.0587e+04 (5) 4.2938e+1 (8) 6.1975e+01 (8) 2.7746e+2 (2) 4.3401e01 (8) 5.4492e+02 (8) 4.6900e+01 (2) 3.7986e+02 (4) 1.2198e+02 (8) 4.9436e+02 (6) 1.4937e+02 (8) 1.0612e+03 (8) 4.2678e+01 (6) 1.0630e+03 (8) 2.1452e+01 (7) 1.0702e+03 (8) 2.0428e+01 (8) 1.3034e+03 (5) 1.2139e+01 (1) 1.5476e+03 (8) 8.5036e+01 (8) 1.5023e+03 (8)
2.6739e02 (5)
3.3298e+01 (7)
1.1884e+02 (3) 3.3455e02 (4) 2.4585e+02 (2) 1.8039e+01 (4) 1.7652e+02 (2) 9.3024e+01 (8)
0.0000e+00 (1) 1.2102e+02 (1) 2.67035e02 (1) 3.3000e+02 (1) 0.0000e+00 (1)
1.5872e+02 (5)
3.2573e+02 (1) 1.85439e01 (1) 1.2458e+02 (1)
9.9312e+00 (8)
3.7251e01 (1)
3.8276e+04 (2)
1.0078e+03 (6)
2.64825e+04 (1) 5.85381e+03 (1) 1.2841e+02 (1) 2.80371e01 (1) 2.8885e+02 (1) 2.94311e02 (1) 2.50622e+02 (1) 4.02402e01 (1) 2.28856e+02 (1) 1.75832e+01 (1) 2.59786e+02 (1) 3.81248e+01 (1) 9.61048e+02 (1) 5.60281e+00 (4) 9.29382e+02 (1) 2.02257e+00 (1) 9.4816e+02 (1)
1.4222e+01 (7)
1.0865e+00 (1)
1.3383e+03 (6)
1.1258e+03 (4)
4.2562e+01 (5)
2.4516e+01 (3)
1.3093e+03 (5)
1.2874e+03 (3)
2.5058e+01 (6)
1.6972e+00 (1)
1.0690e+03 (1)
1.1186e+03 (2)
2.6880e+04 (2) 1.2264e+02 (2) 7.3062e+00 (6) 2.7669e+2 (4) 1.4781e01 (2) 4.2598e+02 (4) 5.8153e+01 (4) 2.6269e+02 (2) 7.1326e+01 (6) 3.3638e+02 (2) 1.3359e+02 (7) 9.8227e+02 (3) 5.4594e+01 (8) 9.9068e+02 (4) 3.0581e+01 (8)
Table 9 (continued)
F24
F25
Rank A.R.
Algorithm DE
SADE
ODE
JADE
NDE
MDE_pBX
MDE
DPD
1.9594e+00 (3) 1.2846e+03 (7) 3.1223e+00 (3) 6.5380e+02 (5) 2.3393e+01 (5) 144 111 127.50
1.7342e+00 (1) 9.6994e+02 (5) 3.2183e+02 (6) 4.9078e+02 (3) 2.7980e+00 (3) 98 94 96
1.9163e+00 (2)
2.2847e+02 (8)
6.7112e+01 (5)
1.6214e+02 (7)
2.9625e+00 (4)
8.1507e+02 (3)
8.3826e+02 (4)
1.0630e+03 (6)
4.6017e+02 (2)
4.3815e+02 (1)
3.1323e+02 (5)
3.7638e+02 (8)
3.3390e+02 (7)
2.0769e01 (2)
1.0452e01 (1)
4.8756e+02 (2)
8.0311e+02 (6)
1.0871e+03 (7)
4.9173e+02 (4)
4.4628e+02 (1)
2.1402e+00 (2)
3.6216e+02 (7)
4.0000e+02 (8)
6.0382e+00 (4)
1.4620e01 (1)
118 117 117.50
104 115 109.50
132 149 140.50
9.4738e+01 (6) 1.5609e+03 (8) 5.4633e+01 (4) 1.5510e+03 (8) 5.0937e+01 (6) 145 156 150.50
73 118 95.50
35 33 34.00
Table 10 Comparison of DE, SADE, ODE, JADE, NDE, MDE_pBX, MDE and DPD on eight instances (CEC2005) for 100D in terms of Mean and Standard Deviation. F
Algorithm DE
SADE
ODE
JADE
NDE
MDE_pBX
MDE
DPD
F1
3.9729e+2 (2) 1.0195e+01 (7) 4.8898e+05 (8) 4.6288e+04 (8) 6.0287e+05 (7) 4.6366e+04 (5) 3.3445e+04 (8) 2.4342e+03 (4) 6.0286e+06 (7) 1.7079e+06 (7) 4.3996e+02 (8) 1.7893e+01 (2) 1.1531e+07 (8) 5.3871e+05 (6) 1.2913e+04 (8) 4.2364e+03 (8) 56 47 51.50
4.5000e+02 (1) 1.0556e14 (2)
4.5000e+02 (1) 2.7927e14 (3)
4.5000e+02 (1) 1.7726e13 (4)
4.5000e+02 (1) 1.9162e12 (7)
4.0361e+05 (7)
4.7821e+03 (3)
7.1110e+04 (6)
7.2590e+03 (5)
3.8850e+04 (7)
1.9255e+03 (3)
2.6575e+04 (6)
8.6559e+05 (8)
4.8812e+05 (6)
6.5057e+04 (3)
2.5516e+05 (5)
9.5530e+04 (8)
6.0969e+04 (7)
1.1318e+04 (3)
5.8440e+04 (6)
9.0970e+03 (2)
2.0015e+04 (5)
1.8170e+04 (4)
3.7364e+04 (7)
1.4968e+03 (2)
2.6617e+03 (6)
2.7954e+03 (7)
4.9192e+03 (8)
5.3886e+02 (4)
3.8231e+03 (6)
5.1772e+02 (2)
5.9252e+02 (5)
4.7929e+01 (3)
1.8192e+04 (6)
4.5437e+01 (2)
4.8659e+01 (4)
4.7505e+1 (5)
4.2843e+02 (7)
4.7171e+01 (6)
5.8524e+1 (4)
1.8852e+01 (3)
1.9229e+01 (4)
9.6301e+01 (8)
5.3878e+01 (7)
7.3591e+05 (6)
8.5265e+06 (7)
2.0294e+05 (2)
6.0507e+05 (5)
8.8750e+05 (7)
2.0376e+06 (8)
9.4541e+04 (2)
3.3526e+05 (5)
8.5496e+01 (3) 2.7601e+00 (2)
6.0181e+01 (4) 4.2222e+00 (5)
5.2135e+01 (5) 6.6864e+00 (6)
3.3033e+01 (6)
34 32 33.00
43 46 44.50
26 35 30.50
39 50 44.50
4.500e+02 (1) 9.9413e13 (5) 3.4821e+2 (2) 1.4884e+02 (2) 4.7601e+04 (2) 8.5649e+03 (2) 1.3111e+04 (3) 2.3598e+03 (3) 5.3113e+02 (3) 4.8777e+01 (5) 9.2237e+1 (2) 3.5764e+01 (5) 3.1332e+05 (3) 1.9249e+05 (4) 1.1644e+2 (2) 3.1570e+00 (4) 18 30 24.00
4.5000e+02 (1) 0.0000e+00 (1)
5.0449e+04 (5)
7.2374e+2 (3) 1.1694e+3 (8) 1.1060e+4 (4) 4.4484e+3 (4) 7.9207e+4 (4) 1.4969e+4 (4) 3.2187e+4 (6) 2.4362e+3 (5) 3.7760e+8 (8) 3.4090e+8 (8) 7.172e+1 (3) 3.9769e+1 (6) 5.0854e+5 (4) 1.6627e+5 (3) 2.2204e+2 (7) 2.9786e+2 (3) 39 41 40.00
F2
F4
F5
F6
F9
F12
F13
Rank A.R.
4.6378e+01 (7)
4.5000e+02 (1) 0.0000e+00 (1) 2.0532e+04 (1) 1.2948e+02 (1) 4.1704e+03 (1) 6.2047e+00 (1) 3.69128e+02 (1) 1.48072e+00 (1) 3.3000e+02 (1) 1.00491e03 (1) 4.85703e+04 (1) 3.81752e+04 (1) 1.1913e+02 (1) 4.71053e01 (1) 8 8 8.00
The performance of DPD is compared with DEPSO-2S, PSO-2S, SPSO-2007 and DE. For a fair comparison, the stopping criterion and the number of independent runs are kept the same as in [28]. The remaining parameters of DPD are the same as quoted in Section 3.3. The mean and standard deviation of the best objective function value over 30 independent runs are reported in Table 11 for all the real life problems. From Table 11, it is seen that out of the 6 RPs, DPD provides better values for all problems except RP-2, on which it performs equally with the others.
Fig. 7. Average number of function evaluations (NFEs) of DPD with others in [28] for 6 Real Life Problems.
Table 11 Comparison of DPD with others in [28] for 6 real life problems, in terms of mean and standard deviation. RPs
*
DE
SPSO-2007
PSO-2S
DEPSO-2S
DPD
RP-1
D 3
NFEs 24000
2.964e+6 (0.264829)
7.432e+6 (2.28e009)
2.964e+6 (1.40e009)
RP-2
2
16000
1.698e+2 (0.000021)
1.698e+2 (1.14e013)
1.698e+2 (1.14e013)
RP-3
4
32000
RP4* RP-5
3
24000
1.7638e8 (3.5157e8) 4.21422 (5.0847e7)
6
144000
3.01253 (0.367899)
RP-6
10
240000
0.626379 (0.0821391)
20
480000
1.07813 (0.0812955)
2.9644e+6 (4.66e010) 1.6984e+2 (1.14e013) 1.4362e9 (5.05e009) 7.267e16 (5.69e016) 9.7517e+0 (6.65e+000) 5.0075e1 (1.61e001) 8.6597e1 (2.52e001)
1.401e10 (3.35e010) 2.3198e6 (1.25e005) 2.5853e+0 (3.30e+000) 3.5080e1 (7.19e002) 5.3979e1 (1.25e001)
1.397e10 (2.65e010) 3.1712e5 (7.54e005) 2.0743e+0 (3.07e+000) 3.0049e1 (7.07e002) 5.3799e1 (8.25e002)
2.9629e+6 (4.48e010) 1.6984e+2 (1.14e013) 4.556e12 (7.75e013) 4.2143e+0 (2.81e006) 1.8557e+0 (3.11e001) 1.0085e1 (2.15e002) 2.0696e1 (2.27e002)
maximization problem.
Table 12 Comparative statistical results of DPD with other algorithms. Comparison
Worse
Signed Rank Test
t-Test
For Category I and II_20 dimensional unconstrained benchmark functions (cited in [52]) DPD Vs GA Mean 10 0 S. D. 10 0 DPD Vs PSO Mean 10 0 S. D. 10 0 DPD Vs DE Mean 9 0 S. D. 9 1 DPD Vs GSA Mean 10 0 S. D. 10 0 DPD Vs DEGSA Mean 9 0 S. D. 9 1
Criteria
Better
Equal
0 0 0 0 1 0 0 0 1 0
+ + + + + +
a+ a+ a+ a+ a a a+ a+ a a
For Category I and II_40 dimensional unconstrained benchmark functions (cited in [52]) DPD Vs GA Mean 10 0 S. D. 10 0 DPD Vs PSO Mean 10 0
0 0 0
+ + +
a+ a+ a+
Table 12 (continued)
Criteria
Better
Equal
Worse
Signed Rank Test
t-Test
S. D. Mean S. D. Mean S. D. Mean S. D.
10 9 8 10 10 8 7
0 0 2 0 0 0 3
0 1 0 0 0 2 0
+ + +
a+ a a a+ a+ a a
For Category I and II_80 dimensional unconstrained benchmark functions (cited in [52]) DPD Vs GA Mean 10 0 S. D. 10 0 DPD Vs PSO Mean 10 0 S. D. 10 0 DPD Vs DE Mean 9 0 S. D. 8 2 DPD Vs GSA Mean 10 0 S. D. 10 0 DPD Vs DEGSA Mean 7 1 S. D. 7 3
0 0 0 0 1 0 0 0 2 0
+ + + + + +
a+ a+ a+ a+ a a a+ a+ a a
For Category III_10 unconstrained benchmark functions (cited in [52]) DPD Vs GA Mean 8 S. D. 8 DPD Vs PSO Mean 5 S. D. 3 DPD Vs DE Mean 5 S. D. 5 DPD Vs GSA Mean 6 S. D. 5 DPD Vs DE-GSA Mean 1 S. D. 1
2 2 5 7 5 5 4 5 9 9
0 0 0 0 0 0 0 0 0 0
a a a a a a a a a a
For 10 dimensional unconstrained benchmark functions (cited in [41]) DPD Vs GDE Mean 11 S. D. 11 DPD Vs DE/rand/1 Mean 11 S. D. 11 DPD Vs DE/best/1 Mean 11 S. D. 11 DPD Vs DE/target-to-best/1 Mean 11 S. D. 11
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
a a a a a a a a
For 30 dimensional unconstrained benchmark functions (cited in [41]) DPD Vs GDE Mean 12 S. D. 12 DPD Vs DE/rand/1 Mean 13 S. D. 13 DPD Vs DE/best/1 Mean 13 S. D. 13 DPD Vs DE/target-to-best/1 Mean 13 S. D. 13
0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0
+ + + + + +
a a a+ a+ a+ a+ a+ a+
0 3 3 0 2 2 1 0 0 0 0 0 0 0
+ + + + + + + + +
a+ a a a+ a a a a+ a+ a+ a+ a+ a+ a+
2 2 2 2 1
a a a a a
DPD Vs DE DPD Vs GSA DPD Vs DEGSA
For 30 dimensional CEC2005 unconstrained benchmark functions (cited in [53]) DPD Vs DE Mean 24 1 S. D. 22 0 DPD Vs SADE Mean 21 1 S. D. 25 0 DPD Vs ODE Mean 21 2 S. D. 22 1 DPD Vs JADE Mean 24 0 S. D. 25 0 DPD Vs NDE Mean 24 1 S. D. 25 0 DPD Vs MDE_pBX Mean 25 0 S. D. 25 0 DPD Vs MDE Mean 23 2 S. D. 25 0 For 50 dimensional CEC2005 unconstrained benchmark functions (cited in [53]) DPD Vs DE Mean 22 1 S. D. 23 0 DPD Vs SADE Mean 21 2 S. D. 23 0 DPD Vs ODE Mean 23 1
Table 12 (continued)
Criteria
Better
Equal
Worse
Signed Rank Test
t-Test
S. D. Mean S. D. Mean S. D. Mean S. D. Mean S. D.
23 24 24 24 25 25 24 23 25
0 1 0 1 0 0 0 1 0
2 0 1 0 0 0 1 1 0
+ + + + +
a a+ a a+ a+ a+ a a a+
For 100 dimensional CEC2005 unconstrained benchmark functions (cited in [53]) DPD Vs DE Mean 8 0 S. D. 8 0 DPD Vs SADE Mean 7 1 S. D. 8 0 DPD Vs ODE Mean 7 1 S. D. 8 0 DPD Vs JADE Mean 7 1 S. D. 8 0 DPD Vs NDE Mean 7 1 S. D. 8 0 DPD Vs MDE_pBX Mean 8 0 S. D. 8 0 DPD Vs MDE Mean 7 1 S. D. 8 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ + + + + + + + + + + + + +
a+ a+ a+ a+ a+ a+ a+ a+ a+ a+ a+ a+ a+ a+
For the real life problems (cited in [28]) DPD-1 Vs DE Mean S. D. DPD-1 Vs SPSO-2007 Mean S. D. DPD-1 Vs PSO-2S Mean S. D. DPD-1 Vs DEPSO-2S Mean S. D.
0 0 0 0 0 0 0 0
+ + + + + + + +
a+ a+ a+ a+ a+ a+ a+ a+
DPD Vs JADE DPD Vs NDE DPD Vs MDE_pBX DPD Vs MDE
6 7 6 7 6 7 6 7
1 0 1 0 1 0 1 0
The average NFEs for all the algorithms under consideration are compared in Fig. 7, for all 6 RPs, through box plots. In each box, the lower limit represents the minimum, the middle line the average, and the upper limit the maximum NFEs used by a particular algorithm to solve the 6 RPs. From both Table 11 and Fig. 7 it is concluded that DPD outperforms the rest owing to its lower S.D. and fewer NFEs. Undoubtedly, DPD is either superior or comparable to the state-of-the-art algorithms for all the real life problems under consideration.
Fig. 8a. Empirical distribution of normalized success performance on all 25 CEC2005 (30D) unconstrained benchmark functions.
Fig. 8b. Empirical distribution of normalized success performance on all 25 CEC2005 (50D) unconstrained benchmark functions.
5. Statistical validation of the DPD results

In this section, the superiority of DPD over the other algorithms is statistically validated with the help of the t-test, an efficient non-parametric test, namely the Wilcoxon Signed Rank Test [54], and the empirical distribution of normalized success performance test [13]. These tests are explained below.

5.1. t-Test

A pairwise one-tailed t-test with 98 degrees of freedom (df) at a 5% significance level is used for the hypothesis testing. A null hypothesis H0 (there is no performance difference between the algorithms) and an alternative hypothesis H1 (there is a performance difference) are considered for the test. The grading of an algorithm based on this test depends on the p-value, and the conclusions of the tests are drawn based on the following criteria.

(i) if p-value > 0.10, the statistical difference is not significant.
(ii) if p-value ≤ 0.10, the statistical difference is marginally significant.
(iii) if p-value = 0.05, the statistical difference is zero.
(iv) if p-value < 0.05, the statistical difference is significant.
(v) if p-value ≤ 0.01, the statistical difference is highly significant.

Based on the above criteria, any two algorithms, say algorithm 1 (A1) and algorithm 2 (A2), are comparatively graded as follows.

a+: A1 is better than A2 with high significance.
a: A1 is significantly better than A2.
b+: A1 performs equally with A2.
b: A1 performs marginally worse than A2.
c: A1 is significantly worse than A2.
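As an illustration only, the band in which a p-value falls can be computed as in the following sketch (Python/SciPy). It assumes a pooled two-sample test on the best objective values of 50 independent runs per algorithm, which is consistent with the quoted 98 degrees of freedom; the letter grades above are then assigned from the band together with which algorithm has the better mean.

    from scipy import stats

    def one_tailed_t_test(runs_a1, runs_a2):
        """Pooled two-sample t-test of H1: A1 attains lower best objective values than A2.
        With 50 independent runs per algorithm, df = 50 + 50 - 2 = 98."""
        t, p = stats.ttest_ind(runs_a1, runs_a2, alternative='less')   # requires SciPy >= 1.6
        if p <= 0.01:
            band = 'highly significant'
        elif p < 0.05:
            band = 'significant'
        elif p <= 0.10:
            band = 'marginally significant'
        else:
            band = 'not significant'
        return t, p, band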
The results of the t-test on all the benchmark functions cited in [41,52,53] and on all the real life problems cited in [28], for all considered dimensions, are reported in the last column of Table 12. DPD is compared with the state-of-the-art algorithms separately in terms of the Mean and S.D. of the best objective function values over the number of runs mentioned in Section 4.

Table 13
Results of application 1 (PID controller) produced by DPD and compared with DE and PSO.

Algorithm   Kp         Ki       Kd       PID (Fit)
PSO         155.8195   1.9139   0.2958   29.5958
DE          155.8108   1.9135   0.2958   29.5745
DPD         186.8554   2.3011   0.0685   29.1058
It is observed from Table 12 that DPD receives the grade 'a' or 'a+' in every case, indicating that DPD is either significantly better than, or better with high significance than, the other algorithms.

5.2. Wilcoxon Signed Rank Test

The nonparametric Wilcoxon Signed Rank Test [54] is used in this section for the statistical comparison between two different algorithms. It is particularly useful when the data cannot be assumed to be normally distributed.
Fig. 9a. Step Response of PSO, DE and DPD for PID controllers.
Fig. 9b. Convergence of PSO, DE and DPD for PID controllers (over 50 iterations).

Table 14
Average number of function evaluations (NFEs) of DPD for application 2 (L–J problem) containing 3–15 atoms and compared with others in [55,56].

Atoms:   3 4 5 6 7 8 9 10 11 12 13 14 15
GA:      150 3673 8032 31395 48900 121247 346397 721370 NaN NaN NaN NaN NaN
WX-PM:   846 1746 7437 12086 26598 38612 61807 158816 253963 302258 426488 513479 566714
LX-PM:   386 1244 5929 11671 24853 32807 58498 138846 199814 280910 327712 459716 489462
WX-LLM:  1020 1892 8431 16546 29892 39613 65448 161724 267389 326847 548193 605372 632186
LX-LLM:  420 1205 5158 10267 24098 31840 57731 137662 198436 271763 320675 456732 471529
FIW:     323 2875 17254 25860 252915 410413 335679 338616 541240 NaN NaN NaN NaN
LDIW:    815 81669 183380 247040 342315 485683 594880 672600 NaN NaN NaN NaN NaN
NLIW:    994 39933 90168 NaN NaN 338100 400868 NaN NaN NaN NaN NaN NaN
GAIW:    298 2488 30830 38340 171461 105131 341924 303270 NaN 980850 NaN NaN NaN
DPD:     196 996 2508 8275 20450 28480 52355 126564 188560 197863 256755 398638 415945
The hypotheses H0 and H1 remain the same as described for the t-test. In this study, the test assigns '+', '−' or '≈' when algorithm 1 performs significantly better than, significantly worse than, or equal to algorithm 2, respectively. The results of the Wilcoxon Signed Rank Test on all the benchmark functions cited in [41,52,53] and on all the real life problems cited in [28], for all considered dimensions, are presented in Table 12. In this test, DPD and the other algorithms are compared through the Mean and S.D. of the best objective function values over the number of runs mentioned in Section 4. From Table 12 it can be seen that DPD performs equally with the others in a few cases but performs significantly better in most of the cases with respect to all the considered parameters.
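A minimal sketch of how such a '+'/'−'/'≈' entry could be produced with SciPy is given below; it assumes the per-function mean best values of the two algorithms are treated as paired samples, which is one common way of applying the test (the exact pairing used in the study is not restated here).

    import numpy as np
    from scipy import stats

    def signed_rank_symbol(a1_values, a2_values, alpha=0.05):
        """Wilcoxon signed-rank test on paired per-function results of A1 and A2.
        Returns '+', '-' or '=' depending on significance and on which algorithm
        wins on more functions (minimization assumed)."""
        a1 = np.asarray(a1_values, float)
        a2 = np.asarray(a2_values, float)
        _, p = stats.wilcoxon(a1, a2)
        if p >= alpha:
            return '='                      # no significant difference
        return '+' if np.sum(a1 < a2) > np.sum(a2 < a1) else '-'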
5.3. Empirical cumulative distribution (ECD) test

Over a set of problems, the performance of an algorithm relative to others can be measured by plotting the empirical distribution of the normalized success performance [13]. The success performance (SP) is defined as

SP = (average number of function evaluations of the successful runs × total number of runs) / (number of successful runs).

A run is declared a 'successful run' if |f(x) − f(x*)| ≤ 0.0001, where f(x) is the known global minimum and f(x*) is the obtained minimum. In this test, the SP of each algorithm on each of the test functions is first calculated. The SPs are then normalized by dividing the SP for a particular function by the SP of the best algorithm on that function.
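The sketch below (Python, illustration only) computes SP from the recorded runs exactly as defined above, normalizes it per function, and returns the points of the empirical distribution curve; the array names are hypothetical.

    import numpy as np

    def success_performance(run_fevals, run_best, f_global, tol=1e-4):
        """SP of one algorithm on one function: mean FEs of the successful runs
        times (total runs / number of successful runs)."""
        run_fevals = np.asarray(run_fevals, float)
        success = np.abs(np.asarray(run_best, float) - f_global) <= tol
        if not success.any():
            return np.inf                          # no successful run on this function
        return run_fevals[success].mean() * len(run_fevals) / success.sum()

    def normalized_sp(sp_table):
        """sp_table: shape (n_functions, n_algorithms); divide each row by its best SP."""
        sp_table = np.asarray(sp_table, float)
        return sp_table / sp_table.min(axis=1, keepdims=True)

    def empirical_distribution(norm_sp_column):
        """x/y points of the curve plotted in Figs. 8a-8b for one algorithm."""
        x = np.sort(norm_sp_column[np.isfinite(norm_sp_column)])
        y = np.arange(1, x.size + 1) / norm_sp_column.size
        return x, y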
Fig. 10a. Average number of function evaluations (NFEs) of DPD with others in [55,56] for Lennard–Jones (L–J) cluster optimization problem.
Table 15
Success Rate (SR) of DPD for application 2 (L–J problem) containing 3–15 atoms and compared with others in [55,56].

Atoms:   3 4 5 6 7 8 9 10 11 12 13 14 15
WX-PM:   100 100 100 100 100 89 93 94 65 81 75 72 58
LX-PM:   100 100 100 100 98 94 92 91 70 94 82 80 64
WX-LLM:  100 100 100 93 89 78 83 86 62 71 69 65 56
LX-LLM:  100 100 100 100 100 97 96 93 77 86 84 78 69
FIW:     100 64 68 4 3 4 8 5 1 0 NaN NaN NaN
LDIW:    98 55 50 1 1 3 1 1 0 0 NaN NaN NaN
NLIW:    95 45 50 0 0 3 2 0 0 0 NaN NaN NaN
GAIW:    99 71 85 4 12 8 10 2 0 1 NaN NaN NaN
DPD:     100 100 100 100 100 100 100 100 98 96 88 89 84
Results of all the functions where at least one algorithm was successful in at least one run are used. As reported in [13], small values of SP and large values of the empirical distribution in the graphs are preferable; the first algorithm to reach the top of the graph is treated as the best algorithm. The ECD test is applied to the set of all 25 CEC2005 (30D and 50D) test functions cited in [53]. The empirical distribution over all functions versus SP/SPbest is plotted in Figs. 8a and 8b for the 30 and 50 dimensional CEC2005 test functions respectively. It is very clear from these figures that the first algorithm to reach the top of the graph is DPD, which again confirms the high performance of DPD over the rest of the state-of-the-art algorithms.

6. Applications of DPD

In this section, in order to assess the feasibility and effectiveness of DPD in solving engineering design problems, the following problems are considered.

Application 1: Proportional-Integral-Derivative (PID) controller optimization problem.
Application 2: Lennard–Jones (L–J) cluster optimization problem.
Application 3: Chemical optimization problem (Cs–Rb–V series sulfuric acid catalyst).
Fig. 10b. Success Rate (SR) of DPD with others in [55,56] for Lennard–Jones (L–J) cluster optimization problem.
Table 16
Success performance (SP) of DPD for application 2 (L–J problem) containing 3–15 atoms and compared with others in [55,56].

Atoms:   3 4 5 6 7 8 9 10 11 12 13 14 15
WX-PM:   846 1746 7437 12086 26598 43384 66459 168953 390712 373158 568651 713165 977093
LX-PM:   386 1244 5929 11671 25360 34901 63585 152578 285449 298840 399649 574645 764784
WX-LLM:  1020 1892 8431 17791 33587 50786 78853 188051 431273 460348 794483 931342 1128904
LX-LLM:  420 1205 5158 10267 24098 32825 60136 148024 257709 316003 381756 585554 683375
FIW:     323 4492.19 25373.53 646500 8430500 10260325 4195988 6772320 54124000 NaN NaN NaN NaN
LDIW:    831.63 148489.1 366760 24704000 34231500 16189433 59488000 67260000 NaN NaN NaN NaN NaN
NLIW:    1046.32 88740 180336 NaN NaN 11270000 20043400 NaN NaN NaN NaN NaN NaN
GAIW:    301.01 3504.23 36270.59 958500 1428842 1314138 3419240 15163500 NaN NaN NaN NaN NaN
DPD:     196 996 2508 8275 20450 28480 52355 126564 192409 206108 291768 447907 495173
6.1. Application 1 (PID controller optimization)

The Proportional-Integral-Derivative (PID) controller has been widely used in industry because of its simple structure and robust performance over a wide range of operating conditions. It has, however, been quite difficult to tune the gains of PID controllers properly, because many industrial plants are burdened with high orders, time delays and nonlinearities. Tuning the PID controller parameters means adjusting the proportional, integral and derivative values to enable the controller to satisfy performance specifications such as margin of stability, transient response and bandwidth. Over the years, several intelligent algorithms have been proposed for the tuning of PID controllers. In order to show the effectiveness of DPD in the control engineering field, it is used here to optimize the three PID parameters, namely the proportional gain (Kp), the integral gain (Ki) and the derivative gain (Kd). A simple PID controller is selected to test the performance of the three algorithms in this section. The transfer function of the controlled system is described as follows.

G(s) = \frac{1}{0.0067\,s^{2} + 0.1\,s}
The PID controller problem is solved by three different algorithms: DE, PSO and DPD. Here the interest is to check the efficiency of DPD against the traditional DE and PSO. For these algorithms the following parameters are fixed for the simulation.
Fig. 10c. Success performance (SP) of DPD with others in [55,56] for Lennard–Jones (L–J) cluster optimization problem.
Table 17
Average Error (AE) and Average Execution Time (AET) of DPD for application 2 (L–J problem) containing 3–15 atoms and compared with others in [55,56].

Atoms:        3 4 5 6 7 8 9 10 11 12 13 14 15
AE, WX-PM:    0.00019 0.00038 0.00071 0.00042 0.00475 0.00517 0.00642 0.00875 0.01613 0.01792 0.01816 0.02824 0.16464
AE, LX-PM:    0.00016 0.00048 0.00062 0.00031 0.00258 0.00486 0.00513 0.00689 0.01145 0.00933 0.0171 0.01263 0.02781
AE, WX-LLM:   0.00021 0.00039 0.00084 0.00048 0.00661 0.00719 0.00583 0.00974 0.01835 0.01938 0.02767 0.01946 0.12987
AE, LX-LLM:   0.00012 0.00043 0.00057 0.00025 0.00418 0.00393 0.00422 0.00673 0.00919 0.00962 0.01115 0.00994 0.03169
AE, DPD:      0.00005 0.00010 0.00026 0.00015 0.00110 0.00195 0.00225 0.00475 0.00620 0.00697 0.00995 0.00646 0.00884
AET, FIW:     0.001 0.015 0.134 0.275 3.712 7.856 7.608 9.169 18.465 NaN NaN NaN NaN
AET, LDIW:    0.003 0.436 1.498 2.788 5.118 9.213 13.307 18.376 NaN NaN NaN NaN NaN
AET, NLIW:    0.003 0.21 0.698 NaN NaN 6.26 9.064 NaN NaN NaN NaN NaN NaN
AET, GAIW:    0.001 0.013 0.236 0.412 2.406 1.889 7.746 8.254 NaN 37.898 NaN NaN NaN
AET, DPD:     0.001 0.018 0.154 0.296 2.295 2.896 6.886 8.195 14.557 28.952 36.866 43.532 48.170
Fig. 10d. Average Error (AE) of DPD with others in [55] for Lennard–Jones (L–J) cluster optimization problem.
For DE: F = 0.5, CR = 0.9. For PSO: w = 0.7298, C1 = C2 = 2.0. For DPD: as described in Section 3.3. The population size is set to 30, the maximum NFEs to 500 and the search range of Kp, Ki and Kd to [−200, 200], and 10 independent runs are performed for every algorithm under consideration. The objective function of the PID controller problem is defined as follows:
Obj(PID) = \int_{0}^{\infty} \bigl( a_{1}\,|e(t)| + a_{2}\,u^{2}(t) \bigr)\,dt + a_{3}\,t_{u}
where e(t) is the error, u(t) the controlled variable and t_u the rise time; a1 = 0.999, a2 = 0.001 and a3 = 2.0 are the weighting factors of e(t), u(t) and t_u respectively. Comparative results of DPD and the other algorithms for the PID controller are presented in Table 13. The step response and the convergence graph (over 50 iterations) of PSO, DE and DPD for the PID controller are plotted in Figs. 9a and 9b respectively. The comparison illustrates that DPD achieves higher performance with a faster rate of convergence than its individual ingredients, DE and PSO.
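For illustration, the following sketch (Python/NumPy/SciPy, not from the original study) evaluates this objective for a candidate gain vector: the infinite integral is approximated on a finite horizon, the control signal u(t) is reconstructed numerically from the PID law acting on e(t), and the rise time t_u is taken as the first time the unit-step response reaches 90% of the reference. All of these choices are assumptions made for the sketch.

    import numpy as np
    from scipy import signal

    def pid_objective(Kp, Ki, Kd, a1=0.999, a2=0.001, a3=2.0, t_end=1.0, n=2000):
        """Approximate Obj(PID) for the plant G(s) = 1/(0.0067 s^2 + 0.1 s) under a
        unit step reference with the PID controller C(s) = (Kd s^2 + Kp s + Ki)/s."""
        num_ol = [Kd, Kp, Ki]                                  # numerator of C(s)G(s)
        den_ol = np.polymul([1.0, 0.0], [0.0067, 0.1, 0.0])    # s * (0.0067 s^2 + 0.1 s)
        den_cl = np.polyadd(den_ol, num_ol)                    # unity-feedback closed loop
        t = np.linspace(0.0, t_end, n)
        _, y = signal.step((num_ol, den_cl), T=t)              # closed-loop step response
        e = 1.0 - y                                            # tracking error
        int_e = np.concatenate(([0.0], np.cumsum(0.5 * (e[1:] + e[:-1]) * np.diff(t))))
        u = Kp * e + Ki * int_e + Kd * np.gradient(e, t)       # reconstructed PID output
        reached = np.nonzero(y >= 0.9)[0]
        t_u = t[reached[0]] if reached.size else t_end         # crude rise-time estimate
        g = a1 * np.abs(e) + a2 * u ** 2
        return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))) + a3 * t_u

    # e.g. pid_objective(186.8554, 2.3011, 0.0685) for the DPD gains of Table 13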
Table 18
Results of application 3 (chemical optimization problem) produced by DPD and compared with others in [57].

Algorithms (values listed in this order): Chen et al., DE/rand/1, DE/best/1, DE/rand-to-best/1, DE/best/2, DE/rand/2, ASDE, DPD
K1:  0.152 33.619 39.390 36.555 43.558 29.055 29.000 29.000
K2:  8.18 84576.00 85331.00 84670.00 85421.00 85389.00 84179.00 80406.00
K3:  0.221 17.965 15.467 16.188 16.846 11.456 14.381 14.148
E1:  62073 971.180 0.10005 0.10124 0.11557 104.150 135.550 135.053
E2:  2384 2.8179e+015 2.2124e+014 3.6438e+014 1.6912e+014 1.000e+016 2.187e+015 2.068e+015
E3:  18949 2.2354e+005 2.0654e+005 2.0969e+005 2.0416e+005 2.3353e+005 2.2254e+005 2.2247e+005
EQS: 6.676 2.5466 2.8056 2.6786 2.5495 2.5706 2.5439 2.3398
(0.0018) (0.5130) (0.4313) (0.0290) (0.0213) (0.0028) (0.0001)
Fig. 11. Comparison of minimum function values of DPD with others in [57] for the chemical optimization problem over 10 independent runs (A: DE/rand/1, B: DE/best/1, C: DE/rand-to-best/1, D: DE/best/2, E: DE/rand/2, F: ASDE, G: DPD).
6.2. Application 2 (L–J cluster optimization problem)

The Lennard–Jones (L–J) cluster optimization problem is a challenging problem in which the minimum-energy configuration has to be estimated for a cluster of identical atoms interacting through the L–J potential. As the molecular size increases, the number of local optima grows exponentially, which makes the problem more complex. The entire problem formulation is described in [55].

For the sake of comparison, 100 trial runs are made. The population size is set to nine times the number of variables. All other parameters of DPD remain the same as discussed in Section 3.3, and the stopping criterion for DPD is the same as reported in [55]. The solution of DPD for L–J problems containing 3–15 atoms is compared with the other algorithms in [55,56] and recorded as: the average NFEs (Table 14 and Fig. 10a), the Success Rate (SR) (Table 15 and Fig. 10b), the success performance (SP) (Table 16 and Fig. 10c) and the Average Error (AE) (Table 17 and Fig. 10d). From these tables and figures, DPD can be declared the best performer amongst all the compared algorithms. Moreover, the Average Execution Time (AET) of DPD for the L–J problem, quoted in Table 17, is extremely reasonable, and hence DPD is very stable.

6.3. Application 3 (Chemical optimization problem)

In this sub-section, DPD is applied to solve a chemical optimization problem (Cs–Rb–V series sulfuric acid catalyst), which is described in [57]. It is an important problem owing to its role in increasing the rate of SO2 conversion and thereby protecting the environment from pollution. For this problem, DPD is run 10 times independently. A run stops when a maximum of 20,000 NFEs is reached (the same limit is used for all the compared algorithms). All parameters of DE and PSO (used in DPD) remain the same as discussed in Section 3.3. The average objective function values obtained by DPD on the chemical optimization problem are reported in Table 18 and compared with the others in [57]. From Table 18 it is observed that DPD performs better than the others. Moreover, the overall comparison of all the algorithms is plotted in Fig. 11, where 'EQS' denotes the objective function value as named in [57]. Clearly, the performance of DPD is commendable here as well.

7. Conclusion

In this paper, an ideal tri-population approach is proposed for solving unconstrained optimization problems. On the tri-population, DE and PSO are applied in the order DE–PSO–DE, and the resulting method is introduced in short as DPD. The quality and significant performance of DPD is well validated against many state-of-the-art algorithms, not only on benchmark functions but also on real life engineering problems. From the numerical, statistical and graphical results the following facts are concluded.

(i) Employing DPD over the 'tri-population' mechanism really makes DPD faster and more robust.
(ii) DPD (which uses the scheme 'DE/best/1' in both of its DEs) performs better than the state-of-the-art algorithms.
(iii) An appropriate choice of the mutation strategies for DE in DPD probably avoids stagnation and improves the qualitative performance in general.
(iv) The additional use of Elitism and Non-Redundant Search in the DPD cycle helps not only to retain the best solution achieved so far but also to improve the solution quality consistently.
(v) Statistical tests confirm the superiority of DPD.
(vi) Convergence graphs show that DPD converges faster than the others.
(vii) The lower standard deviation of DPD implies its better stability.

As a whole, the mechanism of memorizing the best achieved value (by PSO) and the ability to maintain diversity in the population (by DE) are the major contributors to making DPD stronger, while the proper choice of DE mutation strategies further strengthens its effectiveness. As a future scope of research, DPD can also be used to solve constrained and multi-objective optimization problems. Further, it can adopt more popular UCI datasets in the experimental design, and attempts can be made to solve a larger number of realistic problems.
(vi) Convergence graphs show that DPD performs faster over others. (vii) DPD has less standard deviation implies its better stability. As a whole, the mechanism of ‘memorizing the best achieved value by PSO’ and ‘the quality of maintaining diversity in the population by DE’ offers the major contribution to make DPD stronger. But the proper choice DE-mutation-strategies makes further strengthen the effectiveness of DPD. However, as a future scope of research DPD can also be used to solve constrained and multi-objective optimization problems. Further, it can adopt more popular UCI datasets in experimental design and attempts can be made to solve more number of realistic problems. References [1] M.Y. Chen, M.H. Fan, Y.L. Chen, H.M. Wei, Design of experiments on neural network’s parameters optimization for time series forecasting in stock markets, Neural Network World 23 (4) (2013) 369–393. [2] M.Y. Chen, Bankruptcy prediction in firms with statistical and intelligent techniques and a comparison of evolutionary computation approaches, Comput. Math. Appl. 62 (12) (2011) 4514–4524. [3] E. Nwankwor, A. Nagar, D. Reid, Hybrid differential evolution and particle swarm optimization for optimal well placement, Comput. Geosci. 17 (2) (2013) 249–268. [4] H. Talbi, M. Batouche, Hybrid particle swarm with differential evolution for multimodal image registration, in: Proceedings of the IEEE International Conference on Industrial Technology 3, 2004, pp. 1567–1573. [5] R. Storn, K.V. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (4) (1997) 341–359. [6] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceeding of IEEE International Conference on Neural Networks, Piscataway, vol. 4, 1995, pp. 1942–1948. [7] S.C. Chiam, K.C. Tan, A. Al Mamun, A memetic model of evolutionary PSO for computational finance applications, Expert Syst. Appl. 36 (2009) 3695– 3711. [8] R.A. Araújo, Swarm-based translation-invariant morphological prediction method for financial time series forecasting, Inf. Sci. 180 (24) (2010) 4784– 4805. [9] C.L. Sun, J.C. Zeng, J.S. Pan, An improved vector particle swarm optimization for constrained optimization problems, Inf. Sci. 181 (6) (2011) 1153–1163. [10] Y. Li, R. Xiang, L. Jiao, R. Liu, An improved cooperative quantum-behaved particle swarm optimization, Soft. Comput. 16 (6) (2012) 1061–1069. [11] M.Y. Chen, A hybrid ANFIS model for business failure prediction - utilization of particle swarm optimization and subtractive clustering, Inf. Sci. 220 (2013) 180–195. [12] D. Chen, J. Chen, H. Jiang, F. Zou, T. Liu, An improved PSO algorithm based on particle exploration for function optimization and the modeling of chaotic systems, Soft Comput., 2014 (online 07 Oct 2014). [13] A. Qin, V. Huang, P. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2) (2009) 398–417. [14] F. Neri, V. Tirronen, Recent advances in differential evolution: a survey and experimental analysis, Artif. Intell. Rev. 33 (1–2) (2010) 61–106. [15] S. Das, P.N. Suganthan, Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. 15 (1) (2011) 4–31. [16] Y. Wang, Z. Cai, Q. Zhang, Enhancing the search ability of differential evolution through orthogonal crossover, Inf. Sci. 185 (2012) 153–177. [17] G. Jia, Y. Wang, Z. Cai, Y. Jin, An improved (l+k)-constrained differential evolution for constrained optimization, Inf. Sci. 
222 (2013) 302–322. [18] Y. Wang, H.X. Li, T. Huang, L. Li, Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. 18 (2014) 232–247. [19] T. Hendtlass, A combined swarm differential evolution algorithm for optimization problems, Lecture Notes Computer Science, 2070, Springer Verlag, 2001, pp. 11–18. [20] W.J. Zhang, X.F. Xie, DEPSO: Hybrid particle swarm with differential evolution operator, in: Proceedings IEEE International Conference Systems Man Cybernetics, vol. 4, 2003, pp. 3816–3821. [21] S. Das, A. Konar, U.K. Chakraborty, Improving particle swarm optimization with differentially perturbed velocity, in: Proceedings Genetic Evolutionary Computation Conference, 2005, pp. 177–184. [22] P.W. Moore, G.K. Venayagamoorthy, Evolving digital circuit using hybrid particle swarm optimization and differential evolution, Int. J. Neural Syst. 16 (3) (2006) 163–177. [23] Z.F. Hao, G.H. Guo, H. Huang, A particle swarm optimization algorithm with differential evolution, in: Proceedings Sixth International Conference Machine Learning and Cybernetics Hong Kong China, 2007, pp. 1031–1035. [24] M. Omran, A.P. Engelbrecht, A. Salman, Bare bones differential evolution, Eur. J. Oper. Res. 196 (1) (2008) 128–139. [25] C. Zhang, J. Ning, S. Lu, D. Ouyang, T. Ding, A novel hybrid differential evolution and particle swarm optimization algorithm for unconstrained optimization, Oper. Res. Lett. 37 (2) (2009) 117–122. [26] H. Liu, Z.X. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Appl. Soft Comput. 10 (2) (2010) 629–640. [27] M. Pant, R. Thangaraj, DE-PSO: a new hybrid meta-heuristic for solving global optimization problems, New Math. Nat. Comput. 7 (3) (2011) 363–381. [28] A. El Dor, M. Clerc, P. Siarry, Hybridization of differential evolution and particle swarm optimization in a new algorithm DEPSO-2S, Swarm Evol. Comput. 7269 (2012) 57–65. [29] T.D.F. Araújo, W. Uturbey, Performance assessment of PSO, DE and hybrid PSO–DE algorithms when applied to the dispatch of generation and demand, Elect. Power Energy Syst. 47 (2013) 205–217. [30] S. Sayah, A. Hamouda, A hybrid differential evolution algorithm based on particle swarm optimization for nonconvex economic dispatch problems, Appl. Soft Comput. 13 (4) (2013) 1608–1619. [31] X. Zuo, L. Xiao, A DE and PSO based hybrid algorithm for dynamic optimization problems, Soft. Comput. 18 (2014) 1405–1424. [32] S. Das, A. Abraham, A. Konar, Particle swarm optimization and differential evolution algorithms: technical analysis, applications and hybridization perspectives, in: Advances of Computational Intelligence in Industrial Systems, Studies in Computational Intelligence, Springer Verlag, Germany, 2008, pp. 1–38. [33] R. Thangaraj, M. Pant, A. Abraham, P. Bouvry, Particle swarm optimization: hybridization perspectives and experimental illustrations, Appl. Math. Comput. 217 (12) (2011) 5208–5226. [34] B. Xin, J. Chen, J. Zhang, H. Fang, Z. Peng, Hybridizing differential evolution and particle swarm optimization to design powerful optimizers: a review and tax-onomy, IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42 (5) (2012) 744–767. [35] J.K. Kordestani, A. Rezvanian, M.R. Meybodi, CDEPSO: a bi-population hybrid approach for dynamic optimization problems, Appl. Intell. 40 (4) (2014) 682–694. [36] S.M. Elsayed, R.A. Sarker, D.L. 
Essam, Multi-operator based evolutionary algorithms for solving constrained optimization problems, Comput. Oper. Res. 38 (12) (2011) 1877–1896.
[37] G. Zhang, J. Cheng, M. Gheorghe, Q. Meng, A hybrid approach based on differential evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems, Appl. Soft Comput. 13 (3) (2013) 1528–1542. [38] A. Yadav, K. Deep, An efficient co-swarm particle swarm optimization for non-linear constrained optimization, J. Comput. Sci. 5 (2) (2014) 258–268. [39] L. Cagnina, S. Esquivel, C.A. Coello Coello, A bi-population PSO with a shake-mechanism for solving constrained numerical optimization, in: IEEE Congress on Evolutionary Computation (CEC’2007), IEEE Press, Singapore, 2007, pp. 670–676. [40] L.C. Cagnina, S.C. Esquivel, C.A. Coello Coello, Solving constrained optimization problems with a hybrid particle swarm optimization algorithm, Eng. Optim. 43 (8) (2011) 843–866. [41] M.F. Han, S.H. Liao, J.Y. Chang, C.T. Lin, Dynamic group-based differential evolution using a self-adaptive strategy for global optimization problems, Appl. Intell. 39 (1) (2013) 41–56. [42] X. Wang, Q. Yang, Y. Zhao, Research on hybrid PSODE with triple populations based on multiple differential evolutionary models, in: proceedings International Conference Electrical Control Engineering Wuhan China, 2010, pp. 1692–1696. [43] M. Ali, M. Pant, A. Abraham, Simplex differential evolution, Acta Polytech. Hung. 6 (5) (2009) 95–115. [44] W. Gong, Z. Cai, Differential evolution with ranking based mutation operators, IEEE Trans. Syst. Man Cybern. Part B Cybern. 43 (6) (2013) 2066–2081. [45] A. El Dor, M. Clerc, P. Siarry, A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization, Comput. Optim. Appl. 53 (1) (2012) 271–295. [46] H. Wang, D. Wang, S. Yang, Triggered memory-based swarm optimization in dynamic environments, in: Applications of Evolutionary Computing, Lecture Notes in Computer Science, vol. 4448, 2007, pp. 637–646. [47] J. Branke, Memory enhanced evolutionary algorithms for changing optimization problems, Congr. Evol. Comput. 3 (1999) 1875–1882. [48] Y.C. Wu, W.P. Lee, C.W. Chien, Modified the Performance of Differential Evolution Algorithm with Dual Evolution Strategy, 2009 International Conference on Machine Learning and Computing, IPCSIT, 3, IACSIT Press, Singapore, 2011, pp. 57–63. [49] H. Zhang, M. Ishikawa, An extended hybrid genetic algorithm for exploring a large search space, in 2nd International conference on autonomous robots and agents, Palmerston North New Zealand (2004) 244–248. [50] K. Deep, K.N. Das, Performance improvement of real coded genetic algorithm with quadratic approximation based hybridization, Int. J. Intell. Defence Support Syst. 2 (4) (2009) 319–334. [51] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC2005 Special Session on Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005. [52] X. Li, M. Yin, Z. Ma, Hybrid differential evolution and gravitation search algorithm for unconstrained optimization, Int. J. Phys. Sci. 6 (25) (2011) 5961– 5981. [53] D. Zou, J. Wu, L. Gao, S. Li, A modified differential evolution algorithm for unconstrained optimization problems, Neurocomputing 120 (2013) 469–481. [54] G.W. Corder, D.I. Foreman, Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, John Wiley, Hoboken, NJ, 2009. [55] K. Deep, Shashi, V.K. 
Katiyar, Global optimization of Lennard–Jones potential using newly developed real coded genetic algorithms, in: Proceedings of IEEE International Conference on Communication Systems and Network Technologies, 2011, pp. 614–618. [56] K. Deep, Madhuri, Application of globally adaptive inertia weight PSO to Lennard–Jones problem, in: Proceedings of the International Conference on SocProS AISC 130, 2011, pp. 31–38. [57] C.P. Hu, X.F. Yan, A hybrid differential evolution algorithm integrated with an ant system and its application, Comput. Math. Appl. 62 (1) (2011) 32–43.