Author's Accepted Manuscript

A Modified Differential Evolution Algorithm for Unconstrained Optimization Problems
Dexuan Zou, Jianhua Wu, Liqun Gao, Steven Li

To appear in: Neurocomputing (www.elsevier.com/locate/neucom)
PII: S0925-2312(13)00571-7
DOI: http://dx.doi.org/10.1016/j.neucom.2013.04.036
Reference: NEUCOM13420

Received date: 11 September 2012
Revised date: 31 March 2013
Accepted date: 5 April 2013

Cite this article as: Dexuan Zou, Jianhua Wu, Liqun Gao, Steven Li, A Modified Differential Evolution Algorithm for Unconstrained Optimization Problems, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2013.04.036
A Modified Differential Evolution Algorithm for Unconstrained Optimization Problems

Dexuan Zou a, Jianhua Wu b, Liqun Gao b, Steven Li c

a School of Electrical Engineering and Automation, Jiangsu Normal University, Xuzhou, Jiangsu 221116, PR China
b School of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110004, PR China
c Division of Business, University of South Australia, GPO Box 2471, Adelaide, SA 5001, Australia
Abstract: A modified differential evolution (MDE) algorithm is proposed in this paper to solve unconstrained optimization problems. The Gauss distribution and the uniform distribution share one characteristic, namely randomness, and MDE exploits this by employing both distributions to adjust the scale factor and the crossover rate, which helps increase the diversity of the entire population. To guarantee the quality of the swarm, MDE uses an external archive, from which some high-quality solutions can be selected as candidate solutions. MDE adopts two common mutation strategies to produce new solutions, and the information of the global best solution is more likely to be utilized for mutation late in the evolution process, which is beneficial to improving the convergence of the proposed algorithm. In addition, a central solution is generated from all the other candidate solutions, and it can provide a potential searching direction. Experimental results show that MDE yields better objective function values than six other DE algorithms for some unconstrained optimization problems; it is therefore an efficient alternative for solving unconstrained optimization problems.
Key words: Modified differential evolution algorithm; Gauss distribution; Uniform distribution; External archive; Mutation; Central solution;
∗ Corresponding author. Email address: [email protected] (Dexuan Zou).
Preprint submitted to Elsevier, 6 June 2013.
1. Introduction

Heuristic algorithms are essentially non-deterministic algorithms that utilize various stochastic search strategies to obtain the optima of different optimization problems. Moreover, these optimization techniques do not require continuity or differentiability of the objective function, which gives heuristic algorithms a wider range of application. Due to these advantages, the study and analysis of the properties of heuristic algorithms has drawn much attention from researchers in recent years. So far, a large number of heuristic algorithms have emerged, such as the genetic algorithm (GA) [1], particle swarm optimization (PSO) [2,3], simulated annealing (SA) [4], ant colony optimization (ACO) [5], and so on. These computing techniques have been successfully used in many practical optimization problems [6-9], and they play an important role in finding the optimal solutions of these problems. As an advanced computing technique, the differential evolution (DE) algorithm [10] was first proposed by Price and Storn in 1995. In several comprehensive studies [11-13], DE was reported to be superior to many other optimization approaches on common benchmark functions, making it a very competitive and promising stochastic search technique. Moreover, DE has only a few algorithm parameters, which makes it convenient to apply. Due to these merits, it has been applied to many practical optimization problems, such as power system planning [14], transient stability constrained optimal power flow [15], optimization of dynamic systems [16], two-dimensional microwave imaging [17], network reconfiguration of distribution systems [18], and many others. In order to adapt DE to more complex optimization problems, researchers have proposed many improved DE algorithms. Cai et al. [19] presented a clustering-based DE (CDE) for unconstrained global optimization problems.
They combined DE with one-step k-means clustering [20], which acts as several multi-parent crossover operators to utilize the information of the population efficiently. Chiang et al. [21] proposed the 2-Opt based DE (2-Opt DE), which is inspired by 2-Opt algorithms [22]. Specifically, they replaced the mutation operations of the original DE, namely DE/rand/1 and DE/rand/2, with the novel mutation operations DE/2-Opt/1 and DE/2-Opt/2, which helps accelerate DE. Liu and Sun [23] presented a computational framework called DE-kNN for DE applied to computationally expensive optimization problems. They demonstrated the concept of DE-kNN with a novel approximate model using a k-Nearest Neighbour (kNN) predictor [24,25]. A kNN predictor stores all of the training samples and defers building a model until a new sample needs to be classified, so it is very easy to implement. Moreover, the kNN predictor is capable of producing accurate global approximations to the actual parameter space, so it can provide function evaluations efficiently. Omran et al. [26] proposed a barebones differential evolution (BBDE) which combines the particle swarm optimizer and differential evolution. This new method does not adopt the original PSO parameters, and it also eliminates the DE scale parameter. In addition, a probability of recombination is introduced into BBDE, which enables BBDE to update a particle's position based on a new position derived from a randomly selected personal best position, or from a mutation of the particle attractor. Zhang and Sanderson [27] proposed an adaptive differential evolution with optional external archive (JADE), which improves the optimization performance of DE. They devised an efficient mutation scheme, "DE/current-to-pbest", with an optional external
archive. The "DE/current-to-pbest" scheme is a moderate modification of the classic "DE/current-to-best", and the optional archive saves historical information that has the potential to provide promising search directions for the evolution process. Additionally, they adaptively adjusted the scale factor F and crossover rate CR according to successful parameter values. Both operations not only diversify the population, but also improve convergence performance. Deng et al. [28] proposed a new differential evolution (NDE) algorithm to improve the efficiency of DE. In detail, they devised a new framework with a single population to improve its exploration ability, and a second enhanced mutation operator was introduced to enhance its search capability. The standard DE is too deterministic an algorithm, and it often suffers from stagnation or a limited set of search moves. This explains why many enhanced versions of DE introduce randomization into the generation of offspring. This way of enhancing the DE framework is widely used, and the concept has been formalized and explained in several papers. Neri and Tirronen [29] presented a survey on DE and its recent advances. In their analysis, some extra moves can assist the DE framework in detecting new promising search directions; thus, a limited utilization of these moves appears to be an efficient way of assisting DE. The successful extra moves can be obtained by introducing some randomization. However, this randomization should not be used excessively, since that would be harmful to the search.
To emphasize the importance of introducing moderate randomization, Neri and Tirronen [29] listed four representative DE algorithms. The opposition-based differential evolution (ODE) was proposed to deal with nonlinear and complex problems; this approach utilizes opposition-based learning [31] to update the population and to conduct generation jumping. Furthermore, generation jumping is implemented with a certain probability Jr. Therefore, some better candidate solutions can be obtained by simultaneously checking the opposite solutions. The differential evolution with global and local neighborhoods [32,33] uses an extra vector component during the evolution process. Moreover, the neighborhoods of different vectors are chosen randomly rather than according to their fitness values or geographical locations on the fitness landscape, which preserves the diversity of the vectors belonging to the same neighborhood. The self-adaptive differential evolution [34] diversifies the search moves by employing multiple mutation strategies. In detail, a vector of size NP is randomly generated in the range [0, 1]. If the jth element of the vector is smaller than or equal to p1, the strategy "rand/1/bin" is applied to the jth individual in the current population; otherwise, the strategy "current-to-best/2/bin" is applied. Finally, the DE algorithm based on self-adapting control parameters (SADE) [35] is an efficient technique for adapting the control parameter settings of DE, and it introduces two probabilities τ1 and τ2 to adjust the scale factor F and crossover rate CR. More specifically, the scale factor F (crossover rate CR) is randomly regenerated in [0.1, 0.9] ([0, 1]) with a low probability τ1 (τ2). In other words, SADE can tune the control parameters with some probability, after which better control parameters are saved for the next iterations.
The main advantage of SADE is that the user does not need to guess proper values for F and CR, which brings great convenience to users. It should be noticed that all four improved DEs include new stochastic elements within the DE. A certain degree of randomization within a DE structure seems to have a highly relevant influence on algorithmic success. Generally, a proper balance in randomization can be extremely helpful to the performance of DE, and the search for this balance is the future of DE scheme development. Weber et al. [36]
presented a distributed differential evolution which employs a novel self-adaptive scheme, called scale factor inheritance. More specifically, the population is divided into several sub-populations arranged in a ring topology, and each sub-population owns its own scale factor. Based on a probabilistic strategy, the best-performing individual is transferred to the neighboring sub-population and replaces a randomly selected individual of the target sub-population. If this individual remains promising in the current generation, the target sub-population inherits both the individual and its corresponding scale factor. Additionally, a perturbation mechanism is introduced to improve the exploration capacity of the algorithm. Neri et al. [37] proposed a disturbed exploitation compact differential evolution (DEcDE) and applied it to limited-memory optimization problems. The DEcDE adopts two exploitative versions of DE: the first generates offspring solutions by using a trigonometric mutation, and the second generates offspring solutions by combining the mutation of standard DE with an exponential crossover. In both strategies, three individuals are generated for the mutation by means of a probability vector (PV). After elite selection, the current PV is updated according to the previous PV and the solutions from elite selection. In addition, a perturbation of the PV is implemented by utilizing random values in [0, 1]. In short, the DEcDE performs a highly exploitative search with a randomized action that perturbs the virtual population. Thus, randomization plays an important role in improving the convergence and exploitation capacity of the DEcDE. Weber et al. [38] proposed four kinds of scale factor schemes for distributed DE (DDE), three of which involve randomization.
More specifically, in scheme-1, a scale factor is randomly generated from a uniform distribution between 0 and 1 for each sub-population at the beginning of the evolution, and the scale factors of these sub-populations are then fixed in subsequent generations. In short, scheme-1 helps enlarge the set of search moves and find solutions of high quality. In scheme-3, a scale factor is randomly initialized between 0 and 1 for each sub-population at the beginning of the evolution. Subsequently, the scale factor associated with the sub-population that has improved the least over a fixed number of generations is replaced by a newly generated random value within the interval [0, 1]. In other words, scheme-3 enables the efficient sub-populations to continue exploiting their search directions without changing their scale factors, while enhancing the search capacity of the worst-performing sub-population by introducing a randomly generated scale factor. In brief, the use of randomness in the scale factor not only increases the diversity of the population, but also provides promising search directions for the population. In scheme-4, the scale factor of each sub-population is initially drawn from a uniform distribution between 0 and 1, and at each subsequent generation, a newly produced random value in [0, 1] is assigned to a randomly chosen sub-population. Obviously, scheme-4 balances uncertain and deterministic searching, because it encourages one sub-population to perform uncertain searching while enabling the others to continue their deterministic searching. On the other hand, the sub-population adopting a randomly generated scale factor is randomly selected, so all sub-populations have the same opportunity to perform uncertain searching, which can effectively ease the stagnation problem. In sum, scheme-4 not only increases the number of search moves, but also avoids the blindness of pure random search. Mininno et al.
[39] proposed a compact differential evolution (cDE) algorithm, and they highlighted two important issues associated with DE. The first is the survivor selection scheme, which employs the so-called one-to-one spawning logic, i.e., in DE the survivor selection is implemented by comparing the performance of a parent solution and its corresponding offspring. The second issue concerns the DE search logic. A DE algorithm contains a limited number of search moves, which might lead to solutions of low quality. To overcome these disadvantages, some randomness is introduced into the search logic. More specifically, a (2 × N) probability vector (PV) is generated. A solution is sampled from the PV and plays the role of the elite. Subsequently, at each iteration, some solutions are generated according to the selected mutation scheme. For instance, if a DE/rand/1 mutation is selected, three individuals xr, xs, and xt are sampled from the PV. The detailed steps are as follows: First, a Gaussian probability density function (PDF) is constructed. Second, the corresponding cumulative distribution function (CDF) is obtained in terms of Chebyshev polynomials [40]. Third, a random number rand(0, 1) is generated from a uniform distribution so as to sample the variable xr[i] from the PV. Finally, the inverse of the CDF, evaluated at rand(0, 1), gives xr[i]. The extra search moves can provide the DE with a promising search logic, which helps assist the DE structure and improve its performance. Neri and Mininno [41] presented a memetic compact differential evolution (McDE) for continuous optimization. More specifically, the McDE adopts a DE framework and uses a stochastic local search algorithm. By introducing some randomness into the search logic, the quality of the elite can be improved, and the convergence of the McDE can be enhanced. Das et al. [42] proposed two improved DE schemes for faster global search, one of which is based on a random scale factor. In each generation, they changed the scale factor in a random manner within the range [0.5, 1].
By adopting this random strategy, the amplification of the difference vector varies stochastically, which is beneficial to retaining population diversity throughout the entire evolution. Thus, the best solution of the population has little chance of stagnating before a truly global optimum is reached. Islam et al. [43] developed an adaptive differential evolution algorithm based on novel mutation and crossover strategies (MDE_pBX) for global numerical optimization. More specifically, they introduced a p-best crossover operation by combining the conventional binomial crossover scheme of DE with a greedy parent selection strategy. The scale factor F and crossover rate Cr are generated based on the success history of previously used F and Cr values. Moreover, a series of random values drawn from uniform, Cauchy and Gaussian distributions are incorporated into the generation of these two parameters. In short, this p-best crossover operation enhances the accuracy of MDE_pBX in the search space to a large extent. Weber et al. [44] proposed shuffle-or-update parallel differential evolution (SOUPDE) for large-scale optimization. The SOUPDE is built on a structured-population (parallel) DE/rand/1/exp which integrates two extra mechanisms activated by means of a probabilistic criterion. The first mechanism, namely shuffling, consists of merging the sub-populations and subsequently randomly dividing the individuals into sub-populations again. The second mechanism, namely update, consists of randomly updating the values of the scale factors of each sub-population. The randomization in these two mechanisms enables SOUPDE to detect the theoretical global optimum in most of the runs for most of the problems in the benchmark, and enhances its robustness to the curse of dimensionality.
In addition, relevant work also includes a novel global harmony search (NGHS) algorithm [45], an improved differential evolution (IDE) algorithm [46] and a novel modified differential evolution (NMDE) algorithm [47]. Like DE, NGHS is a search method with randomization. It includes two important operations: position updating and
genetic mutation with a small probability. The former enables the worst solution to move toward the global best solution rapidly in each generation, and the latter helps NGHS escape from local optima. Due to these advantages, NGHS is able to find good solutions for unconstrained problems. IDE is a variation of DE; it modifies the scale factor in terms of the objective function values of all candidate solutions, and updates the crossover rate according to the generation number. By adopting these two adjusted parameters, IDE is able to explore the solution space thoroughly. NMDE is also a variation of DE; it adjusts scale factors and crossover rates in terms of those of all successful solutions. Thus, satisfactory feasible solutions can be obtained by using this adaptive strategy. Both IDE and NMDE are used to solve constrained optimization problems. Continuing this line of work, we develop an improved version of DE to solve unconstrained optimization problems in this paper. The rest of the paper is organized as follows. In Section 2, the procedure of the original differential evolution (DE) algorithm is introduced. In Section 3, the modified differential evolution (MDE) algorithm is presented and its procedure is described in detail. In Section 4, twenty-five unconstrained functions are used to test the performance of MDE. We end this paper with some conclusions in Section 5.

2. The original differential evolution algorithm (DE)

The differential evolution algorithm (DE) [10] is an efficient evolutionary optimization technique with a simple algorithm structure that is easy to study and master. In this section, we briefly illustrate the working procedure of DE.

Step 1: Initialization

The DE parameters are initialized in this step, and these parameters include the scale factor F, crossover rate CR, population size M and the maximal iteration number K.
Additionally, all candidate solutions are initialized randomly from a uniform distribution within the ranges $[\underline{x}_j, \overline{x}_j]$ $(j = 1, 2, \ldots, N)$, where $\underline{x}_j$ and $\overline{x}_j$ are the lower and upper bounds of the $j$th variable, respectively, and $N$ is the number of variables.

Step 2: Mutation

Each trial vector $v_i^{k+1}$ is generated by mutating a target vector. Usually, the trial vector $v_i^{k+1}$ is calculated according to one of the following five equations:

$$v_i^{k+1} = x_{i_3}^k + F(x_{i_1}^k - x_{i_2}^k), \tag{1}$$

$$v_i^{k+1} = x_{best}^k + F(x_{i_1}^k - x_{i_2}^k), \tag{2}$$

$$v_i^{k+1} = x_i^k + F_1(x_{best}^k - x_i^k) + F_2(x_{i_1}^k - x_{i_2}^k), \tag{3}$$

$$v_i^{k+1} = x_{best}^k + F_1(x_{i_1}^k - x_{i_2}^k) + F_2(x_{i_3}^k - x_{i_4}^k), \tag{4}$$

$$v_i^{k+1} = x_{i_5}^k + F_1(x_{i_1}^k - x_{i_2}^k) + F_2(x_{i_3}^k - x_{i_4}^k). \tag{5}$$

Here, $F$, $F_1$ and $F_2$ are scale factors, and $x_{best}^k$ denotes the best solution vector at iteration $k$. $i_1$, $i_2$, $i_3$, $i_4$ and $i_5$ are distinct integers randomly chosen from the set $\{1, 2, \ldots, M\}$. Furthermore, Eqs. (1)-(5) represent different DE variants, known as DE/rand/1/bin, DE/best/1/bin, DE/current-to-best/1/bin, DE/best/2/bin, and DE/rand/2/bin, respectively.

Step 3: Crossover

The variables of the offspring vector $u_i^{k+1}$ are a combination of the parent vector $x_i^k$ and the trial vector $v_i^{k+1}$; they are calculated as follows:

$$u_{i,j}^{k+1} = \begin{cases} v_{i,j}^{k+1}, & \text{if } rand() < CR \text{ or } j = r_{1\sim N}; \\ x_{i,j}^k, & \text{otherwise.} \end{cases} \tag{6}$$

Here, $rand()$ is a randomly generated number in the range $[0, 1]$, $r_{1\sim N}$ denotes a random integer in the range $[1, N]$, and $CR \in [0, 1]$ represents the crossover rate.

Step 4: Selection

If $f(u_i^{k+1})$ is smaller (i.e., better) than $f(x_i^k)$, the offspring vector $u_i^{k+1}$ is assigned to $x_i^{k+1}$; otherwise, the parent vector $x_i^k$ is assigned to $x_i^{k+1}$. Thus, the selection step is given by the following expression:

$$x_i^{k+1} = \begin{cases} u_i^{k+1}, & \text{if } f(u_i^{k+1}) < f(x_i^k); \\ x_i^k, & \text{otherwise.} \end{cases} \tag{7}$$
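As a concrete illustration, the classic DE/rand/1/bin variant described by Eqs. (1), (6) and (7) can be sketched in Python. This is a minimal sketch for illustration only, not the authors' implementation; the test function, parameter values and random seed are arbitrary choices.

```python
import random

def de_rand_1_bin(f, bounds, M=30, K=200, F=0.6, CR=0.5, seed=1):
    """Classic DE/rand/1/bin (Eqs. (1), (6), (7)) on a box-constrained problem."""
    rng = random.Random(seed)
    N = len(bounds)
    # Step 1: uniform random initialization within [lower_j, upper_j]
    pop = [[lo + rng.random() * (hi - lo) for (lo, hi) in bounds] for _ in range(M)]
    fit = [f(x) for x in pop]
    for _ in range(K):
        for i in range(M):
            # Step 2: mutation (Eq. (1)) with three distinct indices, all != i
            i1, i2, i3 = rng.sample([r for r in range(M) if r != i], 3)
            v = [pop[i3][j] + F * (pop[i1][j] - pop[i2][j]) for j in range(N)]
            # Step 3: binomial crossover (Eq. (6)); j_rand forces at least
            # one mutated component into the offspring
            j_rand = rng.randrange(N)
            u = [v[j] if (rng.random() < CR or j == j_rand) else pop[i][j]
                 for j in range(N)]
            # Step 4: greedy one-to-one selection (Eq. (7))
            fu = f(u)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(M), key=lambda i: fit[i])
    return pop[best], fit[best]

# Example: minimize a 5-D sphere function
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 5)
```

On such a smooth unimodal function the greedy selection of Eq. (7) drives the population rapidly toward the optimum at the origin.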
Step 5: Judge the stopping condition

If the maximal iteration number $K$ is reached, the computation is stopped. Otherwise, Steps 2-4 are repeated.

3. A modified differential evolution algorithm (MDE)

This section presents a modified differential evolution algorithm (MDE) for solving unconstrained optimization problems. Generally speaking, MDE and DE differ in the following four aspects:

(1) MDE adds an external archive to obtain solutions of high quality.

At the beginning of each iteration, the population $P$ with $M$ candidate solutions can be stated as follows:

$$P = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,N} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ x_{M,1} & x_{M,2} & \cdots & x_{M,N} \end{pmatrix} \tag{8}$$

Here, $x_{i,j}$ stands for the $j$th $(j = 1, \ldots, N)$ dimension of the $i$th $(i = 1, \ldots, M)$ candidate solution in population $P$. For the $j$th dimension, the minimal value $x_j^L$ and the maximal value $x_j^U$ can be found from the set $\{x_{1,j}, x_{2,j}, \ldots, x_{M,j}\}$. Thus we have:

$$x_j^L = \min_{1 \le i \le M}(x_{i,j}) \tag{9}$$

$$x_j^U = \max_{1 \le i \le M}(x_{i,j}) \tag{10}$$
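Eqs. (9)-(10) simply take column-wise extrema of the population matrix in Eq. (8). A tiny Python sketch, using hypothetical toy values for the population:

```python
# Population P as an M x N list of lists (Eq. (8)); the values are hypothetical.
P = [[1.0, 4.0],
     [3.0, 2.0],
     [2.0, 6.0]]

# Eqs. (9)-(10): per-dimension minima and maxima over the current population.
x_L = [min(col) for col in zip(*P)]  # x_j^L = min over i of x_{i,j}
x_U = [max(col) for col in zip(*P)]  # x_j^U = max over i of x_{i,j}
```

Note that these bounds track the population's current bounding box, not the fixed search-space bounds of Step 1, so they shrink as the population contracts.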
Based on Eqs. (8)-(10), a new population $P'$ can be produced, described as follows:

$$P' = \begin{pmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,N} \\ y_{2,1} & y_{2,2} & \cdots & y_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ y_{M,1} & y_{M,2} & \cdots & y_{M,N} \end{pmatrix} \tag{11}$$

Here, $y_{i,j}$ represents the $j$th $(j = 1, \ldots, N)$ dimension of the $i$th $(i = 1, \ldots, M)$ candidate solution in population $P'$. For the $i$th candidate solution $y_i$, its $j$th dimension $y_{i,j}$ is randomly generated from the uniform distribution on $[x_j^L, x_j^U]$:

$$y_{i,j} = x_j^L + rand() \times (x_j^U - x_j^L) \tag{12}$$

Once this external archive is produced, the best half of the solutions are selected from the combination of $P$ and $P'$, and they are reassigned to population $P$. In fact, ODE [30] also adopts a similar external archive, denoted by $OP$. For any variable $P_{i,j}$, its opposite variable $OP_{i,j}$ is calculated as follows:

$$OP_{i,j} = x_j^L + x_j^U - P_{i,j} \tag{13}$$
Here, $x_j^L$ and $x_j^U$ are the same as those in Eqs. (9) and (10). According to Eq. (13), only the opposite value of $P_{i,j}$ is used to generate the corresponding variable in the range $[x_j^L, x_j^U]$ for the external archive $OP$, and it is denoted by $OP_{i,j}$. Unlike ODE, MDE randomly produces the variable $y_{i,j}$ in the range $[x_j^L, x_j^U]$ (as in Eq. (12)). In other words, once the value of $P_{i,j}$ is given, ODE can only obtain one possible value for $OP_{i,j}$, which is equal to $x_j^L + x_j^U - P_{i,j}$, whereas MDE can obtain any of a vast number of possible values in the range $[x_j^L, x_j^U]$. Compared to ODE, MDE therefore has a wider search space, which helps diversify the population and improve the quality of the candidate solutions.

(2) MDE adopts two mutation strategies to enhance its convergence and avoid prematurity.

The original DE uses the DE/rand/1/bin scheme (Eq. (1)) to conduct the mutation operation. In contrast, two common schemes are adopted by MDE: the DE/rand/1/bin scheme (Eq. (1)) and the DE/best/1/bin scheme (Eq. (2)). In detail, MDE selects between these two schemes according to a dynamic probability, and the modified mutation operation can be stated as follows:

$$v_i^{k+1} = \begin{cases} x_{best}^k + F(x_{i_1}^k - x_{i_2}^k), & \text{if } rand() < \gamma; \\ x_{i_3}^k + F(x_{i_1}^k - x_{i_2}^k), & \text{otherwise.} \end{cases} \tag{14}$$

Here, $x_{best}$ denotes the best solution in the population. $i_1$, $i_2$ and $i_3$ are distinct integers in the range $[1, M]$, and $i_1 \neq i_2 \neq i_3 \neq i$. The parameter $\gamma$ is defined as the selection probability, and it is calculated as follows:

$$\gamma = \left(\frac{k}{K}\right)^{1/4} \tag{15}$$

where $k$ and $K$ represent the present iteration number and the maximal iteration number, respectively. Suppose $K = 1000$; the resulting curve of $\gamma$ is depicted in Fig. 1.
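As a quick numerical check of Eq. (15), the schedule can be sketched in a few lines of Python (K = 1000 as in the text; the sampled iteration numbers are arbitrary):

```python
def gamma(k, K):
    """Selection probability of Eq. (15); it grows from 0 toward 1 as k -> K."""
    return (k / K) ** 0.25

K = 1000
# Early in the run gamma is small, so DE/rand/1/bin dominates (exploration);
# late in the run gamma approaches 1, so DE/best/1/bin dominates (exploitation).
early = gamma(10, K)   # about 0.32
late = gamma(900, K)   # about 0.97
```

The 1/4 exponent makes the curve rise steeply at first and then flatten near 1, so the best-solution information is exploited for most of the second half of the run.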
As can be seen from Fig. 1, the selection probability $\gamma$ keeps small values in the early evolution process, so the mutation operation is dominated by the DE/rand/1/bin scheme at that stage. On the other hand, $\gamma$ keeps large values in the middle and later periods of the evolution, so the mutation operation is then dominated by the DE/best/1/bin scheme. In short, large values of $\gamma$ are beneficial to enhancing the convergence of MDE, and small values of $\gamma$ are helpful for avoiding prematurity.

(3) MDE updates the scale factor and crossover rate to increase the diversity of the candidate solutions.

For the original DE, the scale factor $F$ and crossover rate $CR$ are set as constants; that is to say, all solutions use both constants throughout the whole evolution process. Unlike DE, MDE adjusts the scale factor $F$ and crossover rate $CR$ by using a Gauss distribution and a uniform distribution, respectively, because both distributions involve randomization, which is beneficial to increasing the diversity of the candidate solutions. Specifically, $F$ and $CR$ are given by the following expressions:

$$F \sim Gd(m_F, \delta_F) \tag{16}$$

$$CR \sim Ud(m_{CR} - \delta_{CR}, m_{CR} + \delta_{CR}) \tag{17}$$

With regard to the scale factor $F$, $Gd(m_F, \delta_F)$ represents a Gauss distribution with mean $m_F$ and standard deviation $\delta_F$. For the crossover rate $CR$, $Ud(m_{CR} - \delta_{CR}, m_{CR} + \delta_{CR})$ represents a uniform distribution on the range $[m_{CR} - \delta_{CR}, m_{CR} + \delta_{CR}]$, with mean $m_{CR}$ and standard deviation $\delta_{CR}/\sqrt{3}$. It should be noticed that the $F$ ($CR$) value is truncated to the range $[F_{min}, F_{max}]$ ($[CR_{min}, CR_{max}]$) if overflow happens.

(4) MDE produces a central solution to provide a potential alternative.

In the DE algorithm, all $M$ solutions adopt the mutation and crossover operations. On the other hand, Liu et al. [48] proposed an improved version of PSO, the center particle swarm optimization (CPSO) algorithm. CPSO generates a position for a center particle by averaging the positions of the other particles. The position of the center particle is a potential and promising alternative, and it often guides the search direction of the population. Inspired by this characteristic, MDE uses the mutation and crossover operations to update $M - 1$ solutions, and produces the $M$th offspring solution by averaging all the other solutions. This solution is called the central solution, and it can provide a promising and efficient alternative in the evolution process. Thus we have:

$$u_M^{k+1} = \frac{1}{M-1}\sum_{i=1}^{M-1} x_i^k \tag{18}$$
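Putting the four modifications together, the core of one MDE iteration can be sketched in Python. This is a simplified illustration under stated assumptions, not the authors' code: the truncation bounds for F and CR, the toy problem, and all parameter defaults are assumptions, and the crossover and greedy selection steps (Eqs. (6)-(7)) are omitted for brevity.

```python
import random

rng = random.Random(42)

def sample_F(mF=0.75, dF=0.1, Fmin=0.1, Fmax=1.0):
    # Eq. (16): Gauss-distributed scale factor; the paper truncates F to
    # [Fmin, Fmax] on overflow, and these particular bounds are assumed here.
    return min(Fmax, max(Fmin, rng.gauss(mF, dF)))

def sample_CR(mCR=0.9, dCR=0.1 / 3 ** 0.5, CRmin=0.0, CRmax=1.0):
    # Eq. (17): crossover rate drawn uniformly from [mCR - dCR, mCR + dCR].
    return min(CRmax, max(CRmin, rng.uniform(mCR - dCR, mCR + dCR)))

def refresh_with_archive(P, f):
    # Eqs. (8)-(12): sample an archive P' uniformly inside the population's
    # per-dimension bounding box, then keep the best half of P combined with P'.
    M, N = len(P), len(P[0])
    xL = [min(col) for col in zip(*P)]
    xU = [max(col) for col in zip(*P)]
    P2 = [[xL[j] + rng.random() * (xU[j] - xL[j]) for j in range(N)]
          for _ in range(M)]
    return sorted(P + P2, key=f)[:M]

def mutate(P, f, k, K, F):
    # Eq. (14): use DE/best/1 with probability gamma = (k/K)^(1/4),
    # and DE/rand/1 otherwise.
    M, N = len(P), len(P[0])
    best = min(P, key=f)
    gamma = (k / K) ** 0.25
    trials = []
    for i in range(M):
        i1, i2, i3 = rng.sample([r for r in range(M) if r != i], 3)
        base = best if rng.random() < gamma else P[i3]
        trials.append([base[j] + F * (P[i1][j] - P[i2][j]) for j in range(N)])
    return trials

def central_solution(P):
    # Eq. (18): the M-th offspring is the average of the other M - 1 solutions.
    others, N = P[:-1], len(P[0])
    return [sum(x[j] for x in others) / len(others) for j in range(N)]

# One illustrative iteration on a toy sphere problem.
sphere = lambda x: sum(v * v for v in x)
P = [[rng.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(10)]
P = refresh_with_archive(P, sphere)
V = mutate(P, sphere, k=800, K=1000, F=sample_F())
P[-1] = central_solution(P)
```

In the full algorithm, the trial vectors V would then undergo the binomial crossover of Eq. (6), with CR drawn by sample_CR, and the greedy selection of Eq. (7) before the next iteration begins.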
Based on the above detailed illustrations, the MDE procedure can be presented in Table 1.

4. Experimental results and analysis

Twenty-five problem instances from IEEE CEC 2005 [49] are used to test the optimization performance of the MDE algorithm. These instances are defined in [49] and thus are not repeated here. Moreover, to validate MDE on solving unconstrained optimization problems, it is compared with six other DE approaches
including DE [10], SADE [35], ODE [30], JADE [27], NDE [28] and MDE_pBX [43]. Furthermore, the control parameters of all seven DE approaches are set as follows. For DE, the scale factor is F = 0.6 and the crossover rate is CR = 0.5. For SADE, the lower bound of the scale factor is Fl = 0.1, the upper bound is Fu = 0.9, and the probabilities are τ1 = 0.1 and τ2 = 0.1. For ODE, F = 0.6, CR = 0.5, and the jumping rate is Jr = 0.3. For JADE, the mean and standard deviation of the normal distribution are µCR = 0.5 and 0.1, respectively; the location parameter and scale parameter of the Cauchy distribution are µF = 0.5 and 0.1; the positive constant is c = 0.1; and the percentage is p% = 5%. For NDE, F = 0.6, CR = 0.5, and the probability of the second enhanced mutation operator is SM = 0.05. For MDE_pBX, the percentage is q% = 15%, the initial location parameter Fm of the Cauchy distribution is set to 0.5, the initial mean of the normal distribution Crm is set to 0.6, and the value of n related to the power mean is set to 1.5. For MDE, the mean and standard deviation of the Gauss distribution associated with the scale factor are µF = 0.75 and δF = 0.1, and the mean and standard deviation of the uniform distribution associated with the crossover rate are µCR = 0.9 and $1/(10\sqrt{3})$. For each instance, two different dimension sizes are chosen: 30 and 50. The corresponding population sizes are set to 40, and the corresponding maximal generation numbers are set to 1000 and 2000, respectively. 30 independent experiments are carried out in each case, and the means and standard deviations of the best solutions averaged over the 30 trials on instances F1-F25 [49] are reported in Tables 2-4. Table 2 reports the optimization results of the 25 problems with N = 30. Moreover, "Fun" and "Alg" denote Function and Algorithm, respectively. From Table 2 we can see that MDE outperforms the other six DE approaches on 11 functions (F2, F3, F4, F9, F10, F12, F13, F16, F17, F23 and F25) in terms of the criterion "mean".
On the other hand, it outperforms the other six DE approaches on 6 functions (F2, F3, F4, F7, F17 and F23) in terms of the criterion "standard deviation". Regarding F1, six DE approaches including DE, SADE, ODE, JADE, NDE and MDE obtain the same mean of −4.5000e+002, which is better than the mean obtained by MDE_pBX. With respect to problems F1, F5, F21, F22 and F24, ODE performs better than the other six DE approaches according to the criterion "mean"; moreover, it performs better than the other six DE approaches on 5 functions (F3, F4, F7, F14 and F16) according to the criterion "standard deviation". In addition, both SADE and JADE obtain three best results, and MDE_pBX obtains two best results. In contrast, DE and NDE each obtain only one best result. In short, MDE is slightly better than ODE, and it demonstrates clear superiority over the remaining approaches. As listed in Table 3, MDE yields better means than the other six DE approaches for functions F2, F3, F4, F9, F10, F12, F13, F16, F17, F23 and F24 with N = 50, and it yields better standard deviations than the other six DE approaches for functions F2, F3, F4, F12, F14 and F24. Regarding F1, six DEs produce the same mean, and among these approaches, SADE produces the smallest standard deviation. With respect to functions F19, F20 and F22, SADE obtains better means than the other six DEs; meanwhile, it obtains the smallest standard deviations for F1, F6, F9, F13, F18, F19, F20 and F23. In addition, ODE yields better means than the remaining approaches for F5, F21 and F25, and better standard deviations than the other six approaches for F5, F7, F8 and F25. With respect to DE, JADE, NDE and MDE_pBX, the total number of best means obtained by these four approaches is even less than the number of best means obtained by
MDE. Furthermore, DE provides better standard deviations than the other six DEs on F10, F11, F16, F17 and F22, yet the corresponding means of the first four of these functions provided by DE are clearly worse than the best means provided by the other approaches. In other words, the objective function values obtained by DE remain at high levels for F10, F11, F16 and F17. Therefore, large means combined with small standard deviations indicate the poor convergence of DE on these four functions. In order to verify the performance of MDE on unconstrained optimization problems with high dimensions, eight functions are chosen from [49]: F1, F2, F4, F5, F6, F9, F12 and F13. The remaining functions are not adopted because their highest defined dimension size is 50. Thus, we can utilize the above eight functions to verify the scalability of MDE. More specifically, the dimension sizes of these eight functions are set to 100. Accordingly, the population sizes and the maximum iteration numbers are set to 60 and 4000, respectively, for all seven DE approaches. 30 independent experiments are implemented in each case, and the means and standard deviations of the best solutions over the 30 runs on the above eight functions [49] are reported in Table 4. The results in Table 4 show that MDE provides better means than the other six DE approaches on functions F2, F4, F9 and F13 with N = 100, and better standard deviations on functions F2 and F4. Furthermore, it remains second best on each of the functions F5, F6 and F9, outperformed only by the best approach. In addition, better means and standard deviations are achieved by JADE for F6 and F12.
With respect to F1, SADE, ODE, JADE, NDE and MDE obtain the same mean, but the standard deviation obtained by SADE is slightly better than those of ODE, JADE, NDE and MDE. With regard to function F5, SADE obtains a better mean than the remaining DEs; moreover, better standard deviations are achieved by SADE for functions F1, F5 and F13. To learn more about the evolution process of the different approaches, the average evolution curves of the best solutions of the seven DE algorithms on the eight functions with N = 100 are depicted in Fig. 2. It is clear from Fig. 2 that MDE achieves the best performance on functions F2, F4, F9 and F13 with N = 100. In addition, MDE achieves the second best performance on function F5, outperformed only by SADE, and on function F12, outperformed only by JADE. On the other hand, DE performs worse than the other six DEs on functions F2, F12 and F13, and MDE pBX performs worse than the other six DEs on functions F1 and F6. In short, MDE demonstrates a fast convergence rate and a strong capacity for exploitation on eight unconstrained problems with large dimension sizes. Finally, the results are carefully analyzed by the Wilcoxon rank-sum test [36,50,51] at the 5% significance level, which is used to determine whether the best approach differs from the other approaches in a statistically significant way. The P-values obtained by the Wilcoxon rank-sum test between the best approach and each of the remaining approaches on all twenty-five problems are reported in Tables 5-7. If a P-value is smaller than 0.05, the null hypothesis does not hold, indicating that the optimization results obtained by the best approach in that case are statistically significant and have not occurred by chance. Here, NA denotes “not available”, and it always corresponds to the best approaches. In
this paper, the best approach is defined as follows: among the seven DEs, the one with the smallest mean is the best; if more than one approach has the same mean, the one with the smallest standard deviation is the best. As can be seen from Table 5, MDE obtains eleven best results, for problems F2, F3, F4, F9, F10, F12, F13, F16, F17, F23 and F25 with N = 30. Moreover, the P-values associated with MDE are larger than 0.05 for problems F7, F21 and F22, indicating that the optimization results obtained by MDE are comparable to those of the best approaches for these three problems. With respect to problems F1, F5, F21, F22 and F24, ODE provides the best optimization results. In addition, both SADE and JADE obtain three best results, and MDE pBX obtains two best results. In contrast, DE and NDE each obtain only one best result. According to Table 6, MDE performs better than the other six DEs on functions F2, F3, F4, F9, F10, F12, F13, F16, F17, F23 and F24 with N = 50. Furthermore, the P-values related to MDE are larger than 0.05 for functions F7 and F18, indicating that the optimization results obtained by MDE are comparable to those of the best approaches for these two functions. Regarding functions F5, F7, F21 and F25, ODE provides the best optimization results; moreover, it is comparable to the best approaches on functions F1, F15, F18, F19 and F20. With respect to functions F1, F19, F20 and F22, SADE produces the best results; in the meantime, it produces results comparable to the best for functions F9, F15, F18 and F21. Additionally, both JADE and MDE pBX produce two best results. Obviously, DE and NDE perform worse on most functions when compared to ODE and SADE, not to mention MDE.
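As a rough illustration of how the comparisons in Tables 5-7 and the best-approach rule above could be computed, the following is a minimal pure-Python sketch. It uses the normal approximation to the rank-sum statistic without tie correction, so it approximates rather than reproduces the exact P-values in the tables; `rank_sum_p` and `best_approach` are illustrative names, not the authors' code.

```python
import math
from statistics import mean, stdev

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum P-value via the normal approximation
    (no tie correction; adequate for 30 independent runs per algorithm)."""
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    # Assign 1-based ranks, averaging the ranks of tied values.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))

def best_approach(results):
    """results: {algorithm name: list of best objective values over runs}.
    Smallest mean wins; ties are broken by the smallest standard deviation,
    following the definition of the best approach given in the text."""
    return min(results, key=lambda a: (mean(results[a]), stdev(results[a])))
```

A P-value below 0.05 from `rank_sum_p` then marks a statistically significant difference at the 5% level, mirroring how Tables 5-7 are read.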
A close observation of Table 7 reveals that MDE achieves statistically superior performance compared to the remaining DEs on functions F2, F4, F9 and F13 with N = 100. For function F6, the performance of MDE is comparable to that of JADE. In addition, both SADE and JADE obtain two best results. In contrast, DE, ODE, NDE and MDE pBX fail to find the best result for any of the eight functions. In order to analyze the suitability of our proposed DE variant for large scale problems, we analyze the computational overhead (execution time without counting fitness evaluations) with respect to the problem dimensionality; Tables 8-10 record the average execution time in each case. Here, the term “AET” represents average execution time. According to Table 8, the computational overhead of MDE is light for F1 − F14 (N = 30) and heavy for F15 − F25 (N = 30). When the dimension size N reaches 50 (Table 9), the computational overhead of MDE is slightly heavy for the first fourteen functions and much heavier for functions F15 − F25. From Table 10, it can be seen that the computational overhead of MDE is slightly heavy for functions F1, F2, F4, F6, F9 and F13 with N = 100, and much heavier for functions F5 and F12 with N = 100. Obviously, the execution time of MDE increases with the problem dimensionality. Moreover, the good performance of MDE comes at the cost of additional operators and more execution time. In most cases, MDE is more time-consuming than the other six DEs, and the external structure is the main cause of the heavy computational overhead. To balance computational accuracy and execution time, the effect of the external structure should be weakened, and other operators with equivalent effect and less execution time may be effective for improving the performance of DE. MDE is superior to the other six DEs on some optimization problems because it improves DE in four aspects.
First, MDE modifies opposition-based learning;
second, MDE adopts two mutation strategies; third, MDE generates the scale factor and crossover rate according to a Gauss distribution and a uniform distribution; finally, MDE produces a central solution. In other words, the good performance of MDE rests on some useful and necessary operators. In any case, the complexity of an algorithm should never be excessive with respect to the problem(s) to be solved. The Ockham's Razor principle [52] can be used to discuss the complexity of the algorithm. Ockham's Razor is commonly stated as follows: entities must not be multiplied beyond necessity. In MDE, the entities are the original DE operators and the additional operators related to the performance improvements of DE, and their multiplication refers to their coordination into complex algorithmic structures. Although MDE adds operators to DE, its structure is still easy to understand, and it can be implemented in a low-level programming language. On the other hand, it should be emphasized that since MDE uses an external structure which is computationally expensive, it has a large execution time for a given number of fitness evaluations. Thus, simplifying the structure of MDE may be useful for eliminating or relaxing the computational overhead caused by the external structure.

5. Conclusions

This paper presents a modified differential evolution algorithm (MDE) for unconstrained optimization problems. MDE improves DE in the following four aspects. First, in the MDE model, each candidate solution has its own scale factor and crossover rate; each scale factor is adjusted using a Gauss distribution, and each crossover rate is adjusted using a uniform distribution. Both distributions have the characteristic of randomness and are thus useful for diversifying the population. Second, in order to ensure the quality of the population, an external archive is adopted, and some good solutions in this external archive can be chosen as candidate solutions.
Third, MDE employs two common mutation schemes to update candidate solutions, which is beneficial to the convergence of the proposed algorithm. Fourth, a central solution is produced by averaging all the other candidate solutions, and it provides a promising and efficient alternative in the evolution process. In all, these amendments to the original DE algorithm result in a crucial performance improvement according to two criteria (mean and standard deviation). Experimental results demonstrate that MDE obtains better solutions than the other six DE algorithms on most unconstrained optimization problems, thus it is a precise and reliable approach for solving unconstrained optimization problems. On the other hand, it should be noticed that since MDE uses an external structure which is computationally expensive, it has a large execution time for a given number of fitness evaluations. Therefore, our future work will concentrate on eliminating or relaxing the computational burden brought by the external structure.

Acknowledgments

This work was supported by the Natural Science Fundamental Research Project of Jiangsu Colleges and Universities (11KJB510026), the Science Fundamental Research Project of Jiangsu Normal University (9212812101), the National Natural Science Foundation of China (No. 61104222), and the Natural Science Foundation of Jiangsu Province (No. BK2011205).
References
[1] J.H. Holland, Adaptation in natural and artificial systems, Univ. Mich. Press, Ann Arbor, MI (1975).
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, Proc. IEEE International Conference on Neural Networks (1995) 1942-1948.
[3] Y.H. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, Proceedings of the IEEE Congress on Evolutionary Computation (1999) 1945-1950.
[4] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671-680.
[5] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B 26(1) (1996) 29-41.
[6] C.N. Dai, M. Yao, Z.J. Xie, C.H. Chen, J.G. Liu, Parameter optimization for growth model of greenhouse crop using genetic algorithms, Applied Soft Computing 9(1) (2009) 13-19.
[7] G.C. Luh, C.Y. Lin, Structural topology optimization using ant colony optimization algorithm, Applied Soft Computing 9(4) (2009) 1343-1353.
[8] B. Karasulu, S. Korukoglu, A simulated annealing-based optimal threshold determining method in edge-based segmentation of grayscale images, Applied Soft Computing 11(2) (2011) 2246-2259.
[9] H.M. Gomes, Truss optimization with dynamic constraints using a particle swarm algorithm, Expert Systems with Applications 38(1) (2011) 957-968.
[10] R. Storn, K. Price, Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report TR-95-012, International Computer Science Institute, Berkeley, USA (1995).
[11] K.V. Price, R.M. Storn, J.A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Natural Computing Series (2005).
[12] J. Vesterstrøm, R. Thomsen, A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems, in: Proceedings of the Congress on Evolutionary Computation 2 (2004) 1980-1987.
[13] O. Hrstka, A.
Kučerová, Improvement of real coded genetic algorithm based on differential operators preventing premature convergence, Advances in Engineering Software 35(3-4) (2004) 237-246.
[14] G.Y. Yang, Z.Y. Dong, K.P. Wong, A modified differential evolution algorithm with fitness sharing for power system planning, IEEE Transactions on Power Systems 23(2) (2008) 514-522.
[15] H.R. Cai, C.Y. Chung, K.P. Wong, Application of differential evolution algorithm for transient stability constrained optimal power flow, IEEE Transactions on Power Systems 23(2) (2008) 719-728.
[16] R. Angira, A. Santosh, Optimization of dynamic systems: a trigonometric differential evolution approach, Computers & Chemical Engineering 31(9) (2007) 1055-1063.
[17] A. Semnani, I.T. Rekanos, M. Kamyab, T.G. Papadopoulos, Two-dimensional microwave imaging based on hybrid scatterer representation and differential evolution, IEEE Transactions on Antennas and Propagation 58(10) (2010) 3289-3298.
[18] C.T. Su, C.S. Lee, Network reconfiguration of distribution systems using improved mixed-integer hybrid differential evolution, IEEE Transactions on Power Delivery 18(3) (2003) 1022-1027.
[19] Z.H. Cai, W.Y. Gong, C.X. Ling, H. Zhang, A clustering-based differential evolution for global optimization, Applied Soft Computing 11(1) (2011) 1363-1379.
[20] A. Jain, M. Murty, P. Flynn, Data clustering: a review, ACM Computing Surveys 31(3) (1999) 264-323.
[21] C.W. Chiang, W.P. Lee, J.S. Heh, A 2-Opt based differential evolution for global optimization, Applied Soft Computing 10(4) (2010) 1200-1207.
[22] G.A. Croes, A method for solving traveling-salesman problems, Operations Research 6(6) (1958) 791-812.
[23] Y. Liu, F. Sun, A fast differential evolution algorithm using k-Nearest Neighbour predictor, Expert Systems with Applications 38(4) (2011) 4254-4258.
[24] C.M. Bishop, Neural networks for pattern recognition, Oxford University Press (1995).
[25] V.
Vapnik, Statistical learning theory, NY: Wiley (1998). [26] M.G.H. Omran, A.P. Engelbrecht, A. Salman, Bare bones differential evolution, European Journal of Operational Research 196(1) (2009) 128-139. [27] J.Q. Zhang, A.C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Transactions on Evolutionary Computation 13(5) (2009) 945-958.
[28] C.S. Deng, B.Y. Zhao, A.Y. Deng, R.X. Hu, New differential evolution algorithm with a second enhanced mutation operator, International Workshop on Intelligent Systems and Applications (2009) 1-4.
[29] F. Neri, V. Tirronen, Recent advances in differential evolution: a survey and experimental analysis, Artificial Intelligence Review 33(1) (2010) 61-106.
[30] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution, IEEE Transactions on Evolutionary Computation 12(1) (2008) 64-79.
[31] H.R. Tizhoosh, Opposition-based learning: a new scheme for machine intelligence, in: Proceedings of the 2005 International Conference on Computational Intelligence for Modelling, Control and Automation, Vienna, Austria (2005) 695-701.
[32] U.K. Chakraborty, S. Das, A. Konar, Differential evolution with local neighborhood, in: Proceedings of the IEEE Congress on Evolutionary Computation (2006) 2042-2049.
[33] S. Das, A. Abraham, U.K. Chakraborty, A. Konar, Differential evolution using a neighborhood-based mutation operator, IEEE Transactions on Evolutionary Computation 13(3) (2009) 526-553.
[34] A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for numerical optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation 2 (2005) 1785-1791.
[35] J. Brest, S. Greiner, B. Bošković, M. Mernik, V. Žumer, Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation 10(6) (2006) 646-657.
[36] M. Weber, V. Tirronen, F. Neri, Scale factor inheritance mechanism in distributed differential evolution, Soft Computing - A Fusion of Foundations, Methodologies and Applications 14(11) (2010) 1187-1207.
[37] F. Neri, G. Iacca, E. Mininno, Disturbed exploitation compact differential evolution for limited memory optimization problems, Information Sciences 181(12) (2011) 2469-2487.
[38] M. Weber, F. Neri, V.
Tirronen, A study on scale factor in distributed differential evolution, Information Sciences 181(12) (2011) 2488-2511.
[39] E. Mininno, F. Neri, F. Cupertino, D. Naso, Compact differential evolution, IEEE Transactions on Evolutionary Computation 15(1) (2011) 32-54.
[40] W.J. Cody, Rational Chebyshev approximations for the error function, Mathematics of Computation 23(107) (1969) 631-637.
[41] F. Neri, E. Mininno, Memetic compact differential evolution for Cartesian robot control, IEEE Computational Intelligence Magazine 5(2) (2010) 54-65.
[42] S. Das, A. Konar, U.K. Chakraborty, Two improved differential evolution schemes for faster global search, GECCO 2005, 991-998.
[43] S.M. Islam, S. Das, S. Ghosh, S. Roy, P.N. Suganthan, An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B 42(2) (2012) 482-500.
[44] M. Weber, F. Neri, V. Tirronen, Shuffle or update parallel differential evolution for large scale optimization, Soft Computing - A Fusion of Foundations, Methodologies and Applications 15(11) (2011) 2089-2107.
[45] D.X. Zou, L.Q. Gao, J.H. Wu, S. Li, Novel global harmony search algorithm for unconstrained problems, Neurocomputing 73(16-18) (2010) 3308-3318.
[46] D.X. Zou, H.K. Liu, L.Q. Gao, S. Li, An improved differential evolution algorithm for the task assignment problem, Engineering Applications of Artificial Intelligence 24(4) (2011) 616-624.
[47] D.X. Zou, H.K. Liu, L.Q. Gao, S. Li, A novel modified differential evolution algorithm for constrained optimization problems, Computers & Mathematics with Applications 61(6) (2011) 1608-1623.
[48] Y. Liu, Z. Qin, Z.W. Shi, J. Lu, Center particle swarm optimization, Neurocomputing 70(4-6) (2007) 672-679.
[49] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S.
Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization, Technical Report, Nanyang Technological University, Singapore (2005).
[50] F. Wilcoxon, Individual comparisons by ranking methods, Biometrics Bulletin 1(6) (1945) 80-83.
[51] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation 1(1) (2011) 3-18.
[52] G. Iacca, F. Neri, E. Mininno, Y.S. Ong, M.H. Lim, Ockham's razor in memetic computing: three stage optimal memetic exploration, Information Sciences 188 (2012) 17-43.
[Fig. 1 plots the selection probability γ (vertical axis, from 0 to 1) against the iteration number (horizontal axis, from 0 to 1000).]
Fig. 1. Selection probability γ
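The curve in Fig. 1 follows the selection probability γ = (k/K)^(1/4) computed in line 8 of the pseudocode in Table 1. A quick sketch of its shape, with K = 1000 as in the N = 30 experiments:

```python
# Selection probability from Table 1, line 8: gamma = (k / K) ** (1 / 4).
# It grows quickly at first and saturates towards 1, so the global-best
# mutation strategy is chosen increasingly often in late generations.
def gamma(k, K=1000):
    """Probability of using the global-best mutation at iteration k."""
    return (k / K) ** 0.25
```

Because of the 1/4 exponent, γ already exceeds 0.5 after only about 6% of the iterations (0.5^4 = 0.0625), which biases MDE towards exploitation for most of the run.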
[Fig. 2 consists of eight panels: (a) Function F1, (b) Function F2, (c) Function F4, (d) Function F5, (e) Function F6, (f) Function F9, (g) Function F12 and (h) Function F13. Each panel plots the average function value against the generation number (up to 4000) for DE, SADE, ODE, JADE, NDE, MDE pBX and MDE.]
Fig. 2. Average optimization curves of seven DE approaches on solving problems F1 , F2 , F4 , F5 , F6 , F9 , F12 and F13 with N = 100
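Curves such as those in Fig. 2 are obtained by recording the best objective value found at every generation in each run and then averaging across the 30 runs. A minimal sketch (the function name is illustrative, not the authors' code):

```python
def average_curve(runs):
    """runs: list of per-run histories, each a list holding the best
    objective value found at every generation of that run. Returns the
    per-generation average over all runs, i.e. one plotted curve."""
    n = len(runs)
    return [sum(h[g] for h in runs) / n for g in range(len(runs[0]))]
```

Plotting one such averaged list per algorithm against the generation index reproduces the style of comparison shown in Fig. 2.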
Table 1 Pseudo code of MDE

Line  Procedure of MDE
1     Begin
2       Set population size M; maximal iteration number K; lower and upper bounds x_min = (x_min,1, ..., x_min,N) and x_max = (x_max,1, ..., x_max,N)
3       Set mF = 0.75; δF = 0.1; Fmin = 0.5; Fmax = 1; mCR = 0.9; δCR = 0.1; CRmin = 0.8; CRmax = 1
4       Initialize a random population P
5       For k = 1 to K
6         Produce an external archive P′ according to Eqs. (8)-(12)
7         Select the first half best solutions from the combination of P and P′, and reassign them to P
8         Calculate selection probability: γ = (k/K)^(1/4)
9         For i = 1 to M − 1
10          F^i = Gd(mF, δF); CR^i = Ud(mCR − δCR, mCR + δCR)
11          If rand() < γ
12            Randomly generate two integers i1 and i2 in the range [1, M], with i1 ≠ i2 ≠ i
13            v_i^(k+1) = x_best^k + F^i × (x_i1^k − x_i2^k)
14          Else
15            Randomly generate three integers i1, i2 and i3 in the range [1, M], with i1 ≠ i2 ≠ i3 ≠ i
16            v_i^(k+1) = x_i3^k + F^i × (x_i1^k − x_i2^k)
17          End If
18          Randomly generate an integer jrand in the range [1, N]
19          For j = 1 to N
20            If rand < CR^i or j = jrand
21              u_(i,j)^(k+1) = v_(i,j)^(k+1)
22            Else
23              u_(i,j)^(k+1) = x_(i,j)^k
24            End If
25          End For
26          If f(u_i^(k+1)) < f(x_i^k)
27            x_i^(k+1) = u_i^(k+1)
28          Else
29            x_i^(k+1) = x_i^k
30          End If
31        End For
32        Calculate central solution u_M^(k+1) = (1/M) Σ_{i=1}^{M−1} x_i^k
33        If f(u_M^(k+1)) < f(x_M^k)
34          x_M^(k+1) = u_M^(k+1)
35        Else
36          x_M^(k+1) = x_M^k
37        End If
38      End For
39    End
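The procedure in Table 1 can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the external archive of Eqs. (8)-(12) is not reproduced here (a freshly randomized archive stands in for it, which is an assumption), bound handling and the clamping of F and CR to [Fmin, Fmax] and [CRmin, CRmax] are omitted, and `mde` is an illustrative name.

```python
import random

def mde(f, bounds, M=40, K=1000, mu_f=0.75, d_f=0.1, mu_cr=0.9, d_cr=0.1):
    """Sketch of the MDE loop of Table 1. `f` is the objective function,
    `bounds` a list of (low, high) pairs, one per dimension."""
    N = len(bounds)
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    P = [rand_point() for _ in range(M)]
    for k in range(1, K + 1):
        # Lines 6-7: merge the population with an external archive and
        # keep the best half (random archive stands in for Eqs. (8)-(12)).
        archive = [rand_point() for _ in range(M)]
        P = sorted(P + archive, key=f)[:M]
        gamma = (k / K) ** 0.25            # line 8: selection probability
        best = min(P, key=f)
        for i in range(M - 1):
            F = random.gauss(mu_f, d_f)    # line 10: per-individual F, CR
            CR = random.uniform(mu_cr - d_cr, mu_cr + d_cr)
            others = [j for j in range(M) if j != i]
            if random.random() < gamma:    # DE/best/1, favored late on
                i1, i2 = random.sample(others, 2)
                base = best
            else:                          # DE/rand/1, favored early on
                i1, i2, i3 = random.sample(others, 3)
                base = P[i3]
            v = [base[j] + F * (P[i1][j] - P[i2][j]) for j in range(N)]
            jrand = random.randrange(N)    # lines 18-25: binomial crossover
            u = [v[j] if (random.random() < CR or j == jrand) else P[i][j]
                 for j in range(N)]
            if f(u) < f(P[i]):             # lines 26-30: greedy selection
                P[i] = u
        # Lines 32-37: the central solution (1/M of the sum over the other
        # M-1 individuals, as printed in Table 1) replaces the last
        # individual if it improves on it.
        center = [sum(P[i][j] for i in range(M - 1)) / M for j in range(N)]
        if f(center) < f(P[M - 1]):
            P[M - 1] = center
    return min(P, key=f)
```

For example, on a 2-D sphere function the sketch converges quickly because the elitist archive selection and the increasingly frequent global-best mutation both pull the population towards the current best region.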
Table 2 Comparison of DE, SADE, ODE, JADE, NDE, MDE pBX and with means and standard deviations Fun/Alg DE SADE ODE F1 -4.5000e+002 -4.5000e+002 -4.5000e+002 (3.6945e-006) (3.0056e-011) (0) F2 2.6746e+004 1.4418e+003 1.1594e+004 (5.1516e+003) (8.1815e+002) (4.4786e+003) F3 2.0319e+008 1.2993e+007 1.3569e+008 (5.0078e+007) (5.6644e+006) (2.7147e+007) F4 3.8092e+004 8.0429e+004 1.0108e+004 (7.3196e+003) (1.6406e+004) (6.6140e+003) F5 3.4612e+003 1.8659e+003 1.6057e+003 (8.4382e+002) (5.3702e+002) (8.7081e+002) F6 5.3569e+002 4.6066e+002 4.5787e+002 (1.6375e+002) (6.0811e+001) (6.6482e+001) F7 -1.7865e+002 -1.7977e+002 -1.7997e+002 (2.0896e-001) (1.4955e-001) (5.4975e-002) F8 -1.1897e+002 -1.1898e+002 -1.1898e+002 (4.1375e-002) (5.7571e-002) (6.7260e-002) F9 -1.9560e+002 -2.8782e+002 -2.0502e+002 (1.0122e+001) (6.1550e+000) (2.1369e+001) F10 -8.4303e+001 -1.2690e+002 -1.1173e+002 (9.6522e+000) (1.1150e+001) (1.1929e+001) F11 1.3136e+002 1.2824e+002 1.2816e+002 (1.4112e+000) (1.2740e+000) (4.0920e+000) F12 3.4538e+005 5.5368e+004 1.4291e+005 (5.3975e+004) (4.1296e+004) (1.0362e+005) F13 -1.1245e+002 -1.2142e+002 -1.1534e+002 (1.2139e+000) (9.7213e-001) (1.0909e+000) F14 -2.8622e+002 -2.8636e+002 -2.8633e+002 (1.2968e-001) (1.0495e-001) (1.9370e-001) F15 4.1818e+002 4.5334e+002 4.7149e+002 (7.2447e+001) (1.0548e+002) (9.4605e+001) F16 4.2109e+002 3.4470e+002 4.0871e+002 (5.2301e+001) (3.5633e+001) (6.8472e+001) F17 4.6157e+002 4.3282e+002 4.4654e+002 (6.5204e+001) (7.0255e+001) (9.9609e+001) F18 9.1237e+002 9.1113e+002 9.1211e+002 (8.1847e-001) (1.0728e+000) (7.9801e-001) F19 9.1240e+002 9.1109e+002 9.1160e+002 (6.6292e-001) (1.3062e+000) (5.3881e-001) F20 9.1240e+002 9.1118e+002 9.1162e+002 (6.6292e-001) (1.3086e+000) (5.4773e-001) F21 1.2215e+003 1.2393e+003 1.2160e+003 (9.6986e+001) (1.0169e+002) (1.1372e+002) F22 1.2716e+003 1.2501e+003 1.2487e+003 (1.5740e+001) (1.6492e+001) (1.5490e+001) F23 9.3311e+002 1.0893e+003 1.0577e+003 (3.5998e+001) (1.7335e+002) 
(1.1922e+002) F24 1.2418e+003 1.0206e+003 4.6000e+002 (2.8669e+000) (2.6790e+002) (0) F25 5.1524e+002 4.7493e+002 4.7353e+002 (1.1246e+001) (1.4614e+000) (9.3482e-001)
MDE on instances F1 − F25 (N = 30) JADE -4.5000e+002 (0) -4.4731e+002 (2.0987e+000) 5.2368e+006 (3.2671e+006) 8.1057e+002 (1.0292e+003) 2.7459e+003 (8.8045e+002) 4.1593e+002 (2.5021e+001) -1.7999e+002 (1.1860e-002) -1.1898e+002 (5.9958e-002) -2.5624e+002 (1.0890e+001) -1.4464e+002 (1.2946e+001) 1.2716e+002 (1.2749e+000) 1.2060e+004 (1.2693e+004) -1.1868e+002 (1.2555e+000) -2.8667e+002 (2.6286e-001) 4.6534e+002 (9.1949e+001) 4.2860e+002 (1.0752e+002) 4.6747e+002 (1.2779e+002) 9.1188e+002 (1.2566e+000) 9.1228e+002 (1.9346e+000) 9.1287e+002 (2.5337e+000) 1.3214e+003 (2.6261e+001) 1.2696e+003 (2.6040e+001) 1.0692e+003 (1.8421e+002) 6.9167e+002 (2.6631e+002) 4.7519e+002 (2.1431e+000)
NDE -4.5000e+002 (5.5855e-014) 2.0087e+003 (2.5578e+003) 6.4650e+006 (3.7899e+006) 2.3090e+004 (1.0604e+004) 6.1802e+003 (1.5258e+003) 4.6999e+002 (9.2086e+001) -1.7996e+002 (2.9207e-002) -1.1962e+002 (1.0481e-001) -2.8441e+002 (1.0307e+001) -2.1853e+002 (3.1357e+001) 1.2745e+002 (3.6591e+000) 4.7407e+004 (5.2339e+004) -1.1832e+002 (7.4104e+000) -2.8638e+002 (4.3753e-001) 4.9399e+002 (1.0226e+002) 3.8265e+002 (1.4330e+002) 4.2747e+002 (1.4630e+002) 9.1657e+002 (2.4690e+000) 9.2299e+002 (9.4301e+000) 9.2449e+002 (1.0699e+001) 1.3619e+003 (2.5198e+001) 1.3592e+003 (4.9916e+001) 1.0403e+003 (1.0595e+002) 8.6952e+002 (2.8776e+002) 7.8148e+002 (3.6393e+002)
MDE pBX -1.4661e+002 (4.1644e+002) 3.8330e+003 (2.2761e+003) 1.2550e+007 (5.7772e+006) 1.3248e+004 (1.2960e+004) 6.0691e+003 (1.6239e+003) 9.2399e+007 (1.4136e+008) 4.6111e+003 (3.6450e+002) -1.1898e+002 (5.0283e-002) -2.8970e+002 (1.0417e+001) -2.1517e+002 (3.4585e+001) 1.1955e+002 (3.8406e+000) 4.5588e+004 (2.1288e+004) -1.8011e+001 (3.6810e+002) -2.8720e+002 (2.6255e-001) 4.7968e+002 (8.9911e+001) 4.2285e+002 (1.6421e+002) 4.8753e+002 (1.4694e+002) 9.3862e+002 (2.4544e+001) 9.3586e+002 (1.1294e+001) 9.1389e+002 (4.2513e+001) 1.3078e+003 (3.8095e+001) 1.3365e+003 (3.7679e+001) 1.2585e+003 (1.8919e+002) 1.3227e+003 (2.1035e+002) 8.3086e+002 (3.6807e+002)
MDE -4.5000e+002 (4.0881e-014) -4.5000e+002 (2.7998e-005) 3.7715e+005 (1.9341e+005) -1.1571e+002 (3.0443e+002) 2.0704e+003 (6.4474e+002) 4.2657e+002 (2.6391e+001) -1.7998e+002 (9.9092e-003) -1.1900e+002 (4.3267e-002) -2.9352e+002 (1.0917e+001) -2.5348e+002 (4.6193e+001) 1.2627e+002 (6.8903e+000) 9.6035e+003 (1.6371e+004) -1.2218e+002 (6.5013e+000) -2.8657e+002 (5.8719e-001) 4.8557e+002 (1.0045e+002) 2.2365e+002 (4.0095e+001) 2.5447e+002 (5.9974e+001) 9.1290e+002 (2.5957e+000) 9.1437e+002 (3.8817e+000) 9.2040e+002 (1.1292e+001) 1.2261e+003 (7.5031e+001) 1.2593e+003 (2.9899e+001) 8.9850e+002 (3.8922e+000) 7.0835e+002 (2.8510e+002) 4.7267e+002 (1.2977e+000)
Table 3 Comparison of DE, SADE, ODE, JADE, NDE, MDE pBX and with means and standard deviations Fun/Alg DE SADE ODE F1 -4.5000e+002 -4.5000e+002 -4.5000e+002 (1.7964e-004) (0) (1.0556e-014) F2 1.1180e+005 7.5228e+003 7.2432e+004 (1.1835e+004) (1.6141e+003) (1.3918e+004) F3 6.8382e+008 1.8961e+007 4.3421e+008 (1.1985e+008) (6.2099e+006) (6.5760e+007) F4 1.4095e+005 2.4739e+005 8.1216e+004 (1.4658e+004) (3.0298e+004) (2.2712e+004) F5 1.4173e+004 4.9305e+003 3.7425e+003 (1.9303e+003) (1.0489e+003) (7.3124e+002) F6 1.2381e+003 4.7638e+002 8.1275e+004 (6.3239e+002) (4.3935e+001) (2.4839e+005) F7 -1.7583e+002 -1.7993e+002 -1.7999e+002 (1.4489e+000) (3.7930e-002) (7.8084e-003) F8 -1.1882e+002 -1.1882e+002 -1.1880e+002 (4.5854e-002) (4.7877e-002) (2.8210e-002) F9 -4.4274e+001 -2.4581e+002 -5.7423e+001 (1.0235e+001) (1.0045e+001) (2.3832e+001) F10 1.2466e+002 2.8880e+001 6.9063e+001 (1.6017e+001) (2.0394e+001) (1.9994e+001) F11 1.6494e+002 1.5908e+002 1.5459e+002 (1.4359e+000) (2.3719e+000) (9.3300e+000) F12 1.4051e+006 6.0582e+004 4.2143e+005 (1.6861e+005) (4.5807e+004) (3.7810e+005) F13 -9.1155e+001 -1.1240e+002 -1.0057e+002 (2.2120e+000) (1.3332e+000) (2.0177e+000) F14 -2.7650e+002 -2.7663e+002 -2.7659e+002 (1.6729e-001) (1.5672e-001) (2.0201e-001) F15 3.7478e+002 3.9333e+002 4.2106e+002 (5.1343e+001) (7.4837e+001) (9.5291e+001) F16 4.5071e+002 4.1530e+002 4.1135e+002 (2.4655e+001) (5.8622e+001) (2.8022e+001) F17 5.0033e+002 4.5860e+002 5.3342e+002 (4.1408e+001) (5.3273e+001) (7.9835e+001) F18 1.0017e+003 9.8697e+002 9.8830e+002 (4.1562e+000) (2.2144e+000) (3.1197e+000) F19 1.0019e+003 9.8686e+002 9.8828e+002 (4.3084e+000) (2.0767e+000) (3.1400e+000) F20 1.0019e+003 9.8686e+002 9.8826e+002 (4.3084e+000) (2.0767e+000) (3.1438e+000) F21 1.0686e+003 1.0708e+003 1.0549e+003 (2.9283e+002) (3.1910e+002) (3.3719e+002) F22 1.2759e+003 1.2710e+003 1.2878e+003 (3.8179e+000) (6.7102e+000) (2.0626e+001) F23 1.3759e+003 1.3732e+003 1.3735e+003 (1.9594e+000) (1.7342e+000) (1.9163e+000) 
F24 1.2846e+003 9.6994e+002 8.1507e+002 (3.1223e+000) (3.2183e+002) (3.1323e+002) F25 6.5380e+002 4.9078e+002 4.8756e+002 (2.3393e+001) (2.7980e+000) (2.1402e+000)
MDE on instances F1 − F25 (N = 50) JADE -4.5000e+002 (7.6117e-014) -2.4475e+002 (2.4211e+002) 3.1870e+006 (1.2706e+006) 1.4688e+004 (5.6533e+003) 7.5236e+003 (9.2514e+002) 4.4350e+002 (4.7511e+001) -1.7999e+002 (1.8460e-002) -1.1882e+002 (3.4861e-002) -1.8694e+002 (2.1430e+001) 2.5921e+001 (2.4060e+001) 1.5837e+002 (1.6729e+000) 6.4440e+004 (4.4235e+004) -1.0294e+002 (3.6511e+000) -2.7689e+002 (1.5651e-001) 4.9387e+002 (6.6025e+001) 3.9416e+002 (5.9249e+001) 4.4436e+002 (6.1157e+001) 9.6619e+002 (5.2689e+001) 1.0072e+003 (1.3972e+001) 1.0050e+003 (1.1035e+001) 1.3882e+003 (2.0250e+001) 1.3217e+003 (1.9599e+001) 1.1436e+003 (2.2847e+002) 8.3826e+002 (3.7638e+002) 8.0311e+002 (3.6216e+002)
NDE -4.5000e+002 (1.6693e-012) 1.3830e+004 (6.4318e+003) 1.4520e+007 (1.0780e+007) 7.1351e+004 (2.1837e+004) 1.5800e+004 (2.5109e+003) 5.3902e+002 (1.0529e+002) -1.7997e+002 (2.6739e-002) -1.1970e+002 (1.4087e-001) -2.3158e+002 (1.8667e+001) -1.2942e+002 (4.0019e+001) 1.5993e+002 (6.9811e+000) 1.0580e+005 (9.0150e+004) -8.0977e+001 (3.3298e+001) -2.7646e+002 (3.2953e-001) 4.7095e+002 (6.2782e+001) 3.5054e+002 (9.8523e+001) 3.8512e+002 (7.7240e+001) 1.0269e+003 (1.8808e+001) 1.0303e+003 (1.8525e+001) 1.0242e+003 (1.4069e+001) 1.4462e+003 (2.6411e+001) 1.3994e+003 (4.4210e+001) 1.3288e+003 (6.7112e+001) 1.0630e+003 (3.3390e+002) 1.0871e+003 (4.0000e+002)
MDE pBX 2.1668e+003 (2.1963e+003) 1.3616e+004 (4.3723e+003) 3.7041e+007 (1.4568e+007) 7.3321e+004 (3.2211e+004) 1.3132e+004 (1.9585e+003) 4.2828e+008 (4.7271e+008) 6.0392e+003 (2.7916e+001) -1.1880e+002 (3.2671e-002) -2.3025e+002 (2.5230e+001) -1.4064e+002 (5.3112e+001) 1.4566e+002 (4.6649e+000) 2.0591e+005 (7.0587e+004) -4.2938e+001 (6.1975e+001) -2.7746e+002 (4.3401e-001) 5.4492e+002 (4.6900e+001) 3.7986e+002 (1.2198e+002) 4.9436e+002 (1.4937e+002) 1.0612e+003 (4.2678e+001) 1.0630e+003 (2.1452e+001) 1.0702e+003 (2.0428e+001) 1.3034e+003 (1.2139e+001) 1.5476e+003 (8.5036e+001) 1.5023e+003 (9.4738e+001) 1.5609e+003 (5.4633e+001) 1.5510e+003 (5.0937e+001)
MDE -4.5000e+002 (2.2811e-013) -4.4998e+002 (2.1913e-002) 9.6158e+005 (4.5503e+005) 7.1202e+003 (3.8631e+003) 5.9634e+003 (1.2436e+003) 4.7150e+002 (4.4779e+001) -1.7999e+002 (1.6094e-002) -1.1884e+002 (3.3455e-002) -2.4585e+002 (1.8039e+001) -1.7652e+002 (9.3024e+001) 1.5872e+002 (9.9312e+000) 3.8276e+004 (2.6880e+004) -1.2264e+002 (7.3062e+000) -2.7669e+002 (1.4781e-001) 4.2598e+002 (5.8153e+001) 2.6269e+002 (7.1326e+001) 3.3638e+002 (1.3359e+002) 9.8227e+002 (5.4594e+001) 9.9068e+002 (3.0581e+001) 1.0078e+003 (1.4222e+001) 1.3383e+003 (4.2562e+001) 1.3093e+003 (2.5058e+001) 1.0690e+003 (1.6214e+002) 4.6017e+002 (2.0769e-001) 4.9173e+002 (6.0382e+000)
Table 4 Comparison of DE, SADE, ODE, JADE, NDE, MDE_pBX and MDE on eight instances (N = 100) with means and standard deviations

Fun/Alg  DE                          SADE                         ODE                          JADE                         NDE                          MDE_pBX                      MDE
F1       -3.9729e+002 (1.0195e+001)  -4.5000e+002 (1.0556e-014)   -4.5000e+002 (2.7927e-014)   -4.5000e+002 (1.7726e-013)   -4.5000e+002 (1.9162e-012)   7.2374e+002 (1.1694e+003)    -4.5000e+002 (9.9413e-013)
F2       4.8898e+005 (4.6288e+004)   5.0449e+004 (7.2590e+003)    4.0361e+005 (3.8850e+004)    4.7821e+003 (1.9255e+003)    7.1110e+004 (2.6575e+004)    1.1060e+004 (4.4484e+003)    -3.4821e+002 (1.4884e+002)
F4       6.0287e+005 (4.6366e+004)   8.6559e+005 (9.5530e+004)    4.8812e+005 (6.0969e+004)    6.5057e+004 (1.1318e+004)    2.5516e+005 (5.8440e+004)    7.9207e+004 (1.4969e+004)    4.7601e+004 (8.5649e+003)
F5       3.3445e+004 (2.4342e+003)   9.0970e+003 (1.4968e+003)    2.0015e+004 (2.6617e+003)    1.8170e+004 (2.7954e+003)    3.7364e+004 (4.9192e+003)    3.2187e+004 (2.4362e+003)    1.3111e+004 (2.3598e+003)
F6       6.0286e+006 (1.7079e+006)   5.3886e+002 (4.7929e+001)    3.8231e+003 (1.8192e+004)    5.1772e+002 (4.5437e+001)    5.9252e+002 (4.8659e+001)    3.7760e+008 (3.4090e+008)    5.3113e+002 (4.8777e+001)
F9       4.3996e+002 (1.7893e+001)   -4.7505e+001 (1.8852e+001)   4.2843e+002 (1.9229e+001)    4.7171e+001 (9.6301e+001)    -5.8524e+001 (5.3878e+001)   -7.1720e+001 (3.9769e+001)   -9.2237e+001 (3.5764e+001)
F12      1.1531e+007 (5.3871e+005)   7.3591e+005 (8.8750e+005)    8.5265e+006 (2.0376e+006)    2.0294e+005 (9.4541e+004)    6.0507e+005 (3.3526e+005)    5.0854e+005 (1.6627e+005)    3.1332e+005 (1.9249e+005)
F13      1.2913e+004 (4.2364e+003)   -8.5496e+001 (2.7601e+000)   -6.0181e+001 (4.2222e+000)   -5.2135e+001 (6.6864e+000)   3.3033e+001 (4.6378e+001)    2.2204e+002 (2.9786e+002)    -1.1644e+002 (3.1570e+000)

Table 5 P-values calculated for Wilcoxon rank-sum test for functions F1−F25 with N = 30

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
f1       1.2118e-012  1.2108e-012  NA           NA           5.5610e-003  1.2118e-012  5.5102e-003
f2       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA
f3       3.0199e-011  3.0199e-011  3.0199e-011  3.3384e-011  3.0199e-011  3.0199e-011  NA
f4       3.0199e-011  3.0199e-011  3.0199e-011  3.2555e-007  3.0199e-011  3.0199e-011  NA
f5       5.4617e-009  9.3341e-002  NA           8.8829e-006  3.3384e-011  3.0199e-011  1.6955e-002
f6       2.0152e-008  3.5201e-007  1.2541e-007  NA           7.1988e-005  3.0199e-011  1.4423e-003
f7       3.0199e-011  4.5043e-011  4.2067e-002  NA           1.8608e-006  3.0161e-011  3.9527e-001
f8       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA           3.0199e-011  3.0199e-011
f9       3.0199e-011  2.2360e-002  3.0199e-011  3.0199e-011  3.3386e-003  1.4128e-001  NA
f10      9.9186e-011  7.7725e-009  4.1825e-009  8.4848e-009  1.8608e-006  1.0277e-006  NA
f11      3.6897e-011  3.1589e-010  7.0881e-008  6.1210e-010  1.2023e-008  NA           5.6073e-005
f12      3.0199e-011  1.1023e-008  5.0723e-010  4.2067e-002  2.1540e-006  2.6695e-009  NA
f13      7.7725e-009  7.7272e-002  2.2360e-002  7.7272e-002  7.6171e-003  1.2493e-005  NA
f14      3.0199e-011  3.0199e-011  6.0658e-011  3.0811e-008  2.3897e-008  NA           2.0338e-009
f15      NA           3.9525e-001  4.8289e-002  7.0103e-002  6.0957e-003  1.0313e-002  9.0620e-003
f16      3.0199e-011  3.0199e-011  3.0199e-011  6.0584e-011  8.1917e-007  1.3583e-007  NA
f17      3.0199e-011  8.9827e-011  1.3273e-010  1.8548e-009  1.2852e-006  8.8815e-010  NA
f18      6.3560e-005  NA           9.0307e-004  2.3243e-002  3.3384e-011  2.5721e-007  6.0971e-003
f19      1.7836e-004  NA           1.4128e-001  1.6285e-002  1.3289e-010  3.0199e-011  3.0059e-004
f20      4.9818e-004  NA           2.5188e-001  9.0688e-003  6.0658e-011  2.7086e-002  1.4423e-003
f21      8.6499e-001  6.4142e-001  NA           9.5192e-004  8.4808e-009  7.9581e-003  9.8231e-001
f22      8.8829e-006  7.7312e-001  NA           1.3703e-003  3.0199e-011  3.3384e-011  7.0127e-002
f23      1.8912e-004  1.0274e-006  1.2050e-010  4.4261e-003  7.3763e-010  3.0180e-011  NA
f24      1.2118e-012  1.2118e-012  NA           1.7772e-010  1.2019e-012  1.2118e-012  1.8231e-010
f25      3.0199e-011  2.3800e-003  7.5059e-001  5.8282e-003  3.0199e-011  1.9527e-003  NA
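The p-values in Tables 5–7 come from the two-sided Wilcoxon rank-sum test between the compared algorithm's runs and the reference algorithm's runs (NA marks the reference column for that function). As an illustrative sketch, the test can be computed with the standard normal approximation; the implementation below is a minimal pure-Python version without the tie-variance correction, not the paper's actual statistical code:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal
    approximation, with tied values assigned their average rank."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    n1, n2 = len(x), len(y)
    vals = [v for v, _ in combined]
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        avg = (i + j + 1) / 2.0  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg
        i = j
    w = sum(ranks[i] for i in range(n1))  # rank sum of first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))
```

For the sample sizes used in benchmark studies (e.g. 30 runs per algorithm), a p-value below 0.05 indicates a statistically significant difference; an exact-distribution implementation such as `scipy.stats.ranksums` would give slightly different values for small samples.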
Table 6 P-values calculated for Wilcoxon rank-sum test for functions F1−F25 with N = 50

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
f1       1.2118e-012  NA           3.3371e-001  6.3490e-005  1.8051e-010  1.2118e-012  5.5135e-009
f2       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA
f3       3.0199e-011  3.0199e-011  3.0199e-011  4.1997e-010  3.0199e-011  3.0199e-011  NA
f4       3.0199e-011  3.0199e-011  3.0199e-011  6.0459e-007  3.0199e-011  3.0199e-011  NA
f5       3.0199e-011  1.1674e-005  NA           3.0199e-011  3.0199e-011  3.0199e-011  8.8910e-010
f6       3.0199e-011  4.7138e-004  2.5974e-005  NA           1.4918e-006  3.0199e-011  1.1738e-003
f7       3.0199e-011  3.6897e-011  NA           7.5059e-001  1.4733e-007  3.0199e-011  6.3533e-002
f8       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA           3.0199e-011  3.0199e-011
f9       3.0199e-011  9.4696e-001  3.0199e-011  1.4643e-010  2.7548e-003  9.8834e-003  NA
f10      6.0658e-011  9.0632e-008  1.0105e-008  7.6950e-008  2.6784e-006  4.0840e-005  NA
f11      3.0199e-011  3.6897e-011  2.3885e-004  3.0199e-011  6.7220e-010  NA           8.2919e-006
f12      3.0199e-011  3.1466e-002  6.5261e-007  9.0688e-003  3.5923e-005  3.6897e-011  NA
f13      3.6897e-011  8.4848e-009  7.1186e-009  7.7725e-009  5.0723e-010  6.6955e-011  NA
f14      4.9752e-011  8.1527e-011  8.9934e-011  4.3106e-008  6.1210e-010  NA           9.9186e-011
f15      NA           9.1171e-001  4.1188e-001  3.0662e-008  6.5261e-007  3.0199e-011  1.8564e-003
f16      3.0199e-011  7.1186e-009  8.8910e-010  9.0632e-008  4.4592e-004  8.6634e-005  NA
f17      1.5292e-005  6.5486e-004  2.0023e-006  1.7666e-003  7.4827e-002  1.7836e-004  NA
f18      1.3272e-002  3.8710e-001  4.8252e-001  NA           2.1947e-008  1.0105e-008  4.8252e-001
f19      3.0199e-011  NA           1.2597e-001  1.7769e-010  3.0199e-011  3.0199e-011  1.3832e-002
f20      3.0199e-011  NA           1.3345e-001  3.0199e-011  3.0199e-011  3.0199e-011  2.1544e-010
f21      6.5203e-001  9.1170e-001  NA           5.2603e-004  7.3713e-011  1.3730e-001  2.6069e-002
f22      1.7666e-003  NA           3.7704e-004  4.0772e-011  3.0199e-011  3.0199e-011  1.0105e-008
f23      8.4848e-009  8.4848e-009  8.4848e-009  3.7108e-001  9.8329e-008  9.9186e-011  NA
f24      2.9897e-011  1.6734e-008  1.5405e-005  2.9658e-003  2.1912e-009  2.9897e-011  NA
f25      3.0199e-011  4.0840e-005  NA           5.2640e-004  3.0199e-011  3.0199e-011  4.2259e-003
Table 7 P-values calculated for Wilcoxon rank-sum test for eight functions with N = 100

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
f1       1.7203e-012  NA           2.4637e-002  4.3392e-010  1.8100e-012  1.7203e-012  1.7872e-012
f2       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA
f4       3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  9.9186e-011  NA
f5       3.0199e-011  NA           3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  2.9215e-009
f6       3.0199e-011  4.0354e-001  1.3345e-001  NA           1.7294e-007  3.0199e-011  1.5798e-001
f9       3.0199e-011  2.7829e-007  3.0199e-011  1.0277e-006  1.0763e-002  4.0595e-002  NA
f12      3.0199e-011  1.7290e-006  3.0199e-011  NA           1.1023e-008  8.1014e-010  1.6285e-002
f13      3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  3.0199e-011  NA
Table 8 Average execution time (AET) for function Fi (i = 1, ..., 25) with N = 30

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
F1       3.8629e-001  3.7423e-001  4.7720e-001  1.0190e+000  3.6271e-001  1.5525e+000  7.3781e-001
F2       9.2710e-001  8.9534e-001  1.3242e+000  1.5144e+000  9.0272e-001  2.0489e+000  1.8006e+000
F3       9.7684e-001  8.5179e-001  1.2104e+000  1.4862e+000  8.2955e-001  2.1204e+000  1.6234e+000
F4       9.0488e-001  8.9113e-001  1.2824e+000  1.4837e+000  9.2239e-001  2.0665e+000  1.8474e+000
F5       6.1459e+000  6.2481e+000  9.7186e+000  6.8148e+000  6.4899e+000  7.5186e+000  1.2462e+001
F6       4.2813e-001  3.8191e-001  5.1325e-001  1.0887e+000  4.1018e-001  1.5879e+000  7.3773e-001
F7       7.9273e-001  7.8957e-001  1.0056e+000  1.4704e+000  8.3622e-001  2.1757e+000  1.3807e+000
F8       7.6466e-001  6.7709e-001  9.3757e-001  1.3605e+000  6.9969e-001  1.8446e+000  1.4540e+000
F9       4.9018e-001  4.4371e-001  6.6472e-001  1.0234e+000  4.2399e-001  1.5986e+000  8.3403e-001
F10      7.2609e-001  6.7836e-001  8.9170e-001  1.3716e+000  7.0403e-001  1.8925e+000  1.2678e+000
F11      1.7538e+001  1.7269e+001  2.8089e+001  1.8596e+001  2.0042e+001  2.0627e+001  3.8421e+001
F12      5.6144e+000  5.3645e+000  8.8738e+000  6.2117e+000  5.5355e+000  6.4126e+000  1.0501e+001
F13      6.8574e-001  6.4955e-001  9.4398e-001  1.2546e+000  6.6990e-001  1.8068e+000  1.3056e+000
F14      8.6971e-001  7.8390e-001  1.1186e+000  1.3826e+000  7.9339e-001  1.9193e+000  1.4906e+000
F15      1.0984e+002  1.0650e+002  1.7458e+002  1.0716e+002  1.1281e+002  1.0856e+002  2.1088e+002
F16      1.0889e+002  1.1213e+002  1.9921e+002  1.1089e+002  1.1654e+002  1.1256e+002  2.1592e+002
F17      1.1151e+002  1.0988e+002  1.6473e+002  1.0802e+002  1.1321e+002  1.0862e+002  2.1171e+002
F18      1.1822e+002  1.1783e+002  1.8685e+002  1.2275e+002  1.2792e+002  1.1786e+002  2.3292e+002
F19      1.1707e+002  1.1854e+002  1.8525e+002  1.1833e+002  1.2297e+002  1.1926e+002  2.3753e+002
F20      1.1655e+002  1.1403e+002  1.6602e+002  1.2183e+002  1.1908e+002  1.1931e+002  2.3629e+002
F21      1.1196e+002  1.1493e+002  1.9487e+002  1.1874e+002  1.2229e+002  1.1627e+002  2.2921e+002
F22      1.1986e+002  1.2161e+002  1.8793e+002  1.2016e+002  1.2469e+002  1.1928e+002  2.3679e+002
F23      1.1082e+002  1.0848e+002  1.6398e+002  1.0851e+002  1.1666e+002  1.1639e+002  2.2202e+002
F24      7.0796e+001  7.2237e+001  1.2928e+002  7.2521e+001  7.5052e+001  7.3618e+001  1.4164e+002
F25      7.2133e+001  7.0780e+001  1.1182e+002  7.3562e+001  7.7286e+001  7.3592e+001  1.4255e+002
Table 9 Average execution time (AET) for function Fi (i = 1, ..., 25) with N = 50

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
F1       9.8179e-001  9.5454e-001  1.1712e+000  2.4916e+000  9.4976e-001  3.6537e+000  1.8070e+000
F2       3.3381e+000  3.2850e+000  4.8831e+000  4.8262e+000  3.3715e+000  5.8948e+000  6.5085e+000
F3       2.4798e+000  2.4015e+000  3.5586e+000  4.1816e+000  2.5149e+000  5.4474e+000  4.8237e+000
F4       3.5114e+000  3.4340e+000  5.0641e+000  4.9586e+000  3.6837e+000  6.1588e+000  6.7691e+000
F5       2.5245e+001  2.3601e+001  3.9554e+001  2.6182e+001  2.6196e+001  2.7784e+001  5.0915e+001
F6       1.0891e+000  1.0809e+000  1.3884e+000  2.8430e+000  1.0984e+000  3.9690e+000  2.0805e+000
F7       2.1349e+000  1.9087e+000  2.7463e+000  3.9425e+000  1.9657e+000  5.1336e+000  3.7234e+000
F8       1.8870e+000  1.8950e+000  2.6062e+000  3.4594e+000  1.9022e+000  4.6032e+000  3.6914e+000
F9       1.2667e+000  1.2539e+000  1.5735e+000  2.8058e+000  1.2340e+000  4.0268e+000  2.2239e+000
F10      1.8823e+000  1.8603e+000  2.5559e+000  3.3797e+000  1.8289e+000  4.8100e+000  3.5120e+000
F11      5.9652e+001  5.9534e+001  9.4421e+001  6.0376e+001  6.1376e+001  6.1676e+001  1.1706e+002
F12      3.0485e+001  2.9750e+001  4.8698e+001  3.2372e+001  3.1714e+001  3.5742e+001  5.8931e+001
F13      2.3526e+000  2.1268e+000  3.1180e+000  4.0900e+000  2.1511e+000  5.6422e+000  4.3374e+000
F14      2.6185e+000  2.5405e+000  3.6813e+000  4.4376e+000  2.5674e+000  5.9844e+000  5.0825e+000
F15      3.3588e+002  3.4530e+002  6.1631e+002  3.9554e+002  4.0538e+002  3.9816e+002  7.7997e+002
F16      4.9999e+002  5.0976e+002  8.1855e+002  4.9055e+002  5.0069e+002  5.3772e+002  9.9185e+002
F17      4.0146e+002  4.0287e+002  6.1669e+002  3.7535e+002  4.0789e+002  3.9079e+002  7.4716e+002
F18      4.2265e+002  4.2529e+002  6.7920e+002  4.1144e+002  4.0575e+002  3.9103e+002  7.9693e+002
F19      4.2684e+002  4.2375e+002  6.5646e+002  3.9890e+002  4.2283e+002  4.1848e+002  8.3927e+002
F20      4.3532e+002  4.4533e+002  6.8047e+002  4.2841e+002  4.4610e+002  4.3011e+002  8.4007e+002
F21      4.1228e+002  4.3701e+002  6.3944e+002  3.8200e+002  4.0242e+002  3.9899e+002  7.9789e+002
F22      4.4802e+002  4.3301e+002  6.5674e+002  4.0764e+002  4.5313e+002  4.2728e+002  8.7036e+002
F23      3.8954e+002  4.0535e+002  7.1956e+002  4.0740e+002  4.3188e+002  4.0627e+002  8.0077e+002
F24      2.6580e+002  2.6873e+002  4.1750e+002  2.7079e+002  2.7061e+002  2.4547e+002  5.1464e+002
F25      2.5943e+002  2.6961e+002  4.4663e+002  2.7598e+002  2.6219e+002  2.5990e+002  5.0303e+002
Table 10 Average execution time (AET) for eight functions with N = 100

Fun/Alg  DE           SADE         ODE          JADE         NDE          MDE_pBX      MDE
F1       4.0091e+000  3.9470e+000  4.9893e+000  1.2686e+001  4.0752e+000  1.5770e+001  7.6784e+000
F2       2.7116e+001  2.6797e+001  4.0662e+001  3.5004e+001  3.0391e+001  3.9720e+001  5.5761e+001
F4       2.6531e+001  2.7708e+001  4.2253e+001  3.4784e+001  2.9177e+001  4.0073e+001  5.3537e+001
F5       1.3923e+002  1.3804e+002  2.2345e+002  1.5433e+002  1.4922e+002  1.5783e+002  2.8272e+002
F6       5.0036e+000  4.5220e+000  5.9552e+000  1.3992e+001  4.8917e+000  1.6498e+001  8.9868e+000
F9       5.5861e+000  5.5102e+000  7.4842e+000  1.3715e+001  5.4746e+000  1.7131e+001  1.0259e+001
F12      3.4210e+002  3.4287e+002  5.5852e+002  3.5984e+002  3.6509e+002  3.7640e+002  7.1100e+002
F13      8.4115e+000  8.1251e+000  1.1902e+001  1.7586e+001  9.0180e+000  2.1422e+001  1.6676e+001
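The average execution times in Tables 8–10 are wall-clock averages over independent runs of each algorithm on each function. A minimal sketch of how such a figure could be measured (the `solver` callable and run count below are hypothetical stand-ins, not the paper's experimental setup):

```python
import time

def average_execution_time(solver, runs=25):
    """Average wall-clock time per run of `solver` over a number of
    independent runs, the quantity reported as AET in Tables 8-10."""
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        solver()  # one full optimization run
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

# Hypothetical stand-in for one MDE run on a benchmark function:
aet = average_execution_time(lambda: sum(i * i for i in range(10000)), runs=5)
print(f"AET: {aet:.4e}")
```

`time.perf_counter()` is preferred over `time.time()` for this purpose because it is a monotonic, high-resolution clock; note that wall-clock AET still depends on the machine and load, so cross-paper comparisons of such tables are only indicative.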