Expert Systems with Applications 41 (2014) 877–885
An adaptive single-point algorithm for global numerical optimization

Francisco Viveros-Jiménez a,*, José A. León-Borges b, Nareli Cruz-Cortés a
a Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz s/n, Col. Nueva Industrial Vallejo, C.P. 07738, México D.F., Mexico
b Universidad Politécnica de Quintana Roo, Av. Tulum, Manzana 1, Lote 40, Planta Alta, SM 2, 77500 Cancún, Quintana Roo, Mexico

* Corresponding author. E-mail addresses: [email protected] (F. Viveros-Jiménez), jleon@upqroo.edu.mx (J.A. León-Borges), [email protected] (N. Cruz-Cortés).
Keywords: Unconstrained problems; Numerical optimization; Hill-climbing; Adaptive behavior
Abstract

This paper describes a novel algorithm for numerical optimization, called Simple Adaptive Climbing (SAC). SAC is a simple, efficient single-point approach that does not require careful fine-tuning of its two parameters. SAC shares many similarities with local optimization heuristics such as random walk, gradient descent, and hill-climbing. SAC has a restarting mechanism and a powerful adaptive mutation process that resembles the one used in Differential Evolution. SAC is capable of performing global unconstrained optimization efficiently on high-dimensional test functions. This paper shows results on 15 well-known unconstrained problems. Test results confirm that SAC is competitive against state-of-the-art approaches such as micro-Particle Swarm Optimization, CMA-ES, or Simple Adaptive Differential Evolution.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction

In the most general case, global numerical optimization is the task of finding the point x* with the smallest (minimization case) or largest (maximization case) function value f(x*). There exist special cases where the search space is highly constrained, meaning that the solution found by the algorithm must be optimal and must also satisfy the problem's constraints (Cruz-Cortés, Trejo-Pérez, & Coello Coello, 2005; Coello Coello & Cruz-Cortés, 2002, 2004). Another kind of numerical optimization problem is the so-called multi-objective optimization, with more than one objective function to be optimized at the same time (Coello Coello & Cruz-Cortés, 2005). In this work we are interested in designing an algorithm for non-linear unconstrained single-objective optimization problems.

Evolutionary Algorithms have been widely used to find optimal solutions to non-linear optimization problems. Evolutionary Algorithms are population-based, i.e., they handle a set of possible solutions; furthermore, they are probabilistic methods. On the other hand, single-point optimizers such as hill-climbing and random walks handle one point at a time. Random as well as deterministic versions of hill-climbing algorithms can be found in the specialized literature.

Most of the recent efficient optimizers for unconstrained optimization, such as the restart Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Auger & Hansen, 2005), can be considered complex approaches because they use Hessian and
covariance matrices. Those approaches are very effective and greatly outperform most of the simpler heuristic approaches, as shown in events such as the Congress on Evolutionary Computation (Hansen, 2006) and the Black-Box Optimization Benchmarking workshop (Hansen, Auger, Finck, & Ros, 2010). However, we consider that a simpler approach capable of giving quality results is sufficient most of the time.

This paper presents a novel approach based on the idea of having one single point but with a powerful and adaptive mutation strategy: Simple Adaptive Climbing (SAC). It is a single-point approach similar to hill-climbing or random walk algorithms. Hill-climbing-like algorithms are known for having difficulties solving global optimization problems with multiple local optima (as explained further in Section 2). Our algorithm can solve these problems due to the following features:

- It can move in all possible search directions.
- Its search radius is adjusted adaptively, i.e., it is adjusted depending on the current situation.
- It has a restarting mechanism that allows the algorithm to resume the search when it gets stuck in a solution.

We tested our approach on 15 well-known unconstrained problems. Test results show that the algorithm works as well as more complex state-of-the-art approaches, such as micro-Particle Swarm Optimization, CMA-ES, or Simple Adaptive Differential Evolution.

The contents of this paper are organized as follows. First, we give a brief introduction to hill-climbing algorithms in Section 2. Section 3 contains the SAC description. Section 4 contains the experimental design. Section 5 presents a comparison of SAC against four state-of-the-art approaches: Elitist Evolution, micro-Particle Swarm
Optimization, Simple Adaptive Differential Evolution, and Restart CMA-ES. Finally, Section 6 concludes this paper.

2. Brief review of hill-climbing-like algorithms

Imagine that you are a mountain climber trying to reach the peak in a very thick fog. Imagine that you have forgotten some important items, such as a compass and the map, but that you still have plenty of food, the perfect climbing suit and equipment, and a machine that tells you your current altitude. How will you find the mountain peak? Hill-climbing-like algorithms are heuristics for getting you to the mountain peak, that is: keep going to the highest point surrounding you.

Hill-climbing algorithms are single-point optimizers with an adjustable search radius. Fig. 1 depicts a generalization of hill-climbing algorithms. As we can observe in Fig. 1, these algorithms
Fig. 1. Generic algorithm for hill climbers (minimization case).
(a) Foothill problem: the searching process is stuck in a local optimum
(b) Plateau problem: the searching process is stuck in a flat surface
(c) Ridge problem: the searching area (dark gray) does not allow improving.

Fig. 2. Most common problems found in hill-climbing algorithms (maximization case).
Fig. 3. Algorithm for SAC (minimization case).
are greedy strategies that only try to move to a higher adjacent point. The main difference between hill-climbers lies in the specific implementation of the following functions (a minimal sketch of this template is given at the end of this section):

- mutate(Xbest, steps, parameters): creates a new exploration point.
- adjustStepsWhenSuccess(steps, parameters): adjusts the steps/radius after finding a better location than the current one.
- adjustStepsWhenFailure(steps, parameters): adjusts the steps/radius after finding a worse location than the current one.

Hill-climbing algorithms are known for being fast at getting to the top of a hill (a local optimum). However, most of these strategies fail when the mountain has multiple hills (as many real mountains do). The three main reasons behind this behavior are the following (Winston, 1992):

1. The foothill problem: the optimizer gets stuck in a local optimum (frequently, the first hill it climbs).
2. The plateau problem: the optimizer gets stuck on flat surfaces with some sharp peaks.
3. The ridge problem: the optimizer gets stuck because the direction of ascent is not within the set of possible search directions.

These problems are further depicted in Fig. 2. On the other hand, population-based algorithms such as Differential Evolution, Genetic Algorithms, or Particle Swarm Optimization are able to find the global optimum neighborhood very fast. However, they are relatively slow when performing local movements. For this reason, hill-climbing algorithms are commonly used in combination with population-based algorithms to create efficient memetic algorithms (Renders & Bersini, 1994; Lozano, Herrera, Krasnogor, & Molina, 2004).
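To make the template of Fig. 1 concrete, the following minimal C sketch implements the generic loop for minimization. The three customization points keep the names used above; their bodies (uniform mutation, doubling/halving of the radius) and the stand-in objective are illustrative assumptions, not a transcription of any particular hill-climber.

```c
#include <stdio.h>
#include <stdlib.h>

#define D 2                        /* problem dimensionality */

/* Stand-in objective: the sphere model (any black-box function works). */
static double f(const double *x) {
    double s = 0.0;
    for (int j = 0; j < D; j++) s += x[j] * x[j];
    return s;
}

/* mutate: create a new exploration point within step[j] of the best one. */
static void mutate(const double *xbest, const double *step, double *xnew) {
    for (int j = 0; j < D; j++) {
        double r = 2.0 * rand() / RAND_MAX - 1.0;   /* uniform in [-1, 1] */
        xnew[j] = xbest[j] + r * step[j];
    }
}

/* adjustStepsWhenSuccess: widen the search radius after an improvement. */
static void adjustStepsWhenSuccess(double *step) {
    for (int j = 0; j < D; j++) step[j] *= 2.0;
}

/* adjustStepsWhenFailure: shrink the search radius after a failure. */
static void adjustStepsWhenFailure(double *step) {
    for (int j = 0; j < D; j++) step[j] *= 0.5;
}

int main(void) {
    double xbest[D] = { 80.0, -35.0 };   /* arbitrary starting point */
    double step[D]  = { 10.0, 10.0 };
    double xnew[D], fbest = f(xbest);

    for (int e = 1; e < 1000; e++) {
        mutate(xbest, step, xnew);
        double fnew = f(xnew);
        if (fnew < fbest) {              /* greedy: accept only improvements */
            for (int j = 0; j < D; j++) xbest[j] = xnew[j];
            fbest = fnew;
            adjustStepsWhenSuccess(step);
        } else {
            adjustStepsWhenFailure(step);
        }
    }
    printf("best f = %g\n", fbest);
    return 0;
}
```

Run long enough, the shrinking steps make this loop converge to the nearest local optimum, which is exactly the failure mode analyzed above.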
3. Simple adaptive climbing

SAC is a simple single-point optimizer. Its implementation is smaller than that of most evolutionary algorithms, while maintaining a competitive performance. Fig. 3 shows the algorithm for SAC (considering minimization). The general ideas behind the algorithm are the following: (1) the search radius is increased when improvements are found, (2) the search radius is reduced when no improvements are made, and (3) the search radius and direction are restarted when there are no improvements after a user-defined time. Fig. 4 shows an example of SAC's main behaviors. SAC requires the configuration of two user parameters:

1. Base step size (B ∈ [0.0, 0.5]). B is the initial and also the maximum possible search radius. It represents a proportion of the whole search space, so a value of B = 0.5 is equivalent to half the search space, in which case the algorithm can reach any point in the whole space. A value greater than 0.5 would encourage exploration for a longer time and could turn SAC into a random search.
2. Maximum consecutive errors (R > 0): the number of consecutive errors needed to restart the searching process. Very small values of R could prevent convergence when a precise search is required, while large values of R could cause SAC to get stuck in a local optimum for a long time.

SAC searches by performing explorations in random dimension subsets. This feature allows SAC to overcome the aforementioned ridge problem. SAC uses adaptive step sizes for each dimension of
(a) SAC can explore in any dimension subset with a maximum radius of bj
(b) Restart occurs after R consecutive failures. However, it keeps memory of the best solution found. Also, please note that search space is considered connected at the extremes
(c) When improving SAC increases the search space
(d) When failing (white dot) SAC decreases the search space.

Fig. 4. SAC main behaviors (minimization case).
the problem (bj, j = 1, ..., D). These b step sizes are the key to SAC's exploration process. SAC adjusts them according to the current success/failure state of the search: b values become greater when the current solution improves, encouraging exploration of the search space (see line 15 in Fig. 3); b values become smaller when failing, encouraging exploration of nearby areas (see line 22 in Fig. 3). In SAC, the search space is considered connected at the beginning and end of every problem dimension. Hence, if SAC tries to explore beyond the upper limit, it will explore a valid region near the lower limit, and vice versa. SAC keeps track of the consecutive unsuccessful explorations (the restart counter in Fig. 3) to avoid premature convergence (the foothill problem) and optimizing on almost flat surfaces (the plateau problem). When the counter reaches the user-defined limit R, the step sizes and the current position are restarted, as seen in line 24 of Fig. 3. A compact sketch of the whole SAC loop is given below.

Fig. 5 shows how SAC works on two 2-D functions. These pictures contain a line depicting SAC's movement from the beginning (white point) to the end (black point). The X mark indicates that SAC got stuck at that point for more than 30 iterations. From those figures we can observe that:

- SAC is a greedy algorithm, i.e., it tries to improve all the time.
- SAC jumps from one hill to another most of the time.
- The restarting mechanism successfully prevents SAC from getting stuck in a local optimum.

The proposed algorithm can be considered a variation of Elitist Evolution (EEv) (Viveros-Jiménez, Mezura-Montes, & Gelbukh, 2009), a micro-population algorithm shown in Fig. 6.
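The following compact C sketch puts these pieces together for minimization. Only the overall structure (random dimension subsets, per-dimension radii bj capped at B, wrap-around at the bounds, restart after R consecutive failures) follows the description above; the concrete growth/shrink factors and the subset sampling probability are illustrative assumptions, since the exact values live in Fig. 3.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define D   30
#define B   0.5        /* base step size, as a fraction of the search range */
#define R   150        /* consecutive failures tolerated before a restart */

static const double low = -100.0, up = 100.0;   /* bounds, all dimensions */

static double frand(void) { return (double)rand() / RAND_MAX; }

/* The space is treated as connected at the extremes: leaving one bound
   re-enters near the opposite one. */
static double wrap(double x) {
    const double range = up - low;
    while (x > up)  x -= range;
    while (x < low) x += range;
    return x;
}

static double f(const double *x) {               /* stand-in objective */
    double s = 0.0;
    for (int j = 0; j < D; j++) s += x[j] * x[j];
    return s;
}

int main(void) {
    const double range = up - low;
    double x[D], b[D], xnew[D], xbest[D], fx, fbest;
    int fails = 0;

    for (int j = 0; j < D; j++) {
        x[j] = xbest[j] = low + frand() * range;
        b[j] = B * range;
    }
    fbest = fx = f(x);

    for (long e = 1; e < 300000; e++) {
        /* Explore a random dimension subset, radius b[j] per dimension. */
        for (int j = 0; j < D; j++)
            xnew[j] = (frand() < 0.5)
                    ? wrap(x[j] + (2.0 * frand() - 1.0) * b[j])
                    : x[j];
        double fnew = f(xnew);
        if (fnew < fx) {                         /* success: widen the radii */
            for (int j = 0; j < D; j++) {
                x[j] = xnew[j];
                b[j] = fmin(2.0 * b[j], B * range);
            }
            fx = fnew; fails = 0;
            if (fx < fbest) {                    /* keep memory of the best */
                fbest = fx;
                for (int j = 0; j < D; j++) xbest[j] = x[j];
            }
        } else {                                 /* failure: shrink the radii */
            for (int j = 0; j < D; j++) b[j] *= 0.5;
            if (++fails >= R) {                  /* restart position and radii */
                for (int j = 0; j < D; j++) {
                    x[j] = low + frand() * range;
                    b[j] = B * range;
                }
                fx = f(x); fails = 0;
            }
        }
    }
    printf("best f = %g\n", fbest);
    return 0;
}
```

With B = 0.5 the initial radius spans half the range in each direction, so early moves can reach any point of the (wrapped) space, matching the exploratory behavior described above for large B.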
Fig. 5. Sample runs in 2-D on fack (top) and fras (bottom) (see Table 1). It took SAC around 90 and 300 function evaluations, respectively, to find the answer. The big white dot is the beginning, the black dot is the end, the cross indicates a restart point, and the small white dots are intermediate search positions.

4. Experimental setup

In order to assess how competitive the proposed algorithm is, we performed a comparison against some state-of-the-art approaches. We measured the Error and Evaluation values for each trial in a way similar to the one proposed in the test suite for the CEC 2005 special session on real-parameter optimization (Suganthan et al., 2005):

- Error = F(xo) - F(x*), where xo is the best solution reported by the corresponding algorithm and x* is the global optimum.
- Evaluation is the number of function evaluations (FEs) required to reach an Error value of 1E-8.

These results are displayed in Tables B.6 and B.7 in Appendix B. Additionally, we show a summary per function type (unimodal and multimodal) for each algorithm; this makes it easier to observe the average performance. We selected the following average measures:

- Success rate: the number of trials in which an algorithm found the global minimum. We performed 30 trials per test function.
- Coverage: the percentage of functions for which an algorithm found the global minimum in at least one trial.
- Time: the average number of function evaluations an algorithm needed to find the global minimum in the successful trials.
- Error: the average distance between an algorithm's answer and the global minimum in the failed trials.

We selected 15 well-known unconstrained functions (Viveros-Jiménez, Mezura-Montes, & Gelbukh, 2012). The benchmark functions are listed in Table 1 and further detailed in Appendix A. We conducted 30 trials per test function. The stop criterion for each run was 3E+5 FEs.

We show a comparison between SAC and four approaches: two micro-EAs and two state-of-the-art approaches. Micro-population algorithms were selected because they are considered to lie between hill-climbers and regular population algorithms. Also, they are competitive against their standard counterparts (Krishnakumar, 1990; Coello Coello & Toscano-Pulido, 2001) and are used to create memetic algorithms as well (Kazarlis, Papadakis, Theocharis, & Petridis, 2001; Parsopoulos, 2009). The selected approaches were:

- EEv (Viveros-Jiménez et al., 2009): to the best of the authors' knowledge the best micro-population algorithm so far, and SAC's precursor.
- µ-PSO (Fuentes-Cabrera & Coello-Coello, 2007): a micro-population approach competitive with PSO.
- Simple Adaptive Differential Evolution (SaDE) (Qin, Huang, & Suganthan, 2009): selected because it is a Differential Evolution (Storn & Price, 1997) variant that is simple and competitive with other state-of-the-art techniques.
- Restart CMA-ES (Auger & Hansen, 2005): selected for measuring the gap against a technique that uses Hessian and covariance matrices. It was also the best technique in the CEC 2005 special session on real-parameter optimization.

All the experiments were performed on a Pentium 4 PC with 512 MB of RAM, in the C language, in a Linux environment. The parameter settings for the techniques were:

1. SAC: R = 150, B = 0.5 (for further detail on the effect of these parameters see Section 5.2).
2. µ-PSO: P = 6, C1 = C2 = 1.8, Neighborhoods = 2, Replacement generation = 100, Replacement particles = 2, Mutation % = 0.1, based on Fuentes-Cabrera and Coello-Coello (2007).
3. EEv: P = 5, B = 0.5, set as in Viveros-Jiménez et al. (2009).
4. SaDE: set as in Qin et al. (2009).
5. Restart CMA-ES: set as in Auger and Hansen (2005).

5. Test results and analysis

5.1. Performance evaluation

We performed a comparison of SAC against EEv, µ-PSO, SaDE, and Restart CMA-ES. Table 2 shows a summary of the test results, and Table 3 gives statistical significance to them (see Tables B.6 and B.7 in Appendix B for the detailed results).
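As a concrete reading of the measures defined in Section 4, the per-function summary entries reported in Table 2 can be computed from trial records along the lines of the following C sketch. The Trial struct and the sample records are assumptions for illustration; Coverage is then the fraction of functions whose success count is nonzero.

```c
#include <stdio.h>

#define TRIALS 30
#define EPS 1e-8                    /* error threshold that counts as success */

typedef struct { double error; long evals; } Trial;

/* Per-function summary: success rate, Time (average FEs over successful
   trials), and Error (average error over failed trials). */
static void summarize(const Trial t[], int n) {
    int ok = 0; double sumErr = 0.0; long sumFEs = 0;
    for (int i = 0; i < n; i++) {
        if (t[i].error <= EPS) { ok++; sumFEs += t[i].evals; }
        else                   { sumErr += t[i].error; }
    }
    printf("success rate: %.0f%%\n", 100.0 * ok / n);
    if (ok > 0) printf("time:  %.2E FEs\n", (double)sumFEs / ok);
    if (ok < n) printf("error: %.2E\n", sumErr / (n - ok));
}

int main(void) {
    /* Two made-up records; a real run would fill all TRIALS entries. */
    Trial t[TRIALS] = { { 0.0, 22000 }, { 3.2e-4, 0 } };
    summarize(t, 2);
    return 0;
}
```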
Fig. 6. Algorithm for Elitist Evolution. D is the problem dimensionality. MaxFEs is the function evaluation limit. rnd(L, U) and rndreal(L, U) return a random integer or real value, respectively, within [L, U]. upj and lowj are the upper and lower bounds for dimension j. flip(P) is a coin toss that succeeds with probability P.
Table 1
Test functions (Mezura-Montes et al., 2006; Noman & Iba, 2008).

Unimodal functions
  Separable: fsph (Sphere model), f2.22 (Schwefel's Problem 2.22), f2.21 (Schwefel's Problem 2.21), fstp (Step function), fqtc (Quartic function)
  Non-separable: f1.2 (Schwefel's Problem 1.2)

Multimodal functions
  Separable: fsch (Generalized Schwefel's Problem 2.26), fras (Generalized Rastrigin's function)
  Non-separable: fros (Generalized Rosenbrock's function), fack (Ackley's function), fgrw (Generalized Griewank's function), fsal (Salomon's function), fwhi (Whitley's function), fpen1, fpen2 (Generalized penalized functions)
The tests allow us to confirm the following:

- SAC is a hill-climbing-like algorithm that performs global optimization.
- SAC was the algorithm with the best average speed in the successful trials and the smallest average error in the failed trials.
Table 2
Summary of test results for functions with D = 30. Average measures were calculated per function type: unimodal (7 functions), multimodal (8 functions), and global (15 functions). Best results are marked in boldface. SAC's global results have their ranking as a prefix.

                SAC         µ-PSO    EEv      CMA-ES   SaDE
Success rate
  Unimodal      83%         67%      67%      100%     83%
  Multimodal    64%         37%      51%      55%      69%
  Global        (3rd) 72%   49%      58%      73%      74%
Coverage
  Unimodal      83%         67%      67%      100%     83%
  Multimodal    78%         44%      67%      67%      89%
  Global        (2nd) 80%   53%      67%      80%      87%
Time
  Unimodal      4.6E+4      1.0E+5   4.3E+4   1.0E+4   4.9E+4
  Multimodal    5.6E+4      1.1E+5   1.2E+5   1.5E+5   7.8E+4
  Global        5.5E+4      1.1E+5   9.2E+4   7.8E+4   6.7E+4
Error
  Unimodal      3.2E-4      1.6E-2   7.9E-3   0.0      4.5
  Multimodal    4.1         2.4E+2   3.1E+2   2.5E+3   1.8E+1
  Global        3.4         1.8E+2   2.2E+2   2.5E+3   1.6E+1
- SAC has a competitive performance in global optimization problems of high dimensionality. It is competitive against state-of-the-art approaches like SaDE and CMA-ES.
- SAC outperforms EEv and µ-PSO on most of the functions. The difference between its performance and theirs is statistically significant on most of the functions.
- SAC works well without requiring fine-tuning of its two parameters.
- SAC returned an answer in 80% of the functions. Only SaDE had better coverage, with 87%.
- SAC has a good success rate of 72%, slightly below SaDE (74%) and CMA-ES (73%).
Table 3
Results of the Mann–Whitney U paired test between SAC and the other techniques. A '+' means that SAC's improvement is statistically significant, 'X' means that the other technique outperforms SAC, and '–' means that the difference is not statistically significant.

         µ-PSO   EEv   CMA-ES   SaDE
fsph     +       +     X        –
f2.22    +       +     X        X
f2.21    +       +     X        +
fstp     +       –     X        X
fqtc     +       +     X        +
fsch     +       +     +        +
fras     +       +     +        +
f1.2     –       –     X        X
fros     –       –     X        X
fack     +       +     X        –
fgrw     –       –     X        X
fpen1    +       +     X        +
fpen2    +       +     +        –
fsal     –       –     –        –
fwhi     +       +     +        +
5.2. Sensitivity to parameter adjustments

We performed tests with different B and R values to observe the effect of the parameter configuration on SAC's performance. Results are shown in Tables 4 and 5. The tests allow us to observe the following (see the list after Table 5):

Table 4
Changes in the success rates when using different B values. Only the functions with differing success rates are shown.

        B = 0.1   B = 0.2   B = 0.3   B = 0.4   B = 0.5
fsch    –         1%        76%       86%       96%
fras    –         63%       90%       93%       100%
fgrw    20%       16%       13%       36%       10%
fwhi    10%       23%       43%       26%       43%
Table 5
Changes in the success rates when using different R values. Only the functions with differing success rates are shown.

        R = 15   R = 50   R = 150   R = 300   R = 500
fsph    –        100%     100%      100%      100%
f2.22   –        –        100%      100%      100%
f2.21   –        –        100%      100%      100%
fstp    –        100%     100%      100%      100%
fqtc    –        100%     100%      100%      100%
fsch    –        60%      96%       96%       90%
fras    –        100%     100%      96%       93%
fros    –        3%       –         3%        3%
fack    –        –        100%      100%      100%
fgrw    –        30%      10%       30%       30%
fpen1   –        100%     100%      100%      100%
fpen2   –        100%     100%      100%      100%
fwhi    –        46%      43%       46%       100%

- SAC does not need a complex configuration of its parameters.
- B is SAC's reference search radius. Larger B values encourage global exploration, and smaller values encourage local exploitation. R controls SAC's searching time at the current location. Smaller values allow SAC to move more often, while larger values provide better exploitation of the current area.
- Smaller B values increase SAC's performance on unimodal problems and decrease it on multimodal ones; SAC then behaves more like a classic hill-climbing algorithm. Larger B values yield a better average performance.
- Smaller R values cause poor performance. Larger R values yield a better performance on multimodal non-separable problems.

6. Conclusions and future work

In this paper we presented a novel optimizer called SAC and tested it on 15 benchmark functions. SAC is a simple technique that uses a single exploration point and adaptive step sizes. Its main features are: (1) easy implementation, (2) competitiveness against approaches like µ-PSO, SaDE, and CMA-ES, (3) easy parameter configuration, (4) fast solution speed, and (5) a high success rate.

SAC requires the configuration of two parameters: B and R. B is SAC's reference search radius. Large B values encourage global exploration and small B values encourage local exploitation. R controls SAC's searching time at the current location. Small R values allow SAC to move often, while larger values cause a more extensive exploitation around the current minimum.

More comparative studies and further analysis should be carried out to provide a more detailed understanding of SAC. We also plan to test SAC on constrained and multi-objective optimization problems.

Acknowledgments

The first author acknowledges support from CONACYT through a PhD scholarship. The third author acknowledges support from CONACYT through project number 132073.

Appendix A. Test functions (Mezura-Montes, Velázquez-Reyes, & Coello-Coello, 2006; Noman & Iba, 2008)
fsph – Sphere Model: $f_{sph}(x) = \sum_{i=1}^{D} x_i^2$; $-100 \le x_i \le 100$; $\min(f^*) = f_{sph}(0,\ldots,0) = 0$.

f2.22 – Schwefel's Problem 2.22: $f_{2.22}(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; $-10 \le x_i \le 10$; $\min(f^*) = f_{2.22}(0,\ldots,0) = 0$.

f2.21 – Schwefel's Problem 2.21: $f_{2.21}(x) = \max_i\{|x_i|,\ 1 \le i \le D\}$; $-100 \le x_i \le 100$; $\min(f^*) = f_{2.21}(0,\ldots,0) = 0$.

fstp – Step Function: $f_{stp}(x) = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$; $-100 \le x_i \le 100$; $\min(f^*) = f_{stp}(0,\ldots,0) = 0$.

fqtc – Quartic Function: $f_{qtc}(x) = \sum_{i=1}^{D} i\,x_i^4$; $-1.28 \le x_i \le 1.28$; $\min(f^*) = f_{qtc}(0,\ldots,0) = 0$.

f1.2 – Schwefel's Problem 1.2: $f_{1.2}(x) = \sum_{i=1}^{D} \bigl( \sum_{j=1}^{i} x_j \bigr)^2$; $-100 \le x_i \le 100$; $\min(f^*) = f_{1.2}(0,\ldots,0) = 0$.

fsch – Generalized Schwefel's Problem 2.26: $f_{sch}(x) = \sum_{i=1}^{D} -x_i \sin\bigl(\sqrt{|x_i|}\bigr)$; $-500 \le x_i \le 500$; $\min(f^*) = f_{sch}(420.9687,\ldots,420.9687) \approx -418.9829 \cdot D$.

fras – Generalized Rastrigin's Function: $f_{ras}(x) = \sum_{i=1}^{D} \bigl( x_i^2 - 10\cos(2\pi x_i) + 10 \bigr)$; $-5.12 \le x_i \le 5.12$; $\min(f^*) = f_{ras}(0,\ldots,0) = 0$.

fros – Generalized Rosenbrock's Function: $f_{ros}(x) = \sum_{i=1}^{D-1} \bigl( 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \bigr)$; $-30 \le x_i \le 30$; $\min(f^*) = f_{ros}(1,\ldots,1) = 0$.

fack – Ackley's Function: $f_{ack}(x) = -20\exp\Bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\Bigr) - \exp\Bigl(\tfrac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\Bigr) + 20 + e$; $-32 \le x_i \le 32$; $\min(f^*) = f_{ack}(0,\ldots,0) = 0$.

fgrw – Generalized Griewank's Function: $f_{grw}(x) = \tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr) + 1$; $-600 \le x_i \le 600$; $\min(f^*) = f_{grw}(0,\ldots,0) = 0$.

fsal – Salomon's Function: $f_{sal}(x) = 1 - \cos\bigl(2\pi\sqrt{f_{sph}(x)}\bigr) + 0.1\sqrt{f_{sph}(x)}$; $-100 \le x_i \le 100$; $\min(f^*) = f_{sal}(0,\ldots,0) = 0$.

fwhi – Whitley's Function: $f_{whi}(x) = \sum_{i=1}^{D} \sum_{j=1}^{D} \Bigl( \tfrac{y_{ij}^2}{4000} - \cos(y_{ij}) + 1 \Bigr)$ with $y_{ij} = 100(x_j - x_i^2)^2 + (1 - x_j)^2$; $-100 \le x_i \le 100$; $\min(f^*) = f_{whi}(1,\ldots,1) = 0$.

fpen1, fpen2 – Generalized Penalized Functions:
$f_{pen1}(x) = \tfrac{\pi}{D}\Bigl\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \bigl(1 + 10\sin^2(\pi y_{i+1})\bigr) + (y_D - 1)^2 \Bigr\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$; $-50 \le x_i \le 50$; $\min(f^*) = f_{pen1}(-1,\ldots,-1) = 0$.
$f_{pen2}(x) = \tfrac{1}{10}\Bigl\{ \sin^2(3\pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 \bigl(1 + \sin^2(3\pi x_{i+1})\bigr) + (x_D - 1)^2 \bigl(1 + \sin^2(2\pi x_D)\bigr) \Bigr\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$; $-50 \le x_i \le 50$; $\min(f^*) = f_{pen2}(1,\ldots,1) = 0$.
where $y_i = 1 + \tfrac{1}{4}(x_i + 1)$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a; \\ 0, & -a \le x_i \le a; \\ k(-x_i - a)^m, & x_i < -a. \end{cases}$
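For readers implementing the benchmark, definitions like these translate directly to C. The following sketch covers two of the functions above (D = 30 assumed); it is a convenience illustration, not the authors' test harness.

```c
#include <math.h>   /* cos; M_PI is provided by POSIX math.h */

#define D 30

/* fsph: sphere model, minimum 0 at the origin. */
double f_sph(const double *x) {
    double s = 0.0;
    for (int i = 0; i < D; i++) s += x[i] * x[i];
    return s;
}

/* fras: generalized Rastrigin function, minimum 0 at the origin. */
double f_ras(const double *x) {
    double s = 0.0;
    for (int i = 0; i < D; i++)
        s += x[i] * x[i] - 10.0 * cos(2.0 * M_PI * x[i]) + 10.0;
    return s;
}
```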
Appendix B. Detailed test results

Table B.6
Mean error values obtained on functions with D = 30. A 0.0 value means that 1E-8 was reached in all runs (100% success rate). In values of the form X.XXE+X (Y), Y is the success rate (shown only when Y ∈ [1%, 99%]).

        SAC            µ-PSO          EEv            CMA-ES         SaDE
fsph    0.0            0.0            0.0            0.0            0.0
f2.22   0.0            0.0            0.0            0.0            0.0
f2.21   0.0            1.39E-2        9.08E-3        0.0            4.55E+0
fstp    0.0            0.0            0.0            0.0            0.0
fqtc    0.0            0.0            0.0            0.0            0.0
f1.2    3.21E-4        1.94E-2        6.17E-3        0.0            0.0
fsch    4.8E-7 (96%)   1.33E+3        1.48E+3        1.24E+4        3.94E+0 (96%)
fras    0.0            8.06E+0        0.0            3.34E+0 (10%)  7.95E-1 (63%)
fros    1.32E+1        1.64E+1        4.17E+1        0.0            3.98E-1 (63%)
fack    0.0            0.0            0.0            0.0            3.10E-2 (96%)
fgrw    2.39E-2 (33%)  2.38E-2 (30%)  2.61E-2 (23%)  0.0            2.78E-3 (83%)
fpen1   0.0            0.0            0.0            0.0            6.91E-3 (93%)
fpen2   0.0            0.0            0.0            1.46E-3 (86%)  0.0
fsal    6.63E-1        4.53E-1        6.33E-1        2.19E-1        2.03E-1
fwhi    6.98E+0 (43%)  1.01E+2        1.09E+1 (40%)  4.88E+2        5.93E+1 (20%)
Table B.7
Average FEs required in successful runs (D = 30).

        SAC            µ-PSO          EEv            CMA-ES         SaDE
fsph    2.20E+4        9.91E+4        3.86E+4        3.34E+3        2.47E+4
f2.22   4.74E+4        1.73E+5        7.90E+4        7.85E+3        3.61E+4
f2.21   1.29E+5        –              –              1.15E+4        –
fstp    3.10E+4        6.29E+4        3.08E+4        2.36E+2        1.40E+4
fqtc    1.25E+4        7.47E+4        2.41E+4        2.15E+3        2.41E+4
f1.2    –              –              –              3.60E+4        1.58E+5
fsch    7.77E+4 (96%)  –              –              –              9.48E+4 (96%)
fras    8.60E+4        –              9.56E+4        2.44E+5 (10%)  5.19E+4 (63%)
fros    –              –              –              2.08E+5        2.31E+5 (63%)
fack    5.05E+4        2.29E+5        9.16E+4        6.50E+3        3.84E+4 (96%)
fgrw    3.66E+4 (33%)  3.83E+4 (30%)  3.00E+5 (23%)  4.03E+3        5.60E+4 (83%)
fpen1   1.68E+4        9.38E+4        3.17E+4        5.23E+3        2.20E+4 (93%)
fpen2   1.93E+4        9.86E+4        3.51E+4        7.74E+3 (86%)  2.39E+4
fsal    –              –              –              –              –
fwhi    1.06E+5 (43%)  –              1.94E+5 (40%)  –              1.10E+5 (20%)
f2.22 – Schwefel’s Problem 2.22 P Q f2:22 ðxÞ ¼ Di¼1 jxi j þ Di¼1 jxi j 10 6 xi 6 10 min(f⁄) = f2.22(0, . . . , 0) = 0 fstp – Step Function P fstp ðxÞ ¼ Di¼1 ðbxi þ 0:5cÞ2 100 6 xi 6 100 min(f⁄) = fstp(0, . . . , 0) = 0 f1.2 – Schwefel’s Problem 1.2 2 P Pi f1:2 ðxÞ ¼ Di¼1 j¼1 xj 100 6 xi 6 100 min(f⁄) = f1.2(0, . . . , 0) = 0 fras – Rastrigin’s Function P f9 ðxÞ ¼ Di¼1 x2i 100cosð2pxi Þ þ 10 5.12 6 xi 6 5.12 min(f⁄) = fras(0, . . . , 0) = 0 fgrw – Griewank’s Function PD 2 QD x 1 piffi þ 1 fgrw ðxÞ ¼ 4000 i¼1 xi i¼1 cos i 600 6 xi 6 600 min(f⁄) = fgrw(0, . . . , 0) = 0 fros – Rosenbrock’s Function P 2 2 fros ðxÞ ¼ D1 þ ðxi 1Þ2 j i¼1 j100 xiþ1 xi 30 6 xi 6 30 min(f⁄) = fros(0, . . . , 0) = 0 fack – Ackley’s Function
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi P fack ðxÞ ¼ 20 20exp 0:2 D1 Di¼1 x2i P rightÞ exp D1 Di¼1 cosð2pxi Þ þ e 32 6 xi 6 32 min(f⁄) = fack(0, . . . , 0) = 0 fwhi – Whitley’s Function P P yðxi ;xj Þþð1xj Þ2 þ cosðyðxi ; xj Þ þ ð1 xj Þ2 Þ þ 1 fwhi ðxÞ ¼ Di¼1 Dj¼1 4000 2 yðxi ; xj Þ ¼ 100 x2i xj 100 6 xi 6 100 min(f⁄) = fwhi(1, . . . , 1) fpen1, fpen2 –Generalized Penalized Functions P 2 2 fpen1 ðxÞ ¼ Dp wðy1 Þ D1 i¼1 ðyi 1Þ ð1 þ wðyiþ1 ÞÞ þ ðyD 1Þ PD Þ þ i¼1 uðxi ; 10; 100; 4Þ w(yi) = 10sin2(pyi) 50 6 xi 6 50 min(f⁄) = fpen1(1, . . . , 1) = 0
PD1 2 2 2 PD ðxi 1Þ2 ð1þsin2 ð3pxiþ1 ÞÞ px1 Þ ð2pxD ÞÞ fpen2 ðxÞ ¼ i¼1 þ sin ð3 þ ðxD 1Þ ð1þsin þ i¼1 10 10 10 uðxi ; 5; 100; 4Þ
50 6 xi 6 50 min(f⁄) = fpen2(1, . . . , 1) = 0 8 m where: < kðxi aÞ ; uðxi ; a; k; mÞ ¼ 0; : kðxi aÞm 1 yi ¼ 1 þ 4 ðxi þ 1Þ.
xi > a; a 6 xi 6 a; xi < a:
References

Auger, A., & Hansen, N. (2005). A restart CMA evolution strategy with increasing population size. In The 2005 IEEE congress on evolutionary computation (Vol. 2, pp. 1769–1776).

Coello Coello, C. A., & Cruz-Cortés, N. (2002). A parallel implementation of an artificial immune system to handle constraints in genetic algorithms: Preliminary results. In Proceedings of the 2002 congress on evolutionary computation, CEC'02 (Vol. 1, pp. 819–824).

Coello Coello, C. A., & Cruz-Cortés, N. (2004). Hybridizing a genetic algorithm with an artificial immune system for global optimization. Engineering Optimization, 36(5), 607–634.

Coello Coello, C. A., & Cruz-Cortés, N. (2005). Solving multiobjective optimization problems using an artificial immune system. Genetic Programming and Evolvable Machines, 6(2), 163–190.

Coello Coello, C. A., & Toscano-Pulido, G. (2001). A micro-genetic algorithm for multiobjective optimization. Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, 1993, 126–140.

Cruz-Cortés, N., Trejo-Pérez, D., & Coello Coello, C. A. (2005). Handling constraints in global optimization using an artificial immune system. Artificial Immune Systems, Lecture Notes in Computer Science, 3627, 234–247.

Fuentes-Cabrera, J. C., & Coello-Coello, C. A. (2007). Handling constraints in particle swarm optimization using a small population size. MICAI 2007: Advances in Artificial Intelligence, Lecture Notes in Computer Science, 4827, 41–51.

Hansen, N. (2006). Compilation of results on the 2005 CEC benchmark function set (Technical report). Zurich: ETH, Institute of Computational Science, Computational Laboratory.

Hansen, N., Auger, A., Finck, S., & Ros, R. (2010). Comparison tables: BBOB 2009 function testbed in 20-D (Technical report). INRIA.

Kazarlis, S. A., Papadakis, S. E., Theocharis, J. B., & Petridis, V. (2001). Microgenetic algorithms as generalized hill-climbing operators for GA optimization. IEEE Transactions on Evolutionary Computation, 5(3), 204–217.

Krishnakumar, K. (1990). Micro-genetic algorithms for stationary and non-stationary function optimization. SPIE: Intelligent Control and Adaptive Systems, 1196, 289–296.

Lozano, M., Herrera, F., Krasnogor, N., & Molina, D. (2004). Real-coded memetic algorithms with crossover hill-climbing. Evolutionary Computation, 12(3), 273–302.

Mezura-Montes, E., Velázquez-Reyes, J., & Coello-Coello, C. A. (2006). A comparative study of differential evolution variants for global optimization. In GECCO '06: Proceedings of the 8th annual conference on genetic and evolutionary computation (pp. 485–492).

Noman, N., & Iba, H. (2008). Accelerating differential evolution using an adaptive local search. IEEE Transactions on Evolutionary Computation, 12(1), 107–125.

Parsopoulos, K. E. (2009). Cooperative micro-particle swarm optimization. In GEC '09: Proceedings of the first ACM/SIGEVO summit on genetic and evolutionary computation (pp. 467–474).

Qin, A. K., Huang, V. L., & Suganthan, P. N. (2009). Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation, 13(2), 398–417.

Renders, J. M., & Bersini, H. (1994). Hybridizing genetic algorithms with hill-climbing methods for global optimization: Two possible ways. In Proceedings of the first IEEE conference on evolutionary computation, IEEE world congress on computational intelligence (Vol. 1, pp. 312–317).

Storn, R., & Price, K. (1997). Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.

Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y. P., Auger, A., & Tiwari, S. (2005). Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization (KanGAL Report 2005005). Nanyang Technological University, Singapore, and Kanpur Genetic Algorithms Laboratory, IIT Kanpur, India.

Viveros-Jiménez, F., Mezura-Montes, E., & Gelbukh, A. (2009). Elitistic evolution: An efficient heuristic for global optimization. Adaptive and Natural Computing Algorithms, Lecture Notes in Computer Science, 5495, 171–182.

Viveros-Jiménez, F., Mezura-Montes, E., & Gelbukh, A. (2012). Empirical analysis of a micro-evolutionary algorithm for numerical optimization. International Journal of the Physical Sciences, 7(8), 1235–1258.

Winston, P. H. (1992). Artificial intelligence (3rd ed.). Boston, MA, USA: Addison-Wesley Longman Publishing Company.