GEM: A novel evolutionary optimization method with improved neighborhood search

Applied Mathematics and Computation 210 (2009) 376–386
A. Ahrari, M. Shariat-Panahi, A.A. Atai *
Department of Mechanical Engineering, University of Tehran, Tehran, Iran

Keywords: Stochastic optimization methods; Evolutionary algorithms; Multimodal functions; Agent's territory; Guided random search; Optimal search direction; Multiple global minima

Abstract: A new optimization technique called the Grenade Explosion Method (GEM) is introduced and its underlying ideas, including the concept of the Optimal Search Direction (OSD), are elaborated. The applicability and efficiency of the technique are demonstrated using standard benchmark functions. Comparison of the results with those of other, widely used evolutionary algorithms shows that the proposed algorithm outperforms its rivals both in success rate and in rate of convergence. The method is also shown to be capable of finding most, or even all, optima of functions having multiple global optima. Moreover, the performance of GEM is shown to be invariant under shifting and scaling of the search space and objective function. © 2009 Elsevier Inc. All rights reserved.

1. Introduction

Evolutionary algorithms (EAs) have gained popularity in recent years for their ability to find near-optimal solutions to problems that cannot be solved by analytical methods within reasonable computation times due to the multimodality or high dimensionality of their objective functions [1,2]. These algorithms are usually inspired by such natural phenomena as the biological evolution of living species (as in Genetic Algorithms) [3–5], the homogenization of material properties in annealed metals (as in Simulated Annealing) [1,6], the choreography of bird flocks (as in Particle Swarm Optimization) [7,8], the foraging behavior of ants (as in Ant Colony Optimization) [9,10], and the intelligent behavior of honey bee swarms (as in Bee Colony Optimization) [11,12]. In the majority of these algorithms, a population of agents [13] is randomly sent to explore the feasible space. A part (or all) of these agents are then relocated according to some predefined logic, in the hope that they collectively move towards the optimal point.

An objective function may have several global minima, i.e. several points at which the value of the objective function equals the global minimum value. Furthermore, it may have some local minima at which the value of the objective function is close to the global minimum value. Since the mathematical formulation of a real-world problem often involves several simplifications, finding all global minima, or even these local minima, provides decision makers with multiple options to choose from [14].

In this work, a new algorithm called the Grenade Explosion Method (GEM), which utilizes the concept of the optimal search direction (OSD), is introduced. Basic concepts of the method are explained and a pseudo-code for the algorithm is presented. The applicability and efficiency of the algorithm are demonstrated and its performance is compared with those of some popular methods whose results on standard benchmark functions are available. Furthermore, the ability of the proposed method to find most or even all global minima of functions having several global minima is demonstrated.

* Corresponding author. E-mail address: [email protected] (A.A. Atai).
0096-3003/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2009.01.009

2. Grenade Explosion Method (GEM)

GEM is inspired by the mechanism of a grenade explosion, in which objects within a neighborhood of radius Re are hit by shrapnel. The cumulative damage caused by the shrapnel hitting the objects is calculated. A high damage-per-shrapnel value in an area indicates that there are valuable objects in that area. To intensify the damage, the next grenade is thrown where the greatest damage occurred. Although objects near the grenade's current location are more likely to be hit, farther objects are also given a chance of being hit by choosing a high value for Re. This process results in finding the best place for throwing the grenades, even if the shrapnel cannot discover this place in the early iterations. The overall damage caused by a hit is considered the "fitness" of the solution.

A unique feature of the new algorithm, usually not considered by other local-search-augmented evolutionary algorithms such as the Artificial Bee Colony (ABC), is the concept of the agent's territory radius (Rt): an agent (here, a grenade) does not let other agents come closer than the distance Rt. When several agents are exploring the allowable space, a high value for this parameter ensures that the grenades are spread quite uniformly over the space, while a low value lets the grenades get closer and search a local region together; in all iterations Re > Rt.

Each grenade throws a number of shrapnel in random directions, and each shrapnel hits an object at a random distance from the grenade. The damage inflicted on each hit object is calculated and the grenade is moved to the place where the most damage occurred. Unlike an ordinary local search, the search radius is large and covers the whole search space, especially in the early iterations of the algorithm. Consequently, this can be considered a combined local–global search:

X′ = X + (r1)^p Re d_rnd,    (1)

where X is the grenade's current location, X′ is the location of the hit object, d_rnd is a random unit direction, Re is the radius of explosion, r1 ∈ [0, 1] is a uniformly distributed random number, and p is a constant. The value of the exponent p is determined in such a way that the hitting point falls inside the territory of the grenade with probability pTS (the probability of territory search). This value is 0.2 in the first iteration and may increase or decrease based on the distance at which the most damage has occurred. Given the value of pTS^(k), the exponent p is determined as follows:

p = max{ 1/n, log(Rt/Re) / log(pTS) }.    (2)
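To make Eqs. (1) and (2) concrete, here is a minimal sketch in Python (the function names and the NumPy-based sampling of the unit direction are ours, not the paper's):

```python
import numpy as np

def exponent_p(n, r_t, r_e, p_ts):
    # Eq. (2): p is chosen so that a shrapnel lands inside the grenade's
    # territory (radius r_t < r_e) with probability p_ts
    return max(1.0 / n, np.log(r_t / r_e) / np.log(p_ts))

def throw_shrapnel(x, r_e, p, rng):
    # Eq. (1): X' = X + r1^p * Re * d_rnd, with r1 ~ U[0, 1] and
    # d_rnd a uniformly distributed random unit direction
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    return x + (rng.random() ** p) * r_e * d

rng = np.random.default_rng(0)
n = 2
r_e, r_t = 2 * np.sqrt(n), 0.2      # values from step (3) of the algorithm
p = exponent_p(n, r_t, r_e, p_ts=0.2)
x_new = throw_shrapnel(np.zeros(n), r_e, p, rng)
```

By construction, a shrapnel generated this way lands within Rt of the grenade with probability pTS, since P(r1^p Re ≤ Rt) = (Rt/Re)^(1/p).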

Some algorithms perform a local search simply by modifying some components of the current solution [15,16]. In GEM, however, two other methods, which we label M1 and M2, are implemented. In method M1, a random direction d_rnd is generated by choosing a value from the standard normal distribution for each component of d_rnd and dividing the resulting vector by its norm; with this method, all directions have the same probability density of being chosen [17]. In method M2, a random point is chosen in the intersection of the search domain with a hypercube of side length 2Le along each coordinate direction, centered at the current solution; d_rnd is then the direction of the vector connecting the current solution to the newly generated point (Fig. 1). In GEM, methods M1 and M2 are used to generate a new point with probabilities pM1 and (1 − pM1), respectively, where pM1 is small in the early iterations and increases gradually up to unity by the final iteration.
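Method M2 can be sketched as follows (a minimal sketch of ours; the search domain is assumed to be a box given by lower/upper bound vectors):

```python
import numpy as np

def direction_m2(x, l_e, lower, upper, rng):
    # Method M2: pick a uniform random point in the intersection of the
    # hypercube [x - Le, x + Le] with the search domain, and return the
    # unit vector pointing from x towards that point
    lo = np.maximum(x - l_e, lower)
    hi = np.minimum(x + l_e, upper)
    target = rng.uniform(lo, hi)
    d = target - x
    return d / np.linalg.norm(d)

rng = np.random.default_rng(1)
x = np.array([0.9, -0.5])          # a point near the domain boundary
d = direction_m2(x, l_e=0.3, lower=-np.ones(2), upper=np.ones(2), rng=rng)
```

Near a boundary, the truncated hypercube automatically biases the generated direction back into the feasible region, which is the situation depicted in Fig. 1.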

Fig. 1. Generating a random direction using method M2. The hatched region is the intersection of the search domain with a hypercube of side length 2Le along each coordinate direction, centered at the current solution.


2.1. Optimal search direction

Evolutionary algorithms use some kind of random search. This randomness keeps the algorithm from being trapped in undesired local minima and lets it explore the whole search domain for promising high-fitness areas. GEM, like some other evolutionary methods, exploits a neighborhood search; consequently, the definition of the neighborhood function affects the performance of the method. To improve its neighborhood search mechanism, GEM exploits the concept of orthogonal arrays, which has recently been introduced for local search in simulated annealing [18]. Orthogonal arrays explore the neighborhood of the current solution by analyzing the main effect of the variables on the current solution. Although in GEM the explosion radius is large and covers the whole search domain, a quite similar strategy is used for locating regions where the global minimum may lie: in each iteration, 2n shrapnel are thrown in perpendicular directions to gather information on the approximate behavior of the objective function around the current location of the grenade. From the information obtained, the probable optimal direction to explore for a better solution is specified. The rest of the shrapnel are then thrown along biased random directions, i.e. directions biased towards the optimal search direction:

d = mOSD (r1) d_OSD + (1 − mOSD)(r2) d_rnd,   d_GRS = d / ‖d‖,   0 ≤ mOSD < 1,    (3)

where d_GRS is the final direction along which a shrapnel is thrown, d_OSD is the optimal search direction determined by analysis of the gathered data, d_rnd is a random direction, and r1, r2 ∈ [0, 1] are uniformly distributed random numbers. mOSD is the weight of the optimal search direction in the generation of the final direction. A new point along d_GRS can then be generated as follows:

X′ = X + (r1)^p Re d_GRS,    (4)

which is similar to Eq. (1) if d_OSD is replaced with d_rnd. When generating a point using Eq. (4), although the search direction is biased towards d_GRS, the use of random numbers makes the generation of any direction possible. Thus, this can be considered a "guided random search" (GRS) procedure. It is recommended to select a small value for mOSD in the early iterations of the algorithm and let it increase to a value close to unity in the final iterations. A large value of Rt accompanied by the guided random search helps the algorithm locate near-optimal regions quickly, especially when there is a pattern in the value of the objective function towards the global minimum (Fig. 2). On the other hand, once the valley of the global minimum has been located, the use of the optimal search direction helps the algorithm converge to the exact solution faster.

It is important to note that only some of the grenades are allowed to throw the 2n extra shrapnel needed to determine d_OSD. The number of these grenades equals the number of minima that are to be found: only the best grenade when a single global minimum is sought, and the j best grenades when the goal is to find the j best minima. The other grenades throw only Nq shrapnel according to Eq. (1). To improve the efficiency of GEM, especially when several minima are to be found, the number of shrapnel of a grenade is halved if it cannot improve its fitness by at least 10^−6/z in z successive iterations, where:
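Eqs. (3) and (4) can be sketched as follows (names are ours; d_osd is assumed to be a unit vector):

```python
import numpy as np

def guided_direction(d_osd, m_osd, rng):
    # Eq. (3): blend the optimal search direction with a random unit
    # direction and normalize the result
    d_rnd = rng.standard_normal(d_osd.shape)
    d_rnd /= np.linalg.norm(d_rnd)
    r1, r2 = rng.random(2)
    d = m_osd * r1 * d_osd + (1.0 - m_osd) * r2 * d_rnd
    return d / np.linalg.norm(d)

def throw_guided(x, r_e, p, d_grs, rng):
    # Eq. (4): same radial law as Eq. (1), but along the guided direction
    return x + (rng.random() ** p) * r_e * d_grs

rng = np.random.default_rng(2)
d_osd = np.array([1.0, 0.0])
d_grs = guided_direction(d_osd, m_osd=0.8, rng=rng)
x_new = throw_guided(np.zeros(2), r_e=2.0, p=1.5, d_grs=d_grs, rng=rng)
```

With a large mOSD the generated directions cluster around d_OSD, while any direction remains possible, which is exactly the guided-random-search behavior described above.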

z = (1.5 − Iter.No./Max.Iters.) · Max.Iters./20,    (5)

provided that it already has more than one shrapnel. The subtracted shrapnel are randomly redistributed among all grenades. A high value of Rt should be chosen at the beginning (Rt-initial) and reduced gradually, so that the grenades, once the probable location of the global minimum has been found, can search it all together for the exact answer. Rrd is the ratio of Rt-initial to the final value of Rt and is specified at the beginning of the algorithm.

The steps of GEM can be stated as follows:

(1) Scale the independent variables' ranges to [−1, 1].
(2) Select the problem parameters: (Ng, Nq, Rt-initial, Rrd, psin, mmin, Max.Iters.).
(3) Let Re = 2√n, Rt = Rt-initial, mOSD = 0, Iter.No. = 1, pTS_i = 0.2 (i = 1, 2, ..., Ng).

Fig. 2. A multimodal function (a) with and (b) without a simple pattern towards the global minimum.


(4) Put Ng grenades at random locations X_i ∈ [−1, 1]^n (i = 1, ..., Ng) in the n-dimensional space, a distance of at least Rt apart from each other.
(5) While Iter.No. ≤ Max.Iters.:
(6) Arrange the grenades by fitness (the best grenade first).
(7) Let i = 1.
(8) If (Iter.No. ≥ 0.1 · Max.Iters.) and i is not larger than the number of desired minima, specify the optimal search direction by throwing 2n shrapnel along the coordinate directions. Label the best of these shrapnel that has not entered the territory of any of the previous grenades in this iteration "X′_OSD". If the current location of the grenade is not in the territory of the previous grenades in this iteration, label it X′_cur; otherwise X′_cur = [ ].
(9) Let this grenade throw Nq shrapnel. Use the guided random search if d_OSD exists for this grenade. Label the best of these shrapnel X′_rnd.
(10) Move the grenade to the best location among {X′_rnd, X′_cur, X′_OSD}. If X′_rnd is the best, adjust pTS_i based on the distance between X′_rnd and the old location of the moved grenade. If the value of the objective function has not improved by an amount of 10^−6/z in z successive iterations, halve the number of shrapnel of this grenade and distribute the removed shrapnel among all grenades randomly.
(11) i ← i + 1. If i ≤ Ng, go to step 8.
(12) Reduce the values of Rt and m. Update the value of Re: Re = (2√n)^m (Rt)^(1−m).
(13) Update mOSD = [sin((π/2) · (Iter.No. − 0.1 Max.Iters.)/(0.9 Max.Iters.))]^psin.
(14) End While.

Although GEM is highly elitist, a large value of Re, especially in the early iterations, keeps the algorithm from being trapped in local minima.
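The steps above can be condensed into a single-grenade sketch (a deliberate simplification of ours: no territory interactions and no OSD shrapnel, plus an assumed exponential decay schedule for Rt; the radial law of Eq. (1), the Re update of step (12), and the linear decrease of m follow the text):

```python
import numpy as np

def gem_minimize(f, n, max_iters=200, n_q=10, r_t0=0.2, r_rd=100.0,
                 m_min=0.1, seed=0):
    # Simplified single-grenade GEM loop: no territories, no OSD;
    # Rt decays from r_t0 to r_t0 / r_rd (assumed schedule)
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)               # step (4): scaled domain
    fx = f(x)
    p_ts = 0.2                                  # step (3)
    r_e0 = 2.0 * np.sqrt(n)
    m = 1.0
    for it in range(1, max_iters + 1):
        r_t = r_t0 * r_rd ** (-it / max_iters)
        r_e = (r_e0 ** m) * (r_t ** (1.0 - m))  # step (12)
        p = max(1.0 / n, np.log(r_t / r_e) / np.log(p_ts))  # Eq. (2)
        for _ in range(n_q):                    # step (9): throw Nq shrapnel
            d = rng.standard_normal(n)
            d /= np.linalg.norm(d)
            x_try = np.clip(x + (rng.random() ** p) * r_e * d, -1.0, 1.0)
            f_try = f(x_try)
            if f_try < fx:                      # step (10): move to best hit
                x, fx = x_try, f_try
        m = max(m_min, 1.0 - it / max_iters)    # m decreases linearly
    return x, fx

best_x, best_f = gem_minimize(lambda v: float(np.sum(v ** 2)), n=2)
```

Even this stripped-down version converges on a simple convex function because the explosion radius shrinks from global scale (2√n) down to the final territory radius.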

2.2. Several guidelines for choosing parameters

Unsuitable values for the algorithm parameters may result in a low convergence rate, convergence to a local minimum, or unreliable solutions.

– The number of grenades plays the same role as the number of bee colonies in BCO. For an ordinary optimization problem, two or three grenades with 10–15 shrapnel per grenade are sufficient; Nq can be raised up to 100 for complicated problems if necessary. When finding several global minima (if available), or local minima whose values are close to the global minimum, a number of grenades equal to 1.5 or 2 times the number of desired minima is sufficient.
– The initial value of the territory radius (Rt-initial) should be large enough to eliminate the possibility of crowding in the early iterations of the algorithm. If Rt-initial is very large, however, it takes a long time to place the grenades in an acceptable configuration, and still larger values may make it impossible. The optimal value of Rt-initial depends only on the number of grenades and the space dimension. Furthermore, a high value of Rrd leads to more accurate solutions but increases the probability of convergence to local minima.
– The parameter m specifies the weight of Re-initial = 2√n in the value of Re. It ensures that Re is kept large in comparison with Rt, so that far regions can be explored for new high-fitness regions and the algorithm is not trapped in local minima. This works somewhat like the mutation operation in GA or the scout bees in BCO, but there is a pattern to the global search: although far regions have a chance of being explored, the chance rises for closer regions, i.e. the closer, the more probable. There is no strict criterion distinguishing shrapnel that participate in the random search from those that participate in the local search; all shrapnel participate in both global and local search simultaneously, and only the weight of each kind of search varies. The value of m is unity at the beginning and decreases linearly to a small value near zero (mmin ≈ 0.2) in the final iterations.
– A higher number of total iterations increases the probability of finding the near-optimum solution or solutions and, at the same time, increases the accuracy of the solutions found.
– The choice of 0.1 ≤ psin ≤ 10 depends on the existence of a pattern in the general trend of the objective function. If such a pattern exists, a small value should be selected so that mOSD increases fast; otherwise a higher value is preferred in order to improve the random search.

3. Validation of GEM

In this section, a number of benchmark functions that were used in [19] to compare the abilities of GA with some of its variations are optimized in order to verify the reliability and fast convergence of GEM. Table 1 presents these functions, the search domain and the global minimum value of each. The parameter selection used for the optimization of these functions is presented in Table 2. As can be observed, only a few modifications are required to achieve good results.


Table 1. Specifications of the 13 benchmark functions used in [19].

Aluffi–Pentini (Ap): x ∈ [−10, 10]^2;
  f(x) = (1/4)x1^4 − (1/2)x1^2 + (1/10)x1 + (1/2)x2^2;  fmin = −0.352386

Bohachevsky 1 (Bf1): x ∈ [−100, 100]^2;
  f(x) = x1^2 + 2x2^2 − (3/10)cos(3πx1) − (4/10)cos(4πx2) + 7/10;  fmin = 0

Bohachevsky 2 (Bf2): x ∈ [−50, 50]^2;
  f(x) = x1^2 + 2x2^2 − (3/10)cos(3πx1)cos(4πx2) + 3/10;  fmin = 0

Becker and Lago (BL): x ∈ [−10, 10]^2;
  f(x) = (|x1| − 5)^2 + (|x2| − 5)^2;  fmin = 0

Branin: x1 ∈ [−5, 10], x2 ∈ [0, 15];
  f(x) = (x2 − (5.1/4π^2)x1^2 + (5/π)x1 − 6)^2 + 10(1 − 1/8π)cos(x1) + 10;  fmin = 0.397887

Camel: x ∈ [−5, 5]^2;
  f(x) = (4 − 2.1x1^2 + x1^4/3)x1^2 + x1x2 + (−4 + 4x2^2)x2^2;  fmin = −1.0316

Three Hump (Cb3): x ∈ [−5, 5]^2;
  f(x) = 2x1^2 − 1.05x1^4 + x1^6/6 + x1x2 + x2^2;  fmin = 0

Cosine Mixture (CM): x ∈ [−1, 1]^n;
  f(x) = Σ_{i=1}^n xi^2 − (1/10) Σ_{i=1}^n cos(5πxi);  fmin = −0.4 (for n = 4)

De Jong: x ∈ [−5.12, 5.12]^3;
  f(x) = x1^2 + x2^2 + x3^2;  fmin = 0

Exponential (Exp): x ∈ [−1, 1]^n;
  f(x) = −exp(−0.5 Σ_{i=1}^n xi^2);  fmin = −1

Goldstein and Price: x ∈ [−2, 2]^2;
  f(x) = [1 + (x1 + x2 + 1)^2 (19 − 14x1 + 3x1^2 − 14x2 + 6x1x2 + 3x2^2)] × [30 + (2x1 − 3x2)^2 (18 − 32x1 + 12x1^2 + 48x2 − 36x1x2 + 27x2^2)];  fmin = 3

Hartman6: x ∈ [0, 1]^6;
  f(x) = −Σ_{i=1}^4 ci exp(−Σ_{j=1}^6 aij (xj − pij)^2);  fmin = −3.322368,
  with
  a = [10 3 17 3.5 1.7 8; 0.05 10 17 0.1 8 14; 3 3.5 1.7 10 17 8; 17 8 0.05 10 0.1 14],
  c = [1; 1.2; 3; 3.2],
  p = [0.1312 0.1696 0.5569 0.0124 0.8283 0.5886; 0.2329 0.4135 0.8307 0.3736 0.1004 0.9991; 0.2348 0.1451 0.3522 0.2883 0.3047 0.6650; 0.4047 0.8828 0.8732 0.5743 0.1091 0.0381]

Shekel5: x ∈ [0, 10]^4;
  f(x) = −Σ_{i=1}^5 1/((x − ai)(x − ai)^T + ci);  fmin = −10.1532,
  with
  a = [4 4 4 4; 1 1 1 1; 8 8 8 8; 6 6 6 6; 3 7 3 7],
  c = [0.1; 0.2; 0.2; 0.4; 0.4]
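For reference, two of the entries in Table 1 translate directly into code (a sketch of ours, written from the formulas above):

```python
import numpy as np

def bohachevsky1(x):
    # Bf1 from Table 1; global minimum 0 at the origin
    x1, x2 = x
    return (x1**2 + 2.0*x2**2 - 0.3*np.cos(3.0*np.pi*x1)
            - 0.4*np.cos(4.0*np.pi*x2) + 0.7)

def camel(x):
    # six-hump camel from Table 1; global minimum about -1.0316
    x1, x2 = x
    return ((4.0 - 2.1*x1**2 + x1**4 / 3.0)*x1**2 + x1*x2
            + (-4.0 + 4.0*x2**2)*x2**2)
```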

Table 2. Parameter selection of GEM used for the optimization of the benchmark functions presented in Table 1.

Function            Dim.  Ng  Nq  No. of iterations  Rrd   Rt   mmin  psin
Ap                  2     1   12  30                 100   1    0.2   0.1
Bf1                 2     1   8   40                 2000  1    0.2   0.1
Bf2                 2     1   10  50                 100   1    0.2   0.1
BL                  2     1   4   20                 2000  1    0.2   0.1
Branin              2     1   4   20                 2000  1    0.2   0.1
Camel               2     1   8   30                 100   1    0.2   0.2
Cb3                 2     1   14  50                 50    1    0.2   0.5
CM                  4     1   10  30                 70    1.5  0.1   0.1
De Jong             3     1   6   20                 2000  1.2  0.2   0.1
Exp4                4     1   6   20                 2000  1.5  0.2   0.1
Exp32               32    1   20  10                 2000  3    0.2   0.1
Goldstein and Price 2     1   16  40                 80    1    0.3   1
Hartman6            6     4   9   60                 300   1.6  0.2   0.2
Shekel5             4     5   12  100                200   1.2  0.3   0.1

Table 3. Results for the optimization of the benchmark functions presented in Table 1, using the genetic algorithm and some of its variants [19] and GEM.

Function            GEN           GEN_S         GEN_S_M       GEN_S_M_LS    GEM
Ap                  1360 (0.99)   1360          1277          1253          274
Bf1                 3992          3356          1640          1615          291
Bf2                 20234         3373          1676          1636          344
BL                  19596         2412          2439          1463          92
Branin              1442          1418          1404          1257          218
Camel               1358          1358          1336          1300          224
Cb3                 9771          2045          1163          1118          315
CM                  2105          2105          1743          1539          214
De Jong             9900          3040          1462          1281          122
Exp4                3237          3237          2054          1496          109
Exp32               9934          9932          5113          2241          499
Goldstein and Price 1478          1478          1408          1325          475
Hartman6            2562 (0.54)   2562 (0.54)   2530 (0.67)   1865 (0.68)   531
Shekel5             2527 (0.61)   2507 (0.61)   2509 (0.69)   2049 (0.67)   586

Total               89496 (0.94)  40183 (0.94)  27754 (0.95)  21438 (0.95)  4294 (1.00)

The results obtained by GEM are listed in Table 3 along with those obtained by GA and some of its variations, taken directly from [19]. The numbers in the cells denote the average number of function evaluations over 100 independent runs for each objective function described in Table 1. The numbers in parentheses represent the fraction of successful runs, i.e. runs in which the algorithm located the global minimum with the predefined accuracy ε = |fmin − ffinal| = 10^−4; the absence of parentheses means that the algorithm was successful in all independent runs. The comparison demonstrates that GEM has considerably faster convergence and, moreover, unlike GA and its variations, GEM located the global minimum of all functions in all runs.

4. Special abilities of GEM

Besides the general properties of an ordinary optimization method, GEM shows special abilities in finding solutions of an optimization problem: reliable performance on high-dimensional problems, finding all global minima of functions having several global minima, finding high-fitness local minima, and invariance under shifting and scaling of the objective function and search domain.

4.1. High-dimensional problems

The problems caused by the rapid increase in volume associated with adding extra dimensions to a mathematical space, known as the curse of dimensionality [20], may significantly limit the reliability of some optimization algorithms as the number of state variables increases [21,22]. Nevertheless, GEM is an efficient tool for such problems. To show this, four 50-dimensional benchmark functions presented in [13] are optimized using GEM and the obtained results are compared with those of DE, PSO, EA and ABC presented in [13]. Among the optimization results presented in [13], the best belong to ABC; thus the performance of GEM is compared only with that of ABC. These functions are presented in Table 4.


Table 4. Fifty-dimensional benchmark functions optimized in [13].

Sphere: x ∈ [−100, 100]^50;
  f(x) = Σ_{i=1}^{50} xi^2;  f(0) = 0

Rastrigin: x ∈ [−5.12, 5.12]^50;
  f(x) = Σ_{i=1}^{50} (xi^2 − 10 cos(2πxi) + 10);  f(0) = 0

Griewank: x ∈ [−600, 600]^50;
  f(x) = (1/4000) Σ_{i=1}^{50} (xi − 100)^2 − Π_{i=1}^{50} cos((xi − 100)/√i) + 1;  f(100, ..., 100) = 0

Rosenbrock: x ∈ [−50, 50]^50;
  f(x) = Σ_{i=1}^{49} [100(x_{i+1} − xi^2)^2 + (xi − 1)^2];  f(1, ..., 1) = 0

To obtain the convergence diagrams, 30 independent runs were executed and the mean of the best individual values was calculated to represent the final result. In [13] the results are presented for colony sizes of 10, 50 and 100; for each function, only the best two ABC results are reproduced here. Since the number of shrapnel per iteration in GEM is more than 100, the performance of the algorithms is illustrated as a function of the number of function evaluations rather than the iteration number. The parameter selection of GEM used for the optimization of these functions is presented in Table 5. The diagrams in Fig. 3 illustrate the convergence on these functions when using ABC with different colony sizes and GEM.

For the Sphere function, both algorithms located the global minimum. The fastest convergence belongs to ABC with a population size of 10 and to GEM. ABC10 converges faster down to about fmin − ffinal = 10^−12; beyond that point, unlike GEM's, its convergence rate slows down and GEM surpasses ABC10. The constant convergence rate of GEM on this function may be a result of the implementation of the optimal search direction. For the Rastrigin function, the convergence of GEM is considerably faster than that of ABC. Although this function is highly multimodal, on average the value of the objective function has a decreasing trend while moving towards the global minimum; using the optimal search direction, GEM can exploit this trend and focus on regions close to the global minimum. Having located the valley of the global minimum, the optimal search direction then helps the algorithm converge to it quickly. For the Griewank function, ABC with colony sizes of 100 and 50, and GEM, located the global minimum; again, GEM converged faster than ABC. For the Rosenbrock function, ABC is more successful up to fmin − ffinal = 0.1; after that, GEM shows a considerably higher rate of convergence and surpasses ABC.

4.2. Finding all global minima of functions with multiple global minima

As mentioned before, the ability to find multiple global minima, if available, and local minima with high fitness can be counted as an advantage of an optimization algorithm. To verify this ability of the proposed method, two benchmark functions having several global minima are optimized using it. Table 6 presents these functions and their specifications. The parameter selection for these functions is listed in Table 7, and the obtained results, the outcome of 30 independent runs for each function, are presented in Table 8. A global minimum is considered found if ffinal − fmin ≤ 10^−6. For these functions the number of grenades is set equal to 1.5 times the number of desired minima, and the number of grenades that utilize the optimal search direction equals the number of global minima (see Fig. 4).

For the Himmelblau function, GEM found all global minima in all runs. The average of the found minimum values is 4.5e−9 and the standard deviation is 5.1e−9. For the Shubert function, the average probability of finding any one of the global minima in a single run is 0.965, with a standard deviation of 0.042, which shows that the probabilities of finding the individual global minima are quite similar. The average final value over all global minima

Table 5. Parameter selection for the optimization of the 50-dimensional benchmark functions presented in Table 4.

Function    Ng  Nq  No. of iterations  Rrd    Rt  mmin  psin
Sphere      1   50  40                 400    4   0.2   0.1
Rastrigin   1   50  400                100    4   0.4   0.2
Griewank    1   50  400                300    4   0.3   0.5
Rosenbrock  1   25  4000               50000  4   0.2   0.1

Fig. 3. Optimization of (a) 50D Sphere function (b) 50D Rastrigin Function (c) 50D Griewank Function and (d) 50D Rosenbrock Function by using ABC and GEM. Results for ABC are derived from [13].

Table 6. Specifications of multimodal functions with several global minima.

Himmelblau: dim. 2; x1 ∈ [−6, 6], x2 ∈ [−6, 6];
  f(X) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2;  fmin = 0;
  Xmin = (3, 2), (−2.805118, 3.131312), (−3.779310, −3.283186), (3.584428, −1.848126)

Shubert: dim. 2; x1 ∈ [−10, 10], x2 ∈ [−10, 10];
  f(X) = (Σ_{i=1}^5 i cos((i + 1)x1 + i)) (Σ_{j=1}^5 j cos((j + 1)x2 + j));
  fmin = −186.730908831023; several global minima (see Fig. 4)
is 3.3e−11 and the standard deviation is 1.4e−10, which shows that the global minima have been found with high accuracy.

4.3. Finding high-fitness local minima

As mentioned, finding local minima at which the value of the objective function is close to the global minimum value can be considered an advantage in an industrial optimization problem. To show the proposed algorithm's ability to find such local minima, the two-dimensional Rastrigin function, which has several local minima, is optimized using GEM. Fig. 5 depicts the locations of the good local minima of this function (floc < 10). Suppose the goal is to locate the twelve best minima of this function. We set Ng = 18, Nq = 12, Rrd = 30, Rt = 0.4, mmin = 0.3, psin = 1 and Max.Iters. = 60. In each iteration, only the best twelve grenades are allowed to use the optimal search direction; the total number of function evaluations is thus 15840. Again, 30 independent runs were executed


Table 7. Parameter selection for the optimization of the Himmelblau and Shubert functions.

Function    Ng  Nq  No. of iterations  Rrd  Rt    mmin  psin  No. of total func. evals.
Himmelblau  6   7   50                 150  0.65  0.2   0.2   2900
Shubert     27  10  250                130  0.3   0.3   5     85500

Table 8. Results for the optimization of the functions presented in Table 6.

Function    Avg. probability of finding any global minimum  SD of that probability  Avg. ffinal − fmin  SD of ffinal − fmin
Himmelblau  1.000                                           0.000                   4.50E−09            5.10E−09
Shubert     0.965                                           0.042                   3.30E−11            1.40E−10
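The two test functions of Table 6 translate directly into code (a sketch of ours, written from the formulas there):

```python
import numpy as np

def himmelblau(x):
    # Himmelblau function from Table 6; four global minima with f = 0
    x1, x2 = x
    return (x1**2 + x2 - 11.0)**2 + (x1 + x2**2 - 7.0)**2

def shubert(x):
    # two-dimensional Shubert function from Table 6
    x1, x2 = x
    i = np.arange(1, 6)
    return (np.sum(i * np.cos((i + 1) * x1 + i))
            * np.sum(i * np.cos((i + 1) * x2 + i)))
```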

Fig. 4. Two-dimensional Shubert Function: (a) 3D plot (b) global minima.

Fig. 5. High-fitted minima of the 2D Rastrigin function, at which the value of the objective function is less than 10: (a) surface plot; (b) locations of these minima.

and the average probability of finding each of the minima, together with the average error in the minimum values, is listed in Table 9. A minimum is considered found if ffinal − floc ≤ 10^−6.

The results demonstrate that the global minimum was found in all runs; the average tolerance between the global minimum and the best minimum found is 2.20E−14. The local minima are found with a probability that depends on the goodness of the minimum. The highest is for the minimum which has the second rank (1.000) and the lowest is for the one which has the

Table 9. Results for the optimization of the 2D Rastrigin function.

Rank of minimum  No. of minima in this rank  Minimum value  Probability of finding each minimum  Avg. ffinal − fmin
1                1                           0              1.000                                2.20E−14
2                4                           0.994959057    1.000                                8.26E−09
3                4                           1.989918114    0.925                                1.40E−08
4                4                           3.979831191    0.633                                4.01E−08
5                8                           4.974790248    0.433                                1.23E−07
6                4                           7.959662381    0.125                                2.38E−07
7                4                           8.954601241    0.092                                1.27E−07
8                8                           9.949560299    0.056                                2.41E−07
9                8                           12.934432432   0.004                                4.17E−08

9th rank. Local minima with worse ranks were not found at all. Furthermore, only 4.3% of the grenades converged to non-minimum points.

4.4. Invariance of GEM

The performance of GEM is invariant under linear shifting and scaling of the search domain and the objective function: the convergence processes for f(x) and a·f(bx + c·x0) + d are quite similar when using GEM, provided that the global minimum does not get too close to the bounds. Here x0 is a random point in [−1, 1]^n. This property lets one optimize a simplified form of the objective function, if available, and thus improves the time efficiency of the algorithm. To show this, shifted and scaled forms of two benchmark functions already optimized by GEM in Section 4.1, the 50D Rastrigin and Griewank functions, are optimized, and the convergence processes of the modified functions are illustrated along with those of the basic forms. The parameter selection is the same as in Table 5, and the random values chosen for a, b, c, d are presented in Table 10; the range of the independent variables is divided by b. The convergence behavior is illustrated in Fig. 6. To ease the comparison, the value of f(bx + c·x0) is plotted instead of a·f(bx + c·x0) + d. The diagrams demonstrate that the performance of GEM is invariant under shifting of the objective function and under scaling of the objective function and the search domain; shifting the global minimum may affect the performance of GEM slightly.
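The transformation used in this section can be wrapped generically (a sketch of ours; the parameter values are the T1 Rastrigin entries from Table 10, with signs as printed there):

```python
import numpy as np

def transformed(f, a, b, c, d, x0):
    # g(x) = a * f(b*x + c*x0) + d, the shifted/scaled form of Section 4.4
    return lambda x: a * f(b * x + c * x0) + d

rastrigin = lambda x: float(np.sum(x**2 - 10.0*np.cos(2.0*np.pi*x) + 10.0))
rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, 50)
g = transformed(rastrigin, a=1.4, b=1.8, c=0.0, d=129.0, x0=x0)
```

With c = 0 the minimizer stays at the origin and g(0) = 1.4·f(0) + 129 = 129, so an invariant optimizer should trace the same convergence curve for f and g after subtracting d and dividing by a.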

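The invariance claim above has a simple mechanical core: if x* minimizes f, then (x* − c·x0)/b minimizes af(bx + cx0) + d, with minimum value af(x*) + d. The following is a minimal sketch in plain Python, not part of the original paper; the dimension n = 5 and the point x0 are illustrative stand-ins, and only the Rastrigin T1 parameters are taken from Table 10.

```python
import math

def rastrigin(x):
    # n-dimensional Rastrigin function; global minimum 0 at the origin
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def make_transformed(f, a, b, c, d, x0):
    """Return the shifted/scaled objective x -> a*f(b*x + c*x0) + d."""
    def g(x):
        return a * f([b * xi + c * x0i for xi, x0i in zip(x, x0)]) + d
    return g

# Rastrigin test T1 from Table 10: a = 1.4, b = 1.8, c = 0, d = 129
n = 5                      # small dimension for illustration (the paper uses n = 50)
x0 = [0.5] * n             # stand-in for the random shift point in [-1, 1]^n
g = make_transformed(rastrigin, 1.4, 1.8, 0.0, 129.0, x0)

# The minimizer x* of f maps to (x* - c*x0)/b; here x* = 0 and c = 0,
# so g is minimized at the origin with value a*0 + d = 129.0
print(g([0.0] * n))        # 129.0
```

Since the transform only relabels the search space and rescales the objective, an optimizer whose moves are defined relative to the current agents (as GEM's are) traverses the same sequence of relative positions on both functions.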
Table 10
Random values used for the shifted and scaled forms of the 50D Rastrigin and Griewank Functions.

| Function  | Test | a     | b    | c   | d    |
|-----------|------|-------|------|-----|------|
| Rastrigin | T1   | 1.4   | 1.8  | 0   | 129  |
| Rastrigin | T2   | 0.056 | 5.1  | 1   | 54   |
| Rastrigin | T3   | 327   | 0.14 | 2   | 74   |
| Griewank  | T1   | 1.1   | 142  | 0   | 30.7 |
| Griewank  | T2   | 436   | 54   | 150 | 65   |
| Griewank  | T3   | 7.9   | 0.47 | 300 | 910  |

[Fig. 6. Convergence of (a) the Rastrigin Function and (b) the Griewank Function and their shifted and scaled forms. Each panel plots f(bx + cx0) on a logarithmic scale (1.E+00 down to 1.E−15) against the number of function evaluations (up to roughly 30,000 for Rastrigin and 45,000 for Griewank), with curves Base, T1, T2 and T3.]
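As a side check on Table 9, the tabulated minimum values follow from the separable form of the Rastrigin function, f(x) = Σᵢ (xᵢ² − 10cos(2πxᵢ) + 10): each local minimum lies near an integer lattice point, and its value is the sum of independent one-dimensional contributions. The sketch below (plain Python, not part of the original paper) recovers the rank-2 value by Newton's method on a single coordinate.

```python
import math

def g(x):
    # one-dimensional Rastrigin term: x^2 - 10*cos(2*pi*x) + 10
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def g_prime(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def g_second(x):
    return 2.0 + 40.0 * math.pi ** 2 * math.cos(2.0 * math.pi * x)

def local_min_near(k, iters=50):
    """Newton iteration for the local minimizer of g near the integer k."""
    x = float(k)
    for _ in range(iters):
        x -= g_prime(x) / g_second(x)
    return x

# Rank-2 minima of the 2D function sit at points like (±1, 0) and (0, ±1):
x_star = local_min_near(1)
value_2d = g(x_star) + g(0.0)   # the second coordinate stays at its minimum, 0
print(value_2d)                  # close to 0.994959057 (Table 9, rank 2)
```

The same construction reproduces the other ranks; for instance, minima near (±1, ±1) give twice this value, matching the rank-3 entry 1.989918114.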
5. Conclusions

A new evolutionary method for the optimization of complex, multimodal functions was presented. The method is inspired by the mechanism of a grenade explosion and the heuristics by which the location of the next explosion is determined. Underlying concepts of the method, including an agent's territory and the optimal search direction, were elaborated, and a pseudo code for the method was presented. Application of the presented method to numerous benchmark functions showed that it outperforms its rivals, including such popular evolutionary algorithms as GAs and ABC, in both success rate and rate of convergence. In addition to the common capabilities of an evolutionary optimization method, GEM was shown to be able to locate virtually all global minima of functions having several global minima, and to locate highly fit local minima on demand. Finally, it was demonstrated that the performance of this method is invariant against shifting and scaling of the search domain and the objective function.

References

[1] J. Ombach, Stability of evolutionary algorithms, J. Math. Anal. 342 (2008) 326–333.
[2] I.J. Dotu, J. Garcia, A. Berlanga, J.M. Molina, A meta-level evolutionary strategy for many-criteria design: application to improving tracking filters, Adv. Eng. Inf., doi:10.1016/j.aei.2008.08.001.
[3] M.B. Aryanezhad, M. Hemati, A new genetic algorithm for solving nonconvex nonlinear programming problems, Appl. Math. Comput. 199 (2008) 186–194.
[4] Z. Wu, G. Ding, K. Wang, M. Fukaya, Application of a genetic algorithm to optimize the refrigerant circuit of fin-and-tube heat exchangers for maximum heat transfer or shortest tube, Int. J. Therm. Sci. 47 (2008) 985–997.
[5] J.F. Goncalves, J.J.M. Mendes, M.G.C. Resende, A genetic algorithm for the resource constrained multi-project scheduling problem, Eur. J. Oper. Res. 189 (2008) 1171–1190.
[6] T.H. Wu, C.C. Chang, A.H. Chung, A simulated annealing algorithm for manufacturing cell formation problems, Expert Syst. Appl. 34 (2008) 1609–1617.
[7] S. Suresh, P.B. Sujit, A.K. Rao, Particle swarm optimization approach for multi-objective composite-beam design, Compos. Struct. 81 (2007) 598–605.
[8] I.C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Inf. Process. Lett. 85 (2003) 317–325.
[9] W.H. Liao, Y. Kao, C.M. Fan, Data aggregation in wireless sensor networks using ant colony algorithm, J. Network Comput. Appl. 31 (2008) 387–401.
[10] S.M. Hashemi, A. Moradi, M. Rezapour, An ACO algorithm to design UMTS access network using divided and conquer techniques, Eng. Appl. Artif. Intell. 21 (2008) 931–940.
[11] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim. 39 (2007) 459–471.
[12] D.T. Pham, A. Ghanbarzade, E. Koc, S. Otri, S. Rahim, M. Zaidi, The bees algorithm – a novel tool for complex optimization problems, in: IPROMS 2006, Cardiff, England, pp. 454–461.
[13] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[14] S. Salhi, N.M. Queen, A hybrid algorithm for identifying global and local minima when optimizing functions with many local minima, Eur. J. Oper. Res. 155 (2004) 51–67.
[15] E. Rodriguez-Tello, J.K. Hao, J. Torres-Jimenez, An improved simulated annealing algorithm for bandwidth minimization, Eur. J. Oper. Res. 185 (2008) 1319–1335.
[16] T. Stutzle, Iterated local search for the quadratic assignment problem, Eur. J. Oper. Res. 174 (2006) 1519–1539.
[17] L. Devroye, Non-uniform Random Variate Generation, Springer-Verlag, New York, 1986.
[18] K.Y. Chan, C.K. Kwong, X.G. Juo, Improved orthogonal array based simulated annealing for design optimization, Expert Syst. Appl., doi:10.1016/j.eswa.2008.09.022.
[19] I.G. Tsoulos, Modification of real code genetic algorithm for global optimization, Appl. Math. Comput. 203 (2008) 598–607.
[20] R. Bellman, Adaptive Control Processes: A Guided Random Tour, Princeton University Press, New Jersey, 1961.
[21] J.I. Mulero-Martinez, Functions bandlimited in frequency are free of the curse of dimensionality, Neurocomputing 70 (2007) 1439–1452.
[22] F. Heiss, V. Winschel, Likelihood approximation by numerical integration on sparse grids, J. Econom. 144 (2008) 62–80.