Expert Systems with Applications 36 (2009) 6030–6035
Minimizing the multimodal functions with Ant Colony Optimization approach

M. Duran Toksarı
Erciyes University, Engineering Faculty, Industrial Engineering Department, Kayseri, Turkey
Keywords: Ant colony optimization; Global optimization; Meta-heuristics; Combinatorial optimization
Abstract

Ant colony optimization (ACO) algorithms are multi-agent systems inspired by the behaviour of ants as they find solutions to combinatorial optimization problems. This paper presents an ACO-based algorithm for finding the global minimum of a nonconvex function. The algorithm is based on the idea that each ant searches only around the best solution of the previous iteration. The algorithm was tested on a set of standard test functions with successful results, and its performance was observed to be better than that of the other algorithms compared. © 2008 Elsevier Ltd. All rights reserved.
1. Introduction
Many heuristic methods, such as a simulated annealing-based algorithm (Corana, Marchesi, Martini, & Ridella, 1987), simulated annealing based on stochastic differential equations (Aluffi-Pentini, Parisi, & Zirilli, 1985), a simulated annealing algorithm (Dekkers & Aarts, 1991), genetic algorithms (Pham & Yang, 1993), a tabu search Hooke and Jeeves algorithm (Al-Sultan & Al-Fawzan, 1997), and tabu search based on memory (Ji & Tang, 2004), have been developed for global optimization. These methods find the global minimum of the test functions used; however, all of them require a large number of function evaluations. In this paper, an ACO-based algorithm for global optimization is suggested. ACO is inspired by the behaviour of real ant colonies, in particular by their foraging behaviour. The basic idea is to imitate the cooperative behaviour of ant colonies to solve combinatorial optimization problems; ACO belongs to a class of biologically inspired heuristics. The optimization problems considered have an objective function, which may be nonconvex, with many local minimum points, only one of which is the global minimum: if $F(x_{min}) \le F(x)$ for all $x$, then $x_{min}$ is the point at which the function attains its minimum.

This paper is organized as follows. First, ACO is explained briefly. In Section 3, the proposed ACO-based algorithm for global optimization is detailed. Computational results are given and compared with the results of other heuristic methods in Section 4. The test functions given in Appendices A and B are used to perform two experiments.
2. Ant colony optimization (ACO)

Ant colony optimization, introduced by Dorigo (1992), is now a classical approach to solving combinatorial optimization problems. One of its main ideas is the indirect communication among the individuals of a colony of agents, called ants. The principle of the method is based on the way ants search for food and find their way back to the nest: during their trips, a chemical trail called pheromone is left on the ground, and its role is to guide the other ants towards the target point. An ant chooses its path according to the quantity of pheromone. ACO has been applied to combinatorial optimization problems such as the travelling salesman problem (Dorigo & Gambardella, 1997), the layout problem (Solimanpur, Vrat, & Shankar, 2004) and flowshop scheduling problems (Rajendran & Ziegler, 2004; T'kindt et al., 2002). The general ACO algorithm is illustrated in Fig. 1 (shown below). The procedure of the ACO algorithm manages the scheduling of three activities (Dorigo & Di Caro, 1999; Talbi, Roux, Fonlupt, & Robillard, 2001). The first step consists mainly of the initialization of the pheromone trail. In the iteration (second) step, each ant constructs a complete solution to the problem according to a probabilistic state transition rule, which depends mainly on the state of the pheromone. The third step updates the quantity of pheromone; a global pheromone updating rule is applied in two phases: first, a fraction of the pheromone evaporates, and then each ant deposits an amount of pheromone that is proportional to the fitness of its solution. This process is iterated until a stopping criterion is met.
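As a companion to the pseudocode of Fig. 1, the following minimal Python skeleton sketches the three activities of a generic ACO procedure. It is illustrative only: the names `construct` and `fitness` are our assumptions, and the pheromone trail is collapsed to a single scalar for brevity, whereas real implementations keep a trail value per solution component.

```python
import random  # a concrete construct() would typically use randomness

def generic_aco(construct, fitness, ants=10, iterations=100, rho=0.1):
    """Illustrative skeleton of the three activities in Fig. 1.

    `construct` builds one candidate solution from the pheromone state and
    `fitness` scores it (higher is better); both callables are assumptions.
    """
    pheromone = 1.0  # Step 1: initialize the pheromone trail
    best, best_fit = None, float("-inf")
    for _ in range(iterations):
        # Step 2: each ant constructs a complete solution according to a
        # probabilistic state transition rule driven by the pheromone
        solutions = [construct(pheromone) for _ in range(ants)]
        # Step 3, phase 1: a fraction of the pheromone evaporates
        pheromone *= (1.0 - rho)
        # Step 3, phase 2: each ant deposits pheromone proportional to the
        # fitness of its solution
        for s in solutions:
            fit = fitness(s)
            pheromone += fit
            if fit > best_fit:
                best, best_fit = s, fit
    return best, best_fit
```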
Step 1. Initialization
  Initialize the pheromone trail.
Step 2. Solution construction
  For each ant: repeat solution construction using the pheromone trail.
Step 3. Update the pheromone trail.
Repeat Steps 2 and 3 until the stopping criterion is met.

Fig. 1. A generic ACO algorithm.

3. Ant colony optimization to find the global optimum

The ACO algorithm is used here to find the global optimum. The proposed algorithm was obtained by developing the ACO-based algorithm for finding the global minimum of Toksari (2006). The superiority of the proposed algorithm is that the variables of the problem are optimized concurrently, whereas in the ACO-based algorithm of Toksari (2006) they are optimized one by one. In the proposed algorithm, first, $m$ ants are associated with $m$ random initial vectors $(x^k_{initial}, y^k_{initial})$ $(k = 1, 2, \ldots, m)$ for a problem with two variables, $f(x, y)$ (alternatively, all the ants may be set to the same value). For example, the variables $(x, y)$ may be bounded by $-1 < x, y < 1$ (Fig. 2a). The difference between the generic ACO and the proposed ant colony-based algorithm is that the quantity of pheromone $(\tau_t)$ intensifies only around the best objective function value obtained in the previous iteration, and all ants turn toward it to search for a solution (Fig. 2b). The solution vector of each ant is updated using the following formulas:

$x^k_t = x^{best}_{t-1} \pm \Delta, \qquad t = 1, 2, \ldots, I$  (3.1)

$y^k_t = y^{best}_{t-1} \pm \Delta, \qquad t = 1, 2, \ldots, I$  (3.2)

where $(x^k_t, y^k_t)$ is the solution vector of the $k$th ant at iteration $t$, $(x^{best}_{t-1}, y^{best}_{t-1})$ is the best solution obtained at iteration $t-1$, and $\Delta$ is a vector generated randomly from the range $[-a, a]$ to determine the length of the jump. In formulas (3.1) and (3.2), the $(\pm)$ sign is used to define the direction of movement for each variable. All the variables are updated at the end of the iteration; thus, the variables of the problem are optimized concurrently and the solution time is decreased. The $(\pm)$ sign, which defines the direction of movement after the initial solution, is determined using

$\bar{x}^{best}_{initial} = x^{best}_{initial} + (x^{best}_{initial} \times 0.01)$  (3.3)

$\bar{y}^{best}_{initial} = y^{best}_{initial} + (y^{best}_{initial} \times 0.01)$  (3.4)
Fig. 2. (a) Five ants associated with five random initial vectors (the best vector is at point C, so the quantity of pheromone intensifies only between the global minimum point and point C). (b) The first iteration (the search was done only between the global minimum point and point C; at the end of iteration 1, the best vector is at point D1, so the quantity of pheromone intensifies only between the global minimum point and point D1).
best best best If f xbest sign is used in (3.1). Otherinitial ; yinitial 6 f xinitial ;yinitial ; (+) best best best wise, () sign is used. If f xbest initial ; yinitial 6 f xinitial ; yinitial , (+) sign is used in (3.2). Otherwise, () sign is used. At the end of the each iteration, the quantity of pheromone ðst Þ is updated in two phases. First, the quantity of pheromone ðst Þ is reduced to simulate the evaporation process by the formula (3.2). In a second phase, the pheromone is only reinforced around the best objective function value obtained from the previous iteration (formula (3.3)),
$\tau_t = 0.1 \times \tau_{t-1}$  (3.5)

$\tau_t = \tau_{t-1} + (0.01 \times f(x^{best}_{t-1}, y^{best}_{t-1}))$  (3.6)
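As a small illustration of the sign test above, the following hedged Python helper (the name `movement_signs` is ours, not the paper's) applies formulas (3.3) and (3.4) and the comparison rule to pick the $(\pm)$ directions used in (3.1) and (3.2):

```python
def movement_signs(f, x_best, y_best):
    """Pick the (+/-) directions for formulas (3.1) and (3.2).

    The best initial solution is perturbed by 1% in each variable
    (formulas (3.3)-(3.4)); a direction is kept when it does not
    increase the objective value.
    """
    base = f(x_best, y_best)
    x_bar = x_best + x_best * 0.01  # formula (3.3)
    y_bar = y_best + y_best * 0.01  # formula (3.4)
    sign_x = 1.0 if f(x_bar, y_best) <= base else -1.0
    sign_y = 1.0 if f(x_best, y_bar) <= base else -1.0
    return sign_x, sign_y
```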
This process is iterated until the maximum number of iterations ($I$) is reached. The steps of the proposed algorithm are shown in Fig. 3. In order not to pass over the global minimum, $a$ is set to $0.1 \times a$ at the end of every $\sqrt{I}$ iterations (i.e. $a = 0.1 \times a$), where $I$ is the maximum number of iterations; thus, the length of the jump gradually decreases, as illustrated in Fig. 4.

Fig. 3. Ant colony optimization for the global minimum.
Fig. 4. Jump movements (Toksari, 2006).
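Putting formulas (3.1)–(3.6) and the jump-length schedule together, a minimal sketch of the proposed algorithm for a two-variable function might look as follows. This is our interpretation under stated assumptions, not the author's code: the colony size $m$, the initial jump range $a$, the direction test, and the $\sqrt{I}$ decay schedule are taken from the text, while the random seeding of the ants is assumed. The pheromone $\tau$ is tracked as in (3.5)–(3.6), but in this simplified sketch the "intensification around the best" is realized directly by jumping from the best point.

```python
import math
import random

def aco_global_min(f, bounds=(-1.0, 1.0), m=5, I=1000, a=1.0):
    """Sketch of the proposed ACO-based search for a two-variable f(x, y)."""
    lo, hi = bounds
    # Associate m ants with m random initial vectors (x^k, y^k)
    ants = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(m)]
    x_best, y_best = min(ants, key=lambda p: f(*p))

    # Direction test, formulas (3.3)-(3.4): perturb the best initial solution
    # by 1% per variable and keep the sign that does not worsen f
    base = f(x_best, y_best)
    sign_x = 1.0 if f(x_best + 0.01 * x_best, y_best) <= base else -1.0
    sign_y = 1.0 if f(x_best, y_best + 0.01 * y_best) <= base else -1.0

    tau = 1.0  # pheromone quantity, updated as in (3.5)-(3.6)
    for t in range(1, I + 1):
        # Formulas (3.1)-(3.2): every ant jumps from the previous best by a
        # random length drawn from [-a, a], in the chosen direction
        for k in range(m):
            ants[k] = (x_best + sign_x * abs(random.uniform(-a, a)),
                       y_best + sign_y * abs(random.uniform(-a, a)))
        cand = min(ants, key=lambda p: f(*p))
        if f(*cand) < f(x_best, y_best):
            x_best, y_best = cand
        # Pheromone update: evaporation (3.5), then reinforcement around the
        # best objective value of the iteration (3.6)
        tau = 0.1 * tau + 0.01 * f(x_best, y_best)
        # Shrink the jump range every sqrt(I) iterations so the search does
        # not pass over the global minimum
        if t % max(1, round(math.sqrt(I))) == 0:
            a *= 0.1
    return (x_best, y_best), f(x_best, y_best)
```

For instance, `aco_global_min(lambda x, y: 100*(y - x**2)**2 + (1 - x)**2, bounds=(-2.0, 2.0))` should return a point close to the Rosenbrock minimum at (1, 1), although the accuracy of any single run depends on the random draws.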
4. Computational results

In this section, the performance of the proposed ACO-based algorithm is examined in two experiments. All the test functions are multimodal functions with many local minima, for which the global minimum is difficult to locate. Solutions obtained using the proposed algorithm are compared with the previous solutions obtained for these problems.

4.1. Test problems

In the first experiment, the proposed ACO-based algorithm is compared with the methods shown in Table 1, using a set of test functions known from the literature (Dixon & Szegö, 1978) (see Appendix A). In the second experiment, the proposed algorithm is tested on the Rosenbrock test function in 2 and 4 dimensions (see Appendix B) and compared with the methods shown in Table 2. Since the algorithms shown in Table 1 were tested on different machines, the standard unit of time, which makes comparisons machine-independent, is used (Dixon & Szegö, 1978). One unit of standard time is equivalent to the running time needed for 1000 evaluations of the Shekel 5 function using the point (4, 4, 4, 4).
Table 1
Methods used for the first experiment

Method  Name                                                             Reference
A       Multistart                                                       Rinnooy Kan and Timmer (1984)
B       Controlled random search                                         Price (1978)
C       Density clustering                                               Törn (1978)
D       Clustering with distribution function                            De Biase and Frontini (1978)
E       Multi level single linkage                                       Rinnooy Kan and Timmer (1987)
F       Simulated annealing                                              Dekkers and Aarts (1991)
G       Simulated annealing based on stochastic differential equations   Aluffi-Pentini et al. (1985)
H       Tabu search                                                      Al-Sultan and Al-Fawzan (1997)
I       Variable neighbourhood search                                    Toksarı and Güner (2007)
ACO     Ant colony optimization                                          This paper
Table 2
Methods used for the second experiment

Method   Name                             Reference
Simplex  Simplex method                   Nelder and Mead (1964)
ARS      Adaptive random search           Masri, Bekey, and Safford (1980)
SA       Simulated annealing              Corana et al. (1987)
VNS      Variable neighbourhood search    Toksarı and Güner (2007)
TS       Tabu search                      Al-Sultan and Al-Fawzan (1997)
Table 3
Number of objective function evaluations of the first experiment

Method   GP     BR     H3     H6     S5      S7      S10
A        4400   1600   2500   6000   6500    9300    11,000
B        2500   1800   2400   7600   3800    4900    4400
C        2499   1558   2584   3447   3649    3606    3874
D        378    597    732    807    620     788     1160
E        148    206    197    487    404     432^a   564
F^c      563    505    1459   4648   365^a   558     797
G^c      5349   2700   3416   3975   2446    4759    4741
H^b      281    398    578    2125   753     755     1203
I^b      206    308    521    1244   571     544     988
ACO^b    264    324    528    1344   648     564     1044

a The global minimum was not found in one of four runs.
b The average number of function evaluations of four runs.
c The number of function evaluations of the initial sampling are not included.
Table 4
Running time in units of standard time of the first experiment

Method   GP     BR     H3     H6     S5      S7     S10
A        4.5    2      7      22     13      21     32
B        3      4      8      46     14      20     20
C        4      4      8      16     10      13     15
D        15     14     16     21     23      20     30
E^c      0.15   0.25   0.5    2      1       1^a    2
F^c      0.9    0.9    5      20     0.8^a   1.5    2.7
H^b      0.22   0.3    1.02   7.2    1.47    1.58   3.34
I^b      0.09   0.15   0.62   3.88   0.96    0.98   2.12
ACO^b    0.11   0.17   0.74   4.10   0.97    1.02   2.37

a The global minimum was not found in one of four runs.
b The average number of function evaluations of four runs.
c The number of function evaluations of the initial sampling are not included.
Table 5
Number of objective function evaluations of the second experiment

Starting point        Simplex   ARS          SA          TS       VNS      ACO
2 dimensions
1001, 1001            993       3411         500,001     1954     1078     1246
1001, 999             276       131,841      508,001     1762     862      978
999, 999              730       15,141       524,001     2483     958      1024
999, 1001             907       3802         484,001     2671     1154     1487
1443, 1               907       181,280      492,001     1616     1216     1402
1, 1443               924       2629         512,001     4121     1984     2085
1.2, 1                161       6630         488,001     1195     508      448
4 dimensions
101, 101, 101, 101    1869      519,632      1,288,001   16,528   14,324   14,570
101, 101, 101, 99     784       194,720      1,328,001   27,940   21,555   22,057
101, 101, 99, 99      973       183,608      1,264,001   13,995   11,924   12,055
101, 99, 99, 99       1079      195,902      1,296,001   11,443   9208     9412
99, 99, 99, 99        859       190,737      1,304,001   14,007   10,219   11,995
99, 101, 99, 101      967       4,172,290    1,280,001   16,572   11,652   12,195
101, 99, 101, 99      870       53,878       1,272,001   11,471   8052     7476
201, 0, 0, 0          1419      209,415      1,288,001   25,413   17,261   17,616
1, 201, 1, 1          1077      215,116      1,304,001   8884     5872     6024
1, 1, 1, 201          1265      29,069,006   1,272,001   34,083   25,526   26,712
Table 6
Final functional value of the second experiment

Starting point        Simplex    ARS       SA        TS        VNS       ACO
2 dimensions
1001, 1001            4.9E-10    1586.4    1.8E-10   7.1E-11   8.4E-16   2.3E-15
1001, 999             7.4E-10    8.6E-9    2.6E-9    1.5E-9    7.9E-16   4.5E-15
999, 999              2.7E-10    1.2E-8    1.2E-9    4.2E-9    3.1E-15   8.2E-15
999, 1001             9.2E-10    583.2     4.2E-8    6.1E-9    3.7E-13   3.7E-14
1443, 1               5.4E-11    4.7E-10   1.5E-8    7.2E-9    6.8E-16   1.6E-15
1, 1443               2.2E-10    1468.9    1.6E-9    3.4E-9    7.2E-16   7.2E-15
1.2, 1                2.4E-10    5.5E-7    2.0E-8    2.6E-10   2.2E-15   1.2E-15
4 dimensions
101, 101, 101, 101    3.70       1.9E-6    5.0E-7    5.1E-8    0.4E-11   3.3E-11
101, 101, 101, 99     5.46E-17   1.7E-6    1.8E-7    1.6E-7    5.9E-11   5.8E-11
101, 101, 99, 99      9.8E-18    3.8E-6    5.9E-7    8.3E-8    4.1E-12   4.1E-11
101, 99, 99, 99       3.4E-17    2.3E-6    7.4E-8    1.4E-7    1.4E-11   9.2E-12
99, 99, 99, 99        8.3E-18    2.7E-6    3.3E-7    6.3E-7    9.3E-12   7.3E-11
99, 101, 99, 101      1.2E-17    2.6E-6    2.8E-7    9.5E-7    6.4E-12   4.2E-11
101, 99, 101, 99      6.0E-18    3.7       2.3E-7    4.5E-7    1.7E-13   4.7E-12
201, 0, 0, 0          3.70       1.1E-6    7.5E-7    5.7E-7    8.7E-11   3.96E-11
1, 201, 1, 1          9.4E-18    1.2E-6    4.6E-7    7.2E-7    3.8E-11   6.2E-11
1, 1, 1, 201          3.9E-17    2.2E-6    5.2E-7    6.1E-7    0.8E-11   5.5E-11
4.2. Results and discussion
The results of ACO are listed in Tables 3 and 4 and are compared with the results of the other methods of Table 1. Table 3 gives the number of function evaluations for each method, and Table 4 shows the computation times in units of standard time. Table 4 shows no results for method G, since its running time is available only in absolute computer time.
Table A.1
H3 (n = 3 and q = 4)

i   a_i1   a_i2   a_i3   c_i   p_i1      p_i2     p_i3
1   3      10     30     1     0.3689    0.1170   0.2673
2   0.1    10     35     1.2   0.4699    0.4387   0.7470
3   3      10     30     3     0.1091    0.8732   0.5547
4   0.1    10     35     3.2   0.03815   0.5743   0.8828
Table A.2
H6 (n = 6 and q = 4)

i   a_i1   a_i2   a_i3   a_i4   a_i5   a_i6   c_i   p_i1     p_i2     p_i3     p_i4     p_i5     p_i6
1   10     3      17     3.5    1.7    8      1     0.1312   0.1696   0.5569   0.0124   0.8283   0.5886
2   0.05   10     17     0.1    8      14     1.2   0.2329   0.4135   0.8307   0.3736   0.1004   0.9991
3   3      3.5    1.7    10     17     8      3     0.2348   0.1451   0.3522   0.2883   0.3047   0.6550
4   17     8      0.05   10     0.1    14     3.2   0.4047   0.8828   0.8732   0.5743   0.1091   0.0381
Tables 3 and 4 indicate that the proposed ACO-based algorithm performs noticeably well. The proposed algorithm found exactly the global minimum in all the tested problems, while methods E and F failed on problems S7 and S5, respectively. The results show that the proposed algorithm is very efficient, robust and reliable. Note that the first and second derivatives of each of the tested functions can be found easily; although other methods use these tools, the proposed algorithm does not utilize the derivatives of the function. For the second experiment, Tables 5 and 6 compare the proposed algorithm with the algorithms shown in Table 2 on the Rosenbrock function in 2 and 4 dimensions, with the number of function evaluations used to measure efficiency. Tables 3–6 show that the best values obtained by the proposed ACO-based algorithm are the best found values for the tested problems.
Table A.3
Sh (n = 4 and q = 5, 7, 10)

i    a_i1   a_i2   a_i3   a_i4   c_i
1    4      4      4      4      0.1
2    1      1      1      1      0.2
3    8      8      8      8      0.2
4    6      6      6      6      0.4
5    3      7      3      7      0.4
6    2      9      2      9      0.6
7    5      5      3      3      0.3
8    8      1      8      1      0.7
9    6      2      6      2      0.5
10   7      3.6    7      3.6    0.5
5. Conclusions

In this paper, a powerful and robust ant colony-based algorithm for global optimization is proposed. The performance of the proposed ACO-based algorithm is examined in two experiments. Compared with the previous methods used for global optimization, the results show that the proposed algorithm performs noticeably well. The proposed ACO-based algorithm found exactly the global minimum of the tested problems, so the best values obtained by the proposed ACO-based algorithm are the best found values for these problems.
Appendix A

The following test functions are taken from Al-Sultan and Al-Fawzan (1997).

GP (Goldstein and Price: n = 2)

$f(x_1, x_2) = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)]$,

$S = \{x \in R^2 \mid -2 \le x_i \le 2,\ i = 1, 2\}$; $x_{min} = (0, -1)$, $f(x_{min}) = 3$. There are 4 local minima in the minimization region.

BR (Branin: n = 2)

$f(x_1, x_2) = a(x_2 - bx_1^2 + cx_1 - d)^2 + e(1 - f)\cos(x_1) + e$, where $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $d = 6$, $e = 10$, $f = 1/(8\pi)$;

$S = \{x \in R^2 \mid -5 \le x_1 \le 10 \text{ and } 0 \le x_2 \le 15\}$; $x_{min} = (-\pi, 12.275)$, $(\pi, 2.275)$, $(3\pi, 2.475)$; $f(x_{min}) = 5/(4\pi)$.

Hn (Hartmann functions: n = 3, 6; q = 4; see Tables A.1 and A.2)

$f(x) = -\sum_{i=1}^{q} c_i \exp\Big(-\sum_{j=1}^{n} a_{ij}(x_j - p_{ij})^2\Big)$,

$S = \{x \in R^n \mid 0 \le x_j \le 1,\ 1 \le j \le n\}$;

for n = 3, $x_{min} = (0.114, 0.556, 0.882)$, $f(x_{min}) = -3.86$;

for n = 6, $x_{min} = (0.201, 0.150, 0.477, 0.275, 0.311, 0.657)$, $f(x_{min}) = -3.32$.

Sh (Shekel function: n = 4; q = 5, 7, 10; see Table A.3)

$f(x) = -\sum_{i=1}^{q} \big((x - a_i)^T (x - a_i) + c_i\big)^{-1}$,

$S = \{x \in R^4 \mid 0 \le x_i \le 10,\ 1 \le i \le 4\}$; $x_{min} = (4, 4, 4, 4)$; for q = 5, $f(x_{min}) = -10.15$; for q = 7, $f(x_{min}) = -10.40$; for q = 10, $f(x_{min}) = -10.54$.

Appendix B

The following Rosenbrock test function in 2 and 4 dimensions is taken from Al-Sultan and Al-Fawzan (1997):

$f(x_1, x_2) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$; $x_{min} = (1, 1)$, $f(x_{min}) = 0$;

$f(x_1, x_2, x_3, x_4) = \sum_{i=1}^{3} \big[100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\big]$; $x_{min} = (1, 1, 1, 1)$, $f(x_{min}) = 0$.
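For readers who want to reproduce the experiments, two of the appendix functions are transcribed into Python below. This is a direct transcription of the formulas above (the function names are ours); the Shekel parameters `A` and `C` are the rows a_i and weights c_i of Table A.3.

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function in any dimension (Appendix B); minimum 0 at (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

# Shekel parameters from Table A.3 (rows a_i and weights c_i)
A = np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6],
              [3, 7, 3, 7], [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1],
              [6, 2, 6, 2], [7, 3.6, 7, 3.6]])
C = np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])

def shekel(x, q=5):
    """Shekel function S_q (Appendix A); minimum near (4, 4, 4, 4)."""
    x = np.asarray(x, dtype=float)
    d = x - A[:q]
    return float(-np.sum(1.0 / (np.sum(d * d, axis=1) + C[:q])))
```

For example, `shekel((4, 4, 4, 4), q=5)` evaluates to approximately -10.15, matching the tabulated $f(x_{min})$ above.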
References

Al-Sultan, K. S., & Al-Fawzan, M. A. (1997). A tabu search Hooke and Jeeves algorithm for unconstrained optimization. European Journal of Operational Research, 103, 198–208.
Aluffi-Pentini, F., Parisi, V., & Zirilli, F. (1985). Global optimization and stochastic differential equations. Journal of Optimization Theory and Applications, 47, 1–16.
Corana, A., Marchesi, M., Martini, C., & Ridella, S. (1987). Minimizing multimodal functions of continuous variables with the simulated annealing algorithm. ACM Transactions on Mathematical Software, 13(3), 262–280.
De Biase, L., & Frontini, F. (1978). A stochastic method for global optimization: Its structure and numerical performance. In L. C. W. Dixon & G. P. Szegö (Eds.), Towards global optimization 2 (pp. 85–102). Amsterdam: North-Holland.
Dekkers, A., & Aarts, E. (1991). Global optimization and simulated annealing. Mathematical Programming, 50, 367–393.
Dixon, L. C. W., & Szegö, G. P. (1978). The global optimization problem: An introduction. In L. C. W. Dixon & G. P. Szegö (Eds.), Towards global optimization 2 (pp. 1–15). Amsterdam: North-Holland.
Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph.D. thesis, Politecnico di Milano, Italy.
Dorigo, M., & Di Caro, G. (1999). Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 congress on evolutionary computation (Vol. 2, pp. 1470–1477).
Dorigo, M., & Gambardella, L. M. (1997). Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1, 53–66.
Ji, M., & Tang, H. (2004). Global optimizations and tabu search based on memory. Applied Mathematics and Computation, 159, 449–457.
Masri, S. F., Bekey, G. A., & Safford, F. B. (1980). A global optimization algorithm using adaptive random search. Applied Mathematics and Computation, 7, 353–375.
Nelder, J. A., & Mead, R. (1964). A simplex method for function minimization. Computer Journal, 7, 308–313.
Pham, D. T., & Yang, Y. (1993). Optimization of multi-modal discrete functions using genetic algorithms. Proceedings of the Institution of Mechanical Engineers, 207, 53–59.
Price, W. L. (1978). A controlled random search procedure for global optimization. In L. C. W. Dixon & G. P. Szegö (Eds.), Towards global optimization 2 (pp. 71–84). Amsterdam: North-Holland.
Rajendran, C., & Ziegler, H. (2004). Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155, 426–438.
Rinnooy Kan, A. H. G., & Timmer, G. T. (1984). Stochastic methods for global optimization. American Journal of Mathematical and Management Sciences, 4, 7–40.
Rinnooy Kan, A. H. G., & Timmer, G. T. (1987). Stochastic global optimization methods part II: Multi level methods. Mathematical Programming, 39, 57–78.
Solimanpur, M., Vrat, P., & Shankar, R. (2004). Ant colony optimization algorithm to the inter-cell layout problem in cellular manufacturing. European Journal of Operational Research, 157, 592–606.
Talbi, E. G., Roux, O., Fonlupt, C., & Robillard, D. (2001). Parallel ant colonies for the quadratic assignment problem. Future Generation Computer Systems, 17, 441–449.
T'kindt, V., Monmarché, N., Tercinet, F., & Laügt, D. (2002). An ant colony optimization algorithm to solve a 2-machine bicriteria flowshop scheduling problem. European Journal of Operational Research, 142, 250–257.
Toksari, M. D. (2006). Ant colony optimization for finding the global minimum. Applied Mathematics and Computation, 176, 308–316.
Toksarı, M. D., & Güner, E. (2007). Solving the unconstrained optimization problem by a variable neighborhood search. Journal of Mathematical Analysis and Applications, 328, 1178–1187.
Törn, A. (1978). A search-clustering approach to global optimization. In L. C. W. Dixon & G. P. Szegö (Eds.), Towards global optimization 2 (pp. 49–62). Amsterdam: North-Holland.