Applied Mathematics and Computation 228 (2014) 589–597
Real-coded genetic algorithm with uniform random local search

B.A. Sawyerr (a,b), A.O. Adewumi (b), M.M. Ali (c)

(a) Department of Computer Sciences, University of Lagos, Akoka, Yaba, Lagos, Nigeria
(b) School of Mathematics, Statistics and Computer Sciences, University of KwaZulu-Natal, Durban, KwaZulu-Natal, South Africa
(c) School of Computational and Applied Mathematics, Faculty of Science, and TCSE, Faculty of Engineering and Built Environment, University of the Witwatersrand, Johannesburg, South Africa
Keywords: Global optimization; Real coded genetic algorithms; Uniform random local search
Abstract

Genetic algorithms are efficient global optimizers, but they are weak in performing fine-grained local searches. In this paper, the local search capability of the genetic algorithm is improved by hybridizing a real coded genetic algorithm with a 'uniform random' local search to form a hybrid real coded genetic algorithm termed 'RCGAu'. The incorporated local search technique is applied to all newly created offspring so that each offspring solution is given the opportunity to effectively search its local neighborhood for the best local optimum. Numerical experiments show that the performance of RCGA is remarkably improved by the uniform random local search technique.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction

In this paper the global optimization problem is considered as:
    minimize f(x),  x ∈ Ω,        (1)

where x is a continuous variable vector with domain Ω ⊂ R^n defined by the bound constraints l^j ≤ x^j ≤ u^j, j = 1, 2, ..., n. The function f(x): Ω → R is a continuous real-valued function. A point x* ∈ Ω is called the global minimizer of f if f(x*) ≤ f(x) for all x ∈ Ω.
Many real life problems from the fields of engineering, applied science, and management science are formulated as nonlinear global optimization problems with both local and global optima. These problems are very difficult to solve using classical optimization techniques because of their non-differentiable, non-smooth, discontinuous, ill-conditioned and noisy characteristics [2]. However, several stochastic optimization algorithms have been developed over the last four decades to tackle these problems [1,2,12]. This class of optimization algorithms does not require any properties of the function f(x). Examples include simulated annealing [14], pattern search [11], tabu search [12], evolutionary programming (EP) [13], evolution strategies (ESs) [24], genetic algorithms (GAs) [3,18,20,23], differential evolution (DE) [28], etc.
In 1975, John Holland introduced genetic algorithms (GAs) in his monograph [18]. He was inspired by the work of Charles Darwin on the theory of natural selection and natural genetics. The first work to implement and test the performance of GAs on function benchmarks was carried out by Kenneth De Jong in his PhD thesis [10]. Subsequently, several variants of GAs have been developed and applied to solving several global optimization problems [3,12,15,20,23].
A typical GA consists of a set of chromosomes (potential solutions), a selection scheme, a crossover operator, a mutation operator and an objective function. The initial implementations of GAs used binary string representations of the
chromosomes; such GAs are also known as binary-coded genetic algorithms (BCGAs). BCGAs are robust search algorithms, but they can be computationally expensive, especially when the problem space is large [16]. Michalewicz [23] showed that real coded genetic algorithms (RCGAs) outperform BCGAs in the continuous parameter optimization problem domain because they are more consistent, more precise and faster in execution. Recent research work on them can be found in [2,3,7–9,19,20,23,26].
Due to the problem of premature convergence in GAs, Hart [17] suggested a hybridization of GAs with local search methods with the aim of achieving an effective optimization method with both global and local search capabilities. Subsequently, several researchers have hybridized RCGAs with local optimization techniques with the aim of improving GAs [4–6,26,27,29]. In this study, a new hybrid real-coded genetic algorithm that incorporates a 'uniform random' local search technique is proposed.
The remainder of the paper is organized as follows: Section 2 provides a brief description of RCGA. Section 3 gives a full description of the proposed hybrid RCGA. In Section 4 the experimental settings and test problems are presented, while Section 5 presents the numerical results, comparisons and discussion. Finally, in Section 6, the conclusions are given.

2. A brief description of real coded genetic algorithms

Genetic algorithms are search algorithms that mimic natural selection and natural genetics. They work by applying the survival-of-the-fittest strategy on string structures with ordered yet randomized information exchange to form robust global optimizers. GAs maintain a set of potential solutions P_t = {x_1,t, x_2,t, ..., x_N,t} at every time step t, called a generation. Initially, P_{t=0} is generated randomly between the upper and the lower boundaries of the solution space. Subsequently, P_t is created from the genetic encodings of the fittest parents via crossover operators, and occasionally new points in the solution space are sampled using mutation operators. While randomized, GAs resourcefully exploit historical information to sample new search points with expected superior performance [15].
The GA used in this work is the steady state RCGA. At each generation t, the RCGA performs selection, crossover and mutation to update the current population of solutions P_t. After the initial population has been created randomly, tournament selection is used to select m individuals from P_t to make up the mating pool P̂_t = {x_1,t, x_2,t, ..., x_m,t}, where m ≤ N [15,20]. The individuals in the mating pool, known as parents, are paired, and a crossover operator is applied to them to produce two offspring. Arithmetic crossover is applied with probability p_c to all paired members of P̂_t as follows:
    c_{i,t}^j = α_j x_{i,t}^j + (1 - α_j) x_{k,t}^j,
    c_{k,t}^j = α_j x_{k,t}^j + (1 - α_j) x_{i,t}^j,        (2)

where x_{i,t} and x_{k,t} denote a paired set of parents and α_j ~ Unif([-0.5, 1.5]) for each j, j = 1, 2, ..., n. The new pair (c_{i,t}, c_{k,t}) of offspring is copied to the set H_t. If, on the other hand, the crossover probability test is unsuccessful, the pair (x_{i,t}, x_{k,t}) is copied to H_t. Mutation is applied to the components of each member of H_t with probability p_μ. If the probability of mutation is successful at a component c_{i,t}^j of c_{i,t} ∈ H_t, then random mutation [23] is carried out as follows:

    y_{i,t}^j = c_{i,t}^j + β_j (u^j - l^j),        (3)

where β_j ~ Unif([-0.01, 0.01]) for each j, j = 1, 2, ..., n, and u^j and l^j are the upper and lower boundaries of x^j ∈ Ω, respectively. On the other hand, if the mutation probability test is unsuccessful at component c_{i,t}^j of c_{i,t} ∈ H_t, then c_{i,t}^j is retained. The resulting y_{i,t} ∈ M_t consists of both mutated and non-mutated components. We denote M_t by

    M_t = {y_1,t, y_2,t, ..., y_m,t},        (4)

where

    y_{i,t}^j = { c_{i,t}^j + β_j (u^j - x_{i,t}^j),   if c_{i,t}^j is mutated,
                { c_{i,t}^j,                           otherwise.        (5)
If m < N, then the m points in M_t replace the m worst points in P_t to create P_{t+1}. Fig. 1 shows the flow chart of a real coded genetic algorithm.
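As an illustration of the two operators just described, the following minimal Python/NumPy sketch applies the arithmetic crossover of Eq. (2) and the random mutation of Eqs. (3) and (5) to a pair of parent vectors. It is not the authors' C#.NET implementation; the function names, the use of NumPy and the vectorized form are assumptions, while the sampling ranges for α_j and β_j follow the text above.

import numpy as np

rng = np.random.default_rng()

def arithmetic_crossover(x_i, x_k):
    # Eq. (2): component-wise blend with an independent alpha_j ~ Unif([-0.5, 1.5]) per coordinate
    alpha = rng.uniform(-0.5, 1.5, size=x_i.shape)
    c_i = alpha * x_i + (1.0 - alpha) * x_k
    c_k = alpha * x_k + (1.0 - alpha) * x_i
    return c_i, c_k

def random_mutation(c, lower, upper, p_mu=0.001):
    # Eq. (3): a component selected for mutation (probability p_mu) is perturbed by
    # beta_j * (u^j - l^j) with beta_j ~ Unif([-0.01, 0.01]); per Eq. (5) the
    # non-mutated components are simply retained.
    beta = rng.uniform(-0.01, 0.01, size=c.shape)
    mutated = rng.random(c.shape) < p_mu
    return np.where(mutated, c + beta * (upper - lower), c)

# Example usage: two parents drawn from the box [-10, 10]^5 (illustrative bounds)
lower, upper = np.full(5, -10.0), np.full(5, 10.0)
x_i, x_k = rng.uniform(lower, upper), rng.uniform(lower, upper)
c_i, c_k = arithmetic_crossover(x_i, x_k)
y_i, y_k = random_mutation(c_i, lower, upper), random_mutation(c_k, lower, upper)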
3. The proposed real coded genetic algorithm with uniform random local search (RCGAu)

The proposed hybrid RCGA (RCGAu) incorporates a simple derivative-free local search technique called 'uniform random' local search into the RCGA algorithm after the mutation operation. The uniform random local search starts by randomly selecting a solution y_{i,t} ∈ M_t and creating a trial point x_{i,t} using

    x_{i,t} = y_{i,t} + Δ_t U,        (6)

where Δ_t is a step size parameter and U = (U_1, U_2, ..., U_n)^T is a vector of directional cosines with random components

    U_j = R_j / (R_1^2 + ... + R_n^2)^{1/2},   j = 1, 2, ..., n,        (7)
Fig. 1. Flowchart of a real coded genetic algorithm.
Here R_j ~ Unif([-1, 1]). There are cases when components of the trial point x_{i,t} = (x_{i,t}^1, x_{i,t}^2, ..., x_{i,t}^n) generated by Eq. (6) fall outside the search space Ω during the search. In these cases, the offending components of x_{i,t} are regenerated using

    x_{i,t}^j = { y_{i,t}^j + λ (u^j - y_{i,t}^j),   if x_{i,t}^j > u^j,
                { y_{i,t}^j + λ (y_{i,t}^j - l^j),   if x_{i,t}^j < l^j,        (8)
where λ ~ Unif([0, 1]) and y_{i,t}^j is the corresponding component of the randomly selected solution y_{i,t} ∈ M_t. The step size parameter Δ_t is initialized at time t = 0 according to [25,26] by

    Δ_0 = s · max{u^j - l^j | j = 1, 2, ..., n},        (9)

where s ∈ [0, 1]. The idea of using Eq. (9) to generate the initial step length is to accelerate the search by starting with a suitably large step size that quickly traverses the search space; as the search progresses, the step size is adaptively adjusted at the end of each generation t by

    Δ_{t+1} = (1/K) Σ_{i=1}^{K} c_i,        (10)

where c_1, c_2, ..., c_K are the K Euclidean distances between the mean x̄ of a set of randomly selected distinct points X = {x_1, x_2, ..., x_q} ⊂ P_t and the K points of X nearest to x̄. A step by step description of the uniform random local search is presented in Algorithm 1.
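Before turning to Algorithm 1, the sketch below illustrates how the step-size rule of Eqs. (9) and (10) could be realized in Python. The paper refers to [25,26] for the details of this rule, so the sample size q, the neighbour count K and the value of s used here are assumptions for illustration only.

import numpy as np

def initial_step_size(lower, upper, s=0.5):
    # Eq. (9): Delta_0 = s * max_j (u^j - l^j), with s in [0, 1] (s = 0.5 is an assumed value)
    return s * np.max(np.asarray(upper) - np.asarray(lower))

def adapt_step_size(population, q=10, K=5, rng=None):
    # Eq. (10): Delta_{t+1} is the mean of the Euclidean distances c_1, ..., c_K between
    # the mean x_bar of q randomly selected distinct points of P_t and the K points of
    # that sample closest to x_bar.  q and K are illustrative choices.
    rng = rng if rng is not None else np.random.default_rng()
    population = np.asarray(population)
    idx = rng.choice(len(population), size=min(q, len(population)), replace=False)
    sample = population[idx]                      # X = {x_1, ..., x_q}, a subset of P_t
    x_bar = sample.mean(axis=0)                   # mean of the sampled points
    dists = np.linalg.norm(sample - x_bar, axis=1)
    return np.sort(dists)[:K].mean()              # average of the K smallest distances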
Algorithm 1: The uniform random local search operator

1. Randomly select a solution y_{i,t} ∈ M_t, where M_t is the set of offspring after mutation.
2. Uniformly generate a vector of directional cosines with random components using Eq. (7).
3. Calculate a trial point x_{i,t} within the lower and upper boundaries of the objective function using Eq. (6).
4. Evaluate the new trial solution x_{i,t}. If f(x_{i,t}) < f(y_{i,t}), then replace y_{i,t} ∈ M_t with x_{i,t}; else change the direction of the step length and use the new step length to recalculate a new trial point.
5. Evaluate the new trial point and replace y_{i,t} ∈ M_t with x_{i,t} if f(x_{i,t}) < f(y_{i,t}); otherwise retain y_{i,t} ∈ M_t.
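The following Python sketch puts Algorithm 1 together with Eqs. (6)–(8). It assumes f is the objective function to be minimized and implements the boundary repair as written in Eq. (8); the function names and the use of a single λ draw per repair are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng()

def random_direction(n):
    # Eq. (7): directional cosines U_j = R_j / sqrt(R_1^2 + ... + R_n^2), R_j ~ Unif([-1, 1])
    r = rng.uniform(-1.0, 1.0, size=n)
    return r / np.linalg.norm(r)

def repair(x, y, lower, upper):
    # Eq. (8): regenerate components of the trial point that leave the search space
    lam = rng.random()                            # lambda ~ Unif([0, 1]) (single draw assumed)
    x = np.where(x > upper, y + lam * (upper - y), x)
    x = np.where(x < lower, y + lam * (y - lower), x)
    return x

def uniform_random_local_search(y, f, delta, lower, upper):
    # Algorithm 1 applied to one offspring y in M_t with the current step size Delta_t
    f_y = f(y)
    u = random_direction(y.size)
    x = repair(y + delta * u, y, lower, upper)    # steps 2-3: trial point via Eq. (6)
    if f(x) < f_y:                                # step 4: accept an improving trial point
        return x
    x = repair(y - delta * u, y, lower, upper)    # step 4: otherwise reverse the direction
    return x if f(x) < f_y else y                 # step 5: keep the better of x and y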
RCGAu (Fig. 2) starts by creating an initial population of solutions P_0, which is used to create a mating pool P̂_t. After the mating pool is created, the parent solutions in the mating pool are paired and arithmetic crossover is applied to them with probability p_c to create a set of offspring H_t using Eq. (2). Mutation is applied to the components of each c_{i,t} ∈ H_t with probability p_μ using Eq. (3) to create M_t. The uniform random local search is then applied to each solution y_{i,t} ∈ M_t; the idea behind this operation is to search the neighborhood of each solution for better solution points.
Fig. 2. Flowchart of RCGAu algorithm.
After the local search is applied to M_t, the fitness values f(x_{i,t}) of the points x_{i,t} ∈ ω_t = {x_1,t, x_2,t, ..., x_m,t} are determined and a new population P_{t+1} is created by updating P_t (i.e. replacing the m worst solutions in P_t with ω_t). Fig. 2 illustrates the workings of RCGAu.

4. Experimental settings

A set of 30 test problems listed in Appendix A was taken from [1] and used to benchmark the performances of RCGAu and RCGA. These problems are minimization problems of continuous variables with dimensions ranging from 2 to 40 and different degrees of difficulty. For a detailed description of the test problems see [1]. The 30 test problems are divided into two groups: scalable and non-scalable problems. There are 6 scalable problems and 24 non-scalable problems. The scalable problems were used with dimensions 5, 10, 20, 30 and 40.
The two algorithms used in this study were implemented in the Microsoft Visual Studio 2010 integrated development environment using the C#.NET programming language on the Windows 7 operating system, running on an AMD Turion II Ultra Dual-Core CPU at 2.5 GHz with 4 GB of RAM.

4.1. Parameter selection

Parameter tuning in RCGA is a difficult task to accomplish. To achieve this objective, an extensive empirical experiment was carried out to determine a suitable configuration. The final parameter values used are listed in Table 1. The same configuration was used for the two RCGAs. All the values shown in Table 1 are standard parameter settings that have also been successfully used in the literature [3,10,19].

5. Experimental results and discussion

In this section the results of RCGA and RCGAu on the 30 test problems are compared and discussed. The performance of the two RCGAs is analysed pairwise using the average function error value (|f(x) - f(x_min)| of successful runs), the standard deviation over the total runs (STD), the mean function evaluations of successful runs (MFE), the number of successful runs (SR) and the success performance (SP). For a detailed description of these criteria, see [21,25,26]. If an algorithm fails to solve a problem at least once out of 25 experimental runs, i.e., its SR = 0, then its MFE and the MFE of the competing algorithm(s) are denoted by '-'. The results from the experiments on the 6 scalable problems are analysed separately from the experimental results obtained on the other 24 test problems.

5.1. Comparisons of RCGAu with RCGA on 24 test problems

The experimental results of RCGAu and RCGA on the 24 non-scalable test problems are summarized in Table 2. Table 2 shows that RCGAu obtained a better approximation of the solutions than RCGA in 21 problems. The average function error describes the difference between the average approximate solution found by an algorithm over a number of runs and the global minimum. This also shows that RCGAu achieved better target precision (i.e. better quality of the solution found) than RCGA in 21 out of 24 test problems. In 17 test problems, the standard deviations of RCGAu are also lower than the standard deviations obtained by RCGA.
In terms of MFE, RCGA outperformed RCGAu because of the extra function evaluation calls in the local search component of RCGAu. It should be expected that RCGAu would be more computationally expensive than RCGA in terms of function evaluation calls if the optimal solution is not found at the beginning of the genetic run. In spite of a higher number of MFE, RCGAu shows some superiority over RCGA in terms of the number of times it succeeded in finding the optimal solution. RCGAu recorded a success rate of 93.5% as against 82.5% for RCGA.
This clearly shows that the incorporation of a local search method in RCGA is necessary. Table 2 also shows that RCGAu performs better than RCGA as the problem dimension increases. This indicates that RCGAu is more robust and will be able to handle more complex problems better than RCGA.
Table 3 presents the normalized success performance (SP) of the two RCGAs on the 24 non-scalable test problems. The SP of an algorithm is the expected running time of that algorithm in finding the optimal solution for a given problem.

Table 1
Parameter settings for the experiment.

Sno.   Parameter                              Value
1      Population size (N)                    100
2      Maximum number of generations (T)      1000
3      Mutation probability (p_μ)             0.001
4      Crossover probability (p_c)            0.6
5      Mating pool size (m)                   0.1 N
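The parameter values in Table 1 can be read together with the operators described in Sections 2 and 3. The sketch below shows, under stated assumptions, how one steady-state RCGAu generation might wire them together; it reuses the illustrative crossover, mutation and local search functions sketched earlier (binary tournaments and the replacement step are also assumptions) and is not the authors' C#.NET implementation.

import numpy as np

def rcgau_generation(P, f, delta, lower, upper, p_c=0.6, p_mu=0.001, m=10, rng=None):
    # One steady-state generation with the Table 1 settings (N = 100, m = 0.1 N).
    # P is an (N, n) NumPy array; arithmetic_crossover, random_mutation and
    # uniform_random_local_search are the illustrative functions defined above.
    rng = rng if rng is not None else np.random.default_rng()
    fitness = np.array([f(x) for x in P])
    # Tournament selection of m parents (binary tournaments assumed).
    pool = []
    for _ in range(m):
        a, b = rng.integers(len(P), size=2)
        pool.append(P[a] if fitness[a] < fitness[b] else P[b])
    # Pair the parents; crossover with probability p_c, then mutation and local search.
    offspring = []
    for x_i, x_k in zip(pool[0::2], pool[1::2]):
        c_i, c_k = arithmetic_crossover(x_i, x_k) if rng.random() < p_c else (x_i, x_k)
        for c in (c_i, c_k):
            y = random_mutation(c, lower, upper, p_mu)
            offspring.append(uniform_random_local_search(y, f, delta, lower, upper))
    # Replace the worst members of P with the improved offspring.
    worst = np.argsort(fitness)[-len(offspring):]
    P[worst] = np.array(offspring)
    return P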
Table 2
The average function error |f(x) - f(x_min)|, standard deviation (STD), mean function evaluations of successful runs (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu. (In the journal's typeset version, the better average function error for each problem is shown in bold.)

Pno.  n   RCGA avg. err  RCGAu avg. err  RCGA STD   RCGAu STD  RCGA MFE  RCGAu MFE  RCGA SR       RCGAu SR
1     2   4.12E-05       5.18E-05        5.47E-05   4.50E-05   746       871        25            25
2     2   5.12E-05       4.38E-05        3.42E-05   2.97E-05   1002      2053       25            25
3     2   4.34E-05       5.04E-05        2.65E-05   3.17E-05   1332      1594       25            25
4     2   3.54E-04       4.40E-05        4.23E-02   2.97E-05   2097      1662       24            25
5     2   4.45E-04       9.21E-05        5.58E-04   2.66E-05   1904      1736       25            25
6     2   2.67E-04       2.71E-04        2.58E-05   2.88E-05   592       801        25            25
7     2   1.86E-09       1.50E-09        3.99E-04   4.88E-04   1846      3412       25            25
8     2   1.68E-05       1.52E-05        3.06E-05   3.30E-05   1133      1310       25            25
9     2   2.53E-05       1.69E-05        3.02E-05   3.36E-05   566       688        25            25
10    2   6.43E-04       3.33E-06        3.89E-02   2.66E-02   2178      2460       10            20
11    2   9.51E-04       5.10E-05        3.85E-02   2.57E-05   3468      3568       19            25
12    2   3.57E-07       1.18E-07        4.91E-01   1.99E-01   3002      5244       22            16
13    3   1.31E-03       1.05E-03        3.94E-02   9.56E-04   9384      25,068     24            25
14    3   1.66E-05       1.62E-05        2.14E-05   2.37E-05   1136      1917       25            25
15    3   6.40E-05       5.50E-05        1.97E-02   2.32E-05   2011      2845       24            25
16    4   1.80E-04       1.66E-04        2.37E-05   2.26E-05   1592      2115       25            25
17    4   2.35E-01       2.01E-01        2.02E-05   2.36E-05   336       488        25            25
18    4   1.91E-03       6.57E-05        4.03E+00   2.81E-05   9157      6031       9             25
19    4   7.16E-06       6.88E-06        3.64E+00   3.12E+00   3646      4106       10            10
20    4   7.29E-06       5.02E-06        3.64E+00   1.52E+00   3542      3608       15            23
21    4   3.32E-05       5.30E-06        3.85E+00   2.72E+00   4556      3584       13            20
22    6   2.26E-05       2.11E-05        5.48E-02   3.95E-02   3216      3549       19            22
28    10  7.43E-05       2.26E-05        4.73E-04   1.39E-05   7788      6807       25            25
30    20  6.78E-04       2.57E-05        1.88E-01   1.16E-05   9970      14,835     6             25
Total                                                          76,200    100,352    495 (82.5%)   561 (93.5%)
Table 3
Normalized success performance of RCGA and RCGAu.

Pno.   SPbest    Normalized SP (RCGA)   Normalized SP (RCGAu)
1      746       1.0000                 1.1676
2      1002      1.0000                 2.0489
3      1332      1.0000                 1.1967
4      1662      1.3143                 1.0000
5      1736      1.0968                 1.0000
6      592       1.0000                 1.3530
7      1846      1.0000                 1.8483
8      1133      1.0000                 1.1562
9      566       1.0000                 1.2155
10     3075      1.7707                 1.0000
11     3568      1.2789                 1.0000
12     3411      1.0000                 2.4019
13     9775      1.0000                 2.5645
14     1136      1.0000                 1.6875
15     2095      1.0000                 1.3581
16     1592      1.0000                 1.3285
17     336       1.0000                 1.4524
18     6031      4.2176                 1.0000
19     9115      1.0000                 1.1262
20     3922      1.5053                 1.0000
21     4480      1.9557                 1.0000
22     4033      1.0493                 1.0000
28     6807      1.1441                 1.0000
30     14,835    2.8002                 1.0000
Total            32.1329                31.9054
The SP estimates the expected number of function evaluations for the algorithm to successfully find an optimal solution. Normalized SP is used to rank algorithms by awarding the first position to the algorithm with the lowest normalized SP. Table 3 shows that RCGAu has a lower total normalized SP than RCGA.
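The excerpt does not restate the SP formula (it cites [21,25,26]); assuming the commonly used definition SP = MFE × (total runs / successful runs), the short sketch below reproduces the Table 3 entries for problem 4, where RCGA needs 2097 function evaluations on average with 24 of 25 successful runs and RCGAu needs 1662 with 25 of 25.

def success_performance(mfe, successful_runs, total_runs=25):
    # Assumed SP definition (see [21]): mean evaluations of successful runs,
    # scaled up to account for the runs that failed.
    return mfe * total_runs / successful_runs

sp_rcga = success_performance(2097, 24)        # ~2184.4
sp_rcgau = success_performance(1662, 25)       # 1662.0
sp_best = min(sp_rcga, sp_rcgau)
print(sp_rcga / sp_best, sp_rcgau / sp_best)   # ~1.3143 and 1.0000, as in Table 3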
5.2. Comparisons of RCGAu with RCGA on scalable problems

This section compares RCGA and RCGAu based on their performances on the 6 scalable problems taken from the 30 selected problems. The 6 scalable problems are the Ackley, Griewank, Rastrigin, Rosenbrock, Schwefel and Spherical functions. Experiments were carried out to benchmark the two algorithms on different instances of the 6 problems. Tables 4–8 summarize the results on the scalable problems with dimensions ranging from 5 to 40.
Table 4
The average function error, standard deviation (SD), mean function evaluations (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu on the scalable problems in dimension 5.

Pno.  RCGA avg. err  RCGAu avg. err  RCGA SD    RCGAu SD   RCGA MFE  RCGAu MFE  RCGA SR  RCGAu SR
23    9.90E-05       8.33E-09        3.57E-04   1.49E-09   42,849    11,525     25       25
24    8.76E-09       '-'             9.25E-02   6.39E-02   '-'       '-'        1        0
25    8.77E-09       6.52E-09        1.48E+00   1.79E+00   26,040    11,312     2        1
26    1.26E-03       4.34E-03        6.97E+01   4.29E+00   50,100    135,702    1        16
27    6.51E-09       6.51E-09        1.69E+02   9.16E+01   50,100    141,582    1        22
29    7.00E-09       6.84E-09        2.73E-09   2.33E-09   8531      5466       25       25
Total                                                      177,620   305,587    29       64
Table 5
The average function error, standard deviation (SD), mean function evaluations (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu on the scalable problems in dimension 10.

Pno.  RCGA avg. err  RCGAu avg. err  RCGA SD    RCGAu SD   RCGA MFE  RCGAu MFE  RCGA SR  RCGAu SR
23    4.25E-04       9.25E-09        1.20E+00   7.43E-10   50,100    27,063     22       25
24    3.70E-03       7.40E-03        2.11E-01   2.67E-01   42,170    144,347    2        1
27    1.00E+00       6.51E-09        5.22E+02   1.52E+02   '-'       '-'        0        10
29    7.34E-09       8.06E-09        3.13E-09   1.47E-09   28,223    12,072     25       25
Total                                                      120,493   183,482    49       61
Table 6
The average function error, standard deviation (SD), mean function evaluations (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu on the scalable problems in dimension 20.

Pno.  RCGA avg. err  RCGAu avg. err  RCGA SD    RCGAu SD   RCGA MFE  RCGAu MFE  RCGA SR  RCGAu SR
23    5.33E-04       1.56E-08        1.76E+00   3.13E-08   50,100    69,645     9        25
24    5.14E-06       1.97E-03        1.29E-01   2.08E-01   50,100    76,538     2        15
27    1.00E+00       6.51E-09        1.15E+03   4.93E+02   '-'       '-'        0        7
29    9.36E-08       9.03E-09        2.13E-07   1.22E-09   47,200    26,708     25       25
Total                                                      147,400   172,891    36       72
Table 7
The average function error, standard deviation (SD), mean function evaluations (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu on the scalable problems in dimension 30.

Pno.  RCGA avg. err  RCGAu avg. err  RCGA SD    RCGAu SD   RCGA MFE  RCGAu MFE  RCGA SR  RCGAu SR
23    8.59E-03       7.78E-05        1.77E+00   3.00E-01   50,100    125,007    1        24
24    2.53E-03       9.87E-04        3.72E-01   7.07E-03   50,100    92,801     6        15
27    1.00E+00       8.37E-09        1.90E+03   7.15E+02   '-'       '-'        0        4
29    6.81E-07       1.54E-08        1.38E-06   3.01E-08   49,858    52,900     25       25
Total                                                      150,058   270,708    32       68
Table 8
The average function error, standard deviation (SD), mean function evaluations (MFE) and number of successful runs out of 25 (SR) of RCGA and RCGAu on the scalable problems in dimension 40.

Pno.  RCGA avg. err  RCGAu avg. err  RCGA SD    RCGAu SD   RCGA MFE  RCGAu MFE  RCGA SR  RCGAu SR
23    0.00E+00       8.19E-04        2.46E+00   6.62E-01   '-'       '-'        0        18
24    3.18E-03       9.33E-04        6.57E-02   2.51E-01   50,100    131,438    9        16
27    1.00E+00       1.02E-07        2.54E+03   1.17E+03   '-'       '-'        0        5
29    5.33E-06       1.99E-07        9.98E-06   6.54E-07   50,100    92,294     25       25
Total                                                      100,200   223,732    34       64
Table A1
List of test problems.

Pno.  Problem               n    f(x*)         Bounds
1     Aluffi-Pentini        2    -0.3523       -10 ≤ x1, x2 ≤ 10
2     Becker and Lago       2    0.0000        -10 ≤ x1, x2 ≤ 10
3     Bohachevsky 1         2    0.0000        -50 ≤ x1, x2 ≤ 50
4     Bohachevsky 2         2    0.0000        -50 ≤ x1, x2 ≤ 50
5     Branin                2    0.3979        -5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15
6     Cosine mixture        2    -0.2000       -1 ≤ xi ≤ 1, i ∈ {1, 2, ..., n}
7     Dekkers and Aarts     2    -24776.5183   -20 ≤ x1, x2 ≤ 20
8     Goldstein and Price   2    3.0000        -2 ≤ x1, x2 ≤ 2
9     Hosaki                2    -2.3458       0 ≤ x1 ≤ 5, 0 ≤ x2 ≤ 6
10    Multi-Gaussian        2    -1.2969       -2 ≤ x1, x2 ≤ 2
11    Periodic              2    0.9000        -10 ≤ x1, x2 ≤ 10
12    Shubert               2    -186.7309     -10 ≤ xi ≤ 10, i ∈ {1, 2, ..., n}
13    Gulf research         3    0.0000        0.1 ≤ x1 ≤ 100, 0 ≤ x2 ≤ 25.6, 0 ≤ x3 ≤ 5
14    Hartman 3             3    -3.8628       0 ≤ xj ≤ 1, j ∈ {1, 2, 3}
15    Helical valley        3    0.0000        -10 ≤ x1, x2, x3 ≤ 10
16    Cosine mixture        4    -0.4000       -1 ≤ xi ≤ 1, i ∈ {1, 2, ..., n}
17    Kowalik               4    0.0003        0 ≤ xi ≤ 0.42, i ∈ {1, 2, 3, 4}
18    Powell's quadratic    4    0.0000        -10 ≤ xi ≤ 10, i ∈ {1, 2, 3, 4}
19    Shekel 5              4    -10.1532      0 ≤ xj ≤ 10, j ∈ {1, 2, 3, 4}
20    Shekel 7              4    -10.4029      0 ≤ xj ≤ 10, j ∈ {1, 2, 3, 4}
21    Shekel 10             4    -10.5364      0 ≤ xj ≤ 10, j ∈ {1, 2, 3, 4}
22    Hartman 6             6    -3.3224       0 ≤ xj ≤ 1, j ∈ {1, ..., 6}
23    Ackley                10   0.0000        -30 ≤ xi ≤ 30, i ∈ {1, 2, ..., n}
24    Griewank              10   0.0000        -600 ≤ xi ≤ 600, i ∈ {1, 2, ..., n}
25    Rastrigin             10   0.0000        -5.12 ≤ xi ≤ 5.12, i ∈ {1, 2, ..., n}
26    Rosenbrock            10   0.0000        -30 ≤ xi ≤ 30, i ∈ {1, 2, ..., n}
27    Schwefel              10   -4189.8289    -500 ≤ xi ≤ 500, i ∈ {1, 2, ..., n}
28    Sinusoidal            10   -3.5000       0 ≤ xi ≤ 180, i ∈ {1, 2, ..., n}
29    Spherical             10   0.0000        -5.12 ≤ xi ≤ 5.12, i ∈ {1, 2, ..., n}
30    Sinusoidal            20   -3.5000       0 ≤ xi ≤ 180, i ∈ {1, 2, ..., n}
In Table 4, RCGAu performed better than RCGA in accuracy because of its lower average function errors on 4 problems, even though it could not solve Griewank at dimension 5 (a problem that becomes easier as its dimension increases; see [22]). The standard deviations of RCGAu are lower than those of RCGA, confirming that RCGAu's performance on the scalable problems at dimension 5 is better than that of RCGA. The MFE also shows that RCGAu was able to locate the optimal solution faster than RCGA in 3 problems, while it did not have a successful run on Griewank. In terms of the number of successful runs, RCGAu was more successful than RCGA.
A study of Tables 5–8 shows that, in terms of solution finding accuracy, RCGAu outperforms RCGA, while RCGA records a lower MFE to achieve good performance. One drawback of RCGAu is the higher number of function evaluations needed to achieve satisfactory performance. A clear advantage of RCGAu is its ability to achieve more successful runs per problem solved.
Finally, Tables 4–8 show that the Rastrigin problem was not solved by either RCGA within the maximum number of generations, which was reasonably set to 5000 generations. Ackley and Spherical were consistently solved by both algorithms at low and moderate dimensions. It was observed that as the dimension increases, RCGA fails to find the optimum while RCGAu excels. RCGA consistently failed to solve the Schwefel problem at dimensions 10, 20, 30 and 40.
6. Conclusions

In this paper, a local search based RCGA called the real coded genetic algorithm with uniform random local search (RCGAu) is proposed. It is compared with RCGA using 30 test problems. The test problems are divided into two groups, namely scalable and non-scalable test problems. Each of the scalable problems is tested 25 times with dimensions 5, 10, 20, 30 and 40 by each algorithm. The non-scalable problems are tested 25 times by each algorithm. Numerical performance measures such as the best fitness value, the mean function evaluations of successful runs and some statistical measures are recorded and analyzed.
The comparative results show that RCGAu performed better than RCGA across the 30 test problems. This indicates that the inclusion of the uniform random local search in genetic algorithms improves the performance of the genetic algorithm. Research is currently being carried out on improving RCGAu and on applying the uniform random local search to other nature-inspired algorithms.

Appendix A

See Table A1.

References

[1] M.M. Ali, C. Khompatraporn, Z.B. Zabinsky, A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, J. Global Optim. 31 (2005) 635–672.
[2] M.M. Ali, A. Törn, Population set-based global optimization algorithms: some modifications and numerical studies, Comput. Oper. Res. 31 (10) (2004) 1703–1725.
[3] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, New York, 1996.
[4] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions, Eur. J. Oper. Res. 148 (2) (2003) 335–348.
[5] K. Deep, K.N. Das, Quadratic approximation based hybrid genetic algorithm for function optimization, Appl. Math. Comput. 203 (2008) 86–98.
[6] K. Deep, Dipti, A new hybrid self organizing migrating genetic algorithm for function optimization, in: D. Srinivasan, L. Wang (Eds.), IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007, IEEE Computational Intelligence Society, IEEE Press, 2007, pp. 2796–2803.
[7] K. Deep, M. Thakur, A new crossover operator for real coded genetic algorithms, Appl. Math. Comput. 188 (2007) 895–911.
[8] K. Deep, M. Thakur, A new mutation operator for real coded genetic algorithms, Appl. Math. Comput. 193 (2007) 211–230.
[9] K. Deep, M. Thakur, A real coded multi parent genetic algorithm for function optimization, Appl. Math. Comput. 1 (2) (2008) 67–83.
[10] K.A. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Ph.D. thesis, University of Michigan, Ann Arbor, MI, USA, 1975.
[11] J.E. Dennis, V.J. Torczon, Derivative-free pattern search methods for multidisciplinary design problems, in: J.J. Grefenstette (Ed.), Proceedings of the AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City Beach, Florida, 7–9 September 1994.
[12] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons Ltd, West Sussex, England, 2005.
[13] L.J. Fogel, On the organization of intellect, Ph.D. thesis, University of California, Los Angeles, CA, USA, 1964.
[14] M.N. Gabere, Simulated annealing driven pattern search algorithms for global optimization, Master's thesis, University of the Witwatersrand, Johannesburg, South Africa, 2007.
[15] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Massachusetts, 1989.
[16] D.E. Goldberg, Real-coded genetic algorithms, virtual alphabets, and blocking, Complex Syst. 5 (2) (1991) 139–168.
[17] W.E. Hart, Adaptive global optimization with local search, Ph.D. thesis, University of California, San Diego, USA, 1994.
[18] J.H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, MIT Press, Cambridge, Massachusetts, 1975.
[19] P. Kaelo, Some population set-based methods for unconstrained global optimization, Ph.D. thesis, University of the Witwatersrand, Johannesburg, South Africa, 2005.
[20] P. Kaelo, M.M. Ali, Integrated crossover rules in real coded genetic algorithms, Eur. J. Oper. Res. 176 (2007) 60–76.
[21] J.J. Liang, T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, K. Deb, C.A. Coello Coello, Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization, Technical Report, 2006.
[22] M. Locatelli, A note on the Griewank test function, J. Global Optim. 27 (2003) 169–174.
[23] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin Heidelberg, New York, 1996.
[24] I. Rechenberg, Artificial evolution and artificial intelligence, in: R. Forsyth (Ed.), Machine Learning: Principles and Techniques, Chapman & Hall Ltd., London, UK, 1989.
[25] B.A. Sawyerr, Hybrid real coded genetic algorithms with pattern search and projection, Ph.D. thesis, University of Lagos, Lagos, Nigeria, 2010.
[26] B.A. Sawyerr, M.M. Ali, A.O. Adewumi, A comparative study of some real coded genetic algorithms for unconstrained global optimization, Optim. Methods Softw. 26 (6) (2011) 945–970.
[27] M. Settles, T. Soule, Breeding swarms: a GA/PSO hybrid, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), July 2005.
[28] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.
[29] J. Yen, B. Lee, A simplex genetic algorithm hybrid, in: Proceedings of the IEEE Congress on Evolutionary Computation, IEEE Press, 1997, pp. 175–180.